Real-time recording and decoding of brain signals allows the control of a large variety of systems (Clerc 2016), such as wheelchairs, exoskeletons, robotic arms, and other types of Brain-Computer Interface (BCI) devices. With electroencephalography (EEG), the brain signal is recorded at the surface of the head (on the scalp), offering a simple setup that does not require surgery, as is the case for invasive recording methods. The signal quality of EEG is lower than that of invasive methods and the recording is very sensitive to noise; nonetheless, possible applications have shown promising results (van Erp 2012). As technologies and signal processing techniques mature, out-of-the-lab applications and commercial systems attract growing interest (Clerc 2016). These applications and systems rely on a small number of electrodes for recording and low-cost hardware for signal processing. The denoising and classification algorithms should therefore work online and with a reasonable computational load. One of the most challenging issues with EEG-based BCI is to harness the individual variability of brain signals, which can change from hour to hour for a given user and are highly variable from one user to another.

Among all the methods considered in the literature for EEG signal processing, those relying on covariance matrices were shown numerically to achieve good performance (Yger 2016). In this approach, a portion of the EEG signal is represented by a covariance matrix, whose elements correspond to the covariance of the signals recorded with different electrodes, possibly filtered around different frequencies. The fact that covariance matrices belong to a non-Euclidean space – the manifold of symmetric positive definite (SPD) matrices – calls for efficient classifiers adapted to that geometry.

In this paper, we work with the Minimum Distance to Mean (MDM) classifier, initially proposed in (Barachant 2010). This classifier assigns covariance matrices to the class with the closest mean. The classification results were shown to depend heavily on the definitions of mean and distance used, and many possibilities were compared in (Kalunga 2015). In the following we distinguish offline settings, where the classifier’s parameters are selected and evaluated using all available data; block-online settings, where the classifier is parametrized on a first batch of data (usually the beginning of a session) and evaluated on another batch (the rest of the session); and online settings, where no data from the user is available beforehand and the classifier is assessed directly on new data. We equip this classifier with a new family of means based on the so-called inductive mean, whose main advantage is that it can be computed incrementally, a key property when working in an online setting. This property was already exploited in (Ho 2013) for k-means clustering. We show numerically that these new means achieve a classification accuracy in a block-online framework comparable to the most accurate nonparametric mean, the Riemannian barycenter with respect to the affine-invariant metric (less than \(1\%\) difference on average), at a lower computational cost. We also propose a variant of the online classification algorithm of (Kalunga 2016). In our algorithm, the class means are adapted online, following an incremental learning scheme: starting from classes learned on other users, the algorithm progressively adapts to the observed data of a new user.

The paper is organized as follows. Section \ref{sec:blockonline} is devoted to block-online classification: we define the MDM classifier and the family of means we use, and compare numerically the classification results with other state-of-the-art methods. In Section \ref{sec:online}, we present our incremental learning algorithm for online classification.

The proposed approaches are applied to steady-state visual evoked potentials (SSVEP), that is, brain responses to visual stimuli, but are valid for other kinds of BCI paradigms. In a SSVEP experiment, blinking LEDs are placed at different locations in the visual field of a user. The LEDs blink at \(F\) different frequencies (\(\mathrm{freq}_{1}\), …, \(\mathrm{freq}_{F}\)). The subject is either asked to focus on one specific blinking LED (with a known frequency) or to focus on a location without any LED (resting state). The blinking LEDs elicit induced oscillations in the brain, which are visible in the EEG. The goal is to determine, based on the EEG, whether the user is focusing on a blinking LED and, if so, on which one.

We summarize in Algorithm \ref{alg:MDMalgorithm} the block-online classification method proposed in (Kalunga 2015). Each time the user is asked to focus on a stimulus, the portion of the EEG recording following the cue onset (the time at which the user was instructed to focus on the blinking LED) is first transformed into a covariance matrix and then classified with the MDM classifier. The class means are estimated from a collection of labelled data, according to the offline training scheme detailed in Algorithm \ref{alg:OfflineTraining}.

\label{alg:MDMalgorithm}

Inputs : \(\bar{\Sigma}^{(k)}\), the mean of the class \(k\), for \(k=1,\dots,K\) (obtained using an offline training) and an unlabelled EEG trial \(X\in\mathbb{R}^{C\times N}\).

Output: \(\hat{k}\), the predicted label of \(X\).

- 1.
Compute \(\hat{\Sigma}\), an estimate of the covariance matrix of \(X\) (see Section \ref{sec:estim_cov_mat}).

- 2.
Define the class label associated to trial \(X\) as \(\hat{k}=\operatorname*{\arg\!\min}_{k=1,\dots,K}\delta(\hat{\Sigma},\bar{\Sigma}^{(k)})\), where \(\delta(\Sigma_{1},\Sigma_{2})=||\log(\Sigma_{1}^{-1/2}\Sigma_{2}\Sigma_{1}^{-1/2})||_{\mathrm{F}}\) is the affine-invariant Riemannian distance between \(\Sigma_{1}\) and \(\Sigma_{2}\).
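The classification step of Algorithm \ref{alg:MDMalgorithm} can be sketched in a few lines of numpy; the function names are ours, and the matrix logarithm and inverse square root are computed via the eigendecomposition, valid since the matrices are SPD:

```python
import numpy as np

def _logm_spd(S):
    """Matrix logarithm of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def riemann_distance(S1, S2):
    """Affine-invariant distance ||log(S1^{-1/2} S2 S1^{-1/2})||_F."""
    w1, V1 = np.linalg.eigh(S1)
    S1_inv_half = (V1 * w1**-0.5) @ V1.T
    return np.linalg.norm(_logm_spd(S1_inv_half @ S2 @ S1_inv_half), "fro")

def mdm_predict(Sigma, class_means):
    """Minimum Distance to Mean: assign the trial covariance to the
    class whose mean is closest in the Riemannian distance."""
    return int(np.argmin([riemann_distance(Sigma, M) for M in class_means]))
```

Equivalently, the distance is the root sum of squared log-eigenvalues of \(\Sigma_{1}^{-1}\Sigma_{2}\), which is why it is invariant under joint congruence transformations of both matrices.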

Inputs : \(X_{i}\in\mathbb{R}^{C\times N}\), for \(i=1,\dots,l\), a set of labelled EEG trials, and \(\mathcal{I}(k)\), \(k=1,\dots,K\), a set of indices of trials belonging to class \(k\).

Output: \(\bar{\Sigma}^{(k)}\), the mean of the class \(k\), for \(k=1,\dots,K\).

- 1.
Compute \(\hat{\Sigma}_{i}\), an estimate of the covariance matrix of \(X_{i}\), for \(i=1,\dots,l\) (see Section \ref{sec:estim_cov_mat}).

- 2.
For \(k=1,\dots,K\) do

- 3.
\(\quad\) Compute the center of class \(\bar{\Sigma}^{(k)}=\mu(\{\hat{\Sigma}_{i}|i\in\mathcal{I}(k)\})\) (see Section \ref{sec:INdMean}).

- 4.
return \(\bar{\Sigma}^{(k)}\) for \(k=1,\dots,K\)

\label{alg:OfflineTraining}
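When the class center \(\mu\) in Algorithm \ref{alg:OfflineTraining} is the Riemannian barycenter with respect to the affine-invariant metric (the Karcher mean), it has no closed form and is typically obtained by a fixed-point iteration in the tangent space. A minimal numpy sketch, with our own function names, initialization, and stopping tolerance:

```python
import numpy as np

def _eig_fun(S, fun):
    """Apply a scalar function to the eigenvalues of a symmetric matrix."""
    w, V = np.linalg.eigh(S)
    return (V * fun(w)) @ V.T

def riemannian_barycenter(sigmas, n_iter=50, tol=1e-10):
    """Karcher mean under the affine-invariant metric: repeatedly map the
    matrices to the tangent space at the current estimate, average there,
    and map the average back to the manifold."""
    M = np.mean(sigmas, axis=0)  # Euclidean mean as initialization
    for _ in range(n_iter):
        M_half = _eig_fun(M, np.sqrt)
        M_inv_half = _eig_fun(M, lambda w: 1.0 / np.sqrt(w))
        # average of the log-mapped matrices in the tangent space at M
        T = np.mean([_eig_fun(M_inv_half @ S @ M_inv_half, np.log)
                     for S in sigmas], axis=0)
        M = M_half @ _eig_fun(T, np.exp) @ M_half
        if np.linalg.norm(T, "fro") < tol:  # gradient small: converged
            break
    return M
```

Each iteration requires one eigendecomposition per input matrix, which is the cost the inductive mean avoids by updating its estimate with a single geodesic step per new trial.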