Introduction

The article under study is titled “Transfer learning: a Riemannian geometry framework with applications to Brain-Computer Interfaces”. It was submitted to the journal “IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING” in 2016; at the time of writing it had not yet been published, so its content may still change.
The article was written by the researchers Paolo Zanini, Marco Congedo, and Christian Jutten of Université Grenoble Alpes, and Salem Said and Yannick Berthoumieu of the University of Bordeaux, both universities located in France.
The research team organized the article in six sections. Section I is an introduction presenting the problem to be solved, the related work investigated, and the authors' proposals. Section II explains the basic concepts of Riemannian geometry: the properties of the space of symmetric positive definite (SPD) matrices, probability distributions on this space, and classification rules for these matrices. Section III describes the brain-computer interface (BCI) paradigms in detail, analyzing two different EEG-based BCI datasets, one for each framework considered (motor imagery, MI, and event-related potentials, ERP). Section IV describes the proposed Riemannian transfer learning methods. Finally, Sections V and VI analyze the results obtained on the two datasets and present the conclusions.

Context of the Work

This scientific article belongs to the field of Brain-Computer Interfaces, focusing on non-invasive, that is, EEG-based, BCIs.
Within this field, the article studies the problem of transfer learning in the context of data classification for an EEG-based BCI. More specifically, the problem addressed is how to perform Cross-Session and Cross-Subject BCI learning.
This approach is proposed to eliminate the need to calibrate the BCI at each run or for each new patient, since calibration is an inconvenience both for those who run the tests, who lose part of their time, and for the patients, who are generally not willing to perform repeated calibration sessions.

Positioning

To address this problem, the article builds on previous work concerning the different ways of classifying the data and of transferring the learning of a BCI.
Regarding data classification, the article starts from the standard classification pipeline of this technology, which generally consists of two stages \cite{Blankertz_2011}, \cite{Gouy_Pailler_2010}. The first stage extracts or filters features, through frequency-domain processing \cite{geometry} or spatial filters \cite{Farquhar_2009}, \cite{Gouy_Pailler_2010}, \cite{Bini_2013}. In the second stage, the extracted features are fed to a machine learning algorithm that performs the classification, typically a Linear Discriminant Analysis (LDA) \cite{Blankertz_2011}. The article, however, follows the approach presented in \cite{Barachant_2012}, which uses the signal covariance matrices directly as the features of interest.
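As an illustration (a sketch consistent with the Riemannian approach of \cite{Barachant_2012}, not an equation reproduced from the article under review), each EEG trial $X_i \in \mathbb{R}^{N \times T}$ ($N$ electrodes, $T$ time samples) can be represented by its sample covariance matrix, and trials can be compared through the affine-invariant Riemannian distance:
\[
\Sigma_i = \frac{1}{T-1}\, X_i X_i^{\top}, \qquad
\delta_R(\Sigma_1,\Sigma_2) = \left\lVert \log\!\left( \Sigma_1^{-1/2}\,\Sigma_2\,\Sigma_1^{-1/2} \right) \right\rVert_F ,
\]
where $\log(\cdot)$ denotes the matrix logarithm and $\lVert\cdot\rVert_F$ the Frobenius norm. In the Minimum Distance to Mean (MDM) classifier of \cite{Barachant_2012}, a new trial $\Sigma$ is assigned to the class $k$ whose geometric mean $\bar{\Sigma}_k$ minimizes $\delta_R(\Sigma,\bar{\Sigma}_k)$.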
Regarding the transfer of learning in a BCI \cite{Pan_2010}, different approaches can be found in \cite{Congedo_2017}, \cite{Bonnabel_2013}, \cite{platform}, \cite{Wu_2013}, and \cite{analysis}; the article, however, focuses specifically on Cross-Session and Cross-Subject BCI learning to solve the transfer learning problem.
To perform the Cross-Session and Cross-Subject BCI learning, a database of previous recordings is used to initialize the classifier. It should be noted that the classifier is initialized with the Cross-Subject approach when the user has no previous sessions (which is why data from other patients are used), and with the Cross-Session approach when data from previous sessions of the same patient are available.
Finally, in this type of BCI learning it must be taken into account that the Cross-Session and Cross-Subject changes can be understood as geometric transformations of the covariance matrices, which the article calls “shifts”. These shifts cause the problems described in \cite{Reuderink_2011} and are addressed by the approaches presented in \cite{applications} and \cite{Reuderink_2011}.

Contributions

The authors of this article propose contributions to both the data classification and the transfer of BCI learning, as follows.
For data classification, the research team followed the approach presented in \cite{Barachant_2012}, where a BCI classification method based on Riemannian geometry is proposed. Starting from this approach, the team improved on the classifier used in that method by replacing it with a probabilistic classifier, introduced in \cite{Said_2017} and applied to BCI in \cite{Zanini_2016}.
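For reference, the probabilistic classifier relies on the Riemannian Gaussian distribution of \cite{Said_2017} (written here as a sketch, not copied from the article under review), whose density on the SPD manifold depends only on the Riemannian distance to a center of mass $\bar{\Sigma}$:
\[
p(\Sigma \mid \bar{\Sigma}, \sigma) = \frac{1}{\zeta(\sigma)} \exp\!\left( -\frac{\delta_R^{2}(\Sigma,\bar{\Sigma})}{2\sigma^{2}} \right),
\]
where $\sigma$ is a dispersion parameter and $\zeta(\sigma)$ a normalization factor independent of $\bar{\Sigma}$. Each class is modeled by one such distribution, and a trial is assigned to the class with the highest likelihood, which generalizes the MDM rule by also accounting for the dispersion of each class.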
Regarding the transfer of learning, the article proposes a solution to the main problem of this field, as explained in \cite{Pan_2010}. The approaches proposed in \cite{Congedo_2017}, \cite{Bonnabel_2013}, \cite{platform}, and \cite{Wu_2013} are not adopted, since they either require a calibration step at each run or lack a workable initialization \cite{analysis}. For these reasons, the article proposes Cross-Session and Cross-Subject BCI learning to eliminate the calibration time of each test, using a database of past recordings to initialize the classifier: data from other users (Cross-Subject) for a new user, and data from previous sessions (Cross-Session) for a known user.
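One way to make this initialization concrete (an illustrative sketch under the assumption that class centers are estimated from the database, not a formula quoted from the article) is to set the initial center of each class $k$ to the geometric mean of the database covariance matrices of that class:
\[
\bar{\Sigma}_k = \arg\min_{\Sigma \in \mathcal{P}(N)} \sum_{i \in \mathcal{I}_k} \delta_R^{2}(\Sigma, \Sigma_i),
\]
where $\mathcal{P}(N)$ is the manifold of $N \times N$ SPD matrices and $\mathcal{I}_k$ indexes the database trials of class $k$ (trials from other users in the Cross-Subject case, or from previous sessions of the same user in the Cross-Session case).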
Finally, the article proposes to solve the problem associated with the “shifts” (discussed in the Positioning part), following an approach similar to the one presented in \cite{Reuderink_2011}, but within the Riemannian framework. This is done by assuming that “different source configurations and electrode positions induces shifts of covariance matrices with respect to a reference (Resting) state, but that when the brain is engaged in a specific task, covariance matrices move over the SPD manifold in the same direction”, in the words of the research team \cite{Zanini_2017}.
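A natural way to formalize this assumption (an illustrative sketch consistent with the quoted idea, not a verbatim reproduction of the article's equations) is to re-center the covariance matrices of each session or subject with respect to its own resting-state reference mean, mapping every reference state to the identity matrix so that task-related displacements become comparable across sessions and subjects:
\[
\tilde{\Sigma}_i = R^{-1/2}\, \Sigma_i \, R^{-1/2},
\]
where $R$ denotes the geometric mean of the resting-state covariance matrices of the session (or subject) and $\Sigma_i$ are its task-related trials. After this affine transformation, data from different sessions and subjects can be pooled and classified with the same Riemannian classifier.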