Riemannian Approaches in Brain-Computer Interfaces: A Review
References
Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond
An introduction to kernel-based learning algorithms
The Geometry of Algorithms with Orthogonality Constraints
Optimization Algorithms on Matrix Manifolds
Frequently Asked Questions (17)
Q2. What future works have the authors mentioned in the paper "Riemannian approaches in brain-computer interfaces: a review" ?
Altogether, the authors are convinced that Riemannian approaches are very promising for BCI design and could become, in the future, the new standard for EEG signal classification.
Q3. What is the simplest way to create a Riemannian manifold?
Given a differential manifold, a common way to create a Riemannian manifold is to endow its tangent spaces with the usual scalar product of the ambient space.
Q4. What is the advantage of the calibration procedure?
The calibration procedure also benefits from the geometry of SPD matrices: interpolating between data points along geodesics yields more robust, actually usable supplementary artificial instances.
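As a minimal sketch of such geodesic interpolation (function names are mine, not the paper's), points along the affine-invariant geodesic γ(t) = A^{1/2}(A^{-1/2} B A^{-1/2})^t A^{1/2} between two SPD matrices A and B are themselves SPD, so they can serve as artificial training covariances:

```python
import numpy as np

def _powm(S, p):
    # matrix power of an SPD matrix via eigendecomposition
    w, V = np.linalg.eigh(S)
    return (V * w ** p) @ V.T

def geodesic(A, B, t):
    # affine-invariant geodesic gamma(t), from A (t=0) to B (t=1)
    A_half = _powm(A, 0.5)
    A_ihalf = _powm(A, -0.5)
    return A_half @ _powm(A_ihalf @ B @ A_ihalf, t) @ A_half

A = np.array([[2.0, 0.0], [0.0, 1.0]])
B = np.array([[1.0, 0.5], [0.5, 2.0]])
M = geodesic(A, B, 0.5)   # "midpoint" interpolation, still SPD
```

Unlike the Euclidean average (A + B) / 2, this interpolation respects the geometry of the SPD cone for every t in [0, 1].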
Q5. What is the common limitation of Riemannian approaches?
Most Riemannian approaches involve solving a computationally demanding Riemannian optimization problem [24], a cost that could be alleviated by the use of stochastic gradient approaches adapted to manifolds [67] when a sufficient amount of data is available.
Q6. How can the authors improve the performance of a BCI system?
As demonstrated in [68], dictionary learning techniques can be used to take into account both the temporal and spatial dimensions of the signals and, in doing so, improve the performance of a BCI system.
Q7. What is the space of SPD matrices?
The space of SPD matrices, denoted Pn = {X ∈ Rⁿˣⁿ | X = X⊤, X ≻ 0} and composed of symmetric matrices with strictly positive eigenvalues, can be successfully applied for manipulating covariance matrices from EEG signals.
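As a quick illustration (a toy numpy sketch, with random data standing in for a multichannel EEG trial), the sample spatial covariance of a full-rank signal lands in Pn — symmetric with strictly positive eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 500))   # toy "EEG trial": 8 channels, 500 samples
C = X @ X.T / X.shape[1]            # sample spatial covariance matrix

def is_spd(S, tol=1e-10):
    # symmetric and all eigenvalues strictly positive
    return bool(np.allclose(S, S.T) and np.linalg.eigvalsh(S).min() > tol)

is_spd(C)   # True when the data are full rank
```

With far fewer samples than channels the estimate can become rank-deficient, which is one reason regularized covariance estimators are common in BCI pipelines.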
Q8. What is the limitation to be remedied in order to quit laboratory?
Another limitation to be remedied before these methods can leave the laboratory is the real-time processing of EEG signals for the approaches presented in this survey.
Q9. What is the Riemannian version of the PCA algorithm for SPD matrices?
Using the same parametrized mapping from Pn to Pm and a related cost function, a Riemannian version of the PCA algorithm tailored for SPD matrices has been proposed in [20]:

W* = argmax_{W ∈ Gr(n,p)} Σᵢ δ²(W⊤SᵢW, W⊤S̄W)  (10)

In a metric space, the Fréchet variance extends the concept of variance in the same way the Fréchet mean extends the concept of mean.
Q10. What is the way to classify SPD matrices?
To benefit from both the Riemannian framework and available classification algorithms, Barachant et al. proposed to project the SPD matrices onto the tangent space of the Riemannian manifold, where they can be vectorized and thus used as input to an LDA or SVM [15].
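The projection above can be sketched as follows (a minimal numpy version under my own naming, not the authors' code): each SPD matrix S is log-mapped at a reference point C and the resulting symmetric matrix is vectorized into an n(n+1)/2-dimensional Euclidean feature vector:

```python
import numpy as np

def _eig_fun(S, f):
    # apply a scalar function to the eigenvalues of a symmetric matrix
    w, V = np.linalg.eigh(S)
    return (V * f(w)) @ V.T

def tangent_vector(S, C):
    # log-map S at the reference point C, then vectorize the symmetric result;
    # the sqrt(2) weight on off-diagonal terms preserves the Frobenius norm
    C_ihalf = _eig_fun(C, lambda w: w ** -0.5)
    L = _eig_fun(C_ihalf @ S @ C_ihalf, np.log)
    iu = np.triu_indices_from(L, k=1)
    return np.concatenate([np.diag(L), np.sqrt(2) * L[iu]])

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
S = A @ A.T + np.eye(4)     # a toy SPD "trial" covariance
C = np.eye(4)               # reference point (e.g., the mean of the training set)
v = tangent_vector(S, C)    # Euclidean vector of dimension n(n+1)/2 = 10
```

These vectors can then be fed directly to any standard linear classifier such as LDA or a linear SVM.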
Q11. What is the minimum path between the two points for Euclidean distance?
The minimal path between two points under the Euclidean distance is a straight line, whereas the geodesics induced by the AIRM distance are curves.
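To make the contrast concrete, here is a minimal numpy sketch of the affine-invariant (AIRM) distance δ(A, B) = ‖log(A^{-1/2} B A^{-1/2})‖_F, compared on two diagonal matrices (the helper names are mine):

```python
import numpy as np

def _eig_fun(S, f):
    # apply a scalar function to the eigenvalues of a symmetric matrix
    w, V = np.linalg.eigh(S)
    return (V * f(w)) @ V.T

def airm_dist(A, B):
    # delta(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F
    A_ihalf = _eig_fun(A, lambda w: w ** -0.5)
    return np.linalg.norm(_eig_fun(A_ihalf @ B @ A_ihalf, np.log))

A = np.eye(2)
B = np.diag([4.0, 4.0])
airm_dist(A, B)             # sqrt(2) * log(4), about 1.96
np.linalg.norm(B - A)       # Euclidean distance, 3 * sqrt(2), about 4.24
```

The AIRM distance depends on eigenvalue ratios rather than differences, which is what makes it invariant to affine transformations of the underlying signals.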
Q12. What is the kernel trick for a classifier?
Applying the kernel trick to an SVM leads to the following decision function:

h(x) = b + Σᵢ αᵢ yᵢ ⟨φ(xᵢ), φ(x)⟩ = b + Σᵢ αᵢ yᵢ k(xᵢ, x)  (14)

This makes it possible to use SVMs for non-linear classification, e.g., by using Gaussian kernels, or to classify graphs or trees directly by using dedicated kernels for graphs or trees.
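To illustrate the shape of that decision function without pulling in an SVM solver, the sketch below trains a kernel *perceptron* (a simpler learner than the SVM discussed in the text, chosen only so the example stays self-contained) with a Gaussian kernel on the XOR problem, which no linear classifier can solve:

```python
import numpy as np

def rbf(x, z, gamma=1.0):
    # Gaussian (RBF) kernel k(x, z) = exp(-gamma * ||x - z||^2)
    return np.exp(-gamma * np.sum((x - z) ** 2))

# XOR: not linearly separable in the input space
X = np.array([[0, 0], [1, 1], [0, 1], [1, 0]], dtype=float)
y = np.array([1, 1, -1, -1])

# kernel perceptron: alpha_i counts the mistakes made on example i
alpha = np.zeros(len(X))
for _ in range(10):
    for i in range(len(X)):
        h = sum(alpha[j] * y[j] * rbf(X[j], X[i]) for j in range(len(X)))
        if y[i] * h <= 0:
            alpha[i] += 1

def decide(x):
    # h(x) = b + sum_i alpha_i y_i k(x_i, x), with b = 0 here for simplicity
    return sum(alpha[i] * y[i] * rbf(X[i], x) for i in range(len(X)))

[int(np.sign(decide(x))) for x in X]   # [1, 1, -1, -1]
```

The decision function has exactly the dual form of equation (14): a kernel-weighted sum over training points, with the non-linearity carried entirely by k.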
Q13. How can the whole processing pipeline be designed around covariance matrices?
Since the BCI pipeline already involves covariance matrix estimation and manipulation, the whole processing pipeline can also be designed around such covariance matrices, using Riemannian geometry to classify those matrices directly.
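A minimal sketch of such a covariance-centric pipeline is a nearest-mean classifier in the style of minimum-distance-to-mean (MDM): compute one mean covariance per class and assign a new trial to the class whose mean is closest under the AIRM distance. For simplicity this sketch uses the log-Euclidean mean as a surrogate for the Fréchet mean, and all names and toy data are mine:

```python
import numpy as np

def _eig_fun(S, f):
    # apply a scalar function to the eigenvalues of a symmetric matrix
    w, V = np.linalg.eigh(S)
    return (V * f(w)) @ V.T

def airm_dist(A, B):
    # affine-invariant Riemannian distance between SPD matrices
    A_ihalf = _eig_fun(A, lambda w: w ** -0.5)
    return np.linalg.norm(_eig_fun(A_ihalf @ B @ A_ihalf, np.log))

def log_euclidean_mean(covs):
    # simple surrogate for the Fréchet mean: matrix exp of the average log
    L = sum(_eig_fun(C, np.log) for C in covs) / len(covs)
    return _eig_fun(L, np.exp)

def mdm_predict(trial_cov, class_means):
    # assign the class whose mean covariance is closest to the trial
    return int(np.argmin([airm_dist(trial_cov, M) for M in class_means]))

# toy two-class setup: trial covariances scattered around two prototypes
class0 = [np.diag([1.0, 1.0]) * s for s in (0.9, 1.0, 1.1)]
class1 = [np.diag([5.0, 1.0]) * s for s in (0.9, 1.0, 1.1)]
means = [log_euclidean_mean(class0), log_euclidean_mean(class1)]
mdm_predict(np.diag([1.05, 0.95]), means)   # -> 0
```

No spatial filtering or explicit feature extraction step is needed: the covariance matrix itself is the object being classified.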
Q14. What is the main way to extract features from EEG data?
From the literature, the authors have identified two main ways of extracting features from EEG data:
• signal energy-based features
• sample-based features
Q15. What is the dimension of the SPD matrices?
This vector has dimension n(n + 1)/2 (with n the number of rows/columns of the SPD matrices), which might be larger than the number of available training trials in many BCI contexts.
Q16. What is the appealing approach to take in order to build dictionaries?
It would then be very appealing to take into account the Riemannian nature of covariance matrices or subspaces in order to build dictionaries.
Q17. What is the link between the spatial covariance and the sources?
As CSP is based on the extraction of sources from the class covariances of the signals, there exists a strong link between the spatial covariance and the sources.