Journal ArticleDOI

Single-Trial Kernel-Based Functional Connectivity for Enhanced Feature Extraction in Motor-Related Tasks

13 Apr 2021 - Sensors (Multidisciplinary Digital Publishing Institute) - Vol. 21, Iss. 8, pp. 2750
TL;DR: In this article, a kernel-based functional connectivity measure was proposed to deal with inter/intra-subject variability in motor-related tasks by extracting the functional connectivity between EEG channels through their Gaussian kernel cross-spectral distribution.
Abstract: Motor learning is associated with functional brain plasticity, involving specific functional connectivity changes in the neural networks. However, the degree of learning new motor skills varies among individuals, which is mainly due to the between-subject variability in brain structure and function captured by electroencephalographic (EEG) recordings. Here, we propose a kernel-based functional connectivity measure to deal with inter/intra-subject variability in motor-related tasks. To this end, from spatio-temporal-frequency patterns, we extract the functional connectivity between EEG channels through their Gaussian kernel cross-spectral distribution. Further, we optimize the spectral combination weights within a sparse-based l2-norm feature selection framework matching the motor-related labels, which performs the dimensionality reduction of the extracted connectivity features. From the validation results in three databases with motor imagery and motor execution tasks, we conclude that the single-trial Gaussian functional connectivity measure provides very competitive classifier performance values, being less affected by feature extraction parameters, like the sliding time window, and avoiding the use of prior linear spatial filtering. We also provide interpretability for the clustered functional connectivity patterns and hypothesize that the proposed kernel-based metric is promising for evaluating motor skills.
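Reading the abstract as a recipe, a minimal single-trial sketch of a Gaussian-kernel connectivity matrix might look as follows; the per-channel spectral representation (an STFT magnitude), the kernel bandwidth sigma, and the function name gaussian_kernel_connectivity are illustrative assumptions, not the authors' exact cross-spectral formulation.

```python
import numpy as np
from scipy.signal import stft

def gaussian_kernel_connectivity(trial, fs=250.0, sigma=1.0, nperseg=128):
    """Single-trial channel-by-channel connectivity from a Gaussian kernel
    applied to per-channel magnitude spectra (illustrative sketch only).

    trial : array of shape (n_channels, n_samples), one EEG trial.
    Returns a symmetric (n_channels, n_channels) connectivity matrix.
    """
    n_channels = trial.shape[0]
    # Short-time Fourier transform per channel -> magnitude spectrogram.
    _, _, Z = stft(trial, fs=fs, nperseg=nperseg)   # (n_channels, n_freqs, n_times)
    spectra = np.abs(Z).reshape(n_channels, -1)     # flatten freq x time per channel

    conn = np.zeros((n_channels, n_channels))
    for i in range(n_channels):
        for j in range(i, n_channels):
            # Gaussian (RBF) kernel on the spectral feature vectors.
            # sigma is illustrative; in practice it would be tuned,
            # e.g., with a median-distance heuristic.
            d2 = np.sum((spectra[i] - spectra[j]) ** 2)
            conn[i, j] = conn[j, i] = np.exp(-d2 / (2.0 * sigma ** 2))
    return conn

# Toy usage: an 8-channel trial of 2 s at 250 Hz.
rng = np.random.default_rng(0)
trial = rng.standard_normal((8, 500))
print(gaussian_kernel_connectivity(trial).shape)    # (8, 8)
```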
Citations
Journal ArticleDOI
01 Feb 2022-Sensors
TL;DR: The complex Pearson correlation coefficient (CPCC), which provides information on connectivity with and without consideration of the volume conduction effect, is proposed and compared to the most commonly used undirected connectivity analysis methods, which are phase locking value (PLV) and weighted phase lag index (wPLI).
Abstract: Behind all human thinking, acting, and reacting are sets of connections between different neurons or groups of neurons. We studied and evaluated these connections using electroencephalography (EEG) brain signals. In this paper, we propose the use of the complex Pearson correlation coefficient (CPCC), which provides information on connectivity with and without consideration of the volume conduction effect. Although the Pearson correlation coefficient is a widely accepted measure of the statistical relationships between random variables and the relationships between signals, it is not being used for EEG data analysis. Its meaning for EEG is not straightforward and rarely well understood. In this work, we compare it to the most commonly used undirected connectivity analysis methods, which are phase locking value (PLV) and weighted phase lag index (wPLI). First, the relationship between the measures is shown analytically. Then, it is illustrated by a practical comparison using synthetic and real EEG data. The relationships between the observed connectivity measures are described in terms of the correlation values between them, which are not lower than 0.97 for the absolute values of CPCC and PLV, and not lower than 0.92 for the imaginary component of CPCC and wPLI, across all observed frequency bands. Results show that the CPCC balances the information of both other measures in a single complex-valued index.
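For readers who want to reproduce this kind of comparison, the three measures can be computed for a channel pair from their analytic signals roughly as follows; the exact CPCC normalization used in the paper may differ from this Pearson-style sketch, and the helper name connectivity_measures is made up for illustration.

```python
import numpy as np
from scipy.signal import hilbert

def connectivity_measures(x, y):
    """Illustrative single-pair sketch of PLV, wPLI, and a complex
    Pearson-style correlation from analytic signals (assumed formulas)."""
    zx, zy = hilbert(x), hilbert(y)              # complex analytic signals

    # Phase locking value: consistency of the instantaneous phase difference.
    dphi = np.angle(zx) - np.angle(zy)
    plv = np.abs(np.mean(np.exp(1j * dphi)))

    # Weighted phase lag index from the cross-spectrum zx * conj(zy).
    cross = zx * np.conj(zy)
    wpli = np.abs(np.mean(np.imag(cross))) / np.mean(np.abs(np.imag(cross)))

    # Complex Pearson-style correlation of the (centered) analytic signals.
    zx_c, zy_c = zx - zx.mean(), zy - zy.mean()
    cpcc = np.sum(zx_c * np.conj(zy_c)) / (
        np.sqrt(np.sum(np.abs(zx_c) ** 2)) * np.sqrt(np.sum(np.abs(zy_c) ** 2)))
    return plv, wpli, cpcc

# Toy usage: two noisy signals sharing a lagged 10 Hz component.
t = np.arange(0, 4, 1 / 250)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 10 * t + 0.8) + 0.5 * rng.standard_normal(t.size)
plv, wpli, cpcc = connectivity_measures(x, y)
print(round(plv, 2), round(wpli, 2), round(float(np.abs(cpcc)), 2))
```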

22 citations

Journal ArticleDOI
29 Jun 2021-Sensors
TL;DR: In this paper, an enhanced Hilbert embedding-based approach from a cross-covariance operator, termed EHECCO, was introduced to map the input Mocap time series to a tensor space built from both 3D skeletal joints and a principal component analysis-based projection.
Abstract: Motion capture (Mocap) data are widely used as time series to study human movement. Indeed, animation movies, video games, and biomechanical systems for rehabilitation are significant applications related to Mocap data. However, classifying multi-channel time series from Mocap requires coding the intrinsic dependencies (even nonlinear relationships) between human body joints. Furthermore, the same human action may vary because the individual alters their movement, which increases the inter/intraclass variability. Here, we introduce an enhanced Hilbert embedding-based approach from a cross-covariance operator, termed EHECCO, to map the input Mocap time series to a tensor space built from both 3D skeletal joints and a principal component analysis-based projection. Obtained results demonstrate how EHECCO represents and discriminates joint probability distributions as a kernel-based evaluation of input time series within a tensor reproducing kernel Hilbert space (RKHS). Our approach achieves competitive classification results for style/subject and action recognition tasks on well-known publicly available databases. Moreover, EHECCO favors the interpretation of relevant anthropometric variables correlated with players’ expertise and acted movement on a Tennis-Mocap database (also publicly available with this work). Thereby, our EHECCO-based framework provides a unified representation (through the tensor RKHS) of the Mocap time series to compute linear correlations between a coded metric from joint distributions and player properties, i.e., age, body measurements, and sport movement (action class).
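EHECCO itself involves a tensor RKHS built from skeletal joints and a PCA-based projection; the sketch below only illustrates its core ingredient, an empirical cross-covariance operator norm (an HSIC-style statistic) between two multichannel Mocap streams with RBF kernels. Function names, bandwidths, and the toy data are assumptions, not the authors' pipeline.

```python
import numpy as np

def rbf_gram(X, sigma):
    """RBF Gram matrix between the rows of X (each row = one time frame)."""
    d2 = np.sum(X ** 2, 1)[:, None] + np.sum(X ** 2, 1)[None, :] - 2 * X @ X.T
    return np.exp(-d2 / (2 * sigma ** 2))

def hsic(X, Y, sigma_x=1.0, sigma_y=1.0):
    """Empirical Hilbert-Schmidt norm of the cross-covariance operator
    between two multichannel sequences sampled frame by frame."""
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    K, L = rbf_gram(X, sigma_x), rbf_gram(Y, sigma_y)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# Toy usage: two "joint trajectory" blocks from a 100-frame Mocap clip.
rng = np.random.default_rng(2)
arm = rng.standard_normal((100, 9))              # e.g., 3 joints x 3D coordinates
leg = 0.7 * arm[:, :6] + 0.3 * rng.standard_normal((100, 6))
print(round(hsic(arm, leg), 4))
```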

4 citations

Journal ArticleDOI
TL;DR: This work proposes a novel methodology to estimate TE between single pairs of instantaneous phase time series, combining a kernel-based TE estimator defined in terms of Renyi’s α entropy, which sidesteps the need for probability distribution computation, with phase time series obtained by complex filtering the neural signals.
Abstract: Neural oscillations are present in the brain at different spatial and temporal scales, and they are linked to several cognitive functions. Furthermore, the information carried by their phases is fundamental for the coordination of anatomically distributed processing in the brain. The concept of phase transfer entropy refers to an information theory-based measure of directed connectivity among neural oscillations that allows studying such distributed processes. Phase TE is commonly obtained from probability estimations carried out over data from multiple trials, which bars its use as a characterization strategy in brain–computer interfaces. In this work, we propose a novel methodology to estimate TE between single pairs of instantaneous phase time series. Our approach combines a kernel-based TE estimator defined in terms of Renyi’s α entropy, which sidesteps the need for probability distribution computation, with phase time series obtained by complex filtering the neural signals. Besides, a kernel-alignment-based relevance analysis is added to highlight relevant features from the effective connectivity-based representation, supporting further classification stages in EEG-based brain–computer interface systems. Our proposal is tested on simulated coupled data and two publicly available databases containing EEG signals recorded under motor imagery and visual working memory paradigms. Attained results demonstrate how the introduced effective connectivity succeeds in detecting the interactions present in the data for the former, with statistically significant results around the frequencies of interest. It also reflects differences in coupling strength, is robust to realistic noise and signal mixing levels, and captures bidirectional interactions of localized frequency content. Obtained results for the motor imagery and working memory databases show that our approach, combined with the relevance analysis strategy, codes discriminant spatial and frequency-dependent patterns for the different conditions in each experimental paradigm, with classification performances that do well in comparison with those of alternative methods of similar nature.
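A rough single-trial sketch of the idea is given below, assuming the matrix-based Renyi α-entropy estimator (entropy from the eigenvalues of a trace-normalized Gram matrix, joint entropies via Hadamard products) and instantaneous phases from the Hilbert transform; the lag, kernel bandwidth, and function names are illustrative, and the paper's exact estimator and complex filtering may differ.

```python
import numpy as np
from scipy.signal import hilbert

def renyi_entropy(K, alpha=2.0):
    """Matrix-based Renyi alpha entropy from the eigenvalues of a
    trace-normalized Gram matrix."""
    A = K / np.trace(K)
    lam = np.linalg.eigvalsh(A)
    lam = lam[lam > 1e-12]
    return np.log2(np.sum(lam ** alpha)) / (1.0 - alpha)

def phase_gram(ph, sigma=0.5):
    """RBF Gram matrix on unit phasors exp(i*phase), i.e. on the chordal
    distance, so phase wrapping is handled naturally."""
    d2 = 2.0 - 2.0 * np.cos(ph[:, None] - ph[None, :])
    return np.exp(-d2 / (2.0 * sigma ** 2))

def phase_te(x, y, alpha=2.0):
    """Sketch of single-trial phase transfer entropy TE(x -> y) at lag 1.
    Joint entropies come from Hadamard products of the Gram matrices."""
    phx, phy = np.angle(hilbert(x)), np.angle(hilbert(y))
    Yt, Yp, Xp = phy[1:], phy[:-1], phx[:-1]   # target, target past, source past
    Kyt, Kyp, Kxp = phase_gram(Yt), phase_gram(Yp), phase_gram(Xp)
    S = renyi_entropy
    return (S(Kyp * Kxp, alpha) + S(Kyt * Kyp, alpha)
            - S(Kyt * Kyp * Kxp, alpha) - S(Kyp, alpha))

# Toy usage: y carries a lagged copy of x plus noise (illustration only).
rng = np.random.default_rng(3)
x = rng.standard_normal(300)
y = np.roll(x, 1) + 0.5 * rng.standard_normal(300)
print(round(phase_te(x, y), 3), round(phase_te(y, x), 3))
```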

4 citations

Journal ArticleDOI
TL;DR: In this article, the authors review studies on the diagnosis of sleep apnea using AI methods, including machine learning (ML) and deep learning (DL) methods.
Abstract: Apnea is a sleep disorder that stops or reduces airflow for a short time during sleep. Sleep apnea may last for a few seconds and may recur many times during sleep. This reduction in breathing is associated with loud snoring, which may awaken the person with a feeling of suffocation. So far, a variety of methods have been introduced by researchers to diagnose sleep apnea, among which polysomnography (PSG) is known to be the best. Analysis of PSG signals is very complicated. Many studies have been conducted on the automatic diagnosis of sleep apnea from biological signals using artificial intelligence (AI), including machine learning (ML) and deep learning (DL) methods. This research reviews and investigates the studies on the diagnosis of sleep apnea using AI methods. First, computer-aided diagnosis systems (CADS) for sleep apnea using ML and DL techniques are introduced, along with their components: dataset, preprocessing, and the ML and DL methods themselves. This research also summarizes the important specifications of the studies on the diagnosis of sleep apnea using ML and DL methods in a table. In the following, a comprehensive discussion is made on the studies carried out in this field. The challenges in the diagnosis of sleep apnea using AI methods are of paramount importance for researchers; accordingly, these obstacles are elaborately addressed. In another section, the most important future works for studies on sleep apnea detection from PSG signals and AI techniques are presented. Ultimately, the essential findings of this study are provided in the conclusion section. This article is categorized under: Technologies > Artificial Intelligence; Application Areas > Data Mining Software Tools; Algorithmic Development > Biological Data Mining.

3 citations

References
Book
13 Mar 2017
TL;DR: This practical book shows you how to implement programs capable of learning from data. Using concrete examples, minimal theory, and two production-ready Python frameworks (scikit-learn and TensorFlow), author Aurelien Geron helps you gain an intuitive understanding of the concepts and tools for building intelligent systems.
Abstract: Through a series of recent breakthroughs, deep learning has boosted the entire field of machine learning. Now, even programmers who know close to nothing about this technology can use simple, efficient tools to implement programs capable of learning from data. This practical book shows you how. By using concrete examples, minimal theory, and two production-ready Python frameworks (scikit-learn and TensorFlow), author Aurelien Geron helps you gain an intuitive understanding of the concepts and tools for building intelligent systems. You'll learn a range of techniques, starting with simple linear regression and progressing to deep neural networks. With exercises in each chapter to help you apply what you've learned, all you need is programming experience to get started. The book shows you how to:
- Explore the machine learning landscape, particularly neural nets
- Use scikit-learn to track an example machine-learning project end-to-end
- Explore several training models, including support vector machines, decision trees, random forests, and ensemble methods
- Use the TensorFlow library to build and train neural nets
- Dive into neural net architectures, including convolutional nets, recurrent nets, and deep reinforcement learning
- Learn techniques for training and scaling deep neural nets
- Apply practical code examples without acquiring excessive machine learning theory or algorithm details
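In the spirit of the book's progression from linear regression to neural networks, a minimal scikit-learn example might look like the following; the synthetic data and hyperparameters are placeholders, not taken from the book.

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic regression data standing in for a real dataset.
X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Simple linear regression baseline ...
lin = LinearRegression().fit(X_train, y_train)

# ... and a small neural network trained on the same data.
mlp = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0),
).fit(X_train, y_train)

print(f"linear R^2: {lin.score(X_test, y_test):.2f}")
print(f"MLP    R^2: {mlp.score(X_test, y_test):.2f}")
```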

1,870 citations

Journal ArticleDOI
TL;DR: This study shows how to design and train convolutional neural networks to decode task‐related information from the raw EEG without handcrafted features and highlights the potential of deep ConvNets combined with advanced visualization techniques for EEG‐based brain mapping.
Abstract: Deep learning with convolutional neural networks (deep ConvNets) has revolutionized computer vision through end-to-end learning, that is, learning from the raw data. There is increasing interest in using deep ConvNets for end-to-end EEG analysis, but a better understanding of how to design and train ConvNets for end-to-end EEG decoding and how to visualize the informative EEG features the ConvNets learn is still needed. Here, we studied deep ConvNets with a range of different architectures, designed for decoding imagined or executed tasks from raw EEG. Our results show that recent advances from the machine learning field, including batch normalization and exponential linear units, together with a cropped training strategy, boosted the deep ConvNets decoding performance, reaching at least as good performance as the widely used filter bank common spatial patterns (FBCSP) algorithm (mean decoding accuracies 82.1% FBCSP, 84.0% deep ConvNets). While FBCSP is designed to use spectral power modulations, the features used by ConvNets are not fixed a priori. Our novel methods for visualizing the learned features demonstrated that ConvNets indeed learned to use spectral power modulations in the alpha, beta, and high gamma frequencies, and proved useful for spatially mapping the learned features by revealing the topography of the causal contributions of features in different frequency bands to the decoding decision. Our study thus shows how to design and train ConvNets to decode task-related information from the raw EEG without handcrafted features and highlights the potential of deep ConvNets combined with advanced visualization techniques for EEG-based brain mapping. Hum Brain Mapp 38:5391-5420, 2017. © 2017 Wiley Periodicals, Inc.
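A loose PyTorch sketch of a shallow ConvNet for raw EEG (channels x time), using the batch normalization and ELU activations highlighted above; the layer sizes, pooling, and dropout are illustrative guesses, not the published architecture or its cropped-training strategy.

```python
import torch
import torch.nn as nn

class ShallowEEGNet(nn.Module):
    """Loose sketch of a shallow ConvNet for raw EEG trials.
    Layer sizes are illustrative, not the published architecture."""
    def __init__(self, n_channels=22, n_classes=4, n_samples=1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 40, kernel_size=(1, 25)),           # temporal convolution
            nn.Conv2d(40, 40, kernel_size=(n_channels, 1)),   # spatial convolution
            nn.BatchNorm2d(40),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 75), stride=(1, 15)),
            nn.Dropout(0.5),
        )
        # Infer the flattened feature size with a dummy forward pass.
        with torch.no_grad():
            n_feat = self.features(torch.zeros(1, 1, n_channels, n_samples)).numel()
        self.classifier = nn.Linear(n_feat, n_classes)

    def forward(self, x):                 # x: (batch, 1, n_channels, n_samples)
        return self.classifier(self.features(x).flatten(1))

# Toy forward pass on a batch of 8 trials (22 channels, 4 s at 250 Hz).
model = ShallowEEGNet()
print(model(torch.randn(8, 1, 22, 1000)).shape)   # torch.Size([8, 4])
```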

1,675 citations

Journal ArticleDOI
TL;DR: A comprehensive overview of the modern classification algorithms used in EEG-based BCIs is provided, the principles of these methods and guidelines on when and how to use them are presented, and a number of challenges to further advance EEG classification in BCI are identified.
Abstract: Objective: Most current Electroencephalography (EEG)-based Brain-Computer Interfaces (BCIs) are based on machine learning algorithms. There is a large diversity of classifier types that are used in this field, as described in our 2007 review paper. Now, approximately 10 years after this review publication, many new algorithms have been developed and tested to classify EEG signals in BCIs. The time is therefore ripe for an updated review of EEG classification algorithms for BCIs. Approach: We surveyed the BCI and machine learning literature from 2007 to 2017 to identify the new classification approaches that have been investigated to design BCIs. We synthesize these studies in order to present such algorithms, to report how they were used for BCIs and what the outcomes were, and to identify their pros and cons. Main results: We found that the recently designed classification algorithms for EEG-based BCIs can be divided into four main categories: adaptive classifiers, matrix and tensor classifiers, transfer learning and deep learning, plus a few other miscellaneous classifiers. Among these, adaptive classifiers were demonstrated to be generally superior to static ones, even with unsupervised adaptation. Transfer learning can also prove useful, although its benefits remain unpredictable. Riemannian geometry-based methods have reached state-of-the-art performances on multiple BCI problems and deserve to be explored more thoroughly, along with tensor-based methods. Shrinkage linear discriminant analysis and random forests also appear particularly useful for small training sample settings. On the other hand, deep learning methods have not yet shown convincing improvement over state-of-the-art BCI methods. Significance: This paper provides a comprehensive overview of the modern classification algorithms used in EEG-based BCIs, presents the principles of these methods, and gives guidelines on when and how to use them. It also identifies a number of challenges to further advance EEG classification in BCI.
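As a small illustration of one of the review's practical points (shrinkage LDA for small training samples), the following scikit-learn snippet compares plain and shrinkage LDA on a toy small-sample problem; the data and dimensions are invented for the example.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Small-sample toy problem: 40 trials, 60-dimensional EEG feature vectors.
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 60))
y = np.repeat([0, 1], 20)
X[y == 1, :5] += 0.8                                  # weak class difference

plain = LinearDiscriminantAnalysis(solver="lsqr")
shrunk = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")  # Ledoit-Wolf

for name, clf in [("plain LDA", plain), ("shrinkage LDA", shrunk)]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.2f}")
```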

1,280 citations

Journal ArticleDOI
TL;DR: A new kernel is derived by establishing a connection with the Riemannian geometry of symmetric positive definite matrices, effectively replacing the traditional spatial filtering approach for motor imagery EEG-based classification in brain-computer interface applications.
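One commonly used Riemannian-motivated kernel on trial-wise spatial covariance matrices is the log-Euclidean kernel sketched below, fed to a precomputed-kernel SVM in place of spatial filtering; the kernel actually derived in the cited paper may differ, and the toy data and function name are assumptions.

```python
import numpy as np
from scipy.linalg import logm
from sklearn.svm import SVC

def logeuclid_kernel(covs_a, covs_b):
    """Log-Euclidean kernel between two sets of SPD covariance matrices:
    k(A, B) = trace(logm(A) @ logm(B)). A common Riemannian-motivated
    kernel; the cited paper's exact kernel may differ."""
    logs_a = [logm(C).real for C in covs_a]
    logs_b = [logm(C).real for C in covs_b]
    return np.array([[np.trace(La @ Lb) for Lb in logs_b] for La in logs_a])

# Toy usage: one spatial covariance per trial, classified by a kernel SVM
# without prior spatial filtering.
rng = np.random.default_rng(0)
trials = rng.standard_normal((30, 8, 250))            # 30 trials, 8 channels
covs = np.array([t @ t.T / t.shape[1] for t in trials])
y = np.repeat([0, 1], 15)

svm = SVC(kernel="precomputed").fit(logeuclid_kernel(covs, covs), y)
print(svm.score(logeuclid_kernel(covs, covs), y))     # training accuracy only
```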

326 citations

Journal ArticleDOI
TL;DR: A novel algorithm, namely temporally constrained sparse group spatial pattern (TSGSP), is proposed for the simultaneous optimization of filter bands and time window within CSP to further boost classification accuracy of MI EEG.
Abstract: Common spatial pattern (CSP)-based spatial filtering has been most popularly applied to electroencephalogram (EEG) feature extraction for motor imagery (MI) classification in brain–computer interface (BCI) applications. The effectiveness of CSP is highly affected by the frequency band and time window of the EEG segments. Although numerous algorithms have been designed to optimize the spectral bands of CSP, most of them select the time window in a heuristic way. This is likely to result in suboptimal feature extraction, since the time period in which the brain responds to the mental task may not be accurately detected. In this paper, we propose a novel algorithm, namely temporally constrained sparse group spatial pattern (TSGSP), for the simultaneous optimization of filter bands and time windows within CSP to further boost the classification accuracy of MI EEG. Specifically, spectrum-specific signals are first derived by bandpass filtering the raw EEG data at a set of overlapping filter bands. Each of the spectrum-specific signals is further segmented into multiple subseries using a sliding window approach. We then devise a joint sparse optimization of filter bands and time windows with a temporal smoothness constraint to extract robust CSP features under a multitask learning framework. A linear support vector machine classifier is trained on the optimized EEG features to accurately identify the MI tasks. An experimental study is implemented on three public EEG datasets (BCI Competition III dataset IIIa, BCI Competition IV dataset IIa, and BCI Competition IV dataset IIb) to validate the effectiveness of TSGSP in comparison to several other competing methods. The superior classification performance (average accuracies of 88.5%, 83.3%, and 84.3% for the three datasets, respectively) confirms that the proposed algorithm is a promising candidate for performance improvement of MI-based BCIs.
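TSGSP builds on ordinary CSP evaluated over many filter bands and time windows; the sketch below shows only that building block (CSP filters from a generalized eigendecomposition plus log-variance features), not the joint sparse optimization with the temporal smoothness constraint. Array shapes and names are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=3):
    """Plain CSP for one frequency band / time window (the building block
    that TSGSP optimizes across bands and windows). trials_*: arrays of
    shape (n_trials, n_channels, n_samples), already band-pass filtered."""
    def mean_cov(trials):
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
        return np.mean(covs, axis=0)

    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenproblem Ca w = lambda (Ca + Cb) w.
    eigvals, eigvecs = eigh(Ca, Ca + Cb)
    order = np.argsort(eigvals)                      # extreme eigenvalues discriminate best
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]
    return eigvecs[:, picks].T                       # (2 * n_pairs, n_channels)

def csp_features(trials, W):
    """Log-variance of spatially filtered trials, the usual CSP feature."""
    return np.array([np.log(np.var(W @ t, axis=1)) for t in trials])

# Toy usage with random stand-ins for band-passed trials of two classes.
rng = np.random.default_rng(0)
a = rng.standard_normal((20, 8, 500))
b = rng.standard_normal((20, 8, 500)) * 1.2
W = csp_filters(a, b)
print(csp_features(a, W).shape)                      # (20, 6)
```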

233 citations