
Showing papers in "IEEE Transactions on Neural Systems and Rehabilitation Engineering in 2022"


Journal ArticleDOI
TL;DR: A physics-informed deep learning framework for musculoskeletal modelling, where physics-based domain knowledge is brought into the data-driven model as soft constraints to penalise/regularise the data-driven model, and the physics law between muscle forces and joint kinematics is used as the soft constraint.
Abstract: Musculoskeletal models have been widely used for detailed biomechanical analysis to characterise various functional impairments given their ability to estimate movement variables (i.e., muscle forces and joint moments) which cannot be readily measured in vivo. Physics-based computational neuromusculoskeletal models can interpret the dynamic interaction between neural drive to muscles, muscle dynamics, body and joint kinematics and kinetics. Still, such solutions suffer from slowness, especially for complex models, hindering their utility in real-time applications. In recent years, data-driven methods have emerged as a promising alternative due to their speedy and simple implementation, but they cannot reflect the underlying neuromechanical processes. This paper proposes a physics-informed deep learning framework for musculoskeletal modelling, where physics-based domain knowledge is brought into the data-driven model as soft constraints to penalise/regularise the data-driven model. We use the synchronous prediction of muscle forces and joint kinematics from surface electromyogram (sEMG) as the exemplar to illustrate the proposed framework. A convolutional neural network (CNN) is employed as the deep neural network to implement the proposed framework, and the physics law between muscle forces and joint kinematics is used as the soft constraint. Experimental validations on two groups of data, including one benchmark dataset and one self-collected dataset from six healthy subjects, are performed. The experimental results demonstrate the effectiveness and robustness of the proposed framework.
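
To make the soft-constraint idea concrete, below is a minimal PyTorch sketch of a composite loss in this spirit. The physics term is a hypothetical stand-in: it penalises disagreement between the joint torque implied by the predicted muscle forces (through assumed moment arms) and the torque obtained from the predicted kinematics via a caller-supplied inverse-dynamics function; the paper's actual constraint may differ.

```python
import torch
import torch.nn.functional as F

def physics_informed_loss(f_pred, q_pred, f_true, q_true,
                          moment_arm, inverse_dynamics, lam=0.1):
    """Data-fitting loss plus a soft physics penalty (illustrative form).

    f_*: muscle forces (batch, n_muscles); q_*: joint kinematics.
    moment_arm: assumed per-muscle moment arms (n_muscles,).
    inverse_dynamics: hypothetical callable mapping kinematics to torque.
    """
    data_loss = F.mse_loss(f_pred, f_true) + F.mse_loss(q_pred, q_true)
    # Soft constraint: torque from forces should match torque from kinematics.
    residual = (f_pred * moment_arm).sum(dim=-1) - inverse_dynamics(q_pred)
    return data_loss + lam * residual.pow(2).mean()
```

The weight lam trades off data fit against physics consistency, which is exactly the penalisation/regularisation role the abstract describes.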

31 citations


Journal ArticleDOI
TL;DR: In this paper , a tensor-based frequency feature combination (TFFC) was proposed to extract the frequency information in the MI-BCI system using electroencephalogram (EEG).
Abstract: With the development of the brain-computer interface (BCI) community, motor imagery-based BCI systems using electroencephalogram (EEG) have attracted increasing attention because of their portability and low cost. Concerning multi-channel EEG, the frequency component is one of the most critical features, but insufficient extraction of frequency information hinders the development and application of MI-BCIs. To deeply mine the frequency information, we proposed a method called tensor-based frequency feature combination (TFFC). It combined tensor-to-vector projection (TVP), fast Fourier transform (FFT), common spatial pattern (CSP) and feature fusion to construct a new feature set. With two datasets, we used different classifiers to compare TFFC with state-of-the-art feature extraction methods. The experimental results showed that our proposed TFFC could robustly improve the classification accuracy by about 5%. Moreover, visualization analysis implied that TFFC is a generalization of CSP and Filter Bank CSP (FBCSP). Also, a complementarity between weighted narrowband features (wNBFs) and broadband features (BBFs) was observed from the averaged fusion ratio. This article confirms the importance of frequency information in the MI-BCI system and provides a new direction for designing a feature set of MI-EEG.
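
Since TFFC is shown to generalize CSP, a brief sketch of the classical two-class CSP building block helps ground that comparison. This is the textbook formulation via a generalized eigendecomposition, not the authors' tensor pipeline:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(X1, X2, n_pairs=3):
    """Common spatial patterns for two MI classes.

    X1, X2: trials with shape (n_trials, n_channels, n_samples).
    Returns 2*n_pairs spatial filters maximising the variance ratio
    between the classes.
    """
    C1 = np.mean([np.cov(trial) for trial in X1], axis=0)
    C2 = np.mean([np.cov(trial) for trial in X2], axis=0)
    # Generalized eigenproblem: C1 w = lambda (C1 + C2) w
    eigvals, eigvecs = eigh(C1, C1 + C2)
    order = np.argsort(eigvals)
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return eigvecs[:, picks].T                      # (2*n_pairs, n_channels)

def log_var_features(X, W):
    """Log-variance of spatially filtered trials, the usual CSP feature."""
    Z = np.einsum('fc,tcs->tfs', W, X)
    var = Z.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))
```

TFFC then replaces the single broadband covariance with frequency-resolved tensor features (via TVP and the FFT) before fusion, which is where it departs from this baseline.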

20 citations


Journal ArticleDOI
TL;DR: This work designed Transformer-based models for classifications of motor imagery EEG based on the PhysioNet dataset and revealed a pattern of event-related desynchronization (ERD) which was consistent with the results from the spectral analysis of Mu and beta rhythm over the sensorimotor areas.
Abstract: The attention mechanism of the Transformer has the advantage of extracting feature correlation in long-sequence data and visualizing the model. As time-series data, the spatial and temporal dependencies of the EEG signals between the time points and the different channels contain important information for accurate classification. So far, Transformer-based approaches have not been widely explored in motor-imagery EEG classification and visualization, and general models validated across individuals are especially lacking. Taking advantage of the Transformer model and the spatial-temporal characteristics of the EEG signals, we designed Transformer-based models for classification of motor imagery EEG based on the PhysioNet dataset. With 3s EEG data, our models obtained the best classification accuracies of 83.31%, 74.44%, and 64.22% on two-, three-, and four-class motor-imagery tasks in cross-individual validation, which outperformed other state-of-the-art models by 0.88%, 2.11%, and 1.06%. The inclusion of the positional embedding modules in the Transformer could improve the EEG classification performance. Furthermore, the visualization results of attention weights provided insights into the working mechanism of the Transformer-based networks during motor imagery tasks. The topography of the attention weights revealed a pattern of event-related desynchronization (ERD) which was consistent with the results from the spectral analysis of mu and beta rhythms over the sensorimotor areas. Together, our deep learning methods not only provide novel and powerful tools for classifying and understanding EEG data but also have broad applications for brain-computer interface (BCI) systems.
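
Below is a minimal sketch of the kind of model the abstract describes, assuming each time point is treated as a token and using a learnable positional embedding (the component the authors found beneficial). All layer sizes are illustrative:

```python
import torch
import torch.nn as nn

class EEGTransformer(nn.Module):
    """Minimal Transformer classifier for epoched EEG."""
    def __init__(self, n_channels=64, n_times=480, n_classes=4,
                 d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.proj = nn.Linear(n_channels, d_model)
        # Learnable positional embedding over the time axis.
        self.pos = nn.Parameter(torch.zeros(1, n_times, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):              # x: (batch, n_times, n_channels)
        h = self.encoder(self.proj(x) + self.pos)
        return self.head(h.mean(dim=1))
```

The attention weights inside self.encoder are what the study visualises topographically to reveal the ERD pattern.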

16 citations


Journal ArticleDOI
TL;DR: In this paper , a new dynamic brain network analysis method based on EEG microstate was proposed to evaluate the cross-task mental workload using the dynamic functional connectivity metrics under specific microstate, which provided a new insight for understanding the neural mechanism of mental workload with different types of information.
Abstract: The accurate evaluation of operators’ mental workload in human-machine systems plays an important role in ensuring the correct execution of tasks and the safety of operators. However, the performance of cross-task mental workload evaluation based on physiological metrics remains unsatisfactory. To explore the changes in dynamic functional connectivity properties with varying mental workload in different tasks, four mental workload tasks with different types of information were designed and a newly proposed dynamic brain network analysis method based on EEG microstates was applied in this paper. Six microstate topographies labeled as Microstates A-F were obtained to describe the task-state EEG dynamics, which were highly consistent with previous studies. Dynamic brain network analysis revealed that 15 nodes and 68 pairs of connectivity from the frontal-parietal region were sensitive to mental workload in all four tasks, indicating that these nodal metrics have the potential to effectively evaluate mental workload in the cross-task scenario. The characteristic path length of the Microstate D brain network in both the theta and alpha bands decreased whereas the global efficiency increased significantly when the mental workload became higher, suggesting that the cognitive control network of the brain tended to have a higher functional integration property under high mental workload states. Furthermore, by using an SVM classifier, average classification accuracies of 95.8% for within-task and 80.3% for cross-task mental workload discrimination were achieved. These results imply that it is feasible to evaluate cross-task mental workload using dynamic functional connectivity metrics under specific microstates, providing new insight into the neural mechanism of mental workload with different types of information.
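
For readers unfamiliar with the two graph metrics tracked here, the following sketch shows how characteristic path length and global efficiency can be computed from a per-microstate, per-band connectivity matrix; the binarisation threshold is an arbitrary illustrative choice, not the paper's:

```python
import networkx as nx
import numpy as np

def path_length_and_efficiency(connectivity, threshold=0.3):
    """Graph metrics of a functional brain network.

    connectivity: symmetric (n_nodes, n_nodes) connectivity matrix.
    """
    adj = (np.abs(connectivity) > threshold).astype(int)
    np.fill_diagonal(adj, 0)
    G = nx.from_numpy_array(adj)
    efficiency = nx.global_efficiency(G)
    # Characteristic path length needs a connected graph; fall back to
    # the largest connected component otherwise.
    if not nx.is_connected(G):
        G = G.subgraph(max(nx.connected_components(G), key=len))
    path_length = nx.average_shortest_path_length(G)
    return path_length, efficiency
```

A shorter path length together with higher efficiency, as reported for Microstate D under high workload, indicates stronger functional integration.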

15 citations


Journal ArticleDOI
TL;DR: In this article, a convolutional neural network using a channel-wise variational autoencoder (CVNet) based on inter-task transfer learning was proposed to decode forearm movements from electroencephalography (EEG) signals.
Abstract: Highly sophisticated control based on a brain-computer interface (BCI) requires decoding kinematic information from brain signals. The forearm is a region of the upper limb that is often used in everyday life, but intuitive movements within the same limb have rarely been investigated in previous BCI studies. In this study, we focused on decoding various forearm movements from electroencephalography (EEG) signals using a small number of samples. Ten healthy participants took part in an experiment and performed motor execution (ME) and motor imagery (MI) of the intuitive movement tasks (Dataset I). We propose a convolutional neural network using a channel-wise variational autoencoder (CVNet) based on inter-task transfer learning. Our approach is that training on the reconstructed ME-EEG signals together with the MI data achieves sufficient classification performance with only a small amount of MI-EEG signals. The proposed CVNet was validated on our own Dataset I and a public dataset, BNCI Horizon 2020 (Dataset II). The classification accuracies of various movements are confirmed to be 0.83 (±0.04) and 0.69 (±0.04) for Datasets I and II, respectively. The results show that the proposed method exhibits performance increases of approximately 0.09~0.27 and 0.08~0.24 compared with the conventional models for Datasets I and II, respectively. The outcomes suggest that a model for decoding imagined movements can be trained using data from ME and a small number of data samples from MI. Hence, this demonstrates the feasibility of BCI learning strategies that can train deep learning models with only a small calibration dataset and limited time, with stable performance.
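
In its simplest form, the inter-task transfer idea reduces to pre-training on plentiful ME-EEG and fine-tuning on the small MI set. A hypothetical recipe is sketched below; CVNet's channel-wise VAE is omitted, and any nn.Module classifier can stand in:

```python
import torch
import torch.nn as nn

def inter_task_transfer(model, me_loader, mi_loader,
                        epochs_me=50, epochs_mi=10):
    """Pre-train on motor execution (ME), fine-tune on motor imagery (MI)."""
    loss_fn = nn.CrossEntropyLoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for loader, epochs in [(me_loader, epochs_me), (mi_loader, epochs_mi)]:
        for _ in range(epochs):
            for x, y in loader:            # x: EEG epochs, y: movement labels
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
    return model
```

The calibration saving comes from the second loop being small: only a few MI trials are needed once the ME representation is in place.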

15 citations


Journal ArticleDOI
TL;DR: This is the first epilepsy seizure detection study employing the integration of both the UL and the SL modules, achieving a competitive performance superior or similar to that of the state-of-the-art methods.
Abstract: The electroencephalogram (EEG), for measuring the electrophysiological activity of the brain, has been widely applied in automatic detection of epilepsy seizures. Various EEG-based seizure detection algorithms have already yielded high sensitivity, but training those algorithms requires a large amount of labelled data. Data labelling often requires substantial human effort and is very time-consuming. In this study, we propose a hybrid system integrating an unsupervised learning (UL) module and a supervised learning (SL) module, where the UL module can significantly reduce the workload of data labelling. For preliminary seizure screening, UL synthesizes amplitude-integrated EEG (aEEG) extraction, isolation forest-based anomaly detection, adaptive segmentation, and silhouette coefficient-based anomaly detection evaluation. The UL module serves to quickly locate the determinate subjects (seizure segments and seizure-free segments) and the indeterminate subjects (potential seizure candidates). Afterwards, more robust seizure detection for the indeterminate subjects is performed by the SL module using the EasyEnsemble algorithm. EasyEnsemble, as a class-imbalance learning method, can potentially decrease the generalization error on the seizure-free segments. The proposed method can significantly reduce the workload of data labelling while guaranteeing satisfactory performance. The proposed seizure detection system is evaluated using the Children’s Hospital Boston-Massachusetts Institute of Technology (CHB-MIT) scalp EEG dataset, and it achieves a mean accuracy of 92.62%, a mean sensitivity of 95.55%, and a mean specificity of 92.57%. To the best of our knowledge, this is the first epilepsy seizure detection study employing the integration of both the UL and the SL modules, achieving a competitive performance superior or similar to that of the state-of-the-art methods.
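
A compact sketch of the two-stage idea follows, assuming pre-computed per-segment features. The anomaly-score quantile thresholds stand in for the authors' silhouette-based screening and are purely illustrative:

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from imblearn.ensemble import EasyEnsembleClassifier

def hybrid_detect(X_unlabelled, X_train, y_train, X_query):
    """UL screening with an isolation forest, SL refinement with EasyEnsemble."""
    iso = IsolationForest(random_state=0).fit(X_unlabelled)
    scores = iso.score_samples(X_query)         # lower = more anomalous
    lo, hi = np.quantile(scores, [0.05, 0.50])  # illustrative thresholds
    indeterminate = (scores > lo) & (scores < hi)

    # Determinate segments are labelled directly from the anomaly score;
    # indeterminate ones go to the class-imbalance-aware SL module.
    y_pred = np.where(scores <= lo, 1, 0)       # 1 = seizure
    clf = EasyEnsembleClassifier(random_state=0).fit(X_train, y_train)
    y_pred[indeterminate] = clf.predict(X_query[indeterminate])
    return y_pred
```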

14 citations


Journal ArticleDOI
TL;DR: Based on a bidirectional gated recurrent unit (Bi-GRU) neural network, an automatic seizure detection method is proposed in this article, where wavelet transforms are applied to EEG recordings for filtering pre-processing.
Abstract: Visual inspection of long-term electroencephalography (EEG) is a tedious task for physicians in neurology. Based on a bidirectional gated recurrent unit (Bi-GRU) neural network, an automatic seizure detection method is proposed in this paper to facilitate the diagnosis and treatment of epilepsy. Firstly, wavelet transforms are applied to EEG recordings for filtering pre-processing. Then the relative energies of signals in several particular frequency bands are calculated and input into the Bi-GRU network. Afterwards, the outputs of the Bi-GRU network are further processed by moving-average filtering, threshold comparison and seizure merging to determine whether the tested EEG belongs to a seizure or not. Evaluated on the CHB-MIT scalp EEG database, the proposed seizure detection method obtained an average sensitivity of 93.89% and an average specificity of 98.49%. 124 out of 128 seizures were correctly detected and the achieved average false detection rate was 0.31 per hour on 867.14 h of testing data. The results show the superiority of the Bi-GRU network in seizure detection, and the proposed method has promising potential for the monitoring of long-term EEG.
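
A sketch of the two core pieces is given below, assuming single-channel epochs: relative wavelet sub-band energies as features, and a bidirectional GRU over the per-epoch feature sequence. The wavelet family and decomposition level are illustrative choices:

```python
import numpy as np
import pywt
import torch.nn as nn

def relative_band_energies(epoch_1d, wavelet='db4', level=5):
    """Relative energy of each wavelet sub-band for one EEG epoch."""
    coeffs = pywt.wavedec(epoch_1d, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()             # (level + 1,) features

class BiGRUDetector(nn.Module):
    """Bi-GRU over epoch-feature sequences with per-epoch seizure logits."""
    def __init__(self, n_features=6, hidden=32):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True,
                          bidirectional=True)
        self.out = nn.Linear(2 * hidden, 2)

    def forward(self, x):                        # x: (batch, epochs, features)
        h, _ = self.gru(x)
        return self.out(h)
```

The moving-average filtering, thresholding and seizure-merging post-processing would then operate on these per-epoch logits.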

14 citations


Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper extracted low-dimensional spectral-temporal features in terms of mean-standard deviation of wavelet transform coefficient (MS-WTC), based on which a novel absence seizure detection framework was developed.
Abstract: Absence seizure, a generalized-onset seizure that simultaneously spreads to both sides of the brain, involves sudden lapses of consciousness of around ten seconds. It occurs more commonly in children than in adults, affects quality of life and can even threaten lives. Absence seizure can be confused with inattentive attention-deficit hyperactivity disorder since both have similar symptoms, such as inattention and daze. Therefore, it is necessary to detect absence seizure onset. However, seizure onset detection in electroencephalography (EEG) signals is a challenging task due to the non-stereotyped seizure activities as well as their inherently stochastic and non-stationary characteristics. Joint spectral-temporal features are believed to contain sufficient and powerful feature information for absence seizure detection. However, the resulting high-dimensional features involve redundant information and require a heavy computational load. Here, we discover significant low-dimensional spectral-temporal features in terms of the mean-standard deviation of wavelet transform coefficients (MS-WTC), based on which a novel absence seizure detection framework is developed. The EEG signals are transformed into the spectral-temporal domain, with their low-dimensional features fed into a convolutional neural network. Superior detection performance is achieved on the widely-used benchmark dataset as well as a clinical dataset from the Chinese 301 Hospital. For the former, seven classification tasks were evaluated with accuracies from 99.8% to 100.0%, while for the latter, the method achieved a mean accuracy of 94.7%, outperforming other methods based on low-dimensional temporal and spectral features. Experimental results on two seizure datasets demonstrate the reliability, efficiency and stability of the proposed MS-WTC method, validating the significance of the extracted low-dimensional spectral-temporal features.
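
The MS-WTC feature itself is simple to state: keep only the mean and standard deviation of the wavelet coefficients at each scale. A sketch under the assumption of a discrete wavelet decomposition (family and level are illustrative):

```python
import numpy as np
import pywt

def ms_wtc_features(epoch_1d, wavelet='db4', level=5):
    """Mean and standard deviation of wavelet coefficients per level:
    two numbers per sub-band instead of the full coefficient set."""
    coeffs = pywt.wavedec(epoch_1d, wavelet, level=level)
    return np.array([[c.mean(), c.std()] for c in coeffs]).ravel()
```

Stacking these per-channel vectors yields the low-dimensional spectral-temporal input that the abstract feeds to a convolutional neural network.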

13 citations


Journal ArticleDOI
TL;DR: A comparative study on the classification of dysarthria severity levels using different deep learning techniques and acoustic features and finds the DNN classifier using MFCC-based i-vectors outperforms other systems.
Abstract: Assessing the severity level of dysarthria can provide insight into the patient’s improvement, assist pathologists in planning therapy, and aid automatic dysarthric speech recognition systems. In this article, we present a comparative study on the classification of dysarthria severity levels using different deep learning techniques and acoustic features. First, we evaluate the basic architectural choices such as deep neural network (DNN), convolutional neural network, gated recurrent units and long short-term memory network using the basic speech features, namely, Mel-frequency cepstral coefficients (MFCCs) and constant-Q cepstral coefficients. Next, speech-disorder specific features computed from prosody, articulation, phonation and glottal functioning are evaluated on DNN models. Finally, we explore the utility of low-dimensional feature representation using subspace modeling to give i-vectors, which are then classified using DNN models. Evaluation is done using the standard UA-Speech and TORGO databases. With an accuracy of 93.97% under the speaker-dependent scenario and 49.22% under the speaker-independent scenario for the UA-Speech database, the DNN classifier using MFCC-based i-vectors outperforms the other systems.
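
As a concrete starting point, per-utterance MFCC extraction might look like the sketch below; mean/std pooling is a deliberately simple stand-in for the i-vector subspace modelling the paper actually uses:

```python
import librosa
import numpy as np

def mfcc_stats(wav_path, n_mfcc=13):
    """Summary MFCC features for one utterance."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
```

The resulting fixed-length vectors can then be fed to any of the DNN classifiers compared in the study.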

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a SincNet-based hybrid neural network (SHNN) for MI-based BCIs, which used squeeze-and-excitation modules to learn a sparse representation of the filtered data.
Abstract: It is difficult to identify optimal cut-off frequencies for the filters used with the common spatial pattern (CSP) method in motor imagery (MI)-based brain-computer interfaces (BCIs). Most current studies choose filter cut-off frequencies based on experience or intuition, resulting in sub-optimal use of MI-related spectral information in the electroencephalography (EEG). To improve information utilization, we propose a SincNet-based hybrid neural network (SHNN) for MI-based BCIs. First, raw EEG is segmented into different time windows and mapped into the CSP feature space. Then, SincNets are used as filter-bank band-pass filters to automatically filter the data. Next, squeeze-and-excitation modules learn a sparse representation of the filtered data, which is fed into convolutional neural networks to learn deep feature representations. Finally, these deep features are fed into a gated recurrent unit module to seek sequential relations, and a fully connected layer is used for classification. We used the BCI competition IV datasets 2a and 2b to verify the effectiveness of our SHNN method. The mean classification accuracies (kappa values) of our SHNN method are 0.7426 (0.6648) on dataset 2a and 0.8349 (0.6697) on dataset 2b, respectively. The statistical test results demonstrate that our SHNN can significantly outperform other state-of-the-art methods on these datasets.
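
The distinctive component is the SincNet-style layer: a bank of band-pass filters whose only learnable parameters are the cut-off frequencies. A simplified PyTorch sketch follows; the kernel length, initial bands and sampling rate are illustrative, and the original SincNet differs in detail:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SincBandpass(nn.Module):
    """Learnable windowed-sinc band-pass filter bank."""
    def __init__(self, n_filters=8, kernel_size=65, fs=250.0):
        super().__init__()
        self.fs, self.kernel_size = fs, kernel_size
        self.low = nn.Parameter(torch.linspace(4.0, 30.0, n_filters))  # Hz
        self.band = nn.Parameter(torch.full((n_filters,), 4.0))        # Hz

    def forward(self, x):                       # x: (batch, 1, n_samples)
        k = self.kernel_size
        t = (torch.arange(k, dtype=torch.float32) - k // 2) / self.fs
        f1 = self.low.abs()
        f2 = f1 + self.band.abs()
        # Band-pass = difference of two sinc low-pass prototypes.
        h = (2 * f2[:, None] * torch.sinc(2 * f2[:, None] * t)
             - 2 * f1[:, None] * torch.sinc(2 * f1[:, None] * t))
        kernels = (h * torch.hamming_window(k)).unsqueeze(1)
        return F.conv1d(x, kernels, padding=k // 2)
```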

Journal ArticleDOI
TL;DR: In this article , a fully wireless body sensor network is proposed for the integrated acquisition of EEG and sEMG signals, which is composed of wireless bio-signal acquisition modules, named sensor units and a set of synchronization modules used as a general-purpose system for time-locked recordings.
Abstract: Sensorimotor integration is the process through which the human brain plans the motor program execution according to external sources. Within this context, corticomuscular and corticokinematic coherence analyses are common methods to investigate the mechanism underlying the central control of muscle activation. This requires the synchronous acquisition of several physiological signals, including EEG and sEMG. Nevertheless, physical constraints of the current, mostly wired, technologies limit their application in dynamic and naturalistic contexts. In fact, although many efforts were made in the development of biomedical instrumentation for EEG and High Density-surface EMG (HD-sEMG) signal acquisition, the need for an integrated wireless system is emerging. We hereby describe the design and validation of a new fully wireless body sensor network for the integrated acquisition of EEG and HD-sEMG signals. This Body Sensor Network is composed of wireless bio-signal acquisition modules, named sensor units, and a set of synchronization modules used as a general-purpose system for time-locked recordings. The system was characterized in terms of accuracy of the synchronization and quality of the collected signals. An in-depth characterization of the entire system and a head-to-head comparison of the wireless EEG sensor unit with a wired benchmark EEG device were performed. The proposed device represents an advancement over state-of-the-art technology, allowing the integrated acquisition of EEG and HD-sEMG signals for the study of sensorimotor integration.

Journal ArticleDOI
TL;DR: In this paper , a fine-grained workload paradigm including working memory and mathematic addition tasks was designed, and four domain adaptation methods were explored to bridge the discrepancy between the two different tasks.
Abstract: Cognitive workload recognition is pivotal to maintaining the operator's health and preventing accidents in human-robot interaction conditions. So far, the focus of workload research has mostly been restricted to a single task, yet cross-task cognitive workload recognition remains a challenge. Furthermore, when extending to a new workload condition, the discrepancy of electroencephalogram (EEG) signals across various cognitive tasks limits the generalization of the existing model. To tackle this problem, we propose to construct EEG-based cross-task cognitive workload recognition models using domain adaptation methods in a leave-one-task-out cross-validation setting, where we view any task of each subject as a domain. Specifically, we first design a fine-grained workload paradigm including working memory and mathematic addition tasks. Then, we explore four domain adaptation methods to bridge the discrepancy between the two different tasks. Finally, based on a support vector machine classifier, we conduct experiments to classify the low and high workload levels on a private EEG dataset. Experimental results demonstrate that our proposed task transfer framework outperforms the non-transfer classifier with improvements of 3% to 8% in terms of mean accuracy, and the transfer joint matching (TJM) consistently achieves the best performance.
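
Of the four domain adaptation methods explored, transfer joint matching (TJM) performed best in the paper. As a simpler illustration of the shared idea, aligning source-task and target-task feature distributions before the SVM, here is a CORAL sketch:

```python
import numpy as np
from scipy.linalg import sqrtm
from sklearn.svm import SVC

def coral(Xs, Xt, eps=1e-5):
    """CORrelation ALignment: re-colour source features so their
    covariance matches the target task's (unlabelled) features."""
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])
    return np.real(Xs @ np.linalg.inv(sqrtm(Cs)) @ sqrtm(Ct))

# Usage: Xs, ys = labelled source-task EEG features; Xt = target-task features.
# clf = SVC(kernel='linear').fit(coral(Xs, Xt), ys)
```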

Journal ArticleDOI
TL;DR: A deep learning-based early fusion structure, which combines two signals before the fully-connected layer, called the fNIRS-guided attention network (FGANet), which significantly outperformed the EEG-standalone network and has 4.0% and 2.7% higher accuracy than the state-of-the-art algorithms in mental arithmetic and motor imagery tasks, respectively.
Abstract: Non-invasive brain-computer interfaces (BCIs) have been widely used for neural decoding, linking neural signals to control devices. Hybrid BCI systems using electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) have received significant attention for overcoming the limitations of EEG- and fNIRS-standalone BCI systems. However, most hybrid EEG-fNIRS BCI studies have focused on late fusion because of discrepancies in their temporal resolutions and recording locations. Despite the enhanced performance of hybrid BCIs, late fusion methods have difficulty extracting correlated features from both EEG and fNIRS signals. Therefore, in this study, we proposed a deep learning-based early fusion structure, which combines the two signals before the fully-connected layer, called the fNIRS-guided attention network (FGANet). First, 1D EEG and fNIRS signals were converted into 3D EEG and fNIRS tensors to spatially align EEG and fNIRS signals at the same time point. The proposed fNIRS-guided attention layer extracted a joint representation of the EEG and fNIRS tensors based on neurovascular coupling, in which the spatially important regions were identified from fNIRS signals, and detailed neural patterns were extracted from EEG signals. Finally, the final prediction was obtained by weighting the sum of the prediction scores of the EEG and fNIRS-guided attention features to alleviate performance degradation owing to the delayed fNIRS response. In the experiments, the FGANet significantly outperformed the EEG-standalone network. Furthermore, the FGANet achieved 4.0% and 2.7% higher accuracy than state-of-the-art algorithms in mental arithmetic and motor imagery tasks, respectively.
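
The core of the fusion is a spatial attention map derived from fNIRS and applied to the EEG features. A toy sketch of that mechanism follows; the shapes and layer sizes are illustrative, not the paper's:

```python
import torch.nn as nn

class FNIRSGuidedAttention(nn.Module):
    """Weight EEG features by a spatial map computed from fNIRS."""
    def __init__(self, fnirs_ch=16):
        super().__init__()
        self.att = nn.Sequential(nn.Conv2d(fnirs_ch, 1, kernel_size=1),
                                 nn.Sigmoid())

    def forward(self, eeg_feat, fnirs_feat):
        # eeg_feat: (batch, C, H, W); fnirs_feat: (batch, fnirs_ch, H, W),
        # both spatially aligned on the same H x W scalp grid.
        weights = self.att(fnirs_feat)           # (batch, 1, H, W) in (0, 1)
        return eeg_feat * weights
```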

Journal ArticleDOI
TL;DR: A simple and effective end-to-end adder network and supervised contrastive learning (AddNet-SCL) that uses addition instead of the massive multiplication in the convolution process to reduce the computational cost and has broad prospects in clinical practice.
Abstract: Deep learning (DL) methods have been widely used in the field of seizure prediction from electroencephalogram (EEG) in recent years. However, DL methods usually have numerous multiplication operations resulting in high computational complexity. In addition, most of the current approaches in this field focus on designing models with special architectures to learn representations, ignoring the use of intrinsic patterns in the data. In this study, we propose a simple and effective end-to-end adder network and supervised contrastive learning (AddNet-SCL). The method uses addition instead of the massive multiplication in the convolution process to reduce the computational cost. Besides, contrastive learning is employed to effectively use label information: points of the same class are clustered together in the projection space, and points of different classes are pushed apart at the same time. Moreover, the proposed model is trained by combining the supervised contrastive loss from the projection layer and the cross-entropy loss from the classification layer. Since the adder network uses the $\ell_1$-norm distance as the similarity measure between the input features and the filters, the gradient function of the network changes, so an adaptive learning rate strategy is employed to ensure the convergence of AddNet-SCL. Experimental results show that the proposed method achieves 94.9% sensitivity, an area under curve (AUC) of 94.2%, and a false positive rate (FPR) of 0.077/h on 19 patients in the CHB-MIT database, and 89.1% sensitivity, an AUC of 83.1%, and an FPR of 0.120/h on the Kaggle database. These competitive results show that this method has broad prospects in clinical practice.
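
The adder idea replaces patch-filter multiplication with a negative L1 distance. A minimal, unoptimised sketch of such a layer (memory-hungry, for illustration only):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Adder2d(nn.Module):
    """Conv-like layer whose similarity is -||patch - filter||_1."""
    def __init__(self, in_ch, out_ch, kernel_size):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(out_ch, in_ch, kernel_size, kernel_size))
        self.kernel_size = kernel_size

    def forward(self, x):                        # x: (batch, in_ch, H, W)
        patches = F.unfold(x, self.kernel_size)  # (batch, in*k*k, L)
        w = self.weight.view(self.weight.shape[0], -1)
        # Broadcast to (batch, out_ch, in*k*k, L) and sum |differences|.
        dist = (patches.unsqueeze(1) - w[None, :, :, None]).abs().sum(dim=2)
        n = int(dist.shape[-1] ** 0.5)           # assumes square feature map
        return -dist.view(x.shape[0], -1, n, n)
```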

Journal ArticleDOI
TL;DR: In this article , the effect of change in background on SSVEP and SSMVEP based brain computer interfaces (BCI) in a small-profile augmented reality (AR) headset was evaluated.
Abstract: This study evaluated the effect of change in background on steady state visually evoked potentials (SSVEP) and steady state motion visually evoked potentials (SSMVEP) based brain computer interfaces (BCI) in a small-profile augmented reality (AR) headset. A four-target SSVEP and SSMVEP BCI was implemented using the Cognixion AR headset prototype. An active (AB) and a non-active background (NB) were evaluated. The signal characteristics and classification performance of the two BCI paradigms were studied. Offline analysis was performed using canonical correlation analysis (CCA) and a complex-spectrum based convolutional neural network (C-CNN). Finally, the asynchronous pseudo-online performance of the SSMVEP BCI was evaluated. Signal analysis revealed that the SSMVEP stimulus was more robust to change in background than the SSVEP stimulus in AR. The decoding performance revealed that the C-CNN method outperformed CCA for both stimulus types and the NB background, in agreement with results in the literature. The average offline accuracies for W = 1 s of C-CNN were (NB vs. AB): SSVEP: 82% ± 15% vs. 60% ± 21% and SSMVEP: 71.4% ± 22% vs. 63.5% ± 18%. Additionally, for W = 2 s, the AR-SSMVEP BCI with the C-CNN method reached 83.3% ± 27% (NB) and 74.1% ± 22% (AB). The results suggest that with the C-CNN method, the AR-SSMVEP BCI is both robust to change in background conditions and provides high decoding accuracy compared to the AR-SSVEP BCI. This study presents novel results that highlight the robustness and practical application of SSMVEP BCIs developed with a low-cost AR headset.
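
For reference, the CCA baseline used in the offline analysis is the standard SSVEP decoder: correlate the multi-channel epoch with sine/cosine templates at each candidate frequency and pick the best. A sketch with two harmonics:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_ssvep_classify(eeg, freqs, fs=250):
    """eeg: (n_channels, n_samples); freqs: candidate stimulus frequencies."""
    t = np.arange(eeg.shape[1]) / fs
    scores = []
    for f in freqs:
        ref = np.vstack([np.sin(2 * np.pi * h * f * t) for h in (1, 2)] +
                        [np.cos(2 * np.pi * h * f * t) for h in (1, 2)])
        u, v = CCA(n_components=1).fit(eeg.T, ref.T).transform(eeg.T, ref.T)
        scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
    return int(np.argmax(scores))                # index of decoded target
```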

Journal ArticleDOI
TL;DR: A hybrid convolutional and temporal-convolutional neural network (CNN-TCN) is proposed to continuously estimate the Beck depression inventory score from raw EEG signals, outperforming both state-of-the-art deep networks and feature-based statistical regression.
Abstract: Depression score is traditionally determined by taking the Beck depression inventory (BDI) test, which is a qualitative questionnaire. Quantitative scoring of depression has also been achieved by analyzing and classifying pre-recorded electroencephalography (EEG) signals. Here, we go one step further and apply raw EEG signals to a proposed hybrid convolutional and temporal-convolutional neural network (CNN-TCN) to continuously estimate the BDI score. In this research, the EEG signals of 119 individuals are captured by 64 scalp electrodes through successive eyes-closed and eyes-open intervals. Moreover, all the subjects take the BDI test and their scores are determined. The proposed CNN-TCN provides a mean squared error (MSE) of 5.64±1.6 and a mean absolute error (MAE) of 1.73±0.27 for the eyes-open state, and an MSE of 9.53±2.94 and an MAE of 2.32±0.35 for the eyes-closed state, which significantly surpasses state-of-the-art deep network methods. In another approach, conventional EEG features are extracted from the EEG signals in successive frames and applied to the proposed CNN-TCN as well as to known statistical regression methods. This approach provides an MSE of 10.81±5.14 and an MAE of 2.41±0.59, which statistically outperforms the statistical regression methods. Moreover, the results with raw EEG are significantly better than those with EEG features.
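
The TCN half of such a hybrid is built from dilated causal convolutions. One illustrative block (sizes are not the paper's):

```python
import torch.nn as nn

class CausalDilatedBlock(nn.Module):
    """Dilated 1-D convolution, left-trimmed so no sample sees the future,
    with a residual connection."""
    def __init__(self, channels=32, kernel_size=3, dilation=2):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              dilation=dilation, padding=self.pad)
        self.act = nn.ReLU()

    def forward(self, x):                    # x: (batch, channels, time)
        h = self.conv(x)[:, :, :-self.pad]   # trim the right side: causality
        return self.act(h) + x
```

Stacking such blocks with growing dilation gives the long temporal receptive field needed to regress a score from continuous EEG.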

Journal ArticleDOI
TL;DR: A novel fully convolutional neural network architecture (SleepFCN) is introduced to classify sleep stages into five classes using single-channel electroencephalograms (EEGs) and outperforms state-of-the-art works in both classification correctness and learning speed.
Abstract: Sleep is a vital process of our daily life as we roughly spend one-third of our lives asleep. In order to evaluate sleep quality and potential sleep disorders, sleep stage classification is a gold standard method. In this paper, we introduce a novel fully convolutional neural network architecture (SleepFCN) to classify sleep stages into five classes using single-channel electroencephalograms (EEGs). The framework of SleepFCN includes two major parts for feature extraction and temporal sequence encoding, namely multi-scale feature extraction (MSFE) and residual dilated causal convolutions (ResDC), respectively. These are then followed by convolutional layers with kernels of size 1 instead of dense layers to build the fully convolutional neural network. Due to the imbalance in the distribution of sleep stages, we incorporate a weight corresponding to the number of samples of each class in our loss function. We evaluated the performance of SleepFCN using the Sleep-EDF and SHHS datasets. Our experimental results show that the proposed method outperforms state-of-the-art works in both classification correctness and learning speed.
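
One simple reading of the class-weighting remark is inverse-frequency weights in the loss. A sketch; the stage counts below are placeholders, not Sleep-EDF statistics:

```python
import torch
import torch.nn as nn

# Hypothetical per-stage sample counts: W, N1, N2, N3, REM.
counts = torch.tensor([8000.0, 2000.0, 14000.0, 4000.0, 5000.0])
weights = counts.sum() / (len(counts) * counts)  # rarer stage -> larger weight
loss_fn = nn.CrossEntropyLoss(weight=weights)
```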

Journal ArticleDOI
TL;DR: In this paper , a serious game rehabilitation system was proposed for the training of motor function and coordination of both arm and hand movement where the user performs corresponding ADLs movements to interact with the target in the serious game.
Abstract: Most stroke survivors have difficulties completing activities of daily living (ADLs) independently. However, few rehabilitation systems have focused on ADLs-related training for gross and fine motor function together. We propose an ADLs-based serious game rehabilitation system for the training of motor function and coordination of both arm and hand movement, where the user performs corresponding ADLs movements to interact with the target in the serious game. A multi-sensor fusion model based on electromyographic (EMG), force myographic (FMG), and inertial sensing was developed to estimate users’ natural upper limb movement. Eight healthy subjects and three stroke patients were recruited in an experiment to validate the system’s effectiveness. The performance of different sensor and classifier configurations on hand gesture classification against arm position variations was analyzed, and qualitative patient questionnaires were conducted. Results showed that elbow extension/flexion has a more significant negative influence on EMG-based, FMG-based, and EMG+FMG-based hand gesture recognition than shoulder abduction/adduction does. In addition, there was no significant difference in the negative influence of shoulder abduction/adduction and shoulder flexion/extension on hand gesture recognition. However, there was a significant interaction between sensor configurations and algorithm configurations in both offline and real-time recognition accuracy. The EMG+FMG-combined multi-position classifier model had the best performance against arm position change. In addition, all the stroke patients reported that their ADLs-related ability could be restored by using the system. These results demonstrate that the multi-sensor fusion model could estimate hand gestures and gross movement accurately, and the proposed training system has the potential to improve patients’ ability to perform ADLs.
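
At its simplest, the multi-sensor fusion model is feature-level: per-window features from each modality are concatenated and classified together. A minimal baseline sketch (LDA is an illustrative classifier choice, not necessarily the paper's):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fuse_and_classify(emg_feats, fmg_feats, imu_feats, labels):
    """Concatenate per-window EMG, FMG and inertial features, then train
    one gesture classifier across arm positions."""
    X = np.hstack([emg_feats, fmg_feats, imu_feats])
    return LinearDiscriminantAnalysis().fit(X, labels)
```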

Journal ArticleDOI
TL;DR: In this paper, the authors compared EEGNet, Shallow & Deep ConvNet, MB3D and ParaAtt on two large, publicly available databases with 42 and 62 human subjects.
Abstract: Motor imagery (MI) based brain-computer interface (BCI) is an important BCI paradigm which requires powerful classifiers. Recent development of deep learning technology has prompted considerable interest in using deep learning for classification and resulted in multiple models. Finding the best performing models among them would be beneficial for designing better BCI systems and classifiers going forward. However, it is difficult to directly compare performance of various models through the original publications, since the datasets used to test the models are different from each other, too small, or even not publicly available. In this work, we selected five MI-EEG deep classification models proposed recently: EEGNet, Shallow & Deep ConvNet, MB3D and ParaAtt, and tested them on two large, publicly available, databases with 42 and 62 human subjects. Our results show that the models performed similarly on one dataset while EEGNet performed the best on the second with a relatively small training cost using the parameters that we evaluated.

Journal ArticleDOI
TL;DR: Recovery of hand function after rehabilitation with the SSVEP-BCI-controlled soft robotic glove was better than with the robotic glove alone, with efficacy equivalent to previously reported MI-BCI robotic hand rehabilitation.
Abstract: Soft robotic gloves with brain-computer interface (BCI) control have been used for post-stroke hand function rehabilitation. Motor imagery (MI) based BCI with robot-aided devices has been demonstrated to be an effective neural rehabilitation tool for improving post-stroke hand function. However, an MI-BCI user must undergo lengthy training and usually suffers unsuccessful and unsatisfying results in the beginning. As an alternative non-invasive paradigm to MI-BCI, a steady-state visually evoked potential (SSVEP) based BCI was proposed for user intention detection to trigger the soft robotic glove for post-stroke hand function rehabilitation. Thirty post-stroke patients with impaired hand function were randomly and equally divided into three groups to receive conventional, robotic, and BCI-robotic therapy in this randomized controlled trial (RCT). Clinical assessments of the Fugl-Meyer Motor Assessment of Upper Limb (FMA-UL), Wolf Motor Function Test (WMFT) and Modified Ashworth Scale (MAS) were performed at pre-training, post-training and three-month follow-up. Compared with the other groups, the BCI-robotic group showed significant improvement after training in FMA full score (10.05 ± 8.03, p = 0.001), FMA shoulder/elbow (6.2 ± 5.94, p = 0.0004), FMA wrist/hand (4.3 ± 2.83, p = 0.007), and WMFT (5.1 ± 5.53, p = 0.037). The improvement of FMA was significantly correlated with BCI accuracy (r = 0.714, p = 0.032). Recovery of hand function after rehabilitation with the SSVEP-BCI-controlled soft robotic glove was better than with the robotic glove alone, with efficacy equivalent to previously reported MI-BCI robotic hand rehabilitation. This demonstrates the feasibility of the SSVEP-BCI-controlled soft robotic glove in post-stroke hand function rehabilitation.
