
Showing papers by "Kai Keng Ang published in 2022"


Journal ArticleDOI
TL;DR: A number of recent randomized controlled trials reported the efficacy of brain–computer interface (BCI) for upper-limb stroke rehabilitation compared with other therapies.
Abstract: Background. A number of recent randomized controlled trials reported the efficacy of brain–computer interface (BCI) for upper-limb stroke rehabilitation compared with other therapies. Despite the e...

16 citations


Journal ArticleDOI
TL;DR: The proposed r-KLwDSA algorithm was particularly successful in improving the BCI accuracy of the sessions that had initial session-specific accuracy below 60%, with an average improvement of around 10% in the accuracy, leading to more stroke patients having meaningful BCI rehabilitation.
Abstract: Current motor imagery-based brain-computer interface (BCI) systems require a long calibration time at the beginning of each session before they can be used with adequate levels of classification accuracy. In particular, this issue can be a significant burden for long-term BCI users. This article proposes a novel transfer learning algorithm, called r-KLwDSA, to reduce the BCI calibration time for long-term users. The proposed r-KLwDSA algorithm aligns the user's EEG data collected in previous sessions to the few EEG trials collected in the current session, using a novel linear alignment method. Thereafter, the aligned EEG trials from the previous sessions and the few EEG trials from the current session are fused through a weighting mechanism before they are used for calibrating the BCI model. To validate the proposed algorithm, a large dataset containing the EEG data from 11 stroke patients, each performing 18 BCI sessions, was used. The proposed framework demonstrated a significant improvement in classification accuracy of over 4% compared with the session-specific algorithm when there were as few as two trials per class available from the current session. The proposed algorithm was particularly successful in improving the BCI accuracy of sessions that had initial session-specific accuracy below 60%, with an average accuracy improvement of around 10%, leading to more stroke patients having meaningful BCI rehabilitation.

4 citations
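The abstract does not spell out r-KLwDSA's linear alignment, but the recipe it describes (align previous-session EEG to a few current-session trials, then fuse with a weighting mechanism) can be sketched with a covariance-based alignment: whiten old trials with their own session's reference covariance, re-colour them with the current session's, and up-weight the few new trials. All data shapes, the specific alignment, and the fusion weights below are illustrative assumptions, not the authors' exact method:

```python
import numpy as np

def ref_cov(trials):
    # Mean trial covariance of a session; trials shape (n_trials, channels, samples)
    return np.mean([t @ t.T / t.shape[1] for t in trials], axis=0)

def sqrtm_inv(C):
    # Inverse matrix square root via eigendecomposition (C symmetric PSD)
    w, V = np.linalg.eigh(C)
    return V @ np.diag(1.0 / np.sqrt(np.maximum(w, 1e-12))) @ V.T

def align_session(prev_trials, curr_trials):
    # Whiten previous-session trials with their own reference covariance,
    # then re-colour them with the current session's reference covariance.
    W = sqrtm_inv(ref_cov(prev_trials))
    w_c, V_c = np.linalg.eigh(ref_cov(curr_trials))
    R = V_c @ np.diag(np.sqrt(np.maximum(w_c, 0.0))) @ V_c.T
    return np.array([R @ W @ t for t in prev_trials])

rng = np.random.default_rng(0)
prev = rng.standard_normal((20, 8, 100)) * 3.0   # previous session, different scale
curr = rng.standard_normal((4, 8, 100))          # few current-session trials
aligned = align_session(prev, curr)
# Toy fusion: aligned old trials plus up-weighted (repeated) new trials
calib = np.concatenate([aligned, np.repeat(curr, 3, axis=0)])
```

After this alignment the old trials' reference covariance matches the current session's exactly, so a model calibrated on `calib` sees statistically consistent data.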


Journal ArticleDOI
TL;DR: The results support, for the first time, the use of a metric learning based feature extractor to learn representations from non-stationary EEG signals for BCI-assisted post-stroke rehabilitation.
Abstract: Although brain-computer interface (BCI) shows promising prospects to help post-stroke patients recover their motor function, its decoding accuracy is still highly dependent on feature extraction methods. Most current feature extractors in BCI are classification-based methods, yet very few works in the literature use metric learning based methods to learn representations for BCI. To address this gap, we propose a deep metric learning based method, the Weighted Convolutional Siamese Network (WCSN), to learn representations from electroencephalogram (EEG) signals. This approach can enhance the decoding accuracy by learning a low-dimensional embedding to extract distance-based representations from pair-wise EEG data. To enhance training efficiency and algorithm performance, a temporal-spectral distance weighted sampling method is proposed to select more informative input samples. In addition, an adaptive training strategy is adopted to address the session-to-session non-stationarity by progressively updating the subject-specific model. The proposed method is applied on both upper-limb and lower-limb neurorehabilitation datasets acquired from 33 stroke patients, with a total of 358 sessions. Results indicate that, using k-Nearest Neighbor as the classification algorithm, the proposed method yielded 72.8% and 66.0% accuracies for the two datasets respectively, significantly better than other state-of-the-art methods ($p < 0.05$). Without loss of generality, we also evaluated the proposed method on two publicly available datasets acquired from healthy subjects, wherein the proposed algorithm demonstrated superior performance in most cases as well. Our results support, for the first time, the use of a metric learning based feature extractor to learn representations from non-stationary EEG signals for BCI-assisted post-stroke rehabilitation.

3 citations
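WCSN itself is a convolutional network trained on EEG pairs, which the abstract only outlines. The core metric-learning idea (a contrastive pair loss that pulls same-class embeddings together and pushes different-class ones apart, followed by k-Nearest Neighbor classification in the embedding space) can be sketched on toy embeddings. The embeddings, dimensions, and margin below are illustrative assumptions, not the paper's network:

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, same_class, margin=1.0):
    # Pairwise distance-based loss used to train Siamese embeddings:
    # pull same-class pairs together, push different-class pairs beyond a margin.
    d = np.linalg.norm(emb_a - emb_b, axis=1)
    pos = same_class * d**2
    neg = (1 - same_class) * np.maximum(margin - d, 0.0)**2
    return 0.5 * np.mean(pos + neg), d

def knn_predict(train_emb, train_y, query_emb, k=3):
    # k-Nearest Neighbor classification in the learned embedding space
    dists = np.linalg.norm(train_emb[None, :, :] - query_emb[:, None, :], axis=2)
    idx = np.argsort(dists, axis=1)[:, :k]
    return np.array([np.bincount(v).argmax() for v in train_y[idx]])

rng = np.random.default_rng(1)
# Toy embeddings: two well-separated classes in a 4-D embedding space
train = np.vstack([rng.normal(0, 0.1, (10, 4)), rng.normal(2, 0.1, (10, 4))])
labels = np.array([0] * 10 + [1] * 10)
query = np.vstack([rng.normal(0, 0.1, (3, 4)), rng.normal(2, 0.1, (3, 4))])
pred = knn_predict(train, labels, query)
loss, _ = contrastive_loss(train[:5], train[5:10], np.ones(5))  # same-class pairs
```

In a trained embedding, same-class pairs yield a small loss, and the kNN classifier recovers the class labels, which is the behaviour the sketch demonstrates.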


Journal ArticleDOI
TL;DR: Li et al. propose an EEG-Video Emotion-based Summarization (EVES) model based on a multimodal deep reinforcement learning (DRL) architecture that leverages neural signals to learn visual interestingness and produce quantitatively and qualitatively better video summaries.
Abstract: Video summarization is the process of selecting a subset of informative keyframes to expedite storytelling with limited loss of information. In this article, we propose an EEG-Video Emotion-based Summarization (EVES) model based on a multimodal deep reinforcement learning (DRL) architecture that leverages neural signals to learn visual interestingness to produce quantitatively and qualitatively better video summaries. As such, EVES does not learn from the expensive human annotations but the multimodal signals. Furthermore, to ensure the temporal alignment and minimize the modality gap between the visual and EEG modalities, we introduce a Time Synchronization Module (TSM) that uses an attention mechanism to transform the EEG representations onto the visual representation space. We evaluate the performance of EVES on the TVSum and SumMe datasets. Based on the rank order statistics benchmarks, the experimental results show that EVES outperforms the unsupervised models and narrows the performance gap with supervised models. Furthermore, the human evaluation scores show that EVES receives a higher rating than the state-of-the-art DRL model DR-DSN by 11.4% on the coherency of the content and 7.4% on the emotion-evoking content. Thus, our work demonstrates the potential of EVES in selecting interesting content that is both coherent and emotion-evoking.

1 citation
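The Time Synchronization Module is described only as an attention mechanism that transforms EEG representations onto the visual representation space. A minimal sketch of that idea is cross-modal scaled dot-product attention, where visual frames query the EEG sequence, producing one EEG-derived feature per frame. The projection matrices, dimensions, and random features below are illustrative assumptions, not the published TSM:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def time_sync_attention(visual, eeg, Wq, Wk, Wv):
    # Cross-modal attention: visual frames (queries) attend over the EEG
    # sequence (keys/values), yielding EEG features aligned to the video timeline.
    Q = visual @ Wq                              # (n_frames, d)
    K = eeg @ Wk                                 # (n_eeg, d)
    V = eeg @ Wv                                 # (n_eeg, d)
    attn = softmax(Q @ K.T / np.sqrt(Q.shape[1]))  # rows sum to 1
    return attn @ V                              # (n_frames, d)

rng = np.random.default_rng(2)
n_frames, n_eeg, d_v, d_e, d = 6, 50, 16, 8, 12
visual = rng.standard_normal((n_frames, d_v))    # per-frame visual features
eeg = rng.standard_normal((n_eeg, d_e))          # EEG sequence representations
aligned = time_sync_attention(visual, eeg,
                              rng.standard_normal((d_v, d)),
                              rng.standard_normal((d_e, d)),
                              rng.standard_normal((d_e, d)))
```

The output has one row per video frame, so downstream reward or summarization modules can treat the two modalities as temporally aligned.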


Proceedings ArticleDOI
01 Jul 2022
TL;DR: For most channels, removal did not significantly affect decoder performance; however, removing any of a subset of channels significantly reduced decoder accuracy, suggesting that information is not uniformly distributed among the recording channels.
Abstract: Implanted microelectrode arrays can directly pick up electrical signals from the primary motor cortex (M1) during movement, and brain-machine interfaces (BMIs) can decode these signals to predict the directions of contemporaneous movements. However, it is not well known how much each individual input is responsible for the overall performance of a BMI decoder. In this paper, we seek to quantify how much each channel contributes to an artificial neural network (ANN)-based decoder, by measuring how much the removal of each individual channel degrades the accuracy of the output. If information on movement direction were equally distributed among channels, then the removal of one would have a minimal effect on decoder accuracy. On the other hand, if that information were distributed sparsely, then the removal of specific information-rich channels would significantly lower decoder accuracy. We found that for most channels, their removal did not significantly affect decoder performance. However, for a subset of channels (16 out of 61), removing them significantly reduced the decoder accuracy. This suggests that information is not uniformly distributed among the recording channels. We propose examining these channels further to optimize BMIs more effectively, as well as to understand how M1 functions at the neuronal level.
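The ablation procedure the abstract describes (remove one channel at a time and measure the accuracy drop) is straightforward to sketch. The toy linear decoder and synthetic data below, with the signal deliberately concentrated in one channel, are illustrative assumptions standing in for the paper's ANN decoder and recorded M1 data:

```python
import numpy as np

def decode_accuracy(X, y, weights):
    # Toy linear decoder: sign of a weighted channel sum predicts one of two directions.
    pred = (X @ weights > 0).astype(int)
    return (pred == y).mean()

def channel_importance(X, y, weights):
    # Ablate one channel at a time (zero it out) and record the accuracy drop.
    base = decode_accuracy(X, y, weights)
    drops = []
    for ch in range(X.shape[1]):
        Xa = X.copy()
        Xa[:, ch] = 0.0
        drops.append(base - decode_accuracy(Xa, y, weights))
    return base, np.array(drops)

rng = np.random.default_rng(3)
n_trials, n_ch = 200, 10
y = rng.integers(0, 2, n_trials)
X = rng.standard_normal((n_trials, n_ch)) * 0.1
X[:, 0] += (2 * y - 1) * 2.0          # channel 0 carries the movement signal
weights = np.zeros(n_ch)
weights[0] = 1.0                       # decoder relies only on channel 0
base, drops = channel_importance(X, y, weights)
```

With information concentrated sparsely, ablating the information-rich channel produces a large accuracy drop while the others barely matter, which is exactly the contrast the paper tests for.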

Proceedings ArticleDOI
04 Dec 2022
TL;DR: The authors propose an online adaptive CNN (aCNN) to address the non-stationarity in multi-session EEG by progressively updating the subject-specific model, evaluated on two neurorehabilitation datasets with a large population of post-stroke patients (33 patients with a total of 358 EEG sessions).
Abstract: The convolutional neural network (CNN) automatically learns EEG representations in a higher-dimensional, nonlinear space via backpropagation and outputs predictions in an end-to-end manner. Owing to these advantages, CNN has been used to decode electroencephalogram (EEG) and drive brain computer interface (BCI). However, its applications in BCI-assisted post-stroke neurorehabilitation remain limited because it is unable to address the inherent session-to-session non-stationarity in the EEG between the initial calibration session and subsequent online sessions. In this paper, we present a simple but effective online adaptive CNN (aCNN) to address the non-stationarity in multi-session EEG by progressively updating the subject-specific model. The performance of the proposed aCNN is evaluated on two neurorehabilitation datasets with a large population of post-stroke patients (33 patients with a total of 358 EEG sessions). Results indicate that our proposed aCNN reaches at least as good a performance as the widely used online adaptive Filter Bank Common Spatial Patterns (aFBCSP) algorithm, with significantly higher accuracies than the DeepConv and offline FBCSP algorithms. Our results support, for the first time, the use of a CNN-based adaptive learning method to decode non-stationary EEG signals for BCI-assisted post-stroke rehabilitation.
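The adaptive scheme the abstract describes (progressively update the subject-specific model across sessions) can be illustrated without a CNN. The sketch below uses a nearest-class-mean classifier with an exponential forgetting factor as a stand-in for the aCNN's weight updates, and synthetic data with deliberate session-to-session drift; the classifier, drift model, and forgetting factor are all illustrative assumptions:

```python
import numpy as np

class AdaptiveMeanClassifier:
    # Minimal stand-in for an adaptive decoder: class means are updated after
    # every session with an exponential forgetting factor alpha.
    def __init__(self, n_features, alpha=0.3):
        self.means = np.zeros((2, n_features))
        self.alpha = alpha

    def update(self, X, y):
        for c in (0, 1):
            if np.any(y == c):
                self.means[c] = ((1 - self.alpha) * self.means[c]
                                 + self.alpha * X[y == c].mean(axis=0))

    def predict(self, X):
        # Assign each trial to the nearest class mean
        d = np.linalg.norm(X[:, None, :] - self.means[None, :, :], axis=2)
        return d.argmin(axis=1)

rng = np.random.default_rng(4)
clf = AdaptiveMeanClassifier(n_features=4)
accs = []
for session in range(5):
    drift = session * 0.2                       # session-to-session non-stationarity
    y = rng.integers(0, 2, 40)
    X = rng.normal(0, 0.2, (40, 4)) + (2 * y - 1)[:, None] + drift
    accs.append((clf.predict(X) == y).mean())   # decode the session online...
    clf.update(X, y)                            # ...then adapt on its data
```

Because the model is refreshed after each session, it tracks the drifting class statistics, and later sessions are decoded far more accurately than a fixed initial model would manage.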

Journal ArticleDOI
TL;DR: The authors analyze a large EEG dataset from 136 stroke patients who performed motor imagery of their stroke-impaired hand; BCI features were extracted from channels covering either the ipsilesional, contralesional, or bilateral hemispheres.
Abstract: Brain-computer interfaces (BCIs) have recently been shown to be clinically effective as a novel method of stroke rehabilitation. In many BCI-based studies, the activation of the ipsilesional hemisphere was considered a key factor required for motor recovery after stroke. However, emerging evidence suggests that the contralesional hemisphere also plays a role in motor function rehabilitation. The objective of this study is to investigate the effectiveness of the BCI in detecting motor imagery of the affected hand from the contralesional hemisphere. We analyzed a large EEG dataset from 136 stroke patients who performed motor imagery of their stroke-impaired hand. BCI features were extracted from channels covering either the ipsilesional, contralesional, or bilateral hemisphere, and the offline BCI accuracy was computed using 10 × 10-fold cross-validations. Our results showed that most stroke patients can operate the BCI using either their contralesional or ipsilesional hemisphere. Those with an ipsilesional BCI accuracy of less than 60% had significantly higher motor impairments than those with an ipsilesional BCI accuracy above 80%. Interestingly, those with an ipsilesional BCI accuracy of less than 60% achieved a significantly higher contralesional BCI accuracy, whereas those with an ipsilesional BCI accuracy above 80% had significantly poorer contralesional BCI accuracy. This study suggests that contralesional BCI may be a useful approach for those with a high motor impairment who cannot accurately generate signals from the ipsilesional hemisphere to effectively operate BCI.
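The study's comparison (compute offline BCI accuracy from hemisphere-restricted channel subsets using repeated 10-fold cross-validation) can be sketched end to end. The nearest-class-mean decoder below stands in for the paper's BCI feature pipeline, and the synthetic data, in which only the first four "ipsilesional" channels carry class information, is an illustrative assumption:

```python
import numpy as np

def cv_accuracy(X, y, channels, n_folds=10, n_repeats=10, seed=0):
    # Repeated k-fold cross-validation of a nearest-class-mean decoder
    # restricted to a given channel subset.
    rng = np.random.default_rng(seed)
    Xs = X[:, channels]
    accs = []
    for _ in range(n_repeats):
        order = rng.permutation(len(y))
        for f in np.array_split(order, n_folds):
            mask = np.ones(len(y), bool)
            mask[f] = False                       # hold out this fold
            means = np.array([Xs[mask & (y == c)].mean(axis=0) for c in (0, 1)])
            d = np.linalg.norm(Xs[f][:, None] - means[None], axis=2)
            accs.append((d.argmin(axis=1) == y[f]).mean())
    return float(np.mean(accs))

rng = np.random.default_rng(5)
n = 120
y = rng.integers(0, 2, n)                         # imagined-movement class labels
X = rng.normal(0, 1.0, (n, 8))                    # 8 channels of trial features
X[:, :4] += (2 * y - 1)[:, None] * 1.5            # only channels 0-3 carry signal
ipsi = cv_accuracy(X, y, channels=range(0, 4))    # informative subset
contra = cv_accuracy(X, y, channels=range(4, 8))  # uninformative subset
```

Here the informative channel subset yields high cross-validated accuracy while the uninformative one stays near chance, mirroring how the study contrasts ipsilesional and contralesional decoding for each patient.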