
Showing papers on "Motor imagery published in 2019"


Journal ArticleDOI
TL;DR: This work aims to unify the neuroscientific literature relevant to the recovery process and rehabilitation practice in order to provide a synthesis of the principles that constitute an effective neurorehabilitation approach.
Abstract: What are the principles underlying effective neurorehabilitation? The aim of neurorehabilitation is to exploit interventions based on human and animal studies about learning and adaptation, as well as to show that the activation of experience-dependent neuronal plasticity augments functional recovery after stroke. Rather than teaching compensatory strategies that do not reduce impairment but allow the patient to return home as soon as possible, targeting functional recovery may be more sustainable, as it ensures a long-term reduction in impairment and an improvement in quality of life. At the same time, neurorehabilitation permits the scientific community to collect valuable data, which allow inferences about the principles of brain organization. Hence, neuroscience sheds light on the mechanisms of learning new functions or relearning lost ones. However, current rehabilitation methods lack the exact operationalization of evidence gained from the skill learning literature, leading to an urgent need to bridge motor learning theory and present clinical work in order to identify a set of ingredients and practical applications that could guide future interventions. This work aims to unify the neuroscientific literature relevant to the recovery process and rehabilitation practice in order to provide a synthesis of the principles that constitute an effective neurorehabilitation approach. Previous attempts to achieve this goal either focused on a subset of principles or did not link clinical application to the principles of motor learning and recovery.
We identified 15 principles of motor learning based on existing literature: massed practice, spaced practice, dosage, task-specific practice, goal-oriented practice, variable practice, increasing difficulty, multisensory stimulation, rhythmic cueing, explicit feedback/knowledge of results, implicit feedback/knowledge of performance, modulate effector selection, action observation/embodied practice, motor imagery, and social interaction. We comment on trials that successfully implemented these principles and report evidence from experiments with healthy individuals as well as clinical work.

167 citations


Journal ArticleDOI
08 Jan 2019-Sensors
TL;DR: Better classification performance was achieved with deep learning models compared to state-of-the art machine learning techniques, which could chart a route ahead for developing new robust techniques for EEG signal decoding.
Abstract: Non-invasive, electroencephalography (EEG)-based brain-computer interfaces (BCIs) on motor imagery movements translate the subject’s motor intention into control signals by classifying the EEG patterns caused by different imagination tasks, e.g., hand movements. This type of BCI has been widely studied and used as an alternative mode of communication and environmental control for disabled patients, such as those suffering from a brainstem stroke or a spinal cord injury (SCI). Notwithstanding the success of traditional machine learning methods in classifying EEG signals, these methods still rely on hand-crafted features. The extraction of such features is a difficult task due to the high non-stationarity of EEG signals, which is a major cause of the stagnating progress in classification performance. Remarkable advances in deep learning methods allow end-to-end learning without any feature engineering, which could benefit BCI motor imagery applications. We developed three deep learning models: (1) a long short-term memory network (LSTM); (2) a spectrogram-based convolutional neural network (CNN); and (3) a recurrent convolutional neural network (RCNN), for decoding motor imagery movements directly from raw EEG signals without any manual feature engineering. Results were evaluated on our own publicly available EEG data collected from 20 subjects and on an existing dataset known as the 2b EEG dataset from “BCI Competition IV”. Overall, better classification performance was achieved with deep learning models compared to state-of-the-art machine learning techniques, which could chart a route ahead for developing new robust techniques for EEG signal decoding. We underpin this point by demonstrating the successful real-time control of a robotic arm using our CNN-based BCI.
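End-to-end decoding of the kind described starts from raw trials rather than hand-crafted features. A minimal sketch of the epoching and per-trial standardization step that typically precedes such models (the sampling rate and cue times below are hypothetical, not from the paper):

```python
import numpy as np

def epoch_raw_eeg(raw, events, fs, tmin, tmax):
    """Slice a continuous recording (channels x samples) into fixed-length
    trials around event onsets (given in samples). Returns an array of
    shape (n_trials, n_channels, n_samples), suitable as direct input to
    an end-to-end model (LSTM/CNN) without manual feature extraction."""
    lo, hi = int(tmin * fs), int(tmax * fs)
    trials = np.stack([raw[:, e + lo:e + hi] for e in events])
    # z-score each channel within each trial instead of engineering features
    mu = trials.mean(axis=-1, keepdims=True)
    sd = trials.std(axis=-1, keepdims=True) + 1e-12
    return (trials - mu) / sd

fs = 250                                     # hypothetical sampling rate (Hz)
rng = np.random.default_rng(0)
raw = rng.standard_normal((3, fs * 60))      # 3 channels, 60 s of synthetic EEG
events = [fs * 5, fs * 20, fs * 40]          # hypothetical cue onsets (samples)
X = epoch_raw_eeg(raw, events, fs, tmin=0.5, tmax=2.5)
```

Each trial here spans 0.5–2.5 s after its cue, giving a (3, 3, 500) trials-by-channels-by-samples array.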

123 citations


Journal ArticleDOI
01 Jan 2019
TL;DR: This paper provides a comprehensive review of dominant feature extraction methods and classification algorithms in brain-computer interface for motor imagery tasks.
Abstract: A Motor Imagery Brain-Computer Interface (MI-BCI) provides a non-muscular channel for communication to those who are suffering from neuronal disorders. Designing an accurate and reliable MI-BCI system requires the extraction of informative and discriminative features. Common Spatial Pattern (CSP) has proven potent and is widely used in BCI for extracting features in motor imagery tasks. Classifiers translate these features into device commands. Many classification algorithms have been devised; among those, Support Vector Machine (SVM) and Linear Discriminant Analysis (LDA) have been widely used. In recent studies, researchers have used deep neural networks for the classification of motor imagery tasks. This paper provides a comprehensive review of dominant feature extraction methods and classification algorithms in brain-computer interfaces for motor imagery tasks. The authors discuss existing challenges in the domain of motor imagery brain-computer interfaces and suggest possible research directions.
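The CSP step this review surveys reduces to a generalized eigendecomposition of class-averaged covariance matrices, followed by log-variance features. A minimal sketch on synthetic two-class data (the channel-variance contrast is contrived purely for illustration):

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=1):
    """Common Spatial Patterns: spatial filters maximizing the variance
    ratio between two classes, via the generalized eigenproblem
    Ca w = lambda (Ca + Cb) w. trials_*: (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        return np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    vals, vecs = eigh(Ca, Ca + Cb)               # eigenvalues ascending
    pick = np.r_[np.arange(n_pairs), np.arange(-n_pairs, 0)]
    return vecs[:, pick].T                       # (2*n_pairs, n_channels)

def log_var_features(W, trial):
    """The usual CSP feature: normalized log-variance of filtered signals."""
    v = (W @ trial).var(axis=1)
    return np.log(v / v.sum())

rng = np.random.default_rng(0)
a = rng.standard_normal((30, 4, 200)); a[:, 0] *= 5   # class A: channel 0 strong
b = rng.standard_normal((30, 4, 200)); b[:, 1] *= 5   # class B: channel 1 strong
W = csp_filters(a, b)
fa, fb = log_var_features(W, a[0]), log_var_features(W, b[0])
```

The extreme eigenvectors yield filters whose output variance is large for one class and small for the other, which is what makes the subsequent LDA/SVM step easy.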

123 citations


Journal ArticleDOI
Yang Li1, Xian-Rui Zhang1, Bin Zhang1, Mengying Lei1, Weigang Cui1, Yuzhu Guo1 
08 May 2019
TL;DR: An end-to-end EEG decoding framework, which employs raw multi-channel EEG as inputs is proposed, to boost decoding accuracy by the channel-projection mixed-scale convolutional neural network (CP-MixedNet) aided by amplitude-perturbation data augmentation.
Abstract: Motor imagery electroencephalography (EEG) decoding is an essential part of brain-computer interfaces (BCIs), which help motor-disabled patients to communicate with the outside world via external devices. Deep learning algorithms that use decomposed EEG spectra as inputs may omit important spatial dependencies and temporal information at different scales, resulting in poor decoding performance. In this paper, we propose an end-to-end EEG decoding framework, which employs raw multi-channel EEG as inputs, to boost decoding accuracy through a channel-projection mixed-scale convolutional neural network (CP-MixedNet) aided by amplitude-perturbation data augmentation. Specifically, the first block in CP-MixedNet is designed to learn primary spatial and temporal representations from EEG signals. The mixed-scale convolutional block is then used to capture mixed-scale temporal information, which effectively reduces the number of training parameters when expanding the receptive fields of the network. Finally, based on the features extracted in the previous blocks, the classification block is constructed to classify EEG tasks. Experiments were conducted on two public EEG datasets (BCI competition IV 2a and the High Gamma dataset) to validate the effectiveness of the proposed approach compared to state-of-the-art methods. The competitive results demonstrate that our proposed method is a promising solution for improving the decoding performance of motor imagery BCIs.

122 citations


Journal ArticleDOI
TL;DR: It is shown that it is also possible to identify particular features of MI in untrained subjects, and the application of artificial neural networks allows us to classify MI in raising right and left arms with average accuracy of 70% for both KI and VI using appropriate filtration of input signals.
Abstract: The understanding of neurophysiological mechanisms responsible for motor imagery (MI) is essential for the development of brain-computer interfaces (BCI) and bioprosthetics. Our magnetoencephalographic (MEG) experiments with voluntary participants confirm the existence of two types of motor imagery, kinesthetic imagery (KI) and visual imagery (VI), distinguished by activation and inhibition of different brain areas in motor-related α- and β-frequency bands. Although the brain activity corresponding to MI is usually observed in specially trained subjects or athletes, we show that it is also possible to identify particular features of MI in untrained subjects. Similar to real movement, KI implies muscular sensation when performing an imaginary moving action, which leads to event-related desynchronization (ERD) of motor-associated brain rhythms. By contrast, VI refers to visualization of the corresponding action, which results in event-related synchronization (ERS) of α- and β-wave activity. A notable difference between the KI and VI groups occurs in the frontal brain area. In particular, the analysis of evoked responses shows that in all KI subjects the activity in the frontal cortex is suppressed during MI, while in the VI subjects the frontal cortex is always active. The accuracy in classification of left-arm and right-arm MI using artificial intelligence is similar for KI and VI. Since untrained subjects usually demonstrate the VI mode, increasing classification accuracy for VI is in demand for BCIs. The application of artificial neural networks allows us to classify MI of raising the right and left arms with an average accuracy of 70% for both KI and VI using appropriate filtering of input signals. The same average accuracy is achieved by optimizing MEG channels and reducing their number to only 13.
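The ERD/ERS distinction between KI and VI is conventionally quantified as the relative band-power change from a pre-cue baseline (negative = desynchronization, positive = synchronization). A minimal sketch; the band edges, windows, and synthetic signal are illustrative, not the study's actual parameters:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def erd_percent(trial, fs, baseline, window, band=(8, 13)):
    """Relative band-power change from a pre-cue baseline: negative values
    indicate ERD, positive values ERS. trial: 1-D signal; baseline and
    window: (start_s, end_s) tuples in seconds."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    p = filtfilt(b, a, trial) ** 2               # instantaneous band power
    def mean_power(seg):
        s, e = int(seg[0] * fs), int(seg[1] * fs)
        return p[s:e].mean()
    p_ref, p_act = mean_power(baseline), mean_power(window)
    return 100.0 * (p_act - p_ref) / p_ref

fs = 250
t = np.arange(0, 4, 1 / fs)
amp = np.where(t < 2, 1.0, 0.5)                  # alpha amplitude halves at 2 s
sig = amp * np.sin(2 * np.pi * 10 * t) \
    + 0.05 * np.random.default_rng(1).standard_normal(t.size)
erd = erd_percent(sig, fs, baseline=(0.5, 1.5), window=(2.5, 3.5))
```

Halving the alpha amplitude quarters the band power, so the sketch reports roughly -75% (i.e., strong ERD).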

97 citations


Journal ArticleDOI
TL;DR: Examining the efficacy of an EEG-based BCI-VR system using a MI paradigm for post-stroke upper limb rehabilitation on functional assessments, and related changes in MI ability and brain imaging found important improvements in upper extremity scores (Fugl-Meyer) and increases in brain activation measured by fMRI that suggest neuroplastic changes in brain motor networks.
Abstract: To maximize brain plasticity after stroke, a plethora of rehabilitation strategies have been explored. These include the use of intensive motor training, motor imagery (MI), and action observation (AO). Growing evidence shows a positive impact of virtual reality (VR) techniques on recovery following stroke. However, most VR tools are designed to exploit active movement, and hence patients with a low level of motor control cannot fully benefit from them. Consequently, the idea of directly training the central nervous system has been promoted by utilizing MI with electroencephalography (EEG)-based brain-computer interfaces (BCIs). To date, detailed information on which VR strategies lead to successful functional recovery is still largely missing, and very little is known about how to optimally integrate EEG-based BCIs and VR paradigms for stroke rehabilitation. The purpose of this study was to examine the efficacy of an EEG-based BCI-VR system using an MI paradigm for post-stroke upper limb rehabilitation on functional assessments, and related changes in MI ability and brain imaging. To achieve this, a 60-year-old male chronic stroke patient was recruited. The patient underwent a 3-week intervention in a clinical environment, resulting in 10 BCI-VR training sessions. The patient was assessed before and after the intervention, as well as at a one-month follow-up, in terms of clinical scales and brain imaging using functional MRI (fMRI). Consistent with prior research, we found important improvements in upper extremity scores (Fugl-Meyer) and identified increases in brain activation measured by fMRI that suggest neuroplastic changes in brain motor networks. This study expands on the current body of evidence, as more data are needed on the effect of this type of intervention, not only on functional improvement but also on plasticity as assessed through brain imaging.

87 citations


Journal ArticleDOI
27 Jun 2019-Sensors
TL;DR: The proposed CapsNet-based framework classifies the two-class motor imagery, namely right-hand and left-hand movements, and outperformed state-of-the-art CNN-based methods and various conventional machine learning approaches.
Abstract: Various convolutional neural network (CNN)-based approaches have been recently proposed to improve the performance of motor imagery-based brain-computer interfaces (BCIs). However, the classification accuracy of CNNs is compromised when target data are distorted. Specifically for motor imagery electroencephalogram (EEG), the measured signals, even from the same person, are not consistent and can be significantly distorted. To overcome these limitations, we propose to apply a capsule network (CapsNet) for learning various properties of EEG signals, thereby achieving better and more robust performance than previous CNN methods. The proposed CapsNet-based framework classifies two-class motor imagery, namely right-hand and left-hand movements. The motor imagery EEG signals are first transformed into 2D images using the short-time Fourier transform (STFT) algorithm and then used for training and testing the capsule network. The performance of the proposed framework was evaluated on the BCI competition IV 2b dataset. The proposed framework outperformed state-of-the-art CNN-based methods and various conventional machine learning approaches. The experimental results demonstrate the feasibility of the proposed approach for classification of motor imagery EEG signals.
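The STFT preprocessing step can be sketched as follows; the segment length, frequency cap, and normalization below are assumptions, not the paper's exact settings:

```python
import numpy as np
from scipy.signal import stft

def eeg_to_stft_image(trial, fs, nperseg=64, fmax=40.0):
    """Turn a single-channel motor imagery trial into a 2-D time-frequency
    magnitude image (rows = frequencies up to fmax, columns = time bins),
    usable as CNN/CapsNet input."""
    f, t, Z = stft(trial, fs=fs, nperseg=nperseg)
    img = np.abs(Z[f <= fmax])
    return img / (img.max() + 1e-12)     # normalize magnitudes to [0, 1]

fs = 250
sig = np.sin(2 * np.pi * 10 * np.arange(fs * 2) / fs)  # 2 s of 10 Hz activity
img = eeg_to_stft_image(sig, fs)
```

For a pure 10 Hz oscillation, the brightest rows of the image sit around the 10 Hz frequency bins, which is exactly the structure a 2-D network can exploit.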

86 citations


Journal ArticleDOI
TL;DR: A novel multimodal human-machine interface system (mHMI) is developed using combinations of electrooculography, electroencephalography, and electromyogram to generate numerous control instructions for real-time control of soft robot naturally.
Abstract: Brain-computer interface (BCI) technology shows potential for application to motor rehabilitation therapies that use neural plasticity to restore motor function and improve quality of life of stroke survivors. However, it is often difficult for BCI systems to provide the variety of control commands necessary for natural multi-task real-time control of a soft robot. In this study, a novel multimodal human-machine interface system (mHMI) is developed using combinations of electrooculography (EOG), electroencephalography (EEG), and electromyography (EMG) to generate numerous control instructions. Moreover, we also explore subject acceptance of an affordable wearable soft robot for performing basic hand actions during robot-assisted movement. Six healthy subjects separately performed left- and right-hand motor imagery, looking-left and looking-right eye movements, and different hand gestures in different modes to control a soft robot in a variety of actions. The results indicate that the number of mHMI control instructions is significantly greater than achievable with any individual mode. Furthermore, the mHMI can achieve an average classification accuracy of 93.83% with an average information transfer rate of 47.41 bits/min, which is equivalent to a control speed of 17 actions per minute. The study is expected to construct a more user-friendly mHMI for real-time control of a soft robot to help healthy or disabled persons perform basic hand movements in a friendly and convenient way.
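The 47.41 bits/min figure is an information transfer rate in the sense of Wolpaw's formula. Since the class count and trial timing behind it are not restated here, the sketch below uses hypothetical numbers rather than attempting to reproduce the paper's value:

```python
import math

def wolpaw_itr(n_classes, accuracy, selections_per_min):
    """Wolpaw information transfer rate (bits/min) for an N-class BCI."""
    n, p = n_classes, accuracy
    if p >= 1.0:
        bits = math.log2(n)              # perfect accuracy: log2(N) bits/selection
    elif p <= 1.0 / n:
        return 0.0                       # at or below chance: no information
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * selections_per_min

# hypothetical 4-class BCI at 85% accuracy, 12 selections per minute
itr = wolpaw_itr(4, 0.85, 12)
```

The formula rewards both accuracy and speed, which is why multimodal systems that add commands without sacrificing accuracy report higher ITRs.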

75 citations


Journal ArticleDOI
TL;DR: This paper proposes a brain–computer interface (BCI)-based teleoperation strategy for a dual-arm robot carrying a common object by multifingered hands based on motor imagery of the human brain, which utilizes common spatial pattern method to analyze the filtered electroencephalograph signals.
Abstract: This paper proposes a brain–computer interface (BCI)-based teleoperation strategy for a dual-arm robot carrying a common object with multifingered hands. The BCI is based on motor imagery of the human brain and utilizes the common spatial pattern method to analyze the filtered electroencephalograph signals. Human intentions can be recognized and classified into corresponding reference commands in task space for the robot according to phenomena of event-related synchronization/desynchronization, such that object manipulation tasks guided by the human user’s mind can be achieved. Subsequently, a concise dynamics model consisting of the dynamics of the robotic arms and the geometrical constraints between the end-effectors and the object is formulated for the coordinated dual arm. To achieve optimized motion in the task space, a redundancy resolution at the velocity level has been implemented through neural-dynamics optimization. Extensive experiments were conducted with a number of subjects, and the results demonstrate the effectiveness of the proposed control strategy.

70 citations


Journal ArticleDOI
TL;DR: A novel preprocessing method is proposed to automatically reconstruct the EEG signal by selecting the intrinsic mode functions (IMFs) based on a median frequency measure, and the reconstructed EEG signal has high SNR and contains only information correlated to a specific motor imagery task.
Abstract: Electroencephalogram (EEG) signals tend to have poor time-frequency localization when analysis techniques involve a fixed set of basis functions, such as in the short-time Fourier transform and the wavelet transform. These signals also exhibit highly non-stationary characteristics and suffer from a low signal-to-noise ratio (SNR). As a result, there is often poor task detection accuracy and high error rates in designed brain-computer interfacing (BCI) systems. In this paper, a novel preprocessing method is proposed to automatically reconstruct the EEG signal by selecting the intrinsic mode functions (IMFs) based on a median frequency measure. Multivariate empirical mode decomposition is used to decompose the EEG signals into a set of IMFs. The reconstructed EEG signal has a high SNR and contains only information correlated to a specific motor imagery task. The common spatial pattern is used to extract features from the reconstructed EEG signals. Linear discriminant analysis and support vector machines have been utilized to classify the features into left-hand and right-hand motor imagery tasks. Our experimental results on the BCI competition IV dataset 2A show that the proposed method with fifteen channels outperforms bandpass filtering with 22 channels (by >1%), raw EEG signals (by >9%, p = 0.0078), empirical mode decomposition-based filtering (by >13%, p = 0.0039), and discrete wavelet transform-based filtering (by >17%, p = 0.0039).
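The EMD step itself is omitted here, but the median-frequency selection rule can be sketched on pre-computed components. Keeping components whose median frequency falls in the sensorimotor band is a simplified stand-in for the paper's actual criterion:

```python
import numpy as np
from scipy.signal import welch

def median_frequency(x, fs):
    """Frequency below which half of the signal's spectral power lies."""
    f, pxx = welch(x, fs=fs, nperseg=min(256, len(x)))
    cum = np.cumsum(pxx)
    return f[np.searchsorted(cum, cum[-1] / 2.0)]

def select_imfs(imfs, fs, band=(8, 30)):
    """Keep IMFs whose median frequency lies in the given band (assumed
    selection rule) and sum them into a reconstructed signal."""
    keep = [imf for imf in imfs
            if band[0] <= median_frequency(imf, fs) <= band[1]]
    return np.sum(keep, axis=0)

fs = 250
t = np.arange(0, 2, 1 / fs)
imfs = [np.sin(2 * np.pi * 1 * t),    # slow drift-like component
        np.sin(2 * np.pi * 10 * t),   # mu-band component (kept)
        np.sin(2 * np.pi * 50 * t)]   # line-noise-like component
rec = select_imfs(imfs, fs)
```

With these synthetic components, only the 10 Hz one survives, so the reconstruction discards both the drift and the high-frequency interference.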

70 citations


Journal ArticleDOI
TL;DR: To exploit the end-to-end (e2e) property of deep learning models, a novel GDL methodology is proposed in which only minimal objective-free preprocessing steps are needed, together with an innovative multilevel GDL-based classification scheme.

Journal ArticleDOI
05 Dec 2019-Entropy
TL;DR: A novel motor imagery classification scheme based on the continuous wavelet transform and the convolutional neural network is proposed to achieve improved classification performance compared with the existing methods, thus showcasing the feasibility of motor imagery BCI.
Abstract: The motor imagery-based brain-computer interface (BCI) using electroencephalography (EEG) has been receiving attention from neural engineering researchers and is being applied to various rehabilitation applications. However, the performance degradation caused by motor imagery EEG with a very low signal-to-noise ratio raises several issues for practical BCI applications. In this paper, we propose a novel motor imagery classification scheme based on the continuous wavelet transform and the convolutional neural network. Continuous wavelet transform with three mother wavelets is used to capture a highly informative EEG image by combining time-frequency content and electrode location. A convolutional neural network is then designed both to classify motor imagery tasks and to reduce computational complexity. The proposed method was validated using two public BCI datasets, BCI competition IV dataset 2b and BCI competition II dataset III. The proposed methods were found to achieve improved classification performance compared with existing methods, thus showcasing the feasibility of motor imagery BCI.
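A continuous wavelet transform of this kind can be sketched directly via convolution with complex Morlet wavelets; the mother wavelet choice and cycle count below are illustrative, not the paper's three wavelets:

```python
import numpy as np

def morlet_cwt(x, fs, freqs, n_cycles=7.0):
    """Time-frequency magnitude map via complex Morlet wavelets:
    one row per analysis frequency, one column per sample."""
    tfr = np.empty((len(freqs), len(x)))
    for i, f in enumerate(freqs):
        sigma = n_cycles / (2.0 * np.pi * f)          # temporal width
        t = np.arange(-3 * sigma, 3 * sigma, 1.0 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma**2))
        wavelet /= np.linalg.norm(wavelet)            # comparable rows
        tfr[i] = np.abs(np.convolve(x, wavelet, mode="same"))
    return tfr

fs = 250
sig = np.sin(2 * np.pi * 10 * np.arange(fs * 2) / fs)  # 2 s of 10 Hz activity
tfr = morlet_cwt(sig, fs, freqs=[6.0, 10.0, 25.0])
```

Unlike the fixed-window STFT, the wavelet's support shrinks with frequency, so each row trades time and frequency resolution appropriately; for the 10 Hz test signal, the middle row dominates.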

Journal ArticleDOI
TL;DR: A fusion approach that combines features from simultaneously recorded electroencephalogram (EEG) and MEG signals to improve classification performances in motor imagery-based brain-computer interfaces (BCIs) is adopted.
Abstract: We adopted a fusion approach that combines features from simultaneously recorded electroencephalogram (EEG) and magnetoencephalogram (MEG) signals to improve classification performances in motor imagery-based brain-computer interfaces (BCIs). We applied our approach to a group of 15 healthy subjects and found a significant classification performance enhancement as compared to standard single-modality approaches in the alpha and beta bands. Taken together, our findings demonstrate the advantage of considering multimodal approaches as complementary tools for improving the impact of noninvasive BCIs.

Journal ArticleDOI
TL;DR: A novel concept for enhancing brain-computer interface systems that adopts fuzzy integrals, especially in the fusion for classifying brain- computer interface commands is presented.
Abstract: Brain-computer interface technologies, such as steady-state visually evoked potential, P300, and motor imagery are methods of communication between the human brain and the external devices. Motor imagery-based brain-computer interfaces are popular because they avoid unnecessary external stimuli. Although feature extraction methods have been illustrated in several machine intelligent systems in motor imagery-based brain-computer interface studies, the performance remains unsatisfactory. There is increasing interest in the use of the fuzzy integrals, the Choquet and Sugeno integrals, that are appropriate for use in applications in which fusion of data must consider possible data interactions. To enhance the classification accuracy of brain-computer interfaces, we adopted fuzzy integrals, after employing the classification method of traditional brain-computer interfaces, to consider possible links between the data. Subsequently, we proposed a novel classification framework called the multimodal fuzzy fusion-based brain-computer interface system. Ten volunteers performed a motor imagery-based brain-computer interface experiment, and we acquired electroencephalography signals simultaneously. The multimodal fuzzy fusion-based brain-computer interface system enhanced performance compared with traditional brain-computer interface systems. Furthermore, when using the motor imagery-relevant electroencephalography frequency alpha and beta bands for the input features, the system achieved the highest accuracy, up to 78.81% and 78.45% with the Choquet and Sugeno integrals, respectively. Herein, we present a novel concept for enhancing brain-computer interface systems that adopts fuzzy integrals, especially in the fusion for classifying brain-computer interface commands.
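The fusion step rests on the discrete Choquet and Sugeno integrals over a fuzzy measure, which can encode interactions between classifiers that a plain weighted average cannot. A minimal sketch with a hand-specified two-classifier measure (the measure values are illustrative):

```python
import numpy as np

def choquet(scores, measure):
    """Discrete Choquet integral of classifier scores w.r.t. a fuzzy
    measure {frozenset of source indices: weight}, measure[all] == 1."""
    idx = np.argsort(scores)                      # ascending scores
    x = np.asarray(scores, float)[idx]
    total, prev = 0.0, 0.0
    for k in range(len(x)):
        total += (x[k] - prev) * measure[frozenset(idx[k:].tolist())]
        prev = x[k]
    return total

def sugeno(scores, measure):
    """Discrete Sugeno integral: max over k of min(x_(k), g(A_(k)))."""
    idx = np.argsort(scores)
    x = np.asarray(scores, float)[idx]
    return max(min(x[k], measure[frozenset(idx[k:].tolist())])
               for k in range(len(x)))

# two classifiers; an additive measure makes Choquet a weighted mean
g = {frozenset([0]): 0.3, frozenset([1]): 0.7, frozenset([0, 1]): 1.0}
c = choquet([0.8, 0.6], g)    # 0.3*0.8 + 0.7*0.6
s = sugeno([0.8, 0.6], g)
```

Setting g of the pair above (or below) the sum of the singleton weights models synergy (or redundancy) between the fused sources, which is the motivation for fuzzy integrals over simple averaging.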

Journal ArticleDOI
TL;DR: The aim of this work is to reduce the long calibration time in BCI systems by proposing a transfer learning model which can be used for evaluating unseen single trials for a subject without the need for training session data.
Abstract: The performance of a brain–computer interface (BCI) will generally improve by increasing the volume of training data on which it is trained. However, a classifier’s generalization ability is often ...

Journal ArticleDOI
TL;DR: Although the performance was not directly correlated to the degree of embodiment, subjective magnitude of the body ownership transfer illusion correlated with the ability to modulate the sensorimotor rhythm.
Abstract: This paper presents a gamified motor imagery brain-computer interface (MI-BCI) training in immersive virtual reality. The aim of the proposed training method is to increase engagement, attention, and motivation in co-adaptive event-driven MI-BCI training. This was achieved using gamification, a progressive increase of the training pace, and a virtual reality design reinforcing the body ownership transfer (embodiment) into the avatar. Of the 20 healthy participants performing 6 runs of 2-class MI-BCI training (left/right hand), 19 were trained to a basic level of MI-BCI operation, with an average peak accuracy in the session of 75.84%. This confirms that the proposed training method succeeded in improving MI-BCI skills; moreover, participants left the session in a state of high positive affect. Although performance was not directly correlated with the degree of embodiment, the subjective magnitude of the body ownership transfer illusion correlated with the ability to modulate the sensorimotor rhythm.

Journal ArticleDOI
TL;DR: This study aims to improve multiclass classification accuracy for motor imagery movement using sub-band common spatial patterns with sequential feature selection (SBCSP-SBFS) method and shows an increase of 7% as compared to previously implemented multiclass EEG classification.
Abstract: Electroencephalogram (EEG) signal classification plays an important role in facilitating physically impaired patients by providing brain-computer interface (BCI)-controlled devices. However, practical applications of BCI make it difficult to decode motor imagery-based brain signals for multiclass classification due to their non-stationary nature. In this study, we aim to improve multiclass classification accuracy for motor imagery movement using the sub-band common spatial patterns with sequential feature selection (SBCSP-SBFS) method. A filter bank with bandpass filters of different overlapped frequency cutoffs is applied to suppress noise in the raw EEG signals. The output of these sub-band filters is sent for feature extraction by applying common spatial pattern (CSP) and linear discriminant analysis (LDA). As not all of the extracted features are necessary for classification, optimal features are selected by passing the extracted features to the sequential backward floating selection (SBFS) technique. Three different classifiers were then trained on these optimal features: support vector machine (SVM), Naive-Bayesian Parzen-Window (NBPW), and k-Nearest Neighbor (KNN). Results are evaluated on two datasets, i.e., Emotiv Epoc and wet gel electrodes, for three classes: right-hand motor imagery, left-hand motor imagery, and rest state. The proposed model yields a maximum accuracy of 60.61% for the Emotiv Epoc headset and 86.50% for wet gel electrodes. The computed accuracy shows an increase of 7% compared to previously implemented multiclass EEG classification.
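The overlapped sub-band front end can be sketched with zero-phase Butterworth filters; the band edges below are illustrative, not the paper's exact cutoffs:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def filter_bank(x, fs, bands):
    """Decompose a signal into overlapping sub-bands with zero-phase
    4th-order Butterworth band-pass filters (the SBCSP front end)."""
    out = []
    for lo, hi in bands:
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        out.append(filtfilt(b, a, x))
    return np.stack(out)

fs = 250
bands = [(4, 8), (6, 10), (8, 12), (10, 14), (12, 16)]  # hypothetical cutoffs
sig = np.sin(2 * np.pi * 10 * np.arange(fs * 2) / fs)   # 2 s of 10 Hz activity
sub = filter_bank(sig, fs, bands)
energies = np.mean(sub ** 2, axis=1)   # 10 Hz lands mostly in the 8-12 Hz band
```

Each sub-band row then feeds its own CSP/LDA feature extractor, and SBFS prunes the resulting feature set before classification.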

Journal ArticleDOI
TL;DR: A neurorehabilitation setup combining several approaches that were shown to have a positive effect in patients with SCI is described, including gait training by means of non-invasive, surface functional electrical stimulation of the lower-limbs, proprioceptive and tactile feedback, balance control through overground walking and cue-based decoding of cortical motor commands using a brain-machine interface (BMI).
Abstract: Spinal cord injury (SCI) impairs the flow of sensory and motor signals between the brain and the areas of the body located below the lesion level. Here, we describe a neurorehabilitation setup combining several approaches that were shown to have a positive effect in patients with SCI: gait training by means of non-invasive, surface functional electrical stimulation (sFES) of the lower limbs, proprioceptive and tactile feedback, balance control through overground walking, and cue-based decoding of cortical motor commands using a brain-machine interface (BMI). The central component of this new approach was the development of a novel muscle stimulation paradigm for step generation using 16 sFES channels, taking all sub-phases of physiological gait into account. We also developed a new BMI protocol to identify left- and right-leg motor imagery that was used to trigger an sFES-generated step movement. Our system was tested and validated with two patients with chronic paraplegia. These patients were able to walk safely with 65–70% body weight support, accumulating a total of 4,580 steps with this setup. We observed cardiovascular improvements and less dependency on walking assistance, but also partial neurological recovery in both patients, with substantial rates of motor improvement for one of them.

Journal ArticleDOI
TL;DR: Results indicate that chronic musculoskeletal pain conditions affecting the limbs and face are associated with altered motor imagery performance as measured by the LRJT.

Journal ArticleDOI
TL;DR: NFB by fMRI is used to train healthy individuals to reinforce brain patterns related to motor execution while performing a motor imagery task, with no overt movement, and the first demonstration of white matter FA changes following a very short training schedule is demonstrated.

Journal ArticleDOI
TL;DR: A hybrid BCI paradigm to explore a feasible and natural way to play games by using electroencephalogram (EEG) signals in a practical environment by combining motor imagery and steady-state visually evoked potentials to generate multiple commands.
Abstract: Brain-computer interfaces (BCIs) not only can allow individuals to voluntarily control external devices, helping to restore lost motor functions of the disabled, but can also be used by healthy users for entertainment and gaming applications. In this study, we proposed a hybrid BCI paradigm to explore a feasible and natural way to play games by using electroencephalogram (EEG) signals in a practical environment. In this paradigm, we combined motor imagery (MI) and steady-state visually evoked potentials (SSVEPs) to generate multiple commands. A classic game, Tetris, was chosen as the control object. The novelty of this study includes the effective usage of a “dwell time” approach and fusion rules to design BCI games. To demonstrate the feasibility of the proposed hybrid paradigm, ten subjects were chosen to participate in online control experiments. The experimental results showed that all subjects successfully completed the predefined tasks with high accuracy. This proposed hybrid BCI paradigm co...

Journal ArticleDOI
TL;DR: Findings validate the feasibility of the proposed NFT to improve sensorimotor cortical activations and BCI performance during motor imagery and it is promising to optimize conventional NFT manner and evaluate the effectiveness of motor training.
Abstract: Objective: We proposed a brain-computer interface (BCI)-based visual-haptic neurofeedback training (NFT) incorporating synchronous visual scene and proprioceptive electrical stimulation feedback. The goal of this work was to improve sensorimotor cortical activations and classification performance during motor imagery (MI). In addition, their correlations and brain network patterns were also investigated. Approach: 64-channel electroencephalographic (EEG) data were recorded in nineteen healthy subjects during MI before and after NFT. During NFT sessions, the synchronous visual-haptic feedback was driven by real-time lateralized relative event-related desynchronization (lrERD). Main results: Comparing the control sessions before and after training, the cortical activations measured by multi-band (i.e., alpha_1: 8-10 Hz, alpha_2: 11-13 Hz, beta_1: 15-20 Hz and beta_2: 22-28 Hz) absolute ERD powers and lrERD patterns were significantly enhanced after the NFT. Classification performance was also significantly improved, with mean accuracy rising by ~9% to reach ~85% from a relatively poor baseline. Additionally, there were significant correlations between lrERD patterns and classification accuracies. The partial directed coherence-based functional connectivity (FC) networks covering the sensorimotor area also showed an increase after the NFT. Significance: These findings validate the feasibility of our proposed NFT for improving sensorimotor cortical activations and BCI performance during motor imagery, and it is a promising way to optimize conventional NFT and evaluate the effectiveness of motor training.

Journal ArticleDOI
TL;DR: Changes in corticospinal excitability were specific to actual/imagined movement preparation, as no modulation was observed when preparing and generating images of cued visual objects; inhibition is thus a signature of how actions are prepared, whether they are imagined or actually executed.
Abstract: Current theories consider motor imagery, the mental representation of action, to have considerable functional overlap with the processes involved in actual movement preparation and execution. To test the neural specificity of motor imagery, we conducted a series of 3 experiments using transcranial magnetic stimulation (TMS). We compared changes in corticospinal excitability as people prepared and implemented actual or imagined movements, using a delayed response task in which a cue indicated the forthcoming response. TMS pulses, used to elicit motor-evoked responses in the first dorsal interosseous muscle of the right hand, were applied before and after an imperative signal, allowing us to probe the state of excitability during movement preparation and implementation. Similar to previous work, excitability increased in the agonist muscle during the implementation of an actual or imagined movement. Interestingly, preparing an imagined movement engaged inhibitory processes similar to those observed during actual movement, although the degree of inhibition was less selective in the imagery conditions. These changes in corticospinal excitability were specific to actual/imagined movement preparation, as no modulation was observed when preparing and generating images of cued visual objects. Taken together, these results indicate that inhibition is a signature of how actions are prepared, whether they are imagined or actually executed.

Journal ArticleDOI
TL;DR: The results suggest that motor imagery training could be an adjunct to standard physiotherapy care in older adults, although it is unclear whether or not the effects are clinically worthwhile.

Journal ArticleDOI
TL;DR: It is suggested that MI may be effective for pain relief and improvement in range of motion in chronic musculoskeletal pain conditions, although this conclusion is based on evidence of limited certainty as assessed using the GRADE approach.
Abstract: Introduction: In recent years, there has been an increase in the use of motor imagery (MI) in the rehabilitation of musculoskeletal pain conditions. Across the literature, most reviews have yet to consider Laterality Judgement Task training as a form of MI. This review aimed to evaluate the ef

Journal ArticleDOI
TL;DR: The association between upper limb motor recovery and beta activations reinforces the hypothesis that broader regions of the cortex activate during movement tasks as a compensatory mechanism in stroke patients with severe motor impairment.
Abstract: Stroke is a leading cause of motor disability worldwide. Upper limb rehabilitation is particularly challenging, since only approximately 35% of patients recover significant hand function 6 months after stroke onset. Therefore, new therapies, especially those based on brain-computer interfaces (BCIs) and robotic assistive devices, are currently under research. Brain rhythms in the alpha and beta bands, acquired with electroencephalography (EEG) during motor tasks such as motor imagery/intention (MI), could provide insight into the motor-related neural plasticity occurring during a BCI intervention. Hence, a longitudinal analysis of subacute stroke patients' brain rhythms during a BCI-coupled robotic device intervention was performed in this study. Data from 9 stroke patients were acquired across 12 sessions of the BCI intervention. Alpha and beta event-related desynchronization/synchronization (ERD/ERS) trends across sessions and their association with time since stroke onset and clinical upper extremity recovery were analyzed using correlation and linear stepwise regression, respectively. More EEG channels presented significant ERD/ERS trends across sessions related to time since stroke onset in beta than in alpha. Linear models implied a moderate relationship between alpha rhythms in frontal, temporal, and parietal areas and upper limb motor recovery, and suggested a strong association between beta activity in frontal, central, and parietal regions and upper limb motor recovery. The stronger association of beta with both time since stroke onset and upper limb motor recovery could be explained by beta's relation to closed-loop communication between the sensorimotor cortex and the paralyzed upper limb, with alpha probably more associated with motor learning mechanisms.
The association between upper limb motor recovery and beta activations reinforces the hypothesis that broader regions of the cortex activate during movement tasks as a compensatory mechanism in stroke patients with severe motor impairment. Therefore, EEG across BCI interventions could provide valuable information for prognosis and BCI cortical activity targets.
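The per-channel "trend across sessions" analysis can be sketched as a linear regression of each channel's ERD value on session number, keeping channels whose slope is significant. The values below are synthetic and purely illustrative, not patient data.

```python
import numpy as np
from scipy.stats import linregress

sessions = np.arange(1, 13)  # 12 BCI intervention sessions
rng = np.random.default_rng(7)

# Toy per-session beta ERD values for one channel: a strengthening
# desynchronisation trend (increasingly negative) plus noise.
erd = -2.0 * sessions + rng.normal(0, 3, sessions.size)

# A significant slope across sessions marks a channel with an ERD trend,
# which the study then related to time since stroke onset and recovery.
res = linregress(sessions, erd)
trend_is_significant = res.pvalue < 0.05
```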

Journal ArticleDOI
TL;DR: The findings from the present study may be a basis for further development of BCI systems for decoding left and right stepping during mental exercise where the two motions are alternately imagined.
Abstract: Bilateral upper-limb motor imagery has been demonstrated to be a useful mental task in electroencephalography (EEG)-based brain–computer interfaces (BCIs). By contrast, few studies have examined bilateral lower-limb motor imagery, and all of them have focused on imaginary foot movements. The left–right classification accuracy reported in these studies based on the EEG mu rhythm (8–13 Hz) and beta band (13–30 Hz) remains unsatisfactory. The present study investigated the possibility of using lower-limb stepping motor imagery as the mental task and analysed the EEG difference between imaginary left-leg stepping (L-stepping) and right-leg stepping (R-stepping) movements. An experimental paradigm was designed to collect 5-s motor imagery EEG signals at nine recording sites around the vertex of the brain. Results from eight able-bodied participants indicated that the commonly used mu event-related desynchronisation (ERD) feature exhibited no significant difference between the two imaginary movements for all recording sites and all time intervals within the 5-s motor imagery period. Regarding the other commonly used feature, beta event-related synchronisation, no significant difference between the two imagery tasks was observed for most of the recording sites and time intervals. Instead, theta band (4–8 Hz) ERD significantly differed between the L- and R-stepping imagery tasks at five sites (FC4, C3, CP3, Cz, CPz) within the first 2 s after motor imagery cue onset. The findings from the present study may be a basis for further development of BCI systems for decoding left and right stepping during mental exercise where the two motions are alternately imagined.
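The band-limited ERD features compared between the L- and R-stepping tasks can be sketched as the relative power change of a band-pass-filtered imagery window versus a pre-cue baseline. The filter order and the toy signals below are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def band_erd(trial, baseline, fs, band):
    """Band-limited ERD%: power in `band` during the imagery window
    relative to a pre-cue baseline (negative = desynchronisation)."""
    sos = butter(4, band, btype="band", fs=fs, output="sos")
    p_task = np.mean(sosfiltfilt(sos, trial) ** 2)
    p_base = np.mean(sosfiltfilt(sos, baseline) ** 2)
    return (p_task - p_base) / p_base * 100.0

fs = 250
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(0)
# Toy single-channel EEG: a 6 Hz (theta) component whose amplitude
# halves during imagery, plus broadband noise.
baseline = np.sin(2 * np.pi * 6 * t) + 0.1 * rng.standard_normal(t.size)
trial = 0.5 * np.sin(2 * np.pi * 6 * t) + 0.1 * rng.standard_normal(t.size)
erd = band_erd(trial, baseline, fs, (4, 8))  # theta power drop → negative ERD
```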

Journal ArticleDOI
TL;DR: The findings pave the way toward removing the need to acquire subject-specific training data and hold promise for a novel real-time fNIRS-based BCI system design, which would be most valuable for patient populations from whom data to train a classification algorithm cannot be obtained.
Abstract: Objective The aim of this study was to introduce a novel methodology for classification of brain hemodynamic responses collected via functional near infrared spectroscopy (fNIRS) during rest, motor imagery (MI) and motor execution (ME) tasks, which involves generating population-level training sets. Approach A 48-channel fNIRS system was utilized to obtain hemodynamic signals from the frontal (FC), primary motor (PMC) and somatosensory cortex (SMC) of ten subjects during an experimental paradigm consisting of MI and ME of various right hand movements. Classification accuracies of random forest (RF), support vector machines (SVM), and artificial neural networks (ANN) were computed at the single-subject level by training each classifier with subject-specific features, and at the group level by training with features from all subjects, for ME versus Rest, MI versus Rest and MI versus ME conditions. The performances were also computed for channel data restricted to the FC, PMC and SMC regions separately to determine the optimal probe location. Main results RF, SVM and ANN had comparably high classification accuracies for ME versus Rest (94%, 96% and 98%, respectively) and for MI versus Rest (95%, 95% and 98%, respectively) when fed with group-level feature sets. The accuracy of each algorithm in localized brain regions was comparable (>93%) to the accuracy obtained with whole-brain channels (>94%) for both ME versus Rest and MI versus Rest conditions. Significance By demonstrating the feasibility of generating a population-level training set with high classification performance for three different classification algorithms, the findings pave the way toward removing the need to acquire subject-specific training data and hold promise for a novel real-time fNIRS-based BCI system design, which would be most valuable for patient populations from whom data to train a classification algorithm cannot be obtained.
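The contrast between subject-level and group-level (population) training can be sketched with a random forest on synthetic features. The array sizes, class separation, and classifier settings below are hypothetical stand-ins, not the paper's data or tuned models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_subj, trials, feats = 10, 40, 12  # hypothetical sizes

# Toy fNIRS features: class 1 trials have a shifted mean response.
def make_subject():
    X0 = rng.standard_normal((trials, feats))
    X1 = rng.standard_normal((trials, feats)) + 1.0
    return np.vstack([X0, X1]), np.array([0] * trials + [1] * trials)

subjects = [make_subject() for _ in range(n_subj)]

# Subject-level: train and validate within each subject separately.
subj_acc = np.mean([cross_val_score(RandomForestClassifier(random_state=0),
                                    X, y, cv=5).mean()
                    for X, y in subjects])

# Group-level: pool all subjects' features into one training set.
Xg = np.vstack([X for X, _ in subjects])
yg = np.concatenate([y for _, y in subjects])
grp_acc = cross_val_score(RandomForestClassifier(random_state=0),
                          Xg, yg, cv=5).mean()
```

On real data the interesting question is whether `grp_acc` stays close to `subj_acc`, which is what makes a pre-trained, calibration-free BCI plausible.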

Journal ArticleDOI
25 Jan 2019
TL;DR: An RHI-based paradigm with a motorized moving rubber hand can significantly enhance MI with better characteristics for use with BCI, and the ERD arrival time suggests that the proposed paradigm is applicable for BCI.
Abstract: Enhancing motor imagery (MI) results in amplified event-related desynchronization (ERD) and is important for MI-based rehabilitation and brain–computer interface (BCI) applications. Many attempts to enhance MI by providing visual guidance have been reported. We believe that the rubber hand illusion (RHI), which induces body ownership over an external object, can provide better guidance to enhance MI; thus, an RHI-based paradigm with a motorized moving rubber hand was proposed. To validate the proposed MI-enhancing paradigm, we conducted an experimental comparison among paradigms with 20 healthy subjects. The peak amplitude and arrival times of ERD were compared at contralateral and ipsilateral electroencephalogram channels. We found significantly amplified ERD caused by the proposed paradigm, similar to the ERD caused by motor execution. In addition, the arrival time suggests that the proposed paradigm is applicable for BCI. In conclusion, the proposed paradigm can significantly enhance MI with better characteristics for use with BCI.

Journal ArticleDOI
TL;DR: The findings suggest that imagined speech can be used as a reliable activation task for selected users for development of more intuitive BCIs for communication.
Abstract: OBJECTIVE Most brain-computer interfaces (BCIs) based on functional near-infrared spectroscopy (fNIRS) require that users perform mental tasks such as motor imagery, mental arithmetic, or music imagery to convey a message or to answer simple yes or no questions. These cognitive tasks usually have no direct association with the communicative intent, which makes them difficult for users to perform. APPROACH In this paper, a 3-class intuitive BCI is presented which enables users to directly answer yes or no questions by covertly rehearsing the word 'yes' or 'no' for 15 s. The BCI also admits an equivalent duration of unconstrained rest which constitutes the third discernable task. Twelve participants each completed one offline block and six online blocks over the course of two sessions. The mean value of the change in oxygenated hemoglobin concentration during a trial was calculated for each channel and used to train a regularized linear discriminant analysis (RLDA) classifier. MAIN RESULTS By the final online block, nine out of 12 participants were performing above chance (p < 0.001 using the binomial cumulative distribution), with a 3-class accuracy of 83.8% ± 9.4%. Even when considering all participants, the average online 3-class accuracy over the last three blocks was 64.1% ± 20.6%, with only three participants scoring below chance (p < 0.001). For most participants, channels in the left temporal and temporoparietal cortex provided the most discriminative information. SIGNIFICANCE To our knowledge, this is the first report of an online 3-class imagined speech BCI. Our findings suggest that imagined speech can be used as a reliable activation task for selected users for development of more intuitive BCIs for communication.
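The feature extraction and classification step described above can be sketched as follows: the mean oxygenated-hemoglobin (HbO) change per channel over the task window forms each trial's feature vector, and a shrinkage-regularized LDA is fit, mirroring the paper's RLDA. The synthetic data and class offsets are illustrative assumptions only.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n_trials, n_channels = 30, 48  # 48 channels as in the study; trial count is illustrative

# Feature per trial = mean HbO change per channel over the task window.
def mean_hbo_features(hbo_timeseries):
    """hbo_timeseries: (trials, channels, samples) -> (trials, channels)."""
    return hbo_timeseries.mean(axis=2)

# Toy data for the three classes: rest, 'yes', 'no' (offsets are made up).
def make_class(offset):
    return rng.standard_normal((n_trials, n_channels, 100)) + offset

X = np.vstack([mean_hbo_features(make_class(o)) for o in (0.0, 0.6, -0.6)])
y = np.repeat(["rest", "yes", "no"], n_trials)

# Shrinkage-regularized LDA, analogous to the paper's RLDA classifier.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
clf.fit(X, y)
train_acc = clf.score(X, y)
```

Shrinkage is the standard remedy when the channel count (48) is large relative to the number of trials, which makes the sample covariance ill-conditioned.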