
Showing papers by "Gernot Müller-Putz published in 2018"


Journal ArticleDOI
TL;DR: It is shown that three executed reach-and-grasp actions prominent in everyday use can be discriminated from non-invasive EEG; the underlying neural correlates showed significant differences between all tested conditions.
Abstract: Objective. Despite the high number of degrees of freedom of the human hand, most actions of daily life can be executed incorporating only palmar, pincer and lateral grasp. In this study we attempt to discriminate these three different executed reach-and-grasp actions utilizing their EEG neural correlates. Approach. In a cue-guided experiment, 15 healthy individuals were asked to perform these actions using daily life objects. We recorded 72 trials for each reach-and-grasp condition and for a no-movement condition. Results. Using low-frequency time-domain features from 0.3 to 3 Hz, we achieved binary classification accuracies of 72.4% (STD ± 5.8%) between grasp types; for grasps versus the no-movement condition, peak performances of 93.5% (STD ± 4.6%) could be reached. In an offline multiclass classification scenario which incorporated not only all reach-and-grasp actions but also the no-movement condition, the highest performance could be reached using a window of 1000 ms for feature extraction. Classification performance peaked at 65.9% (STD ± 8.1%). Underlying neural correlates of the reach-and-grasp actions, investigated over the primary motor cortex, showed significant differences from approximately 800 ms to 1200 ms after movement onset, which is also the time frame where classification performance reached its maximum. Significance. We could show that it is possible to discriminate three executed reach-and-grasp actions prominent in people's everyday use from non-invasive EEG. The underlying neural correlates showed significant differences between all tested conditions. These findings will eventually contribute to our attempt to control a neuroprosthesis in a natural and intuitive way, which could ultimately benefit motor-impaired end users in their daily life actions.
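The decoding approach described above (low-frequency time-domain features from 0.3 to 3 Hz fed to a binary classifier) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' pipeline; the sampling rate, channel count, filter order, decimation factor and the shrinkage-LDA classifier are assumptions for the sketch.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs = 256                              # assumed sampling rate in Hz
n_trials, n_ch, n_samp = 72, 8, fs    # 72 trials per class, 1 s windows

# Synthetic stand-in for two grasp conditions: class 1 carries a slow
# (~1 Hz) class-specific component on top of noise.
t = np.arange(n_samp) / fs
X = rng.standard_normal((2 * n_trials, n_ch, n_samp))
X[n_trials:] += 0.5 * np.sin(2 * np.pi * 1.0 * t)
y = np.repeat([0, 1], n_trials)

# Low-frequency time-domain features: band-pass 0.3-3 Hz, then
# decimate the filtered trace to a manageable feature vector.
b, a = butter(4, [0.3, 3.0], btype="band", fs=fs)
Xf = filtfilt(b, a, X, axis=-1)
feats = Xf[:, :, ::16].reshape(len(y), -1)

# Shrinkage LDA is a common choice for this kind of feature set.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
acc = cross_val_score(clf, feats, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```

The same construction extends to the multiclass case by adding further conditions and letting LDA handle more than two classes.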

104 citations


Journal ArticleDOI
TL;DR: Predictive activity in contralateral primary sensorimotor and premotor areas exhibited significantly larger tuning to end-effector velocity when the visuomotor tracking task was performed, extending the current understanding of low-frequency electroencephalography (EEG) tuning to position and velocity signals.
Abstract: Movement decoders exploit the tuning of neural activity to various movement parameters with the ultimate goal of controlling end-effector action. Invasive approaches, typically relying on spiking activity, have demonstrated feasibility. Results of recent functional neuroimaging studies suggest that information about movement parameters is even accessible non-invasively in the form of low-frequency brain signals. However, their spatiotemporal tuning characteristics to single movement parameters are still unclear. Here, we extend the current understanding of low-frequency electroencephalography (EEG) tuning to position and velocity signals. We recorded EEG from 15 healthy participants while they performed visuomotor and oculomotor pursuit tracking tasks. Linear decoders, fitted to EEG signals in the frequency range of the tracking movements, predicted positions and velocities with moderate correlations (0.2–0.4; above chance level) in both tasks. Predictive activity in terms of decoder patterns was significant in superior parietal and parieto-occipital areas in both tasks. By contrasting the two tracking tasks, we found that predictive activity in contralateral primary sensorimotor and premotor areas exhibited significantly larger tuning to end-effector velocity when the visuomotor tracking task was performed.
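The linear-decoder-plus-correlation evaluation described above can be sketched on synthetic data standing in for EEG and tracking velocity; all signal parameters, the ridge regularization and the train/test split are illustrative assumptions, not the authors' method.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
fs, n_samp, n_ch = 60, 3000, 16   # assumed: 60 Hz, 50 s, 16 channels

# Synthetic slow target trajectory (sum of low-frequency sines),
# standing in for cursor velocity during pursuit tracking.
t = np.arange(n_samp) / fs
vel = np.sin(2*np.pi*0.3*t) + 0.5*np.sin(2*np.pi*0.7*t)

# Synthetic "EEG": each channel mixes the velocity signal with noise.
w = rng.standard_normal(n_ch)
eeg = np.outer(vel, w) + 2.0*rng.standard_normal((n_samp, n_ch))

# Linear decoder fitted on the first half, evaluated on the second
# half via Pearson correlation between prediction and target.
half = n_samp // 2
dec = Ridge(alpha=1.0).fit(eeg[:half], vel[:half])
pred = dec.predict(eeg[half:])
r = np.corrcoef(pred, vel[half:])[0, 1]
print(f"decoding correlation r = {r:.2f}")
```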

37 citations


Journal ArticleDOI
TL;DR: It is concluded that sports MI combined with an interactive game environment could be a promising future task in motor learning and rehabilitation, improving motor functions in late therapy stages or supporting neuroplasticity.
Abstract: Motor imagery is often used to induce changes in electroencephalographic (EEG) signals for imagery-based brain-computer interfacing (BCI). A BCI is a device that translates brain signals into control signals, providing severely motor-impaired persons with an additional, non-muscular channel for communication and control. In recent years, there has been increasing interest in using BCIs also for healthy people, in terms of enhancement or gaming. Most studies focus on improving signal processing, feature extraction, and classification methods, but the performance of a BCI can also be improved by optimizing the user's control strategies, e.g., by using more vivid and engaging mental tasks for control. We used multichannel EEG to investigate neural correlates of a sports imagery task (playing tennis) compared to a simple motor imagery task (squeezing a ball). To enhance the vividness of both tasks, participants performed a short physical exercise between two imagery sessions. EEG was recorded from 60 closely spaced electrodes placed over frontal, central, and parietal areas of 30 healthy volunteers divided into two groups. Whereas Group 1 (EG) performed a physical exercise between the two imagery sessions, Group 2 (CG) watched a landscape movie without physical activity. Spatiotemporal event-related desynchronization (ERD) and event-related synchronization (ERS) patterns during motor imagery (MI) tasks were evaluated. The results of the EG showed significantly stronger ERD patterns in the alpha frequency band (8-13 Hz) during MI of tennis after training. Our results are in line with previous findings that MI in combination with motor execution has beneficial effects. We conclude that sports MI combined with an interactive game environment could be a promising future task in motor learning and rehabilitation, improving motor functions in late therapy stages or supporting neuroplasticity.
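The ERD/ERS quantification mentioned above is classically computed as the relative band-power change with respect to a rest baseline. A minimal single-channel sketch on synthetic data, assuming a 250 Hz sampling rate, a 10 Hz alpha oscillation and a cue at 2 s:

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(2)
fs = 250
t = np.arange(0, 6, 1/fs)          # one 6 s trial: 2 s rest, 4 s imagery

# Synthetic single-channel trial: alpha (10 Hz) oscillation whose
# amplitude drops after cue onset at t = 2 s (the ERD effect).
amp = np.where(t < 2.0, 1.0, 0.5)
x = amp*np.sin(2*np.pi*10*t) + 0.2*rng.standard_normal(t.size)

# Classic ERD pipeline: band-pass in alpha (8-13 Hz), square to get
# instantaneous power, then express power relative to a rest baseline.
b, a = butter(4, [8, 13], btype="band", fs=fs)
power = filtfilt(b, a, x)**2
baseline = power[t < 2.0].mean()
erd = 100.0*(power - baseline)/baseline   # negative values = ERD

erd_during_mi = erd[t >= 3.0].mean()
print(f"mean ERD during imagery: {erd_during_mi:.0f}%")
```

In practice these curves are averaged over trials and electrodes to obtain the spatiotemporal ERD/ERS maps the study evaluates.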

31 citations


Journal ArticleDOI
TL;DR: The electroencephalographic (EEG)-measurable signatures caused by a loss of control over the cursor's trajectory, resulting in a target miss, are studied; the results suggest that masked and unmasked errors were indistinguishable in terms of classification.
Abstract: The detection of error-related potentials (ErrPs) in tasks with discrete feedback is well established in the brain-computer interface (BCI) field. However, the decoding of ErrPs in tasks with continuous feedback is still in its early stages. OBJECTIVE We developed a task in which subjects have continuous control of a cursor's position by means of a joystick. The cursor's position was shown to the participants in two different modalities of continuous feedback: normal and jittered. The jittered feedback was created to mimic the instability that could exist if participants controlled the trajectory directly with brain signals. APPROACH This paper studies the electroencephalographic (EEG)-measurable signatures caused by a loss of control over the cursor's trajectory, resulting in a target miss. MAIN RESULTS In both feedback modalities, time-locked potentials revealed the typical frontal-central components of error-related potentials. Errors occurring during the jittered feedback (masked errors) were delayed in comparison to errors occurring during normal feedback (unmasked errors). Masked errors displayed lower peak amplitudes than unmasked errors. Time-locked classification analysis allowed a good distinction between correct and error classes (average Cohen's κ, average TPR = 81.8% and average TNR = 96.4%). Time-locked classification analysis between masked-error and unmasked-error classes revealed results at chance level (average Cohen's κ, average TPR = 60.9% and average TNR = 58.3%). Afterwards, we performed asynchronous detection of ErrPs, combining both masked and unmasked trials. The asynchronous detection of ErrPs in a simulated online scenario resulted in an average TNR of 84.0% and an average TPR of 64.9%. SIGNIFICANCE The time-locked classification results suggest that the masked and unmasked errors were indistinguishable in terms of classification. The asynchronous classification results suggest that the feedback modality did not hinder the asynchronous detection of ErrPs.
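The reported metrics (TPR, TNR and Cohen's kappa) can be computed from a binary confusion matrix as below; the toy labels are illustrative, not data from the study.

```python
import numpy as np

def tpr_tnr_kappa(y_true, y_pred):
    """TPR, TNR and Cohen's kappa for a binary error-vs-correct task
    (1 = error trial, 0 = correct trial)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = len(y_true)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tpr = tp / (tp + fn)
    tnr = tn / (tn + fp)
    po = (tp + tn) / n                       # observed agreement
    pe = ((tp + fp) / n) * ((tp + fn) / n) \
       + ((tn + fn) / n) * ((tn + fp) / n)   # chance agreement
    kappa = (po - pe) / (1 - pe)
    return tpr, tnr, kappa

# toy labels: 10 trials, 4 of them errors
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]
tpr, tnr, kappa = tpr_tnr_kappa(y_true, y_pred)
print(f"TPR={tpr:.2f} TNR={tnr:.2f} kappa={kappa:.2f}")
```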

30 citations


Journal ArticleDOI
TL;DR: The neurophysiological signature of the interacting processes which lead to a single reach-and-grasp movement imagination (MI) is investigated and differences in frontal and parietal areas between the late ERP components related to the internally-driven selection and the externally-cued process are found.
Abstract: In this study, we investigate the neurophysiological signature of the interacting processes which lead to a single reach-and-grasp movement imagination (MI). While performing this task, healthy human participants could define their movement targets either according to an external cue or through an internal selection process. After defining their target, they could start the MI whenever they wanted. We recorded high-density electroencephalographic (EEG) activity and investigated two neural correlates: the event-related potentials (ERPs) associated with the target selection, which reflect the perceptual and cognitive processes prior to the MI, and the movement-related cortical potentials (MRCPs) associated with the planning of the self-paced MI. We found differences in frontal and parietal areas between the late ERP components related to the internally-driven selection and the externally-cued process. Furthermore, we could reliably estimate the MI onset of the self-paced task. Next, we extracted MRCP features around the MI onset to train classifiers of movement vs. rest directly on self-paced MI data. We attained performance significantly higher than chance level for both time-locked and asynchronous classification. These findings contribute to the development of more intuitive brain-computer interfaces in which movement targets are defined internally and the movements are self-paced.
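Asynchronous (self-paced) detection, as used above for movement-vs-rest classification, typically slides a classifier over the ongoing signal and declares an onset once the classifier output stays above a threshold for a dwell time. A sketch on a synthetic one-dimensional classifier-evidence trace; the threshold, dwell time and onset time are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(6)
fs = 100
t = np.arange(0, 10, 1/fs)               # 10 s of a self-paced trial

# Synthetic classifier-evidence trace: near zero at rest, rising
# around a self-paced imagery onset at t = 6 s (in practice such a
# score would come from an MRCP-trained movement-vs-rest classifier).
score = 0.1*rng.standard_normal(t.size)
score[t >= 6.0] += np.minimum(1.0, (t[t >= 6.0] - 6.0)/0.5)

# Asynchronous detection: declare an onset once the score has stayed
# above threshold for a dwell time (here 300 ms).
thr, dwell = 0.5, int(0.3*fs)
above = score > thr
onset, run = None, 0
for i, a in enumerate(above):
    run = run + 1 if a else 0
    if run >= dwell:
        onset = t[i - dwell + 1]
        break
print(f"detected onset at t = {onset:.2f} s")
```

The dwell-time requirement is what keeps isolated noise spikes from triggering false detections during rest.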

30 citations


Journal ArticleDOI
TL;DR: EEG activity reflected different movement covariates in different stages of grasping, contributing to the understanding of the temporal organization of neural grasping patterns, and could inform the design of noninvasive neuroprosthetics and brain-computer interfaces with more natural control.
Abstract: Movement covariates, such as electromyographic or kinematic activity, have been proposed as candidates for the neural representation of hand control. However, it remains unclear how these movement covariates are reflected in electroencephalographic (EEG) activity during different stages of grasping movements. In this exploratory study, we simultaneously acquired EEG, kinematic and electromyographic recordings of human subjects performing 33 types of grasps, yielding the largest such dataset to date. We observed that EEG activity reflected different movement covariates in different stages of grasping. During the pre-shaping stage, centro-parietal EEG in the lower beta frequency band reflected the object’s shape and size, whereas during the finalization and holding stages, contralateral parietal EEG in the mu frequency band reflected muscle activity. These findings contribute to the understanding of the temporal organization of neural grasping patterns, and could inform the design of noninvasive neuroprosthetics and brain-computer interfaces with more natural control.

27 citations


Journal ArticleDOI
TL;DR: fNIRS was used to investigate brain responses in healthy controls and one patient in a minimally conscious state; single runs of the patient recordings revealed task-synchronous patterns. However, it cannot be concluded whether the measured activation derives from instruction-based task performance and thus from awareness.

26 citations


Journal ArticleDOI
TL;DR: Analyses of variance (ANOVA) revealed that the latency and magnitude of the most characteristic ErrP peaks were significantly influenced by the speed at which the grasping was executed, but not by the type of grasp, which resulted in a greater accuracy of single-trial error decoding for fast movements compared to slow ones.
Abstract: OBJECTIVE In this manuscript, we consider factors that may affect the design of a hybrid brain-computer interface (BCI). We combine neural correlates of natural movements and interaction error-related potentials (ErrPs) to perform a 3D reaching task, focusing on the impact that such factors have on the evoked ErrP signatures and on their classification. APPROACH Users attempted to control a 3D virtual interface that simulated their own hand to reach and grasp two different objects. Three factors of interest were modulated during the experiment: (1) execution speed of the grasping, (2) type of grasp, and (3) mental strategy (motor imagery or real motion) used to produce motor commands. Thirteen healthy subjects carried out the protocol. The peaks and latencies of the ErrPs were analyzed for the different factors, as well as the classification performance. MAIN RESULTS ErrPs are evoked by erroneous commands decoded from neural correlates of natural movements. Analyses of variance (ANOVA) revealed that the latency and magnitude of the most characteristic ErrP peaks were significantly influenced by the speed at which the grasping was executed, but not by the type of grasp. This resulted in a greater accuracy of single-trial decoding of errors for fast movements (75.65%) compared to slow ones (68.99%). SIGNIFICANCE Understanding the effects of combining paradigms is a first step towards designing hybrid BCIs that optimize decoding accuracy and can be deployed in motor substitution and neuro-rehabilitation applications.
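An ANOVA comparison of ErrP peak latencies across execution speeds can be reproduced in outline with scipy; the latency values below are synthetic stand-ins, not the study's data.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(3)

# Synthetic peak latencies (ms) of an ErrP component for 13 subjects
# under two execution speeds; fast movements evoke earlier peaks here.
lat_fast = 300 + 20*rng.standard_normal(13)
lat_slow = 340 + 20*rng.standard_normal(13)

# One-way ANOVA on the speed factor.
F, p = f_oneway(lat_fast, lat_slow)
print(f"F = {F:.1f}, p = {p:.4f}")
```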

25 citations


Journal ArticleDOI
TL;DR: It was found that MI of grasping next to an object recruited the visuo-spatial cognition network, including posterior parietal and premotor regions, more strongly than MI of grasping an object, indicating that grasping next to an object requires additional processing resources, rendering MI more complex.

17 citations


Journal ArticleDOI
TL;DR: Novel insights are provided into cortical activation patterns during affective motor imagery; the psychological and cognitive mechanisms underlying pain experience and its hemodynamic cortical changes are investigated with fNIRS.

15 citations


Journal ArticleDOI
TL;DR: This work found online AAS + RLAF to be highly effective in improving EEG quality; RLAF is a very effective add-on tool to enable high-quality EEG in simultaneous EEG-fMRI experiments, even when online artifact reduction is necessary.
Abstract: Simultaneous electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) allow us to study the active human brain from two perspectives concurrently. However, signal-processing-based artifact reduction techniques are mandatory to obtain reasonable EEG quality in simultaneous EEG-fMRI. Current artifact reduction techniques, like average artifact subtraction (AAS), typically become less effective when artifact reduction has to be performed on-the-fly. We thus present and evaluate a new technique to improve EEG quality online. This technique, named reference layer adaptive filtering (RLAF), builds on online AAS and combines a prototype EEG cap for reference recordings of artifacts with online adaptive filtering. We found online AAS + RLAF to be highly effective in improving EEG quality. It outperformed online AAS alone, in particular in terms of the chosen performance metrics: the alpha rhythm amplitude ratio between closed and opened eyes (3–45% improvement), the signal-to-noise ratio of visual evoked potentials (VEPs) (25–63% improvement), and VEP variability (16–44% improvement). Further, we found that EEG quality after online AAS + RLAF is occasionally even comparable to the offline variant of AAS at a 3T MRI scanner. In conclusion, RLAF is a very effective add-on tool to enable high-quality EEG in simultaneous EEG-fMRI experiments, even when online artifact reduction is necessary.
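The adaptive-filtering idea behind RLAF can be illustrated with a single-tap least-mean-squares (LMS) filter that learns to map a reference-layer channel onto the artifact in a scalp channel and subtracts the estimate sample by sample. This is a didactic sketch on synthetic signals, not the published RLAF implementation; the filter length, step size and signal parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000

# Reference-layer channel: records the artifact (plus sensor noise)
# but no brain signal.
artifact = np.sin(2*np.pi*np.linspace(0, 50, n))
ref = artifact + 0.05*rng.standard_normal(n)

# Scalp channel: brain signal plus a scaled copy of the artifact.
brain = 0.3*rng.standard_normal(n)
scalp = brain + 1.7*artifact

# Single-tap LMS: learn a weight mapping ref -> scalp artifact and
# subtract the estimate sample by sample.
mu, w = 0.01, 0.0
clean = np.empty(n)
for i in range(n):
    est = w*ref[i]
    e = scalp[i] - est        # error signal = cleaned sample
    w += mu*ref[i]*e          # LMS weight update
    clean[i] = e

resid = clean[1000:] - brain[1000:]   # residual after convergence
print(f"residual artifact RMS: {np.std(resid):.3f}")
```

A real reference-layer filter would use multiple taps and channels, but the subtract-what-the-reference-predicts principle is the same.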

Proceedings ArticleDOI
15 Jan 2018
TL;DR: A framework is proposed that encompasses the detection of goal-directed movement intention, movement classification and decoding, error-related potentials detection and delivery of kinesthetic feedback and discusses future directions that could be promising to translate the proposed framework to individuals with SCI.
Abstract: Spinal cord injury (SCI) can disrupt the communication pathways between the brain and the rest of the body, restricting the ability to perform volitional movements. Neuroprostheses or robotic arms can enable individuals with SCI to move independently, improving their quality of life. The control of restorative or assistive devices is facilitated by brain-computer interfaces (BCIs), which convert brain activity into control commands. In this paper, we summarize the recent findings of our research towards the main aim to provide reliable and intuitive control. We propose a framework that encompasses the detection of goal-directed movement intention, movement classification and decoding, error-related potentials detection and delivery of kinesthetic feedback. Finally, we discuss future directions that could be promising to translate the proposed framework to individuals with SCI.

Proceedings ArticleDOI
26 Dec 2018
TL;DR: It is demonstrated that the recognition accuracy of aBCIs deteriorates when re-calibration is ruled out during long-term usage for the same subject, and a stable feature selection method is proposed to choose the most stable affective features, mitigating the accuracy deterioration and maximizing the aBCI performance in the long run.
Abstract: An affective brain-computer interface (aBCI) introduces personal affective factors into human-computer interaction, which could potentially enrich the user's experience during the interaction with a computer. However, affective neural patterns are volatile even within the same subject. To maintain satisfactory emotion recognition accuracy, state-of-the-art aBCIs mainly tailor the classifier to the subject of interest and require frequent re-calibration of the classifier. In this paper, we demonstrate that the recognition accuracy of aBCIs deteriorates when re-calibration is ruled out during long-term usage for the same subject. We then propose a stable feature selection method to choose the most stable affective features, mitigating the accuracy deterioration and maximizing the aBCI performance in the long run. We validate our method on a dataset comprising six subjects' EEG data, collected during two sessions per day for each subject on eight consecutive days.
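One way to realize the stable-feature-selection idea above is to score each feature by the consistency of its class-mean difference across sessions and keep the top-ranked ones. The scoring rule and all data below are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

rng = np.random.default_rng(5)
n_sessions, n_trials, n_feats = 8, 40, 20

# Synthetic affective features over 8 days: features 0-4 carry a
# stable class difference, features 5-9 carry a difference whose sign
# drifts across sessions, and the rest are noise.
X = rng.standard_normal((n_sessions, n_trials, n_feats))
y = np.tile(np.repeat([0, 1], n_trials // 2), (n_sessions, 1))
for s in range(n_sessions):
    X[s, y[s] == 1, 0:5] += 1.0                 # stable features
    X[s, y[s] == 1, 5:10] += (-1.0)**s          # unstable features

# Stability score: per-session class-mean difference, scored by its
# consistency (mean over sessions divided by std over sessions).
diffs = np.array([X[s, y[s] == 1].mean(0) - X[s, y[s] == 0].mean(0)
                  for s in range(n_sessions)])
stability = np.abs(diffs.mean(0)) / (diffs.std(0) + 1e-12)
top5 = np.argsort(stability)[::-1][:5]
print("most stable features:", sorted(top5.tolist()))
```

Features whose discriminative direction flips between sessions get a low score even if they separate classes well on any single day, which is exactly the failure mode the paper targets.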