Topic

Brain–computer interface

About: Brain–computer interface is a research topic. Over the lifetime, 3,673 publications have been published within this topic, receiving 90,472 citations. The topic is also known as: mind–machine interface and direct neural interface.


Papers
Journal ArticleDOI
TL;DR: This paper compares classification algorithms used to design brain–computer interface (BCI) systems based on electroencephalography (EEG) in terms of performance and provides guidelines for choosing the most suitable classification algorithm(s) for a specific BCI.
Abstract: In this paper we review classification algorithms used to design brain–computer interface (BCI) systems based on electroencephalography (EEG). We briefly present the commonly employed algorithms and describe their critical properties. Based on the literature, we compare them in terms of performance and provide guidelines for choosing the most suitable classification algorithm(s) for a specific BCI.

2,519 citations
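The review above compares EEG classifiers in terms of performance. As a minimal, hedged sketch of such a comparison (not the paper's own benchmark), the snippet below cross-validates two classifiers commonly discussed in the BCI literature, LDA and a linear SVM, on synthetic stand-in band-power features; every dimension and data value is an assumption made purely for illustration.

```python
# Hedged sketch: cross-validating two classifiers often discussed in BCI
# reviews (LDA and a linear SVM) on synthetic "band-power" features.
# The data below are random stand-ins, not real EEG.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features = 200, 8          # e.g. 8 band-power features per trial (assumed)
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 2, size=n_trials)  # two mental states (binary BCI task)
X[y == 1] += 0.5                       # inject a small separable class difference

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("linear SVM", SVC(kernel="linear"))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f} accuracy")
```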

Journal ArticleDOI
17 May 2012-Nature
TL;DR: The results demonstrate the feasibility for people with tetraplegia, years after injury to the central nervous system, to recreate useful multidimensional control of complex devices directly from a small sample of neural signals.
Abstract: Two people with long-standing tetraplegia use neural interface system-based control of a robotic arm to perform three-dimensional reach and grasp movements. John Donoghue and colleagues have previously demonstrated that people with tetraplegia can learn to use neural signals from the motor cortex to control a computer cursor. Work from another lab has also shown that monkeys can learn to use such signals to feed themselves with a robotic arm. Now, Donoghue and colleagues have advanced the technology to a level at which two people with long-standing paralysis — a 58-year-old woman and a 66-year-old man — are able to use a neural interface to direct a robotic arm to reach for and grasp objects. One subject was able to learn to pick up and drink from a bottle using a device implanted 5 years earlier, demonstrating not only that subjects can use the brain–machine interface, but also that it has potential longevity.

Paralysis following spinal cord injury, brainstem stroke, amyotrophic lateral sclerosis and other disorders can disconnect the brain from the body, eliminating the ability to perform volitional movements. A neural interface system [1-5] could restore mobility and independence for people with paralysis by translating neuronal activity directly into control signals for assistive devices. We have previously shown that people with long-standing tetraplegia can use a neural interface system to move and click a computer cursor and to control physical devices [6-8]. Able-bodied monkeys have used a neural interface system to control a robotic arm [9], but it is unknown whether people with profound upper extremity paralysis or limb loss could use cortical neuronal ensemble signals to direct useful arm actions. Here we demonstrate the ability of two people with long-standing tetraplegia to use neural interface system-based control of a robotic arm to perform three-dimensional reach and grasp movements. Participants controlled the arm and hand over a broad space without explicit training, using signals decoded from a small, local population of motor cortex (MI) neurons recorded from a 96-channel microelectrode array. One of the study participants, implanted with the sensor 5 years earlier, also used a robotic arm to drink coffee from a bottle. Although robotic reach and grasp actions were not as fast or accurate as those of an able-bodied person, our results demonstrate the feasibility for people with tetraplegia, years after injury to the central nervous system, to recreate useful multidimensional control of complex devices directly from a small sample of neural signals.

2,181 citations
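The study above decodes reach-and-grasp commands from a small population of motor cortex neurons recorded with a 96-channel array. The sketch below is only an illustration of the general idea, mapping binned firing rates to 3-D velocity, and is a deliberately simpler stand-in than the study's actual decoder; the tuning model, noise levels, and data are all synthetic assumptions.

```python
# Hedged sketch: linear mapping from binned firing rates (96 channels) to
# 3-D hand velocity via ridge regression. Synthetic data only; this is not
# the decoder used in the study above.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_bins, n_channels = 2000, 96
true_tuning = rng.normal(size=(n_channels, 3))   # preferred-direction-like weights (assumed)
velocity = rng.normal(size=(n_bins, 3))          # hypothetical 3-D velocities
rates = velocity @ true_tuning.T + rng.normal(scale=2.0, size=(n_bins, n_channels))

X_train, X_test, y_train, y_test = train_test_split(rates, velocity, random_state=0)
decoder = Ridge(alpha=1.0).fit(X_train, y_train)
print("held-out R^2:", round(decoder.score(X_test, y_test), 2))
```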

Journal ArticleDOI
01 Jul 2001
TL;DR: At this time, a tetraplegic patient is able to operate an EEG-based control of a hand orthosis with nearly 100% classification accuracy by mental imagination of specific motor commands.
Abstract: Motor imagery can modify the neuronal activity in the primary sensorimotor areas in a very similar way as observable with a real executed movement. One part of EEG-based brain–computer interfaces (BCI) is based on the recording and classification of circumscribed and transient EEG changes during different types of motor imagery, such as imagination of left-hand, right-hand, or foot movement. Features such as band power or adaptive autoregressive parameters are extracted either from bipolar EEG recordings overlaying sensorimotor areas or from an array of electrodes located over central and neighboring areas. For the classification of the features, linear discriminant analysis and neural networks are used. Characteristic for the Graz BCI is that a classifier is set up in a learning session and updated after one or more sessions with online feedback using the procedure of "rapid prototyping." As a result, a discrimination of two brain states (e.g., left- versus right-hand movement imagination) can be reached within only a few days of training. At this time, a tetraplegic patient is able to operate an EEG-based control of a hand orthosis with nearly 100% classification accuracy by mental imagination of specific motor commands.

1,638 citations
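The Graz approach above pairs band-power (or adaptive autoregressive) features with linear discriminant analysis to separate two imagery classes. Below is a minimal sketch of that feature/classifier pairing under stated assumptions: synthetic two-channel 'EEG', an assumed 250 Hz sampling rate, and mu-band (8-12 Hz) log band power. It does not reproduce the Graz data, montage, or rapid-prototyping procedure.

```python
# Hedged sketch: mu-band (8-12 Hz) log band power from two bipolar channels,
# classified with linear discriminant analysis. The EEG here is synthetic noise
# with a crude, injected class asymmetry.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

fs = 250                                            # sampling rate in Hz (assumed)
rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 120, 2, 3 * fs    # 3-second imagery trials
eeg = rng.normal(size=(n_trials, n_channels, n_samples))
labels = rng.integers(0, 2, size=n_trials)          # left- vs right-hand imagery
eeg[labels == 1, 0] *= 1.3                          # simulated power asymmetry on channel 0

b, a = butter(4, [8, 12], btype="bandpass", fs=fs)  # mu-band filter
mu = filtfilt(b, a, eeg, axis=-1)
band_power = np.log(np.var(mu, axis=-1))            # log band power per trial and channel

lda = LinearDiscriminantAnalysis()
print("CV accuracy:", cross_val_score(lda, band_power, labels, cv=5).mean().round(2))
```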

Journal ArticleDOI
TL;DR: It is shown that a noninvasive BCI that uses scalp-recorded electroencephalographic activity and an adaptive algorithm can provide humans, including people with spinal cord injuries, with multidimensional point-to-point movement control that falls within the range of that reported with invasive methods in monkeys.
Abstract: Brain-computer interfaces (BCIs) can provide communication and control to people who are totally paralyzed. BCIs can use noninvasive or invasive methods for recording the brain signals that convey the user's commands. Whereas noninvasive BCIs are already in use for simple applications, it has been widely assumed that only invasive BCIs, which use electrodes implanted in the brain, can provide multidimensional movement control of a robotic arm or a neuroprosthesis. We now show that a noninvasive BCI that uses scalp-recorded electroencephalographic activity and an adaptive algorithm can provide humans, including people with spinal cord injuries, with multidimensional point-to-point movement control that falls within the range of that reported with invasive methods in monkeys. In movement time, precision, and accuracy, the results are comparable to those with invasive BCIs. The adaptive algorithm used in this noninvasive BCI identifies and focuses on the electroencephalographic features that the person is best able to control and encourages further improvement in that control. The results suggest that people with severe motor disabilities could use brain signals to operate a robotic arm or a neuroprosthesis without needing to have electrodes implanted in their brains.

1,493 citations
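The key element described above is an adaptive algorithm that identifies and emphasizes the EEG features a user controls best. The sketch below illustrates that idea only (it is not the authors' algorithm): a least-mean-squares style update re-weights synthetic spectral features trial by trial, so the features that track the user's intent accumulate influence over cursor velocity. All names, values, and data are assumptions.

```python
# Hedged sketch of adaptive feature weighting: an LMS-style update that lets
# the features a "user" reliably modulates gain weight over time.
# All data are synthetic stand-ins; this is not the study's actual algorithm.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_features = 500, 6                     # e.g. mu/beta amplitudes at a few electrodes
features = rng.normal(size=(n_trials, n_features))
intent = rng.choice([-1.0, 1.0], size=n_trials)   # target direction on each trial
features[:, 0] += 0.8 * intent                    # pretend only features 0 and 3
features[:, 3] += 0.4 * intent                    # carry the user's intent

weights = np.zeros(n_features)
lr = 0.01                                         # adaptation rate (assumed)
for x, target in zip(features, intent):
    velocity = weights @ x                        # decoded cursor velocity for this trial
    weights += lr * (target - velocity) * x       # LMS update toward the target

print("learned weights:", np.round(weights, 2))   # features 0 and 3 dominate
```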

Journal ArticleDOI
31 Jan 2012-Sensors
TL;DR: The state-of-the-art of BCIs are reviewed, looking at the different steps that form a standard BCI: signal acquisition, preprocessing or signal enhancement, feature extraction, classification and the control interface.
Abstract: A brain–computer interface (BCI) is a hardware and software communications system that permits cerebral activity alone to control computers or external devices. The immediate goal of BCI research is to provide communications capabilities to severely disabled people who are totally paralyzed or 'locked in' by neurological or neuromuscular disorders, such as amyotrophic lateral sclerosis, brain stem stroke, or spinal cord injury. Here, we review the state-of-the-art of BCIs, looking at the different steps that form a standard BCI: signal acquisition, preprocessing or signal enhancement, feature extraction, classification and the control interface. We discuss their advantages, drawbacks, and latest advances, and we survey the numerous technologies reported in the scientific literature to design each step of a BCI. First, the review examines the neuroimaging modalities used in the signal acquisition step, each of which monitors a different functional brain activity such as electrical, magnetic or metabolic activity. Second, the review discusses different electrophysiological control signals that determine user intentions, which can be detected in brain activity. Third, the review includes some techniques used in the signal enhancement step to deal with artifacts in the control signals and improve performance. Fourth, the review studies some mathematical algorithms used in the feature extraction and classification steps, which translate the information in the control signals into commands that operate a computer or other device. Finally, the review provides an overview of various BCI applications that control a range of devices.

1,407 citations
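The review above decomposes a standard BCI into signal acquisition, preprocessing or enhancement, feature extraction, classification, and a control interface. The sketch below wires toy stand-ins for each of those stages together; every function name, the 250 Hz sampling rate, and the synthetic data are assumptions made purely for illustration.

```python
# Hedged sketch of the five-stage pipeline enumerated in the review:
# acquisition -> preprocessing -> feature extraction -> classification -> control.
# Every stage is a toy stand-in operating on synthetic "EEG".
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250  # assumed sampling rate in Hz

def acquire(n_trials=100, n_channels=8, seconds=2):
    """Signal acquisition stand-in: returns synthetic trials and labels."""
    rng = np.random.default_rng(0)
    eeg = rng.normal(size=(n_trials, n_channels, seconds * FS))
    labels = rng.integers(0, 2, size=n_trials)
    eeg[labels == 1, :2] *= 1.2          # weak simulated class effect on two channels
    return eeg, labels

def preprocess(eeg, low=8, high=30):
    """Signal enhancement stand-in: band-pass filter to the mu/beta range."""
    b, a = butter(4, [low, high], btype="bandpass", fs=FS)
    return filtfilt(b, a, eeg, axis=-1)

def extract_features(eeg):
    """Feature extraction stand-in: log variance (band power) per channel."""
    return np.log(np.var(eeg, axis=-1))

def control_interface(command):
    """Control interface stand-in: map the decoded class to a device action."""
    return {0: "rest", 1: "move"}[int(command)]

eeg, labels = acquire()
features = extract_features(preprocess(eeg))
clf = LinearDiscriminantAnalysis().fit(features[:80], labels[:80])   # classification stage
print([control_interface(c) for c in clf.predict(features[80:])])
```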


Network Information
Related Topics (5)
Artificial neural network: 207K papers, 4.5M citations (80% related)
Feature extraction: 111.8K papers, 2.1M citations (79% related)
Support vector machine: 73.6K papers, 1.7M citations (79% related)
Image segmentation: 79.6K papers, 1.8M citations (78% related)
Feature (computer vision): 128.2K papers, 1.7M citations (77% related)
Performance Metrics

No. of papers in the topic in previous years:

Year    Papers
2025    1
2023    909
2022    1,964
2021    293
2020    295
2019    325