1026 IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, VOL. 51, NO. 6, JUNE 2004
Noninvasive Brain-Actuated Control of a Mobile
Robot by Human EEG
José del R. Millán*, Frédéric Renkens, Josep Mouriño, Student Member, IEEE, and Wulfram Gerstner
Abstract—Brain activity recorded noninvasively is sufficient to
control a mobile robot if advanced robotics is used in combination
with asynchronous electroencephalogram (EEG) analysis and ma-
chine learning techniques. Until now brain-actuated control has
mainly relied on implanted electrodes, since EEG-based systems
have been considered too slow for controlling rapid and complex
sequences of movements. We show that two human subjects suc-
cessfully moved a robot between several rooms by mental control
only, using an EEG-based brain-machine interface that recognized
three mental states. Mental control was comparable to manual con-
trol on the same task with a performance ratio of 0.74.
Index Terms—Asynchronous protocol, brain-machine interface,
EEG, robotics.
I. INTRODUCTION
THE idea of moving robots or prosthetic devices not by manual control, but by mere “thinking” (i.e., the brain activity of human subjects) has fascinated researchers over the
last couple of years [1]–[9]. Initial demonstrations of the fea-
sibility of such an approach have relied on intracranial elec-
trodes implanted in the motor cortex of monkeys [1]–[5]. For
humans, noninvasive methods based on electroencephalogram
(EEG) signals are preferable, but they suffer from a reduced spa-
tial resolution and increased noise due to measurements on the
scalp. So far control tasks based on human EEG have been lim-
ited to simple exercises such as moving a computer cursor or
opening a hand orthosis [6]–[9]. Here we show that the signals
derived from an EEG-based brain-machine interface (BMI) are
sufficient to continuously control a miniature mobile robot in
an indoor environment with several rooms, a corridor, and doorways (Fig. 1). Moreover, experimental results obtained with two healthy volunteer subjects show that brain-actuated control of the robot is nearly as efficient as manual control. The subjects
achieved these results after a few days of training with a portable
noninvasive BMI that uses 8 scalp electrodes.
Manuscript received June 30, 2003; revised February 6, 2004. This work was
supported in part by the European ESPRIT Programme, LTR Project 28193.
The work of J. del R. Millán was supported in part by the Swiss National Sci-
ence Foundation through the National Centre of Competence in Research on
“Interactive Multimodal Information Management (IM2).” Asterisk indicates
corresponding author.
*J. del R. Millán is with the IDIAP Research Institute, CH-1920 Martigny,
Switzerland, and also with the Laboratory of Computational Neuroscience,
Swiss Federal Institute of Technology, CH-1015 Lausanne, Switzerland
(e-mail: jose.millan@idiap.ch).
F. Renkens and W. Gerstner are with the Laboratory of Computational Neuro-
science, Swiss Federal Inst. of Technology, CH-1015 Lausanne EPFL, Switzer-
land.
J. Mouriño is now with the Centre de Recerca en Enginyeria Biomédica, Uni-
versitat Politècnica de Catalunya, E-08028 Barcelona, Spain.
Digital Object Identifier 10.1109/TBME.2004.827086
Fig. 1. The mobile robot in its environment. The environment (80.0 cm × 60.0 cm) consists of several rooms along a corridor. The Khepera robot (5.7 cm diameter) is a two-wheeled vehicle. It has 3 lights on top to provide feedback to the user and 8 infrared sensors around its diameter to detect obstacles. The readings of the infrared sensors, which have limited perception ranges, are used by a multilayer perceptron to determine the probability of being in one of 6 perceptual states: open space, obstacle to left, obstacle to right, wall to left, wall to right, wall in front.
Human EEG signals represent the global activity of millions
of neurons. In standard clinical protocols, EEG signals are syn-
chronized to an external cue and averaged over tens of trials
in order to increase the signal-to-noise ratio and resolve spatial
and temporal activation patterns. For the control of mechanical
devices via an EEG-based BMI, averaging over several trials
is not possible. Single-trial analysis (also called “online” anal-
ysis) is, however, typically limited by a low channel capacity
below 0.5 b/s [8], and so EEG-based BMIs have been considered too slow for controlling rapid and complex sequences of
movements. Nevertheless, previous studies have succeeded in
recognizing a few mental states that have been used for commu-
nication [8]–[10]. One of the main reasons for such a low bit rate
is the use of synchronous protocols where EEG is time-locked
to externally paced cues repeated every 4–10 s. In this paper, we
use an asynchronous protocol and analyze the ongoing EEG ac-
tivity to determine the subjects’ mental state which can change
at any moment. This approach nearly doubles the usual bit rate
of EEG-based brain-machine interfaces.
II. METHODS
How is it possible to control a robot that has to make ac-
curate turns at precise moments in time using signals that ar-
rive at a rate of about 1 b/s? There are three key features of
our approach. First, the user’s mental states are associated with
high-level commands (e.g., “turn right at the next occasion”)
and the robot executes these commands autonomously using
the readings of its on-board sensors. Second, the subject can
0018-9294/04$20.00 © 2004 IEEE
Authorized licensed use limited to: EPFL LAUSANNE. Downloaded on April 19,2010 at 12:09:04 UTC from IEEE Xplore. Restrictions apply.

Fig. 2. Finite state automaton used for the control of the robot. Transitions between the 6 behaviors (ellipses) are triggered by 3 mental states (#1, #2, #3) and 4 perceptual states (|o: left wall, o|: right wall, ô: wall or obstacle in front, and free space). For example, the mental state #2 always causes a transition to “left turn” or, in the presence of a wall to the left, to “wall following.” A similar interpretation holds for the other mental states. For the sake of simplicity, this figure does not represent the obstacle-avoidance routine nor the full set of transitions to the behavior “stop.”
issue high-level commands at any moment. This is possible be-
cause the operation of the BMI is asynchronous and, unlike syn-
chronous approaches, does not require waiting for external cues.
The robot will continue executing a high-level command until
the next is received. Third, the robot relies on a behavior-based
controller [11] to implement the high-level commands that guar-
antee obstacle avoidance and smooth turns.
In our controller, both the user’s mental state and the robot’s perceptual state can be considered as inputs to a finite state automaton with 6 states (or behaviors). These behaviors are “forward movement,” “left turn,” “follow left wall,” “right turn,” “follow right wall,” and “stop.” Fig. 2 shows the essentials of this finite state automaton. The transitions between behaviors are determined by the 3 mental states (#1, #2, #3) of the user, supplemented by 4 perceptual states of the environment determined from the robot’s sensory readings (left wall, right wall, wall or obstacle in front, and free space). In addition, the controller uses two other perceptual states (left obstacle and right obstacle) and a few internal memory variables for obstacle avoidance and stable implementation of the different behaviors. The robot’s interpretation of a particular mental state depends on its perceptual state. Thus, in an open space, mental state #2 means “left turn”; on the other hand, if a wall is detected on the left-hand side, mental state #2 is interpreted as “follow left wall.” Similarly, depending on the perceptual state of the robot, mental state #3 can mean “right turn” or “follow right wall.” However, mental state #1 always means “move forward.” Moreover, the robot stops whenever it perceives an obstacle in front to avoid collisions. Altogether, the experimental subjects felt that our control scheme was simple and intuitive to use.
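The dispatch logic of this finite state automaton can be sketched as a small function. The behavior and state names below are illustrative labels of ours, not the authors’ implementation, and the obstacle-avoidance routine and internal memory variables are omitted, as in Fig. 2:

```python
# Sketch of the finite state automaton of Fig. 2 (simplified).
# State and behavior names are illustrative, not the paper's code.

BEHAVIORS = {"forward", "left_turn", "follow_left_wall",
             "right_turn", "follow_right_wall", "stop"}

def next_behavior(mental_state, perceptual_state):
    """Map a mental command (1, 2, 3) plus the robot's perceptual
    state to the behavior the robot should execute next."""
    if perceptual_state == "wall_or_obstacle_in_front":
        return "stop"                      # stop to avoid collisions
    if mental_state == 1:                  # #1 always means "move forward"
        return "forward"
    if mental_state == 2:                  # #2: left turn, or left wall following
        return "follow_left_wall" if perceptual_state == "left_wall" else "left_turn"
    if mental_state == 3:                  # #3: symmetric for the right side
        return "follow_right_wall" if perceptual_state == "right_wall" else "right_turn"
    raise ValueError("unknown mental state")

print(next_behavior(2, "open_space"))      # left_turn
print(next_behavior(2, "left_wall"))       # follow_left_wall
```

The same mental command thus produces different behaviors depending on context, which is what lets three mental states drive six behaviors.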
A final element is the use of appropriate feedback indicating the current mental state recognized by the embedded classifier. This is done by means of three lights on top of the robot, with the same colors as the buttons used during the training phase. The front light is green and is on when the robot receives the mental command #1. The left light is blue and is associated with the mental command #2, whereas the right light is red and is associated with the mental command #3. Thus, if the robot is following the left wall and is approaching an open door, a blue feedback indicates that the robot will turn left to continue following the left wall (and, so, it will enter the room). On the contrary, a green feedback indicates that the robot will move forward along the corridor when facing the doorway and will not enter the room. This simple feedback allows users to rapidly correct the robot’s trajectory in case of errors in the recognition of the mental states or errors in the execution of the desired behavior (due to the limitations of the robot’s sensors).
A. EEG Signals
Two healthy volunteer subjects, A and B, wore a commercial EEG cap with integrated scalp electrodes. EEG potentials, referenced to the average of the left and right ear lobes, were recorded at the 8 standard fronto-centro-parietal locations F3, F4, C3, Cz, C4, P3, Pz, and P4. The sampling rate was 128 Hz. The raw EEG potentials were first transformed by means of a surface Laplacian (SL) computed globally with a spherical spline of order 2 [12]–[14]. Every 62.5 ms, the power spectrum in the band 8–30 Hz was estimated over the last second of data. To do so, we used Welch’s periodogram algorithm on segments of 0.5 s and averaged the estimations for 3 segments with 50% overlap. This yields a frequency resolution of 2 Hz. Each 96-dimensional vector (8 channels times 12 frequency components in the band 8–30 Hz) was then normalized. The resulting EEG sample was analyzed by a statistical classifier. No artifact rejection or correction was employed.
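The feature-extraction step can be sketched as follows with NumPy only. The segmentation (three 0.5-s Hanning-windowed segments with 50% overlap over the last second) follows the text; the choice of window function and the unit-norm normalization are our assumptions, since the paper does not specify them:

```python
import numpy as np

FS = 128   # sampling rate (Hz)

def eeg_features(window):
    """96-dim feature vector from 1 s of Laplacian-filtered EEG,
    shape (8, 128): Welch-style periodogram averaged over three
    0.5-s Hanning-windowed segments with 50% overlap (2-Hz
    resolution), keeping the 8-30 Hz band, then normalizing.
    Window function and normalization are our assumptions."""
    seg_len, hop = FS // 2, FS // 4           # 0.5-s segments, 50% overlap
    win = np.hanning(seg_len)
    psds = []
    for start in range(0, window.shape[1] - seg_len + 1, hop):  # 3 segments
        seg = window[:, start:start + seg_len] * win
        psds.append(np.abs(np.fft.rfft(seg, axis=-1)) ** 2)
    psd = np.mean(psds, axis=0)               # average the 3 estimates
    freqs = np.fft.rfftfreq(seg_len, d=1.0 / FS)
    band = (freqs >= 8) & (freqs <= 30)       # 12 bins: 8, 10, ..., 30 Hz
    feats = psd[:, band].ravel()              # 8 channels x 12 bins = 96
    return feats / np.linalg.norm(feats)      # normalization choice is ours

feats = eeg_features(np.random.randn(8, FS))
print(feats.shape)    # (96,)
```

With a segment length of 64 samples at 128 Hz, the frequency bins fall every 2 Hz, so the 8–30 Hz band contributes exactly 12 components per channel, matching the 96-dimensional vector described in the text.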
B. Statistical Classifier
The different mental tasks (or states) are recognized by a classifier trained to classify EEG samples as class #1, #2, #3, or “unknown” [15]. In our statistical classifier, we have for each mental task a mixture of several Gaussian units. We think of each unit as a prototype of one of the mental tasks (or classes) to be recognized. The challenge is to find the appropriate position of each Gaussian prototype as well as an appropriate variance. We assume that the class-conditional probability density function of class $C_k$ for sample $x$ is a superposition of several Gaussians

$$p(x|C_k) = \sum_{i=1}^{N_k} \alpha_i^k \, a_i^k(x) \qquad (1)$$

where $N_k$ denotes the number of prototypes (Gaussians) of the class $C_k$, and $a_i^k(x)$ and $\alpha_i^k$ are the activation level and the amplitude of the $i$th prototype of the class $C_k$, respectively. The amplitudes are constrained so that $\sum_i \alpha_i^k = 1$. In our case, we set $\alpha_i^k = 1/N_k$. Also, in the experiments reported below, all the classes have the same number of prototypes, namely $N_k = 4$. This choice is discussed in Section III-A; see also Table IV. The activation level is given by

$$a_i^k(x) = \frac{1}{(2\pi)^{d/2}\,|\Sigma_k|^{1/2}} \exp\!\left(-\tfrac{1}{2}\,(x - \mu_i^k)^\top \Sigma_k^{-1} (x - \mu_i^k)\right) \qquad (2)$$
where $d$ is the dimensionality of the input space, $\mu_i^k$ corresponds to the center of the $i$th prototype of class $C_k$, $\Sigma_k$ is the covariance matrix of this prototype, and $|\Sigma_k|$ is the determinant of the covariance matrix. In order to reduce the number of parameters, we restrict our model to a diagonal covariance matrix $\Sigma_k$ that is common to all the prototypes of the same class. Imposing diagonality equals an assumption of independence between the features. Even though we do not believe this assumption holds for our experiments in a strict sense, it has proved to be a valid simplification of the model given the a posteriori good performance of the system. Before classification we average the class-conditional probability over $N_{avg} = 8$ consecutive samples

$$\bar{p}(x_t|C_k) = \frac{1}{N_{avg}} \sum_{n=0}^{N_{avg}-1} p(x_{t-n}|C_k) \qquad (3)$$

Finally, the posterior probability of class $C_k$ at time $t$ is

$$P(C_k|x_t) = \frac{\bar{p}(x_t|C_k)\,P(C_k)}{\sum_{l} \bar{p}(x_t|C_l)\,P(C_l)} \qquad (4)$$

where $l$ ranges over the number of classes, three in our case, and $P(C_k)$ denotes the prior probability of class $C_k$. In the following, we assume equal prior probabilities.
The response of the network for sample $x_t$ is the class with the highest posterior probability, provided that it is greater than a given probability threshold of 0.85; otherwise the response is classified as “unknown” so as to avoid making risky decisions for uncertain samples. This rejection criterion keeps the number of errors (false positives) low, which is desired since recovering from erroneous actions (e.g., the robot turning in the wrong direction) has a high cost. The choice of this probability threshold was guided by a previous receiver operating characteristic study where different subjects only carried out the initial training described before [16], and the actual value was selected based on a nonexhaustive evaluation of the performance of the subjects during the first training session.
To initialize the center of the prototypes and the diagonal covariance matrix of the class $C_k$, we run a clustering algorithm (typically, self-organizing maps [17]) to compute the position of the four prototypes per class. We then initialize the diagonal covariance matrix by setting

$$\Sigma_k^{jj} = \frac{1}{|S_k|} \sum_{n \in S_k} \bigl(x_n^j - \mu_{i^*}^{k,j}\bigr)^2 \qquad (5)$$

where $S_k$ denotes the set of indexes of samples belonging to the class $C_k$, $|S_k|$ is the cardinality of this set, $i^*$ is the nearest prototype of this class to the sample $x_n$, and $\mu_{i^*}^k$ is its center. The index $j$ denotes the $j$th element of a vector, and $\Sigma_k^{jj}$ the $j$th diagonal element of a matrix.

During learning we improve these initial estimations iteratively by stochastic gradient descent so as to minimize the mean square error $E = \frac{1}{2}\sum_j (y_j - t_j)^2$, where $t_j$ is the $j$th component of the target vector in the form 1-of-c; e.g., the target vector for class #3 is coded as (0,0,1). We compute $y_j = P(C_j|x)$ from (3) and (4) with $N_{avg} = 1$; i.e., each sample is used separately. Taking the gradient of the error yields

$$\frac{\partial E}{\partial \mu_i^k} = -\frac{\alpha_i^k\,a_i^k(x)}{\sum_l p(x|C_l)}\,\Sigma_k^{-1}(x - \mu_i^k)\left[(t_k - y_k) + \sum_j (y_j - t_j)\,y_j\right] \qquad (6)$$

In order to simplify the algorithm we neglect the second term in the square brackets so that the final update rule is

$$\mu_i^k \leftarrow \mu_i^k + \eta\,(t_k - y_k)\,\frac{\alpha_i^k\,a_i^k(x)}{\sum_l p(x|C_l)}\,\Sigma_k^{-1}(x - \mu_i^k) \qquad (7)$$

where $\eta$ is the learning rate. The interpretation of this rule is that, during training, the centers of the Gaussians are pulled toward the EEG samples of the mental task they represent and pushed away from EEG samples of other tasks.

Finally, after every iteration over the training set, we estimate again the new value of $\Sigma_k$ using (5).
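One stochastic step of the simplified update rule (7), as we read it (each center is pulled toward samples of its own mental task and pushed away from samples of the other tasks), can be sketched in NumPy. The shapes, learning rate, and demo values below are illustrative, not the paper's settings:

```python
import numpy as np

def update_centers(x, target, mu, var, alpha, eta=0.05):
    """One stochastic step of the simplified rule (7). Shapes:
    x (d,), target (K,) one-of-c, mu (K, Nk, d) prototype centers,
    var (K, d) diagonal covariances shared per class, alpha = 1/Nk.
    Returns updated centers and the posterior y (equal priors)."""
    K, Nk, d = mu.shape
    diff = x - mu                                               # (K, Nk, d)
    norm = np.sqrt((2 * np.pi) ** d * np.prod(var, axis=-1))    # (K,)
    act = np.exp(-0.5 * np.sum(diff ** 2 / var[:, None, :], axis=-1)) / norm[:, None]
    p_class = alpha * act.sum(axis=-1)                          # eq. (1)
    y = p_class / p_class.sum()                                 # eq. (4), equal priors
    coeff = (target - y)[:, None] * alpha * act / p_class.sum() # (t_k - y_k) factor
    mu = mu + eta * coeff[..., None] * diff / var[:, None, :]   # rule (7)
    return mu, y

# Demo: a sample of class #1 pulls class-#1 prototypes toward it.
rng = np.random.default_rng(1)
mu0 = rng.normal(size=(3, 4, 2))          # 3 classes x 4 prototypes, d = 2
var = np.ones((3, 2))                     # diagonal covariances
x = np.array([2.0, 0.0])
target = np.array([1.0, 0.0, 0.0])        # sample belongs to class #1
mu1, y = update_centers(x, target, mu0, var, alpha=0.25)
```

After the step, every prototype of class #1 is strictly closer to the sample, while prototypes of classes #2 and #3 have moved away, matching the interpretation given in the text.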
The brain-machine interface responds every 0.5 s. First, during each frame of 62.5 ms it computes the class-conditional probability for each class, i.e., the mixture of Gaussians, (1) and (2). Second, it averages the class-conditional probabilities over 8 consecutive samples, (3). Third, it estimates the posterior probability of each class from the average class-conditional probabilities using Bayes’ formula, (4). Finally, it compares the highest posterior probability with the threshold value of 0.85.
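A minimal sketch of this decision procedure, assuming the class-conditional probabilities of the last 8 frames have already been computed from (1) and (2):

```python
import numpy as np

THRESHOLD = 0.85   # rejection threshold on the posterior
N_CLASSES = 3

def bmi_response(cond_probs, priors=None):
    """One decision of the interface every 0.5 s. `cond_probs` holds
    the class-conditional probabilities p(x|C_k) of the last 8 frames
    of 62.5 ms, shape (8, 3). Returns 1, 2, 3, or "unknown"."""
    if priors is None:
        priors = np.full(N_CLASSES, 1.0 / N_CLASSES)   # equal priors
    avg = cond_probs.mean(axis=0)                      # eq. (3): average frames
    post = avg * priors / np.sum(avg * priors)         # eq. (4): Bayes' formula
    k = int(np.argmax(post))
    return k + 1 if post[k] > THRESHOLD else "unknown" # rejection criterion

print(bmi_response(np.tile([0.9, 0.05, 0.05], (8, 1))))   # 1
print(bmi_response(np.tile([0.4, 0.35, 0.25], (8, 1))))   # unknown
```

The second call illustrates the rejection criterion: no class clears the 0.85 threshold, so the interface answers “unknown” and the robot keeps executing its current behavior.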
At the end of training, errors and “unknown” responses are below 5% and 30%, respectively. These are online performances obtained on a new session using the classifier trained with data of previous sessions. The theoretical channel capacity of the interface is, hence, above 1 b/s (operation mode I). In addition, the interface can also operate in another mode (operation mode II) where classification errors are further reduced by requiring that two consecutive periods of 0.5 s give the same classification response. In this mode II, errors and “unknown” responses are below 2% and 40%, respectively, and so the theoretical channel capacity has a lower bound of approximately 0.85 b/s. The channel capacity is estimated using the equation

$$C = (1 - p_u)\left[\log_2 N + (1 - p_e)\log_2(1 - p_e) + p_e \log_2\frac{p_e}{N-1}\right] \qquad (8)$$

where $N$ is the number of mental classes, $p_u$ is the probability of an “unknown” response, and $p_e$ is the probability of error. Equation (8) is finally divided by the response interval of the brain interface (0.5 s for mode I and 1 s for mode II) to get the maximum bit rate that could be transmitted theoretically.
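A small script reproduces the quoted figures under a Wolpaw-style reading of the capacity formula, with $N$ the number of classes, $p_u$ the rejection rate, and $p_e$ the error rate; the exact published form of (8) should be checked against the paper, so treat this expression as an assumption:

```python
from math import log2

def channel_capacity(n_classes, p_unknown, p_error, interval_s):
    """Theoretical bit rate: bits per response discounted by the
    rejection rate, divided by the response interval. Wolpaw-style
    form consistent with the figures quoted in the text; treat the
    exact expression as our reconstruction."""
    bits = log2(n_classes)
    if 0 < p_error < 1:
        bits += (1 - p_error) * log2(1 - p_error) \
              + p_error * log2(p_error / (n_classes - 1))
    return (1 - p_unknown) * bits / interval_s

# Mode I: <5% errors, <30% unknown, one response every 0.5 s.
print(round(channel_capacity(3, 0.30, 0.05, 0.5), 2))   # 1.75 (above 1 b/s)
# Mode II: <2% errors, <40% unknown, one response every 1 s.
print(round(channel_capacity(3, 0.40, 0.02, 1.0), 2))   # 0.85
```

With the mode II operating point, this form yields approximately 0.85 b/s, matching the lower bound stated in the text, and the mode I point comfortably exceeds 1 b/s.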
The actual bit rate in a control experiment is, however,
lower for two reasons. The first one is that the operation of
the brain-actuated robot does not require frequent switches
between mental tasks. The second reason is that, although
subjects can rapidly switch between tasks [see an example in
Fig. 3(A)], they cannot maintain the maximum speed for a long
time due to loss of attention.
Fig. 3. Responses of the brain-machine interface while subject B was
mentally controlling the robot during the first trial of the second set of
experiments. (A) Plot over a period of 16 s (32 decision steps) where the
subject delivered a rapid and accurate sequence of mental commands. (B) Plot
where the responses exhibit intermediate values and slow transitions between
mental commands.
C. Brain-Machine Interface Protocol
During an initial training period of 5 or 3 days, respectively, the two subjects learned to control 3 mental tasks of their choice with the interface operating in mode I. Neither subject had previous experience with meditation or specific mental training. The subjects tried the following mental tasks: “relax,” “imagination of left and right hand (or arm) movements,” “cube rotation,” “subtraction,” and “word association.” The tasks consisted of getting relaxed, imagining repetitive self-paced movements of the limb, visualizing a spinning cube, performing successive elementary subtractions by a fixed number, and concatenating related words. All tasks (including “relax”) were performed with eyes open. After a short evaluation, the experimental subjects A and B chose to work with the tasks “relax-left-cube” and “relax-left-right,” respectively. In the sequel, we will refer to these mental tasks as #1, #2, and #3 (i.e., “relax” is #1, “left” is #2, and “cube” or “right” is #3).
Each day, subjects participated in four consecutive training sessions of about 5 min, separated by breaks of 5–10 min. During each training session subjects switched randomly every 10–15 s between the three tasks. Subjects received feedback through three colored buttons on a computer screen. The green button flashed if the mental state #1 was recognized, the blue button was associated with state #2, and the red button with state #3. After each training session the statistical classifier was optimized offline. After this initial training, subjects learned to mentally control the mobile robot for 2 days with the interface operating in mode II. The results reported here were obtained at the end of the second day of work with the robot. During this training period, the user and the BMI engaged in a mutual learning process where they were coupled and adapted to each other.
A feature of the statistical classifier embedded in our brain-machine interface is the use of a probability rejection criterion, which also helps to deal with idle states. In an asynchronous protocol, idle states appear during the operation of a brain-actuated device while the subject does not want the interface to carry out any new action. Although the statistical classifier is not explicitly trained to recognize those idle states, it can process them adequately by responding “unknown.” It is worth noting, however, that our subjects reported that the task of steering the robot between rooms was so engaging that they preferred to emit mental commands continuously rather than to go through idle states. Actually, one of our subjects reported that when he tried to stay in an idle state, he had a tendency to anticipate the next behavior the robot should execute and, instinctively, concentrated on the corresponding mental state, thus delivering a wrong mental command.
D. Mobile Robot
The mobile robot was a small Khepera (Fig. 1) that closely mimics a motorized wheelchair. The robot moved at a maximum speed of one third of its diameter per second, similar to the speed of a wheelchair in an office building. The Khepera robot is a two-wheeled vehicle. It has 8 infrared sensors around its diameter to detect obstacles. The sensors have a limited perception range, which would make the recognition of the different environmental situations difficult if the raw readings were used directly. To overcome this limitation, we implemented a multilayer perceptron that maps the 8 raw infrared sensory readings into 6 classes of environmental states, or the robot’s perceptual states; i.e., wall to the left, wall to the right, obstacle to the left, obstacle to the right, wall or obstacle in front, and free space. The mapping was optimized on an independent set of experiments where the robot was put at various locations in the environment.
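A forward pass of such a perceptron can be sketched as follows. The hidden-layer size and the (random, untrained) weights are placeholders of ours, not the trained network from the paper; in the actual system the weights were fit on the independent set of experiments mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative sizes: 8 infrared readings in, one hidden layer,
# 6 perceptual-state probabilities out. Weights are random
# placeholders, not the paper's trained parameters.
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)
W2, b2 = rng.normal(size=(6, 16)), np.zeros(6)

STATES = ["wall_left", "wall_right", "obstacle_left",
          "obstacle_right", "front_blocked", "free_space"]

def perceptual_state(ir_readings):
    """Forward pass: tanh hidden layer, softmax output giving the
    probability of each of the 6 perceptual states."""
    h = np.tanh(W1 @ ir_readings + b1)
    z = W2 @ h + b2
    p = np.exp(z - z.max())     # numerically stable softmax
    p /= p.sum()
    return STATES[int(np.argmax(p))], p

state, p = perceptual_state(rng.uniform(size=8))
```

Classifying into a handful of discrete perceptual states, rather than feeding raw readings to the controller, is what lets the finite state automaton of Fig. 2 stay small.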
III. RESULTS AND DISCUSSION
The task was to drive the robot through different rooms in
a house-like environment (Fig. 1). During training the subject
had to drive the robot mentally from a starting position to a first
target room; once the robot arrived, a second target room was
drawn at random and so on. At the end of the second day of
training, the trajectories were qualitatively rather good and the
robot never failed to visit the target room.
Fig. 3(A) shows the responses of the brain-machine interface over a continuous period of 16 s where subject B switched quickly between mental tasks to make the robot navigate between two rooms. The three lines give the probabilities that the current EEG sample should be classified as #1, #2, and #3, respectively. If none of the three probabilities is above the decision threshold of 0.85 (dotted horizontal line), the response is “unknown.” Operating in mode II, the robot executed a new behavior only after two consecutive identical responses (e.g., #1-#1, #2-#2, or #3-#3). Initially the robot moved forward (#1) to exit a room. Then, at step 9, it turned right (#3) to pass the doorway into the corridor. Note that at decision steps 12 and 13 the responses were “unknown,” since all lines stayed below the decision threshold, and so the robot continued executing its current behavior (#3). At step 15, the robot
moved forward (#1) and one second afterwards it switched to left wall following (#2) until it entered a second room, where it moved forward (#1) before it finally turned right (#3).

It is remarkable that Fig. 3(A) shows rapid transitions between mental tasks. There are two reasons for this. First, Fig. 3(A) corresponds to a difficult segment of the robot trajectory where many commands were required. It is, therefore, not representative of the responses of the brain interface over the whole experiment, because normally the subject does not need to switch between mental tasks so quickly. But because the subject wanted to steer the robot so as to move in a relatively complex way, he needed to deliver a rapid and accurate sequence of mental commands, and the brain interface enabled him to do so. In other segments of the trajectory, precise commands were not required and the situation looks different: the experimental results reported in Section III-A indicate that the classifier outputs have intermediate values (that correspond to “unknown” responses) a significant number of times. Fig. 3(B) shows a plot of responses with intermediate values and slow transitions between mental commands during an easy segment of the robot trajectory. In this case, the robot was following a left wall for a few seconds and, after it had crossed a doorway, it moved forward. Note that even if the classifier responses were not always the correct ones (there are several “unknown” responses and even an error at step 7), the robot still performed the correct behavior because of the control strategy implemented by our finite state automaton. A second reason for the relatively clear responses is that averaging class-conditional probabilities before using Bayes’ rule [see (3) and (4)] helps to stabilize the responses of the classifier.
Although the subject brought the robot to each of the desired rooms, there were a few occasions where the robot did not follow the optimal trajectory. We may, therefore, wonder how efficient the mental control of the robot really is. In order to evaluate quantitatively the performance of the brain-actuated robot, subjects A and B also carried out a second set of experiments. In a given trial, the robot had to travel from a starting room to a target room and also visit an intermediate room. The rooms and their order were selected at random. First, the subject made the robot visit the desired sequence of rooms by mental control. In a later session, the subject drove the robot along the same sequence of rooms by manual control. In this case, the subject used the same controller described above but, instead of sending mental commands (#1, #2, #3) to the robot, he simply pressed one of three keys. This procedure allowed us to compare mental and manual control for a system that is identical in all other aspects. In addition, the manual trajectory should be quite close to the optimal path that can be generated with the current controller. It is worth noting that the reason why the subject controlled the robot mentally first and only afterwards manually was to avoid any learning process that could facilitate mental control. Table I gives the times in seconds for three different trials for the two subjects. The first column gives, for each subject, the three mental tasks chosen. For each trial, the table indicates the time required for mental control, manual control, and also the ratio between the two.
Mental control was significantly worse than manual control, but still the ratio of operating times was as high as 0.74 on average. Thus, mental control is worse than manual control, but by less than a factor of 1.5.

TABLE I: TIME IN SECONDS FOR THREE DIFFERENT TRIALS FOR SUBJECTS A AND B

TABLE II: PERFORMANCES OVER THE CONSECUTIVE TRAINING SESSIONS OF THE FIRST DAY. FOR EACH SESSION, PERFORMANCE IS MEASURED IN TERMS OF ERRORS (ERR., LEFT COLUMN) AND UNKNOWN RESPONSES (REJ., RIGHT COLUMN)
Although users were emitting mental commands continuously, the theoretical minimum number of control commands needed to achieve a typical task under manual control is 13. However, in order to reach the target as fast as possible, subjects do emit more control commands than the minimum (almost twice as many). Also, in the case of mental control the number of control commands is significantly larger due to the less accurate control of the robot. On average, subjects switched between mental commands every 5.0 s.
A. Performance of the Statistical Classifiers
Tables II–IV give some additional details about the performance of the statistical classifiers for the two subjects. Table II shows the learning curves of the 2 subjects during the first day of training (4 training sessions in total, with the brain interface operating in mode I). The classifiers are trained offline with data of a given training session and tested online on the next session. Note that the very first training session was used to gather the initial EEG samples to train the statistical classifiers, so users did not receive any feedback at this time. A clear improvement in performance can be observed, in terms of errors and “unknown” responses. We note that already in session four (i.e., after 3 iterations of training) the system has excellent performance in mode II, corresponding to a theoretical bit rate of 1.02 b/s for subject A and 0.91 b/s for subject B.
Table III shows the average distance between the different
prototypes of a given mental task and between prototypes of
different tasks for the two subjects. The prototypes are those
learned at the end of the first training period, before using the
mobile robot. Interclass distances are always significantly larger
than their corresponding intraclass distances, which is a clear
indication that the learned prototypes are modeling relatively

Figures
Citations
More filters
Journal ArticleDOI

A review of classification algorithms for EEG-based brain–computer interfaces

TL;DR: This paper compares classification algorithms used to design brain-computer interface (BCI) systems based on electroencephalography (EEG) in terms of performance and provides guidelines to choose the suitable classification algorithm(s) for a specific BCI.
Journal ArticleDOI

Brain Computer Interfaces, a Review

TL;DR: The state-of-the-art of BCIs are reviewed, looking at the different steps that form a standard BCI: signal acquisition, preprocessing or signal enhancement, feature extraction, classification and the control interface.
Journal ArticleDOI

A Review of Classification Algorithms for EEG-based Brain-Computer Interfaces: A 10-year Update

TL;DR: A comprehensive overview of the modern classification algorithms used in EEG-based BCIs is provided, the principles of these methods and guidelines on when and how to use them are presented, and a number of challenges to further advance EEG classification in BCI are identified.
Journal ArticleDOI

Control strategies for active lower extremity prosthetics and orthotics: a review

TL;DR: This work reviews the state-of-the-art techniques for controlling portable active lower limb prosthetic and orthotic P/O devices in the context of locomotive activities of daily living (ADL), and considers how these can be interfaced with the user’s sensory-motor control system.
Journal ArticleDOI

A survey of signal processing algorithms in brain-computer interfaces based on electrical brain signals.

TL;DR: This work presents the first such comprehensive survey of all BCI designs using electrical signal recordings published prior to January 2006, and asks what are the key signal processing components of a BCI, and what signal processing algorithms have been used in BCIs.
References
More filters
Journal ArticleDOI

Brain-computer interfaces for communication and control.

TL;DR: With adequate recognition and effective engagement of all issues, BCI systems could eventually provide an important new communication and control option for those with motor disabilities and might also give those without disabilities a supplementary control channel or a control channel useful in special circumstances.
Book

An Behavior-based Robotics

TL;DR: Following a discussion of the relevant biological and psychological models of behavior, the author covers the use of knowledge and learning in autonomous robots, behavior-based and hybrid robot architectures, modular perception, robot colonies, and future trends in robot intelligence.
Book

Behavior-Based Robotics

TL;DR: Chapter topics: whence behaviour?; animal behaviour; robot behaviour; behaviour-based architectures; representational issues for behavioural systems; hybrid deliberative/reactive architectures; perceptual basis for behaviour-based control; adaptive behaviour; social behaviour; fringe robotics (beyond behaviour).
Journal ArticleDOI

Spherical splines for scalp potential and current density mapping

TL;DR: Description of mapping methods using spherical splines, both to interpolate scalp potentials (SPs) and to approximate scalp current densities (SCDs) with greater accuracy in areas with few electrodes.
Journal ArticleDOI

Motor imagery and direct brain-computer communication

TL;DR: At this time, a tetraplegic patient is able to operate an EEG-based control of a hand orthosis with nearly 100% classification accuracy by mental imagination of specific motor commands.
Frequently Asked Questions (13)
Q1. What contributions have the authors mentioned in the paper "Noninvasive brain-actuated control of a mobile robot by human EEG"?

The authors show that two human subjects successfully moved a robot between several rooms by mental control only, using an EEG-based brain-machine interface that recognized three mental states. 

Their results open the possibility for physically disabled people to use a portable EEG-based brain-machine interface for controlling wheelchairs and prosthetic limbs. However, the authors will need to scale up the number of recognizable mental states to provide a more flexible and natural control of these robotic devices. This could be done by estimating local field potentials of small cortical areas from the scalp potentials recorded with a sufficiently high number of electrodes (32, 64, or more) [21]. The Gaussian classifier embedded in the BMI would work upon the local field potentials of selected cortical areas instead of using EEG features.

To initialize the centers of the prototypes and the diagonal covariance matrix of each class, the authors run a clustering algorithm (typically, self-organizing maps [17]) to compute the positions of the four prototypes per class.
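As a concrete illustration of that initialization step, the sketch below uses a plain k-means pass as a stand-in for the self-organizing map the paper cites; the function name, array shapes, and iteration count are assumptions for illustration, not details from the paper.

```python
import numpy as np

def init_prototypes(samples, n_prototypes=4, n_iter=10, seed=0):
    """Initialize Gaussian prototype centers for one class via k-means,
    plus a diagonal covariance estimated from that class's samples."""
    rng = np.random.default_rng(seed)
    # Pick random class samples as initial centers.
    centers = samples[rng.choice(len(samples), n_prototypes, replace=False)]
    for _ in range(n_iter):
        # Assign each sample to its nearest center (Euclidean distance).
        d = np.linalg.norm(samples[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned samples.
        for k in range(n_prototypes):
            if np.any(labels == k):
                centers[k] = samples[labels == k].mean(axis=0)
    diag_cov = samples.var(axis=0)  # diagonal covariance of the class
    return centers, diag_cov
```

A self-organizing map would additionally impose a neighborhood topology on the prototypes; for seeding four centers per class, the simpler pass above captures the idea.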

The authors assume that the class-conditional probability density function of class Ck for sample x is a superposition of several Gaussians: p(x|Ck) = sum_{i=1}^{Nk} a_k^i phi_k^i(x) (1), where Nk denotes the number of prototypes (Gaussians) of class Ck, and phi_k^i and a_k^i are the activation level and the amplitude of the ith prototype of class Ck, respectively.
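The mixture density just described can be evaluated directly; the sketch below assumes a single diagonal covariance shared by the prototypes of a class, as the initialization description suggests, and uses illustrative names not taken from the paper.

```python
import numpy as np

def class_conditional(x, centers, diag_cov, amplitudes):
    """p(x|Ck): amplitude-weighted sum of diagonal-covariance Gaussian
    prototype activations, per the mixture form of eq. (1)."""
    diff = x - centers                       # (n_prototypes, n_features)
    expo = -0.5 * np.sum(diff**2 / diag_cov, axis=1)
    norm = np.sqrt((2 * np.pi) ** x.size * np.prod(diag_cov))
    activations = np.exp(expo) / norm        # phi_k^i(x) for each prototype
    return np.dot(amplitudes, activations)   # sum_i a_k^i * phi_k^i(x)
```

With a single prototype at the origin, unit variances, and amplitude 1, this reduces to the standard multivariate normal density at x.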

One of their subjects reported that when he tried to stay in an idle state, he had a tendency to anticipate the next behavior the robot should execute and, instinctively, concentrated on the corresponding mental state, thus delivering a wrong mental command.

A second reason for the relatively clear responses is that averaging class-conditioned probabilities before using Bayes’ rule [see (3) and (4)] helps to stabilize the responses of the classifier. 

This rejection criterion keeps the number of errors (false positives) low, which is desired since recovering from erroneous actions (e.g., robot turning in the wrong direction) has a high cost. 
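Both steps, averaging the class-conditioned probabilities before applying Bayes' rule and rejecting low-confidence decisions as "unknown", can be sketched as follows; the threshold value, priors, and function names are assumptions for illustration, not figures from the paper.

```python
import numpy as np

def decide(prob_sequence, priors, threshold=0.8):
    """Average class-conditional probabilities over consecutive EEG
    samples, apply Bayes' rule, and reject uncertain decisions."""
    avg = np.mean(prob_sequence, axis=0)   # average p(x|Ck) per class
    post = avg * priors
    post = post / post.sum()               # Bayes' rule -> p(Ck|x)
    winner = int(np.argmax(post))
    if post[winner] < threshold:
        return None                        # "unknown": no command emitted
    return winner
```

Rejecting uncertain samples trades response rate for a low false-positive rate, which matters here because a wrong robot action is costly to undo.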

Note that even if the classifier responses were not always correct (there are several "unknown" responses and even an error at step 7), the robot still performed the correct behavior because of the control strategy implemented by their finite state automaton.
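A minimal sketch of such a finite state automaton is shown below; the behavior names and the transition table are hypothetical (the excerpt does not spell them out), but it illustrates how undefined transitions and "unknown" classifier responses leave the current behavior running, making the control robust to occasional misclassifications.

```python
# Hypothetical transition table: (current behavior, mental command) -> next behavior.
TRANSITIONS = {
    ("forward", "left"): "turn_left",
    ("forward", "right"): "turn_right",
    ("turn_left", "forward"): "forward",
    ("turn_right", "forward"): "forward",
}

def step(state, command):
    """Return the next behavior; keep the current one when the classifier
    rejected the sample (command is None) or the transition is undefined."""
    if command is None:
        return state
    return TRANSITIONS.get((state, command), state)
```

Because the robot keeps executing its current behavior between accepted commands, a stray "unknown" response simply means the automaton stays put rather than acting on noise.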

After this initial training, subjects learned to mentally control the mobile robot for 2 days with the interface operating in mode II.

In order to evaluate quantitatively the performance of the brain-actuated robot, subjects “A” and “B” also carried out a second set of experiments. 

In order to reach the target as fast as possible, subjects do emit more control commands than the minimum (almost twice as many).

As additional evidence that subjects are not using EMG activity (which is broad-band), the authors find that when machine-learning techniques are applied to select the relevant features that best differentiate the mental tasks, classifier performance improves with only a small proportion of features, and these features are not grouped in a cluster [20].

In the case of mental control, the number of control commands is significantly larger due to the less accurate control of the robot.