
A Synergetic Brain-Machine Interfacing Paradigm for Multi-DOF Robot Control

TL;DR: The user needs only to think about the end-point movement of the robot arm, which allows simultaneous multijoint control by BMI, and the support vector machine-based decoder designed in this paper adapts to the changing mental state of the user.
Abstract: This paper proposes a novel brain-machine interfacing (BMI) paradigm for control of a multijoint redundant robot system. Here, the user would determine the direction of end-point movement of a 3-degrees of freedom (DOF) robot arm using motor imagery electroencephalography signal with co-adaptive decoder (adaptivity between the user and the decoder) while a synergetic motor learning algorithm manages a peripheral redundancy in multi-DOF joints toward energy optimality through tacit learning. As in human motor control, a torque control paradigm is employed for the robot to be adaptive to the given physical environment. The dynamic condition of the robot arm is taken into consideration by the learning algorithm. Thus, the user needs only to think about the end-point movement of the robot arm, which allows simultaneous multijoint control by BMI. The support vector machine-based decoder designed in this paper is adaptive to the changing mental state of the user. Online experiments reveal that the users successfully reach their targets with an average decoder accuracy of over 75% in different end-point load conditions.

Summary (4 min read)

Introduction

  • Depending on the nature of the experiment, the acquired EEG is found to have specific signal characteristics.
  • Study on the co-adaptivity of the user with the BMI system is an active area of research and to date, there is not much literature available on auto-adaptive and autocalibrated approaches.
  • As a result, such control techniques do not provide a practical solution and remain far from natural human limb coordination; ideally, a control framework would let users drive a BMI-driven robot like a third arm, guided by their intention.
  • This section also provides information on the experimental setup.

A. Synergetic BMI Control Scheme

  • It is known that human beings do not perform the joint actions of compound movements consciously.
  • But on gradual and repetitive trials of the same movement, the cerebellum begins to take control of the task by recognizing the relation to each segment of consciously initiated movement.
  • The aim of BMI control of a prosthetic or robotic limb is to allow seamless human-like movement, but to date such systems incur joint redundancy issues during movement tasks.
  • First, the decoder/classifier is designed to continuously adapt to the changing brain signal, while the subject simultaneously observes the movement of the robot.
  • Because of the two adaptive functions, the subject is free to control the robot arm without the burden of managing complex joint coordination.

B. Experiment Description

  • On the first day, the subjects perform the tasks in two separate sessions.
  • The data from the first session are used to train the decoder, while the data from the second session are used for offline testing of the trained decoder.
  • Fig. 3 shows the generic structure of the visual cue.
  • Here, the online task required the subject to guide the robot end-point toward the target based on the instructions from the operator.

C. Co-Adaptive EEG-BMI System

  • The BMI system employs wavelet transforms [40], [41] for feature extraction, Laplacian EigenMaps [42] to determine the relevant features and an SVM classifier [43] to decode between the two mental states.
  • The filtered signals are then processed using discrete wavelet transform (DWT) [41] to derive the signature features related to left- and right-MI.
  • The authors have determined the optimal dimensionality of relevant features for each subject from their validation results.
  • The aim of the SVM classifiers is to determine the separating hyperplane with the maximum margin.
  • 2) Input N − L datapoints to the trained decoder and determine their respective posterior probabilities (P).
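The chain of steps in these bullets can be sketched end to end. This is a minimal illustration only: the Haar wavelet, four decomposition levels, the sub-band energy feature, and the RBF-kernel SVC are our assumptions rather than the authors' exact choices, and the Laplacian Eigenmaps reduction step is omitted.

```python
import numpy as np
from sklearn.svm import SVC

# Sketch of the decoding chain: wavelet sub-band energies as features,
# fed to an SVM with probability outputs (posteriors are needed later for
# the co-adaptation step). Haar wavelet and 4 levels are assumptions.
def haar_dwt(x, level=4):
    """Return [detail_1, ..., detail_level, approx] Haar sub-bands of x."""
    bands = []
    a = np.asarray(x, dtype=float)
    for _ in range(level):
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)   # detail coefficients
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # approximation coefficients
        bands.append(d)
    bands.append(a)
    return bands

def dwt_features(trial, level=4):
    """Concatenate per-channel sub-band energies for one (channels x samples) trial."""
    return np.array([np.sum(b ** 2) for ch in trial for b in haar_dwt(ch, level)])

# Toy usage with random stand-ins for left/right-MI trials (14 channels, 128 samples).
rng = np.random.default_rng(0)
X = np.stack([dwt_features(rng.standard_normal((14, 128))) for _ in range(20)])
y = np.array([0, 1] * 10)
decoder = SVC(probability=True).fit(X, y)
posteriors = decoder.predict_proba(X)
```

Because the Haar transform is orthonormal, the concatenated sub-band energies preserve the total signal energy per channel, which makes the feature scaling well behaved across trials.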

D. Peripheric Motor Learning of the Dynamic Environment

  • Tacit learning employs the command signal accumulated during repetitive interaction with the environment to develop an appropriate behavior for the system.
  • 2) The FB force error is mapped into the joint torque space by using the Jacobian of the robot arm, and the motor-command error works as a supervising signal.
  • Thus, each joint has a local torque control to generate the specified joint torque for the robot.
  • The control algorithms are executed on a master PC with the interface of analog-to-digital and digital-to-analog converters from the encoders and to the motors, respectively.
  • The configuration allows the joints to be controlled independently and thus it can be presumed as a modular structure present within cerebellar pathways.
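The Jacobian mapping in these bullets can be sketched for a planar 3-DOF arm in a quasi-static form. The link lengths, gains, and the pure-integrator form of the accumulated (tacit) command are illustrative assumptions, not the authors' controller.

```python
import numpy as np

# Quasi-static sketch: the task-space force error is mapped to joint torques
# through the arm Jacobian, and a slowly accumulated term absorbs the
# steady-state load, in the spirit of tacit learning. Gains are hypothetical.
L = np.array([0.3, 0.25, 0.15])          # link lengths of a planar 3-DOF arm [m]

def jacobian(q):
    """2x3 Jacobian of the planar end-point w.r.t. joint angles."""
    s = np.cumsum(q)                      # absolute link angles
    J = np.zeros((2, 3))
    for j in range(3):
        J[0, j] = -np.sum(L[j:] * np.sin(s[j:]))   # dx/dq_j
        J[1, j] = np.sum(L[j:] * np.cos(s[j:]))    # dy/dq_j
    return J

def control_step(q, f_err, tau_tacit, kp=5.0, eta=0.05):
    """One torque update: feedback mapping plus tacit accumulation."""
    tau_fb = kp * jacobian(q).T @ f_err   # supervising signal in joint space
    tau_tacit = tau_tacit + eta * tau_fb  # accumulated command (tacit term)
    return tau_fb + tau_tacit, tau_tacit

q = np.array([0.4, 0.6, 0.2])
tau, tau_tacit = control_step(q, f_err=np.array([0.0, -1.0]), tau_tacit=np.zeros(3))
print(tau.shape)  # (3,)
```

Each joint receives its own torque command, consistent with the bullet above describing local, independently controlled joints.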

E. BMI Evaluation Metrics

  • To evaluate the performance of the BMI system during training and validation, the authors have employed four quantitative measures.
  • Sensitivity, for example, is the measure of how correctly a classifier has classified the positive class.
  • Thus, the authors can say the ROC curve is a plot of the classification results ranked from the most positive to the most negative classification, and the resultant AUC is widely used as a classification metric.
  • The authors have quantified the performance of the online task of moving the robot arm using left and right hand MI by the following metrics: 1) accuracy and 2) time taken, i.e., the time taken to process and decode the incoming EEG signal and transmit it remotely to the robot using SSH protocol.
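The four offline metrics named above, together with the rank-based AUC, can be computed directly from labels and decoder scores; the variable names and the decision threshold here are ours.

```python
import numpy as np

# Accuracy, sensitivity, specificity, and AUC from a confusion matrix.
# The AUC uses the rank statistic, equivalent to the area under the ROC
# curve obtained by sweeping the threshold from most positive to most
# negative score, as described in the text.
def bmi_metrics(y_true, scores, thr=0.5):
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    y_pred = (scores >= thr).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    acc = (tp + tn) / len(y_true)
    sens = tp / (tp + fn)                 # true-positive rate
    spec = tn / (tn + fp)                 # true-negative rate
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    auc = np.mean([(p > n) + 0.5 * (p == n) for p in pos for n in neg])
    return acc, sens, spec, auc

# Perfect separation: all four metrics equal 1.0.
metrics = bmi_metrics([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9])
```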

III. RESULTS

  • This section begins with the detection of ERD/ERS signals from the EEG acquired from the Emotiv system.
  • Then, it presents the results on the performance of the BMI system during training and offline testing of the decoder, performance of the peripheral motor controller during its learning stage and the complete performance of BMI system (which includes the trained BMI decoder and the trained peripheral motor controller) during online experimentation.
  • The offline processing and online experimentation were executed in MATLAB under Windows 8.1.

A. Detection of ERD/ERS Patterns

  • The Emotiv acquisition system, used in this paper, does not have any channels directly over the primary motor cortex, but it has channels, FC5, FC6, P7, and P8, in the vicinity of the region.
  • Thus, for MI studies, one can use these channels to detect the ERD/ERS waveform.
  • Hurtado-Rincon et al. [52] and Dharmasena et al. [53] have successfully classified between left- and right-MI. (Table II reports validation of the decoder on a new dataset of 40 trials during no adaptation and adaptation.)
  • Fok et al. [54] have acquired brain signals related to movement to successfully drive a powered orthosis tasked at opening and closing of the patient’s hand.
  • As noted from the plots, the right side of the brain is more active during left hand MI and vice-versa.

B. BMI Adaptive Decoder Training and Validation

  • In Table I, the authors have shown the average of the classification metric [i.e., accuracy, sensitivity, specificity, and AUC (in %)] for all the nine subjects.
  • The adaptation result suggests an increase of average accuracy, sensitivity, specificity, and AUC by 9.03%, 7.54%, 8.77%, and 6.37%, respectively, from its nonadaptive counterpart.
  • The p-values observed in Table I suggest that all subjects reject the null hypothesis at the 5% significance level; thus, it is statistically shown that the adaptive decoder is more accurate than the nonadaptive one.
  • The average sensitivity, specificity, and AUC being more than 85% suggest that the decoder can detect 85% of the positive and negative classes without adversely affecting each other.
  • The positive result shown during validation allowed us to use the decoder during online testing of the BMI system.

C. Learning of the Synergetic Motor Controller

  • For this experiment, the authors have used two different loads of 300 and 600 g as unknown loads for the robot.
  • This means the load is not known a priori to the motor controller.
  • Fig. 8(a) and (b) illustrate the trajectory of the robot arm during its learning for both loads.
  • The time-sequential transition of the end-point of the robot in both figures is illustrated using a color map which changes with the progress of time.
  • Fig. 8(c) shows the shoulder-elbow phase map for the different weights.

D. Online Performance of the Simultaneous Multi-DOF Robot Control by Co-Adaptive BMI

  • Following the training of robot controller using synergetic motor learning algorithm, the subject is ready to move the robot arm by his/her motor intention.
  • The decoder decodes the brain signal to generate the corresponding control command necessary to move the robot in either up or down direction.
  • As seen from the figure, the robot requires a number of steps (or trials of MI extraction) to reach the target.
  • This observation is also validated by the joint angle variance metric shown in Table III.
  • This observation regarding minimal shoulder and wrist usage for heavy object manipulation is well matched to the situation in human motor control.

IV. DISCUSSION

  • The authors describe some co-adaptive approaches implemented by other researchers.
  • In another interesting work, Kus et al. [57] developed a BCI system which followed an asynchronous mode of operation, automatic selection of parameters based on initial calibration and incremental update of the classifier parameters from FB.
  • The participants performed right hand, left hand and foot MI based on instructions from a visual cue with an accuracy of 74.84%.
  • The authors have employed this form of adaptation to the subject to make the task more realistic and practical.
  • The advantage of a separate motor learning control scheme, even for 3-DOF joint control, is that it allows the subject to focus on the lower-dimensional end-point control of the robot, while the proprioceptive information from the robot is processed inside the peripheral motor controller, which adapts accordingly during simultaneous multijoint control.

V. CONCLUSION

  • The authors have proposed a new BMI paradigm which integrates an MI EEG to extract the target intention with adaptive decoder for cortical signals and a synergetic motor learning control to cope with the peripheral control of a multijoint redundant robot arm with environmental dynamics adaptation capability.
  • The proposed method allowed the BMI-controlled robot to systematically employ different joint usage depending on the given payload through the learning process.
  • To the best of the authors' knowledge, this is the first system which incorporates a dual adaptive nature at both the cortical level and the peripheral motor control level in BMI.
  • The positive result, thus obtained, has opened a door to proceed forward in this research, but it was verified with a simple task as a starting point.
  • To improve the speed and robustness of the BCI control algorithm, the authors would design a self-paced experiment with the provision of an error FB through EEG [15].


HAL Id: lirmm-01347425
https://hal-lirmm.ccsd.cnrs.fr/lirmm-01347425
Submitted on 21 Jul 2016
A Synergetic Brain-Machine Interfacing Paradigm for Multi-DOF Robot Control
Saugat Bhattacharyya, Shingo Shimoda, Mitsuhiro Hayashibe

To cite this version:
Saugat Bhattacharyya, Shingo Shimoda, Mitsuhiro Hayashibe. A Synergetic Brain-Machine Interfacing Paradigm for Multi-DOF Robot Control. IEEE Transactions on Systems, Man, and Cybernetics: Systems, Institute of Electrical and Electronics Engineers, 2016, 46 (7), pp. 957-968. 10.1109/TSMC.2016.2560532. lirmm-01347425.

IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS: SYSTEMS, VOL. 46, NO. 7, JULY 2016 957
A Synergetic Brain-Machine Interfacing Paradigm
for Multi-DOF Robot Control
Saugat Bhattacharyya, Shingo Shimoda, and Mitsuhiro Hayashibe, Senior Member, IEEE
Abstract—This paper proposes a novel brain-machine
interfacing (BMI) paradigm for control of a multijoint redun-
dant robot system. Here, the user would determine the direction
of end-point movement of a 3-degrees of freedom (DOF) robot
arm using motor imagery electroencephalography signal with co-
adaptive decoder (adaptivity between the user and the decoder)
while a synergetic motor learning algorithm manages a periph-
eral redundancy in multi-DOF joints toward energy optimality
through tacit learning. As in human motor control, torque con-
trol paradigm is employed for a robot to be adaptive to the given
physical environment. The dynamic condition of the robot arm is
taken into consideration by the learning algorithm. Thus, the user
needs to only think about the end-point movement of the robot
arm, which allows simultaneous multijoints control by BMI. The
support vector machine-based decoder designed in this paper is
adaptive to the changing mental state of the user. Online experiments reveal that the users successfully reach their targets with
an average decoder accuracy of over 75% in different end-point
load conditions.
Index Terms—Brain-machine interfacing (BMI), co-adaptive
decoder, joint redundancy, multijoint robot, synergetic learning
control, tacit learning.
I. INTRODUCTION
AS OF today, brain–machine interfacing (BMI) [or brain-computer interfacing (BCI)] is one of the fastest growing
areas of research that provides a unique course of communi-
cation between a human and a machine (or device) without
any neuro-muscular intervention [1]. BMI was initially con-
ceived to provide rehabilitative and assistive solutions [2], [3]
to patients suffering from neuromuscular degenerative diseases, such as amyotrophic lateral sclerosis, cervical spinal injury, paralysis, or amputation [4]. But in recent years, potential applications in the fields of communication [5], [6], military use [7], virtual reality [8], [9], and gaming [10], [11] have widened its relevance across domains other than rehabilitation.

Manuscript received October 22, 2015; revised December 17, 2015; accepted March 7, 2016. Date of publication May 26, 2016; date of current version June 14, 2016. This work was supported by the Erasmus Mundus Action 2 project for Lot 11-Svaagata.eu:India through the European Commission (ref. nr. Agreement Number: 2012-2648/001-001-EM Action 2-Partnerships). This paper was recommended by Associate Editor Z. Li.
S. Bhattacharyya is with the INRIA-LIRMM, University of Montpellier, Montpellier 34095, France (e-mail: saugatbhattacharyya@live.com).
S. Shimoda is with the Brain Science Institute-Toyota Collaboration Center, RIKEN, Nagoya 2271-130, Japan.
M. Hayashibe is with the INRIA-LIRMM, University of Montpellier, Montpellier, France, and also with the Brain Science Institute-Toyota Collaboration Center, RIKEN, Nagoya, Japan (e-mail: hayashibe@lirmm.fr).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TSMC.2016.2560532
A BMI system relies on tools from digital signal pro-
cessing and machine learning to identify and predict the
cognitive state of the user from their corresponding brain
signals [4]. The brain signals are recorded either inva-
sively or noninvasively [12]. Although invasive means of
acquisition provides better performance in terms of accu-
racy and precision, noninvasive means are widely used
by most BMI/BCI researchers for their simplicity in user
interface. The most widely used noninvasive recording tech-
nique is electroencephalography (EEG), where the signals are
recorded by electrodes placed on the scalp, because it is
inexpensive, portable, easily available and has high temporal
resolution [4], [13].
Depending on the nature of the experiment, the acquired
EEG is found to have specific signal characteristics. Signals
acquired during movement-related planning, imagination, or execution [motor imagery (MI)] are identified by a decrease
in spatio-spectral power [termed as event-related desynchro-
nization (ERD)] followed by an increase in power [termed as
event-related synchronization (ERS)] [14], [15]. Researchers
have widely used the changing patterns of ERD/ERS pat-
terns for different MI tasks [such as left (or right) hand
MI] to generate commands necessary to drive a peripheral
device such as mobile [16], [17] or humanoid robots [18],
wheelchairs [19] and navigation in virtual reality [8], and
gaming [20] environment.
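The ERD/ERS patterns described above are conventionally quantified as a percentage power change relative to a pre-cue baseline. A minimal sketch follows; the simple mean-square power estimator and window lengths are our assumptions.

```python
import numpy as np

# Standard ERD/ERS quantification relative to a pre-cue baseline:
# ERD% = 100 * (P_event - P_baseline) / P_baseline, with band power
# estimated here as the mean squared amplitude of the band-pass-filtered
# signal segment.
def erd_percent(baseline, event):
    """Negative values indicate desynchronization (ERD); positive, ERS."""
    p_base = np.mean(np.asarray(baseline, dtype=float) ** 2)
    p_event = np.mean(np.asarray(event, dtype=float) ** 2)
    return 100.0 * (p_event - p_base) / p_base

# Toy check: halving the amplitude quarters the power, i.e., an ERD of -75%.
print(erd_percent(np.ones(256), 0.5 * np.ones(256)))  # -75.0
```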
Even after such advances of EEG-BMI in control appli-
cations, it still has not been used in real world applications
(except simple discrete selection task) because of certain issues
inherent in the signal. EEG signals are nonstationary, non-
linear, non-Gaussian, and highly variable in nature [1], [15],
because the recordings on different days or different times
of the same day exhibit high variability of the signal. This
phenomenon usually occurs due to shifts in electrode positions
between sessions or changes in the electrochemical proper-
ties of the electrodes. Another issue that arises from EEG is
the noisy and low resolution signals recorded from the scalp,
which in actuality is the nonlinear superposition of electri-
cal activity of a large population of neurons. This masks the
underlying neural pattern of interest and restricts their detec-
tion. Even the current mental state of the user may affect
the quality of the signal [1], [21]. To address these prob-
lems, a practical BMI system should continuously track the
changing EEG patterns of the user in order to obtain a good
performance.
2168-2216 © 2016 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.

Study on the co-adaptivity of the user with the BMI sys-
tem is an active area of research and to date, there is not
much literature available on auto-adaptive and autocalibrated
approaches. In current co-adaptive approaches [1], [21], [22],
the system is initially trained to previous data, which is used
for initial training of the decoder. Then, data collected from
subsequent sessions are directly included into the system,
where the decoder is retrained and updated. The performance
of the user is displayed visually after the task, which allows
the user to train him/herself. DiGiovanna et al. [23] used rein-
forcement learning to develop an intelligent BMI control agent
that works in synergy with the BMI user and both the sys-
tem co-adapts and continuously learns from the environment.
The model was tested on rats and each subject co-adapted
with BMI control system significantly to control a prosthe-
sis. Recently, Bryan et al. [24] have devised a new approach
to BCI, which employs partially observable Markov decision
processes to handle the uncertainty of the EEG and achieve co-
adaptivity. Their approach allowed the system to make online
improvements to its behavior by adjusting itself to the user’s
changing circumstances.
Till now, a discussion on co-adaptive learning based on both the user and the BMI system has been provided. Based on these approaches, it is possible to control a robot (or prosthetic) limb using MI BMI commands (e.g., left hand and right hand).
But control of a multijoint robot involves redundancy management issues in simultaneous multijoint control, which is an open problem in this area. To control a multijoint robot using the current control paradigms of BMI, one may need to control each individual joint separately, in a step-by-step manner, to complete a task [15], [19], [25]. Such movement of the robot
arm is not similar to human motor control, and is tedious
to the user. As a result, such control techniques do not provide a practical solution and remain far from natural human limb coordination; ideally, a control framework would allow users to drive a BMI-driven robot like a third arm, guided by their intention. By following a human-like synergetic motor
control framework, one may obtain optimal BMI control solu-
tions in multidegrees of freedom (DOF) arm which is similar
to the case in actual human motor control [26]. For instance,
when we try to get a glass of water, we imagine mainly about
an end-point task itself in reaching motion than imagining
about individual joint angle trajectory. It is more natural to
imagine such end-point intentions and it can be obtained even
through noninvasive BMI using superficial cortical-level signals. Redundant peripheral joint control should be managed at a different level, as it is normally managed at the cerebellar level in human motor control.
In this paper, we propose a novel BMI paradigm, which is a
combination of a co-adaptive EEG decoder, which adapts the
decoder to the current mental state of the user while he/she
observes a feedback (FB) [22] (in this paper, the motion of
the robot), and synergetic motor learning scheme [26], to con-
trol the movements of a multijoint redundant robot driven by
torque control. The synergetic learning controller takes on a
role of functionality of cerebellum to optimize the periph-
eral motor coordination taking into account the given dynamic
environment. A torque control scheme is preferred in humanoid
robotics as it provides environmental compliance for human-
robot interaction [27]. Regarding motor intention in cortical
level, the decoder distinguishes between left and right MI EEG
to move a 3-DOF robot up and down toward a given target. As
a result, by blending the cortical signal level learning paradigm
of the BMI-user system and the peripheral motor learning
paradigm, we have attempted to simplify the BMI control of a
multijoint robot in a fashion similar to the situation where we
control a human limb naturally. As this is the first trial and report of this new BMI paradigm on a redundant robot, a relatively simple
task focusing on the joint level handling is employed in this
paper. However, this paper first deals with tridirectional adap-
tation in BMI. In addition to the so-called bilateral adaptation
between human physiological signal changes and its adaptive
decoding, the third adaptation in peripheral motor control is
integrated to deal with redundant arm coordination.
The rest of this paper is organized as follows: Section II
describes the synergetic BMI control paradigm proposed in
this paper. This section also provides information on the exper-
imental setup. The results of the experiments are presented
and discussed in Section III. Section IV presents a compara-
tive discussion of this paper followed by concluding remarks
on Section V.
II. PRINCIPLES AND METHODOLOGY
A. Synergetic BMI Control Scheme
It is known that human beings do not perform the joint
actions of compound movements consciously. Movements are
generally controlled by a subconscious mental subroutine and
thus, can be considered as automatic in nature [28]. While
learning a new movement, the mental activity shifts from the
foreground mental routine to the background subconscious
one. Thach [28] and Wolpert et al. [29] suggested that training
of skilled movements in the human brain starts as a conscious
act in the cerebral cortex. But on gradual and repetitive trials of
the same movement, the cerebellum begins to take control of
the task by recognizing the relation to each segment of consciously initiated movement. Finally, the cerebellum attains
control over the entire process and by a mere trigger from
the cerebrum, it can execute the entire movement without any
conscious effort [28]–[30]. The multijoint human motor system must handle complex interaction torques, which are compensated by predictive motor control located within the
cerebellar cortex. Sensory information on the early phases of
the movement enters the cerebellum and triggers the memory
related to the optimal joint torque. As a result, motor learn-
ing and control are executed flawlessly and are easily adapted
to the ever-changing environment and newly generated goals.
The aim of BMI control of a prosthetic or robotic limb is to
allow seamless human-like movement but to date, they incur
joint redundancy issues during movement tasks. To solve this
problem, one needs to include a learning controller to manage
peripheral drive for a multijoint system to allow an optimal
human-like movement of the limb.
Fig. 1. BMI paradigm employed in this paper for simultaneous control of a multi-DOF robot using an adaptive left-right MI decoder and synergetic motor learning for peripheral joint redundancy management. The black dots indicate the targets for the subjects in the vertical plane.

Several models have been formulated to deal with the redundancy issues in the past, and such models are generally defined as "minimum X," where X is jerk [31], torque change [32], motor command [33], or energy consumption [34]. Researchers basically assume the use of a physical inverse dynamical model [35] or approximation-based models [36]. Hayashibe and Shimoda [26] have proposed an
optimal method for multijoint redundancy management using
tacit learning scheme. This technique optimizes the multijoint
problem without any prior knowledge of the system dynamics
by using the task space error. Phenomenological optimal solu-
tions can be generated without using so-called mathematical
optimization process. In this paper, we have adopted this syn-
ergetic learning control technique for the peripheral multijoint
management of a 3-DOF robotic arm.
The details of the online BMI control paradigm, shown in
Fig. 1, are as follows. The participant observes the current
position of the end-effector of the robot and attempts to gen-
erate the required MI signal. The process involves filtering and
extracting features from the raw EEG signal. Then, the fea-
tures are fed as inputs to the decoder to identify the MI state
(left/right MI). The decoded output is then transmitted to the
robot as commands to move it up or down in the vertical plane.
Prior to the onset of the online task, the robot is trained to its
dynamic environment using a tacit learning approach [37] for a
fixed period of 70 s. In this paper, the load carried by the robot
is treated as the environmental change along with segmental
inertial configuration changes. As a result, the movement of
the joints of the robot adapts to the changing load. To make the
decoder co-adaptable to the changing brain state of the sub-
ject, we measure the posterior probability (P) of each incoming
event. If P fulfills the required conditions of the system then it
is included in the training dataset with a higher weight than the
older data, while the oldest data is removed from the dataset
and the decoder is retrained online. If P does not fulfill the
conditions, then we reject the incoming data and the decoder
does not need to be retrained. This step is included to change
the learning of the decoder with the current mental state of
the subject.
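The retraining rule described in this paragraph can be sketched as follows. The gate value, the weighting scheme, and the use of the predicted label for confident trials are illustrative assumptions, not the authors' exact conditions on P.

```python
import numpy as np
from sklearn.svm import SVC

# Sketch of the co-adaptation rule: the posterior probability P of each
# incoming trial is checked against a confidence gate; confident trials
# replace the oldest trial in the training window (with a larger sample
# weight) and the decoder is retrained online, otherwise the trial is
# rejected and the decoder is left unchanged.
class CoAdaptiveDecoder:
    def __init__(self, X0, y0, gate=0.7, new_weight=2.0):
        self.X = [list(x) for x in X0]
        self.y = list(y0)
        self.w = [1.0] * len(self.y)        # older data keep unit weight
        self.gate, self.new_weight = gate, new_weight
        self._fit()

    def _fit(self):
        self.clf = SVC(probability=True)
        self.clf.fit(np.array(self.X), np.array(self.y),
                     sample_weight=np.array(self.w))

    def update(self, x):
        """Decode one trial; adapt the training window if P passes the gate."""
        probs = self.clf.predict_proba([x])[0]
        label, p = int(np.argmax(probs)), float(np.max(probs))
        if p >= self.gate:                   # confident trial: include, retrain
            for buf, item in ((self.X, list(x)), (self.y, label),
                              (self.w, self.new_weight)):
                buf.pop(0)                   # drop the oldest entry
                buf.append(item)             # append the new, higher-weighted one
            self._fit()
        return label, p

# Toy usage on two well-separated clusters standing in for left/right MI.
rng = np.random.default_rng(2)
X0 = np.vstack([rng.normal(0.0, 0.3, (10, 2)), rng.normal(2.0, 0.3, (10, 2))])
y0 = [0] * 10 + [1] * 10
decoder = CoAdaptiveDecoder(X0, y0)
label, p = decoder.update(np.array([2.0, 2.0]))  # window length stays fixed
```

The fixed-length window with removal of the oldest trial mirrors the description above, so the decoder tracks the current mental state rather than the whole session history.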
Fig. 2. Standard 10–20 representation of the electrodes present in an Emotiv
headset.
Our proposed scheme adapts at different stages. First,
the decoder/ classifier is designed to continuously adapt to
the changing brain signal, while the subject simultaneously
observes the movement of the robot. Second, the peripheral
motor controller is adaptable to the given physical environment. Because of these two adaptive functions, the subject is free to control the robot arm without the burden of managing complex joint coordination. Hence, here we have proposed a
tridirectional form of adaptation (user-decoder-robot).
B. Experiment Description
1) Subjects and Data Acquisition: The EEG in this paper is
recorded using a 14 channel Emotiv Epoc neuro-headset with
a sampling rate of 128 Hz and an in-built band-pass filter of
0.2–45 Hz. The electrodes: AF3, F7, F3, FC5, T7, P7, O1, O2,
P8, T8, FC6, F4, F8, and AF4, are arranged on the basis of
the standard 10–20 system (Fig. 2) [38]. Nine healthy subjects
with no prior experience on BMI (six male and three female,
one left-handed and eight right-handed), participated in this
experiment over a period of two days. On the first day, the subjects perform the tasks in two separate sessions: the data from the first session are used to train the decoder, while those from the second session are used for offline testing of the trained decoder. On the second day, the subjects would
control the movement of a robot arm in real-time based on
the decoder trained on the previous day. Since we are dealing with human subjects, we abide by the norms of the Helsinki Declaration of 1975, revised in 2000.
Prior to the experiments, the subjects are informed about the
purpose of the experiment and the tasks they have to perform.
2) Task and Stimuli: The experiment designed for this
paper is divided into two phases: 1) offline and 2) online.
In the offline phase, we determine the parameters of the sup-
port vector machines (SVMs) decoder for each subject. We
perform an offline validation of the adaptivity of the decoder
prior to employing it for the online phase.
The training and offline testing sessions comprise instructing
the subjects through a sequence of visual stimuli to imagine the movement of the corresponding MI task, that is, left- or right-hand movement. Fig. 3 shows the generic structure of
the visual cue. First, a blank screen is projected to the sub-
ject for 20 s, which provides the baseline of the EEG. Then,
a fixation cross '+' is displayed on screen for 1 s, which is an
indicator to the subject to get ready for the task. Next, the

Fig. 3. Timing diagram of a single trial to train the subject in left and right
hand MI (as indicated by left and right arrow, respectively).
instructions are provided to the subject for 3 s in the form of
arrows. According to the direction of the arrow, the subject
imagines either left or right hand movement. Following the
instructions, a blank screen is again displayed for 1.5–3.5 s.
It allows the subject to relax during the task and removes the
possibility of overlap between two mental states. Each
task is repeated 40 times for the training session and 20 times
for the offline testing session.
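The timing structure above can be laid out as a simple cue-schedule generator. The uniform draw for the 1.5–3.5 s rest and the shuffling of cue directions are assumptions consistent with, but not stated in, the text.

```python
import random

# Generator for the cue schedule described above: a 20 s baseline once,
# then, per trial, a 1 s fixation, a 3 s left/right arrow, and a 1.5-3.5 s
# blank rest. Returns a list of (event_name, duration_in_seconds) pairs.
def make_session(n_trials=40, seed=0):
    rnd = random.Random(seed)
    cues = ["left", "right"] * (n_trials // 2)   # balanced classes
    rnd.shuffle(cues)
    schedule = [("baseline", 20.0)]
    for cue in cues:
        schedule.append(("fixation", 1.0))
        schedule.append((cue, 3.0))
        schedule.append(("rest", rnd.uniform(1.5, 3.5)))
    return schedule

session = make_session()
print(len(session))  # 1 baseline entry + 3 entries per trial x 40 trials = 121
```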
During the online tasks, the subjects are not shown any
visual cues but are provided with audio cues from the operator.
The operator would instruct the subject to move toward the top
or bottom target (shown as black dots in Fig. 3). The subject
would then generate the necessary MI commands to move the
robot toward the target. The sequence of the instructions is random in nature, and he/she would take several discrete steps
(MI trials) to reach the target. The control commands required
to move the robot are as follows: 1) left MI indicates upward
movement of the robot and 2) right MI indicates downward
movement of the robot. The subject observes the movement
of the robot arm, which is considered as FB to the subject. If
the decoder makes an error by producing the wrong output,
then the subject on observing the error would attempt to fix it
by generating the right brain signal.
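The discrete-step control mapping can be sketched as follows; the class labels, unit step size, and target height are hypothetical, chosen only to illustrate how decoded MI classes drive the end-point.

```python
# Hypothetical mapping from decoded MI class to a discrete end-point
# step; the labels, step size, and target heights are illustrative.
MI_TO_STEP = {
    "left": +1,   # left-hand MI -> end-point moves up
    "right": -1,  # right-hand MI -> end-point moves down
}

def apply_command(position, decoded_class, step=1):
    """Advance the end-point by one decoded MI command. The subject
    corrects decoder errors on subsequent trials after observing the
    robot's movement (the FB loop described in the text)."""
    return position + MI_TO_STEP[decoded_class] * step

# Reaching a top target at +3 from 0 ideally takes three left-MI trials.
pos = 0
for _ in range(3):
    pos = apply_command(pos, "left")
```

A decoder error (e.g., "right" decoded instead of "left") simply moves the end-point one step away from the target, which the subject undoes on a later trial.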
The robot used in this experiment has 3-DOF and is located in the Brain Science Institute-Toyota Collaboration Center, RIKEN, Japan. The output commands from the decoder are sent remotely through a secure shell (SSH) file transfer protocol [39] from INRIA-LIRMM, France. Before the subject sends commands to the robot, the peripheral motor controller is trained using synergetic learning to adapt to the given dynamical environment, including the arm inertial configuration and the newly given end-point load, which influences the interaction torques of the multiple joints in a complex way. The online task requires the subject to guide the robot end-point toward the target based on the instructions from the operator. The online experimental task is repeated twice for each end-point weight.
C. Co-Adaptive EEG-BMI System
The BMI system employs wavelet transforms [40], [41] for feature extraction, Laplacian EigenMaps [42] to determine the relevant features, and an SVM classifier [43] to decode between the two mental states. The BMI system achieves co-adaptivity by the method described in Section II-A.
1) Preprocessing: It is known from [4] and [38] that MI signals are characterized by the presence of event-related desynchronization/synchronization (ERD/ERS) [44], which is dominant in the μ (8–12 Hz) and central β (16–24 Hz) bands of the EEG [38]. We preprocess the raw EEG data by applying a band-pass filter in the 8–25 Hz range using a fourth-order elliptic filter with 1 dB passband ripple and 30 dB stopband attenuation [15]. Elliptic filters are characterized by a very sharp frequency roll-off and are equiripple in both the passband and the stopband [45]. This step also attenuates artifacts due to muscle and eye movement, environmental interference, and other parallel brain processes not related to the tasks.
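The filter specification above can be sketched with SciPy. The 8–25 Hz band, fourth order, and ripple figures come from the text; the 128 Hz sampling rate and the zero-phase `filtfilt` application are assumptions made only for illustration.

```python
import numpy as np
from scipy.signal import ellip, filtfilt

def bandpass_mu_beta(eeg, fs=128.0):
    """Fourth-order elliptic band-pass (8-25 Hz) with 1 dB passband
    ripple and 30 dB stopband attenuation, applied zero-phase.

    The 128 Hz sampling rate is an assumption for illustration; the
    band and filter specification follow the text.
    """
    b, a = ellip(4, 1, 30, [8.0, 25.0], btype="bandpass", fs=fs)
    return filtfilt(b, a, eeg, axis=-1)

# Example: a 10 Hz (mu-band) component survives while a 50 Hz
# interference component is strongly attenuated.
fs = 128.0
t = np.arange(0, 3, 1 / fs)          # one 3 s trial
x = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 50 * t)
y = bandpass_mu_beta(x, fs)
```

Applying the filter forward and backward (`filtfilt`) doubles the effective attenuation and removes phase distortion, which is convenient for offline analysis.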
2) Feature Extraction: The filtered signals are then processed using the discrete wavelet transform (DWT) [41] to derive the signature features related to left- and right-MI. The wavelet transform provides localized frequency information over a given time period, which is highly suitable for nonstationary signals like the EEG. The DWT decomposes the signal at different resolutions into coarse approximation and detail coefficients [41].
In this paper, we have selected the fourth-order Daubechies wavelet (db4) as the mother wavelet. As mentioned earlier, MI signals are dominant in the 8–12 Hz and 16–24 Hz ranges. We extract 3 s of EEG from the onset of every stimulus, decompose it to the fourth level, and then reconstruct it using only the third and fourth detail coefficients. The final feature vector is constructed from the average of the reconstructed signals at the D3 and D4 levels. Thus, the final dimension of the feature vector for each trial is 384 features × 14 electrodes.
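A simplified numpy-only sketch of this step follows. The db4 decomposition low-pass coefficients are the standard published values, and the high-pass filter is derived from them by the quadrature-mirror relation; for brevity the sketch returns the raw D3/D4 detail coefficients rather than the reconstructed-and-averaged subband signals described above, so it illustrates the subband selection but not the exact feature dimensionality of the paper.

```python
import numpy as np

# Standard db4 decomposition low-pass filter coefficients.
DEC_LO = np.array([-0.01059740, 0.03288301, 0.03084138, -0.18703481,
                   -0.02798377, 0.63088077, 0.71484657, 0.23037781])
# Decomposition high-pass via the quadrature-mirror relation
# g[k] = (-1)^k * h[L-1-k]; it sums to zero (kills the DC component).
DEC_HI = DEC_LO[::-1] * (-1.0) ** np.arange(len(DEC_LO))

def dwt_level(x):
    """One DWT level: filter, then downsample by 2."""
    a = np.convolve(x, DEC_LO[::-1])[1::2]  # approximation
    d = np.convolve(x, DEC_HI[::-1])[1::2]  # detail
    return a, d

def mi_features(trial):
    """Detail coefficients at levels 3 and 4 of one channel's trial.

    Simplified sketch: the paper reconstructs the D3/D4 subbands back
    to signal length and averages them; here we concatenate the raw
    D3/D4 coefficients, which carry the same mu/beta-band content.
    """
    a = trial
    details = []
    for _ in range(4):          # cascade to decomposition level 4
        a, d = dwt_level(a)
        details.append(d)
    return np.concatenate([details[2], details[3]])  # D3 and D4

feats = mi_features(np.random.default_rng(0).standard_normal(384))
```

In a full pipeline this would be applied per electrode and the per-channel features stacked into the trial's feature vector.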
3) Feature Selection: Due to the high dimensionality of the features, the decoder can suffer from long computational times, a lack of relevant information, and overfitting, which in turn have a detrimental effect on the performance of the BMI. To negate this problem, researchers employ some form of linear or nonlinear dimensionality reduction technique [46], [47]. The Laplacian EigenMap [42] is an unsupervised manifold learning algorithm which performs nonlinear dimensionality reduction by the following four basic steps.
1) Compute the nearest neighbors of the input data.
2) Using the neighborhood relations, construct a weighted graph matrix.
3) Optimize the graph matrix based on a fitness function.
4) Project the final data from the top or bottom half of the
matrix.
Extensive details on Laplacian EigenMaps are given in [42]. The advantage of this technique is that it provides an optimal embedding of the manifold, interpreting the dimensionality reduction problem geometrically while maintaining locality and proximity relations. It is thus insensitive to outliers and noise.
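The four steps above can be sketched as a minimal dense implementation for small data sets. The heat-kernel weighting and the median bandwidth heuristic are common choices assumed here for illustration; they are not specified by the paper.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def laplacian_eigenmap(X, n_components=2, n_neighbors=8, t=None):
    """Minimal dense Laplacian Eigenmap sketch.

    Follows the four steps in the text: 1) k-nearest neighbors,
    2) heat-kernel weight graph, 3) generalized eigenproblem
    L v = lam D v on the graph Laplacian, 4) embedding from the
    eigenvectors of the smallest nonzero eigenvalues.
    """
    d2 = cdist(X, X, "sqeuclidean")
    if t is None:
        t = np.median(d2)  # heat-kernel width heuristic (assumption)
    n = len(X)
    W = np.zeros((n, n))
    nn = np.argsort(d2, axis=1)[:, 1:n_neighbors + 1]  # skip self
    for i in range(n):
        for j in nn[i]:
            W[i, j] = W[j, i] = np.exp(-d2[i, j] / t)  # symmetrize
    D = np.diag(W.sum(axis=1))
    L = D - W                      # graph Laplacian
    vals, vecs = eigh(L, D)        # ascending generalized eigenvalues
    return vecs[:, 1:n_components + 1]  # drop the trivial constant one

emb = laplacian_eigenmap(np.random.default_rng(1).standard_normal((60, 10)))
```

Because only neighborhood relations enter the weight matrix, distant outliers barely influence the embedding, which is the robustness property noted above.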
In this paper, we determine the optimal dimensionality of the relevant features for each subject from their validation results. The dimension which yields the best accuracy is used during online testing. The dimension of the reduced feature vector for each subject is given in Table II.
4) Decoder Design: The selection of a classifier algorithm is also an important issue. The SVM [43] has earned popularity for its good recognition accuracy and speed. The training time of an SVM is significantly smaller than that of naive Bayes and multilayer perceptron classifiers [43]. This motivated us to select the SVM for the present application.
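In practice an SVM library (e.g., LIBSVM or scikit-learn) would train the decoder; the following numpy-only sketch trains a linear max-margin classifier with the Pegasos sub-gradient method on a toy two-class problem standing in for left/right-MI feature vectors. It illustrates the principle of the SVM decoder, not the paper's actual implementation.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style sub-gradient training of a linear SVM.

    X: (n, d) feature matrix; y: labels in {-1, +1} (e.g. left vs
    right MI). Returns the weight vector w with the bias folded in
    as its last component.
    """
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias feature
    w = np.zeros(Xb.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(Xb)):
            t += 1
            eta = 1.0 / (lam * t)              # decaying step size
            if y[i] * Xb[i].dot(w) < 1:        # hinge-loss margin violated
                w = (1 - eta * lam) * w + eta * y[i] * Xb[i]
            else:                              # only regularize
                w = (1 - eta * lam) * w
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.sign(Xb @ w)

# Toy two-class data standing in for left/right MI feature vectors.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
w = train_linear_svm(X, y)
acc = (predict(w, X) == y).mean()
```

The max-margin objective is what gives the SVM its good generalization from the limited number of calibration trials available per subject.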
