Reach and grasp by people with tetraplegia using a neurally controlled robotic arm

Abstract

Two people with long-standing tetraplegia use neural interface system-based control of a robotic arm to perform three-dimensional reach and grasp movements. John Donoghue and colleagues have previously demonstrated that people with tetraplegia can learn to use neural signals from the motor cortex to control a computer cursor. Work from another lab has also shown that monkeys can learn to use such signals to feed themselves with a robotic arm. Now, Donoghue and colleagues have advanced the technology to a level at which two people with long-standing paralysis — a 58-year-old woman and a 66-year-old man — are able to use a neural interface to direct a robotic arm to reach for and grasp objects. One subject was able to learn to pick up and drink from a bottle using a device implanted 5 years earlier, demonstrating not only that subjects can use the brain–machine interface, but also that it has potential longevity. Paralysis following spinal cord injury, brainstem stroke, amyotrophic lateral sclerosis and other disorders can disconnect the brain from the body, eliminating the ability to perform volitional movements. A neural interface system^(1–5) could restore mobility and independence for people with paralysis by translating neuronal activity directly into control signals for assistive devices. We have previously shown that people with long-standing tetraplegia can use a neural interface system to move and click a computer cursor and to control physical devices^(6–8). Able-bodied monkeys have used a neural interface system to control a robotic arm^(9), but it is unknown whether people with profound upper extremity paralysis or limb loss could use cortical neuronal ensemble signals to direct useful arm actions. Here we demonstrate the ability of two people with long-standing tetraplegia to use neural interface system-based control of a robotic arm to perform three-dimensional reach and grasp movements. Participants controlled the arm and hand over a broad space without explicit training, using signals decoded from a small, local population of motor cortex (MI) neurons recorded from a 96-channel microelectrode array. One of the study participants, implanted with the sensor 5 years earlier, also used a robotic arm to drink coffee from a bottle. Although robotic reach and grasp actions were not as fast or accurate as those of an able-bodied person, our results demonstrate the feasibility for people with tetraplegia, years after injury to the central nervous system, to recreate useful multidimensional control of complex devices directly from a small sample of neural signals.


LETTER
doi:10.1038/nature11076
Reach and grasp by people with tetraplegia using a
neurally controlled robotic arm
Leigh R. Hochberg^(1,2,3,4), Daniel Bacher^(2)*, Beata Jarosiewicz^(1,5)*, Nicolas Y. Masse^(5)*, John D. Simeral^(1,2,3)*, Joern Vogel^(6)*, Sami Haddadin^(6), Jie Liu^(1,2), Sydney S. Cash^(3,4), Patrick van der Smagt^(6) & John P. Donoghue^(1,2,5)
Paralysis following spinal cord injury, brainstem stroke, amyotrophic lateral sclerosis and other disorders can disconnect the brain from the body, eliminating the ability to perform volitional movements. A neural interface system^(1–5) could restore mobility and independence for people with paralysis by translating neuronal activity directly into control signals for assistive devices. We have previously shown that people with long-standing tetraplegia can use a neural interface system to move and click a computer cursor and to control physical devices^(6–8). Able-bodied monkeys have used a neural interface system to control a robotic arm^(9), but it is unknown whether people with profound upper extremity paralysis or limb loss could use cortical neuronal ensemble signals to direct useful arm actions. Here we demonstrate the ability of two people with long-standing tetraplegia to use neural interface system-based control of a robotic arm to perform three-dimensional reach and grasp movements. Participants controlled the arm and hand over a broad space without explicit training, using signals decoded from a small, local population of motor cortex (MI) neurons recorded from a 96-channel microelectrode array. One of the study participants, implanted with the sensor 5 years earlier, also used a robotic arm to drink coffee from a bottle. Although robotic reach and grasp actions were not as fast or accurate as those of an able-bodied person, our results demonstrate the feasibility for people with tetraplegia, years after injury to the central nervous system, to recreate useful multidimensional control of complex devices directly from a small sample of neural signals.
The study participants, referred to as S3 and T2 (a 58-year-old woman and a 66-year-old man, respectively), were each tetraplegic and anarthric as a result of a brainstem stroke. Both were enrolled in the BrainGate2 pilot clinical trial (see Methods). Neural signals were recorded using a 4 mm × 4 mm, 96-channel microelectrode array, which was implanted in the dominant MI hand area (for S3, in November 2005, 5.3 years before the beginning of this study; for T2, in June 2011, 5 months before this study). Participants performed sessions on a near-weekly basis to perform point and click actions of a computer cursor using decoded MI ensemble spiking signals^(7). Across four sessions in her sixth year after implant (trial days 1952–1975), S3 used these neural signals to perform reach and grasp movements of either of two differently purposed right-handed robot arms. The DLR Light-Weight Robot III (German Aerospace Center, Oberpfaffenhofen, Germany; Fig. 1b, left)^(10) is designed to be an assistive device that can reproduce complex arm and hand actions. The DEKA Arm System (DEKA Research and Development; Fig. 1b, right) is a prototype advanced upper limb replacement for people with arm amputation^(11). T2 controlled the DEKA prosthetic limb on one session day (day 166). Both robots were operated under continuous user-driven neuronal ensemble control of arm endpoint (hand) velocity in three-dimensional space; a simultaneously decoded neural state executed a hand action. S3 had used the DLR robot on multiple occasions over the previous year for algorithm development and interface testing, but she had no exposure to the DEKA arm before the sessions reported here. T2 participated in three DEKA arm sessions for similar development and testing before the session reported here but had no other experience using the robotic arms.
To decode movement intentions from neural activity, electrical potentials from each of the 96 channels were filtered to reveal extracellular action potentials (that is, ‘unit’ activity). Unit threshold crossings (see Methods) were used to calibrate decoders that generated velocity and hand state commands. Signals for reach were decoded using a Kalman filter^(12) to update continuously an estimate of the participant’s intended hand velocity. The Kalman filter was initialized during a single ‘open-loop’ filter calibration block (~4 min) in which the participants were asked to imagine controlling the robotic arm as they watched it undergo a series of regular, pre-programmed movements while the accompanying neural activity was recorded. This open-loop filter was then iteratively updated during four to eight ‘closed-loop’ calibration blocks while the participant actively controlled the robot under visual feedback, with gradually decreasing levels of computer-imposed error attenuation (see Methods). To discriminate an intended hand state, a linear discriminant classifier was built on signals from the same recorded units while the participant imagined squeezing their hand^(8). On average, the decoder calibration procedure lasted ~31 min (ranging from 20 to 48 min, exclusive of time between blocks).
After decoder calibration, we assessed whether each participant could use the robotic arm to reach for and grasp foam ball targets of diameter 6 cm, presented in three-dimensional space one at a time by motorized levers (Fig. 1a–c and Supplementary Fig. 1b). Because hand aperture was not much larger than the target size (only 1.3 times larger for DLR, and 1.8 times larger for DEKA) and hand orientation was not under user control, grasping targets required the participant to manoeuvre the arm within a narrow range of approach angles with the hand open while avoiding the target support rod below. Targets were mounted on flexible supports; brushing them with the robotic arm resulted in target displacements. Together, these factors increased task difficulty beyond simple point-to-point movements and frequently required complex curved paths or corrective actions (Fig. 1d and Supplementary Movies 1–3). Trials were judged successful or unsuccessful by two independent visual inspections of video data (see Methods). A successful ‘touch’ trial occurred when the participant contacted the target with the hand; a successful ‘grasp’ trial occurred when the participant closed the hand while any part of the target or the top of its supporting cone was within the volume enclosed by the hand.
1. Rehabilitation Research & Development Service, Department of Veterans Affairs, Providence, Rhode Island 02908, USA. 2. School of Engineering and Institute for Brain Science, Brown University, Providence, Rhode Island 02912, USA. 3. Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts 02114, USA. 4. Harvard Medical School, Boston, Massachusetts 02115, USA. 5. Department of Neuroscience and Institute for Brain Science, Brown University, Providence, Rhode Island 02912, USA. 6. German Aerospace Center, Institute of Robotics and Mechatronics (DLR, Oberpfaffenhofen) 82230, Germany.
*These authors contributed equally to this work.

372 | NATURE | VOL 485 | 17 MAY 2012
©2012 Macmillan Publishers Limited. All rights reserved

In the three-dimensional reach and grasp task, S3 performed 158 trials across four sessions and T2 performed 45 trials in a single session (Table 1 and Fig. 1e, f). S3 touched the target within the allotted time in 48.8% of the DLR and 69.2% of the DEKA trials, and T2 touched the target within the allotted time in 95.6% of trials (Supplementary Movies 1–3 and Supplementary Fig. 2). Of the successful touches, S3 grasped the target 43.6% (DLR) and 66.7% (DEKA) of the time, whereas T2 grasped the target 65.1% of the time. Of all trials, S3 grasped the target 21.3% (DLR) and 46.2% (DEKA) of the time, and T2 grasped the target 62.2% of the time. In all sessions from both participants, performance was significantly higher than expected by chance alone (Supplementary Fig. 3). For S3, times to touch were approximately the same for both robotic arms (Fig. 1f, blue bars; median 6.2 ± 5.4 s) and were comparable to times for T2 (6.1 ± 5.5 s). The times for combined reach and grasp were similar for both participants (S3, 9.4 ± 6.2 s; T2, 9.5 ± 5.5 s), although for the first DLR session, times were about twice as long.
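The aggregate percentages quoted above can be re-derived from the per-session counts reported in Table 1 (trials, targets contacted, targets grasped). A quick check, using T2's session and S3's two DEKA sessions combined:

```python
# Re-deriving the success rates quoted above from the Table 1 counts.
def session_rates(n_trials, n_touched, n_grasped):
    """Return (touch %, grasp % of touches, grasp % of all trials)."""
    return (round(100.0 * n_touched / n_trials, 1),
            round(100.0 * n_grasped / n_touched, 1),
            round(100.0 * n_grasped / n_trials, 1))

print(session_rates(45, 43, 28))                 # T2, day 166 -> (95.6, 65.1, 62.2)
print(session_rates(45 + 33, 34 + 20, 21 + 15))  # S3, DEKA days 1974+1975 -> (69.2, 66.7, 46.2)
```

Both tuples match the touch, grasp-given-touch and overall grasp percentages reported in the text.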
To explore the use of neural interface systems for facilitating activities
of daily living for people with paralysis, we also assessed how well
S3 could control the DLR arm as an assistive device. We asked her to
reach for and pick up a bottle of coffee, and then drink from it through
a straw and place it back on the table. For this task, we restricted
velocity control to the two-dimensional tabletop plane and we used
the simultaneously decoded grasp state as a sequentially activated
trigger for one of four different hand actions that depended upon
the phase of the task and the position of the hand (see Methods).
Because the 7.2 cm bottle diameter was 90% of the DLR hand aperture,
grasping the bottle required even greater alignment precision than
grasping the targets in the three-dimensional task described above.
Once triggered by the state switch, robust finger position and grasping
of the object was achieved by automated joint impedance control. We
familiarized the participant with the task for approximately 14 min
(during which we made adjustments to the robot hand grip force, and
the participant learned the physical space in which the state decoder
and directional commands would be effective in moving the bottle
close enough to drink from a straw). After this period, the participant
successfully grasped the bottle, brought it to her mouth, drank coffee
from it through a straw and replaced the bottle on the table, on four out
of six attempts over the next 8.5 min (Fig. 2, Supplementary Fig. 4 and
Supplementary Movie 4). The two unsuccessful attempts (numbers 2
and 5 in sequence) were aborted to prevent the arm from pushing the
bottle off the table (because the hand aperture was not properly
aligned with the bottle). This was the first time since the participant’s
stroke more than 14 years earlier that she had been able to bring any
drinking vessel to her mouth and drink from it solely of her own
volition.
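The sequential trigger described for the drinking task can be sketched as a simple phase machine: each decoded grasp state advances the controller to the next pre-defined hand action. This is a hypothetical illustration only; the action names are invented, and the study's actual controller additionally gated actions on the position of the hand (see Methods).

```python
# Hypothetical sketch of the sequentially activated grasp-state trigger:
# each decoded grasp state selects the hand action for the current task
# phase, then advances to the next phase. Action names are illustrative.
class DrinkingTaskSequencer:
    ACTIONS = ["close_hand_on_bottle", "tilt_for_drinking",
               "untilt_upright", "open_hand_release"]

    def __init__(self):
        self.phase = 0

    def on_grasp_state_decoded(self):
        """Return the hand action for the current phase, then advance."""
        if self.phase >= len(self.ACTIONS):
            return None  # sequence complete; further triggers ignored
        action = self.ACTIONS[self.phase]
        self.phase += 1
        return action
```

The design keeps the neural interface's job small (a binary state decode) while the robot's automated impedance control handles the details of each triggered action.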
The use of neural interface systems to restore functional movement
will become practical only if chronically implanted sensors function
for many years. It is thus notable that S3’s reach and grasp control was
achieved using signals from an intracortical array implanted over
5 years earlier. This result, supported by multiple demonstrations of
Table 1 | Summary of neurally controlled robotic arm target-acquisition trials

|                       | S3, DLR (day 1952) | S3, DLR (day 1959) | S3, DEKA (day 1974) | S3, DEKA (day 1975) | T2, DEKA (day 166) |
|-----------------------|--------------------|--------------------|---------------------|---------------------|--------------------|
| Number of trials      | 32                 | 48                 | 45                  | 33                  | 45                 |
| Targets contacted     | 16 (50.0%)         | 23 (47.9%)         | 34 (75.6%)          | 20 (60.6%)          | 43 (95.6%)         |
| Grasped               | 7 (21.9%)          | 10 (20.8%)         | 21 (46.7%)          | 15 (45.5%)          | 28 (62.2%)         |
| – Time to touch (s)   | 5.4 ± 6.9          | 5.4 ± 2.3          | 6.1 ± 4.9           | 6.8 ± 3.6           | 5.5 ± 4.7          |
| – Time to grasp (s)   | 18.2 ± 6.4         | 9.5 ± 4.5          | 8.2 ± 4.9           | 8.8 ± 8.0           | 9.5 ± 5.5          |
| Touched only          | 9 (28.1%)          | 13 (27.1%)         | 13 (28.9%)          | 5 (15.1%)           | 15 (33.3%)         |
| – Time to touch (s)   | 7.0 ± 6.2          | 4.6 ± 3.0          | 10.7 ± 6.5          | 9.4 ± 8.0           | 7.1 ± 6.8          |
[Figure 1, panels a–f; example trajectories in c (trial day 1975, trial 6) and d (trial day 1959, trial 30). See caption below.]
Figure 1 | Experimental setup and performance metrics. a, Overhead view of participant’s location at the table (grey rectangle) from which the targets (purple spheres) were elevated by a motor. The robotic arm was positioned to the right and slightly in front of the participant (the DLR and DEKA arms were mounted in slightly different locations to maximize the correspondence of their workspaces over the table; for details, see Supplementary Fig. 9). Both video cameras were used for all DLR and DEKA sessions; labels indicate which camera was used for the photographs in b. b, Photographs of the DLR (left panel) and DEKA (right panel) robots. c, Reconstruction of an example trial in which the participant moved the DEKA arm in all three dimensions to reach and grasp a target successfully. The top panel illustrates the trajectory of the hand in three-dimensional space. The middle panel shows the position of the wrist joint for the same trajectory decomposed into each of its three dimensions relative to the participant: the left–right axis (dashed blue line), the towards–away axis (purple line) and the up–down axis (green line). The bottom panel shows the threshold crossing events from all units that contributed to decoding the movement. Each row of tick marks represents the activity of one unit and each tick mark represents a threshold crossing. The grey shaded area shows the first 1 s of the grasp. d, An example trajectory from a DLR session in which the participant needed to move the robot hand, which started to the left of the target, around and to the right of the target to approach it with the open part of the hand. The middle and bottom panels are analogous to c. e, Percentage of trials in which the participant successfully touched the target with the robotic hand (blue bars) and successfully grasped the target (red bars). f, Average time required to touch (blue bars) or grasp (red bars) the targets. Each circle shows the acquisition time for one successful trial.

successful chronic recording capabilities in animals^(13–15), suggests that the goal of creating long-term intracortical interfaces is feasible. At the time of this study, S3 had lower recorded spike amplitudes and fewer channels contributing signals to the filter than during her first years of recording. Nevertheless, the units included in the Kalman filters were sufficiently directionally tuned and modulated to allow neural control of reach and grasp (Fig. 3 and Supplementary Figs 5 and 6). S3 sometimes experiences stereotypic limb flexion. These movements did not appear to contribute in any way to her multidimensional reach and grasp control, and the neural signals used for this control showed waveform shapes and timing characteristics of unit spiking (Fig. 3 and Supplementary Fig. 7). Furthermore, T2 produced no consistent volitional movement during task performance, which further substantiates the intracortical origin of his neural control.

We have shown that two people with no functional arm control due to brainstem stroke used the neuronal ensemble activity generated by intended arm and hand movements to make point-to-point reaches and grasps with a robotic arm across a natural human-arm workspace. Moreover, S3 used these neurally driven commands to perform an everyday task. These findings extend our previous demonstrations of point and click neural control by people with tetraplegia^(7,16) and show that neural spiking activity recorded from a small MI intracortical array contains sufficient information to allow people with long-standing tetraplegia to perform even more complex manual skills. This result suggests the feasibility of using cortically driven commands to restore lost arm function for people with paralysis. In addition, we have demonstrated considerably more complex robotic control than previously shown in able-bodied non-human primates^(9,17,18). Both participants operated human-scale arms in a three-dimensional target task that required curved trajectories and precise alignments over a volume that was 1.4–7.7 times greater than has been used by non-human primates. The drinking task, although only two-dimensional + state control, required both careful positioning and correctly timed hand state commands to accomplish the series of actions necessary to retrieve the bottle, drink from it and return it to the table.

Both participants performed these multidimensional actions after long-standing paralysis. For S3, signals were adequate to achieve control 14 years and 11 months after her stroke, showing that MI neuronal ensemble activity remains functionally engaged despite subcortical damage of descending motor pathways. Future clinical research will be needed to establish whether more signals^(19–22), signals from additional or other areas^(2,23–25), better decoders, explicit participant training or other advances (see Supplementary Materials) will provide more complex, flexible, independent and natural control. In addition to the robotic assistive device shown here, MI signals might also be used by people with paralysis to reanimate paralysed muscles using functional
[Figure 3, panels a–i: example units from the S3 three-dimensional session (trial day 1974), the T2 three-dimensional session (trial day 166) and the S3 two-dimensional session (trial day 1959); labelled units include channels 9, 10, 12, 33 and 91. See caption below.]
Figure 3 | Examples of neural signals from three sessions and two participants. A three-dimensional reach and grasp session from S3 (a–c) and T2 (d–f), and the two-dimensional + grasp drinking session from S3 (g–i). a, d, g, Average waveforms (black lines) ± two standard deviations (grey shadows) from two units from each session with a large directional modulation of activity. b, e, h, Rasters and histograms of threshold crossings showing directional modulation. Each row of tick marks represents a trial, and each tick mark represents a threshold crossing event. The histogram summarizes the average activity across all trials in that direction. Rasters are displayed for arm movements to and from the pair of opposing targets that most closely aligned with the selected units’ preferred directions. Parts b and e include both closed-loop filter calibration trials and assessment trials; h includes only filter calibration trials. Time 0 indicates the start of the trial. The dashed vertical line 1.8 s before the start of the trial identifies the time when the target for the upcoming trial began to rise. Activity occurring before this time corresponded to the end of the previous trial, which often included a grasp, followed by the lowering of the previous target and the computer moving the hand to the next starting position if it was not already there. c, f, i, Rasters and histograms from calibration and assessment trials for units that modulated with intended grasp state. During closed-loop filter calibration trials, the hand automatically closed starting at time 0, cueing the participant to grasp; during assessment trials, the grasp state was decoded at time 0. Expanded data appear in Supplementary Fig. 5.
Figure 2 | Participant S3 drinking from a bottle using the DLR robotic arm. Four sequential images from the first successful trial showing participant S3 using the robotic arm to grasp the bottle, bring it towards her mouth, drink coffee from the bottle through a straw (her standard method of drinking) and place the bottle back on the table. The researcher in the background was positioned to monitor the participant and robotic arm. (See Supplementary Movie 4.)

electrical stimulation^(26–28) or by people with limb loss to control prosthetic limbs. Whether MI signals are suitable for people with limb loss to control an advanced prosthetic arm (such as the device shown here) remains to be tested and compared with other control strategies^(11,29). Though further developments might enable people with tetraplegia to achieve rapid, dexterous actions under neural control, at present, for people who have no or limited volitional movement of their own arm, even the basic reach and grasp actions demonstrated here could be substantially liberating, restoring the ability to eat and drink independently.
METHODS SUMMARY
Permission for these studies was granted by the US Food and Drug Administration (Investigational Device Exemption) and the Partners Healthcare/Massachusetts General Hospital Institutional Review Board. Core elements of the investigational BrainGate system have been described previously^(6,7).
During each session, participants were seated in a wheelchair with their feet located near or underneath the edge of the table supporting the target placement system. The robotic arm was positioned to the participant’s right (Fig. 1a). Raw neural signals for each channel were sampled at 30 kHz and fed through custom Simulink (Mathworks) software in 100 ms bins (S3) or 20 ms bins (T2) to extract threshold crossing rates^(2,30); these threshold crossing rates were used as the neural features for real-time decoding and for filter calibration. Open- and closed-loop filter calibration was performed over several blocks, which were each 3–6 min long and contained 18–24 trials. Targets were presented using a custom, automated target placement platform. On each trial, one of seven servos placed its target (a 6 cm diameter foam ball supported by a spring-loaded wooden dowel rod attached to the servo) in the workspace by lifting it to its task-defined target location (Fig. 1b). Between trials, the previous trial’s target was returned to the tabletop while the next target was raised. Owing to variability in the position of the target-placing platform from session to session and changes in the angles of the spring-loaded rods used to hold the targets, visual inspection was used for scoring successful grasp and successful touch trials. Further details on session setup, signal processing, filter calibration, robot systems and target presentations are given in Methods.
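The feature-extraction step above can be sketched as follows, assuming for illustration a fixed negative voltage threshold per channel; the study's actual filtering and threshold-setting follow its cited methods (refs 2 and 30).

```python
import numpy as np

# Sketch of threshold-crossing rate extraction as summarized above: count
# downward crossings of a per-channel voltage threshold within fixed-width
# bins (100 ms for S3, 20 ms for T2). The fixed-threshold rule here is
# illustrative, not the study's exact procedure.
def threshold_crossing_rates(signal, threshold, fs=30000, bin_ms=100):
    """signal: 1-D voltage trace for one channel; returns crossings per bin."""
    below = signal < threshold
    # A crossing is a transition from above-threshold to below-threshold.
    crossings = np.flatnonzero(~below[:-1] & below[1:]) + 1
    bin_len = int(fs * bin_ms / 1000)          # samples per bin
    n_bins = len(signal) // bin_len
    counts = np.zeros(n_bins)
    for c in crossings:
        b = c // bin_len
        if b < n_bins:
            counts[b] += 1
    return counts
```

The resulting per-bin counts are the neural feature vector that a decoder such as the Kalman filter would consume, one bin at a time.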
Full Methods and any associated references are available in the online version of
the paper at www.nature.com/nature.
Received 23 September 2011; accepted 26 March 2012.
1. Donoghue, J. P. Bridging the brain to the world: a perspective on neural interface
systems. Neuron 60, 511–521 (2008).
2. Gilja, V. et al. Challenges and opportunities for next-generation intra-cortically
based neural prostheses. IEEE Trans. Biomed. Eng. 58, 1891–1899 (2011).
3. Schwartz, A. B., Cui, X. T., Weber, D. J. & Moran, D. W. Brain-controlled interfaces:
movement restoration with neural prosthetics. Neuron 52, 205–220 (2006).
4. Nicolelis, M. A. L. & Lebedev, M. A. Principles of neural ensemble physiology
underlying the operation of brain-machine interfaces. Nature Rev. Neurosci. 10,
530–540 (2009).
5. Green, A. M. & Kalaska, J. F. Learning to move machines with the mind. Trends
Neurosci. 34, 61–75 (2011).
6. Hochberg, L. R. et al. Neuronal ensemble control of prosthetic devices by a human
with tetraplegia. Nature 442, 164–171 (2006).
7. Simeral, J. D., Kim, S. P., Black, M. J., Donoghue, J. P. & Hochberg, L. R. Neural
control of cursor trajectory and click by a human with tetraplegia 1000 days after
implant of an intracortical microelectrode array. J. Neural Eng. 8, 025027 (2011).
8. Kim, S. P. et al. Point-and-click cursor control with an intracortical neural interface
system by humans with tetraplegia. IEEE Trans. Neural Syst. Rehabil. Eng. 19,
193–203 (2011).
9. Velliste, M., Perel, S., Spalding, M. C., Whitford, A. S. & Schwartz, A. B. Cortical
control of a prosthetic arm for self-feeding. Nature 453, 1098–1101 (2008).
10. Albu-Schaffer, A. et al. The DLR lightweight robot: design and control concepts for
robots in human environments. Ind. Rob. 34, 376–385 (2007).
11. Resnik, L. Research update: VA study to optimize the DEKA Arm. J. Rehabil. Res. Dev. 47, ix–x (2010).
12. Wu, W., Gao, Y., Bienenstock, E., Donoghue, J. P. & Black, M. J. Bayesian population
decoding of motor cortical activity using a Kalman filter. Neural Comput. 18,
80–118 (2006).
13. Suner, S., Fellows, M. R., Vargas-Irwin, C., Nakata, G. K. & Donoghue, J. P. Reliability
of signals from a chronically implanted, silicon-based electrode array in non-
human primate primary motor cortex. IEEE Trans. Neural Syst. Rehabil. Eng. 13,
524–541 (2005).
14. Chestek, C. A. et al. Long-term stability of neural prosthetic control signals from silicon cortical arrays in rhesus macaque motor cortex. J. Neural Eng. 8, 045005 (2011).
15. Kruger, J., Caruana, F., Volta, R. D. & Rizzolatti, G. Seven years of recording from
monkey cortex with a chronically implanted multiple microelectrode. Front.
Neuroeng. 3, 6 (2010).
16. Kim, S. P., Simeral, J. D., Hochberg, L. R., Donoghue, J. P. & Black, M. J. Neural
control of computer cursor velocity by decoding motor cortical spiking activity in
humans with tetraplegia. J. Neural Eng. 5, 455–476 (2008).
17. Burrow, M., Dugger, J., Humphrey, D. R., Reed, D. J. & Hochberg, L. R. in Proc. ICORR
’97: Int. Conf. Rehabilitation Robotics 83–86 (Bath Institute of Medical Engineering,
1997).
18. Shin, H. C., Aggarwal, V., Acharya, S., Schieber, M. H. & Thakor, N. V. Neural
decoding of finger movements using Skellam-based maximum-likelihood
decoding. IEEE Trans. Biomed. Eng. 57, 754–760 (2010).
19. Vargas-Irwin, C. E. et al. Decoding complete reach and grasp actions from local
primary motor cortex populations. J. Neurosci. 30, 9659–9669 (2010).
20. Mehring, C. et al. Inference of hand movements from local field potentials in
monkey motor cortex. Nat. Neurosci. 6, 1253–1254 (2003).
21. Stark, E. & Abeles, M. Predicting movement from multiunit activity. J. Neurosci. 27,
8387–8394 (2007).
22. Bansal, A. K., Vargas-Irwin, C. E., Truccolo, W. & Donoghue, J. P. Relationships among low-frequency local field potentials, spiking activity, and three-dimensional reach and grasp kinematics in primary motor and ventral premotor cortices. J. Neurophysiol. 105, 1603–1619 (2011).
23. Musallam, S., Corneil, B. D., Greger, B., Scherberger, H. & Andersen, R. A. Cognitive
control signals for neural prosthetics. Science 305, 258–262 (2004).
24. Mulliken, G. H., Musallam, S. & Andersen, R. A. Decoding trajectories from posterior
parietal cortex ensembles. J. Neurosci. 28, 12913–12926 (2008).
25. Santhanam, G., Ryu, S. I., Yu, B. M., Afshar, A. & Shenoy, K. V. A high-performance
brain–computer interface. Nature 442, 195–198 (2006).
26. Moritz, C. T., Perlmutter, S. I. & Fetz, E. E. Direct control of paralysed muscles by
cortical neurons. Nature 456, 639–642 (2008).
27. Pohlmeyer, E. A. et al. Toward the restoration of hand use to a paralyzed monkey:
brain-controlled functional electrical stimulation of forearm muscles. PLoS One 4,
e5924 (2009).
28. Chadwick, E. K. et al. Continuous neuronal ensemble control of simulated arm
reaching by a human with tetraplegia. J. Neural Eng. 8, 034003 (2011).
29. Kuiken, T. A. et al. Targeted reinnervation for enhanced prosthetic arm function
in a woman with a proximal amputation: a case study. Lancet 369, 371–380
(2007).
30. Fraser, G. W., Chase, S. M., Whitford, A. & Schwartz, A. B. Control of a brain
computer interface without spike sorting. J. Neural Eng. 6, 055004 (2009).
Supplementary Information is linked to the online version of the paper at
www.nature.com/nature.
Acknowledgements We thank participants S3 and T2 for their dedication to this
research. We thank M. Black for initial guidance in the BrainGate–DLR research. We
thank E. Gallivan, E. Berhanu, D. Rosler, L. Barefoot, K. Centrella and B. King for their
contributions to this research. We thank G. Friehs and E. Eskandar for their surgical
contributions. We thank K. Knoper for assistance with illustrations. We thank D. Van Der
Merwe and DEKA Research and Development for their technical support. The contents
do not represent the views of the Department of Veterans Affairs or the United States
Government. The research was supported by the Rehabilitation Research and
Development Service, Office of Research and Development, Department of Veterans
Affairs (Merit Review Awards B6453R and A6779I; Career Development Transition
Award B6310N). Support was also provided by the National Institutes of Health:
NINDS/NICHD (RC1HD063931), NIDCD (R01DC009899), NICHD-NCMRR
(N01HD53403 and N01HD10018), NIBIB (R01EB007401), NINDS-Javits
(NS25074); a Memorandum of Agreement between the Defense Advanced Research
Projects Agency (DARPA) and the Department of Veterans Affairs; the Doris Duke
Charitable Foundation; the MGH-Deane Institute for Integrated Research on Atrial
Fibrillation and Stroke; Katie Samson Foundation; Craig H. Neilsen Foundation; the
European Commission’s Seventh Framework Programme through the project The
Hand Embodied (grant 248587). The pilot clinical trial into which participant S3 was
recruited was sponsored in part by Cyberkinetics Neurotechnology Systems (CKI).
Author Contributions J.P.D. and L.R.H. conceived, planned and directed the BrainGate
research and the DEKA sessions. J.P.D., L.R.H. and P.v.d.S. conceived, planned and
directed the DLR robot control sessions. J.P.D. and P.v.d.S. are co-senior authors. D.B.,
B.J., N.Y.M., J.D.S. and J.V. contributed equally to this work and are listed alphabetically.
J.D.S., J.V. and D.B. developed the BrainGate–DLR interface. D.B., J.D.S. and J.L.
developed the BrainGate–DEKA interface. D.B. and J.V. created the three-dimensional
motorized target placement system. B.J., N.Y.M. and D.B. designed the behavioural task,
the neural signal processing approach, the filter building approach and the performance
metrics. B.J., N.Y.M. and D.B. performed data analysis, further guided by L.R.H., J.D.S. and
J.P.D. N.Y.M., L.R.H. and J.P.D. drafted the manuscript, which was further edited by all
authors. D.B. and J.D.S. engineered the BrainGate neural interface system/assistive
technology system. J.V. and S.H. developed the reactive planner for the Light-Weight
Robot III (LWR). S.H. developed the internal control framework of the Light-Weight Robot
III. The internal control framework of the DEKA arm was developed by DEKA. L.R.H. is
principal investigator of the pilot clinical trial. S.S.C. is clinical co-investigator of the pilot
clinical trial and assisted in the clinical oversight of these participants.
Author Information Reprints and permissions information is available at
www.nature.com/reprints. The authors declare competing financial interests: details
accompany the full-text HTML version of the paper at www.nature.com/nature.
Readers are welcome to comment on the online version of this article at
www.nature.com/nature. Correspondence and requests for materials should be
addressed to J.P.D. (john_donoghue@brown.edu) or L.R.H. (leigh@brown.edu).
LETTER RESEARCH
17 MAY 2012 | VOL 485 | NATURE | 375
Macmillan Publishers Limited. All rights reserved
©2012

METHODS
Permission for these studies was granted by the US Food and Drug Administration
(Investigational Device Exemption) and the Partners Healthcare/Massachusetts
General Hospital Institutional Review Board. The two participants in this study,
S3 and T2, were enrolled in a pilot clinical trial of the BrainGate Neural
Interface System (additional information about the clinical trial is available at
http://www.clinicaltrials.gov/ct2/show/NCT00912041).
At the time of this study, S3 was a 58-year-old woman with tetraplegia caused by
brainstem stroke that occurred nearly 15 years earlier. As previously reported (refs 7, 31),
she is unable to speak (anarthria) and has no functional use of her limbs. She has
occasional bilateral or asymmetric flexor spasm movements of the arms that are
intermittently initiated by any imagined or actual attempt to move. S3’s sensory
pathways remain intact. She also retains some head movement and facial expres-
sion, has intact eye movement and breathes spontaneously. On 30 November
2005, a 96-channel intracortical silicon microelectrode array (1.5 mm electrode
length, produced by Cyberkinetics Neurotechnology Systems, and now by its
successor, Blackrock Microsystems) was implanted in the arm area of motor cortex
as previously described (refs 6, 7). One month later, S3 began regularly participating in one
or two research sessions per week during which neural signals were recorded and
tasks were performed towards the development, assessment and improvement of
the neural interface system. The data reported here are from S3’s trial days 1952–
1975, more than 5 years after implant of the array. Participant S3 provided per-
mission for photographs, videos and portions of her protected health information
to be published for scientific and educational purposes.
The second study participant, T2, was, at the time of this study, a 66-year-old
ambidextrous man with tetraplegia and anarthria as a result of a brainstem stroke
that occurred in 2006, five and a half years before the collection of the data
presented in this report. He has a tracheostomy and percutaneous gastrostomy
tube; he receives supportive mechanical ventilation at night but breathes without
assistance during the day, and receives all nutrition by percutaneous gastrostomy.
He has a left abducens palsy with intermittent diplopia. He can rotate his head
slowly over a limited range of motion. With the exception of unreliable and trace
right wrist and index finger extension (but not flexion), he is without voluntary
movement at and below C5. Occasional coughing results in involuntary hip flexion,
and intermittent, rhythmic chewing movements occur without alteration in con-
sciousness. Participant T2 also had a 96-channel Blackrock array with 1.5 mm
electrodes implanted into the dominant arm–hand area of motor cortex; the array
was placed 5 months before the session reported here.
Setup. During each session, the participant was seated in their wheelchair with their
feet located underneath the edge of the table supporting the target placement system.
The robot arm was positioned to the participant’s right (Fig. 1a). A technician used
aseptic technique to connect the 96-channel recording cable to the percutaneous
pedestal and then viewed neural signal waveforms using commercial software
(Cerebus Central, Blackrock Microsystems). The waveforms were used to identify
channels that were not recording signals and/or were contaminated with noise; for
S3, those channels were manually excluded and remained off for the remainder of the
recording session.
Robot systems. We used two robot systems with multi-joint arms and hands during
this study. The first was the DLR Light-Weight Robot III (refs 10, 32), with the DLR
Five-Finger Hand (ref. 33), developed at the German Aerospace Center (DLR). The arm weighs
14 kg and has seven degrees of freedom (DoF). The hand has 15 active DoF which
were combined into a single DoF (hand open/close) to execute a grasp for these
experimental sessions. Torque sensors were embedded in each joint of the arm and
hand, allowing the system to operate under impedance control and enabling it to
handle collisions safely, which is desirable for human–robot interactions (ref. 34). The hand
orientation was fixed in Cartesian space. The second robotic system was the DEKA
Generation 2 prosthetic arm system, which weighs 3.64 kg and has six DoF in the
arm (shoulder abduction, shoulder flexion, humeral rotation and elbow flexion,
wrist flexion, wrist rotation), and four DoF in the hand (also combined into a single
DoF to execute a grasp for these experimental sessions). The DEKA hand orienta-
tion was kept fixed in joint space; therefore, it could change in the Cartesian space
depending upon the posture of other joints derived from the inverse kinematics.
Both robotic arms were controlled in endpoint velocity space while a parallel
state switch, also under neural control from the same cortical ensemble, controlled
grasp. Virtual boundaries were placed in the workspace as part of the control
software to avoid collisions with the tabletop, support stand and participant. Of
the 158 trials performed by S3, 80 were performed during the first two sessions
using the DLR arm and 78 during the two sessions using the DEKA arm.
Target presentation. Targets were defined using a custom, automated servo-
based robotic platform. On each trial, one of the seven servos placed its target (a
6 cm diameter foam ball attached to the servo by a spring-loaded wooden dowel
rod) in the workspace by lifting it to its task-defined target location. Between trials,
the previous target was returned to the table while the next target was raised to its
position. The trials alternated between the lower right ‘home’ target and one of the
other six targets. The targets circumscribed an area of 30 cm from left to right,
52 cm in depth and 23 cm vertically (see Supplementary Figs 1 and 9).
Owing to variability in the position of the target-placing platform from session
to session and changes in the angles of the spring-loaded rods used to hold the
targets, estimates of true target locations in physical space relative to the software-
defined targets were not exact. This target placement error had no impact on the
three-dimensional reach and grasp task because the goal of the task was to grab the
physical target regardless of its exact location. However, for this reason, it was not
possible to use an automated method for scoring touches and grasps. Instead, scoring
was performed by visual inspection of the videos: for S3, by a group of three inves-
tigators (N.Y.M., D.B. and B.J.) and independently by a fourth investigator (L.R.H.);
for T2, independently by four investigators (J.D.S., D.B., B.J. and L.R.H.). Of 203
trials, there was initial concordance in scoring in 190 of them. The remaining 13 were
re-reviewed using a second video taken from a different camera angle, and either a
unanimous decision was reached (n = 10) or, when there was any unresolved
discordance in voting, the more conservative score was assigned (n = 3).
Signal acquisition. Raw neural signals for each channel were sampled at 30 kHz
and fed through custom Simulink (Mathworks) software in 100 ms bins (for
participant S3) or 20 ms bins (for participant T2). For participant T2, coincident
noise in the raw signal was reduced using common-average referencing: from the
50 channels with the lowest impedance, we selected the 20 with the lowest firing
rates. The mean signal from these 20 channels was subtracted from all 96 channels.
To extract threshold crossing rates (refs 2, 30), signals in each bin were then filtered with
a fourth-order Butterworth filter with corners at 250 and 5,000 Hz, temporally
reversed and filtered again. Neural signals were buffered for 4 ms before filtering to
avoid edge effects. This symmetric (non-causal) filter is better matched to the
shape of a typical action potential (ref. 35), and using this method led to better extraction
of low-amplitude action potentials from background noise and higher directional
modulation indices than would be obtained using a causal filter. Threshold
crossings were counted as follows. For computational efficiency, signals were
divided into 2.5 ms (for S3) or 0.33 ms (for T2) sub-bins, and in each sub-bin,
the minimum value was calculated and compared with a threshold. For S3, this
threshold was set at −4.5 times the filtered signal's root mean square value in the
previous block. For T2, this threshold was set at −5.5 times the root mean square
of the distribution of minimum values collected from each sub-bin. (Offline
analysis showed that these two methods produced similar threshold values relative
to noise amplitude.) To prevent large spike amplitudes from inflating the root
mean square estimate for both S3 and T2, signal values were capped between
+40 μV and −40 μV before calculating this threshold for each channel. The number
of minima that exceeded the channel’s threshold was then counted in each bin,
and these threshold crossing rates were used as the neural features for real-time
decoding and for closed-loop filter calibration.
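The referencing, filtering and thresholding pipeline described above can be sketched in a few lines. This is an illustrative reconstruction, not the study's actual (Simulink-based) software: the function names are hypothetical, the RMS is computed from the bin being processed rather than from the previous block, and the amplitude capping is omitted for brevity.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 30_000  # sampling rate, Hz

def common_average_reference(raw, reference_channels):
    """Subtract the mean of the selected low-firing-rate channels
    from every channel (coincident-noise reduction, as used for T2)."""
    ref = raw[reference_channels].mean(axis=0)
    return raw - ref

def threshold_crossing_rate(x, rms_multiple=-4.5, sub_bin_ms=2.5):
    """Count threshold crossings in one bin of a single channel.

    The signal is band-passed 250-5,000 Hz with a fourth-order
    Butterworth filter applied forward and then backward (scipy's
    filtfilt), i.e. a symmetric, non-causal filter. The bin is split
    into sub-bins; a crossing is a sub-bin whose minimum falls below
    rms_multiple times the RMS of the filtered signal.
    """
    b, a = butter(4, [250, 5000], btype="bandpass", fs=FS)
    filtered = filtfilt(b, a, x)
    threshold = rms_multiple * np.sqrt(np.mean(filtered ** 2))
    sub = int(FS * sub_bin_ms / 1000)               # samples per sub-bin
    n_sub = len(filtered) // sub
    minima = filtered[: n_sub * sub].reshape(n_sub, sub).min(axis=1)
    return int(np.sum(minima < threshold))
```

Because only per-sub-bin minima are compared with a negative threshold, a single large spike is counted at most once per sub-bin, which is the point of the sub-binning described above.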
Filter calibration. Filter calibration was performed at the beginning of each
session using data acquired over several ‘blocks’ of 18–24 trials (each block lasting
approximately 3–6 min). The process began with one open-loop filter initializa-
tion block, in which the participants were instructed to imagine that they were
controlling the movements of the robot arm as it performed pre-programmed
movements along the cardinal axes. The trial sequence was a centre–out–back
pattern. Each block began with the endpoint of the robot arm at the ‘home’ target
in the middle of the workspace. The hand would then move to a randomly selected
target (distributed equidistant from the home target on the cardinal axes), pause
there for 2 s, then move back to the home target. This pattern was repeated two or
three times for each target. To initialize the Kalman filter (refs 12, 36), a tuning function
was estimated for each unit by regressing its threshold crossing rates against
instantaneous target directions (see below). For participant T2, a 0.3 s exponential
smoothing filter was applied to the threshold crossing rates before filter calibration.
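The open-loop initialization step reduces, in essence, to a least-squares regression of each unit's threshold crossing rates against the instantaneous target direction. A minimal sketch under that assumption follows; the helper name and the exact form of the encoding model (baseline plus a linear direction term) are illustrative, not taken from the study's software.

```python
import numpy as np

def fit_tuning_functions(rates, directions):
    """Least-squares fit of rate ≈ b0 + B·d for each unit.

    rates:      (T, N) threshold-crossing rates, one row per time bin
    directions: (T, 3) unit vectors toward the current target
    Returns baseline rates (N,) and tuning coefficients (N, 3),
    which can seed the observation model of a velocity Kalman filter.
    """
    X = np.hstack([np.ones((len(directions), 1)), directions])  # (T, 4)
    coef, *_ = np.linalg.lstsq(X, rates, rcond=None)            # (4, N)
    return coef[0], coef[1:].T
```

Each unit's fitted coefficient vector plays the role of a preferred direction; stacking them gives the observation matrix used to start closed-loop calibration.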
Open-loop filter initialization was followed by several blocks of closed-loop
filter calibration (adapted to the Kalman filter from refs 37 and 38), in which
the participant actively controlled the robot to acquire targets, in a similar
home–out–back pattern, but with the home target at the right of the workspace
(Supplementary Fig. 1). In each closed-loop filter calibration block, the error in the
participant’s decoded trajectories was attenuated by scaling down decoded movement
commands orthogonal to the instantaneous target direction by a fixed percentage,
similar to the technique used by Velliste et al. (ref. 9). The amount of error
attenuation was decreased across filter calibration blocks until it was zero, giving
the participant full three-dimensional control of the robot.
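The error-attenuation scheme can be illustrated by decomposing each decoded velocity command into components parallel and orthogonal to the instantaneous target direction and shrinking only the orthogonal part. This is a hedged sketch under that reading: the function name is hypothetical and the schedule by which attenuation was reduced across blocks is not reproduced.

```python
import numpy as np

def attenuate_error(v_decoded, hand_pos, target_pos, attenuation):
    """Scale down the decoded-velocity component orthogonal to the
    hand-to-target direction. attenuation = 1.0 removes all orthogonal
    error; attenuation = 0.0 passes the decoded command through
    unchanged, i.e. full three-dimensional control by the participant.
    """
    d = target_pos - hand_pos
    d = d / np.linalg.norm(d)                 # unit target direction
    v_parallel = np.dot(v_decoded, d) * d     # component toward target
    v_orth = v_decoded - v_parallel           # error component
    return v_parallel + (1.0 - attenuation) * v_orth
```

Ramping `attenuation` down to zero over successive calibration blocks, as described above, hands control fully back to the participant while the decoder is refined on progressively less-assisted data.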
During each closed-loop filter calibration block, the participant’s intended
movement direction at each moment was inferred to be from the current endpoint
of the robot hand towards the centre of the target. Time bins from 0.2 to 3.2 s
after the trial start were used to calculate tuning functions and the baseline rates
(see below) by regressing threshold crossing rates from each bin against the

The research was supported by the Rehabilitation Research and Development Service, Office of Research and Development, Department of Veterans Affairs (Merit Review Awards B6453R and A6779I; Career Development Transition Award B6310N).