Journal ArticleDOI

Progress and prospects of the human-robot collaboration

TL;DR: The main purpose of this paper is to review the state-of-the-art on intermediate human–robot interfaces (bi-directional), robot control modalities, system stability, benchmarking and relevant use cases, and to extend views on the required future developments in the realm of human-robot collaboration.
Abstract: Recent technological advances in hardware design of the robotic platforms enabled the implementation of various control modalities for improved interactions with humans and unstructured environments. An important application area for the integration of robots with such advanced interaction capabilities is human-robot collaboration. This aspect represents high socio-economic impacts and maintains the sense of purpose of the involved people, as the robots do not completely replace the humans from the work process. The research community's recent surge of interest in this area has been devoted to the implementation of various methodologies to achieve intuitive and seamless human-robot-environment interactions by incorporating the collaborative partners' superior capabilities, e.g. human's cognitive and robot's physical power generation capacity. In fact, the main purpose of this paper is to review the state-of-the-art on intermediate human-robot interfaces (bi-directional), robot control modalities, system stability, benchmarking and relevant use cases, and to extend views on the required future developments in the realm of human-robot collaboration.

Summary (3 min read)

Introduction

  • Ideally, each active component of such a system must be capable of observing and estimating the counterparts’ contributions to the overall system’s response through the fusion and processing of the sensory information [Argall and Billard, 2010, Ebert and Henrich, 2002, Lallée et al., 2012].
  • Hence, the focus of this review paper will be on other important aspects of physical human-robot collaboration (PHRC).
  • Humans’ significant cognitive abilities in learning and adaptation to various task demands and disturbances can be used to supervise the collaborative robots’ superior physical capabilities.

2 Interfaces for Improved Robot Perception

  • This has contributed to the development of implicit and explicit communication standards so that task-related information can be perceived and communicated intuitively [Sebanz et al., 2006].
  • Their usage is mostly limited to activating high-level robot operations, and the task complexity can potentially prevent the robot from deriving the desired sensorimotor behaviour from these higher-level modalities.
  • The interface was built on the use of EMG on the user arm and force sensors on robot end-effector.
  • Participants would naturally gaze at the robot’s hands or face to communicate the focus of attention of the collaborative action, while speaking to the robot to describe each action.
  • These results are in accord with previous work, see e.g. [Kilner et al., 2003, Ugur et al., 2015].

3 Interfaces for Improved Human Perception

  • The visual and auditory systems of the humans provide powerful sensory inputs that contribute to a fast and accurate perception of the movement kinematics and the environment, and a constant update of the internal models.
  • Joint attention or proactivity can improve the mutual awareness, hence the task performance, as shown by [Ivaldi et al., 2014] for a dyadic learning task.
  • Since in most collaborative scenarios the human partner comes to physical contact(s) with the object and/or the robot in a closed dynamic chain, a large amount of meaningful information can be perceived by the human receptors.
  • The state-of-the-art includes several non-invasive techniques to present haptic stimuli to robot operators, by delivering different types of stimuli to the human limb.
  • This is not only because cheaper and easily applicable feedback systems can replace full force feedback with little or no performance reduction, but also because they can resolve fundamental issues such as closed-loop stability [Tegin and Wikander, 2005].

4 Interaction Modalities

  • This section aims at presenting an overview of the different strategies to endow the robot with interaction capabilities.
  • While the interfaces and the underlying perception mechanisms are dealt with in Section 2, this part discusses different approaches to link the perception inputs to their actual effects in terms of robot behaviour.
  • The same formalism has been further developed in [Kosuge and Kazamura, 1997b, Tsumugiwa et al., 2002b, Albu-Schäffer et al., 2007, Gribovskaya et al., 2011b] to explicitly account for the human as the source of interaction forces.
  • The work in [Bestick et al., 2015] presented a framework for the parameter and state estimation of personalised human kinematic models using motion capture data.
  • Another control architecture considering both visual and force information was presented in [Zanchettin and Rocco, 2015], where visual information was used to track an object while the human and the robotic manipulator are interacting by means of an impedance control scheme.

5 Stability and Transparency of the PHRC Systems

  • When humans interact physically with robots, the robot control faces critical challenges in achieving performance while ensuring stability.
  • Ficuciello et al. in [Ficuciello et al., 2014] proposed to use the robot’s redundancy to ensure a decoupled apparent inertia at the end-effector, thus increasing the range of stable impedance parameters.
  • Let us consider the simple case of the human arm interacting with a robotic manipulator at the end-effector.
  • These studies support the idea that the CNS acts like an impedance controller at the level of the endpoint, ensuring stability, and reducing movement variability by increasing the impedance to reject disturbances (a compact formulation of this endpoint-impedance view is sketched below).
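As a reading aid (the notation here is ours, not the paper’s), the endpoint-impedance view admits a compact formulation. Modelling each agent’s port behaviour at the contact as a mechanical impedance from velocity to force,

\[ Z_i(s) = M_i s + D_i + \frac{K_i}{s}, \qquad i \in \{h, r\}, \]

the coupled human-robot pair behaves at the contact point as the sum \(Z_h(s) + Z_r(s)\), and a standard sufficient condition for stable interaction is that this sum remains passive: \(\mathrm{Re}[Z_h(j\omega) + Z_r(j\omega)] = D_h + D_r \ge 0\) for all \(\omega\). In particular, damping contributed by the human arm can compensate a robot controller that would otherwise render non-passive (negative-damping) behaviour, which is one way to read the CNS-as-impedance-controller interpretation above.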

6 Benchmarking and Relevant Use Cases

  • Over the last decade, several research groups aimed at evaluating the quality of human-robot interaction and collaboration by examining the acceptability of the framework by human volunteers.
  • Authors in [Kahn et al., 2006] and [Feil-Seifer et al., 2007] proposed several benchmarking methods based on psychological assessments.
  • Some other developments are focused on collaborative aspects in manipulating non-rigid or articulated objects [Colomé et al., 2015].
  • In [Kosuge et al., 1998] the robot was commanded to deform a flexible metal sheet and support its payload so that the human could easily handle it.
  • Finally, a recent trend in collaborative robotics research is devoted to the design of robots that take into account the ergonomics requirements typical of industrial applications [Maurice et al., 2017].

7 Discussions and outlook

  • The enhanced physical dexterity of the new generation of robotic platforms has paved the way towards their integration into robotics-enabled service and care applications.
  • While HRC is expected to have a significant economic impact on industry at large, it will also serve to maximise the social impact by maintaining the sense of purpose of the involved people in the work process.
  • In particular, despite the availability of several technologies, e.g. force feedback, augmented reality, etc., the amount of information (and its level of detail) the robot should communicate to the human is still an open research topic.
  • Such indices can be task-specific (a common approach in industry); however, this may limit the cross-application comparability of the HRC frameworks.
  • To conclude, this review paper was intended to give an updated overview of the state-of-the-art and recent research trends in human-robot collaboration.


HAL Id: hal-01643655
https://hal.archives-ouvertes.fr/hal-01643655
Submitted on 21 Nov 2017
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Progress and Prospects of the Human-Robot Collaboration
Arash Ajoudani, Andrea Maria Zanchettin, Serena Ivaldi, Alin Albu-Schäffer, Kazuhiro Kosuge, Oussama Khatib

To cite this version:
Arash Ajoudani, Andrea Maria Zanchettin, Serena Ivaldi, Alin Albu-Schäffer, Kazuhiro Kosuge, et al.. Progress and Prospects of the Human-Robot Collaboration. Autonomous Robots, Springer Verlag, 2017, pp.1-17. 10.1007/s10514-017-9677-2. hal-01643655

Progress and Prospects of the Human-Robot Collaboration

Arash Ajoudani, Andrea Maria Zanchettin, Serena Ivaldi, Alin Albu-Schäffer, Kazuhiro Kosuge, and Oussama Khatib
Arash Ajoudani is with the HRI² Lab of the Istituto Italiano di Tecnologia, Genoa, Italy. Email: arash.ajoudani@iit.it
Andrea Maria Zanchettin is with Politecnico di Milano, Dipartimento di Elettronica, Informazione e Bioingegneria, Milano, Italy. Email: andreamaria.zanchettin@polimi.it
Serena Ivaldi is with INRIA Nancy Grand-Est, France, and the Intelligent Autonomous Systems Lab of TU Darmstadt, Germany. Email: serena.ivaldi@inria.fr
Alin Albu-Schäffer is with the Institute of Robotics and Mechatronics, German Aerospace Center (DLR), Germany. Email: Alin.Albu-Schaeffer@dlr.de
Kazuhiro Kosuge is with the System Robotics Laboratory, Tohoku University, Japan. Email: kosuge@m.tohoku.ac.jp
Oussama Khatib is with the Stanford Robotics Laboratory, Stanford University, USA. Email: ok@robo.stanford.edu
Fig. 1: Number of publications on the topic of human-robot collaboration from 1996 to 2015 (y-axis: number of published papers; series: Human Robot “Collaboration OR Cooperation” and “User OR Human” Intention). The contribution of the research keyword “human intention” (red) to the numbers is illustrated in this plot. The data is extracted from Google Scholar.
1 Introduction
The fast growing demands for service robot applications in home or industrial workspaces have led to the development of several well-performing robots equipped with rich proprioception sensing and actuation control. Such systems that range from robotic manipulators [Albu-Schäffer et al., 2007] to full humanoids [Tsagarakis et al., 2016, Ott et al., 2006, Kaneko et al., 2008, Radford et al., 2015] are expected to help the human user in various tasks, some of which require collaborative effort for a safe¹, successful, and time and energy efficient execution. In fact, the integration of robotic systems in collaborative scenarios has seen an extensive and fast-growing research effort, a tentative estimation of which is provided in Fig. 1 by referring to the number of publications on this topic over the last two decades.
Physical human-robot collaboration (PHRC), which falls within the general scope of physical human-robot interaction (see [De Santis et al., 2008, Murphy, 2004, Alami et al., 2006]), is defined when human(s), robot(s) and the environment come to contact with each other and form a tightly coupled dynamical system to accomplish a task [Bauer et al., 2008, Krüger et al., 2009]. Ideally, each active component of such a system must be capable of observing and estimating the counterparts’ contributions to the overall system’s response through the fusion and processing of the sensory information [Argall and Billard, 2010, Ebert and Henrich, 2002, Lallée et al., 2012]. As a consequence, an appropriate reactive behaviour can be replicated (e.g. by the human from a set of obtained skills in previous attempts of performing a similar task) or developed to complement and improve the performance of the collaborative partners.
¹ The problem of safety in human-robot interaction (HRI) and the related open issues have been extensively discussed in the literature [Haddadin et al., 2009, De Santis et al., 2008, Alami et al., 2006]. Hence, our focus in this review paper will be on other important aspects of physical human-robot collaboration (PHRC).

Similar to the humans’ anticipatory (feed-forward [Shadmehr and Mussa-Ivaldi, 1994b]) or feed-back [Todorov and Jordan, 2002] mechanisms to develop a suitable motor behaviour, the collaborative robots’ response to sensory input can be achieved through model/knowledge based techniques [Tamei and Shibata, 2011, Ogata et al., 2003, Kimura et al., 1999, Magnanimo et al., 2014], the implementation of feedback controllers with pre-set interaction modalities [Peternel et al., 2016c, Donner and Buss, 2016a] or a combined approach [Rozo et al., 2013, Peternel et al., 2016b, Lawitzky et al., 2012b, Palunko et al., 2014]. A key strategy in this direction is the establishment of a shared authority framework in which the significant capabilities of both humans and robots can be exploited. For instance, humans’ significant cognitive abilities in learning and adaptation to various task demands and disturbances can be used to supervise the collaborative robots’ superior physical capabilities. The increasing slope in the research community’s interest towards the integration of human intention in real-time adaptation of the robot behaviour is illustrated in Fig. 1.
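Purely as an illustration of the shared-authority idea (none of the names below come from the paper, and the fatigue-driven adaptation rule is our own assumption), a minimal sketch in Python could blend a human command with an autonomous one through a single authority gain:

    import numpy as np

    def blend_commands(u_human, u_robot, alpha):
        """Shared authority: alpha = 1 -> human leads, alpha = 0 -> robot leads."""
        return alpha * np.asarray(u_human) + (1.0 - alpha) * np.asarray(u_robot)

    def update_authority(alpha, fatigue, rate=0.05):
        """Hypothetical rule: shift authority to the robot as estimated fatigue grows."""
        target = 1.0 - np.clip(fatigue, 0.0, 1.0)  # tired human -> more robot autonomy
        return alpha + rate * (target - alpha)     # first-order tracking of the target

    # Example: the human steers while the robot tracks its planned reference.
    alpha = 1.0
    for step in range(3):
        fatigue = 0.3 * step                       # stand-in for an EMG-based estimator
        alpha = update_authority(alpha, fatigue)
        print(step, alpha, blend_commands([0.2, 0.0], [0.0, 0.1], alpha))

In a real system the authority gain would of course be driven by the estimators discussed in Section 2 (EMG, vision, force), not by a synthetic counter.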
With this in mind, the main purpose of this paper is to review the state-of-the-art on the key enabling technologies to achieve a seamless and intuitive human-robot collaboration. Although hardware-related components are among the most critical to achieve this goal, a remarkable effort has been recently devoted to overview the underlying progress in terms of communication [Wang et al., 2005], sensing [Tegin and Wikander, 2005], and actuation [Vanderborght et al., 2013a] developments. Hence, our focus in this review will be on other relevant key elements of the HRC, i.e. human-robot interfaces (bi-directional), robot control modalities, system stability, benchmarking and relevant use cases. In addition, with the aim to achieve a reasonable convergence of views for the required future developments, our focus will be mostly on the physical aspects² of the human-robot collaboration.
2 Interfaces for Improved Robot Perception
Humans embrace a diversity of experiences from working together in pairs or groups. This has contributed to the development of implicit and explicit communication standards so that task-related information can be perceived and communicated intuitively [Sebanz et al., 2006]. In fact, one of the main objectives in the realm of physical human-robot collaboration is to design and establish similar communication standards so that the robot is aware of human intentions and needs in various phases of the collaborative task [Klingspor et al., 1997, Bauer et al., 2008]. Despite the fact that the robotic replication of the human sensory system is hardly possible with the current technology, the understanding and implementation of the underlying communication principles can potentially lead to an enhanced physical human-robot interaction performance [Reed and Peshkin, 2008].

² A good overview of the cognitive aspects in HRC can be found in the literature, e.g., see [Fong et al., 2003, Freedy et al., 2007, Rani et al., 2004].
A widely known example of such a communication interface is built on the use of visual [Perzanowski et al., 1998, Agravante et al., 2014a, Morel et al., 1998] or language commands [Medina et al., 2012a, Miyake and Shimizu, 1994, Petit et al., 2013], as user-friendly means of communicating with a robot, from the human standpoint, given that the human is not required to learn additional tools. The use of head, body or arm gestures is a common example in the areas of human-robot interaction and collaboration [Li et al., 2005, Carlson and Demiris, 2012]. In this direction, a method to interpret the human intention from the latest history of the gaze movements and to generate an appropriate reactive response in a collaborative setup was proposed in [Sakita et al., 2004]. Authors in [Hawkins et al., 2013] developed a vision-based interface to predict in a probabilistic manner when the human will perform different subtasks that may require robot assistance. The developed technique allows for the tracking of the human variability, environmental constraints, and task structure to accurately analyse the timings of the human partner’s actions.
Although such interfaces appear natural to the humans, their usage is mostly limited to activating high-level robot operations, and the task complexity can potentially prevent the robot from deriving the desired sensorimotor behaviour from these higher-level modalities. In fact, a large degree of robot autonomy, which is far beyond current capabilities of the autonomous robots, is required for vision or auditory based interfaces to function on a wide range of applications.
An alternative approach to the design of human-robot interfaces recognises the use of force/pressure sensors in contact to anticipate the objective of the human partner and/or to control the cooperation effort. Due to the simplicity of the underlying mechanism, it has been explored in several applications; examples include collaborative object transportation [Ikeura and Inooka, 1995a, Kosuge and Kazamura, 1997a, Al-Jarrah and Zheng, 1997a, Tsumugiwa et al., 2002a, Duchaine and Gosselin, 2007, Agravante et al., 2014a, Gribovskaya et al., 2011a, Rozo et al., 2014, Rozo et al., 2015, Adams et al., 1996], object lifting [Evrard and Kheddar, 2009, Evrard et al., 2009], object placing [Tsumugiwa et al., 2002a, Gams et al., 2014], object swinging [Donner and Buss, 2016a, Palunko et al., 2014], posture assistance [Ikemoto et al., 2012, Peternel and Babič, 2013], and industrial complex assembly processes [Krüger et al., 2009] (see also Fig. 2).
Fig. 2: An example of human-robot collaborative manipulation in a productive environment (image courtesy of ABB received from www.abb.cz).

In most of the above techniques, the interaction forces/torques are used to regulate the robot control parameters and trajectories following the admittance [Duchaine and Gosselin, 2009, Lecours et al., 2012] or impedance [Tsumugiwa et al., 2002a, Agravante et al., 2014a] causality [Hogan, 1985]. Notwithstanding the wide margin of applications, collaborative tasks that involve simultaneous interaction with rough or uncertain environments (e.g. co-manipulative tool-use) can induce various unpredictable force components to the sensor readings [Peternel et al., 2014]. This can significantly reduce the suitability of such an interface in more complex interaction scenarios, since it can be difficult to distinguish the components related to the active counterpart(s) behaviour from the ones generated by the interaction with the environment.
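To make the admittance causality concrete, a one-dimensional sketch is given below: the measured interaction force drives a virtual mass-damper whose velocity becomes the robot’s motion reference. The parameter values are arbitrary assumptions for illustration, not taken from the cited works.

    def admittance_step(f_ext, v, M=5.0, D=20.0, dt=0.002):
        """One Euler step of M*dv/dt + D*v = f_ext: force in, reference velocity out."""
        return v + (f_ext - D * v) / M * dt

    # A constant 10 N push settles at f/D = 0.5 m/s after a few time constants (M/D = 0.25 s).
    v = 0.0
    for _ in range(2000):   # 4 s of simulated interaction at 500 Hz
        v = admittance_step(10.0, v)
    print(f"steady-state velocity ~ {v:.3f} m/s")

The impedance causality inverts this relation: the controller measures motion deviations and renders the corresponding force.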
Bio-signals such as electromyography (EMG) and electroencephalography (EEG), or other physiological indices such as electrodermal activity [Pecchinenda, 1996, Rani et al., 2006], can be used to anticipate the human intention in PHRC. In particular, due to the adaptability and ease-of-use of EMG measurements, they have found a wide range of applications in human-in-the-loop robot control such as: prosthesis [Farry et al., 1996, Jiang et al., 2009, Farina et al., 2014, Castellini et al., 2014, Strazzulla et al., 2017], exoskeletons [Rosen et al., 2001, Fleischer and Hommel, 2008] and industrial manipulator control [Vogel et al., 2011, Peternel et al., 2014, Ajoudani, 2016, Gijsberts et al., 2014]. Peternel et al. used EMG signals to anticipate the stiffening/complying behaviour of a torque controlled robotic arm in a co-manipulation task [Peternel et al., 2016c]. Through this interface, the leading/following roles of the human and the robot counterpart were estimated online. In another study, Bell et al. used EEG signals to command a partially autonomous humanoid robot through high-level descriptions of the task [Bell et al., 2008].
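The EMG-driven stiffness regulation in [Peternel et al., 2016c] can be schematically reconstructed as follows: rectify and smooth the raw EMG into an activation envelope, then map the normalised activation onto a commanded stiffness range. The filter constant and the linear map below are our assumptions, not the published implementation.

    import numpy as np

    def emg_envelope(emg, fs=1000.0, tau=0.1):
        """Rectify raw EMG and smooth it with a first-order low-pass of time constant tau."""
        alpha = 1.0 / (1.0 + tau * fs)            # discrete smoothing factor for step 1/fs
        env = np.zeros(len(emg))
        for i in range(1, len(emg)):
            env[i] = env[i - 1] + alpha * (abs(emg[i]) - env[i - 1])
        return env

    def stiffness_command(envelope, k_min=100.0, k_max=1000.0):
        """Map normalised activation in [0, 1] to an assumed stiffness range [N/m]."""
        act = np.clip(envelope / (envelope.max() + 1e-9), 0.0, 1.0)
        return k_min + act * (k_max - k_min)

    # Synthetic example: a burst of muscle activity after t = 1 s stiffens the command.
    t = np.arange(0.0, 2.0, 0.001)
    emg = 0.05 * np.random.randn(t.size) + 0.5 * (t > 1.0) * np.random.randn(t.size)
    k = stiffness_command(emg_envelope(emg))
    print(f"stiffness before burst: {k[:1000].mean():.0f} N/m, after: {k[1000:].mean():.0f} N/m")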
A remarkable use of bio-signals in the development of HR interfaces is to estimate the human physical (e.g. fatigue) or cognitive (anxiety, in-attention, etc.) state variations that might deteriorate the collaborative robot(s)’ performance. The authors in [Rani et al., 2004] developed a method to detect human anxiety in a collaborative setup by extracting features from EMG, electrocardiography (ECG) and electrodermal responses. In a similar work, the human physical fatigue was detected and used to increase the robot’s contribution to the task execution [Peternel et al., 2016b].
Fig. 3: Authors in [Peternel et al., 2016b] proposed a human-robot co-manipulation framework for robot adaptation to human fatigue. The myoelectric interface provides the robot controller with feedback about human motor behaviour to achieve an appropriate impedance profile in different phases of the task. The human fatigue estimation system provides the robot with the state of the human physical endurance (image courtesy of L. Peternel).
Although an interface that is built on a unique source of sensory data can configure a pre-defined robot behaviour in collaborative settings, the underlying functionality is limited and cannot be easily generalised to cross-domain scenarios. For instance, the use of visual feedback for the estimation of the exchanged amount of energy between the counterparts is less effective than the use of force or pressure sensors. Similarly, the use of bio-signals such as EMGs for tracking of the human limb movements may result in less accurate performances in comparison to the external optical or IMU-based (inertial measurement unit) tracking systems [Corrales et al., 2008]. To address this, a combined approach, associating multi-modal sensory information to different robot control modalities (commonly known as multi-modal interfaces [Mittendorfer et al., 2015, Peternel et al., 2016c]), can be exploited. In this direction, the authors in [Agravante et al., 2014a] proposed a hybrid approach by merging vision and force sensing, to decouple high- and low-level interaction components in a joint transportation task where a human and a humanoid robot carry a table with a freely moving ball on top (see also [Rozo et al., 2016]). A similar work proposed a multi-modal scheme for intelligent and natural human-robot interaction [Böhme et al., 2003] by merging vision-based techniques for user localisation, person localisation and person tracking and their embodiment into a multi-modal overall interaction schema.
In a similar fashion, voice commands were used to pause, stop or resume the execution of a dynamic co-manipulation task, the control parameters of which were regulated by an EMG-based interface (see Fig. 3). In this work, an external tracking system detected the human arm configuration to regulate the robot task frame in real-time. By the same token, authors in [Yang et al., 2016] developed a multi-modal teaching interface on a dual-arm robotic platform. The interface was built on the use of EMG on the user arm and force sensors on the robot end-effector. In this setup, one robotic arm is connected to the tutee’s arm, providing guidance through a variable stiffness control approach, and the other to the tutor, to capture the motion and to feed back the tutee’s performance in a haptic manner. The reference stiffness of the tutor’s arm was estimated in real-time and replicated by the tutee’s robotic arm.

Fig. 4: In [Ivaldi et al., 2016] naive participants (not experts in robotics) interacted with the humanoid iCub to build an object: the physical interaction was at the level of the arms, covered by a soft tactile skin (image courtesy of S. Ivaldi).

Ivaldi et al. [Ivaldi et al., 2016] studied multi-modal communication of people interacting physically with the humanoid iCub to build objects (see Fig. 4). Participants would naturally gaze at the robot’s hands or face to communicate the focus of attention of the collaborative action, while speaking to the robot to describe each action. The authors found that individual factors of the participants influence the production of referential cues, both in speech and gaze: in particular, people with a negative attitude towards robots avoid gazing at the robot, while extroverted people speak more to the robot during the collaboration. The robot, controlled in impedance with a low stiffness at the joints, switched to zero-torque control when the humans were grasping the robot arms covered by a tactile skin (enabling precise contact estimation [Fumagalli et al., 2012]) and giving the voice command to “be compliant”. This multi-modal command strategy allowed the participants, not experts in robotics and mostly interacting with a robot for the first time, to physically move the robot arms in an intuitive way, without generating anxiety for their safety, as reported by the participants in their interviews. Despite the lack of experience, all the human participants were able to interact physically with the robot to teach the task. Facilitated probably by the child-like appearance of the iCub, the participants naturally acted as teachers/caregivers, in line with the observations of [Nagai and Rohlfing, 2009]; however, it has to be remarked that in that situation of physical interaction the authors did not observe the exaggerated movements typical of parental motionese/scaffolding situations in HRI, where often there is little physical interaction and more social cues such as gaze or gestures. These results are in accord with previous work, see e.g. [Kilner et al., 2003, Ugur et al., 2015].
The improved performance of the multi-modal interfaces in the generation of compound robot behaviours that are required to execute more complex collaborative tasks has shifted the attention towards the usage and fusion of multi-source sensory information. Results of Google Scholar suggest that over 76% of the publications in the area of human-robot collaboration used multi-modal interfaces in 2015. Nevertheless, the inclusion of more communication channels in the development of the intermediate interfaces will potentially contribute to an increase in the human cognitive burden and the low-level robot control complexity. This may affect the intuitiveness of the interface and result in an excessive human effort to operate a specific robot modality. A solution to this issue can be obtained by the introduction of shared communication modalities [Green et al., 2008, Lackey et al., 2011]. Alternatively, robotic learning techniques such as gradual mutual adaptation [Ikemoto et al., 2012, Peternel et al., 2016a], reinforcement learning [Palunko et al., 2014] or learning from demonstration [Evrard et al., 2009, Lawitzky et al., 2012a, Rozo et al., 2015] can be exploited to weaken the communication loops’ demands (e.g. bandwidth, number of feedback modalities) due to an increased level of robot autonomy.
3 Interfaces for Improved Human Perception
The visual and auditory systems of the humans provide powerful sensory inputs that contribute to a fast and accurate perception of the movement kinematics and the environment, and a constant update of the internal models. The role of such sensory inputs in dynamic perception of the environment, e.g. anticipating the weight of an object through vision [Gordon et al., 1993], and estimating a required amount of force to move the object along a pre-defined path [Johansson, 1998], has also been investigated.
During collaboration and dyadic interaction, mutual gaze and joint attention are common ways of conveying information [Tomasello, 2009]. Such mechanisms are often implemented in robots to make the interaction more effective by providing additional back-channels. For example, in [Ivaldi et al., 2014] the robot was equipped with anticipatory gaze mechanisms and proactive behaviours, increasing the pace of the interaction and reducing the reaction time of the human to the robot’s cues.
In a similar study, Dumora et al. [Dumora et al., 2012], studying haptic communication between an operator and a robot for a bar transportation task, observed that wrench measurements provide incomplete information to detect the operator’s intent of motion. This has been also observed in [Reed, 2012] for cooperating dyads able to communicate

Citations
Journal ArticleDOI
21 Jun 2019-Science
TL;DR: The progress made in robotics to emulate humans’ ability to grab, hold, and manipulate objects is reviewed, with a focus on designing humanlike hands capable of using tools.
Abstract: BACKGROUND Humans have a fantastic ability to manipulate objects of various shapes, sizes, and materials and can control the objects’ position in confined spaces with the advanced dexterity capabilities of our hands. Building machines inspired by human hands, with the functionality to autonomously pick up and manipulate objects, has always been an essential component of robotics. The first robot manipulators date back to the 1960s and are some of the first robotic devices ever constructed. In these early days, robotic manipulation consisted of carefully prescribed movement sequences that a robot would execute with no ability to adapt to a changing environment. As time passed, robots gradually gained the ability to automatically generate movement sequences, drawing on artificial intelligence and automated reasoning. Robots would stack boxes according to size, weight, and so forth, extending beyond geometric reasoning. This task also required robots to handle errors and uncertainty in sensing at run time, given that the slightest imprecision in the position and orientation of stacked boxes might cause the entire tower to topple. Methods from control theory also became instrumental for enabling robots to comply with the environment’s natural uncertainty by empowering them to adapt exerted forces upon contact. The ability to stably vary forces upon contact expanded robots’ manipulation repertoire to more-complex tasks, such as inserting pegs in holes or hammering. However, none of these actions truly demonstrated fine or in-hand manipulation capabilities, and they were commonly performed using simple two-fingered grippers. To enable multipurpose fine manipulation, roboticists focused their efforts on designing humanlike hands capable of using tools. Wielding a tool in-hand became a problem of its own, and a variety of advanced algorithms were developed to facilitate stable holding of objects and provide optimality guarantees. Because optimality was difficult to achieve in a stochastic environment, from the 1990s onward researchers aimed to increase the robustness of object manipulation at all levels. These efforts initiated the design of sensors and hardware for improved control of hand–object contacts. Studies that followed were focused on robust perception for coping with object occlusion and noisy measurements, as well as on adaptive control approaches to infer an object’s physical properties, so as to handle objects whose properties are unknown or change as a result of manipulation. ADVANCES Roboticists are still working to develop robots capable of sorting and packaging objects, chopping vegetables, and folding clothes in unstructured and dynamic environments. Robots used for modern manufacturing have accomplished some of these tasks in structured settings that still require fences between the robots and human operators to ensure safety. Ideally, robots should be able to work side by side with humans, offering their strength to carry heavy loads while presenting no danger. Over the past decade, robots have gained new levels of dexterity. This enhancement is due to breakthroughs in mechanics with sensors for perceiving touch along a robot’s body and new mechanics for soft actuation to offer natural compliance. Most notably, this development leverages the immense progress in machine learning to encapsulate models of uncertainty and support further advances in adaptive and robust control. Learning to manipulate in real-world settings is costly in terms of both time and hardware. 
To further elaborate on data-driven methods but avoid generating examples with real, physical systems, many researchers use simulation environments. Still, grasping and dexterous manipulation require a level of reality that existing simulators are not yet able to deliver—for example, in the case of modeling contacts for soft and deformable objects. Two roads are hence pursued: The first draws inspiration from the way humans acquire interaction skills and prompts robots to learn skills from observing humans performing complex manipulation. This allows robots to acquire manipulation capabilities in only a few trials. However, generalizing the acquired knowledge to apply to actions that differ from those previously demonstrated remains difficult. The second road constructs databases of real object manipulation, with the goal to better inform the simulators and generate examples that are as realistic as possible. Yet achieving realistic simulation of friction, material deformation, and other physical properties may not be possible anytime soon, and real experimental evaluation will be unavoidable for learning to manipulate highly deformable objects. OUTLOOK Despite many years of software and hardware development, achieving dexterous manipulation capabilities in robots remains an open problem—albeit an interesting one, given that it necessitates improved understanding of human grasping and manipulation techniques. We build robots to automate tasks but also to provide tools for humans to easily perform repetitive and dangerous tasks while avoiding harm. Achieving robust and flexible collaboration between humans and robots is hence the next major challenge. Fences that currently separate humans from robots will gradually disappear, and robots will start manipulating objects jointly with humans. To achieve this objective, robots must become smooth and trustable partners that interpret humans’ intentions and respond accordingly. Furthermore, robots must acquire a better understanding of how humans interact and must attain real-time adaptation capabilities. There is also a need to develop robots that are safe by design, with an emphasis on soft and lightweight structures as well as control and planning methodologies based on multisensory feedback.

371 citations

Journal ArticleDOI
TL;DR: In this article, the authors present an investigation into the industry-specific factors that limit the adoption of robotics and automated systems in the construction industry, focusing on three focus groups with 28 experts and an online questionnaire were conducted.
Abstract: The construction industry is a major economic sector, but it is plagued with inefficiencies and low productivity. Robotics and automated systems have the potential to address these shortcomings; however, the level of adoption in the construction industry is very low. This paper presents an investigation into the industry-specific factors that limit the adoption in the construction industry. A mixed research method was employed combining literature review, qualitative and quantitative data collection and analysis. Three focus groups with 28 experts and an online questionnaire were conducted. Principal component and correlation analyses were conducted to group the identified factors and find hidden correlations. The main identified challenges were grouped into four categories and ranked in order of importance: contractor-side economic factors, client-side economic factors, technical and work-culture factors, and weak business case factors. No strong correlation was found among factors. This study will help stakeholders to understand the main industry-specific factors limiting the adoption of robotics and automated systems in the construction industry. The presented findings will support stakeholders to devise mitigation strategies.

210 citations


Cites background from "Progress and prospects of the human..."

  • ...For example, exoskeletons require a high degree of automation and a considerable potential exists on human-robot collaboration [1,92]....

    [...]

Journal ArticleDOI
01 Jul 2021
TL;DR: A comprehensive survey of deep learning applications for object detection and scene perception in autonomous vehicles examines the theory underlying self-driving vehicles from deep learning perspective and current implementations, followed by their critical evaluations.
Abstract: This article presents a comprehensive survey of deep learning applications for object detection and scene perception in autonomous vehicles. Unlike existing review papers, we examine the theory underlying self-driving vehicles from deep learning perspective and current implementations, followed by their critical evaluations. Deep learning is one potential solution for object detection and scene perception problems, which can enable algorithm-driven and data-driven cars. In this article, we aim to bridge the gap between deep learning and self-driving cars through a comprehensive survey. We begin with an introduction to self-driving cars, deep learning, and computer vision followed by an overview of artificial general intelligence. Then, we classify existing powerful deep learning libraries and their role and significance in the growth of deep learning. Finally, we discuss several techniques that address the image perception issues in real-time driving, and critically evaluate recent implementations and tests conducted on self-driving cars. The findings and practices at various stages are summarized to correlate prevalent and futuristic techniques, and the applicability, scalability and feasibility of deep learning to self-driving cars for achieving safe driving without human intervention. Based on the current survey, several recommendations for further research are discussed at the end of this article.

175 citations

01 Jan 2005
TL;DR: This paper describes a general passivity-based framework for the control of flexible joint robots and shows how, based only on the motor angles, a potential function can be designed which simultaneously incorporates gravity compensation and a desired Cartesian stiffness relation for the link angles.
Abstract: This paper describes a general passivity-based framework for the control of flexible joint robots. Recent results on torque, position, as well as impedance control of flexible joint robots are summarized, and the relations between the individual contributions are highlighted. It is shown that an inner torque feedback loop can be incorporated into a passivity-based analysis by interpreting torque feedback in terms of shaping of the motor inertia. This result, which implicitly was already included in earlier work on torque and position control, can also be used for the design of impedance controllers. For impedance control, furthermore, potential energy shaping is of special interest. It is shown how, based only on the motor angles, a potential function can be designed which simultaneously incorporates gravity compensation and a desired Cartesian stiffness relation for the link angles. All the presented controllers were experimentally evaluated on DLR lightweight robots and their performance and robustness shown with respect to uncertain model parameters. Experimental results with position controllers as well as an impact experiment are presented briefly, and an overview of several applications is given in which the controllers have been applied.

174 citations

Journal ArticleDOI
TL;DR: In this article, the authors present a snapshot of the present state of neuromorphic technology and provide an opinion on the challenges and opportunities that the future holds in the major areas of the neuromorphic computing community.
Abstract: Modern computation based on von Neumann architecture is now a mature cutting-edge science. In the von Neumann architecture, processing and memory units are implemented as separate blocks interchanging data intensively and continuously. This data transfer is responsible for a large part of the power consumption. The next generation computer technology is expected to solve problems at the exascale with 10^18 calculations each second. Even though these future computers will be incredibly powerful, if they are based on von Neumann type architectures, they will consume between 20 and 30 megawatts of power and will not have intrinsic physically built-in capabilities to learn or deal with complex data as our brain does. These needs can be addressed by neuromorphic computing systems which are inspired by the biological concepts of the human brain. This new generation of computers has the potential to be used for the storage and processing of large amounts of digital information with much lower power consumption than conventional processors. Among their potential future applications, an important niche is moving the control from data centers to edge devices. The aim of this roadmap is to present a snapshot of the present state of neuromorphic technology and provide an opinion on the challenges and opportunities that the future holds in the major areas of neuromorphic technology, namely materials, devices, neuromorphic circuits, neuromorphic algorithms, applications, and ethics. The roadmap is a collection of perspectives where leading researchers in the neuromorphic community provide their own view about the current state and the future challenges for each research area. We hope that this roadmap will be a useful resource by providing a concise yet comprehensive introduction to readers outside this field, for those who are just entering the field, as well as providing future perspectives for those who are well established in the neuromorphic computing community.

99 citations

References
Journal ArticleDOI
TL;DR: The context for socially interactive robots is discussed, emphasizing the relationship to other research fields and the different forms of “social robots”, and a taxonomy of design methods and system components used to build socially interactive Robots is presented.

2,869 citations


"Progress and prospects of the human..." refers background in this paper

  • ...the robotic replication of the human sensory system is hardly possible with the current technology... A good overview of the cognitive aspects in HRC can be found in the literature, e.g., see [Fong et al., 2003, Freedy et al., 2007, Rani et al., 2004]....

    [...]

Journal ArticleDOI
TL;DR: This work shows that the optimal strategy in the face of uncertainty is to allow variability in redundant (task-irrelevant) dimensions, and proposes an alternative theory based on stochastic optimal feedback control, which emerges naturally from this framework.
Abstract: A central problem in motor control is understanding how the many biomechanical degrees of freedom are coordinated to achieve a common goal. An especially puzzling aspect of coordination is that behavioral goals are achieved reliably and repeatedly with movements rarely reproducible in their detail. Existing theoretical frameworks emphasize either goal achievement or the richness of motor variability, but fail to reconcile the two. Here we propose an alternative theory based on stochastic optimal feedback control. We show that the optimal strategy in the face of uncertainty is to allow variability in redundant (task-irrelevant) dimensions. This strategy does not enforce a desired trajectory, but uses feedback more intelligently, correcting only those deviations that interfere with task goals. From this framework, task-constrained variability, goal-directed corrections, motor synergies, controlled parameters, simplifying rules and discrete coordination modes emerge naturally. We present experimental results from a range of motor tasks to support this theory.

2,776 citations


"Progress and prospects of the human..." refers methods in this paper

  • ...Similar to the humans’ anticipatory (feed-forward [Shadmehr and Mussa-Ivaldi, 1994b]) or feed-back [Todorov and Jordan, 2002] mechanisms to develop a suitable motor behaviour, the collaborative robots’ response to sensory input can be achieved through model/knowledge based techniques [Tamei and…...

    [...]

Journal ArticleDOI
TL;DR: The investigation of how the CNS learns to control movements in different dynamical conditions, and how this learned behavior is represented, suggests that the elements of the adaptive process represent dynamics of a motor task in terms of the intrinsic coordinate system of the sensors and actuators.
Abstract: We investigated how the CNS learns to control movements in different dynamical conditions, and how this learned behavior is represented. In particular, we considered the task of making reaching movements in the presence of externally imposed forces from a mechanical environment. This environment was a force field produced by a robot manipulandum, and the subjects made reaching movements while holding the end-effector of this manipulandum. Since the force field significantly changed the dynamics of the task, subjects' initial movements in the force field were grossly distorted compared to their movements in free space. However, with practice, hand trajectories in the force field converged to a path very similar to that observed in free space. This indicated that for reaching movements, there was a kinematic plan independent of dynamical conditions. The recovery of performance within the changed mechanical environment is motor adaptation. In order to investigate the mechanism underlying this adaptation, we considered the response to the sudden removal of the field after a training phase. The resulting trajectories, named aftereffects, were approximately mirror images of those that were observed when the subjects were initially exposed to the field. This suggested that the motor controller was gradually composing a model of the force field, a model that the nervous system used to predict and compensate for the forces imposed by the environment. In order to explore the structure of the model, we investigated whether adaptation to a force field, as presented in a small region, led to aftereffects in other regions of the workspace. We found that indeed there were aftereffects in workspace regions where no exposure to the field had taken place; that is, there was transfer beyond the boundary of the training data. This observation rules out the hypothesis that the subject's model of the force field was constructed as a narrow association between visited states and experienced forces; that is, adaptation was not via composition of a look-up table. In contrast, subjects modeled the force field by a combination of computational elements whose output was broadly tuned across the motor state space. These elements formed a model that extrapolated to outside the training region in a coordinate system similar to that of the joints and muscles rather than end-point forces. This geometric property suggests that the elements of the adaptive process represent dynamics of a motor task in terms of the intrinsic coordinate system of the sensors and actuators.

2,505 citations

Journal ArticleDOI
TL;DR: How studies on joint attention, action observation, task sharing, action coordination and agency contribute to the understanding of the cognitive and neural processes supporting joint action are outlined.

1,598 citations


"Progress and prospects of the human..." refers background in this paper

  • ...This has contributed to the development of implicit and explicit communication standards so that task-related information can be perceived and communicated intuitively [Sebanz et al., 2006]....

    [...]

Journal ArticleDOI
J. R. Napier
TL;DR: It is shown that movements of the hand consist of two basic patterns of movements which are termed precision grip and power grip, which appear to cover the whole range of prehensile activity of the human hand.
Abstract: 1. The prehensile movements of the hand as a whole are analysed from both an anatomical and a functional viewpoint. 2. It is shown that movements of the hand consist of two basic patterns of movements which are termed precision grip and power grip. 3. In precision grip the object is pinched between the flexor aspects of the fingers and that of the opposing thumb. 4. In power grip the object is held as in a clamp between the flexed fingers and the palm, counter pressure being applied by the thumb lying more or less in the plane of the palm. 5. These two patterns appear to cover the whole range of prehensile activity of the human hand.

1,446 citations

Frequently Asked Questions (17)
Q1. What are the contributions mentioned in the paper "Progress and prospects of the human-robot collaboration" ?

This aspect represents high socio-economic impacts and maintains the sense of purpose of the involved people, as the robots do not completely replace the humans from the work process. In fact, the main purpose of this paper is to review the state-of-the-art on intermediate human-robot interfaces (bi-directional), robot control modalities, system stability, benchmarking and relevant use cases, and to extend views on the required future developments in the realm of human-robot collaboration.

Bio-signals such as electromyography (EMG) and electroencephalography (EEG), or other physiological indices such as electrodermal activity [Pecchinenda, 1996,Rani et al., 2006] can be used to anticipate the human intention in PHRC. 

Collaborative transportation of bulky and/or heavy objects is one of the most common candidates to test collaboration modalities and interfaces.

the inclusion of more communication channels in the development of the intermediate interfaces will potentially contribute to an increase in the human cognitive burden and the low level robot control complexity. 

The main advantage of using admittance control schemes or optimal control schemes for adapting the gains and regulating the exchanged forces is that control theory provides rigorous frameworks and methods to prove the stability of the system and to design controls that are robust to perturbations.

With the aim of mimicking the way humans interact with each other, the first combination of perception and control algorithms exploited the two most important senses: vision and touch (tactile perception).

In bimanual visuomotor tasks, even perturbed by deviating force fields, humans rapidly learn to control the interaction-forces by a combination of arm stiffness properties and direct force control [Squeri et al., 2010]. 

It was observed in [Dimeas and Aspragathos, 2016] that the bandwidth of voluntary motion in humans is relatively low and below 2 Hz [de Vlugt et al., 2003]; hence, during physical interaction with the robot it is possible to discriminate the human operator’s intent from the unstable motions thanks to frequency analysis.
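A sketch of that frequency-domain separation (the 2 Hz cutoff follows the text; the filter choice and code are our own assumptions): low-pass the measured interaction force and treat the slow band as voluntary intent, the residual as incipient oscillation.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def split_intent(force, fs=500.0, cutoff=2.0):
        """Split a force signal at ~2 Hz: slow part ~ voluntary intent, rest ~ instability."""
        b, a = butter(2, cutoff, btype="low", fs=fs)
        voluntary = filtfilt(b, a, force)
        return voluntary, force - voluntary

    # Example: a slow 0.5 Hz push superposed with an 8 Hz oscillation of unit amplitude.
    t = np.arange(0.0, 4.0, 1.0 / 500.0)
    force = 5.0 * np.sin(2 * np.pi * 0.5 * t) + np.sin(2 * np.pi * 8.0 * t)
    voluntary, residual = split_intent(force)
    print(f"residual RMS ~ {np.sqrt((residual ** 2).mean()):.2f}")  # close to 1/sqrt(2) ~ 0.71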

As mentioned before, a traditional control concept for achieving these performances is impedance control [Hogan, 1985], which consists in controlling the dynamic behaviour of the robot under the action of an external force, modelling the system as a spring-mass duo, with desired stiffness and damping. 
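In symbols (the standard formulation consistent with [Hogan, 1985]; notation ours), the impedance controller imposes

\[ M\,\ddot{\tilde{x}} + D\,\dot{\tilde{x}} + K\,\tilde{x} = F_{\mathrm{ext}}, \qquad \tilde{x} = x - x_d, \]

so that under an external force \(F_{\mathrm{ext}}\) the end-effector deviates from the desired trajectory \(x_d\) like a mass-spring-damper with tunable inertia \(M\), damping \(D\) and stiffness \(K\). The admittance schemes mentioned above implement the same relation with the opposite causality: measured force in, reference motion out.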

The approach used GMM for modelling the data set of acquired trajectories and a Hidden Markov Model (HMM) for the online prediction phase. 

robotic learning techniques such as: gradual mutual adaptation [Ikemoto et al., 2012, Peternel et al., 2016a], reinforcement learning [Palunko et al., 2014] or learning from demonstration [Evrard et al., 2009, Lawitzky et al., 2012a, Rozo et al., 2015] can be exploited to weaken the communication loops’ demands (e.g. bandwidth, number of feedback modalities) due to an increased level of robot autonomy. 

Haptic information provided by receptors in human limbs (fingertips, arm skin, etc.), on the other hand, represents a very important and complementary input to explore the external environment and for everyday task accomplishments. 

The approach is based on a combination of HMMs and Gaussian Mixture Regression (GMR) to learn and reproduce from a demonstrated set of data. 
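The regression step of that pipeline is compact enough to sketch. Given a mixture fitted over joint input-output data, GMR reproduces the conditional mean; the minimal one-dimensional numpy version below uses our own notation and toy numbers (the cited works additionally wrap the mixture in an HMM for the online phase).

    import numpy as np

    def gmr(x, priors, means, covs):
        """E[y | x] for a GMM over the joint vector [x, y] (1-D input and output).
        means[k] = [mu_x, mu_y]; covs[k] = [[Sxx, Sxy], [Syx, Syy]]."""
        w = np.array([p * np.exp(-0.5 * (x - m[0]) ** 2 / c[0, 0]) / np.sqrt(c[0, 0])
                      for p, m, c in zip(priors, means, covs)])
        w /= w.sum()                                  # responsibility of each component
        y = [m[1] + c[1, 0] / c[0, 0] * (x - m[0])    # per-component conditional means
             for m, c in zip(means, covs)]
        return float(np.dot(w, y))

    # Toy model: two components encoding a trajectory that rises, then eases off.
    priors = [0.5, 0.5]
    means = [np.array([0.25, 1.0]), np.array([0.75, 0.2])]
    covs = [np.array([[0.02, 0.01], [0.01, 0.05]])] * 2
    print([round(gmr(x, priors, means, covs), 2) for x in (0.2, 0.5, 0.8)])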

the impedance control concept is used to simultaneously account for the load sharing problem, the motion control of the manipulated object and the minimisation of internal wrenches. 

Although this topic was not directly addressed in this review (due to the existence of a dense body of literature discussing this aspect [Haddadin et al., 2009, De Santis et al., 2008, Alami et al., 2006]), it ought to be mentioned that a great deal of effort must be directed towards ensuring safety for collaborating humans (to avoid injuries and accidents) and robots (to avoid unacceptable economic losses).

There may be cases where a continuous re-adaptation from both sides would make sense and lead to more efficient collaboration, for example when the robot has to collaborate with a variety of different partners and could improve its skills by continuous learning from the different partners. 

When it comes to validating research results within effective and significant demonstrations, several benchmarking applications are introduced.