
HAL Id: lirmm-00773403
https://hal-lirmm.ccsd.cnrs.fr/lirmm-00773403
Submitted on 13 Jan 2013
Proactive Behavior of a Humanoid Robot in a Haptic
Transportation Task with a Human Partner
Antoine Bussy, Pierre Gergondet, Abderrahmane Kheddar, François Keith,
André Crosnier
To cite this version:
Antoine Bussy, Pierre Gergondet, Abderrahmane Kheddar, François Keith, André Crosnier. Proactive
Behavior of a Humanoid Robot in a Haptic Transportation Task with a Human Partner. Ro-Man’2012:
International Symposium on Robot and Human Interactive Communication, Sep 2012, Université de
Versailles, France. pp.962-967, 10.1109/ROMAN.2012.6343874. lirmm-00773403

Proactive Behavior of a Humanoid Robot in a Haptic Transportation
Task with a Human Partner
Antoine Bussy^1, Pierre Gergondet^1,2, Abderrahmane Kheddar^1,2, François Keith^1, André Crosnier^1
Abstract— In this paper, we propose a control scheme that allows a humanoid robot to perform a complex transportation scenario jointly with a human partner. At first, the robot guesses the human partner's intentions to proactively participate in the task. In a second phase, the human-robot dyad switches roles: the robot takes over the leadership of the task to complete the scenario. During this last phase, the robot is remotely controlled with a joystick. The scenario is realized on a real HRP-2 humanoid robot to assess the overall approach.
I. INTRODUCTION
When two humans transport an object together, such as a table, they are able to guess the other partner's intentions and act accordingly. The mutual understanding of each partner's intentions by the other generates proactive behaviors and good synchronization of the dyad during the task. Moreover, both partners may alternately share the leadership of the task during its execution and take decisions such as turning or stopping, relying on the information they get. Because one partner might know and/or perceive something the other does not, sharing the leadership is desirable [1]. These are the two characteristics we want to reproduce with a humanoid robot performing such a task with a human partner (see illustration in Fig. 1): proactivity and role switching.
Early works on physical Human-Robot Interaction (pHRI) gave the robot a passive role [2] in which the human partner had to apply more force than necessary in order to move the object, due to the causality of the robot's control law. The robot's role was to carry part of the object's vertical load, at the cost of an increase of the horizontal load. Proactivity aims at solving this problem: guessing the human partner's intentions in order to decrease this horizontal load. One approach is to regulate the robot's impedance according to the perceived intentions [3][4]. Another way to be proactive is to guess the human partner's intended trajectory. This is the approach chosen in [5][6] and also the one we choose in this paper. Note that the two approaches are not incompatible.
Compared to existing work, our approach is distinguished by its capability to guess the human partner's intentions for a wide variety of motions, where [5] and [6] only consider point-to-point movements. Furthermore, we separate the recognition of the partner's intended trajectory from the action undertaken to help him/her. Our proactive follower acts similarly to a leader. The difference is that it chooses
^1 Université Montpellier 2-CNRS, LIRMM, Interactive Digital Human group, 161 rue Ada, F-34095, Montpellier cedex 5, France. Contact: <name>@lirmm.fr
^2 CNRS-AIST Joint Robotics Laboratory UMI3218/CRT, Tsukuba, Japan.
to follow a trajectory determined from a guess of its partner's intentions rather than from its own volition. Thus our approach allows natural role switching. We proposed a simpler one-degree-of-freedom control law, based on a study performed with human subjects, in [7]; this article generalizes it.
In Section II, we propose a compliant position control law for both leader and follower modes and show how it can be used for role switching. We describe how a motion decomposition allows the robot to recognize various intended trajectories in Section III. Because how a robot may behave as a leader is beyond the scope of this paper, we present how a human operator takes control of the robot in leader mode with a joystick in Section IV. We test our control scheme in Section V by making our HRP-2 humanoid robot perform the transportation scenario of Fig. 1 with a human partner.
II. TRAJECTORY-BASED CONTROL LAW FOR PHRI
A. Notations and Hypotheses
In this article, we are interested in a horizontal transportation task (two translational and one angular coordinates). We also want our robot to be compliant along the vertical axis. However, we first present our control law with translations only, before introducing the rotation around the vertical axis at the end of this section. We therefore use a classical Cartesian world coordinate system for trajectories X and forces F

    X = (x, y, z)^T    F = (f_x, f_y, f_z)^T    (1)
As the transported object is assumed rigid and we are only considering translations, every point of the object has the same trajectory. Besides, the robot is holding the object firmly without slipping, so every point of the robot's hands follows the same trajectory as the object. Thus we use the middle position of the hands to describe the object's trajectory and denote it X. Forces are computed at this point. This choice is discussed further in Section III.
In this case, the object has the following simple dynamics

    M(Ẍ − G) = F    (2)

where M is the object inertia matrix, G the gravity vector and F the resultant of the forces applied on the object.
Unless specified otherwise, all variables are time-dependent.

Fig. 1. Scenario of the experiment. The human-robot dyad has to carry the table through two doors that form a 90° angle. The dimensions of the table are too big to perform the task with a single bend, so the human has to pass backward through the first door and forward through the second one. The human assumes the leadership of the task while walking backward through the first door, and is then guided by the robot through the second door. During this second phase, the robot is remotely controlled by a second human using a joystick.
B. Proposed Control Law
In this section, our goal is to control the Cartesian position of a supposedly rigid object, while maintaining a safe physical interaction with the human partner. Based on the equilibrium trajectory hypothesis [8], we propose the following simple trajectory-referenced admittance control law [9]:

    F = −B Ẋ + K(X_0 − X)    (3)

where:
    X_0 is the input equilibrium trajectory,
    F is the force applied by the manipulator,
    B and K are constant diagonal damping and stiffness matrices.

The problem is to determine the trajectory X_0 that realizes accurate position control, while keeping the damping B and stiffness K at human-like levels. Given a twice continuously differentiable desired trajectory X_d, the object inertia matrix M and the gravity vector G, we choose

    X_0 = K^(−1) [ M(Ẍ_d − G) + B Ẋ_d + K X_d ].    (4)
In the case of a standalone manipulation, the simple object dynamic equation (2) becomes, with (3) and (4),

    M(Ẍ − Ẍ_d) + B(Ẋ − Ẋ_d) + K(X − X_d) = 0    (5)

whose solution X converges asymptotically to X_d with stable gains M, B and K: with no perturbation, the object follows the trajectory X_d. The admittance control law (3) becomes

    F = M(Ẍ_d − G) + B(Ẋ_d − Ẋ) + K(X_d − X)    (6)

Equation (5) shows that if we can correctly predict the dynamics of the object, i.e. its inertia and all the forces exerted on it, we can adapt the equilibrium trajectory X_0 so that the desired trajectory X_d is reached. In our simple case, this only means estimating the object inertia matrix M, which can easily be done at the experiment start-up by measuring the vertical component of the force. Note that an error in the dynamics prediction results in an error in X_0, which is "filtered" by the admittance in the same way as an error on X.
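The convergence stated by Eq. (5) can be checked numerically. The following is a minimal 1-D sketch, not the paper's implementation: gains are illustrative, gravity is omitted since we look at one horizontal coordinate, and the desired trajectory is a constant position.

```python
# 1-D sketch of the trajectory-referenced admittance law (3)-(6).
# m, b, k and x_d are illustrative values, not the paper's.

def simulate(m=10.0, b=85.0, k=40.0, x_d=1.0, dt=0.001, T=20.0):
    """Integrate the object dynamics M x'' = F with F given by Eq. (6)
    for a constant desired position (so x_d'' = x_d' = 0)."""
    x, v = 0.0, 0.0
    for _ in range(int(T / dt)):
        f = b * (0.0 - v) + k * (x_d - x)   # Eq. (6), horizontal axis
        a = f / m                           # object dynamics, Eq. (2)
        v += a * dt                         # semi-implicit Euler step
        x += v * dt
    return x

# Eq. (5) predicts asymptotic convergence of x to x_d
assert abs(simulate() - 1.0) < 1e-3
```

With positive m, b and k the error dynamics of Eq. (5) are a stable second-order system, so the final position matches the desired one regardless of the particular gains.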
C. Behavior in Collaborative Mode
Here, we assume that the forces applied by the other partner in collaborative mode cannot be predicted. Thus, the method used previously cannot be repeated. However, we show that there is an alternative way to achieve a desired trajectory. We also assume that both partners are able to share the load of the object dynamics. In our simple case, this load consists only of the object's predicted inertia and weight M(Ẍ_d − G), i.e. the object mass. As the partners are sharing the object weight, they can each undertake the inertia of the portion of the object they are carrying. In the following, we assume an equal sharing of M/2 for simplicity. All the reasoning described in this subsection can be straightforwardly extended to several partners. The notations of the previous subsection are reused, indexed with the number of the partner i ∈ {1, 2}.
Applying (3) and (4) for each partner, we obtain the following object dynamic equation

    M Ẍ = Σ_{i=1}^{2} [ (M/2) Ẍ_{d,i} + B_i (Ẋ_{d,i} − Ẋ) + K_i (X_{d,i} − X) ]    (7)
For the sake of clarity, we assume that the human/robot control law is the one we propose, but it is only necessary for X_{d,2} to be the solution of

    (M/2)(Ẍ − G) = F_2(X)    (8)

i.e. the human/robot is able to accurately perform a desired trajectory when transporting half the object alone. In the case where

    X_{d,1} = X_{d,2} = X_d    (9)

the realized trajectory is X_d and F_1 = F_2 = M Ẍ_d / 2, which is the equal sharing of the task. In practice, we would have

    Ẋ_{d,1} = Ẋ_{d,2} = Ẋ_d
    X_d = (K_1 + K_2)^(−1) (K_1 X_{d,1} + K_2 X_{d,2})    (10)
The position offset between X_{d,1} and X_{d,2} results in a co-contraction force between partners, which is observed in [10]. Thus, in order to achieve an equal sharing of the task, both partners must have the same desired trajectory [1]. Otherwise, different desired trajectories result in internal (i.e. non-working) forces exerted by the two partners. These statements are already well known, and predicting a human partner's desired trajectory is one of the main challenges in the pHRI field. However, how X_d is determined is completely independent of our control law, so that the law can be used in both standalone and collaborative modes (leader and follower). The difference between these modes lies in the trajectory planning of X_d. For a proactive follower behavior, X_d must be planned to match the human partner's intentions as well as possible. Besides, assuming we have trajectory planners for each of the three modes (standalone, leader, follower), it is possible to switch the robot behavior as theorized in [1][11] by switching the planners, without changing the control law that regulates the physical interaction.
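The steady-state consequence of Eqs. (7)-(10) can be illustrated with a small numeric check. This is a sketch with illustrative gains: at rest, the object settles at the stiffness-weighted average of the two desired positions, and any mismatch between them produces equal and opposite (purely internal) forces.

```python
# Steady-state behavior implied by Eqs. (7)-(10): zero velocities and
# accelerations, spring terms only. Gain values are illustrative.

def steady_state(k1, k2, xd1, xd2):
    x = (k1 * xd1 + k2 * xd2) / (k1 + k2)   # Eq. (10), scalar case
    f1 = k1 * (xd1 - x)                     # partner 1 stiffness force
    f2 = k2 * (xd2 - x)                     # partner 2 stiffness force
    return x, f1, f2

x, f1, f2 = steady_state(40.0, 40.0, 0.9, 1.1)
assert abs(x - 1.0) < 1e-12     # object rests between the two targets
assert abs(f1 + f2) < 1e-12     # forces cancel: pure co-contraction
```

With equal stiffnesses and a 0.2 m target offset, each partner pushes against the other with a constant force that does no work on the object, matching the internal-force observation cited from [10].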
D. Rotation around the Vertical Axis
If we consider a rotation around the vertical axis in our control law, the object dynamic equations become more complex, with terms that depend on its geometry, even in the translational part. Predicting and compensating these terms is not as simple. We decide not to compensate the object dynamics introduced by the additional rotation, and to let them act as a perturbation on our control law.
III. PROACTIVE TRAJECTORY PLANNER
To be proactive, the robot first needs to correctly guess the human partner's intentions, and thus to locally predict his/her intended actions or trajectories. Motion prediction of the human partner has been addressed throughout the pHRI literature. The strategy generally aims at reducing the problem to the estimation of a handful of parameters that allow generating a complete motion. The most famous example is the minimum jerk model [5][6]. However, this model is usually applied to point-to-point motion and does not fit motions going beyond the reach of the arm, or motions for which the target point is not well defined. When two humans perform a transportation task, they might talk to give each other indications, such as "turn left", "go forward" or "stop". Based on this observation, we suggest decomposing the motion into phases, as has been done for handshaking [4] and dancing [12].
The purpose of this part is to generate a plan for the robot in the form of a desired trajectory X_d that matches the human partner's intentions.
A. Motion Primitives
We d e compose the motion into template sub-motions, or
motion primitives, pictured in Fig. 2:
Stop: no motion;
Walk: walk forward or backward;
Side: walk sideways;
Turn: tu rn on itself;
Walk/Turn: turn while walking forward or backward.
Sequencing these primitives allows the generation of various motions, as in Fig. 3, while preventing some unnatural motions such as walking diagonally, i.e. Walk/Side. Moreover, we do not allow every sequence. For instance, Side cannot
Fig. 2. Finite State Machine describing the possible primitive sequencing. It can generate sequences for both leader and follower modes. The transitions are triggered differently depending on the chosen mode.
Fig. 3. Example of a desired trajectory from the yellow dot to the red dot. The sequence of primitives is Stop, Walk, Walk (with a different V), Stop, Side, Walk, Walk/Turn, Stop. The alternation of black and gray depicts the alternation of primitives. The sequence of V used is given by Table I.
follow Walk: the robot must stop walking before moving sideways. Each primitive is associated with a three-dimensional velocity vector V in a local frame^1 (frontal, lateral and angular velocities), which is updated at each transition. As can be seen from Table I, the signal V is piecewise constant over time and therefore does not represent a feasible trajectory. It should rather be considered as a simplified velocity plan, i.e. a template.
The local desired velocity V_{d,l} is generated from this plan by using a critically damped second-order filter

    V_{d,l} / V = ω_0² / (s + ω_0)²    (11)

where ω_0 characterizes the rise time of the desired trajectory. For example, when transiting from Stop to Walk, the value of the first component of V instantly switches from 0 to 0.5 m/s. This velocity step needs to be smoothed into a more human-like motion with filter (11).
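The smoothing behavior of filter (11) can be sketched in the time domain. The value of ω_0 and the integration step below are illustrative (the paper does not give them); the transfer function ω_0²/(s + ω_0)² corresponds to the second-order ODE y'' + 2ω_0 y' + ω_0² y = ω_0² u.

```python
# Time-domain sketch of the critically damped filter (11) applied to
# the Stop -> Walk velocity step (0 -> 0.5 m/s). w0 is illustrative.

def smooth_step(u=0.5, w0=4.0, dt=0.001, T=3.0):
    y, dy = 0.0, 0.0
    out = []
    for _ in range(int(T / dt)):
        # y'' + 2*w0*y' + w0^2*y = w0^2*u  (critically damped)
        ddy = w0 * w0 * (u - y) - 2.0 * w0 * dy
        dy += ddy * dt
        y += dy * dt
        out.append(y)
    return out

y = smooth_step()
assert abs(y[-1] - 0.5) < 1e-3   # converges to the commanded velocity
assert all(b >= a - 1e-9 for a, b in zip(y, y[1:]))  # no overshoot
```

Critical damping gives the fastest non-overshooting rise, which is why the step never exceeds the commanded 0.5 m/s on its way up.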
Then, a change of frame is performed on V_{d,l} to get the desired velocity V_d in the global frame (x, y, yaw). As we only consider planar motions, the vertical component, as well as the roll and pitch ones, are set to zero to obtain a six-component vector. Finally, V_d is integrated into the desired trajectory X_d in the global frame.

^1 This frame has the same orientation as the robot but has a fixed origin in the world frame.

TABLE I
EXAMPLE OF PRIMITIVE AND V SEQUENCES

Primitive    V
Stop         (0.0, 0.0, 0.0)
Walk         (0.5, 0.0, 0.0)
Walk         (0.4, 0.0, 0.0)
Stop         (0.0, 0.0, 0.0)
Side         (0.0, 0.4, 0.0)
Walk         (0.5, 0.0, 0.0)
Walk/Turn    (0.5, 0.0, 0.5)
Stop         (0.0, 0.0, 0.0)
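The frame change and integration described above can be sketched as follows. This is an assumption-laden illustration, not the paper's code: it rotates the local (frontal, lateral, angular) velocity by the current yaw and integrates with a simple Euler step; function and variable names are invented for the example.

```python
import math

# Sketch: rotate the local plan velocity (frontal, lateral, angular)
# by the current yaw into the global (x, y, yaw) frame, then integrate.

def integrate_plan(v_local_seq, dt=0.01):
    x, y, yaw = 0.0, 0.0, 0.0
    for v_front, v_lat, v_yaw in v_local_seq:
        # planar rotation from the local frame to the global frame
        vx = math.cos(yaw) * v_front - math.sin(yaw) * v_lat
        vy = math.sin(yaw) * v_front + math.cos(yaw) * v_lat
        x += vx * dt
        y += vy * dt
        yaw += v_yaw * dt
    return x, y, yaw

# 1 s of Walk at 0.5 m/s with no rotation: 0.5 m straight ahead
x, y, yaw = integrate_plan([(0.5, 0.0, 0.0)] * 100)
assert abs(x - 0.5) < 1e-9 and abs(y) < 1e-9 and abs(yaw) < 1e-9
```

Feeding a Turn primitive instead, e.g. `(0.0, 0.0, 0.5)`, accumulates yaw while leaving the position unchanged, matching the Turn-on-itself primitive of Fig. 2.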
B. Turning
We parametrize the turning motion with an angular velocity and omit the position of the center of rotation. Therefore the robot is only able to turn around the center of its hands, because of our choice of X in Section II. When collaborating with direct contact (the human partner directly holds the robot's hands) or in standalone mode, this is a reasonable choice, and it has the advantage of not relying on the object's geometry. However, when carrying a table, the choice of the center of rotation depends on the motion to perform. For instance, when only rotating the table (primitive Turn), the center of the object is the most reasonable choice if we wish to minimize the distance both partners have to travel.
Putting the center of rotation too far from the robot's body also forces the robot to perform lateral steps, which puts a lot of strain on it. We therefore keep our choice of Section II and leave the dynamic determination of a center of rotation to future work. The drawback is that the human partner must travel a much greater distance than the robot when rotating the object on the spot.
C. Reactive Generation of Primitive Sequences
In our approach, predicting the leader's intended trajectory consists in determining a primitive sequence that matches it. We mainly use velocity thresholds to detect the switches of primitives. For example, when the current primitive is Stop and the effective velocity V of the object is zero, the robot senses a force on its wrists and updates V with (3). If the first component of V exceeds a given threshold, the robot switches to the primitive Walk.
We use velocity thresholds instead of force thresholds because of the co-contraction force between the partners. This co-contraction force may vary between dyads and trials, so that good force thresholds cannot be found. There is no such problem with the velocity, because it is partly the high-pass filtered force (3). Nevertheless, the leader might increase the force very slowly without ever triggering the velocity thresholds. To avoid such a situation, we also add high force thresholds, which are tuned to be less reactive than the velocity ones.
Self-transitions are also triggered regularly, e.g. every second, to update V with the current velocity of the object, so that the robot is able to adapt its desired velocity. The subsequence Walk, Walk (with a different V) of Fig. 3 is an example of a self-transition. When switching from a primitive to a different one, i.e. not a self-transition, we set V to a fixed default value and let it be updated at the next self-transition one second later.
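The dual velocity/force trigger described above can be sketched as a single transition test. The threshold values and component choice below are illustrative placeholders, not the values used on HRP-2.

```python
# Sketch of one FSM transition test (Stop -> Walk) combining the
# primary velocity threshold with the slower-acting force safeguard.
# Threshold values are illustrative, not the paper's.

V_WALK_THRESHOLD = 0.1   # m/s, frontal velocity component
F_WALK_THRESHOLD = 30.0  # N, high force threshold (less reactive)

def next_primitive(current, v_frontal, f_frontal):
    if current == "Stop":
        # velocity is the main trigger; the high force threshold catches
        # a leader who pushes very slowly but firmly
        if abs(v_frontal) > V_WALK_THRESHOLD or abs(f_frontal) > F_WALK_THRESHOLD:
            return "Walk"
    return current

assert next_primitive("Stop", 0.05, 5.0) == "Stop"   # below both thresholds
assert next_primitive("Stop", 0.15, 5.0) == "Walk"   # velocity trigger
assert next_primitive("Stop", 0.02, 40.0) == "Walk"  # force safeguard
```

In the full FSM each primitive pair would get its own test of this kind, plus the timed self-transitions that refresh V.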
IV. SWITCH TO LEADER MODE WITH A JOYSTICK
As stated in Section II, our pHRI control law is independent of how the desired trajectory X_d is generated, and thus allows easy role switching between follower and leader behaviors. To demonstrate this capability of our control scheme, we generate an intended trajectory X_d for the robot from a joystick. Thus a second human can pilot the robot during the task of transporting the table with the first human partner.
We use a joystick with a digital directional touchpad to control the robot in leader mode. We use the same FSM as in the follower mode (Fig. 2), but the transitions are triggered by the touchpad state instead of haptic cues, thus determining the motion direction. The velocity amplitude is set constant and is not controlled by the joystick. The output plan V from the FSM is then used in the same way as in Section III to compute the desired trajectory X_d for the impedance control. The joystick operator can assume or give up the leadership of the task by pressing a specific key on the joystick. The fact that such minimal joystick input suffices, without any force feedback, attests to the robustness of our control scheme.
V. EXPERIMENTATION ON THE HRP-2 HUMANOID ROBOT
A. Scenario
To validate our proposed control scheme, we realize the scenario described in Fig. 1.
B. Whole Body Motion and Walking
The HRP-2 humanoid robot interacts with its environment through two force-torque sensors mounted on its wrists, which measure two forces F_L and F_R that we transport to point X and sum to get the force feedback F for the admittance controller. The stiffness K and damping B diagonal coefficients are experimentally tuned (Table II). The admittance controller output X is used to position-control the hands through the Stack-of-Tasks (SoT) developed in [13], a generalized inverse kinematics framework. The SoT allows various tasks to be defined (positioning the hands in the world frame in our case) and uses the robot's redundancy to realize them simultaneously.
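The paper only states that F_L and F_R are transported to point X and summed. A hedged planar sketch of that combination is shown below, using the standard wrench-transport rule; the function name, the 2-D restriction and the test values are all illustrative assumptions.

```python
# Planar sketch of combining the two wrist measurements into a single
# feedback wrench at the hands' midpoint X (standard wrench transport;
# the paper does not detail this computation).

def wrench_at_midpoint(p_l, f_l, tau_l, p_r, f_r, tau_r):
    # midpoint of the two hand positions in the horizontal plane
    x = ((p_l[0] + p_r[0]) / 2.0, (p_l[1] + p_r[1]) / 2.0)
    f = (f_l[0] + f_r[0], f_l[1] + f_r[1])

    def cross(r, force):
        # planar cross product r x f -> yaw torque contribution
        return r[0] * force[1] - r[1] * force[0]

    tau = (tau_l + cross((p_l[0] - x[0], p_l[1] - x[1]), f_l)
           + tau_r + cross((p_r[0] - x[0], p_r[1] - x[1]), f_r))
    return f, tau

# equal forward pushes on both hands: pure force, no yaw torque about X
f, tau = wrench_at_midpoint((0.0, 0.3), (10.0, 0.0), 0.0,
                            (0.0, -0.3), (10.0, 0.0), 0.0)
assert f == (20.0, 0.0) and abs(tau) < 1e-12
```

Opposing pushes on the two hands cancel in the summed force but produce a net yaw torque about X, which is what drives the Turn and Walk/Turn primitives in follower mode.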
TABLE II
STIFFNESS AND DAMPING COEFFICIENTS

Stiffness            Damping
K_xy = 40 N/m        B_xy = 85 N·s/m
K_z  = 250 N/m       B_z  = 200 N·s/m
K_θ  = 25 N·m/rad    B_θ  = 50 N·m·s/rad
