This is an author's version published in: http://oatao.univ-toulouse.fr/27368
To cite this version: Nascimento, Hugo, Mujica, Martin and Benoussaad, Mourad. Collision avoidance interaction between human and a hidden robot based on Kinect and robot data fusion. (2021) IEEE Robotics and Automation Letters, 6 (1), 88-94. ISSN 2377-3766.
Official URL: https://doi.org/10.1109/LRA.2020.3032104

Collision Avoidance Interaction Between Human
and a Hidden Robot Based on Kinect and Robot
Data Fusion
Hugo Nascimento¹, Martin Mujica² and Mourad Benoussaad²
Abstract—Human-Robot Interaction (HRI) is a largely ad-
dressed subject today. In order to ensure co-existence and space
sharing between human and robot, collision avoidance is one
of the main strategies for interaction between them without
contact. It is thus usual to use a 3D depth camera sensor
(Microsoft® Kinect V2) which may involve issues related to
occluded robot in the camera view. While several works overcame
this issue by applying infinite depth principle or increasing the
number of cameras, in the current work we developed and
applied an original new approach that combines data of one
3D depth sensor (Kinect) and proprioceptive robot sensors. This
method uses the principle of limited safety contour around the
obstacle to dynamically estimate the robot-obstacle distance, and
then generate the repulsive force that controls the robot. For
validation, our approach is applied in real time to avoid collisions
between dynamical obstacles (humans or objects) and the end-
effector of a real 7-dof Kuka LBR iiwa collaborative robot.
Our method is experimentally compared with existing methods
based on infinite depth principle when the robot is hidden by
the obstacle with respect to the camera view. Results showed
smoother behavior and more stability of the robot using our
method. Extensive experiments of our method, using several
strategies based on distancing and its combination with dodging
were done. Results have shown a reactive and efficient collision
avoidance, by ensuring a minimum obstacle-robot distance (of
240mm), even when the robot is in an occluded zone in the
Kinect camera view.
Index Terms—Collision Avoidance, Sensor-based Control,
Perception-Action Coupling
I. INTRODUCTION
Humans and robots working together or sharing the same space could reach an extraordinary level of performance if they combine human decision-making capabilities and the robot's efficiency [1], [2]. However, this collaboration has to be safe for human beings.
¹Hugo N. is with the Polytechnic School, Automation and Control Department, University of Pernambuco, R. Benfica, 455 - Madalena, Recife - PE, 50720-001, Brazil. hugonascimentoal@gmail.com
²Martin M. and Mourad B. are with LGP-ENIT, University of Toulouse, Tarbes, France; {martin.mujica, mourad.benoussaad}@enit.fr

Human-Robot Interaction (HRI) is a promising and growing trend in robotics research, as shown by the increasing number of works addressing this field [3], [4]. One aspect of HRI is physical Human-Robot Interaction (pHRI), which deals with
collision detection [5] and a continuous physical interaction
[6]. Another aspect of HRI is collision avoidance, where
the robot adapts its predefined trajectory to avoid collision
with dynamic obstacles (humans or objects) [7], [8], [9],
[10]. Collision avoidance using human wearable sensors was explored in [10]. However, this solution adds equipment complexity and thus limits the number of interacting people. Furthermore, collision avoidance based on a 3D depth camera (Microsoft Kinect) was explored in [7], [8], [9]. In these works, it is usual to remove the robot from the scene in order to detect and track only obstacles. The authors of [7] explored the depth space to compute distances between the robot and dynamic obstacles in real time; the robot was then controlled using the virtual repulsive force principle. The obstacle-robot distance estimation methods were explored more deeply in [11], which developed an improved and faster method for real-time application.
However, using a Kinect implies robot occlusion issues when the obstacle is between the robot and the camera. To overcome these issues, different approaches were explored. One approach used multiple Kinects to improve the workspace representation. The authors of [8], [9] used two Kinects in a similar way; however, the authors of [9] applied collision avoidance to a 6-dof robot manipulator while preserving its task by including Cartesian constraints. Furthermore, the use of multiple Kinects increases both the calibration complexity between them and the computational cost.
Another approach, using only one Kinect, considered the obstacle with an infinite depth, called a gray area [6], [7], [11]. This approach prioritizes human safety with an overly conservative behavior, but showed efficient results when the robot is not hidden by the obstacle. However, when the obstacle is placed between the robot and the camera, it is considered close to the robot even when it is far from it along the depth axis, and thus this approach cannot deal with obstacles that completely hide the robot from the camera's view. Indeed, since the robot is perceived only by the camera, its posture cannot be estimated when it is hidden. Moreover, all these previous works that used the Kinect to extract the robot pose had to manage the unavoidable noise coming from the vision system.
In the current work, we explore a new approach for collision
avoidance between dynamic obstacles and the robot’s End-
Effector (E-E), which can be completely hidden by obstacles.
To deal with this case, our method differs from previous works by merging the robot kinematic model and its proprioceptive data into the 3D depth data of the environment. Hence, the robot posture can be estimated, especially when it is not seen by the camera. Moreover, as an alternative to the infinite depth strategy [7], [11] and its above-mentioned issues, we applied a limited safety contour around the obstacle to avoid unnecessary robot movement and to deal with the case of a hidden robot E-E. In these conditions, a comparison between our approach and the existing ones cited above is carried out to show the efficiency of the method and the contribution of the current work. In the next section, an overview of the system and a description of the materials are presented. Our collision avoidance approach is then described in Section III. Results and a comparison with previous methods are presented and discussed in Section IV, and finally a conclusion and the perspectives of this work are summarized in Section V.

Fig. 1. Collision avoidance system overview.
II. SYSTEM SETUP OVERVIEW
This section describes the whole system overview (hard-
ware/software) and introduces the Kinect’s depth principle and
the collaborative robot used.
A. Whole system overview
The whole system overview is presented in Fig. 1, which
is composed of a Perception system and a Control system
working in a closed-loop and in real-time. The perception
combines vision acquisition through Kinect {1} and the robot
pose using joint angles {2} along with its kinematic model
{3}. This robot pose is projected into the depth space using data fusion {4}, which allows removing the robot from the image.
Then, the obstacle’s nearest point to the robot in a supervised
zone is detected {5} and its coordinates are filtered using a
Kalman filter {6} to handle the noise related to the depth
image. The distance between the obstacle’s nearest point and
the robot’s E-E is used in the control part, by generating a
repulsive vector {7} to control and adapt the robot posture
{8} in order to avoid collisions with dynamic obstacles.
B. Depth space representation with Kinect
The Kinect V2 used is placed within a range of 0.5 m to 4.5 m from the robot, and its maximum data rate is about 30 Hz. From the depth image (a grayscale image of 512 × 424 resolution) [12], (1) is used to get a point in the Kinect frame from its pixel address in the depth image.
$x_r = (x_i - c_x)\, d_p / f_x$
$y_r = (c_y - y_i)\, d_p / f_y$
$z_r = d_p$          (1)
Where $c_x$ and $c_y$ are the coordinates of a so-called generic Cartesian point in the X and Y axes, $f_x$ and $f_y$ are the focal lengths along the X and Y axes, and $d_p$ is the depth of the pixel. $(x_i, y_i)$ are the coordinates of the pixel on the image and $(x_r, y_r, z_r)$ represents the real point coordinates in the Kinect frame.
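As an illustration of (1), a minimal back-projection helper can be written as follows; the intrinsic values used in the example call are placeholders, not the calibrated values of the actual setup.

```python
import numpy as np

def pixel_to_point(x_i, y_i, d_p, fx, fy, cx, cy):
    """Back-project a depth pixel (x_i, y_i) with depth d_p into a 3D point
    (x_r, y_r, z_r) expressed in the Kinect frame, following (1)."""
    x_r = (x_i - cx) * d_p / fx
    y_r = (cy - y_i) * d_p / fy
    z_r = d_p
    return np.array([x_r, y_r, z_r])

# Example call with placeholder intrinsics (not the calibrated values)
fx, fy, cx, cy = 365.0, 365.0, 256.0, 212.0
point = pixel_to_point(300, 180, 1500.0, fx, fy, cx, cy)  # pixel ~1.5 m deep
```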
C. Practical aspects of the collaborative robot
A 7-dof redundant manipulator (Kuka LBR Iiwa R820
collaborative robot) has been used. To control the robot and to get its proprioceptive data from an external system in real time, the Fast Robot Interface (FRI) software option was adapted and used [13]. The FRI control is based on an overlay principle, which consists of superposing a control input derived from the external system (here, our method) onto a local robot control law.
III. STRATEGIES OF COLLISION AVOIDANCE
This section describes the methodology used, starting from the perception of the robot and the environment up to the control law, following the steps presented in Fig. 1.
A. Kinematic model of the Robot
The kinematic model of our robot was established from [14]. It is used to describe and update the robot's pose (the skeleton in Fig. 2-left) from the measured robot joint angles (FRI, §II-C). Hence, the robot's pose is updated in real time, even when the Kinect camera does not see it (robot in an occluded zone).
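The skeleton update from measured joint angles can be sketched as a standard forward kinematics computation. The DH table below is left as a placeholder (it is not the actual Kuka LBR iiwa parameterization of [14]); the sketch only shows the structure of the computation.

```python
import numpy as np

def dh_transform(a, alpha, d, theta):
    """Homogeneous transform of one link using standard DH parameters."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def skeleton_points(q, dh_table):
    """Positions of the successive joint frames (the robot 'skeleton')
    for measured joint angles q; the E-E point is the last entry."""
    T = np.eye(4)
    points = [T[:3, 3].copy()]
    for (a, alpha, d), theta in zip(dh_table, q):
        T = T @ dh_transform(a, alpha, d, theta)
        points.append(T[:3, 3].copy())
    return np.array(points)

# dh_table = [(a_i, alpha_i, d_i), ...]  # to be filled with the parameters of [14]
```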
B. Data Fusion of depth image and robot posture
To handle obstacle-robot collision, it is necessary to know which points correspond to the robot, in order to consider all the other points as belonging to the environment (possible obstacles). Indeed, if a point of the robot is not identified and removed from the image, it can be considered a possible obstacle, particularly if it lies in a supervised zone. Therefore, with the kinematic model and the joint angles, a robot skeleton was implemented and updated, like the real robot, on the 3D depth image, which makes the robot identification possible. This skeleton, augmented with a predefined 3D robot shape, is then used to remove the robot from the image and to obtain a depth image without the robot. These steps are illustrated in Fig. 2, where the left side shows the robot skeleton added to the depth image, and the right side shows the depth image with the robot removed. However, these steps are only possible if a data fusion is performed between the depth image space and the robot's intrinsic data. This data fusion consists of linking the robot skeleton, updated from its intrinsic data and model, with the points of the robot (or of its visible part) in the image. Hence, a precise transformation between the Kinect frame and the robot frame is required. For that, an offline calibration procedure was implemented using the three known points technique [15].
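A possible sketch of this robot-removal step is given below, assuming the skeleton points have already been expressed in the Kinect frame through the calibrated transformation. The fixed pixel radius is a hypothetical tuning value; the actual implementation uses a full predefined 3D robot form rather than a per-point radius.

```python
import numpy as np

def remove_robot(depth, skeleton_cam, fx, fy, cx, cy, radius_px=25):
    """Return a copy of the depth image where pixels close to the projected
    robot skeleton are set to 0 (i.e. treated as 'no measurement')."""
    out = depth.copy()
    h, w = depth.shape
    vv, uu = np.mgrid[0:h, 0:w]              # pixel coordinate grids
    for X, Y, Z in skeleton_cam:             # skeleton points in the Kinect frame
        if Z <= 0:
            continue
        u = fx * X / Z + cx                  # inverse of (1)
        v = cy - fy * Y / Z
        out[(uu - u) ** 2 + (vv - v) ** 2 < radius_px ** 2] = 0
    return out
```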

Fig. 2. Robot depth image with its skeleton (left) and after its extraction from the image (right).
Fig. 3. Supervised zone and its nearest point.
C. Searching and filtering of the obstacle nearest point
A supervised zone, in which obstacles are searched for, was chosen and implemented as a spherical shape whose center is the robot's E-E, as illustrated in Fig. 3. The method searches the depth image inside this sphere for the obstacle point nearest to the sphere center (Fig. 3). The collision avoidance strategy is based on the position of this point with respect to (w.r.t.) the robot's E-E, which makes the quality of its estimation essential for robot control and smooth motion. Therefore, to ensure the quality of this estimation, the point's position was filtered with a Kalman filter, since it is known to be fast, optimal and lightweight [16]. To apply this Kalman filter, a constant-velocity model [9] of the point motion was adopted and implemented.
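A minimal constant-velocity Kalman filter for the nearest-point coordinates, in the spirit of this section, could look as follows; the noise covariances are illustrative assumptions, not the values used in the experiments.

```python
import numpy as np

class ConstantVelocityKF:
    """Kalman filter with a constant-velocity model for the nearest point.
    State: [x, y, z, vx, vy, vz]; measurement: [x, y, z]."""

    def __init__(self, dt, q=50.0, r=15.0):
        self.x = np.zeros(6)
        self.P = np.eye(6) * 1e3
        self.F = np.eye(6)
        self.F[:3, 3:] = np.eye(3) * dt       # position integrates velocity
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])
        self.Q = np.eye(6) * q                # process noise (assumed value)
        self.R = np.eye(3) * r                # measurement noise (assumed value)

    def step(self, z):
        # prediction
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # correction with the measured nearest-point position z
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[:3]                     # filtered position
```

At the Kinect's rate of about 30 Hz, dt would be roughly 1/30 s.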
D. Distance estimation
Our collision avoidance approach is based on the robot-obstacle distance estimation. It is calculated as the Euclidean distance $d_1$ between the obstacle's nearest point $P_o$ (see §III-C) and the robot E-E point $P_e$ (see §III-B). To consider the risk of occlusion of the robot by the obstacle, we distinguish two use cases (Fig. 4):
Case 1: No occlusion risk. When the obstacle has a greater depth than the robot E-E ($z_1 > z_e$) from the camera point of view (Fig. 4-Case 1), the distance $d_1$ is used in the collision avoidance method.
Case 2: Risk of occlusion. When the obstacle has a lower depth than the robot E-E ($z_e > z_1$), there is a risk of occlusion. In this case, we do not consider the infinite depth strategy for the obstacle as in [7]. Instead, we use a safety contour around the point we are dealing with (the visible nearest point), as shown in Fig. 4-Case 2. Hence, we limit the influence of the obstacle on the robot while still keeping a safety distance:
$d_2 = d_1 - R$          (2)
Fig. 4. Methods for distance evaluation (two cases). Case 1: no risk of occlusion. Case 2: with risk of occlusion.
Fig. 5. Distancing strategy principle.
Where $d_1$ is calculated as mentioned before and $R$ is the radius of the safety contour. The choice of its value is based on a rough estimation of the obstacle (or human hand) size, considering the longest distance between two of its points. Hence, in Case 2, it is the safety distance $d_2$ that is used in the collision avoidance method.
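The two-case distance evaluation of this section can be condensed into a small helper; this is a sketch under the conventions used above (depth along the Kinect z axis, consistent length units for both points and R), not the authors' exact implementation.

```python
import numpy as np

def obstacle_distance(p_o, p_e, R):
    """Effective obstacle-to-E-E distance used by the avoidance strategy.
    p_o: filtered nearest obstacle point, p_e: robot E-E point (Kinect frame).
    Case 1 (obstacle deeper than the E-E): use d1 directly.
    Case 2 (occlusion risk): subtract the safety-contour radius R, as in (2)."""
    d1 = np.linalg.norm(p_o - p_e)
    if p_o[2] > p_e[2]:      # Case 1: z1 > ze, no occlusion risk
        return d1
    return d1 - R            # Case 2: d2 = d1 - R
```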
E. Potential field method
To ensure collision avoidance between the robot and dynamic obstacles, the potential field method was applied [17]. In this method, the dynamic obstacle creates a repulsive force, which is used here through two strategies: Distancing and Dodging. These strategies are based on intuitive human collision avoidance (e.g., a bullfight).
Distancing strategy: It is an intuitive method that consists of moving the robot away from the obstacle along the same line as the vector $\vec{d}$, which links the obstacle to the robot E-E, by applying a repulsive force as illustrated in Fig. 5.
The model of repulsive force is defined as in [7]:
$\vec{F}_1 = \dfrac{\vec{d}}{\|\vec{d}\|} V$          (3)
Where $V$ is the force intensity, defined as an inverted sigmoid function of the distance between the obstacle and the robot's E-E:
$V = \dfrac{V_{max}}{1 + e^{(\|\vec{d}\|(2/\rho) - 1)\alpha}}$          (4)
$V_{max}$ is the maximal force intensity, $\alpha$ a shape factor and $\rho$ a parameter related to the supervised zone size [7].

Fig. 6. Dodging and distancing combination strategy. (a) Dodging vector. (b) Dodging and distancing.
Therefore, the repulsive force intensity $V$ will be $V_{max}$ when the robot-obstacle distance vanishes, and should approach zero when the distance reaches the supervised zone limit, since the force is not defined beyond it.
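Equations (3) and (4) translate directly into code; the default parameter values shown below are only an example, taken from the experiment of §IV, and distances are assumed to be in meters (consistent with $\rho$).

```python
import numpy as np

def force_intensity(d_norm, v_max=45.0, rho=0.425, alpha=5.0):
    """Inverted-sigmoid repulsive intensity V of (4); d_norm in meters."""
    return v_max / (1.0 + np.exp((d_norm * (2.0 / rho) - 1.0) * alpha))

def distancing_force(d_vec):
    """Distancing force F1 of (3): along the obstacle-to-E-E vector d,
    scaled by the intensity V."""
    d_norm = np.linalg.norm(d_vec)
    return (d_vec / d_norm) * force_intensity(d_norm)
```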
Dodging strategy: In this technique, instead of moving in the same direction as the obstacle, the end-effector dodges the obstacle by moving in another direction thanks to the Cartesian force $\vec{F}_2$ (see Fig. 6a). In the current work, this direction is chosen to be in the plane $(X_e, Y_e)$ (in yellow) of the robot's E-E frame (Fig. 6a), where $Z_e$ is the axis of the robot's last joint.
Therefore, the force $\vec{F}_2$ is given by the equation:
$\vec{F}_2 = \dfrac{Proj(\vec{d})_{(X_e,Y_e)}}{\|Proj(\vec{d})_{(X_e,Y_e)}\|} V$          (5)
Where $Proj(\vec{d})_{(X_e,Y_e)}$ is the projection of the vector $\vec{d}$ onto the plane $(X_e, Y_e)$ and $V$ is the repulsive force intensity defined by (4). A generalization of this dodging strategy is made here in practice. In fact, the Cartesian force applied to the robot's E-E is actually $\vec{F}$, a linear combination of the distancing vector $\vec{F}_1$ and the dodging vector $\vec{F}_2$, as illustrated in Fig. 6b and described by (6):
$\vec{F} = \beta_1 \vec{F}_1 + \beta_2 \vec{F}_2$          (6)
Where $\beta_1$ and $\beta_2$ are parameters that can be adjusted to give the robot more distancing or more dodging behavior, as required by the application.
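The dodging force (5) and the combination (6) can be sketched as follows, reusing force_intensity and distancing_force from the previous sketch; x_e and y_e are assumed to be the unit axes of the E-E frame expressed in the same frame as $\vec{d}$.

```python
import numpy as np

def dodging_force(d_vec, x_e, y_e):
    """Dodging force F2 of (5): projection of d onto the E-E plane (X_e, Y_e),
    normalized and scaled by the intensity V of (4)."""
    proj = np.dot(d_vec, x_e) * x_e + np.dot(d_vec, y_e) * y_e
    n = np.linalg.norm(proj)
    if n < 1e-9:
        return np.zeros(3)
    return (proj / n) * force_intensity(np.linalg.norm(d_vec))

def combined_force(d_vec, x_e, y_e, beta1=1.8, beta2=1.0):
    """Linear combination (6) of the distancing and dodging forces."""
    return beta1 * distancing_force(d_vec) + beta2 * dodging_force(d_vec, x_e, y_e)
```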
For both strategies, the calculated repulsive force was applied as a wrench (Cartesian forces) at the robot E-E. On the Kuka LBR iiwa, this wrench is superposed onto the existing local control law using the FRI software tool (§II-C).
IV. EXPERIMENTAL RESULTS AND DISCUSSION
In this section, experimental tests and results of the collision
avoidance strategies are presented and discussed to analyze
and assess our proposed method. The experimental setup
includes a 7-dof Kuka LBR iiwa with its controller, a Kinect V2 and an external computer (Intel Core i7; 2.5 GHz × 8; 16 GiB memory; NVIDIA Quadro K2100M graphics; Ubuntu 18.04 as OS). The Kinect was placed on a rigid support at 2.346 m from the robot, precisely at the robot coordinates (140 mm, 431 mm, 2302 mm).
The parameters of our methods and strategies defined above can be adjusted for each application purpose. In the current experiment, they were set as follows. The supervised zone diameter is fixed to 1.1 m and the safety contour radius to $R = 150$ mm (based on knowledge of the obstacle's size); the parameters of the force intensity function (4) are $\alpha = 5.0$, $\rho = 0.425$ and $V_{max} = 45.0$ N; the parameters of the linear force combination (6) are $\beta_1 = 1.8$ and $\beta_2 = 1.0$. The way to adjust $\beta_1$ and $\beta_2$ can be explored in future works; meanwhile, they are experimentally adjusted and fixed in the current work to test the method.
The background task (the robot task when there is no obstacle) is to keep the initial configuration with a compliant Cartesian behavior in translation (i.e., a virtual Cartesian mass-spring-damper system) using the impedance controller of the Kuka LBR iiwa [18]. For the three axes, the stiffness was fixed to 300 N/m and the damping ratio to 1. Then, our collision avoidance strategies and the robot control overlay (i.e., external control superposed onto the local robot control) were applied at the wrench level through the FRI command mode in real time.
In the following experiments, the distancing strategy is applied when the robot is in an occluded zone, to highlight the robustness of our method. To point out the interest of our method compared to existing ones, a comparative test is carried out with the robot's E-E in the occluded zone w.r.t. the camera point of view. Then, a test with multiple and repetitive collision avoidances is proposed to show the reproducibility of the results, and finally the dodging strategy is tested and analyzed. Results are then discussed, and the dodging and distancing strategies are compared.
A. Collision avoidance: methods comparison
In the current test, an experimental comparison is made between our method, based on the safety contour principle, and the method based on the infinite depth principle commonly used in previous works [6]. For that, a cardboard box (dynamic obstacle) was moved in a plane parallel to the camera image, between the robot and the camera, hiding the robot's E-E in the camera view. Hence, most of the time the robot's E-E remains behind the obstacle w.r.t. the Kinect viewpoint. For simplicity of comparison, the distancing strategy was used for both methods. These experimental results highlight an unsmooth behavior of the robot when it is hidden with the infinite depth method, while the behavior is smooth with our method. This difference is illustrated by Fig. 7, which shows the evolution of the obstacle-to-E-E distance while the robot is in an occluded zone. Considering the slow and smooth movement of the obstacle, we can conclude that the distance between the robot's E-E and the obstacle with the infinite depth method (black dash-dotted line) reflects an unsmooth robot behavior. Moreover, this unstable behavior may compromise safety during the interaction, since we can notice that this robot-obstacle distance gets close to the limit defined at 150 mm (red line). On the other hand, this defined safety distance is largely respected using our safety contour method. To illustrate our method's principle and how it estimates the robot posture when it is hidden, Fig. 8 shows an image sequence of the experiment using this method, where an RGB image is the main picture and a grayscale image (Kinect
