
Hybrid Visual Servoing With Hierarchical Task Composition for Aerial Manipulation

TL;DR: A hybrid visual servoing scheme with a hierarchical task-composition control framework is described for aerial manipulation, i.e., for the control of an aerial vehicle endowed with a robot arm; the framework suitably combines into a unique hybrid-control scheme the main benefits of both image-based and position-based control.
Abstract: In this letter, a hybrid visual servoing with a hierarchical task-composition control framework is described for aerial manipulation, i.e., for the control of an aerial vehicle endowed with a robot arm. The proposed approach suitably combines into a unique hybrid-control framework the main benefits of both image-based and position-based control schemes. Moreover, the underactuation of the aerial vehicle has been explicitly taken into account in a general formulation, together with a dynamic smooth activation mechanism. Both simulation case studies and experiments are presented to demonstrate the performance of the proposed technique.

Summary (4 min read)

Introduction

  • The modeling and control of an unmanned aerial vehicle (UAV) capable of interacting with the environment to accomplish simple robotic-manipulation tasks have been proposed in [2], which is based on a force-position control law designed through a feedback-linearizing technique.
  • *This work was supported by the ARCAS and AEROARMS large-scale integrating projects, funded by the European Community’s under Grants FP7-ICT-287617 and H2020-ICT-644271, respectively.
  • The authors are solely responsible for its content.
  • However, in these papers redundancy is employed in a rigid way and the interaction between dependent tasks is not considered.

II. REFERENCE FRAMES AND CAMERA MODEL

  • Let ob ∈ R3 and Rb ∈ SO(3) be the position and rotation matrix, respectively, of the body reference frame {B : Ob − xbybzb}, which is fixed with the UAV base, with respect to the inertial reference frame {I : O−xyz} (see Fig. 1).
  • Therefore, the roll and pitch rotations are constrained by imposing a desired linear motion.
  • Let {E : Oe − xeyeze} be the reference frame fixed with the arm end-effector (e.g., the gripper).
  • Notice that, by considering the DoFs of the vehicle base, even if a VToL UAV is an underactuated system, the addition of enough DoFs of the robot arm can generate a task-redundant system (n = 4 + ν total DoFs), e.g. with respect to the positioning of the gripper.
  • Finally, the vehicle base is endowed with a downward looking camera.

III. OBJECT POSE ESTIMATION

  • The authors assume that m known visual markers are attached to the target object.
  • By using a visual tracker [12], [13], the pose of any visible marker can be measured with respect to the camera.
  • (3) However, if more markers are simultaneously visible, a more robust and accurate solution can be achieved by combining the available information.
  • Each marker contour is divided into several points (e.g., the corners and the middle points of the edges).
  • A RANSAC outlier rejection mechanism is integrated to remove point correspondences resulting from imaging artifacts that might be inconsistent with the computed transformation [15].

IV. DYNAMIC TASK PRIORITY CONTROL

  • The kinematic redundancy in the system allows control of the UAM end-effector by simultaneously achieving a number of secondary tasks.
  • Indeed, redundancy means that the same gripper pose can be reached with several, even infinite, system configurations.
  • Notice that the angular velocity ϖ (the uncontrolled roll and pitch rates) can be measured with a standard onboard inertial measurement unit (IMU) under the assumption of a classical time-scale separation between the attitude controller (faster control loop, e.g., up to 1 kHz) and the velocity controller (slower loop, in their case at the camera frame rate of 25 Hz).
  • The hierarchical formulation (13) guarantees that the execution of all the higher-priority tasks from 0 (main task) to i − 1 will not be affected by the ith subtask and by the variation of the uncontrolled state variables.
  • Without loss of generality, the authors suppose that the transition phase starts at t = 0, i.e., the rth task stack has to be deactivated and substituted by the new (r + 1)th one.

V. HYBRID VISUAL SERVOING AND SYSTEM BEHAVIOR CONTROL FOR AERIAL MANIPULATION TASKS

  • In this section several elementary tasks useful for the composition of an aerial manipulation (grasping/plugging) task by exploiting visual measurements will be proposed.
  • Besides tasks allowing the control of the gripper pose with respect to the observed objects, tasks are proposed to guarantee the camera FoV constraint, to minimize the effect of the arm motion on the vehicle positional stability, and to address the issue of the arm joint mechanical limits.
  • These tasks are not all orthogonal and/or independent but they are essential for the specific mission purposes.
  • The priority of the proposed tasks can be arranged on the basis of the desired behaviour, even if some general constraints have to be considered.
  • On the other hand, the constraint on the joint limits is very invasive because it affects all the arm joints.

A. Gripper position and orientation

  • Let o∗e be the desired position of the gripper, e.g. suitable to perform the object grasping in the considered case study.
  • This value can be computed by combining the pose measurement of the target object provided by the visual system, with the desired relative displacement of the gripper with respect to the target object in the grasping configuration.
  • Moreover, with the proposed choice of σp, only one DoF is required to execute this subtask, because only the norm of ep will be nullified, i.e. the motion of the gripper during the transient is constrained on a sphere of radius ‖ep‖.
  • The corresponding task Jacobian Jp becomes singular when ep → 0.
  • Notice that these two subtasks are orthogonal.

B. Camera field of view

  • Without loss of generality, any point of the observed target can be chosen to be controlled in the image.
  • (31) Notice that only one DoF is required to accomplish this subtask.
  • In fact, the distance of obc with respect to the desired optical ray corresponding to s∗c is controlled.
  • The corresponding task Jacobian matrix Jc is singular when ec → 0, but since it is not strictly required to accomplish the main mission that the target object is exactly in the desired position of the image (e.g. the center), this subtask can be activated only when σc exceeds a safety threshold.

C. Center of gravity

  • The weight of the robot arm can generate an undesired torque on the vehicle base depending on the configuration.
  • In particular, the arm motion statically perturbs the system attitude and position when the center of gravity (CoG) of the arm pg is not aligned with the CoG of the vehicle base along the gravitational line of action ιz .
  • Without loss of generality, the CoG of the vehicle base is assumed to be in Ob.
  • The corresponding task Jacobian matrix can be computed from the arm CoG Jacobian expressed in B.
  • In fact, p^b_g is only a function of the arm joint configuration, defined as p^b_g = (1/m) Σ_{i=1}^{ν} m_i p^b_{g,i} (33), where m_i and p^b_{g,i} are the mass and the CoG position of the ith arm link, respectively, and m = Σ_{i=1}^{ν} m_i.

D. Joint-limits avoidance constraint

  • A possible solution to avoid the mechanical joint limits is to make the central position of the joint ranges attractive.
  • The desired task variable is σ∗l = 0 (i.e., σ̃l = −σl), and the corresponding task Jacobian matrix is J_l = [O_{1×4}  −2(Λ_L e_q)^⊤] (42). Notice that the uncontrolled state variables do not affect this subtask, hence the corresponding Jacobian is the null matrix.
  • Due to the higher-priority tasks, some joints could nevertheless reach their limits.
  • If mechanically viable, the system will reconfigure its internal DoFs to achieve all the remaining subtasks until the dangerous condition disappears; the original priority is then restored, but starting from a different system configuration.

VI. SIMULATION RESULTS

  • The proposed approach has been tested in the simulator developed in the ARCAS project (www.arcas-project.eu), which is based on GAZEBO physics engine (http://gazebosim.org) (see Fig. 2(f)).
  • The camera has been positioned 50 cm ahead of the vehicle base with an inclination of 30 deg with respect to the vertical axis, so as to observe the grasping manoeuvre without self-occlusion.
  • Figure 2(a) shows the time history of the position error in norm during the task execution for each case study.
  • Notice how the behavior of both the position and orientation errors is similar in all the cases, coherently with the hierarchical task combination adopted in the proposed formulation, i.e., the activation of subtasks cannot significantly affect the behavior of the higher-priority tasks.
  • The achieved results show how, except for case 1), i.e., when this subtask is activated, the FoV error is improved without affecting the movement of the gripper.

VII. EXPERIMENTAL RESULTS

  • The UAM employed for the experimental tests has been developed in the ARCAS project.
  • The employed autopilot has been developed by CATEC (www.catec.aero) and also allows the control of the robot arm.
  • Then, a pose average of the difference between the camera and the optical frames is computed with respect to the object.
  • As for the simulated case studies, the mission has been decomposed into two steps: the approaching phase, to move the bar over the plugging base at a distance of 5 cm, and the final plugging phase.
  • The achieved results are shown in Fig. 6. Figures 6(a) and 6(b) show the time history of the norm of the position and orientation errors, respectively.

VIII. CONCLUSIONS

  • A hybrid visual servoing with a hierarchical task-priority control framework suitable for UAM has been presented in this work.
  • The underactuation of the VToL vehicle has been explicitly taken into account in a new general formulation that also guarantees the decoupling of independent tasks.
  • Several subtasks have been proposed to reduce the error of the desired gripper position and orientation, to maintain the target in the camera FoV, to vertically align the arm CoG with the quadrotor gravitational vector, and to avoid arm joint limits.
  • Simulations and experiments validated the proposed solution.


IEEE ROBOTICS AND AUTOMATION LETTERS. PREPRINT VERSION. DECEMBER, 2015

Hybrid Visual Servoing with Hierarchical Task Composition for Aerial Manipulation*

Vincenzo Lippiello¹, Jonathan Cacace¹, Angel Santamaria-Navarro², Juan Andrade-Cetto², Miguel Ángel Trujillo³, Yamnia Rodríguez Esteves³, and Antidio Viguria³
Abstract—In this paper a hybrid visual servoing with a hierarchical task-composition control framework is described for aerial manipulation, i.e. for the control of an aerial vehicle endowed with a robot arm. The proposed approach suitably combines into a unique hybrid-control framework the main benefits of both image-based and position-based control schemes. Moreover, the underactuation of the aerial vehicle has been explicitly taken into account in a general formulation, together with a dynamic smooth activation mechanism. Both simulation case studies and experiments are presented to demonstrate the performance of the proposed technique.

Index Terms—Aerial Robotics; Visual Servoing.
I. INTRODUCTION

In the last years new application domains in the field of aerial service robotics have been addressed by researchers from different disciplines, e.g. surveillance, inspection, agriculture, delivery, etc. Sophisticated prototypes have been developed with the capacity to physically interact with the environment [1]. The modeling and control of an unmanned aerial vehicle (UAV) capable of interacting with the environment to accomplish simple robotic-manipulation tasks have been proposed in [2], which is based on a force-position control law designed through a feedback-linearizing technique.

With the improvement of batteries and the miniaturization of motors and servos, new high-performance UAV prototypes endowed with a robot arm, called unmanned aerial manipulators (UAMs), have been designed. A control algorithm able to exploit all the degrees of freedom (DoFs) of a UAM is proposed in [3], where the execution of tasks with physical interaction with the environment has been achieved. However, the employed UAM is completely actuated only along one direction and has no redundancy. In [4], [5] the dynamic model of a UAM and a Cartesian impedance control have been designed, providing a desired relationship between the external wrench and the system motion. However, redundancy is exploited in a rigid way. Aerial manipulation tasks executed with a UAM endowed with a 2-DoF robot arm have been presented in [6], where an adaptive sliding-mode controller has been adopted. A control solution considering valve turning with a dual-arm UAM has been proposed in [7]. In all these works no vision is employed for the task execution and there is no redundancy.

The use of vision for the execution of aerial robotic tasks is a widely adopted solution to cope with unknown environments. In [8], [9] new image-based control laws endowing a UAM with the capability of automatically positioning parts on target structures have been proposed, where the system redundancy and the underactuation of the vehicle base have been explicitly taken into account. A task-oriented control law for aerial surveillance has been proposed in [10], where a camera is attached to the end-effector of the robot arm to perform visual servoing towards a desired target. However, in these papers redundancy is employed in a rigid way and the interaction between dependent tasks is not considered.

In this paper a hybrid image- and position-based visual servoing via a hierarchical task-composition control is presented for the control of a UAM. The presence of redundancy in a UAM system allows combining a number of subtasks with a new hierarchical-task formulation. Different subtasks can be designed in the Cartesian space, e.g. obstacle avoidance and manipulation tasks, in the image space of the camera, e.g. field-of-view constraints, as well as in the arm joint space, e.g. center-of-gravity balancing, joint-limits avoidance, manipulability, etc. Moreover, the underactuation of the aerial vehicle base has been systematically taken into account within a new recursive formulation. A number of practical tasks have been designed requiring only a few DoFs to be accomplished, hence allowing an accurate profiling of the system behavior. The study of the task Jacobian singularity and a smooth task activation are also shown.

With respect to our previous work [11], a new advanced formulation is derived with the capability to guarantee decoupling of independent tasks (not only orthogonal ones as in the previous work), the stability analysis of the new proposed control law is discussed together with the derivation of all the task Jacobian matrices (in [11] the Jacobian matrices of the uncontrollable variables are missing), and finally both simulation and experimental results are provided to evaluate the effectiveness of the new proposed control law.

Manuscript received: July 29, 2015; Revised October 4, 2015; Accepted December 2, 2015. This paper was recommended for publication by Jonathan Roberts upon evaluation of the Associate Editor and Reviewers' comments. *This work was supported by the ARCAS and AEROARMS large-scale integrating projects, funded by the European Community under Grants FP7-ICT-287617 and H2020-ICT-644271, respectively. The authors are solely responsible for its content. It does not represent the opinion of the European Community and the Community is not responsible for any use that might be made of the information contained therein.

¹ V. Lippiello (corresponding author) and J. Cacace are with the University of Naples Federico II, Prisma Lab, DIETI, Naples, Italy. ² A. Santamaria-Navarro and J. Andrade-Cetto are with the Institut de Robòtica i Informàtica Industrial, CSIC-UPC, Barcelona, Spain. ³ M. Á. Trujillo, Y. Rodríguez Esteves, and A. Viguria are with the Center for Advanced Aerospace Technologies (CATEC), Sevilla, Spain. Digital Object Identifier (DOI): see top of this page.

Fig. 1: Reference frames. (The figure shows the inertial frame {I : O − xyz}, the vehicle body frame {B : Ob − xbybzb}, the end-effector frame {E : Oe − xeyeze}, the target frame {T : Ot − xtytzt}, the arm joint variables q_i, and observed points P_j.)
II. REFERENCE FRAMES AND CAMERA MODEL

Let o_b ∈ R³ and R_b ∈ SO(3) be the position and rotation matrix, respectively, of the body reference frame {B : Ob − xbybzb}, which is fixed with the UAV base, with respect to the inertial reference frame {I : O − xyz} (see Fig. 1). The triple φ = (ϕ, ϑ, ψ) of Euler roll-pitch-yaw angles is considered for the representation of the vehicle orientation, which in matrix form can be expressed as R_b(φ).

A standard VToL UAV, e.g. a quadrotor, is an underactuated system with only 4 DoFs. In fact, the linear motion of the vehicle is generated by modifying the attitude, and thus the total propeller thrust generates a linear acceleration in the desired direction. Therefore, the roll and pitch rotations are constrained by imposing a desired linear motion.

A robot arm with ν DoFs attached to the UAV base is considered. Let q = [q_1 ... q_ν]^⊤ be the arm joint vector describing the arm configuration, where q_i is the ith joint variable, with i = 1, ..., ν, and let {E : Oe − xeyeze} be the reference frame fixed with the arm end-effector (e.g. gripper). Notice that, by considering the DoFs of the vehicle base, even if a VToL UAV is an underactuated system, the addition of enough DoFs of the robot arm can generate a task-redundant system (n = 4 + ν total DoFs), e.g. with respect to the positioning of the gripper.

Finally, the vehicle base is endowed with a downward looking camera. Without loss of generality, the camera reference frame is considered coincident with B. The pin-hole camera model is employed, i.e. by denoting with p^b = [x^b y^b z^b]^⊤ the position of an observed point P expressed with respect to B (see Fig. 1), the optical projection of P onto the normalized image plane of the camera determines the so-called image feature vector s:

    s = \begin{bmatrix} X \\ Y \end{bmatrix} = \frac{1}{z^b} \begin{bmatrix} x^b \\ y^b \end{bmatrix}.    (1)
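The normalized projection in (1) is straightforward to implement. The following is only a minimal illustrative sketch (the helper name is hypothetical, not from the paper):

```python
import numpy as np

def project_normalized(p_b: np.ndarray) -> np.ndarray:
    """Pin-hole projection of a 3D point expressed in the camera/body frame B
    onto the normalized image plane, as in Eq. (1): s = [x/z, y/z]."""
    x, y, z = p_b
    if z <= 0:
        raise ValueError("Point is behind (or on) the camera plane")
    return np.array([x / z, y / z])

# Example: a point 2 m below the downward-looking camera, offset 0.3 m in x
s = project_normalized(np.array([0.3, -0.1, 2.0]))  # -> [0.15, -0.05]
```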
III. OBJECT POSE ESTIMATION

We assume that m known visual markers are attached to the target object. Let {T : Ot − xtytzt} be a reference frame fixed with the object, and o^t_j and R^t_j, with j = 1, ..., m, be the known position and rotation matrix, respectively, of the reference frame attached to the jth marker in T.

By using a visual tracker [12], [13], the pose of any visible marker can be measured with respect to the camera. Then, the pose of the target object can be reconstructed from the measurement of each visible marker as follows:

    o^b_t = o^b_j - R^b_j R^{t\top}_j o^t_j    (2)
    R^b_t = R^b_j R^{t\top}_j.    (3)

However, if more markers are simultaneously visible, a more robust and accurate solution can be achieved by combining the available information. Each marker contour is divided into several points (e.g. the corners and the middle points of the edges). Assuming the 3D model of the marker is known, we define the 3D coordinates of each contour point l relative to its marker frame as p^j_l. Then, each 3D point of a marker is associated to the corresponding object frame T with

    p^t_l = R^t_j p^j_l + o^t_j.    (4)

Once all contour correspondences are associated to the object frame, a Perspective-n-Point method [14] is used to obtain the camera pose with respect to the object. A RANSAC outlier rejection mechanism is integrated to remove point correspondences resulting from imaging artifacts that might be inconsistent with the computed transformation [15].
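As a rough illustration of this pipeline (not the authors' implementation), the aggregation of marker contour points into the object frame followed by a RANSAC-robust PnP solve can be sketched with OpenCV; the marker layout dictionary and camera intrinsics below are hypothetical placeholders:

```python
import cv2
import numpy as np

# Hypothetical marker layout: for each marker id, its known rotation R_t_j and
# position o_t_j in the object frame T, plus contour points p_j_l in the marker frame.
marker_layout = {
    0: {"R_t_j": np.eye(3), "o_t_j": np.array([0.0, 0.0, 0.0]),
        "contour_j": np.array([[-0.05, -0.05, 0.0], [0.05, -0.05, 0.0],
                               [0.05, 0.05, 0.0], [-0.05, 0.05, 0.0]])},
}

def estimate_object_pose(detections, K, dist):
    """detections: {marker_id: (N x 2) image points} from a fiducial tracker.
    Returns (R_b_t, o_b_t), the object pose expressed in the camera/body frame B."""
    obj_pts, img_pts = [], []
    for mid, pts_2d in detections.items():
        lay = marker_layout[mid]
        # Eq. (4): express each marker contour point in the object frame T
        pts_t = (lay["R_t_j"] @ lay["contour_j"].T).T + lay["o_t_j"]
        obj_pts.append(pts_t)
        img_pts.append(pts_2d)
    obj_pts = np.vstack(obj_pts).astype(np.float64)
    img_pts = np.vstack(img_pts).astype(np.float64)
    # PnP with RANSAC outlier rejection (cf. [14], [15])
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj_pts, img_pts, K, dist)
    if not ok:
        raise RuntimeError("PnP failed")
    R_b_t, _ = cv2.Rodrigues(rvec)   # rotation mapping object-frame points into B
    o_b_t = tvec.reshape(3)          # object origin expressed in B
    return R_b_t, o_b_t
```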
IV. DYNAMIC TASK PRIORITY CONTROL

The kinematic redundancy in the system allows control of the UAM end-effector by simultaneously achieving a number of secondary tasks. Indeed, redundancy means that the same gripper pose can be reached with several, even infinite, system configurations. Hence, the system can be suitably reconfigured by using internal motions, i.e. without affecting the gripper pose, to satisfy mechanical constraints (e.g. arm joint limits) and several subtasks (e.g. field of view, arm manipulability, robot-arm center of gravity control).

Let x = [o_b^⊤ φ^⊤ q^⊤]^⊤ be the system state vector, and ω_b = [ω_x ω_y ω_z]^⊤ the angular velocity of the body frame. To easily address the system underactuation we extract from x only the controlled variables in the new vector ξ = [o_b^⊤ ψ q^⊤]^⊤, and analogously for the corresponding velocity variables in υ = [ȯ_b^⊤ ω_z q̇^⊤]^⊤, while ϖ = [ω_x ω_y]^⊤ represents the remaining vector of the uncontrolled angular velocities. Notice that the angular velocity ϖ can be measured with a standard onboard inertial measurement unit (IMU) under the assumption of a classical time-scale separation between the attitude controller (faster control loop, e.g. up to 1 kHz) and the velocity controller (slower control, in our case at camera frame rate of 25 Hz).

Moreover, let σ_0 = f_0(x) ∈ R^{μ_0} be the variables of a configuration-dependent main task; hence the following differential relationship holds:

    \dot{\sigma}_0 = \frac{\partial f_0(x)}{\partial x}\,\dot{x} = J_0(x)\,\upsilon + \bar{J}_0(x)\,\varpi,    (5)

where J_0(x) ∈ R^{μ_0×n} and \bar{J}_0(x) ∈ R^{μ_0×2} are the main task Jacobian matrices of the controlled and uncontrolled state variables, respectively. By inverting (5) and considering a regulation problem of σ_0 to the desired value σ*_0, hence by defining σ̃_0 = σ*_0 − σ_0 as the main task error, the following velocity command can be considered:

    \upsilon^* = J_0^\dagger\left(\Lambda_0\tilde{\sigma}_0 - \bar{J}_0\varpi\right),    (6)

where Λ_0 ∈ R^{μ_0×μ_0} is a positive-definite gain matrix, and J_0^† is the generalized inverse of J_0, which has been assumed to be full-rank. By substituting (6) into (5), the following exponentially stable error dynamics is achieved

    \dot{\tilde{\sigma}}_0 = -\Lambda_0\tilde{\sigma}_0.    (7)

In case of μ_0 < n a second lower-priority subtask σ_1 = f_1(x) ∈ R^{μ_1} can be added with the following command:

    \upsilon^* = J_0^\dagger\Lambda_0\tilde{\sigma}_0 + (J_1 N_0)^\dagger\Lambda_1\tilde{\sigma}_1 - \bar{J}_{0|1}\varpi,    (8)

with J_1 the Jacobian matrix of the second subtask, which is assumed to be full-rank, and

    \bar{J}_{0|1} = (J_1 N_0)^\dagger\bar{J}_1 + \left(I_n - (J_1 N_0)^\dagger J_1\right)\bar{J}_{0|0},    (9)

where N_0 = I_n − J_0^†J_0 is the projector onto the null space of J_0, with I_n the n-dimension identity matrix, \bar{J}_{0|0} = J_0^†\bar{J}_0, and Λ_1 ∈ R^{μ_1×μ_1} is a positive-definite gain matrix. The Jacobian matrix \bar{J}_{0|1} allows the compensation of the variation of the uncontrolled variables ϖ. Notice that matrix J_1N_0 is full-rank only if the two tasks are orthogonal (J_1J_0^⊤ = O_{μ_1×μ_0}) or independent (not orthogonal and rank(J_0) + rank(J_1) = rank([J_0^⊤ J_1^⊤]^⊤)). See [16] for more details. Substituting (8) into (5) and by noticing that N_0 is idempotent and Hermitian, hence (J_1N_0)^† = N_0(J_1N_0)^†, the dynamics of the main task (7) is again achieved and so the exponential stability is proven.

To study the behavior of the secondary task σ_1 we can differentiate the subtask variables as follows

    \dot{\sigma}_1 = \frac{\partial f_1(x)}{\partial x}\,\dot{x} = J_1(x)\,\upsilon + \bar{J}_1(x)\,\varpi.    (10)

Then, by substituting (8) and by assuming that the tasks are at least independent, the following error dynamics is achieved

    \dot{\tilde{\sigma}}_1 = -J_1 J_0^\dagger\Lambda_0\tilde{\sigma}_0 - J_1(J_1 N_0)^\dagger\Lambda_1\tilde{\sigma}_1 + \left(J_1 J_0^\dagger\bar{J}_0 + J_1(J_1 N_0)^\dagger(\bar{J}_1 - J_1 J_0^\dagger\bar{J}_0) - \bar{J}_1\right)\varpi = -J_1 J_0^\dagger\Lambda_0\tilde{\sigma}_0 - \Lambda_1\tilde{\sigma}_1,    (11)

where we used the property J_1(J_1N_0)^† = I_{μ_1}. Finally the dynamics of the error system can be written as follows

    \begin{bmatrix}\dot{\tilde{\sigma}}_0 \\ \dot{\tilde{\sigma}}_1\end{bmatrix} = -\begin{bmatrix}\Lambda_0 & O_{\mu_0\times\mu_1} \\ J_1 J_0^\dagger\Lambda_0 & \Lambda_1\end{bmatrix}\begin{bmatrix}\tilde{\sigma}_0 \\ \tilde{\sigma}_1\end{bmatrix},    (12)

that is characterized by a Hurwitz matrix, hence the exponential stability of the system is guaranteed. Moreover, we can notice a term representing the coupling effect of the main task on the secondary task. In case of orthogonal tasks this term is zero, i.e. \dot{\tilde{\sigma}}_1 = -\Lambda_1\tilde{\sigma}_1, and the behavior of the main and that of the secondary tasks are decoupled.
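Before the generalization to η tasks, a compact numerical sketch of the two-task command (8)-(9) may help fix ideas. This is a didactic illustration, not the authors' code; all dimensions, gains, and the function name are made up:

```python
import numpy as np

def two_task_command(J0, Jbar0, J1, Jbar1, sig0_err, sig1_err,
                     Lam0, Lam1, varpi):
    """Prioritized velocity command, cf. Eqs. (6), (8), (9):
    the secondary task is projected into the null space of the main task,
    and the uncontrolled angular rates varpi = [wx, wy] are compensated."""
    n = J0.shape[1]
    J0p = np.linalg.pinv(J0)                 # generalized inverse J0^+
    N0 = np.eye(n) - J0p @ J0                # null-space projector of J0
    J1N0p = np.linalg.pinv(J1 @ N0)          # (J1 N0)^+
    Jbar00 = J0p @ Jbar0                     # J0^+ Jbar0
    # Eq. (9): compensation matrix for the uncontrolled variables
    Jbar01 = J1N0p @ Jbar1 + (np.eye(n) - J1N0p @ J1) @ Jbar00
    # Eq. (8): main task + null-space-projected secondary task - compensation
    return J0p @ (Lam0 @ sig0_err) + J1N0p @ (Lam1 @ sig1_err) - Jbar01 @ varpi
```

With orthogonal or independent tasks, substituting this command into (5) reproduces the decoupled error dynamics discussed above.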
By generalizing (8) to the case of η prioritized subtasks, the following general velocity command can be formulated:

    \upsilon^* = J_0^\dagger\Lambda_0\tilde{\sigma}_0 + \sum_{i=1}^{\eta}\left(J_i N_{0|...|i-1}\right)^\dagger\Lambda_i\tilde{\sigma}_i - \bar{J}_{0|...|\eta}\,\varpi,    (13)

with the recursively-defined compensating matrix

    \bar{J}_{0|...|\eta} = \left(J_\eta N_{0|...|\eta-1}\right)^\dagger\bar{J}_\eta + \left(I_n - \left(J_\eta N_{0|...|\eta-1}\right)^\dagger J_\eta\right)\bar{J}_{0|...|\eta-1},    (14)

where N_{0|...|i} is the projector onto the null space of the augmented Jacobian J_{0|...|i} of the ith subtask, with i = 0, ..., η − 1, which are respectively defined as follows

    J_{0|...|i} = \begin{bmatrix} J_0^\top & \cdots & J_i^\top \end{bmatrix}^\top    (15)
    N_{0|...|i} = I_n - J_{0|...|i}^\dagger J_{0|...|i}.    (16)

The previous stability analysis can be straightforwardly extended to the general case of η subtasks.

The hierarchical formulation (13) guarantees that the execution of all the higher-priority tasks from 0 (main task) to i − 1 will not be affected by the ith subtask and by the variation of the uncontrolled state variables. In other words, the execution of the ith task is subordinated to the execution of the higher-priority tasks present in the task stack, i.e. it will be fulfilled only if suitable and enough DoFs are available, while the complete fulfillment of the main task, instead, is always guaranteed. Moreover, with this new formulation, for all reciprocally annihilating or independent tasks a full decoupling of the error dynamics is guaranteed.

The task composition and priority can be modified at runtime as needed, i.e. by activating or deactivating subtasks as well as by changing the priority order of the active tasks already present in the task stack. However, in order to avoid discontinuity of the control input, a smooth transition between different task stacks has to be considered. This goal can be achieved by adopting a time-vanishing smoothing term when a new task stack is activated. Without loss of generality, we suppose that the transition phase starts at t = 0, i.e. the rth task stack has to be deactivated and substituted by the new (r + 1)th one. During the transition the velocity command is computed as follows:

    \upsilon^*(t) = \upsilon^*_{r+1}(t) + e^{-\frac{t}{\tau}}\left(\upsilon^*_r(0) - \upsilon^*_{r+1}(0)\right),    (17)

where τ is a time constant determining the transition phase duration, and υ*_k is the velocity command corresponding to the kth task stack. When t becomes sufficiently greater than τ, the rth task stack is fully removed and a new transition can start. Notice that the smoothing term e^{-t/τ}(υ*_r(0) − υ*_{r+1}(0)) is bounded and exponentially vanishing, hence it will not affect the stability of the proposed control law. The time constant τ has to be smaller than the inverse of the maximum eigenvalue of the gain matrices Λ_i to ensure a short transient time response in comparison with the nullifying time of the task errors σ̃_i.
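The transition rule (17) is simple to realize in a control loop. A minimal sketch (illustrative only; the function name and the value of tau are hypothetical):

```python
import numpy as np

def blended_command(t, v_new_t, v_old_0, v_new_0, tau=0.2):
    """Smooth task-stack switch, cf. Eq. (17): the mismatch between the old and
    new stack commands at t = 0 decays exponentially with time constant tau."""
    return v_new_t + np.exp(-t / tau) * (v_old_0 - v_new_0)
```

As noted above, tau should be chosen smaller than the inverse of the largest eigenvalue of the gain matrices Λ_i, so that the blending vanishes faster than the task errors.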

V. HYBRID VISUAL SERVOING AND SYSTEM BEHAVIOR CONTROL FOR AERIAL MANIPULATION TASKS

In this section several elementary tasks useful for the composition of an aerial manipulation (grasping/plugging) task by exploiting visual measurements will be proposed. Besides tasks allowing the control of the gripper pose with respect to the observed objects, tasks are proposed to guarantee the camera FoV constraint, to minimize the effect of the motion of the arm on the vehicle positional stability, and to address the issue of the arm joint mechanical limits. These tasks are not all orthogonal and/or independent, but they are essential for the specific mission purposes. The priority of the proposed tasks can be arranged on the basis of the desired behaviour, even if some general constraints have to be considered. For example, the FoV constraint is essential because the loss of the observed object from the camera image will determine the failure of the whole mission (depending on the available camera optics, this problem could be less significant). On the other hand, the constraint on the joint limits is very invasive because it affects all the arm joints. For this reason it is advisable to initially move this task to the lowest priority and increase it only when some joint limits are being approached.

A. Gripper position and orientation

Let o*_e be the desired position of the gripper, e.g. suitable to perform the object grasping in the considered case study. This value can be computed by combining the pose measurement of the target object provided by the visual system with the desired relative displacement of the gripper with respect to the target object in the grasping configuration. The corresponding position error is defined as e_p = o*_e − o_e, and the task function is chosen equal to its square norm, yielding

    \sigma_p = e_p^\top e_p,    (18)

where the desired task variable is σ*_p = 0 (i.e. σ̃_p = −σ_p). The corresponding task Jacobian matrix is

    J_p = 2\,e_p^\top \begin{bmatrix} I_3 & S(R_b o^b_e)\,\imath_z & J_{q,P} \end{bmatrix},    (19)

where S(·) is the skew-symmetric matrix representing the vector product, ı_z = [0 0 1]^⊤, and

    J_q = \begin{bmatrix} J_{q,P} \\ J_{q,O} \end{bmatrix} = \begin{bmatrix} R_b & O_3 \\ O_3 & R_b \end{bmatrix} J^b_q,    (20)

with J^b_q(q) the arm Jacobian matrix with respect to B, and J_{q,P} and J_{q,O} (3 × ν) matrices. The corresponding task Jacobian matrix of the uncontrolled state variables is

    \bar{J}_p = 2\,e_p^\top S(R_b o^b_e) \begin{bmatrix} \imath_x & \imath_y \end{bmatrix},    (21)

where ı_x = [1 0 0]^⊤ and ı_y = [0 1 0]^⊤.

Notice that if o_e is also measured by using visual markers attached to the gripper, the camera calibration error and the arm direct kinematics error will not affect the grasping accuracy, in a similar way as in an image-based approach. Moreover, with the proposed choice of σ_p, only one DoF is required to execute this subtask, because only the norm of e_p will be nullified, i.e. the motion of the gripper during the transient is constrained on a sphere of radius ‖e_p‖. However, the corresponding task Jacobian J_p becomes singular when e_p → 0. Nevertheless, in the task composition the generalized inverse J_p^† is multiplied by σ_p. Hence, if J_p is full-rank, its determinant goes to zero only linearly when e_p → 0, but σ_p goes to zero quadratically.

Let {η_e, ǫ_e} and {η*_e, ǫ*_e} be the unit quaternions corresponding to R_e and to its desired value R*_e, respectively. The corresponding orientation error can be expressed as

    e_o = \eta_e\,\epsilon^*_e - \eta^*_e\,\epsilon_e - S(\epsilon^*_e)\,\epsilon_e.    (22)

The task function is chosen equal to

    \sigma_o = e_o^\top e_o,    (23)

with the desired task variable σ*_o = 0 (i.e. σ̃_o = −σ_o), while the corresponding task Jacobian matrix is

    J_o = 2\,e_o^\top \begin{bmatrix} O_3 & \imath_z & J_{q,O} \end{bmatrix}.    (24)

The Jacobian matrix of the uncontrolled state variables is

    \bar{J}_o = 2\,e_o^\top \begin{bmatrix} \imath_x & \imath_y \end{bmatrix}.    (25)

Remarks similar to the position case concerning the number of required DoFs, the singularity of the task Jacobian matrix, and the direct visual measurement of the gripper orientation can be repeated straightforwardly. Notice that these two subtasks are orthogonal.
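As a didactic sketch of the scalar position task (18) and its Jacobian blocks (19)-(21), under assumed frame conventions and with hypothetical helper names (not from the paper):

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix S(v) such that S(v) @ w = v x w."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def gripper_position_task(o_e_des, o_e, R_b, o_be, Jq_P):
    """Task function sigma_p (18), controlled-variable Jacobian J_p (19) and
    uncontrolled-variable Jacobian Jbar_p (21) for a (4 + nu)-DoF UAM.
    o_be is the gripper position expressed in the body frame B, and Jq_P is
    the positional block of the arm Jacobian already rotated into I, cf. (20)."""
    e_p = o_e_des - o_e
    sigma_p = float(e_p @ e_p)                        # Eq. (18)
    iz = np.array([0.0, 0.0, 1.0])
    S_rb_obe = skew(R_b @ o_be)
    # Blocks for [o_b_dot, omega_z, q_dot], cf. Eq. (19)
    J_p = 2.0 * e_p @ np.hstack([np.eye(3),
                                 (S_rb_obe @ iz).reshape(3, 1),
                                 Jq_P])
    # Uncontrolled roll/pitch rates, cf. Eq. (21)
    Jbar_p = 2.0 * e_p @ S_rb_obe @ np.array([[1.0, 0.0],
                                              [0.0, 1.0],
                                              [0.0, 0.0]])
    return sigma_p, J_p, Jbar_p
```

Only the norm of e_p is driven to zero by this scalar task, which is why a single DoF suffices, as discussed above.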
B. Camera field of view

Let s_c ∈ R² be the image feature vector of the projection of the observed markers centroid o^b_c = [x^b_c y^b_c z^b_c]^⊤ onto the normalized image plane, i.e.

    s_c = \begin{bmatrix} X_c \\ Y_c \end{bmatrix} = \frac{1}{z^b_c}\begin{bmatrix} x^b_c \\ y^b_c \end{bmatrix}.    (26)

The FoV subtask consists in constraining s_c within a maximum distance with respect to a desired position s*_c in the normalized image plane (e.g. the center of the image) by moving the vehicle base, i.e. the camera point of view (notice that we assumed the camera mounted on the vehicle base). Without loss of generality, any point of the observed target can be chosen to be controlled in the image. To achieve this goal, the following task function is considered:

    \sigma_c = e_c^\top e_c,    (27)

where e_c = s_c − s*_c, and the desired task variable is σ*_c = 0 (i.e. σ̃_c = −σ_c), while the corresponding task Jacobian is

    J_c = \begin{bmatrix} 2\,e_c^\top L_p R_b^\top & 2\,e_c^\top L_o R_b^\top\,\imath_z & O_{1\times\nu} \end{bmatrix},    (28)

where

    L_p = \frac{1}{z^b_c}\begin{bmatrix} -1 & 0 & X_c \\ 0 & -1 & Y_c \end{bmatrix},    (29)

    L_o = \begin{bmatrix} X_c Y_c & -(1 + X_c^2) & Y_c \\ 1 + Y_c^2 & -X_c Y_c & -X_c \end{bmatrix}.    (30)

Finally, the Jacobian of the uncontrolled state variables is

    \bar{J}_c = 2\,e_c^\top L_o R_b^\top \begin{bmatrix} \imath_x & \imath_y \end{bmatrix}.    (31)

Notice that only one DoF is required to accomplish this subtask. In fact, the distance of o^b_c with respect to the desired optical ray corresponding to s*_c is controlled. However, the corresponding task Jacobian matrix J_c is singular when e_c → 0, but since it is not strictly required to accomplish the main mission that the target object is exactly in the desired position of the image (e.g. the center), this subtask can be activated only when σ_c exceeds a safety threshold.
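A small illustration of the FoV task terms (26)-(31). This is only a sketch under assumed conventions (in particular, that the vehicle linear and angular velocities are expressed in the inertial frame and rotated into the camera frame by R_b^T); the function name is hypothetical:

```python
import numpy as np

def fov_task(centroid_b, s_des, R_b, nu):
    """Scalar FoV task (27) with interaction-matrix blocks (28)-(31).
    centroid_b: markers' centroid expressed in the camera/body frame B."""
    x, y, z = centroid_b
    Xc, Yc = x / z, y / z                      # Eq. (26)
    e_c = np.array([Xc, Yc]) - s_des
    sigma_c = float(e_c @ e_c)                 # Eq. (27)
    L_p = (1.0 / z) * np.array([[-1.0, 0.0, Xc],
                                [0.0, -1.0, Yc]])           # Eq. (29)
    L_o = np.array([[Xc * Yc, -(1.0 + Xc**2), Yc],
                    [1.0 + Yc**2, -Xc * Yc, -Xc]])          # Eq. (30)
    iz = np.array([0.0, 0.0, 1.0])
    J_c = np.hstack([2.0 * e_c @ L_p @ R_b.T,
                     2.0 * e_c @ L_o @ R_b.T @ iz,
                     np.zeros(nu)])                          # Eq. (28)
    Jbar_c = 2.0 * e_c @ L_o @ R_b.T @ np.array([[1.0, 0.0],
                                                 [0.0, 1.0],
                                                 [0.0, 0.0]])  # Eq. (31)
    return sigma_c, J_c, Jbar_c
```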
C. Center of gravity

The weight of the robot arm can generate an undesired torque on the vehicle base depending on the configuration. In particular, the arm motion statically perturbs the system attitude and position when the center of gravity (CoG) of the arm p_g is not aligned with the CoG of the vehicle base along the gravitational line of action ι_z. Without loss of generality, the CoG of the vehicle base is assumed to be in O_b. Hence, by denoting with

    e_g = (p_g - o_b) - \imath_z \imath_z^\top (p_g - o_b)

the error between the desired and the current position of the arm's CoG, the designed task function is defined as follows:

    \sigma_g = e_g^\top e_g,    (32)

with the desired task variable σ*_g = 0 (i.e. σ̃_g = −σ_g). The corresponding task Jacobian matrix can be computed from the CoG Jacobian of the arm expressed in B. In fact, p^b_g is only a function of the arm joint configuration, defined as

    p^b_g = \frac{1}{m}\sum_{i=1}^{\nu} m_i\, p^b_{g,i},    (33)

where m_i and p^b_{g,i} are the mass and the position of the CoG of the ith arm link, respectively, and m = \sum_{i=1}^{\nu} m_i.

The CoG of a partial chain of links, from link j to the end-effector, can be represented with respect to B as

    r^b_{g,j} = \frac{1}{m}\, R^b_j \sum_{i=j}^{\nu} m_i\, p^b_{g,i},    (34)

where R^b_j is the rotation matrix between the jth arm link and B. Finally, the differential relationship between p^b_g and the arm joint configuration is

    \dot{p}^b_g = J^b_g\,\dot{q},    (35)

where J^b_g ∈ R^{3×ν} is the CoG Jacobian expressed in B and defined as follows

    J^b_g = \frac{\partial p^b_g}{\partial q} = \begin{bmatrix} j^b_{g,1} & \cdots & j^b_{g,\nu} \end{bmatrix},    (36)

with j^b_{g,i} the ith column, formulated from the partial CoG as follows

    j^b_{g,j} = \frac{\sum_{i=j}^{\nu} m_i}{m}\, S(R^b_j \imath_z)\, r^b_{g,j}.    (37)

Finally, the corresponding task Jacobian is defined as

    J_g = 2\,e_g^\top \begin{bmatrix} O_3 & S(R_b p^b_g)\,\imath_z & R_b J^b_g \end{bmatrix},    (38)

while the uncontrolled state-variables Jacobian is

    \bar{J}_g = 2\,e_g^\top S(R_b p^b_g) \begin{bmatrix} \imath_x & \imath_y \end{bmatrix}.    (39)

Notice that only one DoF is required, and similar considerations as in the previous task definitions on the singularity of the Jacobian matrix when e_g → 0 can be made.
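For illustration only, a sketch of the CoG-alignment task (32) with the Jacobians (38)-(39), under the assumption that the arm CoG p^b_g in the body frame and its Jacobian J^b_g are already available from the link masses and forward kinematics as in (33)-(37):

```python
import numpy as np

def cog_task(R_b, p_bg, J_bg):
    """CoG-alignment task (32) with Jacobians (38)-(39): penalizes the horizontal
    offset of the arm CoG from the vertical line through the vehicle CoG O_b."""
    iz = np.array([0.0, 0.0, 1.0])
    p_g_rel = R_b @ p_bg                       # p_g - o_b expressed in I
    e_g = p_g_rel - iz * (iz @ p_g_rel)        # remove the vertical component
    sigma_g = float(e_g @ e_g)                 # Eq. (32)
    S = lambda v: np.array([[0, -v[2], v[1]],
                            [v[2], 0, -v[0]],
                            [-v[1], v[0], 0]])
    J_g = 2.0 * e_g @ np.hstack([np.zeros((3, 3)),
                                 (S(p_g_rel) @ iz).reshape(3, 1),
                                 R_b @ J_bg])                  # Eq. (38)
    Jbar_g = 2.0 * e_g @ S(p_g_rel) @ np.array([[1.0, 0.0],
                                                [0.0, 1.0],
                                                [0.0, 0.0]])    # Eq. (39)
    return sigma_g, J_g, Jbar_g
```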
D. Joint-limits avoidance constraint

A possible solution to avoid the mechanical joint limits is to make the central position of the joint ranges attractive. Let e_q = q̄ − q be the corresponding error, where q̄ = q_L + ½(q_H − q_L), with q_L = [q_{1L} ... q_{νL}]^⊤ and q_H = [q_{1H} ... q_{νH}]^⊤ the vectors of the low and high joint limits, respectively. The corresponding weighted square distance can be used as a task function to push the system to reach a safe configuration as follows

    \sigma_l = e_q^\top \Lambda_L\, e_q,    (40)

where Λ_L is a weighting matrix needed to normalize the control action with respect to the joint ranges:

    \Lambda_L = \mathrm{diag}\{(q_{1H} - q_{1L})^{-2}, \ldots, (q_{\nu H} - q_{\nu L})^{-2}\}.    (41)

The desired task variable is σ*_l = 0 (i.e. σ̃_l = −σ_l), and the corresponding task Jacobian matrix is

    J_l = \begin{bmatrix} O_{1\times 4} & -2\,(\Lambda_L e_q)^\top \end{bmatrix}.    (42)

Notice that the uncontrolled state variables do not affect this subtask, hence the corresponding Jacobian is the null matrix.

Due to the higher-priority tasks, some joints could nevertheless reach their limits. However, when a joint is approaching a mechanical limit, the corresponding component of the task function can be extracted from the previous subtask to form a new isolated subtask that can be activated on top of the task stack. With this policy, if mechanically viable, the system will reconfigure its internal DoFs to achieve all the remaining subtasks until the dangerous condition disappears; the original priority is then restored, but starting from a different system configuration.
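A sketch of (40)-(42) for illustration; the joint-limit vectors in the usage example are placeholders, and the exponent in the weighting matrix is an assumption consistent with normalizing by the joint ranges:

```python
import numpy as np

def joint_limit_task(q, q_low, q_high):
    """Joint-centering task (40)-(42): attracts each joint towards the middle of
    its range, with the error normalized by the squared range width."""
    q_bar = q_low + 0.5 * (q_high - q_low)          # range centers
    e_q = q_bar - q
    Lam_L = np.diag(1.0 / (q_high - q_low) ** 2)    # Eq. (41), assumed exponent -2
    sigma_l = float(e_q @ Lam_L @ e_q)              # Eq. (40)
    J_l = np.hstack([np.zeros(4), -2.0 * (Lam_L @ e_q)])   # Eq. (42)
    return sigma_l, J_l

# Example with a hypothetical 6-DoF arm (limits in radians)
q_low = np.deg2rad(np.array([-170, -120, -170, -120, -170, -120.0]))
q_high = -q_low
sigma_l, J_l = joint_limit_task(np.zeros(6), q_low, q_high)
```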
VI. SIMULATION RESULTS

The proposed approach has been tested in the simulator developed in the ARCAS project (www.arcas-project.eu), which is based on the GAZEBO physics engine (http://gazebosim.org) (see Fig. 2(f)). A quadrotor with a weight of 5 kg, endowed with a downward looking camera at 25 Hz and a 6-DoF robot arm plus a gripper, has been employed. The camera has been positioned 50 cm ahead of the vehicle base with an inclination of 30 deg with respect to the vertical axis, so as to observe the grasping manoeuvre without self-occlusion. The target object is a bar endowed with two visual markers at the ends. A UAM velocity control has also been employed [17], [18].

The assigned task is composed of two phases:

  • approaching phase: the UAM starts from a distance of about 125 cm and has to move the gripper to a pre-grasping pose 10 cm over the grasping pose;
  • grasping phase: once the intermediate pose has been reached with an error less than a suitable threshold (2 cm for the position and 2 deg for the orientation), the target pose is moved towards the final grasping pose in 10 s;

Citations
Journal ArticleDOI
TL;DR: A metaheuristic-based control framework, called beetle antennae olfactory recurrent neural network, for simultaneous tracking control and obstacle avoidance of a redundant manipulator and simulations results using an LBR IIWA seven-DOF manipulator are presented.
Abstract: In this article, we present a metaheuristic-based control framework, called beetle antennae olfactory recurrent neural network, for simultaneous tracking control and obstacle avoidance of a redundant manipulator. The ability to avoid obstacles while tracking a predefined reference path is critical for any industrial manipulator. The formulated control framework unifies the tracking control and obstacle avoidance into a single constrained optimization problem by introducing a penalty term into the objective function, which actively rewards the optimizer for avoiding the obstacles. One of the significant features of the proposed framework is the way that the penalty term is formulated following a straightforward principle: maximize the minimum distance between a manipulator and an obstacle. The distance calculations are based on Gilbert–Johnson–Keerthi algorithm, which calculates the distance between a manipulator and an obstacle by directly using their three-dimensional geometries, which also implies that our algorithm works for a manipulator and an arbitrarily shaped obstacle. Theoretical treatment proves the stability and convergence, and simulations results using an LBR IIWA seven-DOF manipulator are presented to analyze the performance of the proposed framework.

162 citations


Cites background from "Hybrid Visual Servoing With Hierarc..."

  • ...Similarly, visual servoing based approaches have also been proposed to use computer vision algorithms in improving the control of industrial manipulators [18]....


Journal ArticleDOI
TL;DR: An extensive study of aerial vehicles and manipulation/interaction mechanisms in aerial manipulation is presented and the shortcomings of current aerial manipulation research are highlighted and a number of directions for future research are suggested.

144 citations

Journal ArticleDOI
Moju Zhao, Tomoki Anzai, Fan Shi, Xiangyu Chen, Kei Okada, Masayuki Inaba
15 Jan 2018
TL;DR: A novel transformable aerial robot called DRAGON, which is a dual-rotor-embedded multilink robot with the ability of multi-degree-of-freedom (DoF) aerial transformation, is introduced.
Abstract: In this letter, we introduce a novel transformable aerial robot called DRAGON, which is a dual-rotor-embedded multilink robot with the ability of multi-degree-of-freedom (DoF) aerial transformation. The new aerial robot can control the full pose in $\mathsf {SE}(3)$ regarding the center of gravity (CoG) of multilinks and can render the multi-DoF aerial transformation, which is accomplished by the original two-DoF force vectoring mechanism on each link called the dual-rotor gimbal module. The dynamics is derived on the basis of the special definition of CoG orientation, followed by a control method decoupled into thrust force control and rotor gimbal control. In the thrust force control, the minimum force norm is considered to avoid force saturation, and the rotor gimbal control method is developed to enhance both translational and rotational stabilities during hovering and large-scale aerial transformation. The prototype composed of four links is constructed, and associated preliminary experiments are performed. The feasibility of the novel mechanical design and the proposed control method for the aerial transformation is demonstrated.

133 citations


Cites background from "Hybrid Visual Servoing With Hierarc..."

  • ..., [11]–[13]), thus permitting a wide workspace of the end-effector (SE(3))....


Journal ArticleDOI
TL;DR: This survey highlights meaningful research works of several groups worldwide, considering two basic approaches to load transportation, namely grasped and cable-suspended load transportation.
Abstract: Load transportation by quadrotors and similar aircrafts is a topic of great interest to the robotics community nowadays, most likely due to logistic gains for deliveries of commercial cargo. Aiming at being the first reading for novice researchers and graduate students, this survey highlights meaningful research works of several groups worldwide, considering two basic approaches, namely grasped and cable-suspended load transportation. Different control techniques and maneuver strategies are analyzed, and their benefits and drawbacks are discussed. Moreover, experimental validation was a key aspect to the highlighted works, thus, links to the videos showing the experimental results are provided for each work.

118 citations

Journal ArticleDOI
TL;DR: A literature review of the last 10 years on SRURSs, and details achievements and challenges is provided to serve as a resource for researchers interested in aerial manipulation of SRurSs.

101 citations

References
Journal ArticleDOI
TL;DR: New results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form that provide the basis for an automatic system that can solve the Location Determination Problem under difficult viewing.
Abstract: A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting/smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): Given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing

23,396 citations

Journal ArticleDOI
TL;DR: A non-iterative solution to the PnP problem—the estimation of the pose of a calibrated camera from n 3D-to-2D point correspondences—whose computational complexity grows linearly with n, which can be done in O(n) time by expressing these coordinates as weighted sum of the eigenvectors of a 12×12 matrix.
Abstract: We propose a non-iterative solution to the PnP problem--the estimation of the pose of a calibrated camera from n 3D-to-2D point correspondences--whose computational complexity grows linearly with n. This is in contrast to state-of-the-art methods that are O(n^5) or even O(n^8), without being more accurate. Our method is applicable for all n ≥ 4 and handles properly both planar and non-planar configurations. Our central idea is to express the n 3D points as a weighted sum of four virtual control points. The problem then reduces to estimating the coordinates of these control points in the camera referential, which can be done in O(n) time by expressing these coordinates as a weighted sum of the eigenvectors of a 12×12 matrix and solving a small constant number of quadratic equations to pick the right weights. Furthermore, if maximal precision is required, the output of the closed-form solution can be used to initialize a Gauss-Newton scheme, which improves accuracy with a negligible amount of additional time. The advantages of our method are demonstrated by thorough testing on both synthetic and real data.

2,598 citations


"Hybrid Visual Servoing With Hierarc..." refers methods in this paper

  • ...Once all contour correspondences are associated to the object frame, a Perspective-n-Point method [14] is used to obtain the camera pose with respect to the object....


Proceedings ArticleDOI
20 Jun 2005
TL;DR: This proposed new marker system, ARTag has very low and numerically quantifiable error rates, does not require a grey scale threshold as does other marker systems, and can encode up to 2002 different unique ID's with no need to store patterns.
Abstract: Fiducial marker systems consist of patterns that are mounted in the environment and automatically detected in digital camera images using an accompanying detection algorithm. They are useful for augmented reality (AR), robot navigation, and general applications where the relative pose between a camera and object is required. Important parameters for such marker systems is their false detection rate (false positive rate), their inter-marker confusion rate, minimal detection size (in pixels) and immunity to lighting variation. ARTag is a marker system that uses digital coding theory to get a very low false positive and inter-marker confusion rate with a small required marker size, employing an edge linking method to give robust lighting variation immunity. ARTag markers are bi-tonal planar patterns containing a unique ID number encoded with robust digital techniques of checksums and forward error correction (FEC). This proposed new system, ARTag has very low and numerically quantifiable error rates, does not require a grey scale threshold as does other marker systems, and can encode up to 2002 different unique ID's with no need to store patterns. Experimental results are shown validating this system.

909 citations


"Hybrid Visual Servoing With Hierarc..." refers methods in this paper

  • ...By using a visual tracker [12], [13], the pose of any visible marker can be measured with respect to the camera....


Proceedings ArticleDOI
01 Nov 2013
TL;DR: Overall result shows that the proposed approach demonstrates satisfactory performance as a potential platform which can be utilized in various applications such as inspection, manipulation, or transportation in remote places.
Abstract: This paper presents aerial manipulation using a quadrotor with a two-DOF robot arm. By considering a quadrotor and robot arm as a combined system, the kinematic and dynamic models are developed, and an adaptive sliding mode controller is designed. With the controller, an autonomous flight experiment is conducted including picking up and delivering an object, which requires accurate control of a quadrotor and robot arm. Overall result shows that the proposed approach demonstrates satisfactory performance as a potential platform which can be utilized in various applications such as inspection, manipulation, or transportation in remote places.

319 citations


"Hybrid Visual Servoing With Hierarc..." refers methods in this paper

  • ...Aerial manipulation tasks executed with a UAM endowed with a 2-DoFs robot arm have been presented in [6], where an adaptive sliding-mode controller has been adopted....


Journal ArticleDOI
TL;DR: Stability analysis of priority-based kinematic control algorithms for redundant robotic systems is approached by resorting to a Lyapunov-based stability discussion for several prioritized inverse kinematics algorithms, providing sufficient conditions for the control gains and the tasks' design for the regulation problem.
Abstract: Stability analysis of priority-based kinematic control algorithms for redundant robotic systems is approached in this paper. Starting from the classical applications in position control of manipulators, the kinematic-based approaches have lately been applied to, e.g., visual servoing and quadruped or multirobot coordination control. A common approach consists in the definition of several tasks properly combined in priority. In this paper, by resorting to a Lyapunov-based stability discussion for several prioritized inverse kinematics algorithms, sufficient conditions for the control gains and the tasks' design are given for the regulation problem. Two case studies show the practical implementation of the results.

214 citations

Frequently Asked Questions (16)
Q1. What are the contributions mentioned in the paper "Hybrid visual servoing with hierarchical task composition for aerial manipulation" ?

In this paper a hybrid visual servoing with a hierarchical task-composition control framework is described for aerial manipulation, i. e. for the control of an aerial vehicle endowed with a robot arm. 

Besides tasks allowing the control of the gripper pose with respect to the observed objects, tasks able to guarantee the camera FoV constraint, to minimize the effect of the motion of the arm on the vehicle positional stability, and that addresses the issue of the joint mechanical limits are proposed. 

In particular, during the approaching phase the position error decreases almost linearly due to the saturation of the maximum vehicle cruise velocity (10 cm/s in these case studies).

it is never activated because the high-priority FoV subtask determines arm configurations already compatible with the CoG subtask. 

A RANSAC outlier rejection mechanism is integrated to remove point correspondences resulting from imaging artifacts that might be inconsistent with the computed transformation [15]. 

The corresponding task Jacobian matrix is J_p = 2 e_p^⊤ [I_3  S(R_b o^b_e)ı_z  J_{q,P}] (19), where S(·) is the skew-symmetric matrix representing the vectorial product, ı_z = [0 0 1]^⊤, and J_q = [J_{q,P}^⊤  J_{q,O}^⊤]^⊤ = blkdiag(R_b, R_b) J^b_q (20), with J^b_q(q) the arm Jacobian matrix with respect to B, and J_{q,P} and J_{q,O} (3 × ν) matrices.

By generalizing (8) to the case of η prioritized subtasks, the following general velocity command can be formulated: υ* = J_0^† Λ_0 σ̃_0 + Σ_{i=1}^{η} (J_i N_{0|...|i−1})^† Λ_i σ̃_i − \bar{J}_{0|...|η} ϖ (13), with the recursively-defined compensating matrix \bar{J}_{0|...|η} = (J_η N_{0|...|η−1})^† \bar{J}_η + (I_n − (J_η N_{0|...|η−1})^† J_η) \bar{J}_{0|...|η−1} (14), where N_{0|...|i} is the projector onto the null space of the augmented Jacobian J_{0|...|i} of the ith subtask, with i = 0, ..., η − 1, defined respectively as J_{0|...|i} = [J_0^⊤ ··· J_i^⊤]^⊤ (15) and N_{0|...|i} = I_n − J_{0|...|i}^† J_{0|...|i} (16). The previous stability analysis can be straightforwardly extended to the general case of η subtasks.

Notice that the smoothing term e^{−t/τ} (υ*_r(0) − υ*_{r+1}(0)) is bounded and exponentially vanishing, hence it will not affect the stability of the proposed control law.

The corresponding task Jacobian matrix of the uncontrolled state variables is \bar{J}_p = 2 e_p^⊤ S(R_b o^b_e) [ı_x  ı_y] (21), where ı_x = [1 0 0]^⊤ and ı_y = [0 1 0]^⊤.

Notice that, by considering the DoFs of the vehicle base, even if a VToL UAV is an underactuated system, the addition of enough DoFs of the robot arm can generate a task-redundant system (n = 4 + ν total DoFs), e.g. with respect to the positioning of the gripper. 

By taking into account the camera pose with respect to B, the desired position of the image features centroid has been chosen equal to s∗c = [ 0,−0.1 ]⊤ . 

The kinematic redundancy in the system allows control of the UAM end-effector by simultaneously achieving a number of secondary tasks. 

By knowing the pose of the camera attached to the quadrotor body frame, the authors can trivially obtain the frame transformation between the camera and the object. 

the pose of the target object can be reconstructed from the measurement of each visible marker as follows: o^b_t = o^b_j − R^b_j R^{t⊤}_j o^t_j (2), R^b_t = R^b_j R^{t⊤}_j (3).

Even if this is the lower priority task, a safety distance of more than 20% of the joint ranges, in the worst case, is always preserved. 

As described before, it is possible to increase the priority of this task in the stack when a joint limit is excessively close in a way to guarantee mechanical safety at the expense of other tasks.