
Uncalibrated Visual Servo for Unmanned Aerial Manipulation

TL;DR: This paper hierarchically adds one task to reduce dynamic effects by vertically aligning the arm center of gravity with the multirotor gravitational vector, and another that keeps the arm close to a desired configuration of high manipulability while avoiding arm joint limits.
Abstract: This paper addresses the problem of autonomously servoing an unmanned redundant aerial manipulator using computer vision. The overactuation of the system is exploited by means of a hierarchical control law, which makes it possible to prioritize several tasks during flight. We propose a safety-related primary task to avoid possible collisions. As a secondary task, we present an uncalibrated image-based visual servo strategy to drive the arm end-effector to a desired position and orientation by using a camera attached to it. In contrast to previous visual servo approaches, a known value of the camera focal length is not strictly required. To further improve flight behavior, we hierarchically add one task to reduce dynamic effects by vertically aligning the arm center of gravity with the multirotor gravitational vector, and another one that keeps the arm close to a desired configuration of high manipulability while avoiding arm joint limits. The performance of the hierarchical control law, with and without activation of each of the tasks, is shown in simulations and in real experiments, confirming the viability of such a prioritized control scheme for aerial manipulation.

Summary (4 min read)

Introduction

  • Unmanned aerial vehicles (UAVs), and in particular multirotor systems, have substantially gained popularity in recent years, motivated by their significant increase in maneuverability, together with a decrease in weight and cost [1].
  • In addition, IBVS is more robust than PBVS with respect to uncertainties and disturbances affecting the model of the robot, as well as the calibration of the camera [4], [5].
  • In all image-based and hybrid approaches the resulting image Jacobian or interaction matrix, which relates the camera velocity to the image feature velocities, depends on a priori knowledge of the intrinsic camera parameters.
  • The second contribution is the proposal of a hierarchical control law that exploits the extra degrees of freedom of the UAV-arm system. In contrast to their previous solution [23], it uses a less restrictive control law that only actuates on the components of the secondary tasks that do not conflict directly with tasks higher up in the hierarchy.
  • The next section presents their uncalibrated approach to visual servo.

II. UNCALIBRATED IMAGE-BASED VISUAL SERVOING

  • Drawing inspiration from the UPnP algorithm [25], the authors describe in the following subsection a method to solve for the camera pose and focal length using a reference system attached to the target object.
  • The method is extended in Sec. II-B to compute a calibration-free image Jacobian for their servo task, and in Sec. II-C to compute the desired control law.

A. Uncalibrated PnP

  • 3D target features are parameterized with their barycentric coordinates, and the basis of these coordinates is used to define a set of control points.
  • A least squares solution for the control point coordinates, up to scale, is given by the null eigenvector of a linear system made up of all 2D-to-3D perspective projection relations between the target points.
  • The terms aij are the barycentric coordinates of the i-th target feature which are constant regardless of the location of the camera reference frame, and α is their unknown focal length.
  • In the noise-free case, M^T M is only rank deficient by one, but when image noise is severe it might lose rank, and a more accurate solution can be found as a linear combination of the basis of its null space.
  • It is sufficient for their purposes to consider only the least squares approximation; that is, to compute the solution only using the eigenvector associated to the smallest eigenvalue.

A. Coordinate Systems

  • Consider the quadrotor-arm system equipped with a camera mounted at the arm's end-effector, as shown in Fig. 1.
  • The position of the camera (c) with respect to the target frame, expressed as a homogeneous transform T^w_c, can be computed by integrating the camera velocities obtained from the uncalibrated visual servo approach presented in the previous section.
  • A quadrotor is at the high level of control an underactuated vehicle with only 4 DoF, namely the linear velocities plus the yaw angular velocity (νqx, νqy, νqz, ωqz) acting on the body frame.
  • At the low level, the attitude controller stabilizes the quadrotor body horizontally.
  • With the arm base frame coincident with the quadrotor body frame, the relation between the quadrotor body and camera frames is T^b_c = T^b_t(q_a) T^t_c, with T^b_t(q_a) the arm kinematics and T^t_c the tool-camera transform.

B. Robot Kinematics

  • Combining Eqs. 9 and 10, the authors relate the desired high-level control velocities to their visual servo task, termed σ_S = J^+_vs e, via J_qa v_qa = −Λ_S σ_S (Eq. 13). Unfortunately, as said before, the quadrotor is an underactuated vehicle.
  • So, to remove the non-controllable variables from the control command, their contribution to the image error can be isolated from that of the controllable ones by extracting the columns of J_qa and the rows of v_qa corresponding to ω_qx and ω_qy, reading out these values from the platform gyroscopes, and subtracting them from the camera velocity [26].

C. Motion Distribution

  • In order to penalize the motion of the quadrotor vs the arm, to account for their different motion capabilities, the authors define a weighted norm of the whole velocity vector ‖q̇‖_W = √(q̇^T W q̇).
  • Large movements should be achieved by the quadrotor whereas the precise movements should be devoted to the robotic arm due to its dexterity when the platform is close to the target.
  • The blocks of W weight the velocity components of the arm and the quadrotor differently: the quadrotor velocity is increased when the distance to the target d > Δ_W, while for distances d < δ_W the quadrotor is slowed down and the arm is commanded to accommodate the precise movements.
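This distance-based motion distribution amounts to a weighted pseudoinverse solve (the J^#_S form used later in the paper). The sketch below is our own NumPy illustration, not the authors' code; the split into 4 quadrotor DoF plus m arm joints follows the paper, while the scalar weights are assumptions:

```python
import numpy as np

def weighted_velocity(J_S, xi, w_quad, w_arm, m):
    """Weighted least-squares solve q_W = W^-1 J^T (J W^-1 J^T)^-1 xi.

    Larger weights penalize motion of the corresponding DoF: near the
    target, set w_quad high (and w_arm low) so the dexterous arm performs
    the precise motion while the quadrotor slows down.
    """
    # W is diagonal: first 4 entries weight the quadrotor, last m the arm.
    W_inv = np.diag([1.0 / w_quad] * 4 + [1.0 / w_arm] * m)
    JWJt = J_S @ W_inv @ J_S.T
    return W_inv @ J_S.T @ np.linalg.solve(JWJt, xi)
```

Both weightings still satisfy the task exactly; only the distribution of motion between platform and arm changes.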

A. Hierarchical Task Composition

  • Exploiting this redundancy, the authors can achieve additional tasks acting on the null space of the quadrotor-arm Jacobian [29], while preserving the primary task.
  • These tasks can be used to reconfigure the robot structure without changing the position and orientation of the arm end-effector.
  • Multiple secondary tasks can be arranged in a hierarchy and, to avoid conservative stability conditions [31], the augmented inverse-based projections method is considered here [21].
  • Lower priority tasks are projected not only onto the null space of the task immediately above in the hierarchy, but onto the null space of an augmented Jacobian comprising all higher priority tasks.
  • In Section III-B the authors showed how to compute a visual servo control law that takes into account the uncontrollable state variables.
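The augmented-Jacobian projection described above can be sketched in a few lines of NumPy. This is our own illustration of the general technique, not the authors' implementation; task Jacobians are assumed full row rank:

```python
import numpy as np

def hierarchical_qdot(tasks):
    """Compose prioritized task velocities via augmented-Jacobian projections.

    tasks: list of (J_i, xdot_i) in decreasing priority. Each lower-priority
    correction is computed in the null space of the stack of all
    higher-priority Jacobians, so it cannot disturb them.
    """
    n = tasks[0][0].shape[1]
    qdot = np.zeros(n)
    J_aug = np.zeros((0, n))
    for J, xdot in tasks:
        if J_aug.shape[0] == 0:
            N = np.eye(n)
        else:
            # Projector onto the null space of ALL higher-priority tasks.
            N = np.eye(n) - np.linalg.pinv(J_aug) @ J_aug
        qdot = qdot + np.linalg.pinv(J @ N) @ (xdot - J @ qdot)
        J_aug = np.vstack([J_aug, J])
    return qdot
```

When the redundancy suffices, lower-priority tasks are also achieved; otherwise only their component compatible with the higher-priority tasks is executed.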

B. Stability analysis

  • To assess the stability of each i-th individual task, the authors use Lyapunov analysis, considering the positive definite candidate Lyapunov function L = ½‖σ_i(t)‖².
  • Notice how the secondary task does not affect the dynamics of the main task thanks to the null space projector, hence the stability of the main task is again achieved.
  • The previous stability analysis can be straightforwardly extended to the general case of η subtasks.
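Spelled out for one task, the argument is the standard Lyapunov one (a sketch; it assumes the closed loop achieves σ̇_i = −Λ_i σ_i with Λ_i positive definite, which the null-space projection preserves for the higher-priority tasks):

```latex
L_i = \tfrac{1}{2}\,\lVert \sigma_i(t) \rVert^2
\quad\Longrightarrow\quad
\dot{L}_i = \sigma_i^{\top}\dot{\sigma}_i
          = -\,\sigma_i^{\top}\Lambda_i\,\sigma_i < 0
\quad \text{for } \sigma_i \neq 0 ,
```

so each task error converges exponentially to zero.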

C. Task Order

  • Lower in the hierarchy come the alignment of the center of gravity of the UAM (G), and a technique to stay away from the arm's joint limits (L).
  • Notice that the safety task Jacobian pseudo-inverse J^#_I is also weighted, and that in the null space projectors N_{I|S} and N_{I|S|G} from Eq. 31, the involved pseudo-inverses need not be weighted, because the center of gravity alignment and the joint limit avoidance involve only arm movements and should also be accomplished during flight.
  • The authors now give more detailed descriptions of task Jacobians and task errors involved.

D. Collision Avoidance

  • The most important task during a mission is to preserve flight safety.
  • When a rotor operates near an obstacle, different aerodynamic effects appear, such as the so-called "ground" or "ceiling" effects, which can lead to an accident.
  • Hence, to avoid them, the authors propose a task with the highest priority to maintain a safety distance to obstacles by defining a safety sphere around the flying platform, and comparing the Euclidean distance to the obstacle (do) with the sphere radius (rI ).
  • Note that this corresponds to a proportional control law although integral or derivative errors could also be considered.
  • This is usually called joint clamping (JC).
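The safety-sphere activation can be sketched as follows. This is our illustrative code, not the authors'; the function name and the choice of a repulsion direction along the body-obstacle line are assumptions, since the bullets above only specify a proportional law on the distance error:

```python
import numpy as np

def safety_task_error(p_body, p_obstacle, r_safe):
    """Proportional safety-sphere error (highest-priority task).

    Zero outside the safety sphere; inside it, an error proportional to
    the penetration (r_safe - d_o), directed away from the obstacle.
    """
    diff = p_body - p_obstacle
    d_o = np.linalg.norm(diff)          # Euclidean distance to the obstacle
    if d_o >= r_safe:
        return np.zeros(3)              # task inactive
    return (r_safe - d_o) * diff / d_o  # push away from the obstacle
```

An integral or derivative term could be added on top of this proportional error, as the text notes.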

E. Center of Gravity

  • If the arm and quadrotor center of gravity (CoG) are not vertically aligned, the motion of the arm produces an undesired torque on the quadrotor base that perturbs the system attitude and position.
  • This effect can be mitigated by minimizing the distance between the arm CoG and the vertical line of the quadrotor gravity vector.
  • The position of the arm CoG p^b_G is a function of the arm joint configuration, defined as p^b_G = (∑_{i=1}^{ν} m_i p^b_{Gi}) / (∑_{i=1}^{ν} m_i) (Eq. 37), where m_i and p^b_{Gi} are the mass and the position of the CoG of link i.
  • The authors compute the arm CoG with respect to the body frame for the sequence of links j to the end-effector with p^{*b}_{Gj} = R^b_j (∑_{i=j}^{ν} m_i p^b_{Gi}) / (∑_{i=j}^{ν} m_i) (Eq. 38), where R^b_j is the rotation between link j and the body frame.
  • Notice that all these quantities are a function of the current joint configuration qa.
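Eq. 37 is a mass-weighted average of the link CoGs; a minimal sketch (our code, with the link CoG positions in the body frame assumed to come from the arm's forward kinematics at the current q_a):

```python
import numpy as np

def arm_cog(masses, cog_positions):
    """Eq. 37: p^b_G = (sum_i m_i p^b_Gi) / (sum_i m_i).

    masses        : (n,) link masses
    cog_positions : (n, 3) link CoG positions expressed in the body frame
    """
    m = np.asarray(masses, dtype=float)
    p = np.asarray(cog_positions, dtype=float)
    return (m[:, None] * p).sum(axis=0) / m.sum()
```

The CoG-alignment task then drives the horizontal components of this point towards the quadrotor's vertical gravity line.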

B. UAM system

  • To demonstrate the proposed hierarchical task composition, the authors designed and built a lightweight robotic arm, with a joint configuration chosen to compensate for the possible noise in the quadrotor positioning while hovering, and to avoid self-collisions during take-off and landing maneuvers.
  • To address the dynamical effects of the overall system, their cascaded architecture considers two control loops at very high frequency (1 kHz), one for the arm and one for the attitude of the UAV, and a hierarchical task controller running at a much lower frequency (the camera frame rate), hence avoiding dynamic coupling between them.
  • Then the authors use the inter-distance constraints to solve for scale and focal length.
  • When only the visual servoing (S) is executed, the time to reach the target is significantly higher than those cases in which the arm CoG is vertically aligned (S+G and S+G+L).

VI. REAL ROBOT EXPERIMENTS

  • The authors conducted a series of experiments with missions similar to those shown in simulation, i.e. autonomously taking off and flying to a location where the target appears in the camera's field of view, then turning on the hierarchical task controller to servo the system towards a desired camera pose, and finally autonomously landing.
  • The last task is designed to favor a desired arm configuration and it can be used to push the joints away from singularities and potentially increase maneuverability.
  • It does not imply that the subtask can always be fulfilled.

VII. CONCLUSIONS

  • The authors have presented an uncalibrated image-based visual servo scheme for manipulation UAVs.
  • The authors have presented a control law to achieve not only the visual servoing but also other tasks taking into account their specific priorities.
  • Moreover, the presented control law only requires independent tasks for the uncontrollable variables to guarantee exponential stability of the system.
  • The technique is demonstrated using Matlab and ROS, both in simulation and on a real UAM.
  • The authors can think of two avenues for further research.


IEEE/ASME TRANSACTIONS ON MECHATRONICS 1
Uncalibrated Visual Servo for
Unmanned Aerial Manipulation
Angel Santamaria-Navarro, Patrick Grosch, Vincenzo Lippiello, Joan Solà and Juan Andrade-Cetto
Abstract—This paper addresses the problem of autonomously servoing an unmanned redundant aerial manipulator using computer vision. The over-actuation of the system is exploited by means of a hierarchical control law that makes it possible to prioritize several tasks during flight. We propose a safety-related primary task to avoid possible collisions. As a secondary task, we present an uncalibrated image-based visual servo strategy to drive the arm end-effector to a desired position and orientation using a camera attached to it. In contrast to previous visual-servo approaches, a known value of the camera focal length is not strictly required. To further improve flight behavior, we hierarchically add one task to reduce dynamic effects by vertically aligning the arm center of gravity with the multirotor gravitational vector, and another one that keeps the arm close to a desired configuration of high manipulability while avoiding arm joint limits. The performance of the hierarchical control law, with and without activation of each of the tasks, is shown in simulations and in real experiments, confirming the viability of such a prioritized control scheme for aerial manipulation.
I. INTRODUCTION
Unmanned aerial vehicles (UAVs), and in particular multi-
rotor systems, have substantially gained popularity in recent
years, motivated by their significant increase in maneuver-
ability, together with a decrease in weight and cost [1].
Until recently, UAVs were not usually required to interact
physically with the environment; however, this trend is set to
change. Some examples are the ARCAS, AEROARMS and
AEROWORKS EU funded projects with the aim to develop
UAV systems with advanced manipulation capabilities for
autonomous industrial inspection and repair tasks, such as the
UAM manipulator Kinton from the ARCAS project shown
in Fig. 1. Physical interaction with the environment calls for
positioning accuracy at the centimeter level, which in GPS
denied environments is often difficult to achieve. For indoor
UAV systems, accurate localization is usually obtained from
infrared multi-camera devices, like Vicon or Optitrack. How-
ever, these devices are not suited for outdoor environments
and other means should be used, such as visual servoing.
Vision-based robot control systems are usually classified
in three groups: position-based visual servo (PBVS), image-
A. Santamaria-Navarro, P. Grosch, J. Solà and J. Andrade-Cetto are
with the Institut de Robòtica i Informàtica Industrial, CSIC-UPC, Llorens
Artigas 4-6, Barcelona 08028, Spain, e-mail: {asantamaria, pgrosch, jsola,
cetto}@iri.upc.edu
V. Lippiello is with Università degli Studi di Napoli Federico II. Via
Claudio 21, 80125 Napoli, Italy, e-mail: lippiello@unina.it
This work has been funded by the EU project AEROARMS H2020-ICT-
2014-1-644271 and by the Spanish Ministry of Economy and Competitiveness
project ROBINSTRUCT TIN2014-58178-R.
The paper has supplementary multimedia material available at
http://www.angelsantamaria.eu/multimedia
Fig. 1: The UAM used in the experiments is composed of a 4 DoF quadrotor, commanded at high level by 3 linear and one angular velocity (ν_x, ν_y, ν_z and ω_z), and a 6 DoF robotic arm with joints q_j, j = 1...6; and world, camera, tool and body reference frames indicated by the letters w, c, t and b, respectively.
based visual servo (IBVS), and hybrid control systems [2],
[3]. In PBVS, the geometric model of the target is used in
conjunction with image features to estimate the pose of the
target with respect to the camera frame. The control law is
designed to reduce such pose error in pose space and, in
consequence, the target could be easily lost in the image during
the servo loop. In IBVS on the other hand, both the error and
control law are expressed in the image space, minimizing the
error between observed and desired image feature coordinates.
As a consequence, IBVS schemes do not need any a priori
knowledge of the 3D structure of the observed scene. In
addition, IBVS is more robust than PBVS with respect to
uncertainties and disturbances affecting the model of the robot,
as well as the calibration of the camera [4], [5]. Hybrid
methods, also called 2-1/2-D visual servo [6], combine IBVS
and PBVS to estimate partial camera displacements at each
iteration of the control law minimizing a functional of both.
In all image-based and hybrid approaches the resulting
image Jacobian or interaction matrix, which relates the cam-
era velocity to the image feature velocities, depends on a
priori knowledge of the intrinsic camera parameters. Al-
though image-based methods, and in extension some hybrid
approaches, have shown some robustness in these parameters,
they usually break down at error levels larger than 10% [5].
In contrast, our method indirectly estimates the focal length
online which, as shown in the experiments section, allows to
withstand calibration errors up to 20%.
To do away with this dependence, one could optimize
for the parameters in the image Jacobian whilst the error
in the image plane is being minimized. This is done for
instance, using Gauss-Newton to minimize the squared image
error and non-linear least squares optimization for the image
Jacobian [7]; using weighted recursive least squares, not to
obtain the true parameters, but instead an approximation that

still guarantees asymptotic stability of the control law in the
sense of Lyapunov [8], [9]; using k-nearest neighbor regres-
sion to store previously estimated local models or previous
movements, and estimating the Jacobian using local least
squares [10], or building a secant model using population of
the previous iterates [11]. To provide robustness to outliers in
the computation of the Jacobian, [12] proposes the use of an
M-estimator.
In this paper we extend our prior work on uncalibrated
image-based visual servo (UIBVS) [13], which was demon-
strated only in simulation, to a real implementation for the
case of aerial manipulation. UIBVS contains mild assumptions
about the principal point and skew values of the camera, and
does not require prior knowledge of the focal length. Instead,
in our method, the camera focal length is iteratively estimated
within the control loop. Independence of focal length true
value makes the system robust to noise and to unexpected
large variations of this parameter (e.g., poor initialization or
an unaccounted zoom change).
Multirotors, and in particular quadrotors such as the one
used in this work, are underactuated platforms. That is, they
can change their torque load and thrust/lift by altering the
velocity of the propellers, with only four degrees-of-freedom
(DoF), one for the thrust and three torques. But, as shown in
this paper, the attachment of a manipulator arm to the base of
the robot can be seen as a strategy to alleviate underactuation
allowing unmanned aerial manipulators (UAM) to perform
complex tasks.
In [14] a vision-based method to guide a UAM with a
three DoF arm is described. To cope with underactuation
of the aerial platform, roll and pitch motion compensation
is moved to the image processing part, requiring projective
transformations. Therefore, errors computing arm kinematics
are to be coupled with the image-based control law and the
scale (i.e. camera-object distance) cannot be directly measured.
Flying with a suspended load is a challenging task and it is
essential to have the ability to minimize the undesired effects
of the arm in the flying system [15]. Among these effects,
there is the change of the center of mass during flight, that can
be solved designing a low-level attitude controller such as a
Cartesian impedance controller [16], or an adaptive controller.
Moreover, a desired end-effector pose might require a non-
horizontal robot configuration that the low level controller
would try to compensate, changing in turn the arm end-effector
position. In this way, [17] designs a controller exploiting the
whole system model. However, flight stability is preserved
by restricting the arm movements to those not jeopardizing
UAM integrity. To cope with these problems, parallel robots
are analyzed in [18] and [19]. The main advantages they offer
are related with the torque reduction in the platform base.
However, they are limited in workspace and are difficult to
handle due to their highly nonlinear motion models.
The redundancy of quadrotor-arm systems in the form
of extra DoF could be exploited to develop a low priority
stabilizing task or to optimize some given quality indices,
e.g. manipulability, joint limits, etc. [20], [21]. In [22], an
image-based control law is presented that explicitly takes into
account the system redundancy and the underactuation of the
vehicle base. The camera is attached on the aerial platform
and the positions of both arm end-effector and target are
projected onto the image plane in order to perform an image-
based error decrease, which creates a dependency on the
precision of the odometry estimator that is rarely achieved
in a real scenario without motion capture systems. Moreover,
the proposed control scheme is only validated in simulation.
In this work, we exploit the DoF redundancy of the overall
system not only to achieve the desired visual servo task, but
to do so whilst attaining also other tasks during the mission.
We presented in [23] a closely related approach consisting of a hybrid
servoing scheme. In contrast to [23] which uses a combination
of classical PBVS and IBVS, in this article we present a fully
vision-based self-calibrated scheme that can handle poorly
calibrated cameras. Moreover, we attach a light-weight serial
arm to a quadrotor with a camera at its end-effector, see Fig. 1,
instead of allocating it in the platform frame.
We present a new safety task intended for collision avoid-
ance, designed with the highest priority. Our servo task is
considered second in the hierarchy with two low priority
tasks, one to vertically align the arm and platform centers
of gravity and another to avoid arm joint limits. In contrast
to [23] we combine the tasks hierarchically in a less restrictive
manner, minimizing secondary task reconstruction only for
those components not in conflict with the primary task. This
strategy is known to achieve possibly less accurate secondary
task reconstruction but with the advantage of decoupling
algorithmic singularities between tasks [24].
Although hierarchical task composition techniques are well
known for redundant manipulators, its use on aerial manipu-
lation is novel. Specifically, the underactuation of the flying
vehicle has critical effects on mission achievement and here we
show how the non-controllable DoF must be considered in the
task designs. While the control law presented in [23] requires
orthogonal tasks to guarantee stability of the system, in our
case only independence of non-controllable DoF is required.
We validate the use of this task hierarchy in simulations and in
extensive real experiments, using our UIBVS scheme to track
the target, and also with the aid of an external positioning
system.
To summarize, the main contributions of the paper are two-
fold. On the one hand, we demonstrate now in real experi-
ments (on-board, and in real time) the proposed uncalibrated
image-based servo law which was previously only shown in
simulation in [13]. The second contribution is the proposal
of a hierarchical control law that exploits the extra degrees
of freedom of the UAV-arm system which, in contrast to our
previous solution [23], uses a less restrictive control law that
only actuates on the components of the secondary tasks that
do not conflict directly with tasks higher up in the hierarchy.
The remainder of this article is structured as follows. The
next section presents our uncalibrated approach to visual servo.
Section III describes the kinematics of our UAM and Sec-
tion IV contains the proposed task priority controller and task
definitions. Simulations and experimental results are presented
in Section V. Finally, conclusions are given in Section VII.

II. UNCALIBRATED IMAGE-BASED VISUAL SERVOING
Drawing inspiration from the UPnP algorithm [25], we
describe in the following subsection a method to solve for the
camera pose and focal length using a reference system attached
to the target object. The method is extended in Sec. II-B to
compute a calibration-free image Jacobian for our servo task,
and in Sec. II-C to compute the desired control law.
A. Uncalibrated PnP
3D target features are parameterized with their barycentric
coordinates, and the basis of these coordinates is used to define
a set of control points. Computing the pose of the object with
respect to the camera resorts to computing the location of
these control points with respect to the camera frame. A least
squares solution for the control point coordinates, up to scale,
is given by the null eigenvector of a linear system made up
of all 2D to 3D perspective projection relations between the
target points. Given the fact that distances between control
points must be preserved, these distance constraints can be
used in a second least squares computation to solve for scale
and focal length. More explicitly, the perspective projection
equations for each target feature become
\[
\sum_{j=1}^{4} a_{ij}\,x_j + a_{ij}\,(u_0 - u_i)\,\frac{z_j}{\alpha} = 0, \quad (1a)
\]
\[
\sum_{j=1}^{4} a_{ij}\,y_j + a_{ij}\,(v_0 - v_i)\,\frac{z_j}{\alpha} = 0, \quad (1b)
\]
where s_i = [u_i, v_i]^T are the image coordinates of the target feature i, and c_j = [x_j, y_j, z_j]^T are the 3D coordinates of the j-th control point in the camera frame. The terms a_{ij} are the barycentric coordinates of the i-th target feature, which are constant regardless of the location of the camera reference frame, and α is our unknown focal length.
These equations can be jointly expressed for n 2D-3D correspondences as a linear system
\[
M x = 0, \quad (2)
\]
where M is a 2n × 12 matrix made of the coefficients a_{ij}, the 2D points s_i and the principal point, and x is our vector of 12 unknowns containing both the 3D coordinates of the control points in the camera reference frame and the camera focal length, dividing the z terms, x = [x_1, y_1, z_1/α, ..., x_4, y_4, z_4/α]^T. Its solution lies in the null space of M, and can be computed as a scaled product of the null eigenvector of M^T M via singular value decomposition
\[
x = \beta v, \quad (3)
\]
the scale β becoming a new unknown. In the noise-free case, M^T M is only rank deficient by one, but when image noise is severe it might lose rank, and a more accurate solution can be found as a linear combination of the basis of its null space. In this work we are not interested in recovering an accurate camera pose, but in minimizing the projection error within a servo task. It is sufficient for our purposes to consider only the least squares approximation; that is, to compute the solution using only the eigenvector associated with the smallest eigenvalue.
To solve for β we add constraints that preserve the distance between control points, of the form ||c_j − c_{j'}||² = d²_{jj'}, where d_{jj'} is the known distance between control points c_j and c_{j'} in the world coordinate system. Substituting x in these six distance constraints, we obtain a system of the form Lb = d, where b = [β², α²β²]^T, L is a 6 × 2 matrix built from the known elements of v, and d is the 6-vector of squared distances between the control points. We solve this overdetermined linearized system using least squares and estimate the magnitudes of α and β by back substitution
\[
\alpha = \sqrt{\frac{|b_2|}{|b_1|}}, \qquad \beta = \sqrt{|b_1|}. \quad (4)
\]
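The two-stage least squares above can be sketched in a few lines of NumPy. Here `v` is the null eigenvector of M^T M (a 12-vector ordered as in x) and `d2` holds the six squared inter-control-point world distances; the function name and data layout are our illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from itertools import combinations

def recover_scale_and_focal(v, d2):
    """Solve L b = d for b = [beta^2, alpha^2 beta^2] and back-substitute (Eq. 4).

    v  : 12-vector [x1, y1, z1/a, ..., x4, y4, z4/a] (null eigenvector of M^T M)
    d2 : dict {(j, k): squared world distance} for the 6 control-point pairs
    """
    c = v.reshape(4, 3)                      # rows: (x_j, y_j, z_j/alpha)
    L, d = [], []
    for j, k in combinations(range(4), 2):
        dx, dy, dz = c[j] - c[k]
        # ||c_j - c_k||^2 = beta^2 (dx^2 + dy^2) + alpha^2 beta^2 dz^2
        L.append([dx * dx + dy * dy, dz * dz])
        d.append(d2[(j, k)])
    b, *_ = np.linalg.lstsq(np.asarray(L), np.asarray(d), rcond=None)
    alpha = np.sqrt(abs(b[1]) / abs(b[0]))   # focal length magnitude
    beta = np.sqrt(abs(b[0]))                # scale magnitude
    return alpha, beta
```

With noise-free input the six constraints are consistent and the recovery is exact up to floating-point error.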
B. Calibration-free Image Jacobian
As the camera moves, the velocity of each target control point c_j in camera coordinates can be related to the camera spatial velocity (t, ω) with ċ_j = −t − ω × c_j, which, combined with Eq. 3, gives
\[
\begin{bmatrix} \dot{x}_j \\ \dot{y}_j \\ \dot{z}_j \end{bmatrix} =
\begin{bmatrix}
-t_x - \omega_y\,\alpha\beta v_z + \omega_z\,\beta v_y \\
-t_y - \omega_z\,\beta v_x + \omega_x\,\alpha\beta v_z \\
-t_z - \omega_x\,\beta v_y + \omega_y\,\beta v_x
\end{bmatrix}, \quad (5)
\]
where v_x, v_y, and v_z are the x, y, and z components of eigenvector v related to the control point c_j, and whose image projection and its time derivative are given by
\[
\begin{bmatrix} u_j \\ v_j \end{bmatrix} =
\begin{bmatrix} \alpha\,\frac{x_j}{z_j} + u_0 \\[4pt] \alpha\,\frac{y_j}{z_j} + v_0 \end{bmatrix}, \qquad
\begin{bmatrix} \dot{u}_j \\ \dot{v}_j \end{bmatrix} =
\alpha \begin{bmatrix} \frac{\dot{x}_j z_j - x_j \dot{z}_j}{z_j^2} \\[4pt] \frac{\dot{y}_j z_j - y_j \dot{z}_j}{z_j^2} \end{bmatrix}. \quad (6)
\]
Substituting Eqs. 3 and 5 in Eq. 6 we have
\[
\dot{u}_j = \frac{(-t_x - \alpha\beta v_z\,\omega_y + \beta v_y\,\omega_z)\,\alpha v_z - v_x\,(-t_z - \beta v_y\,\omega_x + \beta v_x\,\omega_y)}{\alpha\beta v_z^2}, \quad (7a)
\]
\[
\dot{v}_j = \frac{(-t_y + \alpha\beta v_z\,\omega_x - \beta v_x\,\omega_z)\,\alpha v_z - v_y\,(-t_z - \beta v_y\,\omega_x + \beta v_x\,\omega_y)}{\alpha\beta v_z^2}, \quad (7b)
\]
which can be rewritten as ṡ_j = J_j v_c, with ṡ_j = [u̇_j, v̇_j]^T the image velocities of control point j, and v_c = [t^T, ω^T]^T.
J_j is our desired calibration-free image Jacobian for the j-th control point, and takes the form
\[
J_j = \begin{bmatrix}
-\dfrac{1}{\beta v_z} & 0 & \dfrac{v_x}{\alpha\beta v_z^2} & \dfrac{v_x v_y}{\alpha v_z^2} & -\dfrac{v_x^2 + \alpha^2 v_z^2}{\alpha v_z^2} & \dfrac{v_y}{v_z} \\[8pt]
0 & -\dfrac{1}{\beta v_z} & \dfrac{v_y}{\alpha\beta v_z^2} & \dfrac{v_y^2 + \alpha^2 v_z^2}{\alpha v_z^2} & -\dfrac{v_x v_y}{\alpha v_z^2} & -\dfrac{v_x}{v_z}
\end{bmatrix}. \quad (8)
\]
Stacking these together, we get the image Jacobian for all control points, J_vs = [J_1^T ... J_4^T]^T.
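A direct transcription of Eq. 8, stacked for the four control points, can look as follows (a NumPy sketch with our own function names, following the sign reconstruction above):

```python
import numpy as np

def image_jacobian_row_pair(vx, vy, vz, alpha, beta):
    """2x6 calibration-free image Jacobian J_j of Eq. 8 for one control point."""
    return np.array([
        [-1 / (beta * vz), 0.0, vx / (alpha * beta * vz**2),
         vx * vy / (alpha * vz**2),
         -(vx**2 + alpha**2 * vz**2) / (alpha * vz**2), vy / vz],
        [0.0, -1 / (beta * vz), vy / (alpha * beta * vz**2),
         (vy**2 + alpha**2 * vz**2) / (alpha * vz**2),
         -vx * vy / (alpha * vz**2), -vx / vz],
    ])

def stacked_jacobian(v, alpha, beta):
    """Stack J_1..J_4 into the 8x6 image Jacobian J_vs used by the servo."""
    c = v.reshape(4, 3)   # per-control-point (v_x, v_y, v_z)
    return np.vstack([image_jacobian_row_pair(vx, vy, vz, alpha, beta)
                      for vx, vy, vz in c])
```

The rows can be checked against the point kinematics ċ = −t − ω × c and the derivative of the projection, which is how Eq. 8 is obtained.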
C. Control Law
The aim of our image-based control scheme is to minimize the error e(t) = s(t) − s*, where s(t) are the current image coordinates of the set of target features, and s* are their final desired positions in the image plane, computed with our initial value for α. If we select s to be the projection of the control points c, and disregarding the time variation of α, and consequently of s*, the derivative of the error becomes ė = ṡ and, for a desired exponential decoupled error decrease ė = −Λ_S e, we have a desired camera velocity
\[
v_c = -\Lambda_S\,J^{+}_{vs}\,e, \quad (9)
\]
where Λ_S is a 6 × 6 positive definite gain matrix and J^+_{vs} = (J^T_{vs} J_{vs})^{-1} J^T_{vs} is the left Moore-Penrose pseudoinverse of J_{vs}.
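With J_vs stacked, the servo command of Eq. 9 is a one-line solve. A sketch (our code; the gain Λ_S is reduced to a scalar for illustration):

```python
import numpy as np

def servo_velocity(J_vs, e, gain=1.0):
    """Desired camera twist v_c = -Lambda_S J_vs^+ e  (Eq. 9).

    J_vs : 8x6 stacked image Jacobian, e : 8-vector image-feature error.
    np.linalg.pinv computes the left Moore-Penrose pseudoinverse
    (J^T J)^-1 J^T when J_vs has full column rank.
    """
    return -gain * np.linalg.pinv(J_vs) @ e
```

For an error that lies in the range of J_vs, this recovers the generating camera twist exactly, scaled by the gain.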
III. ROBOT MODEL
A. Coordinate Systems
Consider the quadrotor-arm system equipped with a camera mounted at the arm's end-effector as shown in Fig. 1. Without loss of generality, we consider the world frame (w) to be located at the target. With this, the position of the camera (c) with respect to the target frame, expressed as a homogeneous transform T^w_c, can be computed integrating the camera velocities obtained from the uncalibrated visual servo approach presented in the previous section.
A quadrotor is, at the high level of control, an underactuated vehicle with only 4 DoF, namely the linear velocities plus the yaw angular velocity (ν_qx, ν_qy, ν_qz, ω_qz) acting on the body frame. At the low level, the attitude controller stabilizes the quadrotor body horizontally. Now, let q_a = [q_1, ..., q_m]^T be the joint vector of the robotic arm attached to the UAM. With the arm base frame coincident with the quadrotor body frame, the relation between the quadrotor body and camera frames is T^b_c = T^b_t(q_a) T^t_c, with T^b_t(q_a) the arm kinematics and T^t_c the tool-camera transform. Moreover, the pose of the quadrotor with respect to the target is determined by the transform T^b_w = T^b_c (T^w_c)^{-1}.
B. Robot Kinematics
We are in the position now to define a joint quadrotor-
arm Jacobian that relates the local translational and angular
velocities of the platform and those of the m arm joints,
v
qa
= (ν
qx
, ν
qy
, ν
qz
, ω
qx
, ω
qy
, ω
qz
, ˙q
1
, . . . , ˙q
m
), to the desired
camera velocities computed from the visual servo
v
c
= J
qa
v
qa
. (10)
with J
qa
the Jacobian matrix of the whole robot.
This velocity vector in the camera frame can be expressed as the sum of the velocities contributed by the arm kinematics and by the quadrotor movement, $v^c = v^c_a + v^c_q$ (superscripts indicate the reference frame to make it clear to the reader), where $v^c_a$ is obtained with the arm Jacobian

$$v^c_a = \begin{bmatrix} R^c_b & 0 \\ 0 & R^c_b \end{bmatrix} J_a\, \dot q_a = \bar R^c_b\, J_a\, \dot q_a\,, \qquad (11)$$

with $R^c_b$ the rotation matrix of the body frame with respect to the camera frame and $\bar R^c_b = \mathrm{diag}(R^c_b, R^c_b)$, and where $v^c_q$ corresponds to the velocity of the quadrotor expressed in the camera frame

$$v^c_q = \begin{bmatrix} R^c_b \left( \nu^b_q + \omega^b_q \times r^b_c \right) \\ R^c_b\, \omega^b_q \end{bmatrix} = \begin{bmatrix} R^c_b & -R^c_b\, [r^b_c]_\times \\ 0 & R^c_b \end{bmatrix} v^b_q\,, \qquad (12)$$

with $r^b_c(q_a)$ the distance vector between the body and camera frames, and $v^b_q = [\nu_{qx}, \nu_{qy}, \nu_{qz}, \omega_{qx}, \omega_{qy}, \omega_{qz}]^\top$ the velocity vector of the quadrotor in the body frame.
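Eq. 12 is a standard twist transformation between frames. The short NumPy sketch below (all numeric values are illustrative, not from the paper) builds the block matrix of Eq. 12 and maps a body twist to the camera frame:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v]_x, so that skew(v) @ u = v x u."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def body_to_camera_twist(R_c_b, r_b_c, v_b_q):
    """Eq. (12): map the 6-vector body twist (nu, omega) to the camera frame."""
    M = np.block([[R_c_b, -R_c_b @ skew(r_b_c)],
                  [np.zeros((3, 3)), R_c_b]])
    return M @ v_b_q

# Illustrative values
th = 0.3
R_c_b = np.array([[np.cos(th), -np.sin(th), 0.0],
                  [np.sin(th),  np.cos(th), 0.0],
                  [0.0, 0.0, 1.0]])
r_b_c = np.array([0.25, 0.0, 0.1])                 # body-to-camera offset
v_b_q = np.array([0.5, 0.0, 0.1, 0.0, 0.0, 0.2])   # (nu_q, omega_q) in body frame
v_c_q = body_to_camera_twist(R_c_b, r_b_c, v_b_q)
```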
Combining Eqs. 9 and 10 we can relate the desired high-level control velocities with our visual servo task, which we now term $\sigma_S$,

$$J_{qa}\, v_{qa} = \underbrace{\Lambda_S\, J^+_{vs}\, e}_{\sigma_S}\,. \qquad (13)$$
Unfortunately, as said before, the quadrotor is an underactuated vehicle. So, to remove the non-controllable variables from the control command, their contribution to the image error can be isolated from that of the controllable ones by extracting the columns of $J_{qa}$ and the rows of $v_{qa}$ corresponding to $\omega_{qx}$ and $\omega_{qy}$, reading out these values from the platform gyroscopes, and subtracting them from the camera velocity [26]

$$J_S\, \dot q + \bar J_S\, \varpi = \Lambda_S\, \sigma_S\,, \qquad (14)$$

where $\varpi = [\omega_{qx}, \omega_{qy}]^\top$, $\bar J_S$ is the Jacobian formed by the columns of $J_{qa}$ corresponding to $\omega_{qx}$ and $\omega_{qy}$, and $J_S$ is the Jacobian formed by all other columns of $J_{qa}$, corresponding to the actuated variables $\dot q = [\nu_{qx}, \nu_{qy}, \nu_{qz}, \omega_{qz}, \dot q_1, \dots, \dot q_m]^\top$.

Rearranging terms,

$$J_S\, \dot q = \underbrace{\Lambda_S\, \sigma_S - \bar J_S\, \varpi}_{\xi}\,, \qquad (15)$$

and with this, our main task velocity corresponding to the visual servo is

$$\dot q = J^+_S\, \xi\,, \qquad (16)$$

where, with 6 linearly independent rows and $4 + m > 6$ columns, $J^+_S$ is computed with the right Moore-Penrose pseudoinverse $J^\top_S (J_S J^\top_S)^{-1}$.
C. Motion Distribution

In order to penalize the motion of the quadrotor versus that of the arm, accounting for their different motion capabilities, we can define a weighted norm of the whole velocity vector $\|\dot q\|_W = \sqrt{\dot q^\top W \dot q}$ as in [27], and use a weighted task Jacobian to solve for the weighted controls

$$\dot q_W = W^{-1/2} \left( J_S\, W^{-1/2} \right)^+ \xi = J^\#_S\, \xi\,, \qquad (17)$$

with

$$J^\#_S = W^{-1} J^\top_S \left( J_S\, W^{-1} J^\top_S \right)^{-1} \qquad (18)$$

the weighted generalized Moore-Penrose pseudoinverse of the servoing Jacobian. With this, large movements should be achieved by the quadrotor, whereas the precise movements should be devoted to the robotic arm, thanks to its dexterity, when the platform is close to the target. To achieve this behavior, we define a time-varying diagonal weight matrix, as proposed in [28], $W(d) = \mathrm{diag}\left((1 - \gamma)\, I_4,\ \gamma\, I_m\right)$, with $n = 4 + m$ the whole UAM DoF (4 for the quadrotor and $m$ for the arm) and

$$\gamma(d) = \frac{1 + \underline\gamma}{2} + \frac{1 - \underline\gamma}{2}\, \tanh\!\left( \frac{2\pi\, (d - \delta_W)}{\Delta_W - \delta_W} - \pi \right)\,, \qquad (19)$$

where $\gamma \in [\underline\gamma, 1]$, and $\delta_W$ and $\Delta_W$, with $\Delta_W > \delta_W$, are the distance thresholds corresponding to $\gamma = \underline\gamma$ and $\gamma = 1$, respectively. The blocks of $W$ weight differently the velocity components of the arm and the quadrotor, favoring the velocity of the quadrotor when the distance to the target $d > \Delta_W$, while for distances $d < \delta_W$ the quadrotor is slowed down and the arm is commanded to accommodate the precise movements.
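A minimal NumPy sketch of the motion distribution of Eqs. 17-19 follows; the thresholds, gains, and Jacobian are illustrative assumptions, not values from the paper:

```python
import numpy as np

def gamma_of_d(d, gamma_min, delta_W, Delta_W):
    """Eq. (19): smooth weight in [gamma_min, 1] versus target distance d."""
    s = 2.0 * np.pi * (d - delta_W) / (Delta_W - delta_W) - np.pi
    return (1 + gamma_min) / 2 + (1 - gamma_min) / 2 * np.tanh(s)

def weighted_pinv(J, W):
    """Eq. (18): W-weighted generalized Moore-Penrose pseudoinverse."""
    Wi = np.linalg.inv(W)
    return Wi @ J.T @ np.linalg.inv(J @ Wi @ J.T)

m = 6
rng = np.random.default_rng(1)
J_S = rng.standard_normal((6, 4 + m))     # servoing Jacobian (random placeholder)
xi = rng.standard_normal(6)

# Far from the target (d > Delta_W): gamma -> 1, so the quadrotor block
# (1 - gamma) I_4 is barely penalized and the platform does the large motion.
g = gamma_of_d(d=2.5, gamma_min=0.1, delta_W=0.5, Delta_W=2.0)
W = np.diag(np.concatenate([(1 - g) * np.ones(4), g * np.ones(m)]))
q_dot_W = weighted_pinv(J_S, W) @ xi      # Eq. (17)
```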
IEEE/ASME TRANSACTIONS ON MECHATRONICS 5
IV. TASK PRIORITY CONTROL
A. Hierarchical Task Composition
Even though the quadrotor itself is underactuated (4 DoF),
by attaching a robotic arm with more than 2 DoF we can
attain over-actuation (n = 4 + m). In our case, m = 6.
Exploiting this redundancy, we can achieve additional tasks
acting on the null space of the quadrotor-arm Jacobian [29],
while preserving the primary task. These tasks can be used to
reconfigure the robot structure without changing the position
and orientation of the arm end-effector. This is usually referred
to as internal motion of the arm. One possible way to specify
a secondary task is to choose its velocity vector as the gradient
of a scalar objective function to optimize [20], [30]. Multiple secondary tasks can be arranged in a hierarchy and, to avoid conservative stability conditions [31], the augmented inverse-based projections method is considered here [21]. In this method, lower priority tasks are projected not only onto the null space of the task immediately above in the hierarchy, but also onto the null space of an augmented Jacobian stacking all higher priority tasks.
In Section III-B we showed how to compute a visual servo control law that takes into account the uncontrollable state variables. This is not, however, our main task. We choose to place higher up in the hierarchy an obstacle avoidance task needed to guarantee system integrity. In a more general sense, we can define any such primary task as a configuration-dependent task $\sigma_0 = f_0(x)$. Differentiating it with respect to $x$, and separating the uncontrollable state variables as in Eq. 14, we have

$$\dot\sigma_0 = \frac{\partial f_0(x)}{\partial x}\, \dot x = J_0\, \dot q_0 + \bar J_0\, \varpi\,. \qquad (20)$$

Again, considering as in Eq. 16 a main task error $\tilde\sigma_0 = \sigma^*_0 - \sigma_0$ to regulate $\sigma_0$ to a desired value $\sigma^*_0$, the control law for the main task becomes

$$\dot q_0 = J^+_0 \left( \Lambda_0\, \tilde\sigma_0 - \bar J_0\, \varpi \right)\,, \qquad (21)$$

where, as with Eqs. 15 and 16, $\Lambda_0$ is a positive definite gain matrix and $J^+_0$ is the generalized inverse of $J_0$.
Consider now a secondary, lower priority task $\sigma_1 = f_1(x)$ such that

$$\dot\sigma_1 = J_1\, \dot q_1 + \bar J_1\, \varpi\,, \qquad (22)$$

with $\dot q_1 = J^+_1 (\Lambda_1\, \tilde\sigma_1 - \bar J_1\, \varpi)$, and a task composition strategy that minimizes the secondary task velocity reconstruction only for those components in Eq. 22 that do not conflict with the primary task [24], namely

$$\dot q = J^+_0\, \Lambda_0\, \tilde\sigma_0 + N_0\, J^+_1\, \Lambda_1\, \tilde\sigma_1 - \bar J_{0|1}\, \varpi\,, \qquad (23)$$

where $N_0 = (I_n - J^+_0 J_0)$ is the null space projector of the primary task and $\bar J_{0|1} = J^+_0\, \bar J_0 + N_0\, J^+_1\, \bar J_1$ is the Jacobian matrix that allows for the compensation of the variation of the uncontrollable states $\varpi$.
This strategy, in contrast to the more restrictive one we presented in [23], might achieve larger constraint-task reconstruction errors than the full least squares secondary task solution in [23], but with the advantage that algorithmic singularities arising from conflicting tasks are decoupled from the singularities of the secondary tasks.
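The two-task composition of Eq. 23 can be checked numerically. In the sketch below (random Jacobians, errors, and gyroscope rates, purely illustrative), the secondary term lies in the null space of the primary Jacobian, so the primary task dynamics are unaffected:

```python
import numpy as np

pinv = np.linalg.pinv
rng = np.random.default_rng(2)
n = 10                                    # 4 quadrotor + 6 arm DoF
J0 = rng.standard_normal((3, n))          # primary (safety) task Jacobian
J1 = rng.standard_normal((6, n))          # secondary (servoing) task Jacobian
Jb0 = rng.standard_normal((3, 2))         # bar J_0: roll/pitch columns
Jb1 = rng.standard_normal((6, 2))         # bar J_1
L0, L1 = np.eye(3), np.eye(6)             # task gains
e0, e1 = rng.standard_normal(3), rng.standard_normal(6)   # task errors
varpi = rng.standard_normal(2)            # gyroscope roll/pitch rates

N0 = np.eye(n) - pinv(J0) @ J0            # null space projector of task 0
Jb01 = pinv(J0) @ Jb0 + N0 @ pinv(J1) @ Jb1
q_dot = pinv(J0) @ L0 @ e0 + N0 @ pinv(J1) @ L1 @ e1 - Jb01 @ varpi  # Eq. (23)
```

The second assertion below reproduces the primary closed loop: with this control, $J_0 \dot q + \bar J_0 \varpi = \Lambda_0 \tilde\sigma_0$.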
The addition of more tasks in cascade is possible as long as there remain DoF after the concatenation of the tasks higher up in the hierarchy. The generalization of Eq. 23 to the case of $\eta$ prioritized subtasks is

$$\dot q = J^+_0\, \Lambda_0\, \tilde\sigma_0 + \sum_{i=1}^{\eta} N_{0|\dots|i-1}\, J^+_i\, \Lambda_i\, \tilde\sigma_i - \bar J_{0|\dots|\eta}\, \varpi \qquad (24)$$

with the recursively-defined compensating matrix

$$\bar J_{0|\dots|i} = N_{0|\dots|i-1}\, J^+_i\, \bar J_i + \left( I - N_{0|\dots|i-1}\, J^+_i\, J_i \right) \bar J_{0|\dots|i-1}\,, \qquad (25)$$

where $N_{0|\dots|i}$ is the projector onto the null space of the augmented Jacobian $J_{0|\dots|i}$ for the $i$-th subtask, with $i = 0, \dots, \eta - 1$, and they are respectively defined as follows

$$N_{0|\dots|i} = \left( I - J^+_{0|\dots|i}\, J_{0|\dots|i} \right) \qquad (26)$$

$$J_{0|\dots|i} = [J^\top_0\ \dots\ J^\top_i]^\top\,. \qquad (27)$$
B. Stability Analysis

To assess the stability of each $i$-th individual task, we use Lyapunov analysis by considering the positive definite candidate Lyapunov function $L = \frac{1}{2}\|\sigma_i(t)\|^2$ and its derivative $\dot L = \sigma^\top_i\, \dot\sigma_i$. Then, for the primary task we can substitute Eq. 21 into Eq. 20, giving $\dot\sigma_0 = \Lambda_0\, \tilde\sigma_0$, which, for a defined main task error $\tilde\sigma_0 = \sigma^*_0 - \sigma_0$ and $\sigma^*_0 = 0$, proves asymptotic stability with $\dot L = -\sigma^\top_0\, \Lambda_0\, \sigma_0$.
Similarly, substituting Eq. 23 into Eq. 22, and considering a task error $\tilde\sigma_1 = \sigma^*_1 - \sigma_1$, with $\sigma^*_1 = 0$, the following dynamics for the secondary task is obtained

$$\dot\sigma_1 = -J_1\, J^+_0\, \Lambda_0\, \sigma_0 - \Lambda_1\, \sigma_1 - J_1\, J^+_0\, \bar J_0\, \varpi\,, \qquad (28)$$

where we used the property $J_1\, N_0\, J^+_1 = I$. Notice how exponential stability of the secondary task in Eq. 28 can only be guaranteed when the tasks are independent with respect to the uncontrollable states $\varpi$ (i.e. $J_1 J^+_0 \bar J_0 = 0$), hence $\dot L = -\sigma^\top_1\, J_1 J^+_0\, \Lambda_0\, \sigma_0 - \sigma^\top_1\, \Lambda_1\, \sigma_1$, which is a less stringent condition than the whole task orthogonality $J_1 J^+_0 = 0$ that was needed in [23].
Finally, the dynamics of the system can be written as

$$\begin{bmatrix} \dot\sigma_0 \\ \dot\sigma_1 \end{bmatrix} = \begin{bmatrix} -\Lambda_0 & O \\ -J_1 J^+_0 \Lambda_0 & -\Lambda_1 \end{bmatrix} \begin{bmatrix} \sigma_0 \\ \sigma_1 \end{bmatrix}\,, \qquad (29)$$

which is characterized by a Hurwitz matrix as in [23], guaranteeing the exponential stability of the system. Notice how the secondary task does not affect the dynamics of the main task thanks to the null space projector, hence the stability of the main task is preserved.

The previous stability analysis can be straightforwardly extended to the general case of $\eta$ subtasks.
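The Hurwitz structure of the closed-loop matrix in Eq. 29 can also be verified numerically: being block lower-triangular, its spectrum is the union of the spectra of $-\Lambda_0$ and $-\Lambda_1$, all on the negative real axis. A small sketch with illustrative gains:

```python
import numpy as np

L0 = np.diag([1.0, 2.0, 1.5])             # positive definite gains (illustrative)
L1 = np.diag([0.8, 1.2])
C = np.random.default_rng(4).standard_normal((2, 3))  # coupling J_1 J_0^+ Lambda_0

# Closed-loop matrix of Eq. (29): block lower-triangular, so its eigenvalues
# are those of -Lambda_0 and -Lambda_1 regardless of the coupling block C.
A = np.block([[-L0, np.zeros((3, 2))],
              [-C, -L1]])
eigs = np.linalg.eigvals(A)
```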
C. Task Order

In this paper we consider the following ordered tasks: a primary safety task (I) considering potential collisions (inflation radius); a secondary task performing visual servoing (S); and, lower in the hierarchy, the alignment of the center of gravity of the UAM (G) and a technique to stay away from the arm's joint limits (L). By denoting with $J_I$, $J_S$, $J_G$ and $J_L$ the