
Uncalibrated Visual Servo for Unmanned Aerial Manipulation

TL;DR: This paper hierarchically adds one task to reduce dynamic effects, by vertically aligning the arm center of gravity with the multirotor gravitational vector, and another that keeps the arm close to a desired configuration of high manipulability while avoiding arm joint limits.
Abstract: This paper addresses the problem of autonomously servoing an unmanned redundant aerial manipulator using computer vision. The overactuation of the system is exploited by means of a hierarchical control law, which allows prioritizing several tasks during flight. We propose a safety-related primary task to avoid possible collisions. As a secondary task, we present an uncalibrated image-based visual servo strategy to drive the arm end-effector to a desired position and orientation by using a camera attached to it. In contrast to previous visual servo approaches, a known value of the camera focal length is not strictly required. To further improve flight behavior, we hierarchically add one task to reduce dynamic effects by vertically aligning the arm center of gravity to the multirotor gravitational vector, and another one that keeps the arm close to a desired configuration of high manipulability while avoiding arm joint limits. The performance of the hierarchical control law, with and without activation of each of the tasks, is shown in simulations and in real experiments, confirming the viability of such a prioritized control scheme for aerial manipulation.

Summary (4 min read)

Introduction

  • Unmanned aerial vehicles (UAVs), and in particular multirotor systems, have substantially gained popularity in recent years, motivated by their significant increase in maneuverability, together with a decrease in weight and cost [1].
  • In addition, IBVS is more robust than PBVS with respect to uncertainties and disturbances affecting the model of the robot, as well as the calibration of the camera [4], [5].
  • In all image-based and hybrid approaches the resulting image Jacobian or interaction matrix, which relates the camera velocity to the image feature velocities, depends on a priori knowledge of the intrinsic camera parameters.
  • The second contribution is the proposal of a hierarchical control law that exploits the extra degrees of freedom of the UAV-arm system which, in contrast to their previous solution [23], uses a less restrictive control law that only actuates on the components of the secondary tasks that do not conflict directly with tasks higher up in the hierarchy.
  • The next section presents their uncalibrated approach to visual servo.

II. UNCALIBRATED IMAGE-BASED VISUAL SERVOING

  • Drawing inspiration from the UPnP algorithm [25], the authors describe in the following subsection a method to solve for the camera pose and focal length using a reference system attached to the target object.
  • The method is extended in Sec. II-B to compute a calibration-free image Jacobian for their servo task, and in Sec. II-C to compute the desired control law.

A. Uncalibrated PnP

  • 3D target features are parameterized with their barycentric coordinates, and the basis of these coordinates is used to define a set of control points.
  • A least squares solution for the control point coordinates, up to scale, is given by the null eigenvector of a linear system made up of all 2D-to-3D perspective projection relations between the target points.
  • The terms aij are the barycentric coordinates of the i-th target feature which are constant regardless of the location of the camera reference frame, and α is their unknown focal length.
  • In the noise-free case, $M^\top M$ is only rank deficient by one, but when image noise is severe it might lose rank, and a more accurate solution can be found as a linear combination of the basis of its null space.
  • It is sufficient for their purposes to consider only the least squares approximation; that is, to compute the solution only using the eigenvector associated to the smallest eigenvalue.

A. Coordinate Systems

  • Consider the quadrotor-arm system equipped with a camera mounted at the arm's end-effector, as shown in Fig. 1.
  • The position of the camera (c) with respect to the target frame, expressed as a homogeneous transform $T^w_c$, can be computed by integrating the camera velocities obtained from the uncalibrated visual servo approach presented in the previous section.
  • At the high level of control, a quadrotor is an underactuated vehicle with only 4 DoF, namely the linear velocities plus the yaw angular velocity ($\nu_{qx}, \nu_{qy}, \nu_{qz}, \omega_{qz}$) acting on the body frame.
  • At the low level, the attitude controller stabilizes the quadrotor body horizontally.
  • With the arm base frame coincident with the quadrotor body frame, the relation between the quadrotor body and camera frames is $T^b_c = T^b_t(q_a)\,T^t_c$, with $T^b_t(q_a)$ the arm kinematics and $T^t_c$ the tool-camera transform.

B. Robot Kinematics

  • Combining Eqs. 9 and 10, the authors relate the desired high-level control velocities to their visual servo task, now termed $\sigma_S$: $J_{qa}\,v_{qa} = -\Lambda_S \underbrace{J^+_{vs}\,e}_{\sigma_S}$ (Eq. 13). Unfortunately, as said before, the quadrotor is an underactuated vehicle.
  • So, to remove the non-controllable variables from the control command, their contribution to the image error can be isolated from that of the controllable ones by extracting the columns of $J_{qa}$ and the rows of $v_{qa}$ corresponding to $\omega_{qx}$ and $\omega_{qy}$, reading these values from the platform gyroscopes, and subtracting them from the camera velocity [26].

C. Motion Distribution

  • In order to penalize the motion of the quadrotor vs. the arm, to account for their different motion capabilities, the authors define a weighted norm of the whole velocity vector, $\|\dot q\|_W = \sqrt{\dot q^\top W \dot q}$.
  • Large movements should be achieved by the quadrotor, whereas precise movements should be devoted to the robotic arm, due to its dexterity, when the platform is close to the target.
  • The blocks of W weight the velocity components of the arm and the quadrotor differently, favoring quadrotor motion when the distance to the target $d > \Delta_W$, while for distances $d < \delta_W$ the quadrotor is slowed down and the arm is commanded to accommodate the precise movements.

A. Hierarchical Task Composition

  • Exploiting this redundancy, the authors can achieve additional tasks acting on the null space of the quadrotor-arm Jacobian [29], while preserving the primary task.
  • These tasks can be used to reconfigure the robot structure without changing the position and orientation of the arm end-effector.
  • Multiple secondary tasks can be arranged in a hierarchy and, to avoid conservative stability conditions [31], the augmented inverse-based projections method is considered here [21].
  • Lower priority tasks are projected not only onto the null space of the task immediately above in the hierarchy, but onto the null space of an augmented Jacobian comprising all higher priority tasks.
  • In Section III-B the authors showed how to compute a visual servo control law that takes into account the uncontrollable state variables.

B. Stability analysis

  • To assess the stability of each i-th individual task, the authors use Lyapunov analysis by considering the positive definite candidate Lyapunov function $\mathcal{L} = \frac{1}{2}\|\sigma_i(t)\|^2$.
  • Notice how the secondary task does not affect the dynamics of the main task thanks to the null space projector, hence the stability of the main task is again achieved.
  • The previous stability analysis can be straightforwardly extended to the general case of η subtasks.

C. Task Order

  • And lower in the hierarchy, the alignment of the center of gravity of the UAM (G), and a technique to stay away from the arm's joint limits (L).
  • Notice that the safety task Jacobian pseudo-inverse $J^\#_I$ is also weighted, and that in the null space projectors $N_{I|S}$ and $N_{I|S|G}$ from Eq. 31 the involved pseudo-inverses do not need to be weighted, because the center of gravity alignment and the joint limits avoidance involve only arm movements and should also be accomplished during flight.
  • The authors now give more detailed descriptions of task Jacobians and task errors involved.

D. Collision Avoidance

  • The most important task during a mission is to preserve flight safety.
  • When a rotor operates near an obstacle, different aerodynamic effects appear, such as the so-called "ground" or "ceiling" effects, which can lead to an accident.
  • Hence, to avoid them, the authors propose a task with the highest priority to maintain a safety distance to obstacles by defining a safety sphere around the flying platform, and comparing the Euclidean distance to the obstacle ($d_o$) with the sphere radius ($r_I$); a sketch of this activation logic follows this list.
  • Note that this corresponds to a proportional control law, although integral or derivative terms could also be considered.
  • This is usually called joint clamping (JC).
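To make the activation logic concrete, here is a minimal Python sketch of such a highest-priority safety task; the function name, the scalar gain, and the convention of returning None when inactive are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def safety_task(p_obstacle, p_body, r_inflation, gain=1.0):
    """Highest-priority safety task (sketch): keep the Euclidean distance
    d_o to the obstacle above the safety-sphere radius r_I."""
    diff = p_body - p_obstacle
    d_o = np.linalg.norm(diff)
    if d_o >= r_inflation:
        return None                  # sphere not violated: task inactive
    # Proportional law pushing the platform back onto the safety sphere;
    # integral or derivative terms could be added, as the text notes.
    return gain * (r_inflation - d_o) * diff / d_o
```

When the task is inactive, all DoF remain available to the lower-priority tasks; when it activates, it takes precedence over the servoing task in the hierarchy.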

E. Center of Gravity

  • If the arm and quadrotor center of gravity (CoG) are not vertically aligned, the motion of the arm produces an undesired torque on the quadrotor base that perturbs the system attitude and position.
  • This effect can be mitigated by minimizing the distance between the arm CoG and the vertical line of the quadrotor gravity vector.
  • The position of the arm CoG $p^b_G$ is a function of the arm joint configuration, defined as $p^b_G = \sum_{i=1}^{\nu} m_i\, p^b_{G_i} \big/ \sum_{i=1}^{\nu} m_i$ (Eq. 37), where $m_i$ and $p^b_{G_i}$ are the mass and the position of the CoG of link i (a code sketch follows this list).
  • The authors can compute the arm CoG with respect to the body frame for the sequence of links j to the end-effector with $p^{*b}_{G_j} = R^b_j \sum_{i=j}^{\nu} m_i\, p^b_{G_i} \big/ \sum_{i=j}^{\nu} m_i$ (Eq. 38), where $R^b_j$ is the rotation between link j and the body frame.
  • Notice that all these quantities are a function of the current joint configuration qa.
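As a minimal sketch of Eq. 37, assuming the per-link CoG positions have already been expressed in the body frame via the arm's forward kinematics for the current joint configuration:

```python
import numpy as np

def arm_cog(masses, link_cog_positions):
    """Arm center of gravity in the body frame (Eq. 37): the mass-weighted
    mean of the per-link CoG positions p_Gi^b."""
    m = np.asarray(masses, dtype=float)              # m_i, one per link
    p = np.asarray(link_cog_positions, dtype=float)  # shape (nu, 3)
    return m @ p / m.sum()
```

The alignment task then penalizes the horizontal offset of this point from the vertical line through the quadrotor gravity vector.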

B. UAM system

  • To demonstrate the proposed hierarchical task composition, the authors designed and built a lightweight robotic arm with a joint setting that compensates the possible noise in the quadrotor positioning while hovering, and avoids self-collisions during take-off and landing maneuvers.
  • To address the dynamic effects of the overall system, their cascaded architecture runs two different control loops at very high frequency (1 kHz), one for the arm and one for the attitude of the UAV, and a hierarchical task controller at much lower frequency (the camera frame rate), hence avoiding dynamic coupling between them.
  • Then the authors use the inter-distance constraints to solve for scale and focal length.
  • When only the visual servoing (S) is executed, the time to reach the target is significantly higher than in those cases in which the arm CoG is vertically aligned (S+G and S+G+L).

VI. REAL ROBOT EXPERIMENTS

  • The authors conducted a series of experiments with missions similar to those shown in simulations, i.e. autonomously taking off and flying to a location in which the target appears in the field of view of the camera, then turning on the hierarchical task controller to servo the system towards a desired camera pose, and finally autonomously landing the system.
  • The last task is designed to favor a desired arm configuration; it can be used to push the joints away from singularities and potentially increase maneuverability, although this does not imply that the subtask can always be fulfilled.

VII. CONCLUSIONS

  • The authors have presented an uncalibrated image-based visual servo scheme for manipulation UAVs.
  • The authors have presented a control law to achieve not only the visual servoing but also other tasks taking into account their specific priorities.
  • Moreover, the presented control law only requires independence of the tasks with respect to the uncontrollable variables to guarantee exponential stability of the system.
  • The technique is demonstrated using Matlab and ROS in both simulation and a real UAM.
  • The authors can think of two avenues for further research.




Uncalibrated Visual Servo for Unmanned Aerial Manipulation
Angel Santamaria-Navarro, Patrick Grosch, Vincenzo Lippiello, Joan Solà and Juan Andrade-Cetto
Abstract—This paper addresses the problem of autonomously servoing an unmanned redundant aerial manipulator using computer vision. The over-actuation of the system is exploited by means of a hierarchical control law which allows prioritizing several tasks during flight. We propose a safety-related primary task to avoid possible collisions. As a secondary task we present an uncalibrated image-based visual servo strategy to drive the arm end-effector to a desired position and orientation using a camera attached to it. In contrast to previous visual-servo approaches, a known value of the camera focal length is not strictly required. To further improve flight behavior we hierarchically add one task to reduce dynamic effects by vertically aligning the arm center of gravity to the multirotor gravitational vector, and another one that keeps the arm close to a desired configuration of high manipulability while avoiding arm joint limits. The performance of the hierarchical control law, with and without activation of each of the tasks, is shown in simulations and in real experiments, confirming the viability of such a prioritized control scheme for aerial manipulation.

A. Santamaria-Navarro, P. Grosch, J. Solà and J. Andrade-Cetto are with the Institut de Robòtica i Informàtica Industrial, CSIC-UPC, Llorens Artigas 4-6, Barcelona 08028, Spain, e-mail: {asantamaria, pgrosch, jsola, cetto}@iri.upc.edu. V. Lippiello is with Università degli Studi di Napoli Federico II, Via Claudio 21, 80125 Napoli, Italy, e-mail: lippiello@unina.it.

This work has been funded by the EU project AEROARMS H2020-ICT-2014-1-644271 and by the Spanish Ministry of Economy and Competitiveness project ROBINSTRUCT TIN2014-58178-R. The paper has supplementary multimedia material available at http://www.angelsantamaria.eu/multimedia

Fig. 1: The UAM used in the experiments is composed of a 4 DoF quadrotor, commanded at high level by 3 linear velocities and an angular velocity ($\nu_x$, $\nu_y$, $\nu_z$ and $\omega_z$), and a 6 DoF robotic arm with joints $q_j$, $j = 1...6$; world, camera, tool and body reference frames are indicated by the letters w, c, t and b, respectively.

I. INTRODUCTION

Unmanned aerial vehicles (UAVs), and in particular multirotor systems, have substantially gained popularity in recent years, motivated by their significant increase in maneuverability, together with a decrease in weight and cost [1]. Until recently, UAVs were not usually required to interact physically with the environment; however, this trend is set to change. Some examples are the ARCAS, AEROARMS and AEROWORKS EU funded projects, which aim to develop UAV systems with advanced manipulation capabilities for autonomous industrial inspection and repair tasks, such as the UAM manipulator Kinton from the ARCAS project shown in Fig. 1. Physical interaction with the environment calls for positioning accuracy at the centimeter level, which in GPS-denied environments is often difficult to achieve. For indoor UAV systems, accurate localization is usually obtained from infrared multi-camera devices, like Vicon or Optitrack. However, these devices are not suited for outdoor environments and other means should be used, such as visual servoing.

Vision-based robot control systems are usually classified in three groups: position-based visual servo (PBVS), image-based visual servo (IBVS), and hybrid control systems [2], [3]. In PBVS, the geometric model of the target is used in conjunction with image features to estimate the pose of the target with respect to the camera frame. The control law is designed to reduce such pose error in pose space and, in consequence, the target can easily be lost in the image during the servo loop. In IBVS, on the other hand, both the error and the control law are expressed in the image space, minimizing the error between observed and desired image feature coordinates. As a consequence, IBVS schemes do not need any a priori knowledge of the 3D structure of the observed scene. In addition, IBVS is more robust than PBVS with respect to uncertainties and disturbances affecting the model of the robot, as well as the calibration of the camera [4], [5]. Hybrid methods, also called 2-1/2-D visual servo [6], combine IBVS and PBVS to estimate partial camera displacements at each iteration of the control law, minimizing a functional of both.

In all image-based and hybrid approaches the resulting image Jacobian or interaction matrix, which relates the camera velocity to the image feature velocities, depends on a priori knowledge of the intrinsic camera parameters. Although image-based methods, and by extension some hybrid approaches, have shown some robustness to errors in these parameters, they usually break down at error levels larger than 10% [5]. In contrast, our method indirectly estimates the focal length online which, as shown in the experiments section, allows it to withstand calibration errors of up to 20%.

To do away with this dependence, one could optimize for the parameters in the image Jacobian whilst the error in the image plane is being minimized. This is done, for instance, using Gauss-Newton to minimize the squared image error and non-linear least squares optimization for the image Jacobian [7]; using weighted recursive least squares, not to obtain the true parameters, but instead an approximation that

still guarantees asymptotic stability of the control law in the sense of Lyapunov [8], [9]; using k-nearest neighbor regression to store previously estimated local models or previous movements, and estimating the Jacobian using local least squares [10]; or building a secant model using a population of the previous iterates [11]. To provide robustness to outliers in the computation of the Jacobian, [12] proposes the use of an M-estimator.
In this paper we extend our prior work on uncalibrated image-based visual servo (UIBVS) [13], which was demonstrated only in simulation, to a real implementation for the case of aerial manipulation. UIBVS makes mild assumptions about the principal point and skew values of the camera, and does not require prior knowledge of the focal length. Instead, in our method, the camera focal length is iteratively estimated within the control loop. Independence from the focal length's true value makes the system robust to noise and to unexpected large variations of this parameter (e.g., poor initialization or an unaccounted zoom change).

Multirotors, and in particular quadrotors such as the one used in this work, are underactuated platforms. That is, they can change their torque load and thrust/lift by altering the velocity of the propellers, with only four degrees of freedom (DoF): one for the thrust and three for the torques. But, as shown in this paper, the attachment of a manipulator arm to the base of the robot can be seen as a strategy to alleviate underactuation, allowing unmanned aerial manipulators (UAMs) to perform complex tasks.

In [14] a vision-based method to guide a UAM with a three DoF arm is described. To cope with underactuation of the aerial platform, roll and pitch motion compensation is moved to the image processing part, requiring projective transformations. Therefore, errors in computing the arm kinematics are coupled with the image-based control law, and the scale (i.e. camera-object distance) cannot be directly measured. Flying with a suspended load is a challenging task and it is essential to have the ability to minimize the undesired effects of the arm on the flying system [15]. Among these effects, there is the change of the center of mass during flight, which can be solved by designing a low-level attitude controller such as a Cartesian impedance controller [16], or an adaptive controller. Moreover, a desired end-effector pose might require a non-horizontal robot configuration that the low-level controller would try to compensate, changing in turn the arm end-effector position. In this vein, [17] designs a controller exploiting the whole system model. However, flight stability is preserved by restricting the arm movements to those not jeopardizing UAM integrity. To cope with these problems, parallel robots are analyzed in [18] and [19]. The main advantage they offer is the torque reduction at the platform base. However, they are limited in workspace and are difficult to handle due to their highly nonlinear motion models.

The redundancy of quadrotor-arm systems, in the form of extra DoF, can be exploited to develop a low priority stabilizing task or to optimize some given quality indices, e.g. manipulability, joint limits, etc. [20], [21]. In [22], an image-based control law is presented that explicitly takes into account the system redundancy and the underactuation of the vehicle base. The camera is attached to the aerial platform and the positions of both the arm end-effector and the target are projected onto the image plane in order to perform an image-based error decrease, which creates a dependency on the precision of the odometry estimator that is rarely achieved in a real scenario without motion capture systems. Moreover, the proposed control scheme is only validated in simulation.

In this work, we exploit the DoF redundancy of the overall system not only to achieve the desired visual servo task, but to do so whilst also attaining other tasks during the mission. We presented in [23] a close approach consisting of a hybrid servoing scheme. In contrast to [23], which uses a combination of classical PBVS and IBVS, in this article we present a fully vision-based self-calibrated scheme that can handle poorly calibrated cameras. Moreover, we attach a lightweight serial arm to a quadrotor with a camera at its end-effector, see Fig. 1, instead of allocating it in the platform frame.

We present a new safety task intended for collision avoidance, designed with the highest priority. Our servo task is considered second in the hierarchy, with two low priority tasks: one to vertically align the arm and platform centers of gravity and another to avoid arm joint limits. In contrast to [23], we combine the tasks hierarchically in a less restrictive manner, minimizing secondary task reconstruction only for those components not in conflict with the primary task. This strategy is known to achieve possibly less accurate secondary task reconstruction, but with the advantage of decoupling algorithmic singularities between tasks [24].

Although hierarchical task composition techniques are well known for redundant manipulators, their use in aerial manipulation is novel. Specifically, the underactuation of the flying vehicle has critical effects on mission achievement, and here we show how the non-controllable DoF must be considered in the task designs. While the control law presented in [23] requires orthogonal tasks to guarantee stability of the system, in our case only independence with respect to the non-controllable DoF is required. We validate the use of this task hierarchy in simulations and in extensive real experiments, using our UIBVS scheme to track the target, and also with the aid of an external positioning system.

To summarize, the main contributions of the paper are twofold. On the one hand, we demonstrate now in real experiments (on-board, and in real time) the proposed uncalibrated image-based servo law, which was previously only shown in simulation in [13]. The second contribution is the proposal of a hierarchical control law that exploits the extra degrees of freedom of the UAV-arm system and which, in contrast to our previous solution [23], uses a less restrictive control law that only actuates on the components of the secondary tasks that do not conflict directly with tasks higher up in the hierarchy.

The remainder of this article is structured as follows. The next section presents our uncalibrated approach to visual servo. Section III describes the kinematics of our UAM and Section IV contains the proposed task priority controller and task definitions. Simulations and experimental results are presented in Sections V and VI. Finally, conclusions are given in Section VII.

II. UNCALIBRATED IMAGE-BASED VISUAL SERVOING
Drawing inspiration from the UPnP algorithm [25], we describe in the following subsection a method to solve for the camera pose and focal length using a reference system attached to the target object. The method is extended in Sec. II-B to compute a calibration-free image Jacobian for our servo task, and in Sec. II-C to compute the desired control law.
A. Uncalibrated PnP
3D target features are parameterized with their barycentric coordinates, and the basis of these coordinates is used to define a set of control points. Computing the pose of the object with respect to the camera resorts to computing the location of these control points with respect to the camera frame. A least squares solution for the control point coordinates, up to scale, is given by the null eigenvector of a linear system made up of all 2D-to-3D perspective projection relations between the target points. Given the fact that distances between control points must be preserved, these distance constraints can be used in a second least squares computation to solve for scale and focal length. More explicitly, the perspective projection equations for each target feature become

$$\sum_{j=1}^{4} \left( a_{ij}\, x_j + a_{ij}\,(u_0 - u_i)\, \frac{z_j}{\alpha} \right) = 0 \quad (1a)$$
$$\sum_{j=1}^{4} \left( a_{ij}\, y_j + a_{ij}\,(v_0 - v_i)\, \frac{z_j}{\alpha} \right) = 0, \quad (1b)$$

where $s_i = [u_i, v_i]^\top$ are the image coordinates of the target feature $i$, and $c_j = [x_j, y_j, z_j]^\top$ are the 3D coordinates of the j-th control point in the camera frame. The terms $a_{ij}$ are the barycentric coordinates of the i-th target feature, which are constant regardless of the location of the camera reference frame, and $\alpha$ is our unknown focal length.

These equations can be jointly expressed for n 2D-3D correspondences as a linear system

$$\mathbf{M}\,\mathbf{x} = 0, \quad (2)$$

where M is a $2n \times 12$ matrix made of the coefficients $a_{ij}$, the 2D points $s_i$ and the principal point, and x is our vector of 12 unknowns containing both the 3D coordinates of the control points in the camera reference frame and the camera focal length, dividing the z terms, $\mathbf{x} = [x_1, y_1, z_1/\alpha, \ldots, x_4, y_4, z_4/\alpha]^\top$. Its solution lies in the null space of M, and can be computed as a scaled product of the null eigenvector of $\mathbf{M}^\top\mathbf{M}$ via singular value decomposition,

$$\mathbf{x} = \beta\,\mathbf{v}, \quad (3)$$

the scale $\beta$ becoming a new unknown. In the noise-free case, $\mathbf{M}^\top\mathbf{M}$ is only rank deficient by one, but when image noise is severe it might lose rank, and a more accurate solution can be found as a linear combination of the basis of its null space. In this work we are not interested in recovering an accurate camera pose, but in minimizing the projection error within a servo task. It is sufficient for our purposes to consider only the least squares approximation; that is, to compute the solution only using the eigenvector associated to the smallest eigenvalue.

To solve for $\beta$ we add constraints that preserve the distance between control points, of the form $\|c_j - c_{j'}\|^2 = d^2_{jj'}$, where $d_{jj'}$ is the known distance between control points $c_j$ and $c_{j'}$ in the world coordinate system. Substituting x in these six distance constraints, we obtain a system of the form $\mathbf{L}\,\mathbf{b} = \mathbf{d}$, where $\mathbf{b} = [\beta^2, \alpha^2\beta^2]^\top$, L is a $6 \times 2$ matrix built from the known elements of v, and d is the 6-vector of squared distances between the control points. We solve this overdetermined linearized system using least squares and estimate the magnitudes of $\alpha$ and $\beta$ by back substitution,

$$\alpha = \sqrt{\frac{|b_2|}{|b_1|}}, \qquad \beta = \sqrt{b_1}. \quad (4)$$
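The following Python sketch illustrates this second least squares step (Eq. 4), assuming the null eigenvector v of $\mathbf{M}^\top\mathbf{M}$ has already been computed; the variable names and the ordering of the six pairwise constraints are illustrative.

```python
import numpy as np

def solve_scale_and_focal(v, ctrl_pts_world):
    """Recover focal length alpha and scale beta (Eq. 4) from the null
    eigenvector v, which stacks [x_j, y_j, z_j/alpha] for 4 control points."""
    v = v.reshape(4, 3)
    rows, rhs = [], []
    for j in range(4):
        for k in range(j + 1, 4):            # the six distance constraints
            dxy = v[j, :2] - v[k, :2]        # scales with beta^2
            dz = v[j, 2] - v[k, 2]           # scales with (alpha*beta)^2
            rows.append([dxy @ dxy, dz * dz])
            d = ctrl_pts_world[j] - ctrl_pts_world[k]
            rhs.append(d @ d)                # known squared distance d_jj'^2
    b, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return np.sqrt(abs(b[1]) / abs(b[0])), np.sqrt(abs(b[0]))   # alpha, beta
```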
B. Calibration-free Image Jacobian
As the camera moves, the velocity of each target control point $c_j$ in camera coordinates can be related to the camera spatial velocity $(\mathbf{t}, \boldsymbol\omega)$ with $\dot c_j = -\mathbf{t} - \boldsymbol\omega \times c_j$, which, combined with Eq. 3, gives

$$\begin{bmatrix} \dot x_j \\ \dot y_j \\ \dot z_j \end{bmatrix} = \begin{bmatrix} -t_x - \omega_y\,\alpha\beta v_z + \omega_z\,\beta v_y \\ -t_y - \omega_z\,\beta v_x + \omega_x\,\alpha\beta v_z \\ -t_z - \omega_x\,\beta v_y + \omega_y\,\beta v_x \end{bmatrix}, \quad (5)$$

where $v_x$, $v_y$, and $v_z$ are the x, y, and z components of the eigenvector v related to the control point $c_j$, and whose image projection and its time derivative are given by

$$\begin{bmatrix} u_j \\ v_j \end{bmatrix} = \begin{bmatrix} \alpha\,\frac{x_j}{z_j} + u_0 \\ \alpha\,\frac{y_j}{z_j} + v_0 \end{bmatrix}, \qquad \begin{bmatrix} \dot u_j \\ \dot v_j \end{bmatrix} = \alpha \begin{bmatrix} \frac{\dot x_j\, z_j - x_j\, \dot z_j}{z_j^2} \\ \frac{\dot y_j\, z_j - y_j\, \dot z_j}{z_j^2} \end{bmatrix}. \quad (6)$$

Substituting Eqs. 3 and 5 in Eq. 6 we have

$$\dot u_j = \frac{-t_x - \alpha\beta v_z\,\omega_y + \beta v_y\,\omega_z}{\beta v_z} - \frac{v_x\left(-t_z - \beta v_y\,\omega_x + \beta v_x\,\omega_y\right)}{\alpha\beta v_z^2} \quad (7a)$$
$$\dot v_j = \frac{-t_y + \alpha\beta v_z\,\omega_x - \beta v_x\,\omega_z}{\beta v_z} - \frac{v_y\left(-t_z - \beta v_y\,\omega_x + \beta v_x\,\omega_y\right)}{\alpha\beta v_z^2}, \quad (7b)$$

which can be rewritten as $\dot s_j = J_j\, v_c$, with $\dot s_j = [\dot u_j, \dot v_j]^\top$ the image velocities of control point j, and $v_c = [\mathbf{t}^\top, \boldsymbol\omega^\top]^\top$. $J_j$ is our desired calibration-free image Jacobian for the j-th control point, and takes the form

$$J_j = \begin{bmatrix} -\frac{1}{\beta v_z} & 0 & \frac{v_x}{\alpha\beta v_z^2} & \frac{v_x v_y}{\alpha v_z^2} & -\frac{v_x^2 + \alpha^2 v_z^2}{\alpha v_z^2} & \frac{v_y}{v_z} \\ 0 & -\frac{1}{\beta v_z} & \frac{v_y}{\alpha\beta v_z^2} & \frac{v_y^2 + \alpha^2 v_z^2}{\alpha v_z^2} & -\frac{v_x v_y}{\alpha v_z^2} & -\frac{v_x}{v_z} \end{bmatrix}. \quad (8)$$

Stacking these together, we get the image Jacobian for all control points, $J_{vs} = \left[ J_1^\top \cdots J_4^\top \right]^\top$.
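As an illustration, a direct transcription of Eq. 8 might look as follows; this is a sketch, with names and the one-2x6-block-per-control-point convention assumed rather than taken from the authors' code.

```python
import numpy as np

def jacobian_block(v_ctrl, alpha, beta):
    """Calibration-free image Jacobian J_j (Eq. 8) for one control point,
    where v_ctrl = (vx, vy, vz) is that point's slice of the eigenvector v."""
    vx, vy, vz = v_ctrl
    return np.array([
        [-1/(beta*vz), 0, vx/(alpha*beta*vz**2),
         vx*vy/(alpha*vz**2), -(vx**2 + alpha**2*vz**2)/(alpha*vz**2), vy/vz],
        [0, -1/(beta*vz), vy/(alpha*beta*vz**2),
         (vy**2 + alpha**2*vz**2)/(alpha*vz**2), -vx*vy/(alpha*vz**2), -vx/vz],
    ])
```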
C. Control Law
The aim of our image-based control scheme is to minimize the error $e(t) = s(t) - s^*$, where $s(t)$ are the current image coordinates of the set of target features, and $s^*$ are their final desired positions in the image plane, computed with our initial value for $\alpha$. If we select s to be the projection of the control points c, and disregarding the time variation of $\alpha$, and consequently of $s^*$, the derivative of the error becomes $\dot e = \dot s$, and, for a desired exponential decoupled error decrease $\dot e = -\Lambda_S\, e$, we have a desired camera velocity

$$v_c = -\Lambda_S\, J^+_{vs}\, e, \quad (9)$$

where $\Lambda_S$ is a $6 \times 6$ positive definite gain matrix and $J^+_{vs} = (J^\top_{vs} J_{vs})^{-1} J^\top_{vs}$ is the left Moore-Penrose pseudoinverse of $J_{vs}$.
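Continuing the sketch above, the control law of Eq. 9 then amounts to stacking the four blocks and applying the pseudoinverse; the scalar gain stands in for the $6 \times 6$ matrix $\Lambda_S$.

```python
import numpy as np

def servo_velocity(v, alpha, beta, error, gain=1.0):
    """Desired camera velocity v_c = -Lambda_S J_vs^+ e (Eq. 9). `error`
    stacks the 8 image-plane residuals of the four control points."""
    blocks = [jacobian_block(v.reshape(4, 3)[j], alpha, beta) for j in range(4)]
    J_vs = np.vstack(blocks)                         # 8x6 stacked Jacobian
    return -gain * np.linalg.pinv(J_vs) @ error      # 6-vector (t, omega)
```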
III. ROBOT MODEL
A. Coordinate Systems
Consider the quadrotor-arm system equipped with a camera mounted at the arm's end-effector, as shown in Fig. 1. Without loss of generality, we consider the world frame (w) to be located at the target. With this, the position of the camera (c) with respect to the target frame, expressed as a homogeneous transform $T^w_c$, can be computed by integrating the camera velocities obtained from the uncalibrated visual servo approach presented in the previous section.

At the high level of control, a quadrotor is an underactuated vehicle with only 4 DoF, namely the linear velocities plus the yaw angular velocity $(\nu_{qx}, \nu_{qy}, \nu_{qz}, \omega_{qz})$ acting on the body frame. At the low level, the attitude controller stabilizes the quadrotor body horizontally. Now, let $q_a = [q_1, \ldots, q_m]^\top$ be the joint vector of the robotic arm attached to the UAM. With the arm base frame coincident with the quadrotor body frame, the relation between the quadrotor body and camera frames is $T^b_c = T^b_t(q_a)\, T^t_c$, with $T^b_t(q_a)$ the arm kinematics and $T^t_c$ the tool-camera transform. Moreover, the pose of the quadrotor with respect to the target is determined by the transform $T^b_w = T^b_c\, (T^w_c)^{-1}$.
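The frame bookkeeping of this subsection amounts to two homogeneous-transform products; a small sketch, assuming 4x4 NumPy arrays as the representation:

```python
import numpy as np

def body_from_world(T_b_t, T_t_c, T_w_c):
    """T_b_c = T_b_t(q_a) T_t_c chains the arm kinematics with the
    tool-camera transform; the platform pose w.r.t. the target is then
    T_b_w = T_b_c (T_w_c)^-1."""
    T_b_c = T_b_t @ T_t_c
    return T_b_c @ np.linalg.inv(T_w_c)
```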
B. Robot Kinematics
We are now in a position to define a joint quadrotor-arm Jacobian that relates the local translational and angular velocities of the platform and those of the m arm joints, $v_{qa} = (\nu_{qx}, \nu_{qy}, \nu_{qz}, \omega_{qx}, \omega_{qy}, \omega_{qz}, \dot q_1, \ldots, \dot q_m)$, to the desired camera velocities computed from the visual servo,

$$v_c = J_{qa}\, v_{qa}, \quad (10)$$

with $J_{qa}$ the Jacobian matrix of the whole robot.

This velocity vector in the camera frame can be expressed as a sum of the velocities contributed by the arm kinematics and by the quadrotor movement, $v_c = v^c_a + v^c_q$ (superscripts indicate the reference frame, to make it clear to the reader), where $v^c_a$ is obtained with the arm Jacobian

$$v^c_a = \begin{bmatrix} R^c_b & 0 \\ 0 & R^c_b \end{bmatrix} J_a\, \dot q_a = \bar R^c_b\, J_a\, \dot q_a, \quad (11)$$

with $R^c_b$ the rotation matrix of the body frame with respect to the camera frame, and where $v^c_q$ corresponds to the velocity of the quadrotor expressed in the camera frame,

$$v^c_q = \bar R^c_b \begin{bmatrix} \nu^b_q + \omega^b_q \times r^b_c \\ \omega^b_q \end{bmatrix} = \begin{bmatrix} R^c_b & -R^c_b\,[r^b_c]_\times \\ 0 & R^c_b \end{bmatrix} v^b_q, \quad (12)$$

with $r^b_c(q_a)$ the distance vector between the body and camera frames, and $v^b_q = [\nu_{qx}, \nu_{qy}, \nu_{qz}, \omega_{qx}, \omega_{qy}, \omega_{qz}]^\top$ the velocity vector of the quadrotor in the body frame.

Combining Eqs. 9 and 10 we can relate the desired high-level control velocities with our visual servo task, which we now term $\sigma_S$,

$$J_{qa}\, v_{qa} = -\Lambda_S \underbrace{J^+_{vs}\, e}_{\sigma_S}. \quad (13)$$

Unfortunately, as said before, the quadrotor is an underactuated vehicle. So, to remove the non-controllable variables from the control command, their contribution to the image error can be isolated from that of the controllable ones by extracting the columns of $J_{qa}$ and the rows of $v_{qa}$ corresponding to $\omega_{qx}$ and $\omega_{qy}$, reading these values from the platform gyroscopes, and subtracting them from the camera velocity [26],

$$J_S\, \dot q + \bar J_S\, \varpi = -\Lambda_S\, \sigma_S, \quad (14)$$

where $\varpi = [\omega_{qx}, \omega_{qy}]^\top$, $\bar J_S$ is the Jacobian formed by the columns of $J_{qa}$ corresponding to $\omega_{qx}$ and $\omega_{qy}$, and $J_S$ is the Jacobian formed by all other columns of $J_{qa}$, corresponding to the actuated variables $\dot q = [\nu_{qx}, \nu_{qy}, \nu_{qz}, \omega_{qz}, \dot q_1, \ldots, \dot q_m]^\top$. Rearranging terms,

$$J_S\, \dot q = \underbrace{-\Lambda_S\, \sigma_S - \bar J_S\, \varpi}_{\xi}, \quad (15)$$

and with this, our main task velocity corresponding to the visual servo is

$$\dot q = J^+_S\, \xi, \quad (16)$$

where, with 6 linearly independent rows and 4 + m > 6 columns, $J^+_S$ is computed with the right Moore-Penrose pseudoinverse $J^\top_S (J_S J^\top_S)^{-1}$.
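A sketch of Eqs. 14-16, under the assumption that the columns of $J_{qa}$ are ordered with the six platform DoF first; the roll and pitch rates are read from the gyroscopes rather than commanded.

```python
import numpy as np

def servo_task_velocity(J_qa, sigma_S, gyro_wx_wy, gain=1.0):
    """Main-task velocities (Eqs. 14-16) for the underactuated platform:
    J_qa is the 6x(6+m) whole-robot Jacobian, sigma_S the visual servo task,
    gyro_wx_wy the measured (omega_qx, omega_qy)."""
    uncontrolled = [3, 4]                            # omega_qx, omega_qy columns
    controlled = [i for i in range(J_qa.shape[1]) if i not in uncontrolled]
    J_bar = J_qa[:, uncontrolled]                    # bar J_S in Eq. 14
    J_S = J_qa[:, controlled]
    xi = -gain * sigma_S - J_bar @ gyro_wx_wy        # Eq. 15
    # Right pseudoinverse: 6 independent rows, 4+m > 6 columns (Eq. 16).
    return J_S.T @ np.linalg.solve(J_S @ J_S.T, xi)  # (vq..., wqz, qdot_1..m)
```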
C. Motion Distribution
In order to penalize the motion of the quadrotor vs. the arm, to account for their different motion capabilities, we can define a weighted norm of the whole velocity vector, $\|\dot q\|_W = \sqrt{\dot q^\top W \dot q}$, as in [27], and use a weighted task Jacobian to solve for the weighted controls

$$\dot q_W = W^{-1/2}\,(J_S\, W^{-1/2})^+\, \xi = J^\#_S\, \xi, \quad (17)$$

with

$$J^\#_S = W^{-1} J^\top_S\, (J_S\, W^{-1} J^\top_S)^{-1} \quad (18)$$

the weighted generalized Moore-Penrose pseudoinverse of the servoing Jacobian. With this, large movements should be achieved by the quadrotor, whereas the precise movements should be devoted to the robotic arm, due to its dexterity, when the platform is close to the target. To achieve this behavior, we define a time-varying diagonal weight matrix, as proposed in [28], $W(d) = \mathrm{diag}((1-\gamma)\, I_4,\ \gamma\, I_m)$, with $n = 4 + m$ the number of DoF of the whole UAM (4 for the quadrotor and m for the arm) and

$$\gamma(d) = \frac{1+\underline\gamma}{2} + \frac{1-\underline\gamma}{2}\, \tanh\!\left( \frac{2\pi\,(d - \delta_W)}{\Delta_W - \delta_W} - \pi \right), \quad (19)$$

where $\gamma \in [\underline\gamma, 1]$, and $\delta_W$ and $\Delta_W$, with $\Delta_W > \delta_W$, are the distance thresholds corresponding to $\gamma = \underline\gamma$ and $\gamma = 1$, respectively. The blocks of W weight the velocity components of the arm and the quadrotor differently, favoring motion of the quadrotor when the distance to the target $d > \Delta_W$, while for distances $d < \delta_W$ the quadrotor is slowed down and the arm is commanded to accommodate the precise movements.
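A sketch of the motion-distribution weighting (Eqs. 18-19); the block sizes (4 quadrotor DoF, m arm DoF) follow the text, while the function names are illustrative.

```python
import numpy as np

def motion_distribution_weights(d, delta_w, Delta_w, gamma_min, m=6):
    """Time-varying weight matrix W(d) built from gamma(d) of Eq. 19:
    far from the target (d > Delta_w) the quadrotor moves, close to it
    (d < delta_w) the arm takes over the precise motion."""
    g = (1 + gamma_min)/2 + (1 - gamma_min)/2 * np.tanh(
        2*np.pi*(d - delta_w)/(Delta_w - delta_w) - np.pi)
    return np.diag(np.concatenate([(1 - g)*np.ones(4), g*np.ones(m)]))

def weighted_pinv(J_S, W):
    """Weighted right pseudoinverse J_S^# of Eq. 18."""
    W_inv = np.linalg.inv(W)
    return W_inv @ J_S.T @ np.linalg.inv(J_S @ W_inv @ J_S.T)
```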

IV. TASK PRIORITY CONTROL
A. Hierarchical Task Composition
Even though the quadrotor itself is underactuated (4 DoF), by attaching a robotic arm with more than 2 DoF we can attain over-actuation ($n = 4 + m$). In our case, $m = 6$. Exploiting this redundancy, we can achieve additional tasks acting on the null space of the quadrotor-arm Jacobian [29], while preserving the primary task. These tasks can be used to reconfigure the robot structure without changing the position and orientation of the arm end-effector. This is usually referred to as internal motion of the arm. One possible way to specify a secondary task is to choose its velocity vector as the gradient of a scalar objective function to optimize [20], [30]. Multiple secondary tasks can be arranged in a hierarchy and, to avoid conservative stability conditions [31], the augmented inverse-based projections method is considered here [21]. In this method, lower priority tasks are not only projected onto the null space of the task immediately above in the hierarchy, but onto the null space of an augmented Jacobian comprising all higher priority tasks.

In Section III-B we showed how to compute a visual servo control law that takes into account the uncontrollable state variables. This is not, however, our main task. We decide to locate higher up in the hierarchy an obstacle avoidance task needed to guarantee system integrity. In a more general sense, we can define any such primary task as a configuration dependent task $\sigma_0 = f_0(x)$. Differentiating it with respect to x, and separating the uncontrollable state variables as in Eq. 14, we have

$$\dot\sigma_0 = \frac{\partial f_0(x)}{\partial x}\, \dot x = J_0\, \dot q_0 + \bar J_0\, \varpi, \quad (20)$$

and again, considering as in Eq. 16 a main task error $\tilde\sigma_0 = \sigma^*_0 - \sigma_0$, to regulate $\sigma_0$ to a desired value $\sigma^*_0$ the control law for the main task becomes

$$\dot q_0 = J^+_0\, (\Lambda_0\, \tilde\sigma_0 - \bar J_0\, \varpi), \quad (21)$$

where, as with Eqs. 15 and 16, $\Lambda_0$ is a positive definite gain matrix and $J^+_0$ is the generalized inverse of $J_0$.

Consider now a secondary lower priority task $\sigma_1 = f_1(x)$ such that

$$\dot\sigma_1 = J_1\, \dot q_1 + \bar J_1\, \varpi, \quad (22)$$

with $\dot q_1 = J^+_1 (\Lambda_1\, \tilde\sigma_1 - \bar J_1\, \varpi)$, and a task composition strategy that minimizes secondary task velocity reconstruction only for those components in Eq. 22 that do not conflict with the primary task [24], namely

$$\dot q = J^+_0\, \Lambda_0\, \tilde\sigma_0 + N_0\, J^+_1\, \Lambda_1\, \tilde\sigma_1 - \bar J_{0|1}\, \varpi, \quad (23)$$

where $N_0 = (I_n - J^+_0 J_0)$ is the null space projector of the primary task and $\bar J_{0|1} = J^+_0 \bar J_0 + N_0 J^+_1 \bar J_1$ is the Jacobian matrix that allows for the compensation of the variation of the uncontrollable states $\varpi$.

This strategy, in contrast to the more restrictive one we presented in [23], might achieve larger constraint-task reconstruction errors than the full least squares secondary task solution in [23], but with the advantage that algorithmic singularities arising from conflicting tasks are decoupled from the singularities of the secondary tasks.

The addition of more tasks in cascade is possible as long as there exist remaining DoF from the concatenation of tasks higher up in the hierarchy. The generalization of Eq. 23 to the case of η prioritized subtasks is

$$\dot q = J^+_0\, \Lambda_0\, \tilde\sigma_0 + \sum_{i=1}^{\eta} N_{0|\dots|i-1}\, J^+_i\, \Lambda_i\, \tilde\sigma_i - \bar J_{0|\dots|\eta}\, \varpi \quad (24)$$

with the recursively-defined compensating matrix

$$\bar J_{0|\dots|\eta} = N_{0|\dots|i-1}\, J^+_i\, \bar J_i + (I - N_{0|\dots|i-1}\, J^+_i\, J_i)\, \bar J_{0|\dots|i-1}, \quad (25)$$

where $N_{0|\dots|i}$ is the projector onto the null space of the augmented Jacobian $J_{0|\dots|i}$ for the i-th subtask, with $i = 0, \ldots, \eta-1$, respectively defined as

$$N_{0|\dots|i} = (I - J^+_{0|\dots|i}\, J_{0|\dots|i}) \quad (26)$$
$$J_{0|\dots|i} = [J^\top_0\ \ldots\ J^\top_i]^\top. \quad (27)$$
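To illustrate Eqs. 24, 26 and 27, the following sketch composes an ordered list of tasks through augmented null-space projections; for brevity it omits the $\bar J\,\varpi$ compensation term and the weighted pseudoinverses introduced later for Eq. 30.

```python
import numpy as np

def hierarchical_velocity(tasks, n):
    """Prioritized composition (Eq. 24, compensation term omitted):
    `tasks` is a list of (J_i, gain_i, err_i) ordered by priority; each
    task is projected onto the null space of the Jacobian augmented from
    all higher-priority tasks (Eqs. 26-27)."""
    q_dot = np.zeros(n)
    J_aug = np.empty((0, n))                 # augmented Jacobian J_{0|...|i}
    N = np.eye(n)                            # null-space projector, N = I first
    for J_i, gain_i, err_i in tasks:
        q_dot += N @ np.linalg.pinv(J_i) @ (gain_i * err_i)
        J_aug = np.vstack([J_aug, J_i])      # Eq. 27
        N = np.eye(n) - np.linalg.pinv(J_aug) @ J_aug   # Eq. 26
    return q_dot
```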
B. Stability analysis
To assess the stability of each i-th individual task, we use Lyapunov analysis by considering the positive definite candidate Lyapunov function $\mathcal L = \frac{1}{2}\|\sigma_i(t)\|^2$ and its derivative $\dot{\mathcal L} = \sigma_i^\top \dot\sigma_i$. Then, for the primary task we can substitute Eq. 21 into Eq. 20, giving $\dot\sigma_0 = \Lambda_0\, \tilde\sigma_0$, which, for a defined main task error $\tilde\sigma_0 = \sigma^*_0 - \sigma_0$ and $\sigma^*_0 = 0$, proves asymptotic stability with $\dot{\mathcal L} = -\sigma_0^\top \Lambda_0\, \sigma_0$.

Similarly, substituting Eq. 23 into Eq. 22, and considering a task error $\tilde\sigma_1 = \sigma^*_1 - \sigma_1$, with $\sigma^*_1 = 0$, the following dynamics for the secondary task is obtained,

$$\dot\sigma_1 = -J_1 J^+_0 \Lambda_0\, \sigma_0 - \Lambda_1\, \sigma_1 + (\bar J_1 - J_1 J^+_0 \bar J_0)\, \varpi, \quad (28)$$

where we used the property $J_1 N_0 J^+_1 = I$. Notice how exponential stability of the secondary task in Eq. 28 can only be guaranteed when the tasks are independent with respect to the uncontrollable states $\varpi$ (i.e. $\bar J_1 - J_1 J^+_0 \bar J_0 = 0$), hence $\dot{\mathcal L} = -\sigma_1^\top J_1 J^+_0 \Lambda_0\, \sigma_0 - \sigma_1^\top \Lambda_1\, \sigma_1$, which is a less stringent condition than the whole task orthogonality $J_1 J^+_0 = 0$ that was needed in [23].

Finally, the dynamics of the system can be written as

$$\begin{bmatrix} \dot\sigma_0 \\ \dot\sigma_1 \end{bmatrix} = \begin{bmatrix} -\Lambda_0 & O \\ -J_1 J^+_0 \Lambda_0 & -\Lambda_1 \end{bmatrix} \begin{bmatrix} \sigma_0 \\ \sigma_1 \end{bmatrix}, \quad (29)$$

which is characterized by a Hurwitz matrix as in [23], guaranteeing the exponential stability of the system. Notice how the secondary task does not affect the dynamics of the main task thanks to the null space projector, hence the stability of the main task is again achieved.

The previous stability analysis can be straightforwardly extended to the general case of η subtasks.
C. Task Order
In this paper we consider the following ordered tasks: a primary safety task (I) considering potential collisions (inflation radius); a secondary task performing visual servoing (S); and, lower in the hierarchy, the alignment of the center of gravity of the UAM (G) and a technique to stay away from the arm's joint limits (L). By denoting with $J_I$, $J_S$, $J_G$ and $J_L$ the Jacobian matrices of the above-mentioned tasks, the desired system velocity can be written as

$$\dot q = J^\#_I\, \tilde\sigma_I + N_I\, J^\#_S\, \Lambda_S\, \tilde\sigma_S + N_{I|S}\, J^+_G\, \tilde\sigma_G + N_{I|S|G}\, J^+_L\, \tilde\sigma_L - \bar J_{I|S|G|L}\, \varpi, \quad (30)$$

where $N_I$, $N_{I|S}$ and $N_{I|S|G}$ are the projectors of the safety, visual servoing and center of gravity tasks,

$$N_I = (I - J^\#_I J_I), \quad N_{I|S} = (I - J^+_{I|S} J_{I|S}), \quad N_{I|S|G} = (I - J^+_{I|S|G} J_{I|S|G}), \quad (31)$$

with $J_{I|S}$ and $J_{I|S|G}$ the augmented Jacobians computed as in Eq. 27.


References

  • S. Hutchinson, G. D. Hager, and P. I. Corke, "A tutorial on visual servo control," IEEE Transactions on Robotics and Automation, vol. 12, no. 5, pp. 651-670, 1996.
  • B. Siciliano and O. Khatib, Eds., Springer Handbook of Robotics. Springer, 2008.
  • F. Chaumette and S. Hutchinson, "Visual servo control, Part I: Basic approaches," IEEE Robotics & Automation Magazine, vol. 13, no. 4, pp. 82-90, 2006.
  • R. Mahony, V. Kumar, and P. Corke, "Multirotor aerial vehicles: Modeling, estimation, and control of quadrotor," IEEE Robotics & Automation Magazine, vol. 19, no. 3, pp. 20-32, 2012.
  • Y. Nakamura, Advanced Robotics: Redundancy and Optimization. Addison-Wesley, 1990.
Frequently Asked Questions (15)
Q1. What are the contributions mentioned in the paper "Uncalibrated visual servo for unmanned aerial manipulation" ?

This paper addresses the problem of autonomously servoing an unmanned redundant aerial manipulator using computer vision. The authors propose a safety-related primary task to avoid possible collisions. As a secondary task, the authors present an uncalibrated image-based visual servo strategy to drive the arm end-effector to a desired position and orientation using a camera attached to it. To further improve flight behavior, the authors hierarchically add one task to reduce dynamic effects by vertically aligning the arm center of gravity to the multirotor gravitational vector, and another one that keeps the arm close to a desired configuration of high manipulability while avoiding arm joint limits.

The authors can think of two avenues for further research. On the one hand, the activation and deactivation of the safety task, as well as a dynamic exchange of task priority roles, can induce some chattering phenomena, which can be avoided by introducing a hysteresis scheme. Secondly, the dimensionality of the subspace associated with each null space projector is a necessary condition to be considered when designing subtasks; however, it might not be sufficient to guarantee the fulfilment of the subtask, and a thorough analytical study of these spaces may be required.

The addition of more tasks in cascade is possible as long as there exist remaining DoF from the concatenation of tasks higher up in the hierarchy. 

The visual servo mission task requires 6 DoF, and the secondary and comfort tasks with lower priority can take advantage of the remaining 4 DoF. 

1) Primary task: Among all other tasks, the one with the highest priority must be the safety task, so as not to compromise the platform integrity.

The gravitational vector alignment task and the joint limits avoidance task require 1 DoF each, being scalar cost functions to minimize (see Eqs. 35 and 43).

The desired task variable is $\sigma^*_L = 0$ (i.e. $\tilde\sigma_L = -\sigma_L$), while the corresponding task Jacobian is $J_L = \begin{bmatrix} 0_{1\times 4} & -2\,(\Lambda_L\,(q_a - q^*_a))^\top \end{bmatrix}$ (Eq. 45). One common choice of $q^*_a$ for the joint limit avoidance is the middle of the joint limit ranges (if this configuration is far from kinematic singularities), $q^*_a = \underline q_a + \frac{1}{2}(\bar q_a - \underline q_a)$.

Finally, the dynamics of the system can be written as $\begin{bmatrix} \dot\sigma_0 \\ \dot\sigma_1 \end{bmatrix} = \begin{bmatrix} -\Lambda_0 & O \\ -J_1 J^+_0 \Lambda_0 & -\Lambda_1 \end{bmatrix} \begin{bmatrix} \sigma_0 \\ \sigma_1 \end{bmatrix}$ (Eq. 29), which is characterized by a Hurwitz matrix as in [23] that guarantees the exponential stability of the system.

This guarantees asymptotic stability of the control law regardless of the target point selection, as long as planar configurations are avoided. 

When the obstacle does not violate the inflation radius, the safety task becomes deactivated and the other subtasks can regain access to the previously blocked DoF. Fig. 3(a) shows how the servoing task is elusive during the first 10 seconds of the simulation when the obstacle is present, but is accomplished afterwards when the obstacle is no longer an impediment to the secondary task. 

$v^c_a = \begin{bmatrix} R^c_b & 0 \\ 0 & R^c_b \end{bmatrix} J_a\, \dot q_a = \bar R^c_b\, J_a\, \dot q_a$ (Eq. 11), with $R^c_b$ the rotation matrix of the body frame with respect to the camera frame, and where $v^c_q$ corresponds to the velocity of the quadrotor expressed in the camera frame, $v^c_q = \bar R^c_b \begin{bmatrix} \nu^b_q + \omega^b_q \times r^b_c \\ \omega^b_q \end{bmatrix} = \begin{bmatrix} R^c_b & -R^c_b\,[r^b_c]_\times \\ 0 & R^c_b \end{bmatrix} v^b_q$ (Eq. 12), with $r^b_c(q_a)$ the distance vector between the body and camera frames, and $v^b_q = [\nu_{qx}, \nu_{qy}, \nu_{qz}, \omega_{qx}, \omega_{qy}, \omega_{qz}]^\top$ the velocity vector of the quadrotor in the body frame.

For the primary task the authors can substitute Eq. 21 into Eq. 20, giving $\dot\sigma_0 = \Lambda_0\, \tilde\sigma_0$, which for a defined main task error $\tilde\sigma_0 = \sigma^*_0 - \sigma_0$ and $\sigma^*_0 = 0$ proves asymptotic stability with $\dot{\mathcal L} = -\sigma_0^\top \Lambda_0\, \sigma_0$.

By denoting with $J_I$, $J_S$, $J_G$ and $J_L$ the Jacobian matrices of the above-mentioned tasks, the desired system velocity can be written as follows: $\dot q = J^\#_I\, \tilde\sigma_I + N_I\, J^\#_S\, \Lambda_S\, \tilde\sigma_S + N_{I|S}\, J^+_G\, \tilde\sigma_G + N_{I|S|G}\, J^+_L\, \tilde\sigma_L - \bar J_{I|S|G|L}\, \varpi$ (Eq. 30), where $N_I$, $N_{I|S}$, $N_{I|S|G}$ are the projectors of the safety, the visual servoing and the center of gravity tasks, which are defined as $N_I = (I - J^\#_I J_I)$ (31a), $N_{I|S} = (I - J^+_{I|S} J_{I|S})$ (31b), $N_{I|S|G} = (I - J^+_{I|S|G} J_{I|S|G})$ (31c), with $J_{I|S}$ and $J_{I|S|G}$ the augmented Jacobians computed as in Eq. 27.

The sum of normalized distances of the position of the i-th joint to its desired configuration is given by $\sum_{i=1}^{m} \left( \frac{q_{a_i} - q^*_{a_i}}{\bar q_{a_i} - \underline q_{a_i}} \right)^2$ (Eq. 42). So their task function is selected as the squared distance of the whole arm joint configuration with respect to the desired one, $\sigma_L = (q_a - q^*_a)^\top \Lambda_L\, (q_a - q^*_a)$ (Eq. 43), where $\bar q_a = [\bar q_{a_1}, \ldots, \bar q_{a_m}]^\top$ and $\underline q_a = [\underline q_{a_1}, \ldots, \underline q_{a_m}]^\top$ are the high and low joint-limit vectors respectively, and $\Lambda_L$ is a diagonal matrix whose diagonal elements are equal to the inverse of the squared joint limit ranges, $\Lambda_L = \mathrm{diag}((\bar q_{a_1} - \underline q_{a_1})^{-2}, \ldots, (\bar q_{a_m} - \underline q_{a_m})^{-2})$ (Eq. 44).

The generalization of Eq. 23 to the case of η prioritized subtasks is $\dot q = J^+_0\, \Lambda_0\, \tilde\sigma_0 + \sum_{i=1}^{\eta} N_{0|\dots|i-1}\, J^+_i\, \Lambda_i\, \tilde\sigma_i - \bar J_{0|\dots|\eta}\, \varpi$ (Eq. 24), with the recursively-defined compensating matrix $\bar J_{0|\dots|\eta} = N_{0|\dots|i-1}\, J^+_i\, \bar J_i + (I - N_{0|\dots|i-1}\, J^+_i\, J_i)\, \bar J_{0|\dots|i-1}$ (Eq. 25).