Path planning in image space for robust visual servoing

Youcef Mezouar, François Chaumette
Proceedings article, Vol. 3, pp. 2759-2764

Path Planning in Image Space for Robust Visual Servoing
Youcef Mezouar, François Chaumette
Youcef.Mezouar@irisa.fr, Francois.Chaumette@irisa.fr
IRISA - INRIA Rennes
Campus de Beaulieu,
35042 Rennes Cedex, France
Abstract
Vision feedback control loop techniques are efficient for a great class of applications, but they come up against difficulties when the initial and desired positions of the camera are distant. In this paper we propose a new approach to resolve these difficulties by planning trajectories in the image. Constraints such that the object remains in the camera field of view can thus be taken into account. Furthermore, using this process, the current measurements always remain close to their desired values, and a control by Image-based Servoing ensures robustness with respect to modeling errors. We apply our method whether the object dimensions are known or not, and whether the calibration parameters of the camera are well or badly estimated. Finally, real-time experimental results using a camera mounted on the end effector of a six-DOF robot are presented.
1 Introduction
Visual servoing is classified into two main approaches [15, 6, 8]. The first one is called Position-based Control (PbC) or 3D visual servoing. In PbC the control error function is computed in the Cartesian space. Image features are extracted from the image, and a perfect model of the target is used to determine its position with respect to the camera frame. The main advantage of this approach is that it controls the camera trajectory directly in Cartesian space. However, there is no control in the image space, and the object may get out of the camera field of view during servoing. Furthermore, it is impossible to analytically demonstrate the stability of the system in the presence of modeling errors, since an analytical characterization of the sensitivity of the pose estimation algorithm with respect to calibration errors and measurement perturbations is not available [2].
The second approach is called Image-based Control (IbC) or 2D visual servoing. In IbC the pose estimation is omitted and the control error function is computed in the image space. The IbC approach does not need a precise calibration and modeling, since a closed-loop scheme is performed. However, the stability is theoretically ensured only in the neighborhood of the desired position. Therefore, if the initial and desired configurations are close, IbC is robust with respect to measurement and modeling errors. Otherwise, that is if the desired and initial positions are distant, the stability is not ensured and the object can get out of the camera field of view [2]. Control laws taking this last constraint into account have been proposed, for example, in [13, 12]. We propose in this paper a more robust approach.
A third approach, described in [11], is called 2 1/2 D visual servoing. In this case the control error function is computed in part in the Cartesian space and in part in the 2D image space. A homography, computed at each iteration, is used to extract the Cartesian part of the error function. Hence, this method does not need a model of the target. Contrary to the previous approaches, it is possible to obtain analytical results about stability with respect to modeling and calibration errors. However, the main drawback of 2 1/2 D visual servoing is its relative sensitivity to measurement perturbations. Furthermore, keeping all of the object in the camera field of view is not obvious.
In this paper, a new method, robust and stable even if the initial and desired positions are distant, is described. The method consists in planning the trajectories of a set of points lying on the target in image space, and then tracking these trajectories by 2D visual servoing (see Figure 1). Using this process, the current measurements always remain close to their desired values. Thus the good behavior of IbC in such configurations can be exploited. Moreover, it is possible to ensure that the object will always remain in the camera field of view by enforcing this constraint on the trajectories.
There are few papers dealing with path planning in image space. In [7] a trajectory generator using a stereo system is proposed and applied to obstacle avoidance. In [14] an alignment task is realized using intermediate views of the target synthesized by image morphing. However, neither of them deals with robustness issues. Our path planning strategy is based on the potential field method. This method was originally developed for on-line collision avoidance [9, 10]. In this approach the robot motions are under the influence of an artificial potential field V, defined as the sum of an attractive potential V_a pulling the robot toward the goal configuration Υ* and a repulsive potential V_r pushing the robot away from the obstacles. Motion planning is performed in an iterative fashion. At each iteration, an artificial force F(Υ), where the vector Υ represents a parameterization of the robot workspace, is induced by the potential function. This force is defined as F(Υ) = -∇V(Υ), where ∇V denotes the gradient vector of V at Υ. Using these conventions, F(Υ) can be decomposed as the sum of two vectors, F_a(Υ) = -∇V_a(Υ) and F_r(Υ) = -∇V_r(Υ), which are called the attractive and repulsive forces respectively. Path generation proceeds along the direction of F, and the discrete-time trajectory is given by the transition equation:

    Υ^{k+1} = Υ^k + ε_k F(Υ^k) / ‖F(Υ^k)‖    (1)

where k is the increment index and ε_k a positive scaling factor denoting the length of the k-th increment.
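As an illustration of the transition equation (1), the following Python sketch steps along the normalized force direction. It is a minimal sketch, not the authors' implementation: the force function, the step length eps and the stopping threshold are placeholder choices.

```python
import numpy as np

def plan_path(upsilon0, force, eps=0.01, tol=1e-3, max_iter=10000):
    """Generate a discrete path with the transition equation (1):
    upsilon_{k+1} = upsilon_k + eps_k * F(upsilon_k) / ||F(upsilon_k)||.
    `force` returns F = F_a + F_r at a given workspace parameterization."""
    path = [np.asarray(upsilon0, dtype=float)]
    for _ in range(max_iter):
        F = force(path[-1])
        nF = np.linalg.norm(F)
        if nF < tol:          # near the goal, the attractive force vanishes
            break
        path.append(path[-1] + eps * F / nF)
    return np.array(path)

# Example with a purely attractive force F_a = -alpha * upsilon (eq. (2)),
# so the path converges to the goal configuration upsilon* = 0.
path = plan_path(np.ones(6), lambda u: -1.0 * u)
```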
The paper is organized as follows. We describe in Section 2 the method when a model of the target and the calibration of the camera are available. We present in Section 3 how we proceed if the object is planar but neither a model of the target nor an accurate calibration is available. In Section 4 we use the task function approach to track the trajectories. Experimental results are finally given in Section 5.
[Figure 1: Block diagram of the method. Trajectory planning uses the initial image, the desired image, and the constraints to generate the reference feature trajectory s*(t); the error between s*(t) and the features s(t) extracted from the current image feeds the control law driving the robot.]
2 Known target
Here, we assume that the calibration parameters and a target model are available. The technique consists in planning a camera frame trajectory bringing the camera from the initial frame F_0 (parameterized by Υ_0) to the desired frame F* (Υ*), and then projecting the target model into the image along the trajectory. Let ^{c*}R_c, ^{c*}t_c, u and θ be respectively the rotation matrix and the translation vector between the current camera frame F_c and F*, and the rotation axis and rotation angle obtained from ^{c*}R_c. We choose as parameterization of the workspace Υ^T = [ ^{c*}t_c^T  (uθ)^T ]. We thus have Υ_0^T = [ ^{c*}t_{c0}^T  (uθ)_0^T ] and Υ* = 0_{6×1}. Using a pose estimation algorithm [3], we can determine ^{c0}R_o, ^{c0}t_o, ^{c*}R_o and ^{c*}t_o, which represent respectively the rotations and translations from the object frame F_o to F_0 and from F_o to F* (see Figure 3). The vector Υ_0 is then computed using the following relations:

    ^{c*}R_{c0} = ^{c*}R_o ^{c0}R_o^T
    ^{c*}t_{c0} = ^{c*}t_o - ^{c*}R_{c0} ^{c0}t_o

According to (1), we construct a path as the sequence of successive path segments starting at the initial configuration Υ_0. We now present how the potential functions and the induced forces are defined and calculated.
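To make the construction of Υ_0 concrete, here is a minimal Python sketch (an illustration under the notations above, not the authors' code) that composes the two estimated poses and extracts the rotation vector uθ from the resulting rotation matrix:

```python
import numpy as np

def axis_angle(R):
    """Extract the rotation vector u*theta from a rotation matrix R."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return np.zeros(3)
    u = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta * u / (2.0 * np.sin(theta))

def initial_parameterization(R_c0_o, t_c0_o, R_cs_o, t_cs_o):
    """Upsilon_0 from the two estimated poses of the object frame:
    R = {c*}R_{c0} = {c*}R_o {c0}R_o^T,  t = {c*}t_o - R {c0}t_o."""
    R = R_cs_o @ R_c0_o.T
    t = t_cs_o - R @ t_c0_o
    return np.hstack((t, axis_angle(R)))
```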
Attractive potential and force. The attractive potential field V_a is simply defined as a parabolic function, in order to minimize the distance between the current position and the desired one:

    V_a(Υ) = (1/2) α ‖Υ - Υ*‖² = (1/2) α Υ^T Υ

where α is a positive scaling factor. The attractive force deriving from V_a is:

    F_a = -∇V_a = -α Υ    (2)
Repulsive potential and force. A point P_i, which projects onto the camera's image plane at a point with image coordinates p_i = [u_i v_i 1]^T, is observable by the camera if u_i ∈ [u_m, u_M] and v_i ∈ [v_m, v_M], where u_m, u_M, v_m, v_M are the limits of the image (see Figure 2). One way to create a potential barrier around the camera field of view, ensuring that all features remain observable while not affecting the camera motion when they are sufficiently far away from the image limits, is to define the repulsive potential V_r as follows (see Figure 2):

    V_r(s) = (η/2) Σ_i [ (1/(u_i - u_m) - 1/ρ_0)² + (1/(u_M - u_i) - 1/ρ_0)²
                       + (1/(v_i - v_m) - 1/ρ_0)² + (1/(v_M - v_i) - 1/ρ_0)² ]  if s ∈ S,
    V_r(s) = 0  otherwise    (3)

where s is the vector made up of the coordinates u_i, v_i (i = 1..n), each squared term being kept only when the corresponding distance to the image edge is smaller than ρ_0, and S is the set { s | ∃i, u_i ∈ [u_m, u_m + ρ_0] ∪ [u_M - ρ_0, u_M] or v_i ∈ [v_m, v_m + ρ_0] ∪ [v_M - ρ_0, v_M] }, ρ_0 being a positive constant denoting the distance of influence of the image edges.

[Figure 2: Repulsive potential. The potential rises steeply as a feature coordinate approaches one of the image limits u_m, u_M, v_m, v_M and vanishes outside the zone of influence of the edges.]
The artificial repulsive force deriving from V_r is:

    F_r = -∇V_r = -(∂V_r/∂Υ)^T = -( (∂V_r/∂s) (∂s/∂r) (∂r/∂Υ) )^T

where r denotes the situation of the camera with respect to a reference frame. The previous equation can be written:

    F_r = -M^T L^T (∂V_r/∂s)^T

where:
- L = ∂s/∂r is the image Jacobian (or interaction matrix) [4]. It relates the variation of the image features s to the velocity screw of the camera T: ṡ = L T. The well-known interaction matrix for a point P with coordinate Z in the camera frame and coordinates p = (x, y) in the image, expressed in meters for a one-meter focal length, is:

    L(p, Z) = [ -1/Z    0     x/Z   xy      -(1+x²)   y  ]
              [  0     -1/Z   y/Z   1+y²    -xy      -x  ]

When s is composed of the image coordinates of n points, the corresponding interaction matrix is:

    L(s, Z) = [ L^T(p_1, Z_1) ... L^T(p_n, Z_n) ]^T    (4)
- M = ∂r/∂Υ is the 6×6 Jacobian matrix that relates the variation of r to the variation of Υ:

    M = [ ^{c*}R_c^T   0_{3×3}  ]
        [ 0_{3×3}      L_ω^{-1} ]

where [11]:

    L_ω^{-1} = I_{3×3} + (θ/2) sinc²(θ/2) [u]_× + (1 - sinc(θ)) [u]_ײ

[u]_× being the antisymmetric matrix of the cross product associated with u.
- ∂V_r/∂s is easily obtained according to (3) (the three factors are combined in the sketch below).
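To make the three factors concrete, here is a minimal Python sketch (an illustration, not the authors' implementation). It uses the barrier potential of (3) with a finite-difference gradient in place of the analytic ∂V_r/∂s, and it glosses over the pixel-to-meter conversion between the potential (defined in pixels) and the interaction matrix (defined in metric coordinates):

```python
import numpy as np

def interaction_matrix(points_xy, Z):
    """Stack the 2x6 point interaction matrices of eq. (4).
    points_xy: (n, 2) metric image coordinates; Z: (n,) depths."""
    rows = []
    for (x, y), z in zip(points_xy, Z):
        rows.append([-1.0/z, 0.0, x/z, x*y, -(1.0 + x*x), y])
        rows.append([0.0, -1.0/z, y/z, 1.0 + y*y, -x*y, -x])
    return np.array(rows)

def V_r(s, lims, rho0, eta=1.0):
    """Barrier potential of eq. (3); s = [u1, v1, ..., un, vn] in pixels."""
    u_m, u_M, v_m, v_M = lims
    V = 0.0
    for i, c in enumerate(s):
        lo, hi = (u_m, u_M) if i % 2 == 0 else (v_m, v_M)
        for d in (c - lo, hi - c):        # distances to the two edges
            if 0.0 < d < rho0:            # inside the zone of influence
                V += 0.5 * eta * (1.0/d - 1.0/rho0) ** 2
    return V

def grad_V_r(s, lims, rho0, h=1e-3):
    """Finite-difference stand-in for the analytic dV_r/ds of eq. (3)."""
    s = np.asarray(s, dtype=float)
    g = np.zeros_like(s)
    for i in range(s.size):
        e = np.zeros_like(s)
        e[i] = h
        g[i] = (V_r(s + e, lims, rho0) - V_r(s - e, lims, rho0)) / (2.0 * h)
    return g

def repulsive_force(s_pix, points_xy, Z, M, lims, rho0):
    """F_r = -M^T L^T (dV_r/ds)^T, with M the 6x6 Jacobian defined above."""
    L = interaction_matrix(points_xy, Z)
    return -M.T @ L.T @ grad_V_r(s_pix, lims, rho0)
```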
Note that, using (2), (3) and (1), we obtain a camera trajectory in the workspace. A PbC scheme could thus be used to follow it. However, it is more interesting to generate feature trajectories in the image, in order to exploit the good behavior of IbC when the current and desired camera positions are close.
2D trajectories. Let ^{c*}R_{ck}, ^{c*}t_{ck} and ^{ck}R_o, ^{ck}t_o be the rotations and translations mapping F_{ck} to F* and F_o to F_{ck}, where F_{ck} is the camera frame position at iteration k of the path planning. With these notations we have:

    ^{ck}R_o = ^{c*}R_{ck}^T ^{c*}R_o
    ^{ck}t_o = ^{c*}R_{ck}^T ( ^{c*}t_o - ^{c*}t_{ck} )

In order to perform visual servo control, we construct the trajectory of the projection p_i of each point P_i (i = 1..n) onto the image, using the known coordinates ^oX_i of P_i in F_o. The trajectory in the image is obtained using the classical assumption that the camera performs a perfect perspective transformation with respect to the camera optic center (pinhole model):

    p_{i,k} = A [ ^{ck}R_o  ^{ck}t_o ] ^oX_i

where A is the matrix of camera intrinsic parameters. In the next part, we extend this method to the case where the target model is unknown.
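The projection step can be sketched as follows (illustrative code; A is the 3×3 intrinsic matrix and (R_k, t_k) stands for the pose ( ^{ck}R_o, ^{ck}t_o ) at planning iteration k):

```python
import numpy as np

def project_points(A, R_k, t_k, X_o):
    """Pinhole projection p = A [R|t] X of the model points X_o (n, 3),
    expressed in the object frame, at planning iteration k."""
    X_c = X_o @ R_k.T + t_k            # object frame -> camera frame
    p = (A @ X_c.T).T                  # homogeneous pixel coordinates
    return p[:, :2] / p[:, 2:3]        # normalize by the last component
```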
3 Unknown planar target
In this section, we assume that the target is planar but that its model is not available. After recalling the relations between two views of a planar target, we present the method with accurate calibration parameters and then prove its robustness with respect to calibration errors.
3.1 Euclidean reconstruction
Consider a reference plane π given in the desired camera frame F* by the vector π*^T = [ n*^T  -d* ], where n* is its unitary normal in F* and d* the distance from π to the origin of F* (see Figure 3). It is well known [5] that the projections of a point P_i lying on π, p_i = [u_i v_i 1]^T in the current view and p*_i = [u*_i v*_i 1]^T in the desired view, are linked by the projective relation:

    α_i p_i = G p*_i    (5)

where G is a projective homography, expressed in pixels, of plane π between the current and desired images, and α_i a scaling factor. We can estimate it from a set of n ≥ 4 points (three of them defining π) in the general case, or from a set of n ≥ 3 points belonging to π [11, 5]. Assuming that the camera calibration is known, the Euclidean homography H is computed as follows:

    H = A^{-1} G A    (6)

The matrix H can be decomposed using the motion parameters between F* and F_c [5]:

    H = ^cR_{c*} + (^ct_{c*}/d*) n*^T = ^{c*}R_c^T - ^{c*}R_c^T t_{d*} n*^T    (7)

where t_{d*} = ^{c*}t_c / d*. From H it is possible to compute ^{c*}R_c, t_{d*}, and n*, using for example the algorithm presented in [5]. The ratio ρ_i between the coordinate Z_i of a point lying on π, with respect to the camera frame, and d*, which we will use in the sequel, can also be determined [11]:

    ρ_i = Z_i / d* = (1 + n*^T ^{c*}R_c^T t_{d*}) / (n*^T ^{c*}R_c^T A^{-1} p_i)    (8)
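A minimal sketch of (6) and (8) follows (illustrative only; the pixel homography G is assumed to be already estimated from point correspondences, and the decomposition of H into ^{c*}R_c, t_{d*} and n* [5] is taken as given):

```python
import numpy as np

def euclidean_homography(G, A):
    """Eq. (6): H = A^{-1} G A."""
    return np.linalg.inv(A) @ G @ A

def depth_ratio(p_i, A, R_cs_c, t_ds, n_s):
    """Eq. (8): rho_i = Z_i/d* for a point of plane pi;
    p_i: homogeneous pixel coordinates [u, v, 1]."""
    num = 1.0 + n_s @ R_cs_c.T @ t_ds
    den = n_s @ R_cs_c.T @ np.linalg.inv(A) @ p_i
    return num / den
```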
[Figure 3: Euclidean reconstruction. The reference plane π, with unitary normal n* and distance d* to the origin of the desired camera frame, a target point P, and the object, initial, current and desired camera frames with the rotations and translations relating them.]
3.2 Trajectory planning
We now choose the partial parameterization of the workspace as Υ^T = [ t_{d*}^T  (uθ)^T ]. We thus have Υ_0^T = [ t_{d*,0}^T  (uθ)_0^T ] and Υ* = 0_{6×1}. From the initial and desired images, it is possible to compute the homography H_0, and then to obtain ^{c*}R_{c0}, t_{d*,0} = ^{c*}t_{c0}/d*, n*, and thus Υ_0. As in the previous section, we construct a path starting at Υ_0 and oriented along the induced forces, given by:

    F = -α Υ - M^T(d*) L^T(s, d*) (∂V_r/∂s)^T
According to (4) and (8), L(s, d*) can be written:

    L(s, d*) = [ (1/d*) L_t   L_R ]    (9)

where L_t = [ L_{t,1}^T ... L_{t,n}^T ]^T and L_R = [ L_{R,1}^T ... L_{R,n}^T ]^T are two 2n×3 matrices independent of d*:

    L_{t,i} = (1/ρ_i) [ -1   0   x_i ]      L_{R,i} = [ x_i y_i   -(1+x_i²)   y_i  ]
                      [  0  -1   y_i ]                [ 1+y_i²   -x_i y_i   -x_i  ]
The Jacobian matrix M(d*) is given by:

    M(d*) = [ d* ^{c*}R_c^T   0_{3×3}  ]
            [ 0_{3×3}         L_ω^{-1} ]    (10)

Using the above equations, the vector Υ^k can be computed at each iteration, and from Υ^k the rotation matrix ^{c*}R_{ck} and the vector t_{d*,k} = ^{c*}t_{ck}/d* are obtained.
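Mapping Υ^k back to a rotation matrix and a scaled translation can be sketched with the Rodrigues formula (illustrative code, consistent with the parameterization above):

```python
import numpy as np

def skew(u):
    return np.array([[0, -u[2], u[1]], [u[2], 0, -u[0]], [-u[1], u[0], 0]])

def pose_from_upsilon(upsilon):
    """Split Upsilon^k = [t_{d*}^T, (u*theta)^T] and rebuild {c*}R_{ck}
    with the Rodrigues formula."""
    t_ds, utheta = upsilon[:3], upsilon[3:]
    theta = np.linalg.norm(utheta)
    if np.isclose(theta, 0.0):
        return np.eye(3), t_ds
    u = skew(utheta / theta)
    R = np.eye(3) + np.sin(theta) * u + (1.0 - np.cos(theta)) * (u @ u)
    return R, t_ds
```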
2D trajectories. The homography matrix H_k of the plane π relating the current and desired images can be computed from Υ^k using (7):

    H_k = ^{c*}R_{ck}^T - ^{c*}R_{ck}^T t_{d*,k} n*^T

According to (5), the image coordinates of the points P_i belonging to π at iteration k are given by:

    α_i p_{i,k} = [ α_i u_{i,k}  α_i v_{i,k}  α_i ]^T = G_k p*_i    (11)

p_{i,k} is easily obtained by dividing α_i p_{i,k} by its last component; thus equation (11) allows us to obtain the trajectories in the image.
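A sketch of this step (illustrative), building H_k as in (7) and applying (11) with G_k = A H_k A^{-1}:

```python
import numpy as np

def planned_points(A, R_cs_ck, t_ds_k, n_s, p_star):
    """Eq. (7): H_k = R^T - R^T t_{d*,k} n*^T, then eq. (11):
    alpha_i p_{i,k} = A H_k A^{-1} p*_i, normalized by the last component.
    p_star: (n, 3) homogeneous desired pixel coordinates."""
    H_k = R_cs_ck.T @ (np.eye(3) - np.outer(t_ds_k, n_s))
    G_k = A @ H_k @ np.linalg.inv(A)
    p = (G_k @ p_star.T).T
    return p / p[:, 2:3]
```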
Influence of d*. The parameter d* appears in the repulsive force only through the matrix Q, composed of the product of M^T(d*) and L^T(s, d*). According to (9) and (10), we have:

    Q = M^T(d*) L^T(s, d*) = [ ^{c*}R_c L_t^T   ]
                             [ L_ω^{-T} L_R^T   ]

which proves that Q, and thus the trajectories in the image, are independent of the parameter d*.
Influence of intrinsic parameters. If the camera is not perfectly calibrated and Â is used instead of A, the estimated homography matrix is:

    Ĥ_0 = Â^{-1} A H_0 A^{-1} Â    (12)

Let us assume the following hypothesis (H1):

    Ĥ_0 = Â^{-1} A H_0 A^{-1} Â  ⟹  Ĥ_k = Â^{-1} A H_k A^{-1} Â

This assumption means that the initial error in the estimated homography is propagated along the trajectory. According to (11) and (6), we obtain:

    α_i p_{i,k} = A H_k A^{-1} p*_i    (13)

Considering (H1), (12) and (13), we obtain:

    α̂_i p̂_{i,k} = Â Ĥ_k Â^{-1} p*_i = A H_k A^{-1} p*_i = α_i p_{i,k}

Therefore, under assumption (H1), the trajectories in the image are not disturbed by errors on the intrinsic parameters. We will check this nice property on the experimental results given in Section 5.
4 Control Scheme
In order to track the trajectories using an Image-based Control scheme, a vision-based task function e(s(r(t)), t) [4] is defined as:

    e = L̂⁺ ( s(t) - s*(t) )    (14)

where s is composed of the current image coordinates, s* is the desired trajectory of s computed in Sections 2 and 3, and L̂⁺ is the pseudo-inverse of a chosen model of L. The value of L at the current desired position is used for L̂:

- if the target is known, L̂ = L(s*_k, Z*_k), where Z*_k is easily obtained from Υ^k and the target model;
- else, L̂ = L(s*_k, d̂*), d̂* being an estimated value of d*.
In order that e exponentially decreases toward 0, the velocity control law is given by [4]:

    T = -λ e - ∂ê/∂t    (15)

where λ is a proportional gain and ∂ê/∂t denotes an estimated value of the time variation of e. If the target is motionless, we obtain from (14):

    ∂e/∂t = -L̂⁺ ∂s*/∂t    (16)

According to (16), we rewrite (15) as:

    T = -λ e + L̂⁺ ∂ŝ*/∂t

where the term L̂⁺ ∂ŝ*/∂t compensates the tracking error in following the specified trajectory [1]. It can be estimated as follows:

    L̂⁺ ∂ŝ*/∂t = L̂⁺ ( s*_k - s*_{k-1} ) / Δt

The discretized control law at time k Δt can finally be written:

    T = -λ L̂⁺ ( s_k - s*_k ) + L̂⁺ ( s*_k - s*_{k-1} ) / Δt
5 Experiments
The methods presented have been tested on a six-DOF eye-in-hand system. The target is a planar object with four white marks (see Figure 4). The displacement between the initial and final camera positions is very significant in both translation and rotation, and in this case classical image-based and position-based visual servoing fail. Figure 4(c) shows the importance of the repulsive potential, without which the visual features largely leave the camera field of view.
The results obtained (see Figure 5) using the method presented in Section 2 and correct intrinsic parameters are very satisfactory. The positioning task is accurately realized with regular velocities (because the error s_k - s*_k keeps a regular value). After the complete realization of the trajectory, servoing is prolonged with a small gain and a constant reference. We can notice that the desired trajectories and the tracked trajectories are almost identical.
The method presented in Section 3 has been tested with two sets of parameters. In Figure 6, the intrinsic parameters given by the camera manufacturer and the real value of d* have been used, while in Figure 7 an error of 20% is added to the intrinsic parameters as well as to the parameter d*. In both cases the results are satisfactory. In particular, and as expected, note that the planned trajectories are practically identical in both cases.
[Figure 4: Initial (a) and desired (b) images of the target, with the four marks numbered 1 to 4, and trajectories planned without the repulsive potential (c).]
[Figure 5: First case. (a) planned trajectories, (b) followed trajectories, (c) velocities (cm/s and deg/s), (d) errors on pixel coordinates.]
6 Conclusion
In this paper, we have presented a powerful method to extend the application area of visual servoing to cases where the initial and desired positions of the camera are distant. Experimental results show the validity of our approach and its robustness with respect to modeling errors. Future work will be devoted to introducing supplementary constraints into the planned trajectories, in order to avoid robot joint limits, kinematic singularities, occlusions and obstacles. Another perspective is to generate trajectories in image space of more complex features than points, in order to apply our method to real objects.

References
Journal ArticleDOI
Real-time obstacle avoidance for manipulators and mobile robots
TL;DR: This paper reformulates the manipulator control problem as direct control of manipulator motion in operational space (the space in which the task is originally described) rather than as control of the task's corresponding joint-space motion obtained only after kinematic and geometric transformations.

Book
Robot Motion Planning
TL;DR: This book discusses the configuration space of a rigid object, approaches to dealing with uncertainty, and potential field methods for motion planning.

Journal ArticleDOI
A tutorial on visual servo control
TL;DR: This article provides a tutorial introduction to visual servo control of robotic manipulators, reviewing the prerequisite topics from robotics and computer vision, including a brief review of coordinate transformations, velocity representation, and a description of the geometric aspects of the image formation process.

Book ChapterDOI
A new approach to visual servoing in robotics
TL;DR: Vision-based control in robotics is described, based on considering a vision system as a specific sensor dedicated to a task and included in a control servo loop; stability and robustness questions are addressed.

Journal ArticleDOI
Model-based object pose in 25 lines of code
TL;DR: Compared to classic approaches making use of Newton's method, POSIT does not require an initial guess and computes the pose using an order of magnitude fewer floating-point operations; it may therefore be a useful alternative for real-time operation.