
Showing papers by "Paolo Robuffo Giordano published in 2008"


Journal ArticleDOI
TL;DR: This work proposes a visual servoing approach where depth is observed and made available for servoing by interpreting depth as an unmeasurable state with known dynamics, and building a non-linear observer that asymptotically recovers the actual value of Z for the selected feature.
Abstract: In the classical image-based visual servoing framework, error signals are directly computed from image feature parameters, allowing, in principle, control schemes to be obtained that need neither a complete three-dimensional (3D) model of the scene nor a perfect camera calibration. However, when the computation of control signals involves the interaction matrix, the current value of some 3D parameters is required for each considered feature, and typically a rough approximation of this value is used. With reference to the case of a point feature, for which the relevant 3D parameter is the depth Z, we propose a visual servoing approach where Z is observed and made available for servoing. This is achieved by interpreting depth as an unmeasurable state with known dynamics, and by building a non-linear observer that asymptotically recovers the actual value of Z for the selected feature. A byproduct of our analysis is the rigorous characterization of camera motions that actually allow such observation. Moreover, in the case of a partially uncalibrated camera, it is possible to exploit complementary camera motions in order to preliminarily estimate the focal length without knowing Z. Simulations and experimental results are presented for a mobile robot with an on-board camera in order to illustrate the benefits of integrating the depth observation within classical visual servoing schemes.
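The observer idea described above can be sketched in a few lines. The following is an illustrative simulation, not the paper's exact equations: a normalized point feature s = (x, y) is measured, the inverse depth chi = 1/Z is treated as an unmeasurable state with known dynamics, and a copy of the feature dynamics with an injection term drives the estimate. The camera velocity, initial conditions, and gains (k_s, k_chi) are hypothetical values chosen so that the translational motion satisfies the observability (persistency of excitation) condition.

```python
# Illustrative nonlinear depth observer for a point feature (a sketch,
# not the paper's exact scheme). s = (x, y) is measured; chi = 1/Z is
# estimated; camera velocity (v, w) is assumed known.
import numpy as np

def f_v(s, v):
    # depth-dependent part of the point dynamics (multiplies chi = 1/Z)
    x, y = s
    vx, vy, vz = v
    return np.array([x * vz - vx, y * vz - vy])

def f_w(s, w):
    # depth-independent (rotational) part of the point dynamics
    x, y = s
    wx, wy, wz = w
    return np.array([x * y * wx - (1 + x**2) * wy + y * wz,
                     (1 + y**2) * wx - x * y * wy - x * wz])

def simulate(T=10.0, dt=1e-3, k_s=5.0, k_chi=50.0):
    v = np.array([0.2, 0.1, 0.0])    # translation not along the ray => observable
    w = np.zeros(3)                  # no rotation in this simple example
    s = np.array([0.1, 0.05])        # true feature; true depth Z = 2
    chi = 0.5                        # true inverse depth (constant here)
    s_hat, chi_hat = s.copy(), 0.1   # observer starts with a wrong depth
    for _ in range(int(T / dt)):
        e = s - s_hat
        ds = f_w(s, w) + chi * f_v(s, v)
        # observer: copy of the dynamics plus prediction-error injection
        ds_hat = f_w(s, w) + chi_hat * f_v(s, v) + k_s * e
        dchi_hat = (v[2] * chi_hat**2
                    + chi_hat * (w[0] * s[1] - w[1] * s[0])
                    + k_chi * f_v(s, v) @ e)
        s = s + dt * ds
        s_hat = s_hat + dt * ds_hat
        chi_hat = chi_hat + dt * dchi_hat
    return chi, chi_hat

chi, chi_hat = simulate()
print(f"estimated depth: {1/chi_hat:.3f} (true Z = 2.0)")
```

Note that if v were aligned with the projection ray of the feature, f_v would vanish and chi_hat would stop being corrected — this is exactly the class of unobservable camera motions the paper characterizes.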

177 citations


Proceedings ArticleDOI
19 May 2008
TL;DR: The proposed estimation scheme builds upon the theory of nonlinear observers, and in particular exploits the basic formulation of the persistency of excitation Lemma, and results are presented in order to support the effectiveness of the proposed approach.
Abstract: In the image-based visual servoing framework, image moments provide an appealing choice as visual features since they can be easily evaluated on any shape on the image plane, and do not require tracking and matching of individual geometric structures between distinct image frames (i.e., the so-called correspondence problem). However, computation of the moment interaction matrix still requires the knowledge of specific unmeasurable 3D quantities relative to the target object, quantities that are usually approximated in practical implementations. Therefore, in this paper we analyze the possibility of estimating on-line the value of such 3D quantities during the camera motion, under the sole assumption of a target shape with a planar limb surface. The proposed estimation scheme builds upon the theory of nonlinear observers, and in particular exploits the basic formulation of the persistency of excitation Lemma. Simulation results are then presented in order to support the effectiveness of the proposed approach.
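The persistency of excitation Lemma the abstract invokes can be illustrated in its generic textbook form, not the paper's specific scheme: unknown constant parameters theta enter a measured signal linearly, y(t) = phi(t) . theta, and a gradient update on the estimate converges exactly when the regressor phi(t) is persistently exciting. The regressor, gain, and parameter values below are all hypothetical.

```python
# Textbook gradient estimator illustrating the persistency-of-excitation
# lemma (a generic sketch, not the paper's observer). The unknown
# parameters theta enter the measured output linearly: y = phi . theta.
import numpy as np

def estimate(theta, T=20.0, dt=1e-3, k=5.0):
    theta = np.asarray(theta, dtype=float)
    theta_hat = np.zeros_like(theta)
    t = 0.0
    for _ in range(int(T / dt)):
        # two sinusoids at distinct frequencies: persistently exciting
        # for a two-parameter model
        phi = np.array([np.sin(t), np.cos(2.0 * t)])
        y = phi @ theta                              # measured output
        theta_hat += dt * k * phi * (y - phi @ theta_hat)
        t += dt
    return theta_hat

print(estimate([1.5, -0.7]))  # converges toward [1.5, -0.7]
```

If phi were constant (or vanished), the error dynamics would lose excitation in some direction and the estimate would not converge — the same condition that constrains the camera motion in the paper's setting.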

31 citations


Proceedings ArticleDOI
19 May 2008
TL;DR: An experimental evaluation of the performance of two redundancy resolution schemes, namely Task Priority and Task Sequencing, when adopted to realize IBVS tasks on a mobile robot equipped with an on-board pan-tilt camera.
Abstract: Within the standard IBVS framework for control of generic robotic systems, a suitable exploitation of redundancy w.r.t. the given visual task can significantly improve the overall task execution. Indeed, redundancy can be used to avoid occlusions or joint limits, or to realize tasks that would be ill-conditioned if addressed all at once. In this respect, we propose an experimental evaluation of the performance of two redundancy resolution schemes, namely Task Priority and Task Sequencing, when adopted to realize IBVS tasks on a mobile robot equipped with an on-board pan-tilt camera.
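Of the two schemes compared, Task Priority admits a compact closed form: the secondary task is resolved in the null space of the primary (visual) task, so the primary task is never perturbed. The following sketch uses the standard null-space projection formula on a toy 4-DOF system; the Jacobians and task velocities are hypothetical, not the robot's.

```python
# Standard Task Priority redundancy resolution (null-space projection),
# shown on a toy 4-DOF system with hypothetical task Jacobians.
import numpy as np

def task_priority(J1, dx1, J2, dx2):
    """q_dot = J1+ dx1 + (J2 P1)+ (dx2 - J2 J1+ dx1),  P1 = I - J1+ J1."""
    J1p = np.linalg.pinv(J1)
    P1 = np.eye(J1.shape[1]) - J1p @ J1   # projector onto null(J1)
    q1 = J1p @ dx1                        # primary (visual) task
    q2 = np.linalg.pinv(J2 @ P1) @ (dx2 - J2 @ q1)
    return q1 + P1 @ q2                   # secondary task only in null(J1)

J1 = np.array([[1.0, 0.0, 1.0, 0.0],
               [0.0, 1.0, 0.0, 1.0]])    # primary (visual) task Jacobian
J2 = np.array([[1.0, 1.0, 0.0, 0.0]])    # secondary task Jacobian
dq = task_priority(J1, np.array([0.2, -0.1]), J2, np.array([0.05]))
print(J1 @ dq)  # primary task velocity reproduced exactly
```

Task Sequencing, by contrast, would execute the two tasks one after the other rather than blending them; the paper's contribution is the experimental comparison of the two strategies, which no short sketch can reproduce.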

24 citations


Proceedings ArticleDOI
14 Oct 2008
TL;DR: An experimental evaluation of automatic robotic assembly of complex planar parts using the torque-controlled DLR light-weight robot, equipped with an on-board camera (eye-in-hand configuration); the performance of humans and of the robot is compared in terms of overall execution time.
Abstract: In this paper we present an experimental evaluation of automatic robotic assembly of complex planar parts. The torque-controlled DLR light-weight robot, equipped with an on-board camera (eye-in-hand configuration), is given the task of looking for given parts on a table, picking them up, and inserting them into the corresponding holes on a movable plate. Visual servoing techniques are used for fine positioning over the selected part/hole, while insertion is based on active compliance control of the robot and robust assembly planning in order to align the parts automatically with the hole. Execution of the complete task is validated through extensive experiments, and the performance of humans and of the robot is compared in terms of overall execution time.
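The role of active compliance during insertion can be illustrated by a generic impedance law of the textbook form F = K (x_d - x) - D x_dot — a 1-D stand-in for the robot's actual compliance controller, with hypothetical gains and geometry. Commanding the part slightly past the contact surface yields a bounded steady-state contact force K (x_d - x_wall) instead of a hard impact.

```python
# 1-D illustration of active compliance via a textbook impedance law
# (a sketch, not the DLR controller; gains and geometry are hypothetical).
def insert(x_d=0.02, x_wall=0.0, K=500.0, D=60.0, m=1.0, T=2.0, dt=1e-4):
    x, xd = -0.05, 0.0                  # part starts above the surface
    for _ in range(int(T / dt)):
        F = K * (x_d - x) - D * xd      # impedance control force
        a = F / m
        x, xd = x + dt * xd, xd + dt * a
        if x > x_wall:                  # rigid contact with the plate
            x, xd = x_wall, 0.0
    return K * (x_d - x)                # steady-state contact force

print(insert())  # bounded force of about K * x_d = 10 N, not an impact
```

A stiff position controller in the same situation would ramp the contact force up with its (much higher) position gain, which is why compliance, rather than pure visual positioning, handles the final alignment of part and hole.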

20 citations