
Showing papers by "Paolo Robuffo Giordano published in 2014"


Journal ArticleDOI
TL;DR: The reported simulative and experimental results fully support the theoretical analysis and clearly show the benefits of the proposed active estimation strategy, which is in particular able to impose a desired transient response on the estimation error, equivalent to that of a reference linear second-order system with assigned poles.
Abstract: In this paper, we illustrate the application of a nonlinear active structure estimation from motion (SfM) strategy to three problems, namely 3-D structure estimation for 1) a point, 2) a sphere, and 3) a cylinder. In all three cases, an appropriate parameterization reduces the problem to the estimation of a single quantity. Knowledge of this estimated quantity and of the available measurements then allows retrieving the full 3-D structure of the observed objects. Furthermore, in the point feature case, two different parameterizations based on either a planar or a spherical projection model are critically compared. Indeed, the two models lead, somewhat unexpectedly, to different convergence properties for the SfM estimation task. The reported simulative and experimental results fully support the theoretical analysis and clearly show the benefits of the proposed active estimation strategy, which is in particular able to impose a desired transient response on the estimation error, equivalent to that of a reference linear second-order system with assigned poles.
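The planar-versus-spherical comparison above comes down to two projection maps; a minimal sketch of the two models (illustrative point and variable names, not the paper's notation), each leaving a single unknown that completes the 3D structure:

```python
import numpy as np

def planar_projection(P):
    """Perspective projection onto the normalized image plane z = 1."""
    X, Y, Z = P
    return np.array([X / Z, Y / Z])           # depth enters via 1/Z

def spherical_projection(P):
    """Projection onto the unit sphere centered at the camera."""
    P = np.asarray(P, float)
    return P / np.linalg.norm(P)              # range enters via 1/||P||

P = np.array([0.4, 0.2, 2.0])    # a 3D point in the camera frame
m = planar_projection(P)          # measured image coordinates
s = spherical_projection(P)       # unit direction to the point

# In both cases a single unknown completes the 3D structure:
# the depth Z for the planar model, the range ||P|| for the spherical one.
Z = P[2]
r = np.linalg.norm(P)
assert np.allclose(np.array([m[0] * Z, m[1] * Z, Z]), P)
assert np.allclose(s * r, P)
```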

67 citations


Proceedings ArticleDOI
22 Sep 2014
TL;DR: A new framework for semi-autonomous path planning for mobile robots that extends the classical paradigm of bilateral shared control is presented and is validated with extensive experiments using a quadrotor UAV and a human in the loop with two haptic interfaces.
Abstract: A new framework for semi-autonomous path planning for mobile robots that extends the classical paradigm of bilateral shared control is presented. The path is represented as a B-spline, and the human operator can modify its shape by controlling the motion of a finite number of control points. An autonomous algorithm corrects the human directives in real time in order to facilitate path tracking for the mobile robot and ensures i) collision avoidance, ii) path regularity, and iii) attraction to nearby points of interest. A haptic feedback algorithm processes both the human's and the autonomous control terms, and their integrals, to provide information about the mismatch between the path specified by the operator and the one corrected by the autonomous algorithm. The framework is validated with extensive experiments using a quadrotor UAV and a human in the loop with two haptic interfaces.
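The B-spline representation is what makes the operator's edits local to a stretch of the path; a minimal sketch with SciPy (illustrative control points, not the experimental path):

```python
import numpy as np
from scipy.interpolate import BSpline

# Clamped cubic B-spline defined by 6 control points (hypothetical values).
k = 3
ctrl = np.array([[0., 0.], [1., 0.], [2., 1.], [3., 1.], [4., 0.], [5., 0.]])
knots = np.r_[[0.] * (k + 1), [1/3, 2/3], [1.] * (k + 1)]
path = BSpline(knots, ctrl, k)

u = np.linspace(0., 1., 50)
before = path(u)                  # sampled path, shape (50, 2)

# The operator "drags" one control point; only a local stretch of the
# path changes, which is what makes B-splines convenient for shared control.
ctrl2 = ctrl.copy()
ctrl2[2] += [0., 0.5]
after = BSpline(knots, ctrl2, k)(u)
```

Because a degree-k basis function has local support, moving control point 2 here only deforms the path for parameters below 2/3; the end of the path is untouched.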

54 citations


Proceedings ArticleDOI
24 Jun 2014
TL;DR: In this paper, an extension of rigidity theory is made for frameworks embedded in the special Euclidean group SE(2) = ℝ2 × S1, where each robot has access to a relative bearing measurement taken from the local body frame of the robot, and the robots have no knowledge of a common reference frame.
Abstract: This work considers the problem of estimating the unscaled relative positions of a multi-robot team in a common reference frame from bearing-only measurements. Each robot has access to a relative bearing measurement taken from the local body frame of the robot, and the robots have no knowledge of a common reference frame. An extension of rigidity theory is made for frameworks embedded in the special Euclidean group SE(2) = ℝ2 × S1. We introduce definitions describing rigidity for SE(2) frameworks and provide necessary and sufficient conditions for when such a framework is infinitesimally rigid in SE(2). We then introduce the directed bearing rigidity matrix and show that an SE(2) framework is infinitesimally rigid if and only if the rank of this matrix is equal to 2|V| − 4, where |V| is the number of agents in the ensemble. The directed bearing rigidity matrix and its properties are then used in the implementation and convergence proof of a distributed estimator to determine the unscaled relative positions in a common frame. Simulation results are given to support the analysis.
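The body-frame bearing measurement the SE(2) framework is built on can be sketched as follows (a hypothetical helper, not the paper's estimator). Translating, uniformly scaling, or coordinately rotating the whole formation leaves such readings unchanged, which is why a set of trivial motions is factored out of the rank condition:

```python
import numpy as np

def local_bearing(p_i, theta_i, p_j):
    """Bearing from agent i to agent j, expressed in i's body frame
    (i.e., relative to i's heading theta_i), wrapped to (-pi, pi]."""
    d = np.asarray(p_j, float) - np.asarray(p_i, float)
    beta = np.arctan2(d[1], d[0]) - theta_i
    return np.arctan2(np.sin(beta), np.cos(beta))  # angle wrapping

# Agent i at the origin heading along +x sees an agent at (1, 1)
# at bearing pi/4; rotating i's own heading shifts the reading.
b0 = local_bearing([0., 0.], 0.0, [1., 1.])          # pi/4
b1 = local_bearing([0., 0.], np.pi / 2, [1., 1.])    # -pi/4
```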

51 citations


Proceedings ArticleDOI
01 May 2014
TL;DR: A solution for coupling the execution of a visual servoing task with a recently developed active Structure from Motion strategy able to optimize online the convergence rate in estimating the (unknown) 3D structure of the scene is proposed.
Abstract: In this paper we propose a solution for coupling the execution of a visual servoing task with a recently developed active Structure from Motion strategy able to optimize online the convergence rate in estimating the (unknown) 3D structure of the scene. This is achieved by suitably modifying the robot trajectory in the null-space of the servoing task so as to render the camera motion 'more informative' w.r.t. the states to be estimated. As a byproduct, the better 3D structure estimation also improves the evaluation of the servoing interaction matrix which, in turn, results in a better closed-loop convergence of the task itself. The reported experimental results support the theoretical analysis and show the benefits of the method.
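The null-space modification of the robot trajectory follows the classical projection scheme for redundant tasks; a minimal numeric sketch with a toy Jacobian (not the paper's servoing task):

```python
import numpy as np

def null_space_control(J, e_dot_des, q_dot_secondary):
    """Redundancy resolution: the primary task fixes J q_dot = e_dot_des,
    and a secondary velocity (e.g., one making the camera motion more
    informative) is projected onto the task null space, so it cannot
    perturb the task."""
    J_pinv = np.linalg.pinv(J)
    N = np.eye(J.shape[1]) - J_pinv @ J      # null-space projector of J
    return J_pinv @ e_dot_des + N @ q_dot_secondary

J = np.array([[1., 0., 0., 1.],
              [0., 1., 1., 0.]])             # toy 2x4 task Jacobian
q_dot = null_space_control(J, np.array([0.1, -0.2]), np.array([1., 1., 0., 0.]))

# The projected secondary motion produces no task-space velocity:
assert np.allclose(J @ q_dot, [0.1, -0.2])
```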

12 citations


Proceedings ArticleDOI
01 May 2014
TL;DR: The experimental results fully support the theoretical analysis and clearly show the benefits of the proposed active SfM strategy, and it is possible to assign the error transient response and make it equivalent to that of a reference linear second-order system with desired poles.
Abstract: Structure estimation from motion (SfM) is a classical and well-studied problem in computer and robot vision, and many solutions have been proposed to treat it as a recursive filtering/estimation task. However, the issue of actively optimizing the transient response of the SfM estimation error has not received comparable attention. In this paper, we provide an experimental validation of a recently proposed nonlinear active estimation strategy via two concrete SfM applications: 3D structure estimation for a spherical and a cylindrical target. The experimental results fully support the theoretical analysis and clearly show the benefits of the proposed active strategy. Indeed, by suitably acting on the camera motion and estimation gains, it is possible to assign the error transient response and make it match that of a reference linear second-order system with desired poles.
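The reference transient such a strategy targets can be illustrated directly; a sketch with illustrative real poles (not the gains used in the experiments):

```python
import numpy as np

# Error transient of a linear second-order system with assigned real poles
# s1, s2: e(t) = c1*exp(s1*t) + c2*exp(s2*t), with c1, c2 fixed by the
# initial conditions e(0) and e_dot(0).
s1, s2 = -2.0, -5.0          # illustrative pole placement
e0, de0 = 1.0, 0.0            # initial error and error rate
c2 = (de0 - s1 * e0) / (s2 - s1)
c1 = e0 - c2

t = np.linspace(0., 3., 301)
e = c1 * np.exp(s1 * t) + c2 * np.exp(s2 * t)   # decays monotonically here
```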

10 citations


01 May 2014
TL;DR: This presentation will focus on how to interface the Matlab/Simulink environment with V-REP using the ROS communication libraries (the publisher/subscriber paradigm) for fast prototyping of robot control algorithms.
Abstract: This presentation will focus on how to interface the Matlab/Simulink environment with V-REP using the ROS communication libraries (the publisher/subscriber paradigm) for fast prototyping of robot control algorithms. We will first show how to embed ROS nodes in Simulink by including custom C S-functions representing the ROS topics to be subscribed to or published. This makes it possible for Simulink to exchange data with V-REP in real time, obtaining the robot data and computing the needed control actions. Then, we will demonstrate our architecture in two simulated scenarios: (i) visual control of a quadrotor UAV and (ii) visual control of an industrial manipulator. The first scenario involves a quadrotor UAV equipped with an IMU and a down-looking camera meant to control its pose w.r.t. a ground target by means of a visual servoing law. The second scenario considers the same situation for a fixed manipulator with an eye-in-hand camera performing a classical visual servoing task.
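The architecture itself relies on ROS C S-functions inside Simulink; as a language-agnostic illustration of the publisher/subscriber paradigm alone, here is a minimal ROS-free sketch (hypothetical topic name and message):

```python
from collections import defaultdict

class Bus:
    """Toy message bus: topics are named channels, and publishing on a
    topic invokes every callback registered for it (no ROS involved)."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, msg):
        for cb in self._subs[topic]:
            cb(msg)

bus = Bus()
pose_log = []
bus.subscribe("/quadrotor/pose", pose_log.append)               # "Simulink side"
bus.publish("/quadrotor/pose", {"x": 0.0, "y": 0.0, "z": 1.5})  # "V-REP side"
```

In the real setup the subscribe/publish calls live inside S-function callbacks executed at each Simulink step, which is what gives the real-time data exchange described above.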

7 citations


Proceedings ArticleDOI
01 May 2014
TL;DR: This work proposes a novel active strategy in which a monocular camera tries to determine whether a set of observed point features belongs to a common plane and, if so, what the associated plane parameters are.
Abstract: Plane detection and estimation from visual data is a classical problem in robotic vision. In this work we propose a novel active strategy in which a monocular camera tries to determine whether a set of observed point features belongs to a common plane and, if so, what the associated plane parameters are. The active component of the strategy imposes an optimized camera motion (as a function of the observed scene) able to maximize the convergence rate in estimating the scene structure. Based on this strategy, two methods are then proposed to solve the plane estimation task: a classical solution exploiting the homography constraint (and, thus, almost completely based on image correspondences across distant frames), and an alternative method fully taking advantage of the scene structure estimated incrementally during the camera motion. The two methods are extensively compared in several case studies, discussing the respective pros and cons.
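As background for the coplanarity question above, a plain least-squares plane fit (not the paper's active or homography-based methods) already shows how coplanarity can be tested once 3D points are available:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points: returns a unit normal n and
    offset d with n . p ~= d, plus a planarity residual (the smallest
    singular value of the centered data; ~0 when truly coplanar)."""
    P = np.asarray(points, float)
    c = P.mean(axis=0)
    _, sv, Vt = np.linalg.svd(P - c)
    n = Vt[-1]                    # direction of least spread = plane normal
    return n, n @ c, sv[-1]

# Points lying on the plane z = 2 (toy data, not the paper's scheme):
pts = [[0, 0, 2], [1, 0, 2], [0, 1, 2], [1, 1, 2], [0.3, 0.7, 2]]
n, d, res = fit_plane(pts)        # normal ~ (0, 0, ±1), offset ~ ±2, res ~ 0
```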

7 citations


Proceedings ArticleDOI
06 Nov 2014
TL;DR: This paper proposes a new correction strategy which tries to directly correct the relative pose between the camera and the target instead of only adjusting the error on the image plane.
Abstract: Predicting the behavior of visual features on the image plane over a future time horizon is important in many control problems, for example when dealing with occlusions (or other constraints such as joint limits) in a classical visual servoing loop, or in the more advanced model predictive control schemes recently proposed in the literature. Several approaches have been proposed for performing the initial correction step and then propagating the visual features by exploiting the measurements currently provided by the camera. However, the predictions proposed so far are inaccurate in situations where the depths of the tracked points are not correctly estimated. In this paper we therefore propose a new correction strategy which tries to directly correct the relative pose between the camera and the target instead of only adjusting the error on the image plane. This correction is then analysed and compared by evaluating the corresponding improvements in the feature prediction phase.
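The depth sensitivity that motivates the new correction can be seen in a bare pinhole sketch (pure camera translation, hypothetical numbers): back-projecting a feature with a wrong depth yields a wrong predicted position:

```python
import numpy as np

def predict_feature(m, Z, t):
    """Predict normalized image coordinates m = (x, y) after a pure camera
    translation t, given the point's estimated depth Z (pinhole model)."""
    P = Z * np.array([m[0], m[1], 1.0])   # back-project with the depth estimate
    P_new = P - np.asarray(t, float)       # point in the translated camera frame
    return P_new[:2] / P_new[2]            # re-project

m = (0.1, 0.0)
t = (0.1, 0.0, 0.0)
good = predict_feature(m, 2.0, t)   # true depth
bad = predict_feature(m, 4.0, t)    # wrong depth -> different prediction
```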

6 citations