
Showing papers by "Paolo Robuffo Giordano published in 2016"


Proceedings ArticleDOI
09 Oct 2016
TL;DR: The proposed control strategy relies on an extension of rigidity theory to directed bearing frameworks in ℝ3×S1, which makes it possible to devise a decentralized bearing controller that, unlike most of the existing literature, requires neither a common reference frame nor reciprocal bearing measurements among the agents.
Abstract: This paper considers the problem of controlling a formation of quadrotor UAVs equipped with onboard cameras able to measure relative bearings w.r.t. neighboring UAVs in their local body frames. The control goal is twofold: (i) steering the agent group towards a formation defined in terms of desired bearings, and (ii) actuating the group motions in the ‘null-space’ of the current bearing formation. The proposed control strategy relies on an extension of rigidity theory to the case of directed bearing frameworks in ℝ3×S1. This extension makes it possible to devise a decentralized bearing controller which, unlike most of the existing literature, requires neither a common reference frame nor reciprocal bearing measurements among the agents. Simulation and experimental results are then presented to illustrate and validate the approach.
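As a rough illustration of bearing-based formation control, the classical bearing-only law moves each agent against the projection of its desired bearings onto the orthogonal complement of the current ones. The sketch below is a simplified, undirected variant in ℝ3 only (no S1 yaw component and no directed measurements, so it is not the paper's controller):

```python
import numpy as np

def proj(g):
    """Orthogonal projector onto the complement of unit vector g."""
    return np.eye(3) - np.outer(g, g)

def bearing(pi, pj):
    """Unit bearing from position pi towards position pj."""
    d = pj - pi
    return d / np.linalg.norm(d)

def bearing_step(P, edges, g_des, gain=1.0, dt=0.05):
    """One Euler step of the bearing-only law
    u_i = -sum_j proj(g_ij) @ g*_ij over undirected edges (i, j)."""
    U = np.zeros_like(P)
    for (i, j), gd in zip(edges, g_des):
        g = bearing(P[i], P[j])
        U[i] -= proj(g) @ gd
        U[j] += proj(g) @ gd  # symmetric contribution for agent j
    return P + dt * gain * U
```

Under bearing rigidity such a law drives the formation to the desired bearings up to a translation and a scale; the paper's directed ℝ3×S1 extension additionally handles yaw and one-way measurements.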

58 citations


Proceedings ArticleDOI
01 Oct 2016
TL;DR: Human/hardware-in-the-loop experiments with simulated slave robots and a real master device demonstrate the feasibility and effectiveness of the shared control framework for remote manipulation of objects using visual information.
Abstract: Cleaning up the past half century of nuclear waste represents the largest environmental remediation project in all of Europe. Nuclear waste must be sorted, segregated, and stored according to its radiation level in order to optimize maintenance costs. The objective of this work is to develop a shared control framework for remote manipulation of objects using visual information. In the presented scenario, the human operator must control a system composed of two robotic arms, one equipped with a gripper and the other with a camera. In order to facilitate the operator's task, a subset of the gripper motions is regulated by an autonomous algorithm exploiting the camera view of the scene. At the same time, the operator controls the remaining null-space motions w.r.t. the primary (autonomous) task by acting on a force-feedback device. A novel force-feedback algorithm is also proposed with the aim of informing the user about possible constraints of the robotic system such as, for instance, joint limits. Human/hardware-in-the-loop experiments with simulated slave robots and a real master device are finally reported to demonstrate the feasibility and effectiveness of the approach.
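The task-priority structure described in the abstract (an autonomous primary task, with the operator's commands confined to its null space) is commonly written with the standard redundancy-resolution formula. The sketch below is a generic illustration of that structure, not the paper's exact controller:

```python
import numpy as np

def shared_control(J, xdot_task, qdot_user):
    """Combine an autonomous primary task with operator input:
        qdot = J^+ xdot_task + (I - J^+ J) qdot_user.
    The projector (I - J^+ J) maps the operator's command into the
    null space of the task Jacobian J, so it cannot disturb the
    primary (camera-based) task."""
    Jp = np.linalg.pinv(J)
    n = J.shape[1]
    N = np.eye(n) - Jp @ J  # null-space projector
    return Jp @ xdot_task + N @ qdot_user
```

By construction J @ (N @ qdot_user) = 0, which is exactly why the operator's null-space motions leave the autonomous task untouched.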

43 citations


Proceedings ArticleDOI
01 Oct 2016
TL;DR: A novel decentralized active perception strategy that maximizes the convergence rate in estimating the (unmeasurable) formation scale in the context of bearing-based formation localization for robots evolving in ℝ3 × S1.
Abstract: In this paper, we propose a novel decentralized active perception strategy that maximizes the convergence rate in estimating the (unmeasurable) formation scale in the context of bearing-based formation localization for robots evolving in ℝ3 × S1. The proposed algorithm does not assume a global reference frame and only requires bearing rigidity of the formation (so that the localization problem admits a unique solution) and at least one pair of robots in mutual visibility. Two different scenarios are considered, in which the active scale estimation problem is treated either as a primary task or as a secondary objective with respect to the constraint of attaining a desired bearing formation. The theoretical results are validated by realistic simulations.
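Why the formation scale is unmeasurable from static bearings alone is easy to verify numerically: scaling every position by the same factor leaves all inter-agent bearings unchanged. This is the ambiguity the active strategy must resolve by exciting informative motions (toy check, not the paper's estimator):

```python
import numpy as np

def bearings(P, edges):
    """Unit inter-agent bearing for each directed edge (i, j)."""
    out = []
    for i, j in edges:
        d = P[j] - P[i]
        out.append(d / np.linalg.norm(d))
    return np.array(out)

# Scaling the whole formation by any s > 0 leaves every bearing
# invariant, so the scale is unobservable from a single snapshot
# of bearing measurements.
```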

29 citations


Journal ArticleDOI
26 Jan 2016
TL;DR: In this paper, the authors presented a method for image-based navigation from an image memory using line segments as landmarks, where the entire navigation process is based on 2D image information without using any 3D information at all.
Abstract: This letter presents a method for image-based navigation from an image memory using line segments as landmarks. The entire navigation process is based on 2-D image information, without using any 3-D information at all. The environment is represented by a set of reference images with overlapping landmarks, which are acquired during a prior learning phase. These reference images define the path to follow during navigation. The switching of reference images exploits the line-segment matching between the current acquired image and nearby reference images. A three-view matching result is used to compute the rotational velocity of the mobile robot during its navigation by visual servoing. Real-time navigation has been validated inside a corridor and inside a room with a Pioneer 3-DX equipped with an onboard camera. The obtained results confirm the viability of our approach and verify that accurate mapping and localization are not necessary for useful indoor navigation, and that line segments are well-suited features in structured indoor environments.
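In qualitative image-based navigation of this kind, the rotational velocity is typically derived from the horizontal offsets of features matched between the current image and a reference image. The rule below is an assumed simplified form for illustration, not the exact three-view law of the paper:

```python
import numpy as np

def rotational_velocity(x_cur, x_ref, lam=0.5):
    """Steer so that matched features' abscissas in the current image
    (x_cur) move toward their abscissas in the next reference image
    (x_ref). lam is a control gain; both arrays are in pixels with
    the image centre at 0, and the sign convention is arbitrary."""
    return lam * float(np.mean(np.asarray(x_ref) - np.asarray(x_cur)))
```

When the current view already matches the reference, the mean offset vanishes and the robot drives straight.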

18 citations


Proceedings ArticleDOI
01 Nov 2016
TL;DR: A complete framework for image-based navigation from an image memory that exploits mutual information and does not need any feature extraction, matching or any 3D information is presented.
Abstract: This paper presents a complete framework for image-based navigation from an image memory that exploits mutual information and does not require any feature extraction, matching, or 3D information. The navigation path is represented by a set of automatically selected key images obtained during a prior learning phase. The shared information (entropy) between the current acquired image and nearby key images is exploited to switch key images during navigation. Based on the key images and the current image, the control law proposed in [1] is used to compute the rotational velocity of a mobile robot during its qualitative visual navigation. Using our approach, real-time navigation has been performed inside a corridor and inside a room with a Pioneer 3-DX equipped with an onboard perspective camera, without the need for accurate mapping and localization.
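The shared-information criterion used to switch key images can be estimated from a joint gray-level histogram of the two images; a minimal sketch of such a mutual-information estimate (a generic plug-in estimator, not the paper's exact implementation):

```python
import numpy as np

def mutual_information(img1, img2, bins=32):
    """Mutual information (in nats) of two equally sized gray-level
    images, estimated from their joint intensity histogram."""
    hist, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins)
    pxy = hist / hist.sum()                 # joint distribution
    px = pxy.sum(axis=1, keepdims=True)     # marginal of img1
    py = pxy.sum(axis=0, keepdims=True)     # marginal of img2
    nz = pxy > 0                            # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

During navigation, the key image sharing the most information with the current view would be kept as the active reference; a drop in mutual information triggers a switch to the next key image.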

8 citations