Visual Servo Control
01 Jan 2012
About: The article was published on 2012-01-01 and is open access. It has received 406 citations to date. The article focuses on the topic of servo control.
Citations
TL;DR: This review aims to summarize the current state of the art from the heterogeneous range of fields that study the different aspects of these problems specifically in dual arm manipulation.
Abstract: Recent advances in both anthropomorphic robots and bimanual industrial manipulators have led to an increased interest in the specific problems pertaining to dual arm manipulation. For the future, we foresee robots performing human-like tasks in both domestic and industrial settings. It is therefore natural to study the specifics of dual arm manipulation in humans and methods for using the resulting knowledge in robot control. The related scientific problems range from low-level control to high-level task planning and execution. This review aims to summarize the current state of the art from the heterogeneous range of fields that study the different aspects of these problems specifically in dual arm manipulation.
435 citations
01 Jun 2018
TL;DR: In this article, a deep recurrent controller is trained to automatically determine which actions move the end-effector of a robotic arm to a desired object by using its memory of past movements, correcting mistakes and gradually moving closer to the target.
Abstract: Humans are remarkably proficient at controlling their limbs and tools from a wide range of viewpoints. In robotics, this ability is referred to as visual servoing: moving a tool or end-point to a desired location using primarily visual feedback. In this paper, we propose learning viewpoint invariant visual servoing skills in a robot manipulation task. We train a deep recurrent controller that can automatically determine which actions move the end-effector of a robotic arm to a desired object. This problem is fundamentally ambiguous: under severe variation in viewpoint, it may be impossible to determine the actions in a single feedforward operation. Instead, our visual servoing approach uses its memory of past movements to understand how the actions affect the robot motion from the current viewpoint, correcting mistakes and gradually moving closer to the target. This ability is in stark contrast to previous visual servoing methods, which assume known dynamics or require a calibration phase. We learn our recurrent controller using simulated data, synthetic demonstrations and reinforcement learning. We then describe how the resulting model can be transferred to a real-world robot by disentangling perception from control and only adapting the visual layers. The adapted model can servo to previously unseen objects from novel viewpoints on a real-world Kuka IIWA robotic arm. For supplementary videos, see: https://www.youtube.com/watch?v=oLgM2Bnb7fo
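The closed-loop idea in this abstract, using the observed effect of past actions to correct the mapping from actions to image motion, has a classical analogue in uncalibrated visual servoing with an online Jacobian estimate. The sketch below is that classical Broyden-update scheme, not the paper's deep recurrent controller; `get_features` and `apply_action` are hypothetical interfaces to the robot, and the gains are illustrative.

```python
import numpy as np

def broyden_servo(get_features, apply_action, target, J0, steps=50, gain=0.2):
    """Servo image features toward `target` without a calibrated model:
    after each action, the estimated feature Jacobian J is corrected by a
    Broyden rank-1 update using the motion that action actually produced."""
    J = J0.copy()                 # initial guess of d(features)/d(action)
    s = get_features()
    for _ in range(steps):
        e = target - s
        if np.linalg.norm(e) < 1e-3:
            break
        da = gain * np.linalg.pinv(J) @ e     # act on the current estimate
        apply_action(da)
        s_new = get_features()
        ds = s_new - s
        # rank-1 correction so that J maps da to the observed ds
        J += np.outer(ds - J @ da, da) / (da @ da + 1e-12)
        s = s_new
    return s
```

On a simulated linear plant this converges even when the initial Jacobian guess is only the identity, which mirrors the abstract's point that the mapping need not be known in advance.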
137 citations
TL;DR: This paper addresses the landing of a vertical take-off and landing vehicle, exemplified by a quadrotor, on a moving platform using image-based visual servo control; observable features on a flat, textured target plane are exploited to derive a suitable control law.
Abstract: This paper addresses the landing problem of a vertical take-off and landing vehicle, exemplified by a quadrotor, on a moving platform using image-based visual servo control. Observable features on a flat and textured target plane are exploited to derive a suitable control law. The target plane may be moving with bounded linear acceleration in any direction. For control purposes, the image of the centroid for a collection of landmarks is used as position measurement, whereas the translational optical flow is used as velocity measurement. The proposed control law guarantees convergence to the desired landing spot on the target plane, without estimating any parameter related to the unknown height, which is also guaranteed to remain strictly positive. Moreover, convergence is guaranteed even in the presence of bounded and possibly time-varying disturbances, resulting, for example, from the motion of the target plane, measurement errors, or wind-induced force disturbances. To improve performance, an estimator for unknown constant force disturbances is also included in the control law. Simulation and experimental results are provided to illustrate and assess the performance of the proposed controller.
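As a rough illustration of how the two image measurements named in this abstract can be combined, the sketch below is a PD-style velocity command using the landmark centroid as position feedback and the translational optical flow as velocity damping. It is a simplified stand-in, not the paper's controller (which also handles the unknown height, target motion, and disturbance estimation); the gains `kp` and `kd` are made-up.

```python
import numpy as np

def landing_velocity_command(centroid, centroid_des, flow, kp=0.8, kd=0.4):
    """Simplified image-based landing law: the image centroid of the
    landmarks serves as the position error and the translational optical
    flow as a velocity measurement, combined PD-style into a commanded
    vehicle velocity."""
    e = centroid - centroid_des       # position error in image coordinates
    return -kp * e - kd * flow        # proportional on error, damping on flow
```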
133 citations
TL;DR: This work presents a set of methods and a workflow to enable autonomous MRI-guided ultrasound acquisitions and uses a structured-light 3D scanner for patient-to-robot and image-to-patient calibration, which in turn is used to plan 3D ultrasound trajectories.
Abstract: Robotic ultrasound has the potential to assist and guide physicians during interventions. In this work, we present a set of methods and a workflow to enable autonomous MRI-guided ultrasound acquisitions. Our approach uses a structured-light 3D scanner for patient-to-robot and image-to-patient calibration, which in turn is used to plan 3D ultrasound trajectories. These MRI-based trajectories are followed autonomously by the robot and are further refined online using automatic MRI/US registration. Despite the low spatial resolution of structured light scanners, the initial planned acquisition path can be followed with an accuracy of 2.46 ± 0.96 mm. This leads to a good initialization of the MRI/US registration: the 3D-scan-based alignment for planning and acquisition shows an accuracy (distance between planned ultrasound and MRI) of 4.47 mm, and 0.97 mm after an online-update of the calibration based on a closed loop registration.
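The patient-to-robot and image-to-patient calibration steps mentioned here come down to estimating a rigid transform between corresponding 3-D point sets. The sketch below is a generic least-squares solution (Kabsch/Umeyama), offered as an illustration of this kind of point-based calibration rather than the paper's actual pipeline, which additionally refines the alignment online via MRI/US registration.

```python
import numpy as np

def rigid_registration(P, Q):
    """Least-squares rigid transform (R, t) with Q ~ R @ P + t, for
    corresponding Nx3 point sets P and Q (Kabsch/Umeyama method)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```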
119 citations
01 May 2014
TL;DR: This paper develops a dynamical model directly in the image space, shows that this is a differentially-flat system with the image features serving as flat outputs, and develops a geometric visual controller that considers the second order dynamics (in contrast to most visual servoing controllers that assume first order dynamics).
Abstract: This paper addresses the dynamics, control, planning, and visual servoing for micro aerial vehicles to perform high-speed aerial grasping tasks. We draw inspiration from agile, fast-moving birds, such as raptors, that detect, locate, and execute high-speed swoop maneuvers to capture prey. Since these grasping maneuvers are predominantly in the sagittal plane, we consider the planar system and present mathematical models and algorithms for motion planning and control, required to incorporate similar capabilities in quadrotors equipped with a monocular camera. In particular, we develop a dynamical model directly in the image space, show that this is a differentially-flat system with the image features serving as flat outputs, outline a method for generating trajectories directly in the image feature space, develop a geometric visual controller that considers the second order dynamics (in contrast to most visual servoing controllers that assume first order dynamics), and present validation of our methods through both simulations and experiments.
118 citations
Cites background or methods from "Visual Servo Control"
...Therefore, we cannot use the standard image Jacobian as in [12], which assumes the target points are stationary in the inertial frame....
...There are many excellent tutorials on visual servoing [11], [6], [12], [13]....
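The "standard image Jacobian" these excerpts refer to is the interaction matrix of a point feature, as given in the visual servoing tutorials. For a point (x, y) in normalized image coordinates at depth Z it can be written down directly; as the excerpt notes, it assumes the 3-D point is stationary in the inertial frame.

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Interaction matrix L of a point feature (x, y) in normalized image
    coordinates at depth Z: s_dot = L @ v, where v is the camera spatial
    velocity (vx, vy, vz, wx, wy, wz)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])
```

A quick sanity check: a point at the optical center does not move in the image under pure forward translation, so the vz column vanishes at (0, 0).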
References
TL;DR: New results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form; these provide the basis for an automatic system that can solve the Location Determination Problem under difficult viewing conditions.
Abstract: A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting/smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing conditions.
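The RANSAC loop described in this abstract, minimal sample, hypothesized model, consensus set, is easy to state concretely. Below is a minimal sketch for line fitting in the presence of gross outliers; the threshold and iteration count are illustrative choices, not values from the paper.

```python
import random
import numpy as np

def ransac_line(points, iters=200, thresh=0.1, seed=0):
    """Fit a line y = a*x + b to (x, y) points containing gross outliers:
    repeatedly fit a minimal 2-point sample and keep the hypothesis with
    the largest consensus set, then refit on that set by least squares."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue                          # degenerate minimal sample
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < thresh]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    xs = np.array([p[0] for p in best_inliers])
    ys = np.array([p[1] for p in best_inliers])
    a, b = np.polyfit(xs, ys, 1)              # refit on the full consensus set
    return a, b, best_inliers
```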
23,396 citations
Book
01 Jan 2000
TL;DR: In this article, the authors provide comprehensive background material and explain how to apply the methods and implement the algorithms directly in a unified framework, covering geometric principles and how to represent objects algebraically so they can be computed and applied.
Abstract: From the Publisher:
A basic problem in computer vision is to understand the structure of a real world scene given several images of it. Recent major developments in the theory and practice of scene reconstruction are described in detail in a unified framework. The book covers the geometric principles and how to represent objects algebraically so they can be computed and applied. The authors provide comprehensive background material and explain how to apply the methods and implement the algorithms directly.
15,558 citations
01 Jan 2001
Abstract: This entry corresponds to the book Multiple View Geometry in Computer Vision; no genuine abstract is indexed for it.
14,282 citations
07 May 2006
TL;DR: A novel scale- and rotation-invariant interest point detector and descriptor, coined SURF (Speeded Up Robust Features), which approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster.
Abstract: In this paper, we present a novel scale- and rotation-invariant interest point detector and descriptor, coined SURF (Speeded Up Robust Features). It approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster.
This is achieved by relying on integral images for image convolutions; by building on the strengths of the leading existing detectors and descriptors (in casu, using a Hessian matrix-based measure for the detector, and a distribution-based descriptor); and by simplifying these methods to the essential. This leads to a combination of novel detection, description, and matching steps. The paper presents experimental results on a standard evaluation set, as well as on imagery obtained in the context of a real-life object recognition application. Both show SURF's strong performance.
13,011 citations
TL;DR: A novel scale- and rotation-invariant detector and descriptor, coined SURF (Speeded-Up Robust Features), which approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster.
Abstract: This article presents a novel scale- and rotation-invariant detector and descriptor, coined SURF (Speeded-Up Robust Features). SURF approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster. This is achieved by relying on integral images for image convolutions; by building on the strengths of the leading existing detectors and descriptors (specifically, using a Hessian matrix-based measure for the detector, and a distribution-based descriptor); and by simplifying these methods to the essential. This leads to a combination of novel detection, description, and matching steps. The paper encompasses a detailed description of the detector and descriptor and then explores the effects of the most important parameters. We conclude the article with SURF's application to two challenging, yet converse goals: camera calibration as a special case of image registration, and object recognition. Our experiments underline SURF's usefulness in a broad range of topics in computer vision.
12,449 citations
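Both SURF abstracts attribute the speed to integral images for image convolutions. The trick is short enough to sketch: one cumulative-sum pass over the image, after which any axis-aligned box sum (the building block of SURF's Hessian approximation) costs four lookups. The one-pixel zero padding is a common implementation convenience, not something prescribed by the papers.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a leading row/column of zeros: after one
    pass, the sum over any axis-aligned box costs four lookups."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) using the padded integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
```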