Proceedings ArticleDOI

Image Based Visual Servoing for Tumbling Objects

TL;DR: This paper proposes a novel controller that is able to minimize the feature error directly in image space by observing that the feature points on the tumbling object follow a circular path around the axis of rotation and their projection creates an elliptical track in the image plane.
Abstract: Objects in space often exhibit a tumbling motion around the major inertial axis. In this paper, we address the image based visual servoing of a robotic system towards an uncooperative tumbling object. In contrast to previous approaches that require explicit reconstruction of the object and an estimation of its velocity, we propose a novel controller that is able to minimize the feature error directly in image space. This is achieved by observing that the feature points on the tumbling object follow a circular path around the axis of rotation and their projection creates an elliptical track in the image plane. Our controller minimizes the error between this elliptical track and the desired features, such that at the desired pose the features lie on the circumference of the ellipse. The effectiveness of our framework is exhibited by implementing the algorithm in simulation as well as on a mobile robot.
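The geometric observation underpinning the controller, that a feature point on a circular 3D path projects to an elliptical track in the image plane, can be checked numerically. The sketch below is not the paper's code; the camera and circle parameters are made up. It projects a tilted 3D circle through a pinhole model and fits a general conic to the resulting pixel track; a negative discriminant b^2 - 4ac confirms the conic is an ellipse.

```python
import numpy as np

f = 800.0                             # focal length in pixels (assumed)
center = np.array([0.0, 0.0, 5.0])    # circle centre in the camera frame
radius = 0.5
# orthonormal basis of the circle's plane, tilted w.r.t. the image plane
u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, np.cos(0.6), np.sin(0.6)])

thetas = np.linspace(0, 2*np.pi, 50, endpoint=False)
P = center + radius*(np.outer(np.cos(thetas), u) + np.outer(np.sin(thetas), v))

# pinhole projection of the circular feature path
x = f * P[:, 0] / P[:, 2]
y = f * P[:, 1] / P[:, 2]

# fit a general conic a x^2 + b xy + c y^2 + d x + e y + g = 0 via SVD
A = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])
_, _, Vt = np.linalg.svd(A)
a, b, c, d, e, g = Vt[-1]

print("conic discriminant b^2 - 4ac =", b*b - 4*a*c)  # negative for an ellipse
```

Since the perspective projection of a planar conic is again a conic, the fit is exact up to numerical noise; the discriminant test then distinguishes the ellipse from the other conic types.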
Citations
Journal ArticleDOI
01 Oct 2021-Robotica
TL;DR: This paper presents an extensive review of research directions and topics across different approaches, such as sensing, learning, and gripping, that have been implemented within the past five years.
Abstract: Interaction between a robot and its environment requires perception about the environment, which helps the robot make a clear decision about the object’s type and location. After that, the end effector will be brought to the object’s location for grasping. There are many research studies on the reaching and grasping of objects using different techniques and mechanisms for increasing accuracy and robustness during grasping and reaching tasks. Thus, this paper presents an extensive review of research directions and topics of different approaches, such as sensing, learning, and gripping, which have been implemented within the past five years.

37 citations

Proceedings ArticleDOI
20 Apr 2020
TL;DR: Results show that this architecture can robustly control complex RPO maneuvers, including RPO with tumbling targets; two novel methods are proposed to generate the goal images needed to implement IBVS with unknown targets.
Abstract: A visual navigation and control architecture for the final-approach phase of autonomous spacecraft rendezvous and proximity operations (RPO) with unknown targets is presented. Two spacecraft are considered: a maneuvering chaser, and a possibly tumbling non-cooperative target. The target spacecraft is assumed to be unknown, therefore none of its physical characteristics (geometry, moments of inertia, etc.) are available to the chaser a priori. An image-based visual servo (IBVS) controller is implemented to compute maneuvers for the chaser to achieve a desired goal pose with respect to the target. The controller seeks to minimize the pixel location error of matched image feature points on the target. Importantly, because control error is calculated in the image space, a full relative pose estimate between the chaser and target is not required. In order to assess the controller under realistic conditions, the stochastic nature of image feature tracking is modeled via four types of errors: image feature location error, feature non-identification (occlusion), feature misidentification (outliers), and feature depth estimation error. A bank of Kalman filters is used to track feature points, and residual monitoring is used to reject feature measurement outliers. Two novel methods are proposed to generate the goal images needed to implement IBVS with unknown targets. The architecture is implemented in a 6DOF simulation using nonlinear relative spacecraft dynamics and camera modeling. Results show that this architecture can robustly control complex RPO maneuvers, including RPO with tumbling targets.
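The feature-tracking scheme described above, a bank of Kalman filters with residual monitoring for outlier rejection, can be sketched per feature as follows. This is an illustrative constant-velocity filter with a chi-square innovation gate; the state model, noise levels, and threshold are assumptions for the demo, not the paper's actual design.

```python
import numpy as np

class FeatureKF:
    """One filter per tracked image feature; the bank holds one per feature."""

    def __init__(self, z0, dt=0.1, q=1e-3, r=1.0):
        # state: [x, y, vx, vy] in pixels; measurement: pixel location [x, y]
        self.x = np.array([z0[0], z0[1], 0.0, 0.0])
        self.P = np.eye(4) * 10.0
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4)); self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * q
        self.R = np.eye(2) * r
        self.gate = 9.21          # chi-square 99% threshold, 2 dof (assumed)

    def step(self, z):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # residual monitoring: Mahalanobis gate on the innovation
        S = self.H @ self.P @ self.H.T + self.R
        nu = z - self.H @ self.x
        if nu @ np.linalg.solve(S, nu) > self.gate:
            return False          # outlier rejected: coast on the prediction
        # update
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ nu
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return True
```

A feature drifting smoothly across the image passes the gate every frame, while a gross mismatch (e.g. a misidentified feature jumping across the image) is flagged and the filter coasts on its prediction instead.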

5 citations


Cites background from "Image Based Visual Servoing for Tum..."

  • ...Of these, [21] and [23] do not present results considering spacecraft relative motion...


Proceedings ArticleDOI
02 Jul 2020
TL;DR: The robot aims at detecting hazardous gases and mapping the global positioning system locations of the detected gases onto the navigation terrain in real time; a neural network-based classifier was implemented to recognize the gases with an average accuracy of 98%.
Abstract: This paper reports the development of a robot that can sense the presence of hazardous gases in the environment. The robot aims at detecting hazardous gases and mapping the global positioning system (GPS) locations of the detected gases onto the navigation terrain in real time. This information was transmitted to a hand-held device at a remote location for exploration of gas types, which holds promise for disaster management. The robot was equipped with a module of gas sensors, a human detection sensor, a GPS module, and obstacle detection sensors in a coherent system. While navigating with collision avoidance of obstacles, the robot can transmit information about the presence of hazardous gases and human beings in the area of navigation. It was tested on uneven terrain to recognize the presence of hazardous gases such as carbon dioxide, liquefied petroleum gas, and vaporized alcohol vis-à-vis ambient gases in real time. A neural network-based classifier was implemented to recognize the gases with an average accuracy of 98%.

4 citations


Cites background from "Image Based Visual Servoing for Tum..."

  • ...Advances in the navigation of mobile robots have been reported based on sensor and image processing techniques [1] [2]....


Proceedings ArticleDOI
23 Oct 2022
TL;DR: In this paper, a visual servoing approach is proposed for ground target tracking that takes into account the satellite motion induced by its orbit, Earth's rotational velocity, the potential motion of the target itself, and the rotational velocity and acceleration constraints of the system.
Abstract: Recent Earth observation satellites are equipped with new instruments that allow image feedback in real time. Problems such as tracking a ground target, moving or not, can now be addressed by precisely controlling the satellite attitude. In this paper, we propose to consider this problem using a visual servoing approach. While focusing on the target, the control scheme also has to take into account the satellite motion induced by its orbit, Earth's rotational velocity, the potential motion of the target itself, and the rotational velocity and acceleration constraints of the system. We show the efficiency of our system using both simulation (with real Earth imagery) and experiments on a robot that replicates actual high-resolution satellite constraints.

1 citation

Journal ArticleDOI
TL;DR: In this article, a trajectory planning method for space manipulators is proposed that can generate trajectories in Cartesian space with continuous joint jerk; a bridging matrix is implemented to ensure that the desired pose varies continuously and smoothly.
References
Journal ArticleDOI
01 Oct 1996
TL;DR: This article provides a tutorial introduction to visual servo control of robotic manipulators by reviewing the prerequisite topics from robotics and computer vision, including a brief review of coordinate transformations, velocity representation, and a description of the geometric aspects of the image formation process.
Abstract: This article provides a tutorial introduction to visual servo control of robotic manipulators. Since the topic spans many disciplines, our goal is limited to providing a basic conceptual framework. We begin by reviewing the prerequisite topics from robotics and computer vision, including a brief review of coordinate transformations, velocity representation, and a description of the geometric aspects of the image formation process. We then present a taxonomy of visual servo control systems. The two major classes of systems, position-based and image-based systems, are then discussed in detail. Since any visual servo system must be capable of tracking image features in a sequence of images, we also include an overview of feature-based and correlation-based methods for tracking. We conclude the tutorial with a number of observations on the current directions of the research field of visual servo control.

3,619 citations


"Image Based Visual Servoing for Tum..." refers background or methods in this paper

  • ...Classical IBVS In classical image based visual servoing methods, visual information is represented by a set of features (s) extracted from the image measurements [7]....


  • ...Although approaches for the capture of fixed object [7] are well studied, servoing to a dynamic object by a robotic arm system are seldom reported in the literature....


Journal ArticleDOI
30 Nov 2006
TL;DR: This paper is the first of a two-part series on visual servo control, the use of computer vision data in the servo loop to control the motion of a robot; it describes the basic techniques that are by now well established in the field.
Abstract: This paper is the first of a two-part series on the topic of visual servo control: the use of computer vision data in the servo loop to control the motion of a robot. In this paper, we describe the basic techniques that are by now well established in the field. We first give a general overview of the formulation of the visual servo control problem. We then describe the two archetypal visual servo control schemes: image-based and position-based visual servo control. Finally, we discuss performance and stability issues that pertain to these two schemes, motivating the second article in the series, in which we consider advanced techniques.

2,026 citations


"Image Based Visual Servoing for Tum..." refers background or methods in this paper

  • ...Once the features are obtained, next task is to compute the interaction matrix, which is a mapping between the change of visual features to camera velocity so that a velocity controller can be designed [15]....


  • ...A comparison between the error plots from conventional visual servoing [15] and proposed visual servoing is shown in Fig....


  • ...This is achieved by defining a task function that minimizes the error between the current s and the desired configuration of features s∗ [15], i....


  • ...As such, several variants of IBVS methods are available based on the image features used [15], [16], [17] and tracking methods [18], [19], [20], [21]....


  • ...The interaction matrices for a point located at (x, y) are the same as the conventional formulation [15] and are given by:...

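The classical IBVS law that these excerpts reference ([15]) is compact enough to sketch directly: for each point feature (x, y) with estimated depth Z (in normalized image coordinates), stack the standard 2x6 interaction matrices and command the camera twist v = -λ L⁺ (s - s*). The feature values and depths below are invented for illustration.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Classical 2x6 interaction matrix for a point feature at (x, y), depth Z."""
    return np.array([
        [-1/Z,    0, x/Z,      x*y, -(1 + x**2),  y],
        [   0, -1/Z, y/Z, 1 + y**2,        -x*y, -x],
    ])

def ibvs_velocity(s, s_star, Z, lam=0.5):
    """Camera twist (vx, vy, vz, wx, wy, wz) driving features s toward s*."""
    L = np.vstack([interaction_matrix(x, y, z)
                   for (x, y), z in zip(s.reshape(-1, 2), Z)])
    e = s - s_star                       # feature error in image space
    return -lam * np.linalg.pinv(L) @ e  # pseudo-inverse control law

# four point features (x1, y1, ..., x4, y4), all values made up for the demo
s      = np.array([0.10, 0.05, -0.12, 0.08, 0.02, -0.15, -0.05, -0.02])
s_star = np.array([0.12, 0.04, -0.10, 0.10, 0.04, -0.13, -0.03, 0.00])
Z      = np.array([2.0, 2.1, 1.9, 2.0])  # assumed depth estimates

v = ibvs_velocity(s, s_star, Z)
print(v)
```

With four points the stacked matrix is 8x6, so the pseudo-inverse gives the least-squares twist; when s equals s*, the commanded velocity is exactly zero, which is the equilibrium the task function defines.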

Book ChapterDOI
01 Jun 1992
TL;DR: Vision-based control in robotics is described, in which a vision system is treated as a specific sensor dedicated to a task and included in a control servo loop; once the modeling stage is performed, questions of stability and robustness arise.
Abstract: Vision-based control in robotics, based on considering a vision system as a specific sensor dedicated to a task and included in a control servo loop, is described. Once the necessary modeling stage is performed, the framework becomes one of automatic control, and stability and robustness questions arise. State-of-the-art visual servoing is reviewed, and the basic concepts for modeling the concerned interactions are given. The interaction screw is thus defined in a general way, and the application to images follows. Starting from the concept of the task function, the general framework of the control is described, and stability results are recalled. The concept of the hybrid task is presented and then applied to visual sensors. Simulation and experimental results are presented, and guidelines for future work are drawn in the conclusion.

1,463 citations

Journal ArticleDOI
TL;DR: This tutorial considers only velocity controllers, which are convenient for most classical robot arms, and geometric features coming from a classical perspective camera.
Abstract: This article is the second of a two-part tutorial on visual servo control. In this tutorial, we have only considered velocity controllers, which are convenient for most classical robot arms. However, the dynamics of the robot must of course be taken into account for high-speed tasks, or when dealing with mobile nonholonomic or underactuated robots. As for the sensor, geometric features coming from a classical perspective camera are considered. Features related to image motion or coming from other vision sensors necessitate revisiting the modeling issues to select adequate visual features. Finally, fusing visual features with data coming from other sensors at the level of the control scheme will allow new research topics to be addressed.

894 citations


"Image Based Visual Servoing for Tum..." refers background in this paper

  • ...To overcome such failure situations, few approaches consider the motion compensation in the control law [29], for dynamic object....


Journal ArticleDOI
TL;DR: The analytical form of the interaction matrix related to any moment that can be computed from segmented images is determined based on Green's theorem, and the general result is applied to classical geometrical primitives.
Abstract: In this paper, we determine the analytical form of the interaction matrix related to any moment that can be computed from segmented images. The derivation method we present is based on Green's theorem. We apply this general result to classical geometrical primitives. We then consider using moments in image-based visual servoing. For that, we select six combinations of moments to control the six degrees of freedom of the system. These features are particularly adequate, if we consider a planar object and the configurations such that the object and camera planes are parallel at the desired position. The experimental results we present show that a correct behavior of the system is obtained if we consider either a simple symmetrical object or a planar object with complex and unknown shape.
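As a minimal illustration of the quantities involved (not the paper's Green's-theorem derivation), the raw image moments m_pq and the centroid features built from them can be computed from a segmented binary image as follows; the test image here is made up.

```python
import numpy as np

def moment(img, p, q):
    """Raw image moment m_pq = sum over segmented pixels of x^p * y^q."""
    ys, xs = np.nonzero(img)   # np.nonzero returns (rows, cols) = (y, x)
    return np.sum((xs.astype(float) ** p) * (ys.astype(float) ** q))

# a synthetic 20x20 segmented blob in a 60x80 binary image
img = np.zeros((60, 80), dtype=np.uint8)
img[20:40, 30:50] = 1

area = moment(img, 0, 0)        # m_00: blob area
xg = moment(img, 1, 0) / area   # centroid x, a classical IBVS feature
yg = moment(img, 0, 1) / area   # centroid y

print(area, xg, yg)
```

Combinations of such moments (area, centroid, and higher-order terms) are the visual features whose interaction matrices the paper derives for controlling the six degrees of freedom.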

413 citations


"Image Based Visual Servoing for Tum..." refers methods in this paper

  • ...As such, several variants of IBVS methods are available based on the image features used [15], [16], [17] and tracking methods [18], [19], [20], [21]....
