Author
P.K. Khosla
Bio: P.K. Khosla is an academic researcher whose contributions span the topics of adaptive control and visual servoing. The author has an h-index of 1, having co-authored 1 publication receiving 55 citations.
Papers
07 Dec 1999
TL;DR: In this article, an image-based visual approach for the position control of a nonholonomic mobile robot is presented, where the robot is endowed with a fixed camera and visual feedback is used to control the robot pose with respect to a rigid object of interest.
Abstract: In this paper, a novel image-based visual approach for the position control of a nonholonomic mobile robot is presented. The mobile robot is endowed with a fixed camera, and visual feedback is used to control the robot pose with respect to a rigid object of interest. After introducing a three-dimensional state-space representation of the camera-object visual interaction model, fully defined in the image plane, a closed-loop stabilizing control law is designed based on Lyapunov's direct method. The image-based control scheme, which uses a discontinuous change of coordinates, ensures global asymptotic stability of the closed-loop visual system. Moreover, in the case of unknown height of the object, global stability is formally proved using an adaptive control law. Experimental results obtained with a tank model validate the framework, both in terms of system convergence and control robustness.
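The paper's control law is formulated in the image plane, but its flavor can be illustrated with the classic Lyapunov-based polar-coordinate stabilizer for a unicycle. This is a generic textbook sketch, not the paper's law: the polar state, the gains k1-k3, and the Cartesian (rather than image-space) feedback are all illustrative assumptions.

```python
import math

def polar_control(e, alpha, theta, k1=1.0, k2=3.0, k3=1.0):
    """Lyapunov-based stabilizing law for a unicycle in polar coordinates.

    e: distance to the goal, alpha: bearing error, theta: goal-frame angle.
    Returns forward velocity v and turn rate omega (illustrative gains).
    """
    v = k1 * e * math.cos(alpha)
    if abs(alpha) > 1e-9:
        omega = k2 * alpha + k1 * math.sin(alpha) * math.cos(alpha) * (alpha + k3 * theta) / alpha
    else:  # continuous limit as alpha -> 0
        omega = k2 * alpha + k1 * (alpha + k3 * theta)
    return v, omega

def simulate(e, alpha, theta, dt=0.01, steps=3000):
    """Integrate the polar-coordinate unicycle kinematics under the law."""
    for _ in range(steps):
        v, w = polar_control(e, alpha, theta)
        s = math.sin(alpha) / max(e, 1e-9)  # v*s stays bounded since v ~ e
        e += -v * math.cos(alpha) * dt
        alpha += (-w + v * s) * dt
        theta += v * s * dt
    return e, alpha, theta
```

Starting from a pose two meters from the goal with a heading error, the distance and bearing errors both decay to zero, which is the kind of global asymptotic convergence the abstract refers to.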
55 citations
Cited by
TL;DR: Simulation and experimental results show the effectiveness of the proposed control scheme, which exploits the epipolar geometry defined by the current and desired camera views and does not need any knowledge of the 3-D scene geometry.
Abstract: We present an image-based visual servoing strategy for driving a nonholonomic mobile robot equipped with a pinhole camera toward a desired configuration. The proposed approach, which exploits the epipolar geometry defined by the current and desired camera views, does not need any knowledge of the 3-D scene geometry. The control scheme is divided into two steps. In the first, using an approximate input-output linearizing feedback, the epipoles are zeroed so as to align the robot with the goal. Feature points are then used in the second, translational step to reach the desired configuration. Asymptotic convergence to the desired configuration is proven in both the calibrated and partially calibrated cases. Simulation and experimental results show the effectiveness of the proposed control scheme.
221 citations
TL;DR: This work proposes a visual servoing approach where depth is observed and made available for servoing by interpreting depth as an unmeasurable state with known dynamics, and building a non-linear observer that asymptotically recovers the actual value of Z for the selected feature.
Abstract: In the classical image-based visual servoing framework, error signals are directly computed from image feature parameters, allowing, in principle, control schemes to be obtained that need neither a complete three-dimensional (3D) model of the scene nor a perfect camera calibration. However, when the computation of control signals involves the interaction matrix, the current value of some 3D parameters is required for each considered feature, and typically a rough approximation of this value is used. With reference to the case of a point feature, for which the relevant 3D parameter is the depth Z, we propose a visual servoing approach where Z is observed and made available for servoing. This is achieved by interpreting depth as an unmeasurable state with known dynamics, and by building a non-linear observer that asymptotically recovers the actual value of Z for the selected feature. A byproduct of our analysis is the rigorous characterization of camera motions that actually allow such observation. Moreover, in the case of a partially uncalibrated camera, it is possible to exploit complementary camera motions in order to preliminarily estimate the focal length without knowing Z. Simulations and experimental results are presented for a mobile robot with an on-board camera in order to illustrate the benefits of integrating the depth observation within classical visual servoing schemes.
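The observer idea can be sketched in its simplest setting: a fixed point viewed under pure lateral camera translation, with inverse depth ζ = 1/Z as the unmeasurable state and a gradient correction driven by the image-velocity prediction error. This is a toy analogue, assuming pure translation and illustrative gains; it is not the paper's observer.

```python
def estimate_depth(Z_true=2.0, zeta_hat=1.0, vx=0.2, gamma=50.0,
                   dt=0.01, steps=500):
    """Gradient observer for the inverse depth zeta = 1/Z of a point feature.

    Under pure lateral translation vx (no rotation, vz = 0), the normalized
    image abscissa obeys xdot = -vx * zeta and the depth Z stays constant.
    The estimate is corrected by the image-velocity prediction error.
    """
    for _ in range(steps):
        phi = -vx                      # regressor: xdot = phi * zeta
        xdot_meas = phi / Z_true       # "measured" image-plane velocity
        zeta_hat += gamma * (xdot_meas - phi * zeta_hat) * phi * dt
    return 1.0 / zeta_hat              # recovered depth estimate
```

Note that if the camera translates along the projection ray of the feature, the regressor vanishes and the error dynamics stall; this is the simplest instance of the observability characterization of camera motions mentioned in the abstract.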
177 citations
TL;DR: An image-based "eye-in-hand" visual servo-control design is proposed for underactuated rigid-body dynamics that exploits the geometry of the task considered and passivity-like properties of rigid- body dynamics to derive a control Lyapunov function using backstepping techniques.
Abstract: An image-based "eye-in-hand" visual servo-control design is proposed for underactuated rigid-body dynamics. The dynamic model considered is motivated by recent work on vertical takeoff and landing aerial robotic vehicles. The task considered is that of tracking parallel linear visual features. The proposed design exploits the geometry of the task considered and passivity-like properties of rigid-body dynamics to derive a control Lyapunov function using backstepping techniques.
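Backstepping itself can be illustrated on the simplest cascade, a double integrator: pick a virtual control for the first state, then augment the Lyapunov function to handle the error of the second state. This generic textbook sketch (gains k1, k2 are illustrative) shows only the technique, not the paper's underactuated rigid-body design.

```python
def backstepping_control(x1, x2, k1=1.0, k2=1.0):
    """Backstepping for the cascade x1' = x2, x2' = u.

    Virtual control alpha = -k1*x1 stabilizes x1; z = x2 - alpha is the
    backstepping error. With V = (x1**2 + z**2)/2, the choice
    u = -x1 - k2*z + alpha' gives Vdot = -k1*x1**2 - k2*z**2 <= 0.
    """
    z = x2 + k1 * x1
    return -x1 - k2 * z - k1 * x2      # alpha' = -k1*x2

def simulate(x1=1.0, x2=0.0, dt=0.01, steps=2000):
    """Euler-integrate the closed-loop double integrator."""
    for _ in range(steps):
        u = backstepping_control(x1, x2)
        x1, x2 = x1 + x2 * dt, x2 + u * dt
    return x1, x2
```

The same recursion, applied with the task geometry and the passivity-like structure of rigid-body dynamics, is what yields the control Lyapunov function described in the abstract.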
114 citations
TL;DR: A depth-independent image Jacobian matrix framework for wheeled mobile robots is developed so that the unknown parameters in the closed-loop system appear linearly; adaptive laws then estimate these parameters online, while the depth of the feature point is allowed to be time varying.
Abstract: This paper studies the uncalibrated image-based trajectory tracking control problem of wheeled mobile robots. The motion of the robot is observed by an uncalibrated fixed camera on the ceiling. Unlike traditional vision-based control strategies for wheeled mobile robots in the fixed-camera configuration, the camera image plane is not required to be parallel to the motion plane of the robot, and the camera can be placed at a general position. To make the robot efficiently track its desired trajectory, specified as the desired image trajectory of a feature point on the forward axis of the robot, a new adaptive image-based trajectory tracking control approach is proposed that requires no exact knowledge of the camera intrinsic and extrinsic parameters or of the position of the feature point. To eliminate the nonlinear dependence on the unknown parameters in the closed-loop system, a depth-independent image Jacobian matrix framework for wheeled mobile robots is developed so that the unknown parameters can be linearly parameterized. In this way, adaptive laws can be designed to estimate the unknown parameters online, and the depth of the feature point is allowed to be time varying. Lyapunov stability analysis shows asymptotic convergence of the image position and velocity tracking errors of the wheeled mobile robot. Simulation results on a two-wheeled mobile robot illustrate the performance of the proposed approach, and experiments on a real wheeled mobile robot further validate it.
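The linear-parameterization idea behind the adaptive laws can be sketched on a scalar system with one unknown parameter: when the unknown enters linearly through a known regressor, a gradient update driven by the tracking error estimates it online. The plant, regressor, and gains below are hypothetical, a toy analogue rather than the paper's depth-independent Jacobian framework.

```python
import math

def adaptive_tracking(theta=2.0, theta_hat=0.0, k=2.0, gamma=2.0,
                      dt=0.01, steps=5000):
    """Plant y' = theta*phi(t) + u with unknown theta; track y_d = 0.

    Control u = -k*e - theta_hat*phi cancels the estimated disturbance;
    the gradient law theta_hat' = gamma*e*phi comes from the Lyapunov
    function V = e**2/2 + (theta - theta_hat)**2/(2*gamma), whose
    derivative along the closed loop is Vdot = -k*e**2.
    """
    y, t = 1.0, 0.0
    for _ in range(steps):
        phi = math.sin(t)              # persistently exciting regressor
        e = y                          # tracking error (y_d = 0)
        u = -k * e - theta_hat * phi
        y += (theta * phi + u) * dt
        theta_hat += gamma * e * phi * dt
        t += dt
    return e, theta_hat
```

Because the regressor is persistently exciting, the parameter estimate converges to the true value as well as driving the tracking error to zero; in the paper the same structure is applied with a vector of camera and robot parameters in place of the scalar.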
90 citations
TL;DR: Docking success under certain conditions is proved mathematically and simulation studies show the control law to be robust to camera intrinsic parameter errors.
Abstract: We present a new control law for the problem of docking a wheeled robot to a target at a certain location with a desired heading. Recent research into insect navigation has inspired a solution that uses only one video camera. The control law is of the "behavioral" type in that all control actions are based on immediate visual information. Docking success under certain conditions is proved mathematically, and simulation studies show the control law to be robust to camera intrinsic parameter errors. Experiments were performed for verification of the control law.
71 citations