
Showing papers on "Homography (computer vision) published in 1998"


Proceedings ArticleDOI
13 Oct 1998
TL;DR: Two algorithms are proposed that decouple translation and rotation in image-based visual servoing by using the homography and the epipolar condition between the goal image and the current image, and that generate an optimal robot trajectory reaching the goal directly.
Abstract: Image-based visual servoing maps image changes directly to camera motion and controls the position and pose of a camera-mounted robot so that the camera eventually obtains exactly the same image as a given goal image. Because its strategy is simply to minimize the difference between the goal image and the currently obtained image, the trajectory of the robot cannot be estimated beforehand and sometimes turns out to be highly inefficient. This paper points out that this inefficient motion is caused by interference between the translational and rotational components of the image motion. We then propose two algorithms that decouple these components using the homography and the epipolar condition between the goal image and the current image, and that generate an optimal robot trajectory reaching the goal directly.
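A minimal sketch of the decoupling idea, not the paper's control law: the rotation recovered from the epipolar geometry can be removed from the current image by warping with the rotation-induced homography K R K^-1, leaving an image motion due to translation only. The names (goal_pts, cur_pts, current_image, K, w, h) are illustrative, and OpenCV/NumPy are assumed.

    import cv2
    import numpy as np

    # goal_pts, cur_pts: matched Nx2 point arrays between the goal and current images
    # K: 3x3 camera intrinsics; current_image, (w, h): the current view and its size
    E, _ = cv2.findEssentialMat(cur_pts, goal_pts, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, cur_pts, goal_pts, K)   # rotation + translation direction

    # cancel the rotational part of the image motion with the rotation-induced homography;
    # what remains in the derotated image is caused by translation only
    H_rot = K @ R @ np.linalg.inv(K)
    derotated = cv2.warpPerspective(current_image, H_rot, (w, h))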

151 citations


Book ChapterDOI
02 Jun 1998
TL;DR: It is shown that the homography between the images induced by the plane of the curve can be computed from two views given only the epipolar geometry, and that the trifocal tensor can be used to transfer a conic or the curvature from two views to a third.
Abstract: In this paper there are two innovations. First, the geometry of imaged curves is developed in two and three views. A set of results are given for both conics and non-algebraic curves. It is shown that the homography between the images induced by the plane of the curve can be computed from two views given only the epipolar geometry, and that the trifocal tensor can be used to transfer a conic or the curvature from two views to a third. The second innovation is an algorithm for automatically matching individual curves between images. The algorithm uses both photometric information and the multiple view geometric results. For image pairs the homography facilitates the computation of a neighbourhood cross-correlation based matching score for putative curve correspondences. For image triplets cross-correlation matching scores are used in conjunction with curve transfer based on the trifocal geometry to disambiguate matches. Algorithms are developed for both short and wide baselines. The algorithms are robust to deficiencies in the curve segment extraction and partial occlusion. Experimental results are given for image pairs and triplets, for varying motions between views, and for different scene types. The method is applicable to curve matching in stereo and trinocular rigs, and as a starting point for curve matching through monocular image sequences.
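A rough illustration of the pair-wise matching score described above (a sketch under assumed names, not the authors' implementation): once the plane homography H between the two views is known, the neighbourhood of a curve point in the first image can be warped into the second image and scored by normalized cross-correlation. OpenCV/NumPy are assumed; img1, img2, H and pt are placeholders.

    import cv2
    import numpy as np

    # H: plane homography from view 1 to view 2; img1, img2: grayscale images
    # pt = (x, y): a point on a putatively matched curve in view 1
    warped = cv2.warpPerspective(img1, H, (img2.shape[1], img2.shape[0]))
    ph = H @ np.array([pt[0], pt[1], 1.0])
    u, v = (ph[:2] / ph[2]).astype(int)                      # predicted location in view 2

    tmpl = warped[v - 5:v + 6, u - 5:u + 6]                  # 11x11 neighbourhood, warped via H
    search = img2[v - 20:v + 21, u - 20:u + 21]              # small search window around the prediction
    score = cv2.matchTemplate(search, tmpl, cv2.TM_CCOEFF_NORMED).max()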

64 citations


Book ChapterDOI
08 Jan 1998
TL;DR: This paper presents a method that enables a mobile robot to locate obstacles in its field of view using two images of its surroundings based on the assumption that the robot is moving on a locally planar ground.
Abstract: Obstacle avoidance is an essential capability of an autonomous robot. This paper presents a method that enables a mobile robot to locate obstacles in its field of view using two images of its surroundings. The method is based on the assumption that the robot is moving on a locally planar ground. Using a set of point features (corners) that have been matched between the two views using normalized cross-correlation, a robust estimate of the homography of the ground is computed. Knowledge of this homography permits us to compensate for the motion of the ground and to detect obstacles as areas in the image that appear nonstationary after the motion compensation. The resulting method does not require camera calibration, is applicable either to stereo pairs or to motion sequence images, does not rely on a dense disparity/flow field and circumvents the 3D reconstruction problem. Experimental results from the application of the method on real images indicate that it is both effective and robust.
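A minimal sketch of this pipeline, assuming OpenCV/NumPy and illustrative names (pts1, pts2, img1, img2): estimate the ground homography robustly from the matched corners, warp the first view to compensate the ground motion, and flag the regions that remain inconsistent.

    import cv2
    import numpy as np

    # pts1, pts2: Nx2 corner correspondences between the two views
    H, inlier_mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)   # robust ground homography

    h, w = img2.shape[:2]
    warped1 = cv2.warpPerspective(img1, H, (w, h))           # compensate the ground motion
    residual = cv2.absdiff(warped1, img2)                    # ground pixels ~ 0, obstacles stay large
    _, obstacle_mask = cv2.threshold(residual, 30, 255, cv2.THRESH_BINARY)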

53 citations


Patent
Richard Szeliski
05 Jun 1998
TL;DR: A method is proposed for reconstructing 3D geometry by computing 3D points on an object, or on a scene containing many objects, visible in images taken from different views of the object or scene.
Abstract: The invention is embodied in a method for reconstructing 3-dimensional geometry by computing 3-dimensional points on an object, or on a scene including many objects, visible in images taken from different views of the object or scene. The method includes: identifying at least one set of initial pixels, visible in both views, that lie on a generally planar surface of the object; computing from the set of initial pixels an estimated homography between the views; defining at least one additional pixel on that surface in one of the images and computing from the estimated homography the corresponding pixel in the other view; computing an optimal homography and an epipole from the initial and additional pixels (including at least some points not on the planar surface); and computing from the homography and the epipole the 3-dimensional locations of points on the object by triangulation between the views of corresponding pixels. Each of the initial pixels in one of the views corresponds to one of the initial pixels in the other view, and both correspond to a point on the object.
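The plane-plus-parallax geometry behind this construction can be sketched as follows (a schematic under assumed names such as plane_pts1 and offplane_pts1, not the patented procedure; OpenCV/NumPy are assumed): the homography is estimated from the coplanar pixels, the epipole is the common intersection of the parallax lines of the off-plane pixels, and the pair ([I | 0], [H | e']) then serves as projection matrices for a projective triangulation.

    import cv2
    import numpy as np

    # plane_pts1/2: Nx2 coplanar correspondences; offplane_pts1/2: Mx2 off-plane correspondences
    H, _ = cv2.findHomography(plane_pts1, plane_pts2, cv2.RANSAC)

    to_h = lambda p: np.hstack([p, np.ones((len(p), 1))])    # to homogeneous coordinates
    lines = np.cross((H @ to_h(offplane_pts1).T).T, to_h(offplane_pts2))
    _, _, Vt = np.linalg.svd(lines)                          # each parallax line passes through the epipole
    e2 = Vt[-1]                                              # epipole in the second view

    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])            # projective camera pair consistent
    P2 = np.hstack([H, e2.reshape(3, 1)])                    # with F = [e']_x H
    X = cv2.triangulatePoints(P1, P2, offplane_pts1.T.astype(float),
                              offplane_pts2.T.astype(float)) # homogeneous 3D points (projective)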

49 citations


Proceedings ArticleDOI
16 May 1998
TL;DR: A new vision-based robot control approach, halfway between the classical position-based and image-based visual servoing schemes, that ensures convergence of the control law over the whole task space.
Abstract: In this paper we propose a new vision-based robot control approach halfway between the classical position-based and image-based visual servoing schemes, which avoids their respective disadvantages. The homography between planar feature points extracted from two images (corresponding to the current and desired camera poses) is computed at each iteration. An approximate partial pose, in which the translational term is known only up to a scale factor, is then deduced from it, and from this partial pose a closed-loop control law controlling the six camera DOF is designed. Contrary to position-based visual servoing, our scheme does not need any geometric 3D model of the object. Furthermore, and contrary to image-based visual servoing, our approach ensures the convergence of the control law over the whole task space.
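The partial-pose extraction described above can be sketched with OpenCV's homography decomposition (a sketch under assumed names, not the authors' estimator): the decomposition yields the rotation, the translation up to the unknown plane-depth scale, and the plane normal, which is the information the control law uses.

    import cv2
    import numpy as np

    # cur_pts, des_pts: Nx2 planar feature points in the current and desired images; K: intrinsics
    H, _ = cv2.findHomography(cur_pts, des_pts, cv2.RANSAC)

    # each candidate solution is (R, t/d, n): rotation, translation up to the plane-depth scale d,
    # and plane normal; the physically consistent candidate still has to be selected (visibility test)
    num, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)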

49 citations


Journal Article
TL;DR: This paper theoretically shows that the calibration parameters are unique up to an orthogonal transformation under the assumption that the skew of the camera is zero, and suggests a method for obtaining the calibration parameters by searching the space of the principal point.
Abstract: We propose a method to calibrate a rotating and zooming camera without a 3D calibration pattern, where the internal parameters change frame by frame. We consider a set of rotating cameras with camera matrices P_k = K_k [R_k | 0], where K_k is the camera calibration matrix with zero skew. First, we show that the calibration is unique up to an orthogonal transformation under the assumption that the skew of the camera is zero. Auto-calibration is possible by analyzing the inter-image homographies computed from matches between images of the same scene. In general, at least four homographies are needed for auto-calibration. When the aspect ratio is known and the principal point is fixed, one homography yields the camera parameters; when the aspect ratio is not known but the principal point is fixed, two homographies are enough. The algorithm is implemented and validated on several sets of synthetic data and real image data.
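The relation being exploited is the standard rotating-camera self-calibration constraint (stated here for orientation, not quoted from the paper): the inter-image homographies transfer the dual image of the absolute conic, and the zero-skew, known-aspect-ratio and fixed-principal-point assumptions determine how many homographies are needed.

    H_k \;\simeq\; K_k R_k K_0^{-1}, \qquad
    \omega_k^{*} = K_k K_k^{\top} \;\simeq\; H_k\, \omega_0^{*}\, H_k^{\top}

Each homography contributes one such constraint; counting the unknowns of each \omega_k^{*} under the stated assumptions gives the "four / two / one homography" requirements above.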

35 citations


Proceedings ArticleDOI
16 Aug 1998
TL;DR: A road extraction method using stereo images is proposed; it supposes that a road (or a passable part) can be approximated by a plane, so that the road region can be extracted from the observed image by transforming one image by the homography matrix and performing simple matching.
Abstract: Extracting the road region from an observed image is an important technique for visual navigation of an autonomous vehicle. In this paper, we propose a road extraction method using stereo images. The method does not rely on the existence of any specific road painting or texture. Instead, it supposes that a road (or a passable part) can be approximated by a plane. A homography matrix, which represents the geometric relation between the road plane and the stereo images, can then be computed from the stereo images, and the road region can be extracted from the observed image by transforming one image by the homography matrix and performing simple matching. In this method, neither a predetermined geometric relation between the cameras and the road nor a strong camera calibration is necessary. Experimental results with real scenes have shown the effectiveness of the proposed method.
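The geometric relation the method relies on is the standard plane-induced homography (a textbook relation, not a formula taken from the paper): for a road plane with unit normal n at distance d from the first camera, and stereo cameras with intrinsics K, K' related by rotation R and translation t,

    H \;\simeq\; K' \left( R - \frac{t\, n^{\top}}{d} \right) K^{-1}

so road pixels satisfy x' ≃ H x, which is why warping one stereo image by H and comparing it with the other isolates the road region.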

12 citations


Proceedings Article
01 Jan 1998
TL;DR: In this article, the authors describe an algorithm for optimally com- [...]; to remove scale, the result is normalized so that its Z component equals 1.
Abstract: This paper describes an algorithm for optimally com- [...]. To remove scale, the result is normalized so that its Z component equals 1.

11 citations


Proceedings ArticleDOI
04 Oct 1998
TL;DR: A new approach to three interrelated problems (image registration, mosaicing and camera calibration) is introduced; it uses well-known similarity measures, with particular attention to improving image registration performance.
Abstract: We introduce a new approach for the solution of three interrelated problems: image registration, mosaicing and camera calibration. First, we use well-known similarity measures, paying particular attention to improving image registration performance. After initial registration, we use a camera model to refine the alignment and update the estimated camera parameters. A relaxation method applied to this model automatically estimates the lens parameters and uses them to eliminate the perspective and lens distortion observed in the overlap area of two images. Image mosaics are constructed by combining the corrected images and blending them along a border whose pixels have an optimally low total absolute difference across different spatial frequencies, resulting in seamless mosaic images.
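A small compositing sketch for the final step, assuming OpenCV/NumPy: img_a and img_b are hypothetical images already lens-corrected and registered onto a common canvas (zeros outside each footprint), and a simple distance-transform feathering stands in for the paper's frequency-band blending.

    import cv2
    import numpy as np

    # img_a, img_b: registered, distortion-corrected images on the same mosaic canvas
    mask_a = (cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY) > 0).astype(np.uint8)
    mask_b = (cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY) > 0).astype(np.uint8)

    # distance-to-border weights give a feathered seam inside the overlap area
    w_a = cv2.distanceTransform(mask_a, cv2.DIST_L2, 3)
    w_b = cv2.distanceTransform(mask_b, cv2.DIST_L2, 3)
    w_sum = np.clip(w_a + w_b, 1e-6, None)

    mosaic = (img_a * (w_a / w_sum)[..., None] +
              img_b * (w_b / w_sum)[..., None]).astype(np.uint8)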

8 citations


Proceedings ArticleDOI
14 Sep 1998
TL;DR: This paper studies the affine-to-Euclidean step in detail using the real Jordan decomposition of the infinite homography and shows that in some cases, it is possible to obtain complete calibration in the presence of critical motions.
Abstract: Autocalibration is a difficult problem. Not only is its computation very noise-sensitive, but there also exist many critical motions that prevent the estimation of some of the camera parameters. When a "stratified" approach is considered, affine and Euclidean calibration are computed in separate steps, and it is possible to see that part of these ambiguities occur during affine-to-Euclidean calibration. This paper studies the affine-to-Euclidean step in detail using the real Jordan decomposition of the infinite homography. It gives a new way to compute the autocalibration and analyzes the effects of critical motions on the computation of the internal parameters. Finally, it shows that in some cases it is possible to obtain complete calibration in the presence of critical motions.
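For orientation (a standard identity, not a result quoted from the paper): with constant intrinsics K and a relative rotation R by angle θ between two views, the infinite homography is

    H_\infty \;\simeq\; K R K^{-1}

so H_\infty is similar to a rotation: up to scale its eigenvalues are {1, e^{iθ}, e^{-iθ}} and its real Jordan form is diag(1, R_2(θ)), where R_2(θ) is the 2x2 rotation block. This is the structure the affine-to-Euclidean analysis works with, and degenerate cases (e.g. θ = 0) are among the critical motions.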

5 citations


01 Jan 1998
TL;DR: In this paper, a multi-resolution patch-based optical flow estimation for making feature correspondences to automatically obtain a homography is proposed; the method is fast and robust enough to process non-planar scenes with free camera motion.
Abstract: We propose an automatic image mosaicing method that can construct a panoramic image from digital still images. Our method is fast and robust enough to process non-planar scenes with free camera motion. The method includes the following two techniques. First, we use a multi-resolution patch-based optical flow estimation for making feature correspondences to automatically obtain a homography. Second, we developed a technique to obtain a homography from only three points instead of four, in order to divide a scene into triangles. Experiments using real images confirm the effectiveness of our method.
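A sketch of the correspondence-to-homography step, assuming OpenCV/NumPy: a pyramidal Lucas-Kanade tracker is used here as a generic stand-in for the paper's multi-resolution patch-based flow estimator, and prev/cur are hypothetical consecutive still images.

    import cv2
    import numpy as np

    # prev, cur: consecutive grayscale stills of the scene
    p0 = cv2.goodFeaturesToTrack(prev, maxCorners=500, qualityLevel=0.01, minDistance=8)
    p1, st, err = cv2.calcOpticalFlowPyrLK(prev, cur, p0, None,
                                           winSize=(21, 21), maxLevel=3)

    good0, good1 = p0[st.flatten() == 1], p1[st.flatten() == 1]
    H, inliers = cv2.findHomography(good0, good1, cv2.RANSAC, 3.0)   # maps prev -> cur

    # bring 'cur' into the panorama frame of 'prev'
    piece = cv2.warpPerspective(cur, np.linalg.inv(H), (prev.shape[1], prev.shape[0]))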

01 Jul 1998
TL;DR: This paper reviews research on digital image and scene analysis through the 1970's, finding practical progress has largely been due to enormous increases in computer power, allowing even "brute force" algorithms to be implemented very rapidly.
Abstract: Almost as soon as digital computers became available, it was realized that they could be used to process and extract information from digitized images. Initially, work on digital image analysis dealt with specific classes of images such as text, photomicrographs, nuclear particle tracks, and aerial photographs; but by the 1960's, general algorithms and paradigms for image analysis began to be formulated. When the artificial intelligence community began to work on robot vision, these paradigms were extended to include recovery of three-dimensional information, at first from single images of a scene, but eventually from image sequences obtained by a moving camera; at this stage, image analysis had become scene analysis or computer vision. This paper reviews research on digital image and scene analysis through the 1970's. This research has led to the formulation of many elegant mathematical models and algorithms; but practical progress has largely been due to enormous increases in computer power, allowing even "brute force" algorithms to be implemented very rapidly. Keywords: Image processing, Image analysis, Pattern recognition, Scene analysis, Computer vision


Book ChapterDOI
01 Jan 1998
TL;DR: The algorithm implemented to estimate the 3D camera displacement is presented, and it is shown that the rotation matrix and the translation vector can be solved for separately.
Abstract: This paper focuses on the use of 2D vision for motion estimation of an underwater Remotely Operated Vehicle. A monocular vision system allows the extraction of characteristic lines in the current image of the observed scene. The 3D camera motion estimation involves matching these 2D visual features with those obtained in the previous image. The camera has been calibrated and its intrinsic parameters are known.
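
A sketch of the line-extraction front end only (the matching and motion estimation are the paper's contribution and are not reproduced here); OpenCV is assumed and the thresholds are illustrative.

    import cv2
    import numpy as np

    # frame: current grayscale image from the monocular camera
    edges = cv2.Canny(frame, 50, 150)
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                               minLineLength=40, maxLineGap=5)
    # 'segments' holds the characteristic line segments to be matched against those
    # extracted from the previous frame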