
Camera calibration combining images with two vanishing points

TL;DR: In this article, a camera calibration approach is presented which exploits the existence of only two vanishing points on several independent images and relies on direct geometric reasoning regarding the loci of the projection centres in the image system.
Abstract: Single image calibration is a fundamental task in photogrammetry and computer vision. It is known that camera constant and principal point can be recovered using exclusively the vanishing points of three orthogonal directions. Yet, three reliable and well-distributed vanishing points are not always available. On the other hand, two vanishing points basically allow only estimation of the camera constant (assuming a known principal point location). Here, a camera calibration approach is presented, which exploits the existence of only two vanishing points on several independent images. Using the relation between two vanishing points of orthogonal directions and the camera parameters, the algorithm relies on direct geometric reasoning regarding the loci of the projection centres in the image system (actually a geometric interpretation of the constraint imposed by two orthogonal vanishing points on the ‘image of the absolute conic’). Introducing point measurements on two sets of converging image lines as observations, the interior orientation parameters (including radial lens distortion) are estimated from a minimum of three images. Recovery of image aspect ratio is possible, too, at the expense of an additional image. Apart from line directions in space, full camera calibration is here independent from any exterior metric information (known points, lengths, length ratios etc.). Besides, since the sole requirement is two vanishing points of orthogonal directions on several images, the imaged scenes may simply be planar. Furthermore, calibration with images of 2D objects and/or ‘weak perspectives’ of 3D objects is expected to be more precise than single image approaches using 3D objects. Finally, no feature correspondences among views are required here; hence, images of totally different objects can be used. In this sense, one may still refer to a ‘single-image’ approach. The implemented algorithm has been successfully evaluated with simulated and real data, and its results have been compared to photogrammetric bundle adjustment and plane-based calibration.
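
As a rough illustration of the core constraint this abstract builds on (not the paper's multi-image adjustment, which additionally recovers the principal point, aspect ratio and radial distortion), the Python sketch below recovers the camera constant from one pair of vanishing points of orthogonal directions under the assumption of a known principal point; function and variable names are illustrative only.

    # Minimal sketch, assuming a known principal point (x0, y0): for vanishing
    # points v1, v2 of orthogonal space directions, the IAC constraint
    # v1' * omega * v2 = 0 reduces to (x1-x0)(x2-x0) + (y1-y0)(y2-y0) + c^2 = 0,
    # i.e. the projection centre lies on a sphere having v1 and v2 as a diameter.
    import math

    def camera_constant_from_two_vps(v1, v2, principal_point=(0.0, 0.0)):
        x0, y0 = principal_point
        dot = (v1[0] - x0) * (v2[0] - x0) + (v1[1] - y0) * (v2[1] - y0)
        if dot >= 0.0:
            raise ValueError("vanishing points inconsistent with the assumed principal point")
        return math.sqrt(-dot)

    # Example with synthetic pixel coordinates:
    # c = camera_constant_from_two_vps((1850.0, -40.0), (-530.0, 35.0), (10.0, -5.0))
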
Citations
Journal ArticleDOI
TL;DR: In this article, an approach for the automatic estimation of interior orientation from images with three vanishing points of orthogonal directions is presented; the approach may also handle input from independent images with three and/or two vanishing points in a common solution.
Abstract: Camera calibration is a fundamental task in photogrammetry and computer vision. This paper presents an approach for the automatic estimation of interior orientation from images with three vanishing points of orthogonal directions. Extraction of image line segments and their clustering into groups corresponding to three dominant vanishing points are performed without any human interaction. Camera parameters (camera constant, location of principal point, and two coefficients of radial lens distortion) and the vanishing points are estimated in a one-step adjustment of all participating image points. The approach may function in a single image mode, but is also capable of handling input from independent images (i.e. images not necessarily of the same object) with three and/or two vanishing points in a common solution. The reported experimental tests indicate that, within certain limits, results from single images compare satisfactorily with those from multi-image bundle adjustment.
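
The paper's one-step adjustment estimates the camera parameters and the vanishing points jointly from all measured image points; purely as a sketch of the underlying closed-form geometry for three orthogonal vanishing points (a standard construction, not the authors' adjustment, and ignoring lens distortion), the principal point is the orthocentre of the vanishing-point triangle and the camera constant follows from any orthogonal pair:

    import numpy as np

    def calibrate_from_three_orthogonal_vps(v1, v2, v3):
        v1, v2, v3 = (np.asarray(v, dtype=float) for v in (v1, v2, v3))
        # Each altitude of the vanishing-point triangle passes through one vanishing
        # point and is perpendicular to the opposite side; the altitudes intersect
        # at the principal point.
        A = np.array([v3 - v2, v1 - v3])
        b = np.array([np.dot(v1, v3 - v2), np.dot(v2, v1 - v3)])
        pp = np.linalg.solve(A, b)
        # Camera constant from one pair of orthogonal vanishing points.
        c = np.sqrt(-np.dot(v1 - pp, v2 - pp))
        return pp, c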

65 citations


Cites background from "Camera calibration combining images..."

  • ...However, although one single image with two vanishing points lacks the information for full camera calibration, the authors have shown that combinations of several independent (single) images with vanishing points in two orthogonal directions supply adequate information for estimating camera geometry, including aspect ratio (Grammatikopoulos et al., 2004)....

  • ...…of orthogonal directions allow finding only the primary internal camera parameters, further interior orientation parameters such as the image aspect ratio can be estimated at the expense of an additional pair of vanishing points from the same or a different image (Grammatikopoulos et al., 2004)....

  • ...This is also the case when independent images (of the same or altogether different scenes) with only two vanishing points are adjusted, as outlined in Grammatikopoulos et al. (2004)....


Proceedings Article
20 Nov 2012
TL;DR: Two calibration methods that exploit the properties of vanishing points are presented in detail and compared to offer a practical tool for the choice of the appropriate calibration method depending on the application and on the initial conditions.
Abstract: The perspective projection models the way a 3D scene is transformed into a 2D image, usually through a camera or an eye. In a projective transformation, parallel lines intersect in a point called the vanishing point. This paper presents in detail two calibration methods that exploit the properties of vanishing points. The aim of the paper is to offer a practical tool for choosing the appropriate calibration method depending on the application and on the initial conditions. The methods, using two and three vanishing points respectively, are presented in detail and compared. First, the two models are analyzed using synthetic data. Then, each method is tested in a real configuration and the results show the quality of the calibration.

49 citations

Journal ArticleDOI
TL;DR: In this scope, some advanced multidisciplinary techniques which allow an automatic capture of meaningful elements for perspective models from a single image are discussed.
Abstract: This paper describes the photogrammetric techniques and approaches developed for 3D object reconstruction based on a single image. It is possible to obtain metric information from a single view when stereo-based photogrammetric techniques are not available. In this scope, some advanced multidisciplinary techniques which allow an automatic capture of meaningful elements for perspective models from a single image are discussed.

39 citations

Journal ArticleDOI
TL;DR: By avoiding singularities, the precision and robustness of the method are improved: the relative mean errors are reduced to less than 5% at a noise level of one pixel, which surpasses state-of-the-art methods of the same category.

29 citations


Cites background from "Camera calibration combining images..."

  • ...It is clear that at least three non-parallel virtual planes are required to solve the intrinsic parameters [16]....

01 Jul 2006
TL;DR: This paper revisits vanishing-point geometry and suggests a simple extrinsic parameter estimation algorithm which uses a single rectangle; it also presents a real-time vanishing point extraction algorithm and a pose estimation procedure.
Abstract: Vanishing points of an image contain important information for camera calibration. Various calibration techniques have been introduced using the properties of vanishing points to find intrinsic and extrinsic calibration parameters. This paper revisits the vanishing points geometry and suggests a simple extrinsic parameter estimation algorithm which uses a single rectangle. The comparison with the Camera Calibration Toolbox for Matlab® shows that the proposed algorithm is highly competitive. The suggested technique is also applied to real-time pose estimation for an unmanned air vehicle's navigation in an urban environment. We present a real-time vanishing point extraction algorithm and a pose estimation procedure. The experimental result on a real flight video clip is presented.
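
A rectangle supplies two vanishing points of orthogonal directions; with known intrinsics these fix two columns of the rotation matrix, and the third follows from orthonormality. The sketch below shows this standard construction (the paper's actual algorithm and its real-time extraction step may differ in detail; names are illustrative):

    import numpy as np

    def rotation_from_rectangle_vps(K, v1, v2):
        # v1, v2: pixel coordinates of the vanishing points of the rectangle's two
        # edge directions; K: 3x3 intrinsic matrix, assumed known. Sign conventions
        # for the two vanishing directions are left to the caller.
        Kinv = np.linalg.inv(K)
        r1 = Kinv @ np.array([v1[0], v1[1], 1.0])
        r2 = Kinv @ np.array([v2[0], v2[1], 1.0])
        r1 /= np.linalg.norm(r1)
        r2 /= np.linalg.norm(r2)
        r3 = np.cross(r1, r2)
        R = np.column_stack([r1, r2, r3])
        # Project onto the nearest rotation matrix to absorb noise in the vanishing points.
        U, _, Vt = np.linalg.svd(R)
        return U @ Vt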

21 citations


Cites methods from "Camera calibration combining images..."

  • ...Various camera calibration techniques have been introduced based on such properties, [3], [8], [4], and used in practice, for example, in Calibration Toolbox for Matlab® [1] (the Toolbox, hereafter) for initial estimation....

References
Book
01 Jan 2000
TL;DR: In this book, the authors provide comprehensive background material and explain how to apply the methods and implement the algorithms directly in a unified framework, including geometric principles and how to represent objects algebraically so they can be computed and applied.
Abstract: From the Publisher: A basic problem in computer vision is to understand the structure of a real world scene given several images of it. Recent major developments in the theory and practice of scene reconstruction are described in detail in a unified framework. The book covers the geometric principles and how to represent objects algebraically so they can be computed and applied. The authors provide comprehensive background material and explain how to apply the methods and implement the algorithms directly.

15,558 citations

Journal ArticleDOI
Zhengyou Zhang
TL;DR: A flexible technique to easily calibrate a camera is proposed that only requires the camera to observe a planar pattern shown at a few (at least two) different orientations; it advances 3D computer vision one more step from laboratory environments to real-world use.
Abstract: We propose a flexible technique to easily calibrate a camera. It only requires the camera to observe a planar pattern shown at a few (at least two) different orientations. Either the camera or the planar pattern can be freely moved. The motion need not be known. Radial lens distortion is modeled. The proposed procedure consists of a closed-form solution, followed by a nonlinear refinement based on the maximum likelihood criterion. Both computer simulation and real data have been used to test the proposed technique and very good results have been obtained. Compared with classical techniques which use expensive equipment such as two or three orthogonal planes, the proposed technique is easy to use and flexible. It advances 3D computer vision one more step from laboratory environments to real world use.
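
The closed-form step of such a plane-based method rests on two linear constraints per view on the image of the absolute conic omega = (K K^T)^(-1), obtained from the plane-to-image homography. The sketch below stacks these constraints and extracts omega (assuming the homographies are already estimated, e.g. by DLT, and leaving out the distortion modelling and maximum-likelihood refinement described in the abstract):

    import numpy as np

    def _v(H, i, j):
        # Constraint row in the 6 independent entries (w11, w12, w22, w13, w23, w33)
        # of omega, built from columns i and j of a plane-to-image homography H.
        return np.array([H[0, i] * H[0, j],
                         H[0, i] * H[1, j] + H[1, i] * H[0, j],
                         H[1, i] * H[1, j],
                         H[2, i] * H[0, j] + H[0, i] * H[2, j],
                         H[2, i] * H[1, j] + H[1, i] * H[2, j],
                         H[2, i] * H[2, j]])

    def omega_from_homographies(homographies):
        # Each view adds h1' w h2 = 0 and h1' w h1 - h2' w h2 = 0; with enough views
        # (three in the general case, two if skew is fixed) the null-space vector
        # gives omega, which is then decomposed into the calibration matrix K.
        rows = []
        for H in homographies:
            rows.append(_v(H, 0, 1))
            rows.append(_v(H, 0, 0) - _v(H, 1, 1))
        _, _, Vt = np.linalg.svd(np.vstack(rows))
        w11, w12, w22, w13, w23, w33 = Vt[-1]
        return np.array([[w11, w12, w13],
                         [w12, w22, w23],
                         [w13, w23, w33]])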

13,200 citations

Journal ArticleDOI
TL;DR: Extensive experimentation shows that the precision that can be achieved with the proposed method is sufficient to efficiently perform machine vision tasks that require camera calibration, like depth from stereo and motion from image sequence.
Abstract: In this article a new method for the calibration of a vision system which consists of two (or more) cameras is presented. The proposed method, which uses simple properties of vanishing points, is divided into two steps. In the first step, the intrinsic parameters of each camera, that is, the focal length and the location of the intersection between the optical axis and the image plane, are recovered from a single image of a cube. In the second step, the extrinsic parameters of a pair of cameras, that is, the rotation matrix and the translation vector which describe the rigid motion between the coordinate systems fixed in the two cameras are estimated from an image stereo pair of a suitable planar pattern. Firstly, by matching the corresponding vanishing points in the two images the rotation matrix can be computed, then the translation vector is estimated by means of a simple triangulation. The robustness of the method against noise is discussed, and the conditions for optimal estimation of the rotation matrix are derived. Extensive experimentation shows that the precision that can be achieved with the proposed method is sufficient to efficiently perform machine vision tasks that require camera calibration, like depth from stereo and motion from image sequence.
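
For the second step described above, matched vanishing points in the two views fix corresponding viewing directions in each camera frame, and the relative rotation can be obtained as an orthogonal Procrustes fit. The following is a brief sketch under that reading (sign consistency of the matched directions is assumed; the translation step by triangulation is not shown):

    import numpy as np

    def relative_rotation_from_matched_vps(K1, K2, vps1, vps2):
        # vps1, vps2: lists of matched vanishing points (pixel coordinates) in the
        # two images; K1, K2: the intrinsic matrices recovered in the first step.
        def directions(K, vps):
            V = np.vstack([np.asarray(vps, dtype=float).T, np.ones(len(vps))])
            D = np.linalg.inv(K) @ V
            return D / np.linalg.norm(D, axis=0)
        D1, D2 = directions(K1, vps1), directions(K2, vps2)
        # Rotation R minimising ||R D1 - D2|| (orthogonal Procrustes fit).
        U, _, Vt = np.linalg.svd(D2 @ D1.T)
        return U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt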

760 citations

Journal ArticleDOI
01 Sep 1999
TL;DR: Methods are presented for creating 3D graphical models of scenes from a limited number of images, i.e. one or two, in situations where no scene co-ordinate measurements are available.
Abstract: We present methods for creating 3D graphical models of scenes from a limited number of images, i.e. one or two, in situations where no scene co-ordinate measurements are available. The methods employ constraints available from geometric relationships that are common in architectural scenes - such as parallelism and orthogonality - together with constraints available from the camera. In particular, by using the circular points of a plane, simple linear algorithms are given for computing plane rectification, plane orientation and camera calibration from a single image. Examples of image-based 3D modelling are given for both single images and image pairs.

310 citations


"Camera calibration combining images..." refers background in this paper

  • ..., 2003) as well as in computer vision (Caprile & Torre, 1990; Liebowitz et al., 1999; Cipolla et al., 1999; Sturm & Maybank, 1999)....

  • ...Alternatively, in the context of computer vision, several researchers perform camera calibration using three vanishing points in orthogonal directions, in order to compute the image ω of the absolute conic, and to subsequently decompose its expression to extract the camera internal parameters as the ‘calibration matrix’ (Liebowitz et al., 1999; Sturm & Maybank, 1999); the same outcome can be obtained by exploiting the properties of the rotation matrix (Cipolla et al....

  • ...As mentioned above, recovery of interior orientation is possible through an estimation of the image ω of the absolute conic from three orthogonal vanishing points (Liebowitz et al., 1999)....

Proceedings ArticleDOI
08 Apr 2000
TL;DR: This approach is more rigorous than existing techniques, since the information given by the condition of three mutually orthogonal directions in the scene is identified and incorporated, and is used to reject falsely detected vanishing points.
Abstract: A man-made environment is characterized by many parallel lines and orthogonal edges. In this article, a new method for detecting the three mutually orthogonal directions of such an environment is presented. Since real-time performance is not necessary for architectural applications, like building reconstruction, a computationally more intensive approach was chosen. On the other hand, our approach is more rigorous than existing techniques, since the information given by the condition of three mutually orthogonal directions in the scene is identified and incorporated. Since knowledge about the camera geometry can be deduced from the vanishing points of three mutually orthogonal directions, we use this knowledge to reject falsely detected vanishing points. Results are presented from interpreting outdoor scenes of buildings.

254 citations


"Camera calibration combining images..." refers methods in this paper

  • ...A further future task would be to elaborate the approach into a semi-automatic or fully automatic process, by introducing tools for automatically detecting vanishing points (see, for instance, van den Heuvel, 1998; Rother, 2000)....
