Topic

Camera auto-calibration

About: Camera auto-calibration is a research topic. Over the lifetime, 8,555 publications have been published within this topic, receiving 187,403 citations.


Papers
Journal ArticleDOI
Zhengyou Zhang
TL;DR: A flexible technique for easily calibrating a camera is proposed that only requires the camera to observe a planar pattern shown at a few (at least two) different orientations; it advances 3D computer vision one more step from laboratory environments to real-world use.
Abstract: We propose a flexible technique to easily calibrate a camera. It only requires the camera to observe a planar pattern shown at a few (at least two) different orientations. Either the camera or the planar pattern can be freely moved. The motion need not be known. Radial lens distortion is modeled. The proposed procedure consists of a closed-form solution, followed by a nonlinear refinement based on the maximum likelihood criterion. Both computer simulation and real data have been used to test the proposed technique and very good results have been obtained. Compared with classical techniques which use expensive equipment such as two or three orthogonal planes, the proposed technique is easy to use and flexible. It advances 3D computer vision one more step from laboratory environments to real world use.

13,200 citations
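
As a rough illustration of the plane-based procedure described above, the sketch below runs OpenCV's calibration pipeline, which follows the same closed-form-plus-refinement scheme, on a handful of chessboard views. The file pattern and the 9x6 board size are assumptions for illustration, not details from the paper.

```python
# Minimal sketch of plane-based calibration in the spirit of Zhang's method,
# using OpenCV. File names and the 9x6 inner-corner board are assumptions.
import glob
import cv2
import numpy as np

board = (9, 6)                                  # assumed inner corners of the chessboard
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)   # planar model points (Z = 0)

obj_pts, img_pts, size = [], [], None
for path in glob.glob("calib_*.png"):           # a few views, at least two orientations
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    ok, corners = cv2.findChessboardCorners(gray, board)
    if ok:
        obj_pts.append(objp)
        img_pts.append(corners)
        size = gray.shape[::-1]

# Closed-form initialisation followed by nonlinear (maximum-likelihood) refinement,
# with lens distortion collected in dist.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
print("reprojection RMS:", rms, "\nintrinsics K:\n", K)
```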

Journal ArticleDOI
TL;DR: Presents MonoSLAM, the first successful application of the SLAM methodology from mobile robotics to the "pure vision" domain of a single uncontrolled camera, achieving real-time yet drift-free performance inaccessible to structure-from-motion approaches.
Abstract: We present a real-time algorithm which can recover the 3D trajectory of a monocular camera, moving rapidly through a previously unknown scene. Our system, which we dub MonoSLAM, is the first successful application of the SLAM methodology from mobile robotics to the "pure vision" domain of a single uncontrolled camera, achieving real time but drift-free performance inaccessible to structure from motion approaches. The core of the approach is the online creation of a sparse but persistent map of natural landmarks within a probabilistic framework. Our key novel contributions include an active approach to mapping and measurement, the use of a general motion model for smooth camera movement, and solutions for monocular feature initialization and feature orientation estimation. Together, these add up to an extremely efficient and robust algorithm which runs at 30 Hz with standard PC and camera hardware. This work extends the range of robotic systems in which SLAM can be usefully applied, but also opens up new areas. We present applications of MonoSLAM to real-time 3D localization and mapping for a high-performance full-size humanoid robot and live augmented reality with a hand-held camera.

3,772 citations
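
The "general motion model for smooth camera movement" mentioned above is commonly realised as a constant-velocity prediction step inside an extended Kalman filter. The sketch below shows only such a prediction step on a deliberately simplified planar state; the state layout, noise value, and function name are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of an EKF-style prediction step with a constant-velocity
# ("smooth motion") model on a simplified 2D camera state [x, y, vx, vy].
import numpy as np

def predict(state, cov, dt, accel_noise=0.5):
    # State transition: position integrates velocity; velocity is a random walk.
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    # Process noise from an assumed white-acceleration disturbance.
    G = np.array([[0.5 * dt**2, 0],
                  [0, 0.5 * dt**2],
                  [dt, 0],
                  [0, dt]], dtype=float)
    Q = G @ G.T * accel_noise**2
    return F @ state, F @ cov @ F.T + Q

state = np.zeros(4)                 # camera at the origin, at rest
cov = np.eye(4) * 1e-3
state, cov = predict(state, cov, dt=1 / 30)   # one frame at 30 Hz
```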

Proceedings ArticleDOI
Zhengyou Zhang
01 Sep 1999
TL;DR: Compared with classical techniques which use expensive equipment, such as two or three orthogonal planes, the proposed technique is easy to use and flexible, and advances 3D computer vision one step from laboratory environments to real-world use.
Abstract: Proposes a flexible new technique to easily calibrate a camera. It only requires the camera to observe a planar pattern shown at a few (at least two) different orientations. Either the camera or the planar pattern can be freely moved. The motion need not be known. Radial lens distortion is modeled. The proposed procedure consists of a closed-form solution followed by a nonlinear refinement based on the maximum likelihood criterion. Both computer simulation and real data have been used to test the proposed technique, and very good results have been obtained. Compared with classical techniques which use expensive equipment, such as two or three orthogonal planes, the proposed technique is easy to use and flexible. It advances 3D computer vision one step from laboratory environments to real-world use. The corresponding software is available from the author's Web page.

2,661 citations
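
The closed-form part of the procedure works from homographies between the model plane and its images, one per view. A minimal sketch of estimating a single such homography is shown below; the point coordinates are made-up placeholders, and cv2.findHomography stands in for the paper's own estimation code.

```python
# Sketch: estimate the homography H mapping planar model points (Z = 0) to their
# image projections; two or more views constrain the intrinsics in closed form.
# All coordinates here are illustrative assumptions.
import cv2
import numpy as np

model_xy = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]], np.float32)
image_uv = np.array([[102, 98], [410, 110], [395, 400], [95, 390], [250, 250]], np.float32)

H, mask = cv2.findHomography(model_xy, image_uv, method=0)   # least-squares fit
print(H)
```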

Proceedings ArticleDOI
17 Jun 1997
TL;DR: Presents a four-step calibration procedure that extends the two-step method, together with a linear method for solving the parameters of the inverse distortion model.
Abstract: In geometrical camera calibration the objective is to determine a set of camera parameters that describe the mapping between 3-D reference coordinates and 2-D image coordinates. Various methods for camera calibration can be found in the literature. However, surprisingly little attention has been paid to the whole calibration procedure, i.e., control point extraction from images, model fitting, image correction, and errors originating in these stages. The main interest has been in model fitting, although the other stages are also important. In this paper we present a four-step calibration procedure that is an extension to the two-step method. There is an additional step to compensate for distortion caused by circular features, and a step for correcting the distorted image coordinates. The image correction is performed with an empirical inverse model that accurately compensates for radial and tangential distortions. Finally, a linear method for solving the parameters of the inverse model is presented.

2,283 citations
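
The radial and tangential distortions the paper corrects are usually written with the standard polynomial lens model. The sketch below applies that forward model to a normalized image point; the coefficient values are arbitrary assumptions, and this is not the paper's empirical inverse model itself.

```python
# Sketch of the standard radial + tangential distortion model applied to a
# normalized image coordinate (x, y); k1, k2, p1, p2 are illustrative values.
def distort(x, y, k1=-0.2, k2=0.05, p1=1e-3, p2=-5e-4):
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d

print(distort(0.1, -0.05))   # distorted position of one normalized point
```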

01 Jan 2005
TL;DR: The plenoptic camera described in this paper inserts a microlens array between the sensor and the main lens, so that each microlens measures not just the total amount of light deposited at that location, but how much light arrives along each ray.
Abstract: This paper presents a camera that samples the 4D light field on its sensor in a single photographic exposure. This is achieved by inserting a microlens array between the sensor and main lens, creating a plenoptic camera. Each microlens measures not just the total amount of light deposited at that location, but how much light arrives along each ray. By re-sorting the measured rays of light to where they would have terminated in slightly different, synthetic cameras, we can compute sharp photographs focused at different depths. We show that a linear increase in the resolution of images under each microlens results in a linear increase in the sharpness of the refocused photographs. This property allows us to extend the depth of field of the camera without reducing the aperture, enabling shorter exposures and lower image noise. Especially in the macrophotography regime, we demonstrate that we can also compute synthetic photographs from a range of different viewpoints. These capabilities argue for a different strategy in designing photographic imaging systems. To the photographer, the plenoptic camera operates exactly like an ordinary hand-held camera. We have used our prototype to take hundreds of light field photographs, and we present examples of portraits, high-speed action and macro close-ups.

2,252 citations
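
The "re-sorting" of measured rays described above can be approximated by shift-and-add refocusing of the 4D light field: each angular sample is shifted in proportion to its offset and the results are averaged. The array layout and the refocus parameter alpha below are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch: synthetic refocusing of a 4D light field L[u, v, y, x]
# by shift-and-add over the angular samples.
import numpy as np

def refocus(lightfield, alpha):
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W), dtype=float)
    for u in range(U):
        for v in range(V):
            # Shift each sub-aperture image in proportion to its angular offset.
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(lightfield[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

lf = np.random.rand(5, 5, 64, 64)      # toy light field
image = refocus(lf, alpha=1.5)         # synthetic photograph at one depth
```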


Network Information
Related Topics (5)
Image processing: 229.9K papers, 3.5M citations, 89% related
Image segmentation: 79.6K papers, 1.8M citations, 88% related
Feature extraction: 111.8K papers, 2.1M citations, 88% related
Feature (computer vision): 128.2K papers, 1.7M citations, 87% related
Pixel: 136.5K papers, 1.5M citations, 87% related
Performance Metrics
No. of papers in the topic in previous years

Year	Papers
2023	59
2022	143
2021	9
2020	4
2019	4
2018	16