
External camera calibration for synchronized multi-video systems

01 Jan 2004-pp 537-544
TL;DR: A method for external camera calibration that is simple to use and offers generality in the positioning of the cameras is presented, which makes it very suitable for the calibration of mobile, synchronized camera setups.
Abstract: We present a method for external camera calibration that is simple to use and offers generality in the positioning of the cameras. This makes it very suitable for the calibration of mobile, synchronized camera setups. We use a camera graph to perform global registration, which helps lift restrictions on the camera setup imposed by other calibration methods. A further advantage is that all information is taken into account simultaneously. The method is based on a virtual calibration object which is constructed over time by tracking an easily identifiable object through three-dimensional space. This means that no calibration object needs to be visible simultaneously in all cameras.
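As a rough illustration of the camera-graph idea (not the paper's actual algorithm), the pose of any camera in a common reference frame can be recovered by composing pairwise rigid transforms along a graph path from a reference camera. The sketch below uses 2-D transforms and invented edge values for brevity:

```python
import math

def compose(a, b):
    """Compose two 2-D rigid transforms (theta, tx, ty): apply b, then a."""
    ta, ax, ay = a
    tb, bx, by = b
    c, s = math.cos(ta), math.sin(ta)
    return (ta + tb, ax + c * bx - s * by, ay + s * bx + c * by)

def pose_in_reference(graph, ref, target):
    """Breadth-first search from the reference camera, composing edge transforms."""
    frontier = [(ref, (0.0, 0.0, 0.0))]
    seen = {ref}
    while frontier:
        cam, pose = frontier.pop(0)
        if cam == target:
            return pose
        for nbr, edge in graph.get(cam, []):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, compose(pose, edge)))
    return None

# Cameras 0-1-2 in a chain: each edge is a 90-degree rotation plus a unit shift.
g = {0: [(1, (math.pi / 2, 1.0, 0.0))],
     1: [(2, (math.pi / 2, 1.0, 0.0))]}
print(pose_in_reference(g, 0, 2))
```

The same composition idea extends to 3-D rotation matrices and translation vectors; the graph structure is what removes the requirement that every camera see a common calibration object.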


Citations
Proceedings ArticleDOI
30 Sep 2008
TL;DR: A robust and efficient method for wide-area calibration using a virtual calibration object created by two LED markers is presented, and a novel parametrization for two-point calibration using a direction normal is introduced.
Abstract: In this paper we address external calibration of a distributed multi-camera system intended for tracking and observing. We present a robust and efficient method for wide-area calibration using a virtual calibration object created by two LED markers. Our algorithm does not require all cameras to share a common volume; only pairwise overlap is required. We assume the cameras are internally calibrated prior to deployment. Calibration is performed by waving the calibration bar over the camera coverage area. The initial pose of the cameras is calculated using essential matrix decompositions. Global calibration is solved by automatically constructing a weighted vision graph and finding optimal transformation paths between the cameras. In the optimization process, we introduce a novel parametrization for two-point calibration using a direction normal. The result is increased accuracy and robustness of the method in the presence of noise. In the paper, we present experimental results on synthetic and real camera setups. We have performed image noise analysis on a synthetic wide-area setup of 5 cameras. Finally, we present the results obtained on a real setup with 12 cameras. The results on the real camera setup show that our approach compensates for error propagation when the transformation path includes two to three nodes. No significant difference in reprojection error was found between cameras on non-direct and direct paths of the vision graph. The mean reprojection error for the real cameras was below 0.4 pixels.
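The "optimal transformation paths" step can be illustrated with a standard shortest-path search over a weighted vision graph. This is only a sketch with invented edge weights; in the paper the weights are derived from pairwise-calibration quality, so the path found accumulates the least error:

```python
import heapq

def best_path(weights, src, dst):
    """Dijkstra over an undirected vision graph; returns (cost, camera path)."""
    adj = {}
    for (a, b), w in weights.items():
        adj.setdefault(a, []).append((b, w))
        adj.setdefault(b, []).append((a, w))
    heap = [(0.0, src, [src])]
    done = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in done:
            continue
        done.add(node)
        for nbr, w in adj.get(node, []):
            if nbr not in done:
                heapq.heappush(heap, (cost + w, nbr, path + [nbr]))
    return float("inf"), []

# A direct but noisy pairwise calibration (A-B) loses to a two-hop path via C.
w = {("A", "B"): 5.0, ("A", "C"): 1.0, ("C", "B"): 1.0}
print(best_path(w, "A", "B"))
```

Composing transforms along the returned path then gives camera B's pose in A's frame, consistent with the error-propagation result the abstract reports for two- to three-node paths.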

69 citations


Cites result from "External camera calibration for syn..."

  • ...In contrast to other methods [7, 9] our approach resolves Euclidean reconstruction (preserving metric information) and introduces novel parameters reduction in the case of two-point bar calibration for multiple cameras as compared to [6]....


Journal ArticleDOI
TL;DR: This paper presents a multi-camera system that creates a highly accurate, 3-D reconstruction of an environment in real-time (under 30 ms) that allows for remote interaction between users and describes algorithms to calibrate, reconstruct, and render objects in the system.
Abstract: The growing popularity of 3-D movies has led to the rapid development of numerous affordable consumer 3-D displays. In contrast, the development of technology to generate 3-D content has lagged behind considerably. In spite of significant improvements to the quality of imaging devices, the accuracy of the algorithms that generate 3-D data, and the hardware available to render such data, the algorithms available to calibrate, reconstruct, and then visualize such data remain difficult to use, extremely noise sensitive, and unreasonably slow. In this paper, we present a multi-camera system that creates a highly accurate (on the order of a centimeter), 3-D reconstruction of an environment in real-time (under 30 ms) that allows for remote interaction between users. This paper focuses on addressing the aforementioned deficiencies by describing algorithms to calibrate, reconstruct, and render objects in the system. We demonstrate the accuracy and speed of our results on a variety of benchmarks and data collected from our own system.

56 citations

Journal ArticleDOI
TL;DR: A dynamic calibration procedure is introduced, which handles cameras and projectors in a unified way and allows continuous flexible setup changes, while seamless projection alignment and blending is performed simultaneously.
Abstract: We present a framework for achieving user-defined on-demand displays in setups containing bricks of movable cameras and DLP-projectors. A dynamic calibration procedure is introduced, which handles cameras and projectors in a unified way and allows continuous flexible setup changes, while seamless projection alignment and blending is performed simultaneously. For interaction, an intuitive laser pointer based technique is developed, which can be combined with real-time 3D information acquired from the scene. All these tasks can be performed concurrently with the display of a user-chosen application in a non-disturbing way. This is achieved by using an imperceptible structured light approach enabling pixel-based surface light control suited for a wide range of computer graphics and vision algorithms. To ensure scalability of light control in the same working space, multiple projectors are multiplexed.

49 citations

Journal ArticleDOI
TL;DR: A novel volumetric measurement method for small objects, using a binocular machine vision system, is developed; the achieved precision is high, with a standard deviation of 0.04 mm.

46 citations

Journal ArticleDOI
TL;DR: This paper describes several applications exploring the use of 3D teleimmersion for remote interaction and collaboration among professional and scientific users, and outlines the issues pertaining to capture, transmission, rendering, and interaction.
Abstract: Teleimmersion is an emerging technology that enables users to collaborate remotely by generating realistic 3D avatars in real time and rendering them inside a shared virtual space. The teleimmersive environment thus provides a venue for collaborative work on 3D data such as medical imaging, scientific data and models, archaeological datasets, architectural or mechanical designs, remote training (e.g., oil rigs, military applications), and remote teaching of physical activities (e.g., rehabilitation, dance). In this paper, we present our research work performed over the course of several years in developing the teleimmersive technology using image-based stereo and more recently Kinect. We outline the issues pertaining to the capture, transmission, rendering, and interaction. We describe several applications where we have explored the use of the 3D teleimmersion for remote interaction and collaboration among professional and scientific users. We believe the presented findings are relevant for future developers in teleimmersion and apply across various 3D video capturing technologies.

42 citations


Cites methods from "External camera calibration for syn..."

  • ...Despite several methods for extrinsic camera calibration available (e.g., Svoboda et al. 2005; Cheng et al. 2000; Ihrke et al. 2004), many of the methods are slow when applied to large numbers of cameras or they require a high degree of overlap between the camera views or they provide only relative calibration (i....



References
Journal ArticleDOI
TL;DR: A method is described for the minimization of a function of n variables, which depends on the comparison of function values at the (n + 1) vertices of a general simplex, followed by the replacement of the vertex with the highest value by another point.
Abstract: A method is described for the minimization of a function of n variables, which depends on the comparison of function values at the (n + 1) vertices of a general simplex, followed by the replacement of the vertex with the highest value by another point. The simplex adapts itself to the local landscape, and contracts on to the final minimum. The method is shown to be effective and computationally compact. A procedure is given for the estimation of the Hessian matrix in the neighbourhood of the minimum, needed in statistical estimation problems.
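A minimal, self-contained sketch of the simplex method described above, using the standard reflection/expansion/contraction/shrink coefficients (this is an illustration, not the authors' original procedure):

```python
def nelder_mead(f, simplex, iters=200):
    """Minimal Nelder-Mead: reflect, expand, contract, or shrink the simplex."""
    alpha, gamma, rho, sigma = 1.0, 2.0, 0.5, 0.5
    n = len(simplex[0])
    for _ in range(iters):
        simplex.sort(key=f)                          # best vertex first
        worst = simplex[-1]
        # centroid of all vertices except the worst
        cen = [sum(v[i] for v in simplex[:-1]) / (len(simplex) - 1)
               for i in range(n)]
        refl = [cen[i] + alpha * (cen[i] - worst[i]) for i in range(n)]
        if f(refl) < f(simplex[0]):                  # try expanding further
            exp = [cen[i] + gamma * (refl[i] - cen[i]) for i in range(n)]
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):               # accept the reflection
            simplex[-1] = refl
        else:                                        # contract toward the centroid
            con = [cen[i] + rho * (worst[i] - cen[i]) for i in range(n)]
            if f(con) < f(worst):
                simplex[-1] = con
            else:                                    # shrink toward the best vertex
                best = simplex[0]
                simplex[1:] = [[best[i] + sigma * (v[i] - best[i])
                                for i in range(n)] for v in simplex[1:]]
    return min(simplex, key=f)

quad = lambda v: (v[0] - 1.0) ** 2 + (v[1] - 2.0) ** 2
print(nelder_mead(quad, [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]))
```

In calibration pipelines like the one above, such a derivative-free minimizer can be applied to the reprojection error when analytic gradients are inconvenient.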

27,271 citations

Journal ArticleDOI
TL;DR: New results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form that provide the basis for an automatic system that can solve the Location Determination Problem under difficult viewing.
Abstract: A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting/smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): Given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing conditions.
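A minimal RANSAC sketch for the simplest model, a 2-D line, with invented data; the LDP application described above uses the same sample-score-repeat loop with a pose model in place of the line:

```python
import random

def ransac_line(points, iters=200, tol=0.1):
    """Fit y = m*x + b by repeatedly sampling 2 points and counting inliers."""
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = random.sample(points, 2)
        if x1 == x2:
            continue                      # degenerate sample, skip it
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        inliers = [(x, y) for x, y in points if abs(y - (m * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (m, b), inliers
    return best_model, best_inliers

random.seed(0)
# 8 points on y = 2x + 1 plus 4 gross outliers, mimicking detector errors.
pts = [(x, 2 * x + 1) for x in range(8)] + [(1, 9), (3, -4), (6, 30), (2, -7)]
model, inliers = ransac_line(pts)
print(model, len(inliers))
```

Because the model is fit only to minimal samples and scored by consensus, the gross outliers never contaminate the estimate, which is exactly the property that makes RANSAC suit error-prone feature detectors.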

23,396 citations

Journal ArticleDOI
Roger Y. Tsai1
01 Aug 1987
TL;DR: In this paper, a two-stage technique for 3D camera calibration using TV cameras and lenses is described, aimed at efficient computation of camera external position and orientation relative to object reference coordinate system as well as the effective focal length, radial lens distortion, and image scanning parameters.
Abstract: A new technique for three-dimensional (3D) camera calibration for machine vision metrology using off-the-shelf TV cameras and lenses is described. The two-stage technique is aimed at efficient computation of camera external position and orientation relative to object reference coordinate system as well as the effective focal length, radial lens distortion, and image scanning parameters. The two-stage technique has advantage in terms of accuracy, speed, and versatility over existing state of the art. A critical review of the state of the art is given in the beginning. A theoretical framework is established, supported by comprehensive proof in five appendixes, and may pave the way for future research on 3D robotics vision. Test results using real data are described. Both accuracy and speed are reported. The experimental results are analyzed and compared with theoretical prediction. Recent effort indicates that with slight modification, the two-stage calibration can be done in real time.
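The camera model underlying such a two-stage technique combines a pinhole projection with radial lens distortion. A minimal sketch of the forward model, with hypothetical parameter values (focal length f, one radial coefficient k1, principal point cx, cy):

```python
def project(point, f=500.0, k1=-0.1, cx=320.0, cy=240.0):
    """Pinhole projection with a single radial distortion term (invented values)."""
    X, Y, Z = point
    x, y = X / Z, Y / Z               # normalized image coordinates
    r2 = x * x + y * y
    d = 1.0 + k1 * r2                 # radial distortion factor
    return (cx + f * x * d, cy + f * y * d)

print(project((0.1, 0.2, 1.0)))
```

Extrinsic calibration, as in the paper this page lists citations for, estimates the rigid transform that maps world points into the (X, Y, Z) camera frame before this projection is applied.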

5,940 citations

Book
03 Jan 1992
TL;DR: A new technique for three-dimensional camera calibration for machine vision metrology using off-the-shelf TV cameras and lenses using two-stage technique has advantage in terms of accuracy, speed, and versatility over existing state of the art.
Abstract: A new technique for three-dimensional (3D) camera calibration for machine vision metrology using off-the-shelf TV cameras and lenses is described. The two-stage technique is aimed at efficient computation of camera external position and orientation relative to object reference coordinate system as well as the effective focal length, radial lens distortion, and image scanning parameters. The two-stage technique has advantage in terms of accuracy, speed, and versatility over existing state of the art. A critical review of the state of the art is given in the beginning. A theoretical framework is established, supported by comprehensive proof in five appendixes, and may pave the way for future research on 3D robotics vision. Test results using real data are described. Both accuracy and speed are reported. The experimental results are analyzed and compared with theoretical prediction. Recent effort indicates that with slight modification, the two-stage calibration can be done in real time.

5,816 citations

Book ChapterDOI
21 Sep 1999
TL;DR: A survey of the theory and methods of photogrammetric bundle adjustment can be found in this article, with a focus on general robust cost functions rather than restricting attention to traditional nonlinear least squares.
Abstract: This paper is a survey of the theory and methods of photogrammetric bundle adjustment, aimed at potential implementors in the computer vision community. Bundle adjustment is the problem of refining a visual reconstruction to produce jointly optimal structure and viewing parameter estimates. Topics covered include: the choice of cost function and robustness; numerical optimization including sparse Newton methods, linearly convergent approximations, updating and recursive methods; gauge (datum) invariance; and quality control. The theory is developed for general robust cost functions rather than restricting attention to traditional nonlinear least squares.
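One of the general robust cost functions such surveys discuss is the Huber cost, which is quadratic for small reprojection residuals and linear in the tails, so outliers are down-weighted. A minimal sketch (the delta threshold and residual values are invented):

```python
def huber(r, delta=1.0):
    """Huber cost: quadratic near zero, linear beyond the delta threshold."""
    a = abs(r)
    return 0.5 * r * r if a <= delta else delta * (a - 0.5 * delta)

# Two small residuals and one gross outlier; the outlier's cost grows only
# linearly, so it cannot dominate the bundle-adjustment objective.
residuals = [0.1, -0.3, 4.0]
print(sum(huber(r) for r in residuals))
```

In a full bundle adjustment this cost would be summed over every observed image point, with structure and camera parameters refined jointly by a sparse Newton-type method.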

3,521 citations