
Showing papers on "Homography (computer vision)" published in 2008


Proceedings ArticleDOI
23 Jun 2008
TL;DR: A method for simultaneously tracking all the people in a densely crowded scene using a set of cameras with overlapping fields of view that was successful in tracking up to 21 people walking in a small area, in spite of severe and persistent occlusions.
Abstract: Tracking people in a dense crowd is a challenging problem for a single camera tracker due to occlusions and extensive motion that make human segmentation difficult. In this paper we suggest a method for simultaneously tracking all the people in a densely crowded scene using a set of cameras with overlapping fields of view. To overcome occlusions, the cameras are placed at a high elevation and only people's heads are tracked. Head detection is still difficult since each foreground region may consist of multiple subjects. By combining data from several views, height information is extracted and used for head segmentation. The head tops, which are regarded as 2D patches at various heights, are detected by applying intensity correlation to aligned frames from the different cameras. The detected head tops are then tracked using common assumptions on motion direction and velocity. The method was tested on sequences in indoor and outdoor environments under challenging illumination conditions. It was successful in tracking up to 21 people walking in a small area (2.5 people per m²), in spite of severe and persistent occlusions.
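The alignment-and-correlate idea above can be sketched compactly. The following Python fragment is an illustrative reading, not the authors' code: it assumes the per-camera homographies induced by the plane at a candidate head height are already known (hypothetical inputs), and scores each reference pixel by averaged local normalized cross-correlation against the warped views.

```python
# Illustrative sketch of the alignment-and-correlate idea, not the authors'
# code. Assumes per-camera homographies induced by the plane at a candidate
# head height are already known (hypothetical inputs).
import cv2
import numpy as np

def height_correlation(ref, others, homographies, win=11):
    """Average local zero-mean normalized cross-correlation between the
    reference view and the other views warped through the plane-at-height
    homographies; pixels lying on that plane score close to 1."""
    ref = ref.astype(np.float32)
    h, w = ref.shape
    mu_r = cv2.blur(ref, (win, win))
    var_r = cv2.blur(ref * ref, (win, win)) - mu_r * mu_r
    acc = np.zeros((h, w), np.float32)
    for img, H in zip(others, homographies):
        wrp = cv2.warpPerspective(img, H, (w, h)).astype(np.float32)
        mu_w = cv2.blur(wrp, (win, win))
        var_w = cv2.blur(wrp * wrp, (win, win)) - mu_w * mu_w
        cov = cv2.blur(ref * wrp, (win, win)) - mu_r * mu_w
        acc += cov / (np.sqrt(np.maximum(var_r * var_w, 0)) + 1e-6)
    return acc / len(others)
```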

180 citations


Journal ArticleDOI
TL;DR: The experimental results show that the proposed algorithm can select sufficient control points semi-automatically to reduce the local distortions caused by local height variation, resulting in improved image registration results.

97 citations


Journal ArticleDOI
TL;DR: Two methods are proposed: 1) geometry and 2) homography calibration, where polynomials with automated model selection are used to approximate the camera's projection model and spatial mapping, respectively, improving mapping accuracy and flexibility in adjusting to varying system configurations.
Abstract: Dual-camera systems have been widely used in surveillance because of the ability to explore the wide field of view (FOV) of the omnidirectional camera and the wide zoom range of the PTZ camera. Most existing algorithms require a priori knowledge of the omnidirectional camera's projection model to solve the nonlinear spatial correspondences between the two cameras. To overcome this limitation, two methods are proposed: 1) geometry and 2) homography calibration, where polynomials with automated model selection are used to approximate the camera's projection model and spatial mapping, respectively. The proposed methods not only improve the mapping accuracy by reducing its dependence on the knowledge of the projection model but also feature reduced computations and improved flexibility in adjusting to varying system configurations. Although the fusion of multiple cameras has attracted increasing attention, most existing algorithms assume comparable FOV and resolution levels among multiple cameras. Different FOV and resolution levels of the omnidirectional and PTZ cameras result in another critical issue in practical tracking applications. The omnidirectional camera is capable of multiple object tracking while the PTZ camera is able to track one individual target at one time to maintain the required resolution. It becomes necessary for the PTZ camera to distribute its observation time among multiple objects and visit them in sequence. Therefore, this paper addresses a novel scheme where an optimal visiting sequence of the PTZ camera is obtained so that in a given period of time the PTZ camera automatically visits multiple detected motions in a target-hopping manner. The effectiveness of the proposed algorithms is illustrated via extensive experiments using both synthetic and real tracking data and comparisons with two reference systems.
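The polynomial spatial mapping can be illustrated with a small sketch. This is a hedged approximation of the idea, not the paper's implementation: the function names are hypothetical, the polynomial degree is fixed here, and the paper's automated model selection is assumed to happen elsewhere.

```python
# Hedged sketch of the polynomial spatial-mapping idea, not the paper's
# implementation. The degree is fixed here; the paper's automated model
# selection is assumed to happen elsewhere.
import numpy as np

def poly_features(xy, degree):
    """Bivariate monomials x^i * y^j with i + j <= degree."""
    x, y = xy[:, 0], xy[:, 1]
    cols = [x ** i * y ** j for i in range(degree + 1)
            for j in range(degree + 1 - i)]
    return np.stack(cols, axis=1)

def fit_mapping(omni_xy, pan_tilt, degree=3):
    """Least-squares fit of pan/tilt angles as polynomials of omni pixels."""
    A = poly_features(omni_xy, degree)
    coeffs, *_ = np.linalg.lstsq(A, pan_tilt, rcond=None)
    return coeffs  # shape: (n_terms, 2)

def apply_mapping(coeffs, omni_xy, degree=3):
    return poly_features(omni_xy, degree) @ coeffs
```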

80 citations


Journal ArticleDOI
TL;DR: A novel algorithm is proposed that computes a 2D homography to re-orient the sign for further steps such as sign recognition, showing great robustness to object scaling, rotation, projective deformation, partial occlusions and noise.

67 citations


Proceedings ArticleDOI
Yuxin Jin, Linmi Tao, Huijun Di, Naveed Iqbal Rao, Guangyou Xu
12 Dec 2008
TL;DR: A novel multi-layer homography algorithm is proposed for background modeling from a free-moving camera; rectified by the corresponding homography, each static pixel in the shared view can find its match in the previous frame.
Abstract: This paper proposes a novel multi-layer homography algorithm for background modeling from a free-moving camera. The background is composed of many planes, and different planes satisfy different homographies, which can be found by our algorithm. Each pixel, except for moving pixels, belongs to some plane. Rectified by the corresponding homography, each static pixel in the shared view can find its match in the previous frame. Thus, frames can be rectified to a specific viewpoint for background modeling. Experiments show the approach is effective. Our approach can be used for motion detection from a free-moving camera.
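One plausible way to realize the multi-plane idea is sequential RANSAC: estimate a homography, remove its inliers, and repeat. The sketch below is an assumption-laden illustration, not the authors' algorithm; ORB features and all thresholds are arbitrary stand-ins.

```python
# One plausible realization of the multi-plane idea: sequential RANSAC.
# Assumption-laden illustration, not the authors' algorithm; ORB features
# and all thresholds are arbitrary stand-ins.
import cv2
import numpy as np

def multi_layer_homographies(img_prev, img_cur, max_layers=4, min_inliers=20):
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img_prev, None)
    k2, d2 = orb.detectAndCompute(img_cur, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    layers = []
    while len(p1) >= min_inliers and len(layers) < max_layers:
        H, mask = cv2.findHomography(p2, p1, cv2.RANSAC, 3.0)
        if H is None or mask.sum() < min_inliers:
            break
        layers.append(H)          # one homography per scene plane
        keep = mask.ravel() == 0  # drop this plane's inliers, find the next
        p1, p2 = p1[keep], p2[keep]
    return layers
```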

60 citations


Journal ArticleDOI
TL;DR: A GPU library, MinGPU, is presented that contains all of the necessary functions to port existing CPU code to the GPU; GPU implementations of several well-known computer vision algorithms have been created, including the homography transformation between two 3D views.
Abstract: In the field of computer vision, it is becoming increasingly popular to implement algorithms, in sections or in their entirety, on a graphics processing unit (GPU). This is due to the superior speed GPUs offer compared to CPUs. In this paper, we present a GPU library, MinGPU, which contains all of the necessary functions to convert an existing CPU code to GPU. We have created GPU implementations of several well known computer vision algorithms, including the homography transformation between two 3D views. We provide timing charts and show that our MinGPU implementation of homography transformations performs approximately 600 times faster than its C++ CPU implementation.
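For reference, the per-pixel computation that such a GPU implementation parallelizes can be written out in NumPy. This is only a didactic sketch of an inverse-mapping homography warp with nearest-neighbor sampling, not MinGPU code.

```python
# Didactic NumPy version of the per-pixel computation a GPU homography
# warp parallelizes (inverse mapping, nearest-neighbor sampling). This is
# not MinGPU code.
import numpy as np

def warp_homography(img, H, out_shape):
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1).astype(float)
    src = np.linalg.inv(H) @ pts            # output pixel -> source location
    src = src[:2] / src[2]                  # perspective divide
    sx = np.clip(np.round(src[0]).astype(int), 0, img.shape[1] - 1)
    sy = np.clip(np.round(src[1]).astype(int), 0, img.shape[0] - 1)
    return img[sy, sx].reshape(h, w, *img.shape[2:])
```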

55 citations


Proceedings ArticleDOI
08 Dec 2008
TL;DR: A new technique is described which improves the robustness of ear registration and recognition, addressing issues of pose variation, background clutter and occlusion.
Abstract: Significant recent progress has shown ear recognition to be a viable biometric. Good recognition rates have been demonstrated under controlled conditions, using manual registration or with specialised equipment. This paper describes a new technique which improves the robustness of ear registration and recognition, addressing issues of pose variation, background clutter and occlusion. By treating the ear as a planar surface and creating a homography transform using SIFT feature matches, ears can be registered accurately. The feature matches reduce the gallery size and enable a precise ranking using a simple 2D distance algorithm. When applied to the XM2VTS database it gives results comparable to PCA with manual registration. Further analysis on more challenging datasets demonstrates the technique to be robust to background clutter, viewing angles up to ±13 degrees and with over 20% occlusion.
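The registration step described above follows a now-standard pattern. A hedged sketch using OpenCV's SIFT; the ratio threshold, RANSAC tolerance, and function name are illustrative, not the paper's:

```python
# Hedged sketch of the SIFT-plus-homography registration step; thresholds
# and the function name are illustrative, not the paper's.
import cv2
import numpy as np

def register_planar(probe, gallery):
    sift = cv2.SIFT_create()
    kp1, d1 = sift.detectAndCompute(probe, None)
    kp2, d2 = sift.detectAndCompute(gallery, None)
    knn = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m[0] for m in knn
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
    if len(good) < 4:
        return None  # a homography needs at least 4 correspondences
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return cv2.warpPerspective(probe, H, gallery.shape[1::-1])
```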

51 citations


Journal ArticleDOI
TL;DR: This work proposes a new approach for coping with this problem in multi-camera systems with overlapped Fields of View (FoVs), and proposes a complete system that provides segmentation and tracking of people in each camera module.

47 citations


Proceedings ArticleDOI
28 May 2008
TL;DR: An efficient rectification algorithm for multi-view images captured by a parallel camera array is presented; rectified images have uniform horizontal disparities and no vertical mismatches between adjacent views.
Abstract: In this paper, we present an efficient rectification algorithm for multi-view images captured by a parallel camera array. Since conventional stereo image rectification methods did not consider three or more cameras simultaneously, we propose an algorithm to rectify multi-view images at the same time. We calculate the common baseline considering all camera positions and apply the rectifying transformation defined by camera rotations and camera intrinsic parameters. From our experiments, we have found that the proposed method rectifies multi-view images efficiently. Rectified images have uniform horizontal disparities and no vertical mismatches between adjacent views.
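One ingredient of such multi-view rectification is the common baseline shared by all cameras. A minimal sketch, assuming the camera centers are known: fit their dominant direction by a least-squares line (via SVD), which the rectifying rotations then align with the image x-axis.

```python
# Minimal sketch of one ingredient, assuming the camera centers are known:
# the common baseline as the dominant direction of the centers
# (least-squares line fit via SVD).
import numpy as np

def common_baseline(centers):
    C = np.asarray(centers, float)
    C0 = C - C.mean(axis=0)
    _, _, Vt = np.linalg.svd(C0)
    return Vt[0]  # unit direction minimizing orthogonal residuals
```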

47 citations


Proceedings ArticleDOI
30 Sep 2008
TL;DR: A system which robustly detects and tracks objects in a multi camera environment and performs a subsequent behavioral analysis based on luggage related events is presented.
Abstract: CCTV systems have been introduced in most public spaces in order to increase security. Video outputs are observed by human operators if possible but are mostly used as a forensic tool. It therefore seems desirable to automate video surveillance systems, in order to detect potentially dangerous situations as soon as possible. Multi-camera systems seem to be a prerequisite for large spaces where occlusions frequently appear. In this paper we present a system which robustly detects and tracks objects in a multi-camera environment and performs a subsequent behavioral analysis based on luggage-related events.

44 citations


Journal ArticleDOI
TL;DR: It is shown that when the image line(s) is passing through or close to the origin, the line-based homography estimation could become wildly unstable whereas the point-based estimation performs normally.

Journal ArticleDOI
03 Jan 2008
TL;DR: This paper presents an approach which is capable of stitching single endoscopic video images into a combined panoramic image; results show correct stitching and provide a better overview and understanding of the operating field.
Abstract: The medical diagnostic analysis and therapy of urinary bladder cancer based on endoscopes are state of the art in urological medicine. Due to the limited field of view of endoscopes, the physician can examine only a small part of the whole operating field at once. This constraint makes visual control and navigation difficult, especially in hollow organs. A panoramic image, covering a larger field of view, can overcome this difficulty. Directly motivated by a physician we developed an image mosaicing algorithm for endoscopic bladder fluorescence video sequences. In this paper, we present an approach which is capable of stitching single endoscopic video images into a combined panoramic image. Based on SIFT features we estimate a 2-D homography for each image pair, using an affine model and an iterative model-fitting algorithm. We then apply the stitching process and perform a mutual linear interpolation. Our panoramic image results show correct stitching and provide a better overview and understanding of the operating field.
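The blending step can be sketched as follows; "mutual linear interpolation" is read here as distance-weighted feathering in the overlap region, which is an assumption, and the images are assumed to be 3-channel and already aligned.

```python
# Sketch of a blending step; "mutual linear interpolation" is read here as
# distance-weighted feathering in the overlap region, which is an
# assumption. Images are assumed 3-channel and already aligned.
import cv2
import numpy as np

def feather_blend(mosaic, warped):
    m_valid = (mosaic.sum(axis=-1) > 0).astype(np.uint8)
    w_valid = (warped.sum(axis=-1) > 0).astype(np.uint8)
    wm = cv2.distanceTransform(m_valid, cv2.DIST_L2, 3)
    ww = cv2.distanceTransform(w_valid, cv2.DIST_L2, 3)
    total = wm + ww
    total[total == 0] = 1.0              # avoid division by zero outside
    alpha = (wm / total)[..., None]      # per-pixel weight of the mosaic
    return (alpha * mosaic + (1 - alpha) * warped).astype(mosaic.dtype)
```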

Journal ArticleDOI
TL;DR: This work proposes a new camera calibration algorithm based on the calibration objects with three noncollinear points, which are midway between 1D and 2D calibration objects.
Abstract: Plane-based (2D) camera calibration has become a hot research topic in recent years because of its flexibility. However, at least four image points are needed in every view to denote the coplanar feature in 2D camera calibration. Can we calibrate a camera using a calibration object that has only three points? Some 1D camera calibration techniques use a setup of three collinear points with known distances, but this is a special case of calibration-object setup. What about the general setup of three noncollinear points? We propose a new camera calibration algorithm based on calibration objects with three noncollinear points. Experiments with simulated data and real images are carried out to verify the theoretical correctness and numerical robustness of our results. Because objects with three noncollinear points have special properties in camera calibration, they are midway between 1D and 2D calibration objects. Our method is thus a new kind of camera calibration algorithm.

Journal ArticleDOI
TL;DR: This paper proposes a straightforward geometric statement of plane-based self-calibration, through the concept of metric rectification of images, which appears to be theoretically equivalent but conceptually simpler.

Journal ArticleDOI
TL;DR: This work provides a completely new rigorous matrix formulation of the absolute quadratic complex (AQC), given by the set of lines intersecting the absolute conic, and completely characterize the 6×6 matrices acting on lines which are induced by a spatial homography.
Abstract: We provide a completely new rigorous matrix formulation of the absolute quadratic complex (AQC), given by the set of lines intersecting the absolute conic. The new results include closed-form expressions for the camera intrinsic parameters in terms of the AQC, an algorithm to obtain the dual absolute quadric from the AQC using straightforward matrix operations, and an equally direct computation of a Euclidean-upgrading homography from the AQC. We also completely characterize the 6×6 matrices acting on lines which are induced by a spatial homography. Several algorithmic possibilities arising from the AQC are systematically explored and analyzed in terms of efficiency and computational cost. Experiments include 3D reconstruction from real images.

Journal Article
TL;DR: The originality of the method resides in the new technique used to estimate the homography of the plane at infinity by the minimization of a non-linear cost function that is based on a particular motion of the camera "translation and small rotation".
Abstract: In this article, we are interested in camera self-calibration from three views of a 3-D scene. The originality of our method resides in the new technique used to estimate the homography of the plane at infinity by minimizing a non-linear cost function based on a particular motion of the camera, "translation and small rotation". Our approach also allows us to calculate the camera parameters and the depths of interest points detected in the images. Experimental results demonstrate the performance of our algorithms in terms of precision and convergence.

Journal ArticleDOI
TL;DR: A novel variational approach for simultaneous segmentation of two images of the same object taken from different viewpoints, with a unified level-set framework for region and edge based segmentation associated with a shape similarity term.
Abstract: We present a novel variational approach for simultaneous segmentation of two images of the same object taken from different viewpoints. Due to noise, clutter and occlusions, neither of the images contains sufficient information for correct object-background partitioning. The evolving object contour in each image provides a dynamic prior for the segmentation of the other object view. We call this process mutual segmentation. The foundation of the proposed method is a unified level-set framework for region and edge based segmentation, associated with a shape similarity term. The suggested shape term incorporates the semantic knowledge gained in the segmentation process of the image pair, accounting for excess or deficient parts in the estimated object shape. Transformations, including planar projectivities, between the object views are accommodated by a registration process held concurrently with the segmentation. The proposed segmentation algorithm is demonstrated on a variety of image pairs. The homography between each of the image pairs is estimated and its accuracy is evaluated.

Proceedings ArticleDOI
23 Jun 2008
TL;DR: The explicit application of articulation constraints for estimating the motion of a system of planes is described, relating articulations to the relative homography between planes and showing that for affine cameras, these articulations translate into linear equality constraints on a linear least squares system, yielding accurate and numerically stable estimates of motion.
Abstract: In this paper, we describe the explicit application of articulation constraints for estimating the motion of a system of planes. We relate articulations to the relative homography between planes and show that for affine cameras, these articulations translate into linear equality constraints on a linear least squares system, yielding accurate and numerically stable estimates of motion. The global nature of motion estimation allows us to handle areas where there is limited texture information and areas that leave the field of view. Our results demonstrate the accuracy of the algorithm in a variety of cases such as human body tracking, motion estimation of rigid, piecewise planar scenes and motion estimation of triangulated meshes.
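The numerical core named above, linear least squares under linear equality constraints, can be written in a few lines. The generic KKT-system solver below is standard textbook material, not the paper's specific formulation.

```python
# Minimize ||A x - b||^2 subject to the linear equality constraints
# C x = d, solved via the standard KKT system. Generic textbook method,
# not the paper's specific formulation.
import numpy as np

def constrained_lstsq(A, b, C, d):
    n, m = A.shape[1], C.shape[0]
    KKT = np.block([[2 * A.T @ A, C.T],
                    [C, np.zeros((m, m))]])
    rhs = np.concatenate([2 * A.T @ b, d])
    sol = np.linalg.solve(KKT, rhs)
    return sol[:n]  # x; sol[n:] holds the Lagrange multipliers
```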

Patent
Zhao Ming
11 Mar 2008
TL;DR: In this patent, a digital image stabilization method is presented, which includes extracting characterizing points from a current frame and matching them with points of a previous frame, detecting outliers among the matched points and removing them, calculating a homography using the characterizing points from which the outliers have been removed, and correcting the current frame using the homography.
Abstract: Provided is a digital image stabilization method. The method includes: extracting characterizing points from a current frame; matching the characterizing points of the current frame with characterizing points of a previous frame; detecting an outlier from the matched characterizing points and removing the outlier; calculating homography using the characterizing points from which the outlier has been removed; and correcting the current frame using the homography.
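A plausible realization of the claimed pipeline with standard OpenCV calls; the patent does not name specific detectors or matchers, so the choices below are assumptions.

```python
# Plausible realization of the claimed pipeline with standard OpenCV calls;
# the patent names no specific detector or matcher, so these are assumptions.
import cv2
import numpy as np

def stabilize(prev_gray, cur_gray, cur_frame):
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, 400, 0.01, 8)
    pts_cur, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray,
                                                  pts_prev, None)
    ok = status.ravel() == 1
    # RANSAC inside findHomography performs the outlier-removal step
    H, _ = cv2.findHomography(pts_cur[ok], pts_prev[ok], cv2.RANSAC, 3.0)
    return cv2.warpPerspective(cur_frame, H,
                               (cur_frame.shape[1], cur_frame.shape[0]))
```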

Proceedings ArticleDOI
23 Jun 2008
TL;DR: It is shown that a conjugate rotation has 7 degrees of freedom (as opposed to 8 for a general homography) and a minimal parameterization is given; the proposed algorithm directly yields rotation, focal length and principal point.
Abstract: When rotating a pinhole camera, images are related by the infinite homography KRK⁻¹, which is algebraically a conjugate rotation. Although it is a very common image transformation, e.g. important for self-calibration or panoramic image mosaicing, it is not yet completely understood. We show that a conjugate rotation has 7 degrees of freedom (as opposed to 8 for a general homography) and give a minimal parameterization. To estimate the conjugate rotation, authors have traditionally made use of point correspondences, which can be seen as local zero-order Taylor approximations to the image transformation. Recently, however, affine feature correspondences have become increasingly popular. We observe that each such affine correspondence provides a local first-order Taylor approximation, which has not been exploited in the context of geometry estimation before. Using these two novel concepts, we finally show that it is possible to estimate a conjugate rotation from a single affine feature correspondence under the assumption of square pixels and zero skew. As a byproduct, the proposed algorithm directly yields rotation, focal length and principal point.
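The object being estimated is easy to write down. A minimal sketch constructing the infinite homography under the paper's square-pixel, zero-skew assumption; the function and its parameterization are illustrative, not the paper's estimator.

```python
# Minimal sketch constructing the infinite homography H = K R K^(-1) under
# the square-pixel, zero-skew assumption; the function and parameterization
# are illustrative, not the paper's estimator.
import cv2
import numpy as np

def infinite_homography(f, u0, v0, rvec):
    K = np.array([[f, 0.0, u0],
                  [0.0, f, v0],
                  [0.0, 0.0, 1.0]])
    R, _ = cv2.Rodrigues(np.asarray(rvec, np.float64).reshape(3, 1))
    return K @ R @ np.linalg.inv(K)
```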

Proceedings Article
01 Sep 2008
TL;DR: This paper proposes a compensating transformation based on a two-dimensional homography that provides not only a clear viewpoint change in the multi-view video but also simplicity and accuracy in matching between adjacent views.
Abstract: In this paper, we present a geometrical compensation method for a calibrated multi-view video captured by a one-dimensional (1-D) parallel or arc camera array. Since the multi-view video may include geometrical errors, we define a compensating transformation based on a two-dimensional (2-D) homography. After we apply the compensating transformation, we obtain image planes that are aligned vertically and rotated to be suitable for the camera array. Experimental results show that the proposed method correctly compensates for the geometrical errors of the multi-view video. We can provide not only a clear viewpoint change in the multi-view video but also simplicity and accuracy in matching between adjacent views.

Proceedings ArticleDOI
30 Sep 2008
TL;DR: A novel continuous object tracking method using probabilistic camera hand-off, which does not require complicated pre-processing such as camera calibration, is proposed; results show that dominant-camera selection is more accurate and stable than selection using the ratio of foreground blocks only.
Abstract: Continuous object tracking for visual surveillance using multiple cameras is a difficult task because several problems must be solved, such as the randomness of object movement, the uncertainty of the external environment, and the hand-off among cameras. To overcome these problems, a novel continuous object tracking method using probabilistic camera hand-off, which does not require complicated pre-processing such as camera calibration, is proposed, as follows. First, we obtain the foreground objects using SKDA (sequential kernel density approximation)-based background subtraction. Second, we compute the proximity probabilities based on the number of foreground blocks and the angle distance between the camera and the object, and find the dominant camera by selecting the highest proximity probability. We then perform a probabilistic camera hand-off using the dominant camera probabilities between two frames. Finally, we estimate the object trajectories using the homography between the dominant camera and the map. Experimental results show that (1) the accuracy and stability of dominant camera selection using both the ratio of foreground blocks and the ratio of the angle distance are higher than those using the ratio of foreground blocks only, and (2) the position error between the ground-truth position and the tracked position is approximately 40 cm on average.
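The final trajectory step, mapping image positions into map coordinates, is a single projective transform. A minimal sketch, assuming a camera-to-map homography H_map estimated offline from landmark correspondences (a hypothetical input):

```python
# Minimal sketch of the trajectory step, assuming a camera-to-map
# homography H_map estimated offline from landmark correspondences
# (a hypothetical input).
import cv2
import numpy as np

def image_to_map(points_xy, H_map):
    pts = np.float32(points_xy).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H_map).reshape(-1, 2)
```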

Proceedings ArticleDOI
11 Jun 2008
TL;DR: A method is presented to attach reference frames to piecewise planar objects in view of a single camera using Euclidean homography relationships and a single known geometric length on a single object, making it useful in position-based visual servo control.
Abstract: A method is presented to attach reference frames to piecewise planar objects in view of a single camera. This method uses Euclidean homography relationships and a single known geometric length on a single object. By attaching reference frames to objects in the scene, the method is useful in position-based visual servo control, where it allows control of pose with respect to an object. The method is distinguished from methods that require a detailed model of the object/scene to give camera pose relative to an object, and from methods that can only give the current camera pose with respect to a pose where a reference image was taken. Simulations of the method for camera-in-hand and camera-to-hand visual servo control tasks are presented. Experiments are presented where the reconstruction method is used to estimate the pose of a vehicle. These experiments represent the initial steps in a vision-based vehicle following controller.
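Recovering relative pose from a plane-induced Euclidean homography is available off the shelf. The sketch below uses OpenCV's decomposition rather than the paper's own derivation; choosing among the up-to-four mathematical solutions and fixing the translation scale with the known length are assumed to happen elsewhere.

```python
# Off-the-shelf sketch using OpenCV's homography decomposition rather than
# the paper's own derivation; choosing among the up-to-four solutions and
# fixing translation scale with the known length are assumed elsewhere.
import cv2

def poses_from_homography(H, K):
    n, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
    return list(zip(Rs, ts, normals))  # candidate (R, t, plane normal) sets
```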

Proceedings ArticleDOI
12 Dec 2008
TL;DR: It is shown that in the case of a general rotation the problem of calibrating a rotating and zooming camera can be solved in closed form by applying a series of Givens rotations, providing five intrinsic parameters from only a minimal set of two images.
Abstract: Due to their ease of use and availability, pan-tilt-zoom (PTZ) cameras are everywhere. However, in order to utilize these cameras for a meaningful application, they must be calibrated. In this paper, we address the problem of calibrating such a rotating and zooming camera and present a simple yet novel solution. We show that in the case of a general rotation the problem can be solved in closed form by applying a series of Givens rotations, providing five intrinsic parameters from only a minimal set of two images. Whereas other methods use orthogonality of the rotation matrix as a constraint, our approach applies direct decomposition of the infinite homography. As a result, we are able to obtain four constraints for estimating camera parameters. We rigorously test our method on synthetic data by introducing very large noise. We also compare our method with the state-of-the-art PTZ camera calibration method. Our solutions and analysis are thoroughly validated on both synthetic and real data.
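To make the "series of Givens rotations" concrete, here is the textbook RQ decomposition of a 3×3 matrix by three Givens rotations (Hartley and Zisserman, Appendix A4), the standard tool for splitting a projective quantity into an upper-triangular calibration part and a rotation. This illustrates the machinery, not the paper's closed-form solution.

```python
# Textbook RQ decomposition of a 3x3 matrix by three Givens rotations
# (Hartley & Zisserman, Appendix A4). This illustrates the Givens
# machinery, not the paper's closed-form solution.
import numpy as np

def rq3(M):
    A = np.asarray(M, float).copy()
    n = np.hypot(A[2, 1], A[2, 2])                       # Qx zeroes A[2,1]
    c, s = A[2, 2] / n, -A[2, 1] / n
    Qx = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    A = A @ Qx
    n = np.hypot(A[2, 0], A[2, 2])                       # Qy zeroes A[2,0]
    c, s = A[2, 2] / n, A[2, 0] / n
    Qy = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    A = A @ Qy
    n = np.hypot(A[1, 0], A[1, 1])                       # Qz zeroes A[1,0]
    c, s = A[1, 1] / n, -A[1, 0] / n
    Qz = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    A = A @ Qz
    Q = (Qx @ Qy @ Qz).T
    return A, Q  # A upper-triangular, Q a rotation, A @ Q == M
```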

Patent
04 Mar 2008
TL;DR: In this article, a method for calibrating a camera including obtaining a two-dimensional (2D) homography that maps each of parallelograms projected onto images taken by two arbitrary cameras into a rectangle, wherein the 2D homography is defined as a rectification homography and wherein new cameras that have virtual images are defined as rectified cameras and a new infinite homography was generated between the two rectification cameras, the virtual images being transformed from original images by the rectificationhomography.
Abstract: A method for calibrating a camera including (a) obtaining a two-dimensional (2D) homography that maps each of parallelograms projected onto images taken by two arbitrary cameras into a rectangle, wherein the 2D homography is defined as a rectification homography and wherein new cameras that have virtual images are defined as rectified cameras and a new infinite homography is generated between the two rectified cameras, the virtual images being transformed from original images by the rectification homography; (b) obtaining an original infinite homography by using the correlations among the new infinite homography, the rectification homography and the original infinite homography; and (c) obtaining intrinsic camera parameters based on the correlation between the original infinite homography and the intrinsic camera parameters, thereby calibrating the camera.

Proceedings ArticleDOI
07 Jan 2008
TL;DR: This paper presents an unsupervised data driven scheme to automatically estimate the relative topology of overlapping cameras in a large visual sensor network by employing the statistics of co-occurring observations (of moving targets) in each sensor.
Abstract: This paper presents an unsupervised data driven scheme to automatically estimate the relative topology of overlapping cameras in a large visual sensor network. The proposed method learns the camera topology by employing the statistics of co-occurring observations (of moving targets) in each sensor. Since target observation data is typically very noisy in realistic scenarios, an efficient two step method is used for robust estimation of the planar homography between camera views. In the first step, modes in the co-occurrence data are learned using meanshift. In the second step, a RANSAC based procedure is used to estimate the homography from weighted co-occurrence modes. Note that the first step not only lessens the effects of noise but also reduces the search space for efficient calculation. Unlike most existing algorithms for overlapping camera calibration, the proposed method uses an update mechanism to adapt online to the changes in network topology. The method does not assume prior knowledge about the scene, target, or network properties. It is also robust to noise, traffic intensity, and the amount of overlap between the fields of view. Experiments and quantitative evaluation using both synthetic and real data are presented to support the above claims.

Proceedings ArticleDOI
Zhang Xuebo, Fang Yongchun, Ma Bojun, Liu Xi, Zhang Ming
16 Jul 2008
TL;DR: A fast homography decomposition algorithm for visual servo of mobile robots is presented; the complexity of the algorithm decreases significantly by avoiding the singular value decomposition (SVD) step in most cases, making it potentially competitive in applications such as visual servo tasks executed at video rate.
Abstract: A fast homography decomposition algorithm for visual servo of mobile robots is presented. Motion constraints of the mobile robot are exploited, and thus the complexity of the algorithm decreases significantly by avoiding the singular value decomposition (SVD) step in most cases, making it potentially competitive in applications such as visual servo tasks executed at video rate. Moreover, the ambiguity problem, which is troublesome for general approaches on unconstrained configurations such as manipulators, does not exist for the mobile robot except when the reference target plane is perpendicular to the horizontal ground. Besides, the proposed algorithm can also be exploited in other related areas such as vision-based localization, provided that the camera undergoes planar motion. Simulation results considering pixel noise are provided to demonstrate the performance of the proposed method.

DissertationDOI
01 Jan 2008
TL;DR: In this thesis, an overview of existing algorithms estimating the ego-motion is given; based on it, a suitable algorithm is selected and extended by a motion model, which considerably increases the accuracy as well as the robustness of the estimate.
Abstract: Traffic is increasing continuously. Nevertheless, the number of traffic fatalities has decreased in the past. One reason for this is passive safety systems, such as side crash protection or the airbag, which have been engineered over the last decades and which are standard in today's cars. Active safety systems are increasingly being developed. They are able to avoid, or at least to mitigate, accidents. For example, adaptive cruise control (ACC), originally designed as a comfort system, is being developed towards an emergency brake system. Active safety requires sensors perceiving the vehicle environment. ACC uses radar or laser scanners. However, cameras are also interesting sensors, as they are capable of processing visual information such as traffic signs or lane markings. In traffic, moving objects (cars, bicyclists, pedestrians) play an important role. Perceiving them is essential for active safety systems. This thesis deals with the detection of moving objects utilizing a monocular camera. The detection is based on the motions within the video stream (optical flow). If the ego-motion and the location of the camera with respect to the road plane are known, the viewed scene can be reconstructed in 3D by exploiting the measured optical flow. In this thesis an overview of existing algorithms estimating the ego-motion is given. Based on it, a suitable algorithm is selected and extended by a motion model. The latter considerably increases the accuracy as well as the robustness of the estimate. The location of the camera with respect to the road plane is estimated using the optical flow on the road. The road may be temporarily low-textured, making it hard to measure the optical flow; consequently, the road homography estimate will be poor. A novel Kalman filtering approach combining the estimate of the ego-motion and the estimate of the road homography leads to far better results. The 3D reconstruction of the viewed scene is performed pointwise for each measured optical flow vector. A point is reconstructed through intersection of the viewing rays that are determined by the optical flow vector. This only yields a correct result for static, i.e. non-moving, points. Further, static points fulfill four constraints: the epipolar constraint, the trifocal constraint, the positive depth constraint, and the positive height constraint. If at least one constraint is violated, the point is moving. For the first time, an error metric is developed exploiting all four constraints. It measures the deviation from the constraints quantitatively in a unified manner. Based on this error metric, the detection limits are investigated. It is shown that overtaking objects are detected very well, whereas objects being overtaken are hardly detected. Oncoming objects on a straight road are not detected by means of the available constraints; only if one assumes that these objects are opaque and touch the ground does the detection become feasible. An appropriate heuristic is introduced. In conclusion, the developed algorithms form a system to detect moving points robustly. The problem of clustering the detected moving points into objects is outlined; it serves as a starting point for further research activities.
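Of the four constraints, the epipolar one is the simplest to write down. A minimal sketch, assuming the fundamental matrix F of the estimated ego-motion is available; the first-order (Sampson) residual flags flow vectors inconsistent with a static scene:

```python
# One of the four constraints, sketched: the first-order (Sampson) epipolar
# residual, assuming the fundamental matrix F of the estimated ego-motion
# is available. Large residuals flag candidate moving points.
import numpy as np

def sampson_residual(F, x1, x2):
    """x1, x2: homogeneous image points (3-vectors) linked by optical flow."""
    Fx1, Ftx2 = F @ x1, F.T @ x2
    num = (x2 @ F @ x1) ** 2
    den = Fx1[0] ** 2 + Fx1[1] ** 2 + Ftx2[0] ** 2 + Ftx2[1] ** 2
    return num / den
```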

Proceedings ArticleDOI
12 Dec 2008
TL;DR: The proposed method takes two uncalibrated images as input, extracts and matches interest points, and then performs plane identification and matching defined by sets of three points.
Abstract: In this paper, we propose a new approach for extracting the major planes of a scene from uncalibrated pairs of images. In contrast to existing methods, our method does not make any assumptions about the images or the co-planarity of points. The proposed method takes two uncalibrated images as input, extracts and matches interest points, and then performs plane identification and matching defined by sets of three points. For each set of three points, a plane homography is then calculated. Once all possible planes have been identified, a merging stage is carried out to improve robustness and to ensure that the same plane is associated with a single homography. Furthermore, the method is capable of distinguishing between physical and virtual planes. Experiments on a variety of real images demonstrate the validity of the proposed approach.

Journal ArticleDOI
TL;DR: A method is presented to estimate the rotation matrix and translation vector between the camera and the projector using plane-based homography and an approach is introduced to analyze theoretically the error sensitivity in the estimated pose parameters with respect to noise in the projection points.
Abstract: We investigate the problem of dynamic calibration for our structured light system. First, a method is presented to estimate the rotation matrix and translation vector between the camera and the projector using plane-based homography. Then an approach is introduced to theoretically analyze the error sensitivity of the estimated pose parameters with respect to noise in the projection points. This algorithm is simple and easy to implement. Finally, some numerical simulations and real-data experiments are carried out to validate our method.