
Showing papers on "Homography (computer vision)" published in 2006


Proceedings ArticleDOI
23 Oct 2006
TL;DR: The proposed spatiotemporal video attention framework has been applied to over 20 test video sequences, and attended regions are detected that highlight interesting objects and motions present in the sequences, with a very high user-satisfaction rate.
Abstract: The human vision system actively seeks interesting regions in images to reduce the search effort in tasks such as object detection and recognition. Similarly, prominent actions in video sequences are more likely to attract our first sight than their surrounding neighbors. In this paper, we propose a spatiotemporal video attention detection technique for detecting the attended regions that correspond to both interesting objects and actions in video sequences. Both spatial and temporal saliency maps are constructed and further fused in a dynamic fashion to produce the overall spatiotemporal attention model. In the temporal attention model, motion contrast is computed from the planar motions (homographies) between images, which are estimated by applying RANSAC to point correspondences in the scene. To compensate for the non-uniform spatial distribution of interest points, the spanning areas of motion segments are incorporated in the motion-contrast computation. In the spatial attention model, a fast method for computing pixel-level saliency maps has been developed using color histograms of images. A hierarchical spatial attention representation is established to reveal the interesting points in images as well as the interesting regions. Finally, a dynamic fusion technique is applied to combine the temporal and spatial saliency maps: temporal attention dominates when large motion contrast exists, and spatial attention dominates otherwise. The proposed spatiotemporal attention framework has been applied to over 20 test video sequences, and the detected attended regions highlight interesting objects and motions present in the sequences with a very high user-satisfaction rate.
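The temporal model above rests on estimating a homography between frames by running RANSAC over point correspondences. The sketch below is a minimal illustration of that idea, not the authors' implementation: `homography_dlt` and `ransac_homography` are hypothetical helper names, and the direct linear transform (DLT) stands in for whatever estimator the paper actually uses.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate a 3x3 homography H (dst ~ H @ src) from >= 4 point pairs via DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the right singular vector of A with smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def ransac_homography(src, dst, iters=200, thresh=2.0, seed=0):
    """Fit H robustly: fit on random 4-point samples, keep the largest inlier set."""
    rng = np.random.default_rng(seed)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    best_H, best_inliers = None, np.zeros(len(src), bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = homography_dlt(src[idx], dst[idx])
        p = np.c_[src, np.ones(len(src))] @ H.T     # project all source points
        p = p[:, :2] / p[:, 2:3]
        inliers = np.linalg.norm(p - dst, axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_H, best_inliers = H, inliers
    if best_inliers.sum() < 4:
        return best_H, best_inliers
    # Refit on the consensus set for the final estimate.
    return homography_dlt(src[best_inliers], dst[best_inliers]), best_inliers
```

Correspondences that fit the dominant plane's homography are inliers; points whose residual motion exceeds the threshold are the candidates for high motion contrast.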

983 citations


Journal Article
TL;DR: In this paper, a multi-view multi-hypothesis approach to segmenting and tracking multiple (possibly occluded) persons on a ground plane is proposed, where several iterations of segmentation are performed using information from human appearance models and ground plane homography.
Abstract: A multi-view multi-hypothesis approach to segmenting and tracking multiple (possibly occluded) persons on a ground plane is proposed. During tracking, several iterations of segmentation are performed using information from human appearance models and ground plane homography. To determine the ground location of a person more precisely, all center vertical axes of the person across views are mapped to the top-view plane and their intersection point on the ground is estimated. To tackle the explosive state space due to multiple targets and views, iterative segmentation-searching is incorporated into a particle filtering framework. By searching for people's ground point locations from segmentations, a set of a few good particles can be identified, resulting in low computational cost. In addition, even if all the particles are away from the true ground point, some of them move towards the true one through the iterated process as long as they are located nearby. We demonstrate the performance of the approach on several video sequences.

200 citations


Journal ArticleDOI
J. Black1, Tim Ellis1
TL;DR: This paper presents a method for multi-camera image tracking in the context of image surveillance that exploits multiple camera views to resolve object occlusion and uses the Linear Kalman Filter, which is less cumbersome to implement than the Extended Kalman Filter.

180 citations


Journal ArticleDOI
TL;DR: A visual servo tracking controller is developed for a monocular camera system mounted on an underactuated wheeled mobile robot (WMR) subject to nonholonomic motion constraints (i.e., the camera-in-hand problem).
Abstract: A visual servo tracking controller is developed in this paper for a monocular camera system mounted on an underactuated wheeled mobile robot (WMR) subject to nonholonomic motion constraints (i.e., the camera-in-hand problem). A prerecorded image sequence (e.g., a video) of three target points is used to define a desired trajectory for the WMR. By comparing the target points from a stationary reference image with the corresponding target points in the live image and the prerecorded sequence of images, projective geometric relationships are exploited to construct Euclidean homographies. The information obtained by decomposing the Euclidean homography is used to develop a kinematic controller. A Lyapunov-based analysis is used to develop an adaptive update law to actively compensate for the lack of depth information required for the translation error system. Experimental results are provided to demonstrate the control design.
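The Euclidean homography that such controllers decompose obeys the standard relation H = R + (t/d) nᵀ for a plane with unit normal n at distance d from the reference camera. A minimal numerical sketch of this relation (the function name is mine, not the paper's):

```python
import numpy as np

def euclidean_homography(R, t, n, d):
    """H = R + (t / d) n^T maps normalized image coordinates of points lying on
    the plane {X : n.X = d} from the reference view into the current view,
    where X_current = R @ X_reference + t."""
    return R + np.outer(t, n) / d
```

Visual servo schemes such as the one above run this relation in reverse: they decompose an estimated H into (R, t/d, n) and feed those quantities to the kinematic controller, with an adaptive law compensating for the unknown depth d.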

147 citations


Book ChapterDOI
07 May 2006
TL;DR: A multi-view multi-hypothesis approach to segmenting and tracking multiple (possibly occluded) persons on a ground plane is proposed and a set of a few good particles can be identified, resulting in low computational cost.
Abstract: A multi-view multi-hypothesis approach to segmenting and tracking multiple (possibly occluded) persons on a ground plane is proposed. During tracking, several iterations of segmentation are performed using information from human appearance models and ground plane homography. To determine the ground location of a person more precisely, all center vertical axes of the person across views are mapped to the top-view plane and their intersection point on the ground is estimated. To tackle the explosive state space due to multiple targets and views, iterative segmentation-searching is incorporated into a particle filtering framework. By searching for people's ground point locations from segmentations, a set of a few good particles can be identified, resulting in low computational cost. In addition, even if all the particles are away from the true ground point, some of them move towards the true one through the iterated process as long as they are located nearby. We demonstrate the performance of the approach on several video sequences.

120 citations


Proceedings ArticleDOI
01 Oct 2006
TL;DR: The combination of robust homography estimation and adaptive thresholding of correlation scores between registered images yields the update of a stochastic grid that exhibits the perceived horizontal planar areas and allows the integration of data gathered at various altitudes.
Abstract: This paper presents an approach to detect safe landing areas for a flying robot on the basis of a sequence of monocular images. The approach does not require precise position and attitude sensors: it exploits the relations between 2D image homographies and 3D planes. The combination of robust homography estimation and adaptive thresholding of correlation scores between registered images yields the update of a stochastic grid that exhibits the perceived horizontal planar areas. This grid allows the integration of data gathered at various altitudes. Results are presented throughout the article.

84 citations


Proceedings ArticleDOI
15 May 2006
TL;DR: A new homography-based approach to image-based visual servoing that does not need any measure of the 3D structure of the observed target and the theoretical proof of the local stability of the control law is provided.
Abstract: The objective of this paper is to propose a new homography-based approach to image-based visual servoing. The visual servoing method does not need any measure of the 3D structure of the observed target. Only visual information measured from the reference and the current image is needed to compute the task function (isomorphic to the camera pose) and the control law to be applied to the robot. The control law is designed in order to make the task function converge to zero. We provide the theoretical proof of the existence of the isomorphism between the task function and the camera pose, and the theoretical proof of the local stability of the control law. The experimental results, obtained with a 6 d.o.f. robot, show the advantages of the proposed method with respect to the existing approaches.

72 citations


01 Jan 2006
TL;DR: A novel imaging technique based on a 3-D time-of-flight camera is proposed that provides better image quality with range data; 2D/3D data fusion becomes straightforward for mapping and/or image registration.
Abstract: In this paper, a novel imaging technique for enhancing 3-D vision is proposed. The enhancement is achieved with the help of a 3-D time-of-flight camera, which delivers the intensity and depth information of the scene in real time. Although this kind of 3-D camera provides accurate depth information, its low 2-D image resolution tends to be a hindering factor for many image processing applications. This limitation can be overcome with the proposed setup: a 2-D sensor (CCD/CMOS) with higher resolution is used to improve the image resolution. The 2-D and 3-D cameras are placed in a special housing, so that the field of view (FOV) is nearly the same for both cameras. The 2D/3D data fusion thus becomes straightforward for mapping and/or image registration. Hence the new system provides better image quality with range data. Within this paper, we discuss the initial results and findings of the proposed system.

71 citations


Proceedings ArticleDOI
15 May 2006
TL;DR: A practical approach to ground detection in mobile robot applications based on a monocular sequence captured by an on-board camera, and an efficient algorithm for the estimation of the dominant homography between two frames taken from the sequence.
Abstract: This paper presents a practical approach to ground detection in mobile robot applications based on a monocular sequence captured by an on-board camera. We formulate the problem of ground plane detection as one of estimating the dominant homography between two frames taken from the sequence, and then design an efficient algorithm for the estimation. In particular, we analyze a problem inherent to any homography-based approach to the given task, and show how the proposed approach can address this problem to a large degree. Although not explicitly discussed, the proposed method can be used to guide the maneuvering of the robot, as the detected ground plane can in turn be used in obstacle avoidance.

57 citations


Journal ArticleDOI
TL;DR: This paper develops an algorithm for dominant plane detection using optical flow and shows that the points on the dominant plane in a pair of successive images are related by an affine transformation when the mobile robot captures successive images for optical flow computation.

55 citations


Proceedings ArticleDOI
20 Aug 2006
TL;DR: This method is unique in not restricting the camera position, thus allowing greater flexibility than scanner-based or fixed-camera-based approaches, and can produce a very sharp, high-resolution, and accurate full-page mosaic from small image patches of a document.
Abstract: In this paper we present an image mosaicing method for camera-captured documents. Our method is unique in not restricting the camera position, thus allowing greater flexibility than scanner-based or fixed-camera-based approaches. To accommodate the perspective distortions introduced by varying poses, we implement a two-step image registration process that relies on accurately computing the projectivity between any two document images with an overlapping area as small as 10%. In the overlapping area, we apply a sharpness-based selection process to obtain seamless blending across the border and within. Experiments show that our approach can produce a very sharp, high-resolution, and accurate full-page mosaic from small image patches of a document.

Proceedings ArticleDOI
01 Oct 2006
TL;DR: A homography-based approach to detect the ground plane from monocular sequences captured by a robot platform that is not only more efficient and robust, but also able to avoid false detection due to virtual planes.
Abstract: We present a homography-based approach to detect the ground plane from monocular sequences captured by a robot platform. By assuming that the camera is fixed on the robot platform and can at most rotate horizontally, we derive the constraints that the homography of the ground plane must satisfy and then use these constraints to design algorithms for detecting the ground plane. Due to the reduced degrees of freedom, the resulting algorithm is not only more efficient and robust, but also able to avoid false detections due to virtual planes. We present experiments with real data from a robot platform to validate the proposed approaches.

Book ChapterDOI
TL;DR: A novel monocular-vision-based target parking-slot recognition method is proposed that recognizes parking-slot markings when the driver designates a seed point inside the target parking slot on a touch screen.
Abstract: A semi-automatic parking system is a driver-convenience system that automates the steering control required during a parking operation. This paper proposes a novel monocular-vision-based method for target parking-slot recognition: parking-slot markings are recognized once the driver designates a seed point inside the target parking slot on a touch screen. The proposed method compensates for the distortion of the fisheye lens and constructs a bird's-eye-view image using a homography. Because adjacent vehicles are projected along the outward direction from the camera in the bird's-eye-view image, the method can recognize the target parking-slot marking whenever the marking line segment separating parking slots from the roadway and the front ends of the marking line segments dividing parking slots are observed. A directional intensity gradient, which uses the width of the marking line segments and the direction of the seed point with respect to the camera position as prior knowledge, detects marking line segments irrespective of noise and illumination variation. By making efficient use of the structure of parking-slot markings in the bird's-eye-view image, the method recognizes the target parking-slot marking in a simple manner. Experiments validate that the proposed method successfully recognizes the target parking slot under various situations and illumination conditions.

Proceedings ArticleDOI
14 Jun 2006
TL;DR: This paper proposes a technique based on correspondences of contours that computes the projective homography in an iterative manner in the Fourier domain and does not require explicit point-to-point correspondences.
Abstract: Homography estimation is an important step in many computer vision algorithms. Most existing algorithms estimate the homography from point or line correspondences, which are difficult to obtain reliably in many real-life situations. In this paper we propose a technique based on correspondences of contours. Homography estimation is carried out in the Fourier domain. Starting from an affine estimate, the proposed algorithm computes the projective homography in an iterative manner. This technique does not require explicit point-to-point correspondences; in fact, such point correspondences are a by-product of the proposed algorithm. Experimental results and applications validate the use of our technique.

Proceedings ArticleDOI
01 Oct 2006
TL;DR: Instead of the typical feature matching or tracking, this work uses an improved stereo-tracking method that simultaneously determines the feature displacement in both cameras to calculate visual odometry for outdoor robots equipped with a stereo rig.
Abstract: In this paper, we present an approach to calculating visual odometry for outdoor robots equipped with a stereo rig. Instead of the typical feature matching or tracking, we use an improved stereo-tracking method that simultaneously determines the feature displacement in both cameras. Based on the matched features, a three-point algorithm for the resulting quadrifocal setting is carried out in a RANSAC framework to recover the unknown odometry. In addition, the change in rotation can be derived from the infinite homography, and the remaining translational unknowns can consequently be obtained even faster. Both approaches are quite robust and deal well with challenging conditions such as wheel slippage.
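Deriving rotation from the infinite homography uses the classical relation H∞ = s·K·R·K⁻¹, where K is the camera intrinsic matrix and s an arbitrary scale. A minimal sketch of the inverse operation (the function name is mine, not the paper's):

```python
import numpy as np

def rotation_from_infinite_homography(H_inf, K):
    """Recover the inter-frame rotation R from H_inf = s * K @ R @ inv(K)."""
    M = np.linalg.inv(K) @ H_inf @ K
    M = M / np.cbrt(np.linalg.det(M))   # remove the unknown scale s (det R = 1)
    # Project onto SO(3) to suppress numerical drift in the estimate.
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:
        R = -R
    return R
```

With R fixed this way, the translation is the only remaining unknown in the motion, which is what makes the subsequent estimation faster.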

Journal ArticleDOI
TL;DR: This paper proposes a novel method for camera calibration using images of a mirror symmetric object and shows that interimage homographies can be expressed as a function of only the principal point by minimizing symmetric transfer errors.
Abstract: This paper proposes a novel method for camera calibration using images of a mirror symmetric object. Assuming unit aspect ratio and zero skew, we show that interimage homographies can be expressed as a function of only the principal point. By minimizing symmetric transfer errors, we thus obtain an accurate solution for the camera parameters. We also extend our approach to a calibration technique using images of a 1-D object with a fixed pivoting point. Unlike existing methods that rely on orthogonality or the pole-polar relationship, our approach utilizes new interimage constraints and does not require knowledge of the 3-D coordinates of feature points. To demonstrate the effectiveness of the approach, we present results for both synthetic and real images.

Journal ArticleDOI
TL;DR: A robust recognition method for detecting a checkerboard pattern for camera calibration is proposed; experiments show that the method is robust over a range of illuminations and against complicated backgrounds.
Abstract: Camera calibration is an important procedure for the image-based three-dimensional (3D) shape reconstruction of existing objects. Methods that use different mathematics for the computation of the camera calibration have been reported in the literature, but reliable detection of calibration patterns is not considered in these methods, which prevents their practical use. A robust recognition method for detecting a checkerboard pattern for camera calibration is proposed. After an introduction of the principle of this method, experiments are used to check the method on a wide range of real data. The results show that the method is robust over a range of illuminations and against complicated backgrounds. The proposed method can be used for automatic camera calibration in real-world environments.

Proceedings ArticleDOI
01 Oct 2006
TL;DR: An image deformation model combining Thin-Plate Splines with 3D entities (a 3D control mesh and a camera) is proposed, overcoming the above-mentioned drawback, and demonstrated on simulated and real data.
Abstract: Registering images of a deforming surface is a well-studied problem. It is common practice to describe the image deformation fields with Thin-Plate Splines. This has the advantage of involving small numbers of parameters, but has the drawback that the 3D surface is not explicitly reconstructed. We propose an image deformation model combining Thin-Plate Splines with 3D entities (a 3D control mesh and a camera), overcoming the above-mentioned drawback. An original solution to the non-rigid image registration problem using this model is proposed and demonstrated on simulated and real data.

Proceedings ArticleDOI
15 May 2006
TL;DR: A homography-based visual servo controller is developed based on an error system composed of the unit quaternion representation; the proposed controller regulates a camera to a desired position and orientation determined from a desired image.
Abstract: Previous homography-based visual servo controllers have been developed using an error system that contains a singularity resulting from the representation of the rotation matrix. For some aerospace applications, such as visual servo control of satellites or air vehicles, the singularity introduced by the rotation representation may be restrictive. To eliminate this singularity, a homography-based visual servo controller is developed in this paper based on an error system composed of the unit quaternion representation. The proposed adaptive controller regulates a camera to a desired position and orientation that is determined from a desired image. A quaternion-based Lyapunov function is developed to facilitate the control design and the stability analysis.

Proceedings ArticleDOI
27 Oct 2006
TL;DR: This paper shows that a hand-off function can be constructed by computing the ratio of co-occurrence to occurrence for all pairs of points in two views, and analyzes the cases of hand-off ambiguity and failure.
Abstract: For robust continuous tracking across multiple views in a wide surveillance environment, it is helpful to know the geometric relationship, or a hand-off function, in addition to the photometric relationship between views. Previous works on camera hand-off were mainly focused on a planar ground with homography computation, or are strongly coupled with camera calibration and 3D scene recovery. In this paper, we propose a new method for camera hand-off, without camera calibration, for general scenes containing non-planar ground. Camera hand-off requires establishing correspondences between views; the concepts of occurrence and co-occurrence are introduced to solve this problem. We show that a hand-off function can be constructed by computing the ratio of co-occurrence to occurrence for all pairs of points in two views. In addition, we analyze the cases of hand-off ambiguity and failure. Our approach to camera hand-off does not require camera calibration or 3D scene recovery, and can be applied to scenes with non-planar ground. Furthermore, all operations, from construction of the hand-off function to hand-off of objects across views, can be performed fully automatically. Experimental results for image sequences of a real outdoor scene are shown.

Proceedings ArticleDOI
01 Oct 2006
TL;DR: A camera position and homography estimation method is proposed considering the operations in image-based rendering; the results show that watermark detection is achieved successfully for the cases in which the imagery camera is arbitrarily located on the camera plane.
Abstract: The recent advances in image-based rendering (IBR) have pioneered a new technology, free-view television, in which TV viewers freely select the viewing position and angle through the application of IBR to the transmitted multi-view video. Noting that a TV viewer might also record a personal video for this arbitrarily selected view and misuse this content, it is apparent that copyright and copy protection problems also exist and should be solved for free-view TV. In this paper, we focus on this problem by proposing a watermarking method for free-view video. The watermark is embedded into every frame of multiple views by exploiting the spatial masking properties of the human visual system (HVS). Assuming that the position and rotation of the imagery view are known, the proposed method extracts the watermark successfully from an arbitrarily generated image. In order to extend the method to the case of an unknown imagery camera position and rotation, the modifications of the watermark pattern due to image-based rendering operations are also analyzed. Based on this analysis, a camera position and homography estimation method is proposed considering the operations in image-based rendering. The results show that watermark detection is achieved successfully for the cases in which the imagery camera is arbitrarily located on the camera plane.

Proceedings ArticleDOI
01 Dec 2006
TL;DR: A visual servo tracking controller is developed in this paper for a monocular camera system mounted on an unmanned aerial vehicle (UAV) to track a leading UAV with a fixed relative position and orientation.
Abstract: A visual servo tracking controller is developed in this paper for a monocular camera system mounted on an Unmanned Aerial Vehicle (UAV) to track a leading UAV with a fixed relative position and orientation. Specifically, by comparing the feature points on the leading UAV from a prerecorded desired image with the corresponding feature points in the live image, projective geometric relationships are exploited to construct a Euclidean homography. A theoretical framework is developed for the homography-based visual servo technique in the general case in which both the camera and the object are moving relative to an inertial coordinate frame. This technique can also be applied to visual estimation of the velocity or Euclidean structure of a moving object by a moving camera. The information obtained by decomposing the Euclidean homography is used to develop a robust kinematic controller that eliminates the need to know the leading UAV's velocities. A Lyapunov-based analysis is used to show that the proposed control strategy achieves global uniformly ultimately bounded (GUUB) position and orientation tracking.

Proceedings ArticleDOI
09 Jul 2006
TL;DR: It is shown that the image registration process can be dealt with from the perspective of a compression problem, and it is demonstrated that the similarity metric, introduced by Li et al., performs well in image registration.
Abstract: Image registration is an important component of image analysis used to align two or more images. In this paper, we present a new framework for image registration based on compression. The basic idea underlying our approach is the conjecture that two images are correctly registered when we can maximally compress one image given the information in the other. The contribution of this paper is twofold. First, we show that the image registration process can be dealt with from the perspective of a compression problem. Second, we demonstrate that the similarity metric, introduced by Li et al., performs well in image registration. Two different versions of the similarity metric have been used: the Kolmogorov version, computed using standard real-world compressors, and the Shannon version, calculated from an estimation of the entropy rate of the images.
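The Kolmogorov version of the Li et al. similarity metric is commonly approximated by the normalized compression distance (NCD), with a real-world compressor standing in for Kolmogorov complexity. A minimal sketch with zlib (an illustration of the metric itself, not the paper's image pipeline):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: close to 0 for near-identical data,
    close to 1 for unrelated data. C(.) is approximated by zlib's output size."""
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)
```

In the registration setting, one image is compressed given candidate alignments of the other, and the alignment that minimizes the distance (equivalently, maximizes the shared compression) is kept.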

Proceedings ArticleDOI
01 Oct 2006
TL;DR: A second-order optimization algorithm is proposed that considerably increases the convergence domain and the convergence rate of standard first- order optimization algorithms while having an almost equivalent computational complexity.
Abstract: This paper deals with the problem of tracking a piecewise-planar scene in a video sequence and, at the same time, with the problem of accurately estimating the 3D displacement of the camera for robotic applications. A new approach to the problem is proposed and two noticeable contributions are given. Firstly, the explicit dependency between the 2D image transformation parameters (a homography for each plane) and the 3D camera displacement parameters is computed. Secondly, a second-order optimization algorithm is proposed. The second-order optimization considerably increases the convergence domain and the convergence rate of standard first-order optimization algorithms while having an almost equivalent computational complexity.

Proceedings ArticleDOI
21 Aug 2006
TL;DR: A new vision-based state estimation method that allows sets of feature points to be related such that the aircraft position and orientation can be correlated to previous GPS data so that GPS-like navigation can be maintained in denied environments.
Abstract: While a Global Positioning System (GPS) is the most widely used sensor modality for aircraft navigation, researchers have been motivated to investigate other navigational sensor modalities because of the desire to operate in GPS-denied environments. Due to advances in computer vision and control theory, monocular camera systems have received growing interest as an alternative/collaborative sensor to GPS systems. Cameras can act as navigational sensors by detecting and tracking feature points in an image. One limiting factor in this method is the current inability to relate feature points as they enter and leave the camera field of view. The contribution of this paper is a new vision-based state estimation method that allows sets of feature points to be related such that the aircraft position and orientation can be correlated to previous GPS data, so that GPS-like navigation can be maintained in denied environments. The Volpe report acted as an impetus for many companies and institutions to investigate mitigation strategies for the vulnerabilities associated with the current GPS navigation aid protocol, nearly all following the suggested GPS backup methods that revert to the archaic/legacy methods. Unfortunately, these navigational modalities are limited by the range of their land-based transmitters, which are expensive and may not be feasible for remote or hazardous environments. Based on these restrictions, researchers have investigated local methods of estimating position when GPS is denied. Given the advancements in computer vision and control theory, monocular camera systems have received growing interest as a local alternative/collaborative sensor to GPS systems. One issue that has inhibited the use of a vision system as a navigational aid is the difficulty in reconstructing inertial measurements from the projected image. Current approaches to estimating the aircraft state through a camera system utilize the motion of feature points in an image.
Current approaches to recover the inertial state of the aircraft via a camera system include linear or nonlinear estimation methods. In contrast to these estimation methods, a geometric approach is proposed in this paper that uses a series of homography relationships. Specifically, a new method is proposed to create a series of daisy-chained images in which the feature points can be related so that the inertial coordinates of an aircraft can be determined between each successive image. Through these relationships, GPS data can be linked with the image data to provide inertial measurements in navigational regions where GPS is denied. Recently, a similar method using homography relationships between images to estimate the pose of an aircraft was presented by Caballero et al. [2]
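The daisy-chaining idea, composing successive frame-to-frame homographies so that geometry referenced in the first (GPS-anchored) image can be propagated to later images, can be sketched as follows. This is a toy illustration under my own naming, not the authors' method:

```python
import numpy as np

def chain_homographies(pairwise):
    """Given homographies H_i mapping frame i -> frame i+1, return the cumulative
    homographies mapping frame 0 -> frame i for every i (identity for i = 0)."""
    chained = [np.eye(3)]
    for H in pairwise:
        C = np.asarray(H, float) @ chained[-1]
        chained.append(C / C[2, 2])     # keep a canonical scale
    return chained

def apply_h(H, p):
    """Apply a homography to a 2D point (homogeneous multiply, then dehomogenize)."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```

Because each cumulative map points back to frame 0, features that have long since left the field of view still constrain the current pose through the chain.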

Proceedings ArticleDOI
04 Jan 2006
TL;DR: The proposed approach for registering aerial images taken at different times, viewpoints, or heights does not need image matching or correspondence and can handle large rotation and scaling between reference and observed images.
Abstract: This paper presents an approach for registering aerial images taken at different times, viewpoints, or heights. Unlike conventional image registration algorithms, our approach does not need image matching or correspondence. We extract a number of corner features as the basis for registration and create image patches centered on the corner points in both the reference and observed images. So that corresponding patches cover the same scene, we use a circle of variable radius as the shape of the image patches; in this way, the patches can handle cases in which rotation and scaling occur simultaneously between the reference and observed images. From the orientation differences of patches between the two images, we create an angle histogram with a voting procedure. The rotation angle between the two images is determined by seeking the orientation difference that corresponds to the maximum peak in the histogram. Once we have the rotation angle, we look up the two corresponding patches whose orientation difference equals the rotation angle; the ratio of the radii of these two patches gives the scaling. The proposed approach can handle large rotation and scaling between reference and observed images. It is applied to real aerial images and the results are very satisfying.
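The angle-histogram voting step can be sketched as below; `rotation_by_voting` is a hypothetical name, and this toy version votes over all pairwise orientation differences rather than the paper's patch-matching pipeline:

```python
import numpy as np

def rotation_by_voting(angles_ref, angles_obs, bin_deg=2.0):
    """Estimate the global rotation (degrees) between two images from patch
    orientations: vote every pairwise orientation difference into a histogram
    and return the center of the peak bin."""
    diffs = np.subtract.outer(np.asarray(angles_obs, float),
                              np.asarray(angles_ref, float)).ravel() % 360.0
    edges = np.arange(0.0, 360.0 + bin_deg, bin_deg)
    hist, edges = np.histogram(diffs, bins=edges)
    k = int(hist.argmax())
    return 0.5 * (edges[k] + edges[k + 1])
```

Matched patches all vote for the true rotation, so their bin towers over the flat background of unmatched pairs; the same peak-seeking idea extends to the radius ratio for scale.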

Proceedings ArticleDOI
11 Sep 2006
TL;DR: An original method to reconstruct the road in the specific context of urban environments, using the data provided by an uncalibrated stereo-vision system, that copes with dense traffic conditions.
Abstract: We present in this article an original method to reconstruct the road in the specific context of urban environments, using the data provided by an uncalibrated stereo-vision system. The method consists of extracting and then tracking features (points, lines) on the road and estimating the homography induced by the road plane between two poses. The proposed method copes with dense traffic conditions: the free space required (the first ten meters in front of the vehicle) is roughly equivalent to the safety distance between two vehicles. Experimental results on real data are presented and discussed.
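Once features on the road plane have been tracked between two poses, the induced homography can be estimated from the point correspondences. The paper does not specify its estimator, so the sketch below uses the standard Direct Linear Transform as a stand-in (four or more correspondences, SVD of the stacked constraint system):

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct Linear Transform: estimate the 3x3 homography mapping
    src (N, 2) onto dst (N, 2), N >= 4, as the null vector of the
    stacked 2N x 9 constraint matrix (smallest singular vector)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]

# Synthetic check: a known scale-plus-shift mapping of four road points
src = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
dst = src * 2.0 + np.array([3.0, 1.0])
h = homography_dlt(src, dst)
```

In practice a robust wrapper (e.g. RANSAC over the tracked features) would be layered on top, since features off the road plane violate the homography model.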

Proceedings ArticleDOI
22 Oct 2006
TL;DR: A method to automatically detect and reconstruct planar surfaces for immediate use in AR tasks by dividing the image into a grid and clustering rectangles that belong to the same planar surface around the local maxima of a Hough transform.
Abstract: This paper proposes a method to automatically detect and reconstruct planar surfaces for immediate use in AR tasks. Traditional methods for plane detection are typically based on the comparison of transfer errors of a homography, which makes them sensitive to the choice of a discrimination threshold. We propose a very different approach: the image is divided into a grid, and rectangles that belong to the same planar surface are clustered around the local maxima of a Hough transform. As a result, we simultaneously get clusters of coplanar rectangles and the image of their intersection line with a reference plane, which easily leads to their 3D position and orientation. Results are shown on both synthetic and real data.
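For context, the threshold-sensitive baseline the paper argues against scores candidate planes by the transfer error of a homography. A small sketch of the symmetric transfer error (the function name and test data are hypothetical, and the paper's own method deliberately avoids this score):

```python
import numpy as np

def symmetric_transfer_error(h, src, dst):
    """Mean symmetric transfer error of a homography over point pairs:
    forward error |H x - x'| plus backward error |H^-1 x' - x|.
    Plane membership is then decided by thresholding this value."""
    def transfer(m, pts):
        ph = np.hstack([pts, np.ones((len(pts), 1))]) @ m.T
        return ph[:, :2] / ph[:, 2:3]
    fwd = np.linalg.norm(transfer(h, src) - dst, axis=1)
    bwd = np.linalg.norm(transfer(np.linalg.inv(h), dst) - src, axis=1)
    return float(np.mean(fwd + bwd))

# Points related by a pure 4-pixel horizontal shift
h_true = np.array([[1.0, 0.0, 4.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
dst = src + np.array([4.0, 0.0])
err_zero = symmetric_transfer_error(h_true, src, dst)   # exact fit
err_bad = symmetric_transfer_error(np.eye(3), src, dst)  # wrong model
```

The need to pick a cutoff between `err_zero`-like and `err_bad`-like values is exactly the threshold sensitivity the Hough-clustering approach sidesteps.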

Proceedings ArticleDOI
17 Jun 2006
TL;DR: This work studies the problem of automatically reconstructing the 3-D location of the victim of a shooting from photographs of planar surfaces with blood splattered on them, and analyzes this problem in terms of the multiple-view geometry ofPlanar conic sections.
Abstract: Reconstruction of the point source of blood splatter in a crime scene is an important and difficult problem in forensic science. We study the problem of automatically reconstructing the 3-D location of the victim of a shooting from photographs of planar surfaces with blood splattered on them. We analyze this problem in terms of the multiple-view geometry of planar conic sections. Using projective invariants associated with pairs of conic sections, we match images of multiple conic sections taken from widely separated viewpoints. We further recover the homography between two views using the common tangents of pairs of conic sections. The location of the point source is then retrieved from the reconstructed scene geometry. We suggest how to extend these results to scenes containing multiple planar surfaces, and verify the proposed method with experiments on both synthetic and real images.
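A basic building block behind the conic-based analysis above is how a conic, written as a symmetric 3x3 matrix C with points satisfying x^T C x = 0 in homogeneous coordinates, transforms under a point homography H: the image conic is C' = H^{-T} C H^{-1}. A minimal illustration (the specific matrices are examples, not from the paper):

```python
import numpy as np

def transform_conic(c, h):
    """Map a conic (symmetric 3x3 matrix C, points satisfy x^T C x = 0)
    through a point homography H: C' = H^{-T} C H^{-1}."""
    h_inv = np.linalg.inv(h)
    return h_inv.T @ c @ h_inv

# The unit circle x^2 + y^2 - 1 = 0 as a conic matrix
circle = np.diag([1.0, 1.0, -1.0])
# A homography that shifts the plane 2 units along x
shift = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
moved = transform_conic(circle, shift)
on_circle = np.array([3.0, 0.0, 1.0])   # lies on the shifted circle
off_circle = np.array([2.0, 0.0, 1.0])  # its center, not on it
```

This transformation rule is what makes quantities built from pairs of conic matrices usable as projective invariants for matching splatter ellipses across widely separated views.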

Journal ArticleDOI
TL;DR: A flexible and straightforward way of using different amounts of knowledge of the translational motion for the calibration task is derived; the theory is mainly applicable in a robot vision setting, and the calculation of the hand-eye orientation and the special case of stereo head calibration are also addressed.
Abstract: In this paper, a technique for calibrating a camera using a planar calibration object with known metric structure, when the camera (or the calibration plane) undergoes pure translational motion, is presented. The study is an extension of the standard formulation of plane-based camera calibration, where the translational case is considered degenerate. We derive a flexible and straightforward way of using different amounts of knowledge of the translational motion for the calibration task. The theory is mainly applicable in a robot vision setting, and the calculation of the hand-eye orientation and the special case of stereo head calibration are also addressed. Results of experiments on both computer-generated and real image data are presented. The paper covers the most useful instances of applying the technique to a real system and discusses the degenerate cases that need to be considered. The paper also presents a method for calculating the infinite homography between the two image planes in a stereo head, using the homographies estimated between the calibration plane and the image planes. Its possible usage and usefulness for simultaneous calibration of the two cameras in the stereo head are discussed and illustrated using experiments.
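As a related building block, the two plane-to-image homographies of a stereo head immediately give the homography between the image planes induced by the calibration plane (a simpler object than the infinite homography the paper ultimately computes, which additionally requires handling the plane at infinity). A sketch with hypothetical matrices:

```python
import numpy as np

def inter_image_homography(h_left, h_right):
    """Plane-induced homography between the two image planes of a
    stereo head, from each camera's plane-to-image homography:
    H = H_right @ H_left^{-1} maps left-image pixels to right-image
    pixels for points lying on the calibration plane."""
    h = h_right @ np.linalg.inv(h_left)
    return h / h[2, 2]

def to_pixel(h, xy):
    """Project a 2D point through a 3x3 homography."""
    v = h @ np.array([xy[0], xy[1], 1.0])
    return v[:2] / v[2]

# Hypothetical plane-to-image homographies for the two cameras
h_left = np.array([[2.0, 0.0, 1.0], [0.0, 2.0, 2.0], [0.0, 0.0, 1.0]])
h_right = np.array([[1.0, 0.0, 5.0], [0.0, 1.0, -1.0], [0.0, 0.0, 1.0]])
h_lr = inter_image_homography(h_left, h_right)
left_px = to_pixel(h_left, (1.0, 1.0))   # plane point imaged on the left
right_px = to_pixel(h_lr, left_px)       # transferred to the right image
```

The transfer is exact only for points on the calibration plane; it is this plane-induced mapping, combined with knowledge of the translational motion, that the paper leverages toward the infinite homography.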