Journal ArticleDOI

Trajectory triangulation: 3D reconstruction of moving points from a monocular image sequence

01 Apr 2000-IEEE Transactions on Pattern Analysis and Machine Intelligence (IEEE Computer Society)-Vol. 22, Iss: 4, pp 348-357
TL;DR: The problem of reconstructing the 3D coordinates of a moving point seen from a monocular moving camera, i.e., reconstructing moving objects from line-of-sight measurements only, is considered; solutions for points moving along straight-line and conic-section trajectories are investigated.
Abstract: We consider the problem of reconstructing the 3D coordinates of a moving point seen from a monocular moving camera, i.e., to reconstruct moving objects from line-of-sight measurements only. The task is feasible only when some constraints are placed on the shape of the trajectory of the moving point. We coin the family of such tasks as "trajectory triangulation." We investigate the solutions for points moving along straight-line and conic-section trajectories. We show that if the point is moving along a straight line, then the parameters of the line (and, hence, the 3D position of the point at each time instant) can be uniquely recovered, and by linear methods, from at least five views. For the case of conic-shaped trajectory, we show that generally nine views are sufficient for a unique reconstruction of the moving point and fewer views when the conic is of a known type (like a circle in 3D Euclidean space, for which seven views are sufficient). The paradigm of trajectory triangulation, in general, pushes the envelope of processing dynamic scenes forward. Thus static scenes become a particular case of a more general task of reconstructing scenes rich with moving objects (where an object could be a single point).
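The straight-line case admits a compact linear sketch. Assuming known camera poses (as in the paper's setting), each frame yields a line-of-sight ray toward the moving point, and the trajectory line must intersect every such ray. In Plucker coordinates, two lines (a, a') and (b, b') intersect iff a . b' + a' . b = 0, so each view contributes one linear homogeneous equation on the six trajectory-line coordinates, and five generic views determine the line up to scale. The function names and the SVD-based solver below are illustrative choices, not the paper's exact formulation:

```python
import numpy as np

def plucker(point, direction):
    """Plucker coordinates (direction, moment) of the 3D line through
    `point` with the given direction."""
    d = direction / np.linalg.norm(direction)
    return np.concatenate([d, np.cross(point, d)])

def triangulate_line_trajectory(centers, rays):
    """Recover a straight-line trajectory from >= 5 line-of-sight rays.

    Each frame i supplies the ray from camera center C_i along unit
    direction d_i toward the moving point. The incidence condition
    a . b' + a' . b = 0 between the unknown trajectory line and each
    observed ray gives one linear equation per frame; the trajectory
    line is the null vector of the stacked system (up to scale).
    """
    A = []
    for c, d in zip(centers, rays):
        m = plucker(c, d)                         # observed ray, Plucker form
        A.append(np.concatenate([m[3:], m[:3]]))  # reciprocal-product row
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1]                                 # trajectory line, up to scale
```

With noisy data one would use more than five views and take the same smallest-singular-vector solution in a least-squares sense, then enforce the quadratic Plucker constraint.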


Citations
Patent
26 Oct 2015
TL;DR: In this article, a forward-facing vision system for a vehicle includes a forward-facing camera disposed in a windshield electronics module attached at a windshield of the vehicle and viewing through the windshield.
Abstract: A forward-facing vision system for a vehicle includes a forward-facing camera disposed in a windshield electronics module attached at a windshield of the vehicle and viewing through the windshield. A control includes a processor that, responsive to processing of captured image data, detects taillights of leading vehicles during nighttime conditions and, responsive to processing of captured image data, detects lane markers on a road being traveled by the vehicle. The control, responsive to lane marker detection and a determination that the vehicle is drifting out of a traffic lane, may control a steering system of the vehicle to mitigate such drifting, with the steering system manually controllable by a driver of the vehicle irrespective of control by the control. The processor, based at least in part on detection of lane markers via processing of captured image data, determines curvature of the road being traveled by the vehicle.

615 citations

Patent
16 Jan 2012
TL;DR: In this article, the camera is disposed at an interior portion of a vehicle equipped with the vehicular vision system, where the camera either (i) views exterior of the equipped vehicle through the windshield and forward of the equipped vehicle or (ii) views from the windshield into the interior cabin of the equipped vehicle.
Abstract: A vehicular vision system includes a camera having a lens and a CMOS photosensor array having a plurality of photosensor elements. The camera is disposed at an interior portion of a vehicle equipped with the vehicular vision system. The camera one of (i) views exterior of the equipped vehicle through the windshield of the equipped vehicle and forward of the equipped vehicle and (ii) views from the windshield of the equipped vehicle into the interior cabin of the equipped vehicle. A control includes an image processor that processes image data captured by the photosensor array. The image processor processes captured image data to detect an object viewed by the camera. The photosensor array is operable at a plurality of exposure periods and at least one exposure period of the plurality of exposure periods is dynamically variable.

576 citations

Patent
02 Apr 2008
TL;DR: In this article, a light source transilluminates the single transparency with optical radiation so as to project the pattern onto the object, and a processor processes the image captured by the image capture assembly to reconstruct a 3D map of the object.
Abstract: Apparatus for mapping an object includes an illumination assembly, which includes a single transparency containing a fixed pattern of spots. A light source transilluminates the single transparency with optical radiation so as to project the pattern onto the object. An image capture assembly captures an image of the pattern that is projected onto the object using the single transparency. A processor processes the image captured by the image capture assembly so as to reconstruct a three-dimensional (3D) map of the object.

529 citations

Patent
18 Nov 2013
TL;DR: In this paper, an adaptive speed control system for controlling the speed of a vehicle is proposed that responds to detection of a curve in the road ahead of the vehicle via processing, by the image processor, of image data captured by the imaging device.
Abstract: A driver assistance system for a vehicle includes an imaging device having a field of view forward of a vehicle and in a direction of travel of the equipped vehicle, an image processor operable to process image data captured by the imaging device, and a global positioning system operable to determine a geographical location of the vehicle. The equipped vehicle includes an adaptive speed control system for controlling the speed of the equipped vehicle. The adaptive speed control system may reduce the speed of the equipped vehicle responsive at least in part to a detection of a curve in the road ahead of the equipped vehicle via processing by the image processor of image data captured by the imaging device.

305 citations

Journal ArticleDOI
TL;DR: This article presents for the first time a survey of visual SLAM and SfM techniques that are targeted toward operation in dynamic environments and identifies three main problems: how to perform reconstruction, how to segment and track dynamic objects, and how to achieve joint motion segmentation and reconstruction.
Abstract: In the last few decades, Structure from Motion (SfM) and visual Simultaneous Localization and Mapping (visual SLAM) techniques have gained significant interest from both the computer vision and robotic communities. Many variants of these techniques have started to make an impact in a wide range of applications, including robot navigation and augmented reality. However, despite some remarkable results in these areas, most SfM and visual SLAM techniques operate based on the assumption that the observed environment is static. However, when faced with moving objects, overall system accuracy can be jeopardized. In this article, we present for the first time a survey of visual SLAM and SfM techniques that are targeted toward operation in dynamic environments. We identify three main problems: how to perform reconstruction (robust visual SLAM), how to segment and track dynamic objects, and how to achieve joint motion segmentation and reconstruction. Based on this categorization, we provide a comprehensive taxonomy of existing approaches. Finally, the advantages and disadvantages of each solution class are critically discussed from the perspective of practicality and robustness.

298 citations


Cites background from "Trajectory triangulation: 3D recons..."

  • ..., depending on the trajectory assumption [6])....


  • ...Avidan and Shashua [5, 6] coined the term trajectory triangulation as a technique to reconstruct 3D points of the moving object when the object trajectory is known or satisfies a parametric form....


  • ...Instead of assuming that the object is moving along a line, Shashua et al. [144] assumed that the object is moving over a conic section....


  • ...Prior knowledge about the camera motion is not needed, although some approaches [5, 6] assume that the camera pose is available....


References
Journal ArticleDOI
TL;DR: In this paper, the authors proposed a method for fitting conic sections to scattered data, which permits an extension to conic splines around extended digitized curves, expediting a smooth reconstruction of their curvature.
Abstract: The problem of fitting conic sections to scattered data has arisen in several applied literatures. The quadratic form Ax^2 + Bxy + Cy^2 + Dx + Ey + F that is minimized in mean square is proportional to the ratio of two squared distances along rays through the center of a conic. Considerations of invariance under translation, rotation, and scaling of the data configuration lead to a straightforward method of estimation somewhat different from earlier suggestions. The method permits an extension to conic splines around extended digitized curves, expediting a smooth reconstruction of their curvature. Some examples are presented indicating how the technique might be applied in morphometrics.
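The algebraic fit behind this reference can be sketched in a few lines: stack the monomials of each point into a design matrix and take the smallest right-singular vector as the coefficient vector. Note the normalization used here, ||(A,...,F)|| = 1, is the simplest choice, not the translation/rotation/scale-invariant one the reference advocates; the function name is illustrative:

```python
import numpy as np

def fit_conic(x, y):
    """Direct algebraic least-squares fit of a conic
    A x^2 + B x y + C y^2 + D x + E y + F = 0 to scattered 2D points.
    Minimizes the algebraic residual ||D c|| subject to ||c|| = 1 and
    returns the coefficient vector c = (A, B, C, D, E, F)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]   # right-singular vector of the smallest singular value
```

For points sampled exactly on the unit circle, the recovered coefficients are proportional to (1, 0, 1, 0, 0, -1), i.e., x^2 + y^2 - 1 = 0.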

568 citations

Book
01 Oct 1976

426 citations


"Trajectory triangulation: 3D recons..." refers methods in this paper

  • ...We would like to thank Yair Ramati for pointing our attention to the literature on orbit determination and reference [3]....

    [...]

Proceedings ArticleDOI
20 Jun 1995
TL;DR: The formalism of the Grassmann-Cayley algebra is proposed to use as the simplest way to make both geometric and algebraic statements in a very synthetic and effective way (i.e. allowing actual computation if needed).
Abstract: We explore the geometric and algebraic relations that exist between correspondences of points and lines in an arbitrary number of images. We propose to use the formalism of the Grassmann-Cayley algebra as the simplest way to make both geometric and algebraic statements in a very synthetic and effective way (i.e. allowing actual computation if needed). We have a fairly complete picture of the situation in the case of points; there are only three types of algebraic relations which are satisfied by the coordinates of the images of a 3-D point: bilinear relations arising when we consider pairs of images among the N and which are the well-known epipolar constraints, trilinear relations arising when we consider triples of images among the N, and quadrilinear relations arising when we consider four-tuples of images among the N. In the case of lines, we show how the traditional perspective projection equation can be suitably generalized and that in the case of three images there exist two independent trilinear relations between the coordinates of the images of a 3-D line.
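The two-view (bilinear) relation mentioned above is the familiar epipolar constraint x2^T F x1 = 0. A minimal numerical sketch, assuming two known 3x4 projection matrices and using the standard construction F = [e2]_x P2 P1^+ (the function names are illustrative):

```python
import numpy as np

def skew(v):
    """3x3 skew-symmetric matrix such that skew(v) @ x == np.cross(v, x)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def fundamental_from_cameras(P1, P2):
    """Fundamental matrix F = [e2]_x P2 P1^+ for two 3x4 projection
    matrices; e2 = P2 C1 is the second-view epipole, where C1 (the
    first camera center) is the null vector of P1."""
    C1 = np.linalg.svd(P1)[2][-1]        # homogeneous camera-1 center
    e2 = P2 @ C1                         # epipole in image 2
    return skew(e2) @ P2 @ np.linalg.pinv(P1)
```

Projecting any 3D point into both views and evaluating x2^T F x1 gives zero (up to round-off), which is exactly the bilinear relation of the abstract.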

271 citations

Journal ArticleDOI
TL;DR: A new fitting scheme called renormalization is presented that computes an unbiased conic estimate by automatically adjusting to noise, which is modeled statistically via the covariance matrix of the N-vector.
Abstract: Introducing a statistical model of noise in terms of the covariance matrix of the N-vector, we point out that the least-squares conic fitting is statistically biased. We present a new fitting scheme called renormalization for computing an unbiased estimate by automatically adjusting to noise. Relationships to existing methods are discussed, and our method is tested using real and synthetic data.

160 citations

Proceedings ArticleDOI
01 Jul 1992
TL;DR: It is shown that the antipenumbra is, in general, a disconnected set bounded by portions of quadric surfaces, and an implemented O(n^2) time algorithm that computes this boundary is described.
Abstract: We define the antiumbra and the antipenumbra of a convex areal light source shining through a sequence of convex areal holes in three dimensions. The antiumbra is the volume beyond the plane of the final hole from which all points on the light source can be seen. The antipenumbra is the volume from which some, but not all, of the light source can be seen. We show that the antipenumbra is, in general, a disconnected set bounded by portions of quadric surfaces, and describe an implemented O(n^2) time algorithm that computes this boundary. The antipenumbra computation is motivated by visibility computations and might prove useful in rendering shadowed objects. We also present an implemented extension of the algorithm that computes planar and quadratic surfaces of discontinuous illumination useful for polygon meshing in global illumination computations.

159 citations