Journal ArticleDOI

Novel view synthesis by cascading trilinear tensors

01 Oct 1998-IEEE Transactions on Visualization and Computer Graphics (IEEE Educational Activities Department)-Vol. 4, Iss: 4, pp 293-306
TL;DR: This work presents a new method for synthesizing novel views of a 3D scene from two or three reference images in full correspondence through the use and manipulation of an algebraic entity, termed the "trilinear tensor", that links point correspondences across three images.
Abstract: Presents a new method for synthesizing novel views of a 3D scene from two or three reference images in full correspondence. The core of this work is the use and manipulation of an algebraic entity, termed the "trilinear tensor", that links point correspondences across three images. For a given virtual camera position and orientation, a new trilinear tensor can be computed based on the original tensor of the reference images. The desired view can then be created using this new trilinear tensor and point correspondences across two of the reference images.
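The tensor-transfer step can be made concrete in a few lines of numpy. The sketch below is an illustrative reconstruction, not the paper's code: it assumes canonical cameras P1 = [I | 0], P2 = [A | a4], P3 = [B | b4], the standard point-line-point contraction for transfer, and function names invented here.

```python
import numpy as np

def trilinear_tensor(A, a4, B, b4):
    """Trilinear tensor for canonical cameras P1 = [I | 0], P2 = [A | a4],
    P3 = [B | b4]:  T[i, j, k] = A[j, i] * b4[k] - a4[j] * B[k, i]."""
    return np.einsum('ji,k->ijk', A, b4) - np.einsum('j,ki->ijk', a4, B)

def transfer_point(T, p1, p2, aux=(0.0, 0.0, 1.0)):
    """Transfer a correspondence (p1, p2) from images 1 and 2 into image 3.
    Any image-2 line through p2 works (here the join of p2 and `aux`),
    provided the line does not pass through the epipole."""
    line = np.cross(p2, np.asarray(aux))        # a line incident with p2
    p3 = np.einsum('i,j,ijk->k', p1, line, T)   # contract point, line, tensor
    return p3 / p3[2]                           # normalize homogeneous point
```

Given the tensor of the reference views, a novel view is synthesized by computing a new tensor for the virtual camera and applying this transfer to every correspondence.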


Citations
Journal ArticleDOI
TL;DR: An overview of 3-D shape measurement using various optical methods, with a focus on structured light techniques, for which various optical configurations, image acquisition technology, data postprocessing and analysis methods, and advantages and limitations are presented.
Abstract: We first provide an overview of 3-D shape measurement using various optical methods. Then we focus on structured light techniques, where various optical configurations, image acquisition techniques, data postprocessing and analysis methods, and advantages and limitations are presented. Several industrial application examples are presented. Important areas requiring further R&D are discussed. Finally, a comprehensive bibliography on 3-D shape measurement is included, although it is not intended to be exhaustive. © 2000 Society of Photo-Optical Instrumentation Engineers. (S0091-3286(00)00101-X)

1,481 citations

Patent
26 Oct 2015
TL;DR: In this article, a forward-facing vision system for a vehicle includes a forward-facing camera disposed in a windshield electronics module attached at a windshield of the vehicle and viewing through the windshield.
Abstract: A forward-facing vision system for a vehicle includes a forward-facing camera disposed in a windshield electronics module attached at a windshield of the vehicle and viewing through the windshield. A control includes a processor that, responsive to processing of captured image data, detects taillights of leading vehicles during nighttime conditions and, responsive to processing of captured image data, detects lane markers on a road being traveled by the vehicle. The control, responsive to lane marker detection and a determination that the vehicle is drifting out of a traffic lane, may control a steering system of the vehicle to mitigate such drifting, with the steering system manually controllable by a driver of the vehicle irrespective of control by the control. The processor, based at least in part on detection of lane markers via processing of captured image data, determines curvature of the road being traveled by the vehicle.

615 citations

Patent
16 Jan 2012
TL;DR: In this article, the camera is disposed at an interior portion of a vehicle equipped with the vehicular vision system, where the camera either (i) views exterior of the equipped vehicle through the windshield and forward of the equipped vehicle or (ii) views from the windshield into the interior cabin of the equipped vehicle.
Abstract: A vehicular vision system includes a camera having a lens and a CMOS photosensor array having a plurality of photosensor elements. The camera is disposed at an interior portion of a vehicle equipped with the vehicular vision system. The camera one of (i) views exterior of the equipped vehicle through the windshield of the equipped vehicle and forward of the equipped vehicle and (ii) views from the windshield of the equipped vehicle into the interior cabin of the equipped vehicle. A control includes an image processor that processes image data captured by the photosensor array. The image processor processes captured image data to detect an object viewed by the camera. The photosensor array is operable at a plurality of exposure periods and at least one exposure period of the plurality of exposure periods is dynamically variable.

576 citations

Patent
18 Nov 2013
TL;DR: In this paper, an adaptive speed control system for controlling the speed of a vehicle is proposed, which detects a curve in the road ahead of the vehicle via processing by the image processor of image data captured by the imaging device.
Abstract: A driver assistance system for a vehicle includes an imaging device having a field of view forward of a vehicle and in a direction of travel of the equipped vehicle, an image processor operable to process image data captured by the imaging device, and a global positioning system operable to determine a geographical location of the vehicle. The equipped vehicle includes an adaptive speed control system for controlling the speed of the equipped vehicle. The adaptive speed control system may reduce the speed of the equipped vehicle responsive at least in part to a detection of a curve in the road ahead of the equipped vehicle via processing by the image processor of image data captured by the imaging device.

305 citations

Patent
25 Jan 2013
TL;DR: In this article, a driver assistance system for a vehicle includes an imager, comprising a CMOS photosensor array of photosensor elements and a lens, disposed in a housing, and a control.
Abstract: A driver assistance system for a vehicle includes an imager disposed in a housing and a control. The imager includes a CMOS photosensor array of photosensor elements and a lens. With the housing disposed in a vehicle, the imager views forwardly to the exterior of the vehicle through the vehicle windshield at a region of the windshield that is swept by a windshield wiper of the vehicle. The CMOS photosensor array is operable to capture image data. The control includes an image processor disposed in the housing. The driver assistance system identifies objects viewed by the imager via processing by the image processor of captured image data. At least in part responsive to processing of captured image data by the image processor, streetlights present exterior of the vehicle and viewed by the imager are discriminated from other objects present exterior of the vehicle and viewed by the imager.

154 citations

References
Proceedings ArticleDOI
01 Jan 1988
TL;DR: The problem the authors are addressing in Alvey Project MMI149 is that of using computer vision to understand the unconstrained 3D world, in which the viewed scenes will in general contain too wide a diversity of objects for top-down recognition techniques to work.
Abstract: The problem we are addressing in Alvey Project MMI149 is that of using computer vision to understand the unconstrained 3D world, in which the viewed scenes will in general contain too wide a diversity of objects for top-down recognition techniques to work. For example, we desire to obtain an understanding of natural scenes, containing roads, buildings, trees, bushes, etc., as typified by the two frames from a sequence illustrated in Figure 1. The solution to this problem that we are pursuing is to use a computer vision system based upon motion analysis of a monocular image sequence from a mobile camera. By extraction and tracking of image features, representations of the 3D analogues of these features can be constructed.
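The feature extraction step this work is known for, the Harris corner detector, can be sketched compactly in numpy. The sketch uses the common formulation R = det(M) - k * trace(M)^2 on the locally averaged second-moment matrix M, with a 3x3 box window standing in for the Gaussian smoothing; parameter values and helper names are illustrative, not from the paper.

```python
import numpy as np

def box3(a):
    """3x3 box average (a simple stand-in for a Gaussian window)."""
    p = np.pad(a, 1, mode='edge')
    return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def harris_response(img, k=0.04):
    """Corner response R = det(M) - k * trace(M)^2, where M is the locally
    averaged second-moment (structure) matrix of the image gradients."""
    Iy, Ix = np.gradient(img.astype(float))
    Sxx, Syy, Sxy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)
    return (Sxx * Syy - Sxy ** 2) - k * (Sxx + Syy) ** 2
```

The response is strongly positive at corners (both eigenvalues of M large), negative along edges (one large eigenvalue), and near zero in flat regions, so corners can be selected by thresholding and non-maximum suppression.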

13,993 citations


"Novel view synthesis by cascading t..." refers to methods in this paper

  • ...• Find Matching Points: The method we use is a variant of Harris corner detector [21]....


Proceedings Article
24 Aug 1981
TL;DR: In this paper, the spatial intensity gradient of the images is used to find a good match using a type of Newton-Raphson iteration, which can be generalized to handle rotation, scaling and shearing.
Abstract: Image registration finds a variety of applications in computer vision. Unfortunately, traditional image registration techniques tend to be costly. We present a new image registration technique that makes use of the spatial intensity gradient of the images to find a good match using a type of Newton-Raphson iteration. Our technique is faster because it examines far fewer potential matches between the images than existing techniques. Furthermore, this registration technique can be generalized to handle rotation, scaling and shearing. We show how our technique can be adapted for use in a stereo vision system.
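The gradient-based Newton-Raphson scheme can be sketched for the simplest case, a single global 2D translation. This toy version assumes pure translation and a bilinear warp and is not the paper's algorithm verbatim; all names are invented here.

```python
import numpy as np

def bilinear(I, y, x):
    """Bilinear sampling of image I at (possibly fractional) coords (y, x)."""
    y0 = np.clip(np.floor(y).astype(int), 0, I.shape[0] - 2)
    x0 = np.clip(np.floor(x).astype(int), 0, I.shape[1] - 2)
    fy, fx = y - y0, x - x0
    return ((1 - fy) * (1 - fx) * I[y0, x0] + (1 - fy) * fx * I[y0, x0 + 1]
            + fy * (1 - fx) * I[y0 + 1, x0] + fy * fx * I[y0 + 1, x0 + 1])

def register_translation(I, J, iters=30):
    """Estimate a global translation d with J(x) ~= I(x + d) by
    Newton-Raphson on the SSD error, using spatial intensity gradients."""
    d = np.zeros(2)  # (dy, dx)
    ys, xs = np.mgrid[0:J.shape[0], 0:J.shape[1]].astype(float)
    for _ in range(iters):
        W = bilinear(I, ys + d[0], xs + d[1])   # I warped by current estimate
        gy, gx = np.gradient(W)
        A = np.stack([gy.ravel(), gx.ravel()], axis=1)
        # linearize: J - W ~= A @ step, solve in least squares, update d
        step, *_ = np.linalg.lstsq(A, (J - W).ravel(), rcond=None)
        d += step
    return d
```

Each iteration examines only one candidate warp rather than an exhaustive grid of matches, which is the source of the speedup the abstract describes; the same linearization extends to affine warps by adding columns to A.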

12,944 citations

Journal ArticleDOI
TL;DR: A technique for image encoding in which local operators of many scales but identical shape serve as the basis functions, which tends to enhance salient image features and is well suited for many image analysis tasks as well as for image compression.
Abstract: We describe a technique for image encoding in which local operators of many scales but identical shape serve as the basis functions. The representation differs from established techniques in that the code elements are localized in spatial frequency as well as in space. Pixel-to-pixel correlations are first removed by subtracting a lowpass filtered copy of the image from the image itself. The result is a net data compression since the difference, or error, image has low variance and entropy, and the low-pass filtered image may be represented at reduced sample density. Further data compression is achieved by quantizing the difference image. These steps are then repeated to compress the low-pass image. Iteration of the process at appropriately expanded scales generates a pyramid data structure. The encoding process is equivalent to sampling the image with Laplacian operators of many scales. Thus, the code tends to enhance salient image features. A further advantage of the present code is that it is well suited for many image analysis tasks as well as for image compression. Fast algorithms are described for coding and decoding.
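The encode/decode loop described above (subtract an expanded low-pass copy, keep the difference image, recurse on the low-pass image) can be sketched as follows. A 2x2 box filter and nearest-neighbour expansion stand in for the paper's Gaussian REDUCE/EXPAND kernels, and even image dimensions are assumed.

```python
import numpy as np

def reduce2(img):
    """2x2 box-filter and decimate (stand-in for the Gaussian REDUCE step)."""
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def expand2(img):
    """Nearest-neighbour EXPAND back to double resolution."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def laplacian_pyramid(img, levels):
    """Band-pass difference images at each level plus the final low-pass copy."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        low = reduce2(cur)
        pyr.append(cur - expand2(low))   # difference (error) image
        cur = low
    pyr.append(cur)                      # coarsest low-pass image
    return pyr

def reconstruct(pyr):
    """Decode: expand the coarsest level and add back each difference image."""
    cur = pyr[-1]
    for lap in reversed(pyr[:-1]):
        cur = expand2(cur) + lap
    return cur
```

Without quantization the decomposition is exactly invertible; compression comes from quantizing the low-variance, low-entropy difference images before decoding.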

6,975 citations


"Novel view synthesis by cascading t..." refers to methods in this paper

  • ...We construct a Laplacian pyramid [11] and recover the motion parameters at each level, using the estimate of the previous level as our initial guess....


Proceedings ArticleDOI
01 Aug 1996
TL;DR: This paper describes a sampled representation for light fields that allows for both efficient creation and display of inward and outward looking views, and describes a compression system that is able to compress the light fields generated by more than a factor of 100:1 with very little loss of fidelity.
Abstract: A number of techniques have been proposed for flying through scenes by redisplaying previously rendered or digitized views. Techniques have also been proposed for interpolating between views by warping input images, using depth information or correspondences between multiple images. In this paper, we describe a simple and robust method for generating new views from arbitrary camera positions without depth information or feature matching, simply by combining and resampling the available images. The key to this technique lies in interpreting the input images as 2D slices of a 4D function, the light field. This function completely characterizes the flow of light through unobstructed space in a static scene with fixed illumination. We describe a sampled representation for light fields that allows for both efficient creation and display of inward and outward looking views. We have created light fields from large arrays of both rendered and digitized images. The latter are acquired using a video camera mounted on a computer-controlled gantry. Once a light field has been created, new views may be constructed in real time by extracting slices in appropriate directions. Since the success of the method depends on having a high sample rate, we describe a compression system that is able to compress the light fields we have generated by more than a factor of 100:1 with very little loss of fidelity. We also address the issues of antialiasing during creation, and resampling during slice extraction. CR Categories: I.3.2 [Computer Graphics]: Picture/Image Generation — Digitizing and scanning, Viewing algorithms; I.4.2 [Computer Graphics]: Compression — Approximate methods Additional keywords: image-based rendering, light field, holographic stereogram, vector quantization, epipolar analysis
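The core idea, that a novel view is a 2D slice of the 4D array, fits in a few lines of numpy. This sketch blends only the four nearest camera-plane samples (bilinear rather than the paper's full quadralinear interpolation), and the synthetic light field is invented purely for illustration.

```python
import numpy as np

def interp_view(L, u, v):
    """Render the view at fractional camera-plane coords (u, v) by
    bilinearly blending the four nearest input views, a simplified
    slice of the 4D light field L[u, v, s, t]."""
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    fu, fv = u - u0, v - v0
    return ((1 - fu) * (1 - fv) * L[u0, v0] + (1 - fu) * fv * L[u0, v0 + 1]
            + fu * (1 - fv) * L[u0 + 1, v0] + fu * fv * L[u0 + 1, v0 + 1])

# L[u, v, s, t]: radiance along the ray through (u, v) on the camera plane
# and (s, t) on the focal plane (two-plane parameterization).
# Toy light field: an 8x8 grid of 16x16 views, each view a constant image.
L = np.fromfunction(lambda u, v, s, t: 100.0 * u + v, (8, 8, 16, 16))

view = interp_view(L, 1.5, 2.25)   # novel in-between view, shape (16, 16)
epi = L[:, 5, :, 7]                # an epipolar-plane image: fix v and t
```

In the real system the slices are resampled with antialiasing and the 4D array is vector-quantized, but the indexing pattern above is the whole rendering primitive.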

4,426 citations


"Novel view synthesis by cascading t..." refers to background in this paper

  • ...Levoy and Hanrahan [31] and Gortler et al. [19] interpolate between a dense set of several thousand example images to reconstruct a reduced plenoptic function (under an occlusion-free world assumption)....



Proceedings ArticleDOI
01 Aug 1996
TL;DR: A new method for capturing the complete appearance of both synthetic and real world objects and scenes, representing this information, and then using this representation to render images of the object from new camera positions.
Abstract: This paper discusses a new method for capturing the complete appearance of both synthetic and real world objects and scenes, representing this information, and then using this representation to render images of the object from new camera positions. Unlike the shape capture process traditionally used in computer vision and the rendering process traditionally used in computer graphics, our approach does not rely on geometric representations. Instead we sample and reconstruct a 4D function, which we call a Lumigraph. The Lumigraph is a subset of the complete plenoptic function that describes the flow of light at all positions in all directions. With the Lumigraph, new images of the object can be generated very quickly, independent of the geometric or illumination complexity of the scene or object. The paper discusses a complete working system including the capture of samples, the construction of the Lumigraph, and the subsequent rendering of images from this new representation.

2,986 citations


"Novel view synthesis by cascading t..." refers to background in this paper

  • ...[19] interpolate between a dense set of several thousand example images to reconstruct a reduced plenoptic function (under an occlusion-free world assumption)....
