
Showing papers on "View synthesis published in 1996"


Proceedings ArticleDOI
01 Aug 1996
TL;DR: This paper describes a sampled representation for light fields that allows for both efficient creation and display of inward and outward looking views, and a compression system that compresses the generated light fields by more than a factor of 100:1 with very little loss of fidelity.
Abstract: A number of techniques have been proposed for flying through scenes by redisplaying previously rendered or digitized views. Techniques have also been proposed for interpolating between views by warping input images, using depth information or correspondences between multiple images. In this paper, we describe a simple and robust method for generating new views from arbitrary camera positions without depth information or feature matching, simply by combining and resampling the available images. The key to this technique lies in interpreting the input images as 2D slices of a 4D function, the light field. This function completely characterizes the flow of light through unobstructed space in a static scene with fixed illumination. We describe a sampled representation for light fields that allows for both efficient creation and display of inward and outward looking views. We have created light fields from large arrays of both rendered and digitized images. The latter are acquired using a video camera mounted on a computer-controlled gantry. Once a light field has been created, new views may be constructed in real time by extracting slices in appropriate directions. Since the success of the method depends on having a high sample rate, we describe a compression system that is able to compress the light fields we have generated by more than a factor of 100:1 with very little loss of fidelity. We also address the issues of antialiasing during creation, and resampling during slice extraction. CR Categories: I.3.2 [Computer Graphics]: Picture/Image Generation — Digitizing and scanning, Viewing algorithms; I.4.2 [Computer Graphics]: Compression — Approximate methods. Additional keywords: image-based rendering, light field, holographic stereogram, vector quantization, epipolar analysis
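As a concrete illustration of the two-plane representation described above, here is a minimal sketch of looking up radiance along a ray by quadrilinear interpolation in a 4D array L[u, v, s, t]. The array layout and the helper name are assumptions for the example, not taken from the paper.

```python
import numpy as np

# Minimal sketch: quadrilinear lookup in a two-plane light field, assuming
# the field is stored as a 4D array L[u, v, s, t] of scalar radiance and
# that the query lies inside the sampled range. Illustrative only.
def sample_light_field(L, u, v, s, t):
    coords = np.array([u, v, s, t], dtype=float)
    lo = np.floor(coords).astype(int)
    frac = coords - lo
    acc = 0.0
    for corner in range(16):                        # 2^4 corners of the 4D cell
        offs = np.array([(corner >> i) & 1 for i in range(4)])
        w = np.prod(np.where(offs, frac, 1.0 - frac))
        idx = tuple(np.minimum(lo + offs, np.array(L.shape) - 1))
        acc += w * L[idx]                           # weight times corner sample
    return acc
```

For a desired view, (u, v) and (s, t) come from intersecting each pixel's ray with the two parameterization planes; looping this lookup over all pixels extracts the 2D slice the abstract refers to.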

4,426 citations


Proceedings ArticleDOI
01 Aug 1996
TL;DR: This paper introduces a simple extension to image morphing that correctly handles 3D projective camera and scene transformations; the technique works by prewarping two images prior to computing a morph and then postwarping the interpolated images.
Abstract: Image morphing techniques can generate compelling 2D transitions between images. However, differences in object pose or viewpoint often cause unnatural distortions in image morphs that are difficult to correct manually. Using basic principles of projective geometry, this paper introduces a simple extension to image morphing that correctly handles 3D projective camera and scene transformations. The technique, called view morphing, works by prewarping two images prior to computing a morph and then postwarping the interpolated images. Because no knowledge of 3D shape is required, the technique may be applied to photographs and drawings, as well as rendered scenes. The ability to synthesize changes both in viewpoint and image structure affords a wide variety of interesting 3D effects via simple image transformations.
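The prewarp/interpolate/postwarp pipeline is compact enough to sketch. The version below assumes the prewarp homographies H0 and H1 and the postwarp homography Hpost are already known (in the paper the postwarp depends on the desired in-between view), and it substitutes a simple cross-dissolve for the correspondence-driven morph between the prewarped images.

```python
import cv2

# Sketch of the view-morphing pipeline with assumed 3x3 homographies
# H0, H1 (prewarps) and Hpost (postwarp); a cross-dissolve stands in
# for a full correspondence-driven morph of the prewarped images.
def view_morph(img0, img1, H0, H1, Hpost, alpha):
    h, w = img0.shape[:2]
    pre0 = cv2.warpPerspective(img0, H0, (w, h))              # prewarp image 0
    pre1 = cv2.warpPerspective(img1, H1, (w, h))              # prewarp image 1
    mid = cv2.addWeighted(pre0, 1.0 - alpha, pre1, alpha, 0)  # interpolate
    return cv2.warpPerspective(mid, Hpost, (w, h))            # postwarp result
```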

872 citations


Proceedings ArticleDOI
18 Jun 1996
TL;DR: The method does not explicitly model scene geometry yet enables fast and exact generation of synthetic views; the experiments demonstrate that realistic new views can be synthesized efficiently even from inaccurate and incomplete depth information.
Abstract: We propose a new method for view synthesis from real images using stereo vision. The method does not explicitly model scene geometry, and enables fast and exact generation of synthetic views. We also reevaluate the requirements on stereo algorithms for the application of view synthesis and discuss ways of dealing with partially occluded regions of unknown depth and with completely occluded regions of unknown texture. Our experiments demonstrate that it is possible to efficiently synthesize realistic new views even from inaccurate and incomplete depth information.
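For a rectified pair, a common way to realize this kind of synthesis is to forward-warp each pixel by a fraction of its disparity. The sketch below does exactly that, with a simple "largest disparity wins" rule standing in for the paper's treatment of occluded regions; all names are illustrative.

```python
import numpy as np

# Sketch: render a view a fraction alpha of the way from the left to the
# right camera by shifting left-image pixels by alpha * disparity. A
# per-pixel disparity buffer resolves collisions (nearer surface wins).
def synthesize_view(left, disparity, alpha):
    h, w = left.shape[:2]
    out = np.zeros_like(left)
    depth = np.full((h, w), -np.inf)                # disparity buffer
    ys, xs = np.mgrid[0:h, 0:w]
    xt = np.round(xs - alpha * disparity).astype(int)
    valid = (xt >= 0) & (xt < w)
    for y, x, xn in zip(ys[valid], xs[valid], xt[valid]):
        if disparity[y, x] > depth[y, xn]:
            depth[y, xn] = disparity[y, x]
            out[y, xn] = left[y, x]
    return out
```

Holes where no source pixel lands correspond to the regions of unknown depth or texture the abstract discusses, and would need the filling strategies it evaluates.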

91 citations


Journal ArticleDOI
TL;DR: An algorithm for synthesizing intermediate views from a single stereo pair is presented, incorporating scene assumptions and a disparity estimation confidence measure that lead to accurate synthesis of occluded and ambiguously referenced regions.
Abstract: In this paper, we present an algorithm for synthesizing intermediate views from a single stereo pair. The key contributions of this algorithm are the incorporation of scene assumptions and a disparity estimation confidence measure that lead to the accurate synthesis of occluded and ambiguously referenced regions. The synthesized views have been displayed on a multi-view binocular imaging system, with subjectively effective motion parallax and diminished eye strain.
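The abstract does not spell out the confidence measure, so the sketch below uses a common generic choice, the margin between the best and second-best matching costs at each pixel; it is an illustration, not the paper's measure.

```python
import numpy as np

# Illustrative disparity-confidence measure: the normalized margin between
# the best and second-best costs in a per-pixel matching cost volume
# cost_volume[d, y, x] (lower cost = better match). Assumed, not from the paper.
def disparity_confidence(cost_volume):
    costs = np.sort(cost_volume, axis=0)            # ascending along disparity
    best, second = costs[0], costs[1]
    return (second - best) / (second + 1e-8)        # larger margin = more confident
```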

44 citations


01 Jan 1996
TL;DR: A full coupled Jacobian is estimated on-line without any prior models or the introduction of special calibration movements, and it is shown how the estimated models can be used for visual-space robot task specification, planning and control.
Abstract: We present a novel approach for combined visual model acquisition and agent control. The approach differs from previous work in that a full coupled Jacobian is estimated on-line without any prior models or the introduction of special calibration movements. We show how the estimated models can be used for visual-space robot task specification, planning and control. In the other direction, the same type of models can be used for view synthesis.
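Estimating a coupled visual-motor Jacobian on-line, without prior models or calibration motions, is commonly done with a secant (Broyden-style) update after each movement; the sketch below shows that standard update as a plausible stand-in for the paper's estimator, with all names assumed.

```python
import numpy as np

# Broyden-style secant update of the visual-motor Jacobian J (dy ~ J dq),
# applied after each motion: dq is the joint-space step, dy the observed
# image-feature change, lam a step size. A generic sketch, not the
# paper's exact estimator.
def broyden_update(J, dq, dy, lam=1.0):
    denom = dq @ dq
    if denom < 1e-12:
        return J                                    # skip degenerate steps
    return J + lam * np.outer(dy - J @ dq, dq) / denom
```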

39 citations


Proceedings Article
01 May 1996
TL;DR: This work proposes to completely avoid inferring and reasoning in 3-D by using projective invariants derived from corresponding points in the prestored images, which should allow the integration of computer generated and real imagery for applications such as walkthroughs in realistic virtual environments.
Abstract: Synthesizing the image of a 3-D scene as it would be captured by a camera from an arbitrary viewpoint is a central problem in Computer Graphics. Given a complete 3-D model, it is possible to render the scene from any viewpoint. The construction of models is a tedious task. Here, we propose to bypass the model construction phase altogether, and to generate images of a 3-D scene from any novel viewpoint from prestored images. Unlike methods presented so far, we propose to completely avoid inferring and reasoning in 3-D by using projective invariants. These invariants are derived from corresponding points in the prestored images. The correspondences between features are established off-line in a semi-automated way. It is then possible to generate wireframe animation in real time on a standard computing platform. Well understood texture mapping methods can be applied to the wireframes to realistically render new images from the prestored ones. The method proposed here should allow the integration of computer generated and real imagery for applications such as walkthroughs in realistic virtual environments. We illustrate our approach on synthetic and real indoor and outdoor images.
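As an illustration of positioning points in a new view directly from image correspondences, the sketch below uses epipolar transfer, a standard 2-D transfer technique, rather than the paper's specific projective invariants: a point matched in both reference images is placed in the novel view at the intersection of the two epipolar lines it induces. F13 and F23 are assumed fundamental matrices from each reference image to the novel view.

```python
import numpy as np

# Epipolar point transfer (a stand-in for the paper's invariant-based
# transfer): x1, x2 are matched homogeneous points in the two reference
# images; F13, F23 map them to epipolar lines in the novel view
# (convention: x3^T @ F13 @ x1 = 0). Degenerate when the lines coincide.
def transfer_point(x1, x2, F13, F23):
    l1 = F13 @ x1                                   # epipolar line from image 1
    l2 = F23 @ x2                                   # epipolar line from image 2
    x3 = np.cross(l1, l2)                           # intersect the two lines
    return x3 / x3[2]                               # back to inhomogeneous form
```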

15 citations


Book ChapterDOI
23 Nov 1996
TL;DR: Two different algorithms are proposed for selecting the optimal set of reference views from a dense set of real primary views, a problem posed as the selection and fitting of parametric models.
Abstract: Recently, much attention has been devoted to image-based scene representations. They allow an arbitrary view of a 3-D scene to be constructed by interpolation (transfer) from a sparse set of real 2-D (reference) images, rather than by rendering an explicit 3-D model. While many authors address mainly the purely geometric aspects of the task, we focus on the problem of how to select the optimal set of reference views. Selection of reference views from a dense set of real primary views is posed as a selection and fitting of parametric models. The selected set must minimize a weighted sum of the number of reference views and the total fit error. We propose two different algorithms solving this optimization problem. The experimental results on synthetic and real data indicate the feasibility of the approach for 1-DOF camera movement. We discuss the possibility of extending one of the algorithms to the more general case.
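For the 1-DOF case, where the primary views are ordered along the camera path, the stated objective (a per-reference penalty plus the total fit error) can be minimized by dynamic programming, much as in segmented model fitting. The sketch below assumes a user-supplied fit_error(i, j) giving the error of predicting views i..j from references i and j; it illustrates the optimization, not the paper's specific algorithms.

```python
# Dynamic-programming selection of reference views along a 1-DOF path.
# Minimizes the sum of per-reference penalties (view_cost) plus fit errors;
# fit_error(i, j) is an assumed callback. View 0 and the last view are
# always kept as references in this sketch.
def select_references(n_views, fit_error, view_cost):
    INF = float("inf")
    best = [0.0] + [INF] * (n_views - 1)            # best[j]: cost covering 0..j
    back = [0] * n_views
    for j in range(1, n_views):
        for i in range(j):                          # last reference before j
            c = best[i] + view_cost + fit_error(i, j)
            if c < best[j]:
                best[j], back[j] = c, i
    refs, j = [n_views - 1], n_views - 1            # backtrack chosen references
    while j > 0:
        j = back[j]
        refs.append(j)
    return refs[::-1]
```

Raising view_cost trades fidelity for a sparser reference set, which is exactly the weighted sum the abstract describes.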