Proceedings ArticleDOI

Multiple viewpoint rendering

24 Jul 1998, pp. 243-254
TL;DR: The characteristics of MVR algorithms in general are described, along with the design, implementation, and applications of a particular MVR rendering system.
Abstract: This paper presents an algorithm for rendering a static scene from multiple perspectives. While most current computer graphics algorithms render scenes as they appear from a single viewpoint (the location of the camera), multiple viewpoint rendering (MVR) renders a scene from a range of spatially varying viewpoints. By exploiting perspective coherence, MVR can produce a set of images orders of magnitude faster than conventional rendering methods. Images produced by MVR can be used as input to multiple-perspective displays such as holographic stereograms, lenticular sheet displays, and holographic video. MVR can also be used as a geometry-to-image prefilter for image-based rendering algorithms. MVR techniques are adapted from single viewpoint computer graphics algorithms and can be accelerated using existing hardware graphics subsystems. This paper describes the characteristics of MVR algorithms in general, along with the design, implementation, and applications of a particular MVR rendering system.
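
To make "perspective coherence" concrete: for a pinhole camera translating along a horizontal baseline, a point (x, y, z) projects to u = f(x - c)/z, which is linear in the camera position c. An entire track of views can therefore share one projection pass, with only a multiply-add per point per view. The sketch below illustrates this with a point-splatting toy renderer (ours, not Halle's system; all names and parameters are hypothetical):

    import numpy as np

    def render_view_track(points, cam_xs, width=64, height=64, focal=1.0):
        """Splat a point set into a whole track of horizontally spaced views.

        A point (x, y, z) seen from camera x-position c projects to
        u = focal * (x - c) / z, linear in c.  So we project every point once
        (base_u, base_v, disparity) and update u per view with one
        multiply-add instead of re-running the projection -- the perspective
        coherence that MVR exploits.
        """
        images = np.zeros((len(cam_xs), height, width))
        xs, ys, zs = points[:, 0], points[:, 1], points[:, 2]
        base_u = focal * xs / zs          # screen x as seen from c = 0
        base_v = focal * ys / zs          # screen y: identical in every view
        disparity = -focal / zs           # du/dc
        for i, c in enumerate(cam_xs):
            u = base_u + disparity * c
            px = ((u + 1.0) * 0.5 * width).astype(int)
            py = ((base_v + 1.0) * 0.5 * height).astype(int)
            ok = (px >= 0) & (px < width) & (py >= 0) & (py < height)
            images[i, py[ok], px[ok]] = 1.0
        return images

    pts = np.random.rand(500, 3) * [2.0, 2.0, 4.0] + [-1.0, -1.0, 1.0]
    track = render_view_track(pts, np.linspace(-0.5, 0.5, 32))

Halle's system works with shaded scene polygons rather than splatted points, but the linear per-view update above is the coherence being exploited.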


Citations
Proceedings ArticleDOI
01 Jul 2000
TL;DR: The frequency domain and ray-space aspects of dynamic reparameterization are explored, and an interactive rendering technique that takes advantage of today's commodity rendering hardware is presented.
Abstract: This research further develops the light field and lumigraph image-based rendering methods and extends their utility. We present alternate parameterizations that permit 1) interactive rendering of moderately sampled light fields of scenes with significant, unknown depth variation and 2) low-cost, passive autostereoscopic viewing. Using a dynamic reparameterization, these techniques can be used to interactively render photographic effects such as variable focus and depth-of-field within a light field. The dynamic parameterization is independent of scene geometry and does not require actual or approximate geometry of the scene. We explore the frequency domain and ray-space aspects of dynamic reparameterization, and present an interactive rendering technique that takes advantage of today's commodity rendering hardware.
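
The reparameterization at the heart of the paper can be summarized in one equation. In a simplified 2D setup, with our notation rather than the paper's: data cameras sit at positions s_i on the plane z = 0, stored image coordinates u lie on the plane z = 1, and the viewer picks a focal plane z = F at render time. A desired ray that pierces the focal plane at point p is reconstructed by blending stored samples L(s_i, u_i) over nearby cameras, where

    \[ u_i = s_i + \frac{p - s_i}{F} \]

Scene points at depth F are averaged from mutually consistent rays and appear sharp; points off that plane are blended from inconsistent rays and blur, which is how variable focus and depth of field emerge with no scene geometry at all.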

754 citations


Cites background from "Multiple viewpoint rendering"

  • ...[10]. Given a minimum depth and a maximum depth, Chai, Tong, Chan, and Shum...


Proceedings ArticleDOI
30 May 2000
TL;DR: The continuum between images and geometry used in image-based rendering techniques suggests that image-based rendering with traditional 3D graphics can be united in a joint image and geometry space.
Abstract: In this paper, we survey the techniques for image-based rendering. Unlike traditional 3D computer graphics in which 3D geometry of the scene is known, image-based rendering techniques render novel views directly from input images. Previous image-based rendering techniques can be classified into three categories according to how much geometric information is used: rendering without geometry, rendering with implicit geometry (i.e., correspondence), and rendering with explicit geometry (either with approximate or accurate geometry). We discuss the characteristics of these categories and their representative methods. The continuum between images and geometry used in image-based rendering techniques suggests that image-based rendering with traditional 3D graphics can be united in a joint image and geometry space.

516 citations


Cites background from "Multiple viewpoint rendering"

  • ...Multiple viewpoint rendering: An approach that bridges the notions of the light field or lumigraph and 3D scene geometry is what Halle calls multiple viewpoint rendering [13]....


Journal ArticleDOI
TL;DR: This Commemorative Review presents an overview of the literature on the physical principles and applications of integral imaging; applications including 3D underwater imaging, 3D imaging in photon-starved environments, 3D tracking of occluded objects, 3D optical microscopy, and 3D polarimetric imaging are reviewed.
Abstract: Three-dimensional (3D) sensing and imaging technologies have been extensively researched for many applications in the fields of entertainment, medicine, robotics, manufacturing, industrial inspection, security, surveillance, and defense due to their diverse and significant benefits. Integral imaging is a passive multiperspective imaging technique, which records multiple two-dimensional images of a scene from different perspectives. Unlike holography, it can capture scenes such as outdoor events with incoherent or ambient light. Integral imaging can display a true 3D color image with full parallax and continuous viewing angles by incoherent light; thus it does not suffer from speckle degradation. Because of its unique properties, integral imaging has been revived over the past decade or so as a promising approach for massive 3D commercialization. A series of key articles on this topic have appeared in the OSA journals, including Applied Optics. Thus, it is fitting that this Commemorative Review presents an overview of the literature on physical principles and applications of integral imaging. Several data capture configurations, reconstruction, and display methods are overviewed. In addition, applications including 3D underwater imaging, 3D imaging in photon-starved environments, 3D tracking of occluded objects, 3D optical microscopy, and 3D polarimetric imaging are reviewed.
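
A common reconstruction scheme in this literature is computational shift-and-sum refocusing: each elemental image is shifted in proportion to its lenslet index and the images are summed, so objects at the chosen depth reinforce while everything else blurs. Below is a minimal sketch of that idea (our illustration, not a particular system from the review; the integer-shift simplification and array shapes are assumptions):

    import numpy as np

    def reconstruct_depth_plane(elemental, shift):
        """Shift-and-sum reconstruction of one depth plane.

        elemental: (K, K, H, W) stack of elemental images from a K x K
        lenslet array.  'shift' is the integer parallax in pixels between
        adjacent elemental images for the chosen depth: contributions from
        objects at that depth align and add coherently, the rest blur out.
        """
        K, _, H, W = elemental.shape
        size_y = H + (K - 1) * shift
        size_x = W + (K - 1) * shift
        acc = np.zeros((size_y, size_x))
        hits = np.zeros((size_y, size_x))
        for i in range(K):
            for j in range(K):
                y0, x0 = i * shift, j * shift
                acc[y0:y0 + H, x0:x0 + W] += elemental[i, j]
                hits[y0:y0 + H, x0:x0 + W] += 1
        return acc / np.maximum(hits, 1)   # normalize by overlap count

    # Toy usage: 4 x 4 elemental images, 32 x 32 pixels, 3 px parallax.
    plane = reconstruct_depth_plane(np.random.rand(4, 4, 32, 32), shift=3)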

461 citations


Cites methods from "Multiple viewpoint rendering"

  • ...Methods to simultaneously render multiple images have been studied in [73,74]....


Journal ArticleDOI
TL;DR: A computer-assisted three-dimensional virtual osteotomy system for orthognathic surgery (CAVOS) is presented, and the virtual reality workbench is used for surgical planning.

238 citations

Proceedings ArticleDOI
24 Jul 1998
TL;DR: This work develops and discusses multiple-center-of-projection images, and explains their advantages over conventional range images for image-based rendering, including greater flexibility during image acquisition and improved image reconstruction due to greater connectivity information.
Abstract: In image-based rendering, images acquired from a scene are used to represent the scene itself. A number of reference images are required to fully represent even the simplest scene. This leads to a number of problems during image acquisition and subsequent reconstruction. We present the multiple-center-of-projection image, a single image acquired from multiple locations, which solves many of the problems of working with multiple range images. This work develops and discusses multiple-center-of-projection images, and explains their advantages over conventional range images for image-based rendering. The contributions include greater flexibility during image acquisition and improved image reconstruction due to greater connectivity information. We discuss the acquisition and rendering of multiple-center-of-projection datasets, and the associated sampling issues. We also discuss the unique epipolar and correspondence properties of this class of image. CR Categories: I.3.3 [Computer Graphics]: Picture/Image Generation – Digitizing and scanning, Viewing algorithms; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism; I.4.10 [Image Processing]: Scene Analysis
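
Schematically, an MCOP image can be built by giving every output column its own center of projection along the camera path, as in a strip or pushbroom camera. The sketch below illustrates this assembly (ours, not the paper's acquisition pipeline; render_view is a hypothetical single-view renderer supplied by the caller):

    import numpy as np

    def mcop_strip_image(render_view, path, cols_per_view=1):
        """Assemble a multiple-center-of-projection image from a camera path.

        Renders a conventional image at each pose on the path and keeps only
        the central column(s), so each column of the result was captured from
        a different viewpoint -- one image, many centers of projection.
        """
        columns = []
        for pose in path:
            img = render_view(pose)             # ordinary single-view render
            mid = img.shape[1] // 2
            columns.append(img[:, mid:mid + cols_per_view])
        return np.concatenate(columns, axis=1)

    # Toy usage: a fake renderer that encodes the camera position as intensity.
    fake = lambda pose: np.full((64, 16), pose)
    strip = mcop_strip_image(fake, path=np.linspace(0.0, 1.0, 128))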

237 citations


Cites methods from "Multiple viewpoint rendering"

  • ...Since there is much coherence from one camera position to the next along the path, a method such as multiple viewpoint rendering [12] may be used to accelerate the rendering....


References
Proceedings ArticleDOI
01 Aug 1996
TL;DR: This paper describes a sampled representation for light fields that allows for both efficient creation and display of inward and outward looking views, and describes a compression system that is able to compress the generated light fields by more than a factor of 100:1 with very little loss of fidelity.
Abstract: A number of techniques have been proposed for flying through scenes by redisplaying previously rendered or digitized views. Techniques have also been proposed for interpolating between views by warping input images, using depth information or correspondences between multiple images. In this paper, we describe a simple and robust method for generating new views from arbitrary camera positions without depth information or feature matching, simply by combining and resampling the available images. The key to this technique lies in interpreting the input images as 2D slices of a 4D function, the light field. This function completely characterizes the flow of light through unobstructed space in a static scene with fixed illumination. We describe a sampled representation for light fields that allows for both efficient creation and display of inward and outward looking views. We have created light fields from large arrays of both rendered and digitized images. The latter are acquired using a video camera mounted on a computer-controlled gantry. Once a light field has been created, new views may be constructed in real time by extracting slices in appropriate directions. Since the success of the method depends on having a high sample rate, we describe a compression system that is able to compress the light fields we have generated by more than a factor of 100:1 with very little loss of fidelity. We also address the issues of antialiasing during creation, and resampling during slice extraction. CR Categories: I.3.2 [Computer Graphics]: Picture/Image Generation — Digitizing and scanning, Viewing algorithms; I.4.2 [Computer Graphics]: Compression — Approximate methods. Additional keywords: image-based rendering, light field, holographic stereogram, vector quantization, epipolar analysis
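
The slice-extraction step is simple enough to sketch. Under the two-plane parameterization, a pixel ray is converted to (s, t, u, v) by intersecting it with the two planes, and synthesizing a new view reduces to a 4D table lookup. The sketch below uses nearest-neighbor lookup for brevity where the paper resamples with filtering; plane placement, the [0, 1] sampling ranges, and all names are our assumptions:

    import numpy as np

    def ray_to_st_uv(origin, direction, z_st=0.0, z_uv=1.0):
        """Intersect a ray with the camera (st) and image (uv) planes."""
        k_st = (z_st - origin[2]) / direction[2]
        k_uv = (z_uv - origin[2]) / direction[2]
        s, t = origin[:2] + k_st * direction[:2]
        u, v = origin[:2] + k_uv * direction[:2]
        return s, t, u, v

    def sample_lightfield(L, s, t, u, v):
        """Nearest-neighbor lookup into a discrete light field L[s, t, u, v],
        assuming each axis is sampled uniformly on [0, 1]."""
        idx = [int(round(c * (n - 1))) for c, n in zip((s, t, u, v), L.shape)]
        idx = [min(max(i, 0), n - 1) for i, n in zip(idx, L.shape)]
        return L[tuple(idx)]

    L = np.random.rand(8, 8, 32, 32)                  # toy 4D light field
    ray = ray_to_st_uv(np.array([0.5, 0.5, -1.0]),    # eye behind the planes
                       np.array([0.1, 0.0, 1.0]))     # one pixel's direction
    print(sample_lightfield(L, *ray))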

4,426 citations


"Multiple viewpoint rendering" refers background or methods in this paper

  • ...[8] and Levoy and Hanrahan [14] produce a single output image from a perspective image set, while similar algorithms developed for optical predistortion in synthetic holography derive not just one but an entirely new set of images [10]....


  • ...This camera geometry has been used in computer vision and synthetic holography since the late 1970’s; it is identical to the one described by Levoy and Hanrahan [14]....


Journal ArticleDOI
TL;DR: Human visual perception and the fundamental laws of optics are considered in the development of a shading rule that provides better quality and increased realism in generated images.
Abstract: The quality of computer generated images of three-dimensional scenes depends on the shading technique used to paint the objects on the cathode-ray tube screen. The shading algorithm itself depends in part on the method for modeling the object, which also determines the hidden surface algorithm. The various methods of object modeling, shading, and hidden surface removal are thus strongly interconnected. Several shading techniques corresponding to different methods of object modeling and the related hidden surface algorithms are presented here. Human visual perception and the fundamental laws of optics are considered in the development of a shading rule that provides better quality and increased realism in generated images.
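
The shading rule introduced here is, in modern notation (our summary, not the paper's original symbols),

    \[ I = k_a\, i_a + k_d\, (\hat{N} \cdot \hat{L})\, i_d + k_s\, (\hat{R} \cdot \hat{V})^{\alpha}\, i_s, \qquad \hat{R} = 2 (\hat{N} \cdot \hat{L})\, \hat{N} - \hat{L} \]

where \hat{N} is the surface normal (interpolated across the polygon and evaluated per pixel in Phong shading), \hat{L} and \hat{V} are the unit light and view directions, the k terms are material coefficients, and the exponent \alpha controls highlight sharpness.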

3,393 citations


"Multiple viewpoint rendering" refers methods in this paper

  • ...The most common view dependent shading algorithm is Phong shading [Bui-Tuong75], which interpolates surface normals between a polygon’s vertices and performs lighting calculations at each pixel....


Proceedings ArticleDOI
01 Aug 1996
TL;DR: A new method is presented for capturing the complete appearance of both synthetic and real world objects and scenes, representing this information, and then using this representation to render images of the object from new camera positions.
Abstract: This paper discusses a new method for capturing the complete appearance of both synthetic and real world objects and scenes, representing this information, and then using this representation to render images of the object from new camera positions. Unlike the shape capture process traditionally used in computer vision and the rendering process traditionally used in computer graphics, our approach does not rely on geometric representations. Instead we sample and reconstruct a 4D function, which we call a Lumigraph. The Lumigraph is a subset of the complete plenoptic function that describes the flow of light at all positions in all directions. With the Lumigraph, new images of the object can be generated very quickly, independent of the geometric or illumination complexity of the scene or object. The paper discusses a complete working system including the capture of samples, the construction of the Lumigraph, and the subsequent rendering of images from this new representation.
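
The lumigraph's distinctive step is depth correction using approximate geometry. In a simplified 2D form (our notation: camera plane and image plane separated by distance d, a surface point at depth z from the camera plane, and a stored ray with coordinates (s, u)), the matching ray from a neighboring camera s' has image coordinate

    \[ u' = u + (s - s') \left( \frac{d}{z} - 1 \right) \]

which rebins rays as if they originated on the approximate surface rather than on the image plane. As sanity checks, u' = u when z = d (points on the image plane show no parallax), and u' - s' = u - s as z grows without bound (points at infinity preserve ray direction).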

2,986 citations


"Multiple viewpoint rendering" refers background in this paper

  • ...[8] and Levoy and Hanrahan [14] produce a single output image from a perspective image set, while similar algorithms developed for optical predistortion in synthetic holography derive not just one but an entirely new set of images [10]....


01 Jan 1991
TL;DR: Early vision, as discussed by the authors, is defined as measuring the amounts of various kinds of visual substances present in the image (e.g., redness or rightward motion energy) rather than how it labels "things".
Abstract: What are the elements of early vision? This question might be taken to mean: What are the fundamental atoms of vision? It might then be variously answered in terms of such candidate structures as edges, peaks, corners, and so on. In this chapter we adopt a rather different point of view and ask the question, What are the fundamental substances of vision? This distinction is important because we wish to focus on the first steps in extraction of visual information. At this level it is premature to talk about discrete objects, even such simple ones as edges and corners. There is general agreement that early vision involves measurements of a number of basic image properties including orientation, color, motion, and so on. Figure 1.1 shows a caricature (in the style of Neisser, 1976) of the sort of architecture that has become quite popular as a model for both human and machine vision. The first stage of processing involves a set of parallel pathways, each devoted to one particular visual property. We propose that the measurements of these basic properties be considered as the elements of early vision. We think of early vision as measuring the amounts of various kinds of visual "substances" present in the image (e.g., redness or rightward motion energy). In other words, we are interested in how early vision measures "stuff" rather than in how it labels "things." What, then, are these elementary visual substances? Various lists have been compiled using a mixture of intuition and experiment. Electrophysiologists have described neurons in striate cortex that are selectively sensitive to certain visual properties; for reviews, see Hubel (1988) and DeValois and DeValois (1988). Psychophysicists have inferred the existence of channels that are tuned for certain visual properties; for reviews, see Graham (1989), Olzak and Thomas (1986), Pokorny and Smith (1986), and Watson (1986). Researchers in perception have found aspects of visual stimuli that are processed pre-attentively (Beck, 1966; Bergen & Julesz, 1983; Julesz & Bergen,

1,576 citations

Proceedings ArticleDOI
15 Sep 1995
TL;DR: An image-based rendering system based on sampling, reconstructing, and resampling the plenoptic function is presented and a novel visible surface algorithm and a geometric invariant for cylindrical projections that is equivalent to the epipolar constraint defined for planar projections are introduced.
Abstract: Image-based rendering is a powerful new approach for generating real-time photorealistic computer graphics. It can provide convincing animations without an explicit geometric representation. We use the “plenoptic function” of Adelson and Bergen to provide a concise problem statement for image-based rendering paradigms, such as morphing and view interpolation. The plenoptic function is a parameterized function for describing everything that is visible from a given point in space. We present an image-based rendering system based on sampling, reconstructing, and resampling the plenoptic function. In addition, we introduce a novel visible surface algorithm and a geometric invariant for cylindrical projections that is equivalent to the epipolar constraint defined for planar projections.
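
For reference, the plenoptic function of Adelson and Bergen that this paper samples and resamples is, in full generality, seven-dimensional; for a static scene observed at a fixed wavelength it reduces to the five-dimensional form the paper works with:

    \[ P(\theta, \phi, \lambda, t, V_x, V_y, V_z) \longrightarrow P(\theta, \phi, V_x, V_y, V_z) \]

that is, the radiance arriving from direction (\theta, \phi) at any viewing position (V_x, V_y, V_z).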

1,555 citations


"Multiple viewpoint rendering" refers background in this paper

  • ...McMillan and Bishop [McMillan95] present work relevant to...
