Journal Article

A Geometric Analysis of Light Field Rendering

Zhouchen Lin, +1 more
01 Jul 2004, Vol. 58, Iss. 2, pp. 121–138
TL;DR: The key observation is that anti-aliased light field rendering is equivalent to eliminating the “double image” artifacts caused by view interpolation, and a closed-form solution of the minimum sampling rate is presented.
Abstract
Recently, many image-based modeling and rendering techniques have been successfully designed to render photo-realistic images without the need for explicit 3D geometry. However, these techniques (e.g., light field rendering (Levoy, M. and Hanrahan, P., 1996. In SIGGRAPH 1996 Conference Proceedings, Annual Conference Series, Aug. 1996, pp. 31–42) and Lumigraph (Gortler, S.J., Grzeszczuk, R., Szeliski, R., and Cohen, M.F., 1996. In SIGGRAPH 1996 Conference Proceedings, Annual Conference Series, Aug. 1996, pp. 43–54)) may require a substantial number of images. In this paper, we adopt a geometric approach to investigate the minimum sampling problem for light field rendering, with and without geometry information of the scene. Our key observation is that anti-aliased light field rendering is equivalent to eliminating the “double image” artifacts caused by view interpolation. Specifically, we present a closed-form solution of the minimum sampling rate for light field rendering. The minimum sampling rate is determined by the resolution of the camera and the depth variation of the scene. This rate is ensured if the optimal constant depth for rendering is chosen as the harmonic mean of the maximum and minimum depths of the scene. Moreover, we construct the minimum sampling curve in the joint geometry and image space, with the consideration of depth discontinuity. The minimum sampling curve quantitatively indicates how reduced geometry information can be compensated by increasing the number of images, and vice versa. Experimental results demonstrate the effectiveness of our theoretical analysis.
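
As a minimal sketch of the stated result (using z_min and z_max here as assumed symbols for the minimum and maximum scene depths, which the abstract does not name explicitly), the optimal constant rendering depth is the harmonic mean

\[
z_{\mathrm{opt}} = \frac{2\, z_{\min}\, z_{\max}}{z_{\min} + z_{\max}}.
\]

Intuitively, since image-space disparity varies with inverse depth, the depth whose reciprocal is the average of 1/z_min and 1/z_max balances the worst-case interpolation error between the nearest and farthest scene points.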


Citations
Journal Article

Light Field Image Processing: An Overview

TL;DR: A comprehensive overview and discussion of research in light field image processing is presented, covering basic light field representation and theory, acquisition, super-resolution, depth estimation, compression, editing, processing algorithms for light field display, and computer vision applications of light field data.
Proceedings Article

Light Field Reconstruction Using Deep Convolutional Network on EPI

TL;DR: This paper takes advantage of the clear texture structure of the epipolar plane image (EPI) in light field data and models the problem of light field reconstruction from a sparse set of views as CNN-based angular detail restoration on the EPI.
Journal Article

Light Field Reconstruction Using Shearlet Transform

TL;DR: In this article, an image-based rendering technique is developed that reconstructs the light field from a limited set of perspective views acquired by cameras, utilizing a sparse representation of epipolar-plane images (EPIs) in the shearlet transform domain.
Journal Article

A system for acquiring, processing, and rendering panoramic light field stills for virtual reality

TL;DR: In this article, a system for acquiring, processing, and rendering panoramic light field still photography for display in Virtual Reality (VR) is presented, in which a real-time light field reconstruction algorithm uses per-view geometry and a disk-based blending field.
References
Proceedings Article

Light field rendering

TL;DR: This paper describes a sampled representation for light fields that allows both efficient creation and display of inward- and outward-looking views, and describes a compression system able to compress the generated light fields by more than a factor of 100:1 with very little loss of fidelity.
Proceedings Article

The lumigraph

TL;DR: A new method is presented for capturing the complete appearance of both synthetic and real-world objects and scenes, representing this information, and then using this representation to render images of the object from new camera positions.

The Plenoptic Function and the Elements of Early Vision

TL;DR: Early vision, as discussed by the authors, is characterized as measuring the amounts of various kinds of visual substances present in the image (e.g., redness or rightward motion energy) rather than as labeling “things”.
Proceedings Article

Layered depth images

TL;DR: A set of efficient image-based rendering methods capable of rendering multiple frames per second on a PC, which warp Sprites with Depth to represent smooth surfaces without the gaps found in other techniques and use splatting as an efficient solution to the resampling problem.
Book

Digital Video Processing

TL;DR: Digital Video Processing, Second Edition, reflects important advances in image processing, computer vision, and video compression, including new applications such as digital cinema, ultra-high-resolution video, and 3D video.