Open Access

Efficient Free Form Light Field Rendering

TLDR
In this paper, a convex free-form camera surface and a set of arbitrarily oriented camera planes are used to represent all possible views of the scene without the need for multiple slabs, and this parameterization allows for relatively uniform sampling.
Abstract
We show a simple and efficient way of rendering arbitrary views from so-called free-form light fields, employing a convex free-form camera surface and a set of arbitrarily oriented camera planes. This way, directionally varying real-world imagery can be displayed without intermediate resampling steps, and yet rendering of free-form light fields can be performed as efficiently as for two-plane-parameterized light fields using texture-mapping graphics hardware. Comparable to sphere-based parameterizations, a single free-form light field can represent all possible views of the scene without the need for multiple slabs, and it allows for relatively uniform sampling. Furthermore, we extend the rendering algorithm to account for occlusion in certain input views. We apply our method to synthetic and real-world datasets, with and without additional geometric information, and compare the resulting rendering performance and quality to two-plane-parameterized light field rendering.
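For context, the two-plane parameterization that the abstract compares against can be sketched in a few lines: a ray is indexed by its intersections with a camera (st) plane and an image (uv) plane, and radiance is interpolated quadrilinearly from the 16 nearest stored samples. The Python sketch below illustrates that baseline on the CPU; the function names, plane layout, and extents are illustrative assumptions, not from the paper, which performs the equivalent lookup with texture-mapping graphics hardware.

```python
import numpy as np

def ray_plane_uv(o, d, p0, eu, ev):
    """Intersect the ray o + t*d with the plane through p0 spanned by
    axes eu, ev; return the (u, v) plane coordinates of the hit point."""
    n = np.cross(eu, ev)
    t = np.dot(p0 - o, n) / np.dot(d, n)
    hit = o + t * d
    return (np.dot(hit - p0, eu) / np.dot(eu, eu),
            np.dot(hit - p0, ev) / np.dot(ev, ev))

def sample_two_plane_lf(lf, st_plane, uv_plane, st_extent, uv_extent, o, d):
    """Quadrilinear lookup in a two-plane light field lf[s, t, u, v]:
    the ray is parameterized by its hits on the camera (st) plane and
    the image (uv) plane, and radiance is interpolated from the 2^4
    surrounding samples."""
    dims = lf.shape
    s, t = ray_plane_uv(o, d, *st_plane)
    u, v = ray_plane_uv(o, d, *uv_plane)
    # map plane coordinates in [0, extent] to fractional sample indices
    axes = []
    for val, extent, n in zip((s, t, u, v), st_extent + uv_extent, dims):
        x = np.clip(val / extent * (n - 1), 0.0, n - 1.0)
        i0 = int(np.floor(x))
        axes.append((i0, min(i0 + 1, n - 1), x - i0))
    # accumulate the 16 corner samples with quadrilinear weights
    out = 0.0
    for corner in range(16):
        w, idx = 1.0, []
        for axis in range(4):
            hi = (corner >> axis) & 1
            i0, i1, f = axes[axis]
            w *= f if hi else (1.0 - f)
            idx.append(i1 if hi else i0)
        out += w * lf[tuple(idx)]
    return out
```

As a quick sanity check, a constant-valued light field must reproduce that constant for any ray, since the sixteen interpolation weights sum to one.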


Citations

Fast (Spherical) Light Field Rendering with Per-Pixel Depth

TL;DR: A novel light field rendering technique which performs per-pixel depth correction of rays for high-quality reconstruction, without exhaustive pre-processing for 3D object reconstruction or real-time ray-object intersection calculations at rendering time.
Journal Article

GPU-Based Spherical Light Field Rendering with Per-Fragment Depth Correction

TL;DR: A novel light field rendering technique which performs per-pixel depth correction of rays for high-quality reconstruction and stores combined RGB and depth values in a parabolic 2D texture for every light field sample acquired at discrete positions on a uniform spherical setup.
Journal Article
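The per-pixel depth correction mentioned in the two entries above can be illustrated with a small sketch: rather than blending rays on a fixed focal surface (which blurs whenever that surface misses the geometry), each nearby camera is re-sampled in the direction of the ray's actual 3D hit point recovered from stored depth. The code below is a minimal CPU illustration under stated assumptions: the `image_lookup` callbacks and uniform blending weights are hypothetical stand-ins for the parabolic RGB+depth textures and proximity weighting the papers describe.

```python
import numpy as np

def depth_corrected_blend(hit_point, cams):
    """Sketch of depth-corrected light field sampling: each camera is
    queried in the direction of the ray's actual 3D hit point, and the
    resulting colors are blended.

    cams: list of (center, image_lookup) pairs, where image_lookup(dir)
    returns the RGB the camera stored for unit direction `dir`
    (a hypothetical stand-in for a combined RGB+depth texture)."""
    colors, weights = [], []
    for center, image_lookup in cams:
        d = hit_point - center
        d = d / np.linalg.norm(d)   # re-project the hit point into this camera
        colors.append(image_lookup(d))
        weights.append(1.0)         # uniform here; real renderers weight
                                    # cameras by angular proximity to the ray
    w = np.asarray(weights)
    w = w / w.sum()
    return (np.asarray(colors, dtype=float) * w[:, None]).sum(axis=0)
```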

Evaluating the quality of light fields computed from hand-held camera images

TL;DR: Using structure-from-motion algorithms and optimization techniques, camera motion and a 3-D reconstruction of the scene are established, and the light field is completed by computing local depth information for each input image.
Journal Article

Real-time view synthesis from a sparse set of views

TL;DR: The essence of the method is to perform the necessary depth estimation up to the level required by the minimal joint image-geometry sampling rate, using off-the-shelf graphics hardware, so that real-time anti-aliased light field rendering is achieved even if the image samples are insufficient.
Proceedings Article

A Parallel Multi-view Rendering Architecture

TL;DR: An architecture for rendering multiple views efficiently on a cluster of GPUs, in which the original scene is sampled by virtual cameras that are later used to reconstruct the desired views.