Report

The Light Field Video Camera

TL;DR: The Light Field Video Camera, as described in this paper, is a modular embedded design based on the IEEE 1394 High Speed Serial Bus, with an image sensor and MPEG2 compression at each node.
Abstract
We present the Light Field Video Camera, an array of CMOS image sensors for video image-based rendering applications. The device is designed to record a synchronized video dataset from over one hundred cameras to a hard disk array using as few as one PC per fifty image sensors. It is intended to be flexible, modular and scalable, with extensive visibility into and control over the cameras. The Light Field Video Camera is a modular embedded design based on the IEEE 1394 High Speed Serial Bus, with an image sensor and MPEG2 compression at each node. We show both the flexibility and scalability of the design with a six-camera prototype.
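A rough bandwidth calculation makes the role of per-node MPEG2 compression concrete: compressed streams are small enough that dozens of cameras can share a single IEEE 1394 bus and host PC, whereas raw sensor video would saturate the bus after only a few nodes. The sketch below uses illustrative values; the resolution, frame rate, and bitrate are assumptions, not figures from the paper.

```python
# Back-of-the-envelope bandwidth check for the camera-array architecture.
# All numeric values are illustrative assumptions, not figures from the paper.

RAW_WIDTH, RAW_HEIGHT = 640, 480     # assumed CMOS sensor resolution
FPS = 30                             # assumed frame rate
BITS_PER_PIXEL = 8                   # assumed 8-bit Bayer samples

MPEG2_BITRATE_MBPS = 4.0             # assumed per-camera MPEG2 bitrate
IEEE1394_S400_MBPS = 400.0           # nominal IEEE 1394a bus rate

raw_mbps = RAW_WIDTH * RAW_HEIGHT * FPS * BITS_PER_PIXEL / 1e6
cameras_per_bus_raw = int(IEEE1394_S400_MBPS // raw_mbps)
cameras_per_bus_mpeg2 = int(IEEE1394_S400_MBPS // MPEG2_BITRATE_MBPS)

print(f"raw stream per camera:   {raw_mbps:6.1f} Mbit/s")
print(f"cameras per bus (raw):   {cameras_per_bus_raw}")
print(f"cameras per bus (MPEG2): {cameras_per_bus_mpeg2}")
```

Under these assumptions a raw stream needs roughly 74 Mbit/s, so only about five uncompressed cameras fit on one bus, while MPEG2 streams of a few Mbit/s each leave headroom for the fifty-plus sensors per PC that the design targets.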


Citations
Journal Article

High-quality video view interpolation using a layered representation

TL;DR: This paper shows how high-quality video-based rendering of dynamic scenes can be accomplished using multiple synchronized video streams combined with novel image-based modeling and rendering algorithms, and develops a novel temporal two-layer compressed representation that handles matting.
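The temporal two-layer representation summarized above can be pictured as a simple per-view data structure: a dense main layer with color and depth, plus a thin boundary layer around depth discontinuities that also carries an alpha matte for soft compositing. The field names and array layout below are illustrative assumptions, not the paper's actual format.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class TwoLayerFrame:
    """Illustrative per-camera frame for layered view interpolation.

    main_color/main_depth cover every pixel; the boundary_* arrays are
    only meaningful in narrow strips around depth discontinuities and
    carry an alpha matte so mixed foreground/background pixels can be
    blended softly when views are interpolated.
    """
    main_color: np.ndarray      # (H, W, 3) uint8
    main_depth: np.ndarray      # (H, W)    float32
    boundary_color: np.ndarray  # (H, W, 3) uint8
    boundary_depth: np.ndarray  # (H, W)    float32
    boundary_alpha: np.ndarray  # (H, W)    float32 in [0, 1]

def composite(frame: TwoLayerFrame) -> np.ndarray:
    """Alpha-blend the boundary layer over the main layer."""
    a = frame.boundary_alpha[..., None]
    blended = a * frame.boundary_color + (1.0 - a) * frame.main_color
    return blended.astype(np.uint8)
```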
Journal Article

High performance imaging using large camera arrays

TL;DR: The authors describe a unique array of 100 custom video cameras that they built, and summarize their experiences using this array in a range of imaging applications.
Patent

Capturing and processing of images using monolithic camera array with heterogeneous imagers

TL;DR: This patent discloses systems and methods for implementing array cameras that perform super-resolution processing to generate higher-resolution super-resolved images from a plurality of captured images, along with lens stack arrays that can be utilized in array cameras.
Journal Article

3D TV: a scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes

TL;DR: The authors present the first real-time end-to-end 3D TV system with enough views and resolution to provide a truly immersive 3D experience, and describe the calibration and image alignment procedures necessary to achieve good image quality.
Journal Article

Light Field Image Processing: An Overview

TL;DR: This paper presents a comprehensive overview and discussion of research in light field image processing, including basic light field representation and theory, acquisition, super-resolution, depth estimation, compression, editing, processing algorithms for light field display, and computer vision applications of light field data.
References
Proceedings Article

Light field rendering

TL;DR: This paper describes a sampled representation for light fields that allows efficient creation and display of both inward- and outward-looking views, and a compression system able to compress the generated light fields by more than a factor of 100:1 with very little loss of fidelity.
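The sampled representation in this reference is the two-plane light field L(u, v, s, t): each ray is indexed by its intersections with a camera (u, v) plane and an image (s, t) plane, and new views are synthesized by interpolating nearby samples. The sketch below shows a quadrilinear lookup into such a 4D sample grid; the array layout and grid-unit coordinates are assumptions made for illustration.

```python
import numpy as np

def sample_light_field(L, u, v, s, t):
    """Quadrilinearly interpolate a radiance value from a discretely
    sampled two-plane light field.

    L is assumed to have shape (U, V, S, T, 3): camera positions (u, v)
    on one plane, pixel positions (s, t) on the other. The query
    coordinates are continuous values in grid units.
    """
    coords = [u, v, s, t]
    lo = [int(np.floor(c)) for c in coords]
    frac = [c - l for c, l in zip(coords, lo)]

    result = np.zeros(3)
    # Accumulate the 16 corners of the 4D cell containing (u, v, s, t).
    for corner in range(16):
        idx, weight = [], 1.0
        for d in range(4):
            hi = (corner >> d) & 1
            idx.append(min(lo[d] + hi, L.shape[d] - 1))  # clamp at the border
            weight *= frac[d] if hi else (1.0 - frac[d])
        result += weight * L[tuple(idx)]
    return result

# Tiny synthetic example: an 8x8 grid of 16x16-pixel RGB views.
lf = np.random.rand(8, 8, 16, 16, 3)
print(sample_light_field(lf, 3.2, 4.7, 7.5, 9.1))
```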
Proceedings Article

The lumigraph

TL;DR: This paper presents a new method for capturing the complete appearance of both synthetic and real-world objects and scenes, representing this information, and then using this representation to render images of the object from new camera positions.
Proceedings Article

Image-based visual hulls

TL;DR: This paper describes an efficient image-based approach to computing and shading visual hulls from silhouette image data that takes advantage of epipolar geometry and incremental computation to achieve a constant rendering cost per rendered pixel.
Proceedings Article

Plenoptic sampling

TL;DR: From a spectral analysis of light field signals and the sampling theorem, the authors derive analytical functions that determine the minimum sampling rate for light field rendering; this approach bridges the gap between image-based rendering and traditional geometry-based rendering.
Proceedings Article

Modeling and Rendering Architecture from Photographs

TL;DR: This thesis presents an approach for modeling and rendering existing architectural scenes from sparse sets of still photographs, and introduces view-dependent texture mapping, a method of compositing multiple views of a scene that better simulates geometric detail on basic models.