Topic

Light field

About: Light field is a research topic. Over its lifetime, 5,357 publications have been published on this topic, receiving 87,424 citations.


Papers
Proceedings ArticleDOI
25 Jun 2003
TL;DR: A new reconstruction filter is presented that significantly reduces the "ghosting" artifacts seen in undersampled light fields while preserving important high-fidelity features such as sharp object boundaries and view-dependent reflectance, and allows acceptable images to be generated from smaller image sets.
Abstract: This paper builds on previous research in the light field area of image-based rendering. We present a new reconstruction filter that significantly reduces the "ghosting" artifacts seen in undersampled light fields, while preserving important high-fidelity features such as sharp object boundaries and view-dependent reflectance. By improving the rendering quality achievable from undersampled light fields, our method allows acceptable images to be generated from smaller image sets. We present both frequency and spatial domain justifications for our techniques. We also present a practical framework for implementing the reconstruction filter in multiple rendering passes.

111 citations
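To make the reconstruction-filter idea above concrete, here is a minimal, hypothetical sketch of kernel-based rendering from a two-plane light field: a novel view is blended from nearby camera images, and widening the kernel acts as a crude aperture prefilter that trades ghosting for blur. The synthetic scene, the triangular kernel, and all names are illustrative assumptions, not the paper's actual multi-pass filter.

```python
# Illustrative sketch only: novel-view rendering from a toy two-plane light field,
# with an adjustable reconstruction-kernel width.
import numpy as np

def synthetic_light_field(n_cams=4, res=32):
    """Toy light field: n_cams x n_cams camera grid, each image res x res."""
    u = np.linspace(-1, 1, n_cams)
    s = np.linspace(-1, 1, res)
    U, V, S, T = np.meshgrid(u, u, s, s, indexing="ij")
    # View-dependent sinusoidal "scene": the pattern shifts with camera position (u, v).
    return np.sin(8 * (S + 0.1 * U)) * np.cos(8 * (T + 0.1 * V))

def render_view(lf, cam_pos, kernel_width=1.0):
    """Blend camera images with a triangular kernel of the given width
    (in camera-grid units). Width 1.0 is roughly a bilinear blend; larger
    widths suppress ghosting from undersampling at the cost of sharpness."""
    n_cams = lf.shape[0]
    grid = np.linspace(-1, 1, n_cams)
    spacing = grid[1] - grid[0]
    out = np.zeros(lf.shape[2:])
    weight_sum = 0.0
    for i, cu in enumerate(grid):
        for j, cv in enumerate(grid):
            d = max(abs(cu - cam_pos[0]), abs(cv - cam_pos[1])) / (kernel_width * spacing)
            w = max(0.0, 1.0 - d)          # triangular reconstruction kernel
            if w > 0:
                out += w * lf[i, j]
                weight_sum += w
    return out / max(weight_sum, 1e-8)

lf = synthetic_light_field()
sharp_but_ghosty = render_view(lf, (0.3, -0.2), kernel_width=1.0)
wide_aperture = render_view(lf, (0.3, -0.2), kernel_width=2.5)
print(sharp_but_ghosty.shape, wide_aperture.shape)
```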

Journal ArticleDOI
TL;DR: A light transport framework for understanding the fundamental limits of light field camera resolution that can model all existing lenslet-based light field cameras and allows them to be compared in a unified way in simulation, independent of the practical differences between particular prototypes.
Abstract: Light field cameras capture full spatio-angular information of the light field, and enable many novel photographic and scientific applications. It is often stated that there is a fundamental trade-off between spatial and angular resolution, but there has been limited theoretical or numerical understanding of this trade-off. Moreover, it is very difficult to evaluate the design of a light field camera because a new design is usually reported with its prototype and rendering algorithm, both of which affect resolution. In this article, we develop a light transport framework for understanding the fundamental limits of light field camera resolution. We first derive the prefiltering model of lenslet-based light field cameras. The main novelty of our model is in considering the full space-angle sensitivity profile of the photosensor; in particular, real pixels have nonuniform angular sensitivity, responding more strongly to light along the optical axis than at grazing angles. We show that the full sensor profile plays an important role in defining the performance of a light field camera. The proposed method can model all existing lenslet-based light field cameras and allows them to be compared in a unified way in simulation, independent of the practical differences between particular prototypes. We further extend our framework to analyze the performance of two rendering methods: the simple projection-based method and the inverse light transport process. We validate our framework with both flatland simulation and real data from the Lytro light field camera.

111 citations
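As a toy illustration of why the sensor's angular profile matters, the sketch below integrates an angularly varying incident radiance against either a uniform or a cosine-power pixel sensitivity in flatland. The cos^4 profile, the angular range, and the radiance pattern are assumptions chosen for illustration; this is not the article's actual transport model.

```python
# Illustrative flatland sketch: a pixel value as an integral of radiance over
# angle, weighted by an assumed angular sensitivity profile.
import numpy as np

def pixel_response(radiance, angles, profile="uniform", power=4):
    """Integrate radiance(angle) against an angular sensitivity profile."""
    if profile == "uniform":
        w = np.ones_like(angles)
    else:
        # Nonuniform profile: stronger response near the optical axis than at grazing angles.
        w = np.cos(angles) ** power
    w /= w.sum()
    return float(np.sum(w * radiance))

angles = np.linspace(-0.4, 0.4, 201)        # radians, rays reaching one pixel
radiance = 1.0 + 0.5 * np.sin(20 * angles)  # angularly varying incident light

print("uniform profile:", pixel_response(radiance, angles, "uniform"))
print("cos^4 profile  :", pixel_response(radiance, angles, "cos4"))
```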

Proceedings ArticleDOI
30 Apr 2007
TL;DR: LightShop is a modular system that allows a user to interactively manipulate, composite, and render multiple light fields; applications are shown in digital photography and in integrating light fields into a modern space-flight game.
Abstract: Light fields can be used to represent an object's appearance with a high degree of realism. However, unlike their geometric counterparts, these image-based representations lack user control for manipulating them. We present a system that allows a user to interactively manipulate, composite and render multiple light fields. LightShop is a modular system consisting of three parts: 1) a set of functions that allow a user to model a scene containing multiple light fields, 2) a ray-shading language that describes how an image should be constructed from a set of light fields, and 3) a real-time light field rendering system in OpenGL that can plug into existing 3D engines as a GLSL shader. We show applications in digital photography and we demonstrate how to integrate light fields into a modern space-flight game using LightShop.

111 citations
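A rough flavour of the ray-shading idea, sketched in Python rather than LightShop's actual GLSL-based language: each light field is treated as a function of a ray, and a small per-ray expression composites several of them. The toy light fields, the "over" operator, and every name below are illustrative assumptions, not LightShop's API.

```python
# Illustrative sketch: per-ray compositing of two toy "light fields".
import numpy as np

def checker_lf(ray_o, ray_d):
    """Toy light field #1: checkerboard colour where the ray meets the z=1 plane."""
    t = (1.0 - ray_o[2]) / ray_d[2]
    x, y = ray_o[0] + t * ray_d[0], ray_o[1] + t * ray_d[1]
    c = float((int(np.floor(x * 4)) + int(np.floor(y * 4))) % 2)
    return np.array([c, c, c, 1.0])                      # RGBA

def glow_lf(ray_o, ray_d):
    """Toy light field #2: a semi-transparent radial glow on the z=0.5 plane."""
    t = (0.5 - ray_o[2]) / ray_d[2]
    x, y = ray_o[0] + t * ray_d[0], ray_o[1] + t * ray_d[1]
    a = float(np.exp(-4 * (x * x + y * y)))
    return np.array([1.0, 0.5, 0.0, a])

def over(front, back):
    """'Over' compositing with straight (non-premultiplied) alpha."""
    fa, ba = front[3], back[3]
    out_a = fa + ba * (1 - fa)
    out_rgb = (front[:3] * fa + back[:3] * ba * (1 - fa)) / max(out_a, 1e-8)
    return np.append(out_rgb, out_a)

def shade(ray_o, ray_d):
    """Per-ray shading expression: the glow composited over the checkerboard."""
    return over(glow_lf(ray_o, ray_d), checker_lf(ray_o, ray_d))

# Render a tiny image by shooting one ray per pixel from a pinhole at the origin.
res = 8
img = np.zeros((res, res, 4))
for iy in range(res):
    for ix in range(res):
        d = np.array([(ix + 0.5) / res - 0.5, (iy + 0.5) / res - 0.5, 1.0])
        img[iy, ix] = shade(np.zeros(3), d / np.linalg.norm(d))
print(img[..., :3].mean())
```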

Patent
30 Sep 2013
TL;DR: A system for synthesizing light field images from virtual viewpoints is described, in which a processor and memory store captured light field image data and an image manipulation application generates an image from the perspective of a virtual viewpoint.
Abstract: Systems and methods for the synthesis of light field images from virtual viewpoints in accordance with embodiments of the invention are disclosed. In one embodiment of the invention, a system includes a processor and a memory configured to store captured light field image data and an image manipulation application, wherein the captured light field image data includes image data, pixel position data, and a depth map, and wherein the image manipulation application configures the processor to obtain captured light field image data, determine a virtual viewpoint for the captured light field image data, where the virtual viewpoint includes a virtual location and virtual depth information, compute a virtual depth map based on the captured light field image data and the virtual viewpoint, and generate an image from the perspective of the virtual viewpoint based on the captured light field image data and the virtual depth map.

110 citations
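The general mechanism described in the claim (use a depth map to reproject captured pixels and form an image from a virtual viewpoint) can be sketched as a toy forward-warping routine. Everything below is an assumption-laden illustration, not the patented pipeline; in particular it omits the synthesis of the virtual depth map.

```python
# Illustrative sketch: depth-based forward warping of a reference view into a
# hypothetical virtual camera (R, t), with a simple z-buffer.
import numpy as np

def render_virtual_view(image, depth, K, R, t, out_shape):
    """Forward-warp pixels of a reference view into a virtual camera.
    K is a shared 3x3 intrinsic matrix; depth holds metric depth per pixel."""
    h, w = depth.shape
    colors = image.reshape(-1, image.shape[-1])
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])   # 3 x N homogeneous pixels
    pts = (np.linalg.inv(K) @ pix) * depth.ravel()             # back-project to 3D (ref frame)
    cam = R @ pts + t[:, None]                                 # transform into the virtual camera
    proj = K @ cam
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)
    out = np.zeros((*out_shape, image.shape[-1]))
    zbuf = np.full(out_shape, np.inf)
    for i in range(h * w):
        if 0 <= u[i] < out_shape[1] and 0 <= v[i] < out_shape[0] and cam[2, i] < zbuf[v[i], u[i]]:
            zbuf[v[i], u[i]] = cam[2, i]                       # z-buffer: keep the nearest point
            out[v[i], u[i]] = colors[i]
    return out

# Toy usage: a 16x16 constant-depth image, virtual camera shifted along x.
K = np.array([[16.0, 0, 8], [0, 16.0, 8], [0, 0, 1]])
img = np.random.rand(16, 16, 3)
dep = np.full((16, 16), 2.0)
view = render_virtual_view(img, dep, K, np.eye(3), np.array([0.1, 0.0, 0.0]), (16, 16))
print(view.shape)
```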

Journal ArticleDOI
TL;DR: In this paper, a hybrid imaging system is proposed to generate a full light field video at 30 fps by propagating the angular information from the light field sequence to the 2D video, so that input images can be warped to the target view.
Abstract: Light field cameras have many advantages over traditional cameras, as they allow the user to change various camera settings after capture. However, capturing light fields requires a huge bandwidth to record the data: a modern light field camera can only take three images per second. This prevents current consumer light field cameras from capturing light field videos. Temporal interpolation at such extreme scale (10x, from 3 fps to 30 fps) is infeasible as too much information will be entirely missing between adjacent frames. Instead, we develop a hybrid imaging system, adding another standard video camera to capture the temporal information. Given a 3 fps light field sequence and a standard 30 fps 2D video, our system can then generate a full light field video at 30 fps. We adopt a learning-based approach, which can be decomposed into two steps: spatio-temporal flow estimation and appearance estimation. The flow estimation propagates the angular information from the light field sequence to the 2D video, so we can warp input images to the target view. The appearance estimation then combines these warped images to output the final pixels. The whole process is trained end-to-end using convolutional neural networks. Experimental results demonstrate that our algorithm outperforms current video interpolation methods, enabling consumer light field videography, and making applications such as refocusing and parallax view generation achievable on videos for the first time.

110 citations
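The two-step decomposition described above (spatio-temporal flow estimation, then appearance estimation) can be outlined with crude stand-ins: a single global phase-correlation shift in place of the learned flow network, and a fixed temporal blend in place of the appearance CNN. Everything below is an illustrative assumption about the structure of such a pipeline, not the paper's trained model.

```python
# Illustrative sketch: interpolating a light field frame at an intermediate time
# using motion estimated from a higher-frame-rate 2D video.
import numpy as np

def global_shift(ref, tgt):
    """Stand-in flow estimator: one translational shift via FFT phase correlation."""
    F = np.conj(np.fft.fft2(ref)) * np.fft.fft2(tgt)
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-8)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

def warp(views, shift):
    """Stand-in warp: shift the spatial axes of the light field views."""
    return np.roll(views, shift, axis=(-2, -1))

def interpolate_light_field(lf_prev, lf_next, vid_prev, vid_t, vid_next, alpha):
    """Synthesize the light field at an intermediate time alpha in (0, 1)."""
    fwd = global_shift(vid_prev, vid_t)   # motion from the previous 2D frame to the target time
    bwd = global_shift(vid_next, vid_t)   # motion from the next 2D frame to the target time
    # Appearance step stand-in: blend the two warped key frames by temporal distance.
    return (1 - alpha) * warp(lf_prev, fwd) + alpha * warp(lf_next, bwd)

# Toy usage: 2x2 angular views, 32x32 pixels, purely translational synthetic motion.
rng = np.random.default_rng(0)
base = rng.random((32, 32))
lf0 = np.stack([np.stack([base] * 2)] * 2)           # light field key frame at t=0
lf1 = np.roll(lf0, (3, 5), axis=(-2, -1))            # light field key frame at t=1
mid = interpolate_light_field(lf0, lf1, base, np.roll(base, (1, 2), axis=(0, 1)),
                              np.roll(base, (3, 5), axis=(0, 1)), alpha=0.4)
print(mid.shape)
```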


Network Information
Related Topics (5)
Optical fiber: 167K papers, 1.8M citations (79% related)
Image processing: 229.9K papers, 3.5M citations (78% related)
Pixel: 136.5K papers, 1.5M citations (78% related)
Laser: 353.1K papers, 4.3M citations (78% related)
Quantum information: 22.7K papers, 911.3K citations (77% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    135
2022    375
2021    274
2020    493
2019    555
2018    503