Proceedings ArticleDOI

A Unified Deep Learning Approach for Foveated Rendering & Novel View Synthesis from Sparse RGB-D Light Fields

TLDR
In this paper, an end-to-end convolutional neural network was designed to perform both foveated reconstruction and view synthesis using only 1.2% of the total light field data.
Abstract
Near-eye light field displays provide a solution to visual discomfort when using head-mounted displays by presenting accurate depth and focal cues. However, light field HMDs require rendering the scene from a large number of viewpoints. This paper tackles the computational challenge of rendering sharp imagery of the foveal region while reproducing the retinal defocus blur that correctly drives accommodation. We designed a novel end-to-end convolutional neural network that leverages human vision to perform both foveated reconstruction and view synthesis using only 1.2% of the total light field data. The proposed architecture comprises a log-polar sampling scheme followed by an interpolation stage and a convolutional neural network. To the best of our knowledge, this is the first attempt to synthesize the entire light field from sparse RGB-D inputs while simultaneously addressing foveated rendering for computational displays. Our algorithm achieves high fidelity in the fovea without perceptible artifacts in the peripheral regions. Performance in the fovea is comparable to state-of-the-art view synthesis methods, despite using around 10x less light field data.
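The log-polar sampling stage described in the abstract can be sketched as below. This is a minimal illustration, not the paper's implementation: the grid sizes, the nearest-neighbour lookup, and the `log_polar_sample` helper are all assumptions. In the proposed architecture, such a sparse sample would then feed the interpolation stage and the CNN.

```python
import numpy as np

def log_polar_sample(img, gaze, n_r=64, n_theta=128):
    """Sample an image on a log-polar grid centred on the gaze point.

    Rings are spaced logarithmically in radius, so sample density is
    highest at the fovea and falls off toward the periphery.
    """
    h, w = img.shape[:2]
    gx, gy = gaze
    # radius needed to cover the whole frame from the gaze point
    r_max = np.hypot(max(gx, w - gx), max(gy, h - gy))
    # log-spaced radii from ~1 px out to the frame boundary
    radii = np.exp(np.linspace(0.0, np.log(r_max), n_r))
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    xs = np.clip(gx + rr * np.cos(tt), 0, w - 1).astype(int)
    ys = np.clip(gy + rr * np.sin(tt), 0, h - 1).astype(int)
    return img[ys, xs]  # shape (n_r, n_theta, channels)

# A 64x128 grid over a 512x512 image keeps only
# 64*128 / (512*512) ~ 3% of the pixels.
img = np.random.rand(512, 512, 3)
sampled = log_polar_sample(img, gaze=(256, 256))
print(sampled.shape)
```

The nearest-neighbour lookup here is only for brevity; bilinear interpolation at the sample positions would be the more faithful choice.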


Citations
Journal ArticleDOI

An integrative view of foveated rendering

TL;DR: Foveated rendering adapts the image synthesis process to the user's gaze by exploiting the human visual system's limitations, in particular its reduced acuity in peripheral vision, to deliver high-quality visual experiences at greatly reduced computational, storage, and transmission costs.
Journal ArticleDOI

2T-UNET: A Two-Tower UNet with Depth Clues for Robust Stereo Depth Estimation

TL;DR: The depth estimation problem is revisited, avoiding the explicit stereo matching step by using a simple two-tower convolutional neural network; the proposed algorithm, entitled 2T-UNet, surpasses state-of-the-art monocular and stereo depth estimation methods on the challenging Scene dataset.
References
Journal ArticleDOI

ResUNet-a: A deep learning framework for semantic segmentation of remotely sensed data

TL;DR: In this article, a novel deep learning architecture, ResUNet-a, is proposed for the task of semantic segmentation of monotemporal very high-resolution aerial images.
Journal ArticleDOI

Learning-based view synthesis for light field cameras

TL;DR: In this paper, a learning-based approach is proposed to synthesize new views from a sparse set of input views, using two sequential convolutional neural networks to model the disparity and color estimation components; both networks are trained simultaneously by minimizing the error between the synthesized and ground-truth images.
Book ChapterDOI

A Dataset and Evaluation Methodology for Depth Estimation on 4D Light Fields

TL;DR: In computer vision communities such as stereo, optical flow, and visual tracking, commonly accepted and widely used benchmarks have enabled objective comparison and boosted scientific progress; this work introduces such a dataset and evaluation methodology for depth estimation on 4D light fields.
Journal ArticleDOI

Foveated 3D graphics

TL;DR: This work exploits the falloff of acuity in the visual periphery to accelerate graphics computation by a factor of 5-6 on a desktop HD display, and develops a general and efficient antialiasing algorithm easily retrofitted into existing graphics code to minimize "twinkling" artifacts in the lower-resolution layers.
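The acuity falloff exploited above is commonly modelled as a minimum angle of resolution (MAR) that grows linearly with eccentricity. A rough sketch of how a renderer could budget resolution from such a model; the constants `w0` and `slope` are illustrative assumptions, not values from the cited paper:

```python
def min_angle_of_resolution(ecc_deg, w0=1.0 / 60.0, slope=0.022):
    """Linear acuity model: the smallest resolvable feature (in degrees)
    grows with eccentricity. w0 is the foveal MAR (1 arcmin here,
    i.e. 20/20 acuity); both constants are illustrative."""
    return w0 + slope * ecc_deg

def relative_resolution(ecc_deg):
    """Shading resolution a renderer can afford at a given eccentricity,
    normalised to the fovea: resolution may drop in proportion to 1/MAR."""
    return min_angle_of_resolution(0.0) / min_angle_of_resolution(ecc_deg)

for e in (0, 10, 30):
    print(f"{e:2d} deg eccentricity: {relative_resolution(e):.2f}x foveal resolution")
```

Under this toy model the periphery tolerates a large resolution drop, which is what makes the reported 5-6x desktop speedup plausible: most of the display area lies at high eccentricity.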
Proceedings ArticleDOI

Near-eye light field displays

TL;DR: A light-field-based approach to near-eye display allows for thin, lightweight head-mounted displays capable of depicting accurate accommodation, convergence, and binocular disparity depth cues; a GPU-accelerated stereoscopic light field renderer is also proposed.