Open Access · Posted Content

NeRF++: Analyzing and Improving Neural Radiance Fields.

TL;DR
A parametrization issue involved in applying NeRF to 360° captures of objects within large-scale, unbounded 3D scenes is addressed, and the method improves view synthesis fidelity in this challenging scenario.
Abstract
Neural Radiance Fields (NeRF) achieve impressive view synthesis results for a variety of capture settings, including 360° capture of bounded scenes and forward-facing capture of bounded and unbounded scenes. NeRF fits multi-layer perceptrons (MLPs) representing view-invariant opacity and view-dependent color volumes to a set of training images, and samples novel views based on volume rendering techniques. In this technical report, we first remark on radiance fields and their potential ambiguities, namely the shape-radiance ambiguity, and analyze NeRF's success in avoiding such ambiguities. Second, we address a parametrization issue involved in applying NeRF to 360° captures of objects within large-scale, unbounded 3D scenes. Our method improves view synthesis fidelity in this challenging scenario. Code is available at this https URL.
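To make the parametrization issue concrete: NeRF++ handles unbounded backgrounds with an inverted-sphere parametrization, mapping any point at distance r > 1 from the origin to the bounded 4D coordinates (x/r, y/r, z/r, 1/r). Below is a minimal NumPy sketch of that mapping; the function name is hypothetical and this is an illustration, not the paper's implementation.

```python
import numpy as np

def inverted_sphere_param(x):
    """Map points outside the unit sphere to NeRF++'s bounded 4D
    coordinates (x/r, y/r, z/r, 1/r), where r = ||x|| > 1. In the
    paper, foreground points (r <= 1) keep ordinary Euclidean
    coordinates and are handled by a separate foreground MLP."""
    r = np.linalg.norm(x, axis=-1, keepdims=True)
    return np.concatenate([x / r, 1.0 / r], axis=-1)

# A distant point lands on the unit shell with a small inverse depth:
p = np.array([[300.0, 400.0, 0.0]])       # r = 500
print(inverted_sphere_param(p))           # -> [[0.6, 0.8, 0.0, 0.002]]
```

Since 1/r lies in (0, 1], arbitrarily distant geometry receives bounded coordinates, which is what lets the background MLP cover the whole unbounded scene without clipping it to a finite volume.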


Citations
Proceedings Article

TensoRF: Tensorial Radiance Fields

TL;DR: TensoRF is presented, a novel approach that models and reconstructs radiance fields as a 4D tensor representing a 3D voxel grid with per-voxel multi-channel features, together with a novel vector-matrix (VM) decomposition that relaxes the low-rank constraints for two modes of the tensor and factorizes it into compact vector and matrix factors.
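For intuition, the VM decomposition can be sketched in a few lines of NumPy: a single-channel I × J × K grid is rebuilt as a sum of rank-R components, each pairing a vector along one axis with a matrix over the other two. Names and shapes here are illustrative, not TensoRF's actual code.

```python
import numpy as np

def vm_reconstruct(vx, Myz, vy, Mxz, vz, Mxy):
    """Rebuild a dense I x J x K grid (one feature channel) from
    rank-R vector-matrix factors."""
    t  = np.einsum('ri,rjk->ijk', vx, Myz)   # X vectors x YZ matrices
    t += np.einsum('rj,rik->ijk', vy, Mxz)   # Y vectors x XZ matrices
    t += np.einsum('rk,rij->ijk', vz, Mxy)   # Z vectors x XY matrices
    return t

R, I, J, K = 4, 32, 32, 32
grid = vm_reconstruct(np.random.rand(R, I), np.random.rand(R, J, K),
                      np.random.rand(R, J), np.random.rand(R, I, K),
                      np.random.rand(R, K), np.random.rand(R, I, J))
# Storage: 3 * R * (side + side^2) values versus side^3 for the
# dense grid, a large saving at realistic resolutions.
```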
Posted Content

NeRD: Neural Reflectance Decomposition from Image Collections

TL;DR: A neural reflectance decomposition (NeRD) technique that uses physically-based rendering to decompose the scene into spatially varying BRDF material properties, enabling real-time rendering under novel illumination.
Proceedings Article

Block-NeRF: Scalable Large Scene Neural View Synthesis

TL;DR: It is demonstrated that when scaling NeRF to render city-scale scenes spanning multiple blocks, it is vital to decompose the scene into individually trained NeRFs; this decouples rendering time from scene size, enables rendering to scale to arbitrarily large environments, and allows per-block updates of the environment.
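A toy sketch of the compositing step, assuming simple inverse-distance blending of per-block renders (the paper additionally uses appearance matching and learned visibility to select and weight blocks; names here are hypothetical):

```python
import numpy as np

def blend_blocks(cam_pos, centers, renders, radius):
    """Blend the RGB renders (B, H, W, 3) of the blocks whose
    centers lie within `radius` of the camera, weighting each
    by inverse distance to the camera."""
    d = np.linalg.norm(centers - cam_pos, axis=-1)
    near = d < radius
    assert near.any(), "camera is out of range of every block"
    w = 1.0 / np.maximum(d[near], 1e-6)
    w /= w.sum()
    return np.tensordot(w, renders[near], axes=1)   # -> (H, W, 3)
```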
Proceedings Article

NeX: Real-time View Synthesis with Neural Basis Expansion

TL;DR: NeX models view-dependent effects by parameterizing each pixel as a linear combination of basis functions learned from a neural network, and proposes a hybrid implicit-explicit modeling strategy that improves the reproduction of fine detail.
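The per-pixel model can be summarized as C(v) = k0 + Σ_n k_n H_n(v): the coefficients k are stored explicitly per pixel, while the basis values H_n(v) come from a small MLP evaluated on the viewing direction v. A minimal sketch with hypothetical names, taking the basis values as given:

```python
import numpy as np

def nex_pixel_color(k0, k, basis):
    """View-dependent color: view-independent base k0 (3,) plus a
    linear combination of N basis values (N,) with explicit
    per-pixel coefficients k (N, 3); clipped to valid RGB."""
    return np.clip(k0 + basis @ k, 0.0, 1.0)

rgb = nex_pixel_color(np.array([0.2, 0.3, 0.4]),
                      np.random.rand(8, 3) * 0.1,   # stand-in coefficients
                      np.random.rand(8))            # stand-in MLP outputs
```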
References
Proceedings Article

Light field rendering

TL;DR: This paper describes a sampled representation for light fields that allows for both efficient creation and display of inward- and outward-looking views, and describes a compression system that compresses the generated light fields by more than a factor of 100:1 with very little loss of fidelity.
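For intuition, a sampled two-plane light field is a 4D array of radiance samples, and rendering reduces to looking rays up in that array. A nearest-neighbor version with hypothetical names (the paper interpolates and compresses; this sketch does neither):

```python
import numpy as np

def sample_light_field(L, u, v, s, t):
    """Nearest-neighbor lookup into a two-plane light field L of
    shape (U, V, S, T, 3): (u, v) indexes the camera plane and
    (s, t) the focal plane, all in [0, 1]; a ray is the line
    joining its intersections with the two planes."""
    idx = [int(round(c * (n - 1))) for c, n in zip((u, v, s, t), L.shape)]
    return L[tuple(idx)]

L = np.random.rand(8, 8, 64, 64, 3)               # tiny synthetic light field
rgb = sample_light_field(L, 0.5, 0.5, 0.25, 0.75)
```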
Posted Content

The Unreasonable Effectiveness of Deep Features as a Perceptual Metric

TL;DR: A new dataset of human perceptual similarity judgments is introduced; deep features are found to outperform all previous metrics by large margins on this dataset, suggesting that perceptual similarity is an emergent property shared across deep visual representations.
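This is the LPIPS metric commonly reported in view-synthesis papers, including NeRF++. A minimal usage sketch with the authors' lpips package; the random tensors stand in for real images:

```python
import torch
import lpips

# Lower distance = more perceptually similar. The package expects
# NCHW image tensors scaled to [-1, 1].
loss_fn = lpips.LPIPS(net='alex')           # AlexNet feature backbone
img0 = torch.rand(1, 3, 64, 64) * 2 - 1
img1 = torch.rand(1, 3, 64, 64) * 2 - 1
print(loss_fn(img0, img1).item())
```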
Proceedings Article

Structure-from-Motion Revisited

TL;DR: This work proposes a new SfM technique that improves upon the state of the art, taking a further step toward a truly general-purpose pipeline.
Proceedings Article

The lumigraph

TL;DR: A new method for capturing the complete appearance of both synthetic and real-world objects and scenes, representing this information, and then using this representation to render images of the object from new camera positions.
Proceedings Article

Modeling and rendering architecture from photographs: a hybrid geometry- and image-based approach

TL;DR: This work presents a new approach for modeling and rendering existing architectural scenes from a sparse set of still photographs that combines geometry-based and image-based techniques, and presents view-dependent texture mapping, a method of compositing multiple views of a scene that better simulates geometric detail on basic models.