Open Access · Posted Content

RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs

TLDR
RegNeRF regularizes the geometry and appearance of patches rendered from unobserved viewpoints and anneals the ray sampling space during training; a normalizing flow model additionally regularizes the color of unobserved viewpoints, enabling novel view synthesis from sparse inputs.
Abstract
Neural Radiance Fields (NeRF) have emerged as a powerful representation for the task of novel view synthesis due to their simplicity and state-of-the-art performance. Though NeRF can produce photorealistic renderings of unseen viewpoints when many input views are available, its performance drops significantly when this number is reduced. We observe that the majority of artifacts in sparse input scenarios are caused by errors in the estimated scene geometry, and by divergent behavior at the start of training. We address this by regularizing the geometry and appearance of patches rendered from unobserved viewpoints, and annealing the ray sampling space during training. We additionally use a normalizing flow model to regularize the color of unobserved viewpoints. Our model outperforms not only other methods that optimize over a single scene, but in many cases also conditional models that are extensively pre-trained on large multi-view datasets.
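The sample-space annealing mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function name and the schedule parameters `mid` and `eta` are assumptions for the sketch.

```python
def annealed_sample_bounds(near, far, step, max_step, mid=0.5, eta=0.2):
    """Restrict ray sampling to a small interval around the expected scene
    center early in training, then linearly expand toward the full
    [near, far] range. `mid` (relative center of the range) and `eta`
    (initial fraction of the range) are illustrative hyperparameters."""
    frac = min(1.0, eta + (1.0 - eta) * step / max_step)
    center = near + mid * (far - near)
    half = 0.5 * frac * (far - near)
    return center - half, center + half
```

Early in training this concentrates samples where the scene is likely to be, which counteracts the divergent behavior at the start of training that the abstract describes.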


Citations
Journal ArticleDOI

Reconstructing Personalized Semantic Facial NeRF Models from Monocular Video

TL;DR: A novel semantic model for the human head, defined with a neural radiance field, that can represent complex facial attributes including hair and accessories, which traditional mesh blendshapes cannot, and applies to tasks such as facial retargeting and expression editing.
Journal ArticleDOI

SinNeRF: Training Neural Radiance Fields on Complex Scenes from a Single Image

TL;DR: Proposes SinNeRF, a single-view neural radiance field framework combining semantic and geometry regularizations, in which geometry and semantic pseudo labels are introduced and propagated to guide a progressive training process.
Book ChapterDOI

ARF: Artistic Radiance Fields

TL;DR: Proposes a method for transferring the artistic features of an arbitrary style image to a 3D scene, using a nearest-neighbor-based loss that captures style details while maintaining multi-view consistency.
Journal ArticleDOI

Conditional-Flow NeRF: Accurate 3D Modelling with Reliable Uncertainty Quantification

TL;DR: Proposes Conditional-Flow NeRF (CF-NeRF), which incorporates uncertainty quantification into NeRF-based approaches by learning a distribution over all radiance fields consistent with the scene, which is then used to quantify the uncertainty of the reconstruction.
Book ChapterDOI

Neural Mesh-Based Graphics

TL;DR: Proposes view-dependent rasterization of denser mesh-based point descriptors to speed up view synthesis, together with a foreground/background scene rendering split and an improved loss.
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
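The adaptive moment estimates described above can be sketched as a single parameter update. This is a simplified scalar version under the paper's default hyperparameters; real implementations operate elementwise on tensors.

```python
def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient (m)
    and its square (v), with bias correction compensating for the
    zero-initialized moments (t is the 1-based step count)."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad * grad
    m_hat = m / (1 - b1 ** t)   # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)   # bias-corrected second moment
    theta = theta - lr * m_hat / (v_hat ** 0.5 + eps)
    return theta, m, v
```

Because the update is scaled by the square root of the second-moment estimate, the effective step size is roughly bounded by `lr` regardless of the gradient's magnitude.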
Journal ArticleDOI

Image quality assessment: from error visibility to structural similarity

TL;DR: Proposes the structural similarity (SSIM) index for image quality assessment, which measures the degradation of structural information and is validated against subjective ratings and objective methods on a database of images compressed with JPEG and JPEG2000.
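The structural comparison behind SSIM can be sketched for a single pair of aligned patches. This is the per-patch formula only; the full index averages a Gaussian-windowed version of it over the whole image.

```python
def ssim_patch(x, y, c1=6.5025, c2=58.5225):
    """SSIM between two aligned patches (flat lists of pixel values),
    combining luminance, contrast, and structure terms via means,
    variances, and covariance. c1 and c2 are the standard stabilizers
    for 8-bit images: (0.01 * 255)**2 and (0.03 * 255)**2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical patches score exactly 1; a constant brightness shift lowers the luminance term, so the score drops below 1 even though pixel structure is unchanged.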
Proceedings ArticleDOI

The Unreasonable Effectiveness of Deep Features as a Perceptual Metric

TL;DR: Introduces a new dataset of human perceptual similarity judgments, systematically evaluates deep features across different architectures and tasks against classic metrics, and finds that deep features outperform all previous metrics by large margins on this dataset.
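The core computation of a deep-feature perceptual distance can be sketched as follows; this omits the learned per-channel weights that the full LPIPS metric additionally applies, and the feature extractor itself is assumed to be supplied by the caller.

```python
import numpy as np

def deep_feature_distance(feats_x, feats_y):
    """Distance between two images given their per-layer feature maps
    (each of shape [channels, height, width]): unit-normalize along the
    channel axis, average squared differences over spatial positions,
    then average across layers."""
    dists = []
    for fx, fy in zip(feats_x, feats_y):
        fx = fx / (np.linalg.norm(fx, axis=0, keepdims=True) + 1e-10)
        fy = fy / (np.linalg.norm(fy, axis=0, keepdims=True) + 1e-10)
        dists.append(((fx - fy) ** 2).sum(axis=0).mean())
    return float(np.mean(dists))
```

Channel-wise normalization makes the distance depend on the direction of the activation vectors rather than their magnitude, which is what lets features from differently scaled layers be compared on an equal footing.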
Proceedings ArticleDOI

DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation

TL;DR: DeepSDF represents a shape's surface by a continuous volumetric field: the magnitude of the field at a point gives the distance to the surface boundary, and the sign indicates whether the point is inside (-) or outside (+) the shape.
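The sign convention described above can be made concrete with an analytic example; DeepSDF's network learns to reproduce such a field for arbitrary shapes, but for a sphere it has a closed form.

```python
def sphere_sdf(p, center=(0.0, 0.0, 0.0), radius=1.0):
    """Signed distance from point p to a sphere: negative inside,
    positive outside, zero exactly on the surface."""
    d = sum((pi - ci) ** 2 for pi, ci in zip(p, center)) ** 0.5
    return d - radius
```

The zero level set of this field is the surface itself, which is why a mesh can be recovered from a learned SDF by standard isosurface extraction.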
Proceedings ArticleDOI

Revisiting Unreasonable Effectiveness of Data in Deep Learning Era

TL;DR: Investigates how performance on current vision tasks changes when a 300-million-image dataset (JFT-300M) is used for representation learning, finding that performance increases logarithmically with the volume of training data.