Open Access Proceedings Article
Dynamic View Synthesis From Dynamic Monocular Video
Chen Gao, Ayush Saraf, Johannes Kopf, Jia-Bin Huang
pp. 5712-5721
TLDR
In this article, an unsupervised method is presented for generating novel views at arbitrary viewpoints and any input time step given a monocular video of a dynamic scene. However, learning this implicit function from a single video is highly ill-posed (with infinitely many solutions that match the input video), so the authors introduce regularization losses to encourage a more physically plausible solution.
Abstract:
We present an algorithm for generating novel views at arbitrary viewpoints and any input time step given a monocular video of a dynamic scene. Our work builds upon recent advances in neural implicit representation and uses continuous and differentiable functions for modeling the time-varying structure and the appearance of the scene. We jointly train a time-invariant static NeRF and a time-varying dynamic NeRF, and learn how to blend the results in an unsupervised manner. However, learning this implicit function from a single video is highly ill-posed (with infinitely many solutions that match the input video). To resolve the ambiguity, we introduce regularization losses to encourage a more physically plausible solution. We show extensive quantitative and qualitative results of dynamic view synthesis from casually captured videos.
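The abstract outlines a two-branch design: a time-invariant static NeRF and a time-varying dynamic NeRF whose outputs are blended with an unsupervised, learned weight. Below is a minimal sketch of that blending idea, not the authors' released code; the network sizes, the omission of positional encoding, and the names (`TinyMLP`, `BlendedNeRF`) are assumptions made purely for illustration.

```python
# Hypothetical sketch (not the paper's released code): blend a time-invariant
# static NeRF with a time-varying dynamic NeRF via a learned blending weight.
import torch
import torch.nn as nn

class TinyMLP(nn.Module):
    def __init__(self, in_dim, out_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

class BlendedNeRF(nn.Module):
    def __init__(self):
        super().__init__()
        self.static = TinyMLP(3, 4)   # position -> (RGB, density); no time input
        self.dynamic = TinyMLP(4, 5)  # position + time -> (RGB, density, blend logit)

    def forward(self, xyz, t):
        s = self.static(xyz)                        # (N, 4)
        d = self.dynamic(torch.cat([xyz, t], -1))   # (N, 5)
        rgb_s, sigma_s = torch.sigmoid(s[:, :3]), torch.relu(s[:, 3:4])
        rgb_d, sigma_d = torch.sigmoid(d[:, :3]), torch.relu(d[:, 3:4])
        w = torch.sigmoid(d[:, 4:5])                # blending weight in [0, 1]
        rgb = w * rgb_d + (1 - w) * rgb_s
        sigma = w * sigma_d + (1 - w) * sigma_s
        return rgb, sigma, w

model = BlendedNeRF()
xyz = torch.rand(1024, 3)          # sampled 3D points along camera rays
t = torch.full((1024, 1), 0.25)    # normalized time of the input frame
rgb, sigma, w = model(xyz, t)
print(rgb.shape, sigma.shape, w.shape)
```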
Citations
Posted Content
Space-time Neural Irradiance Fields for Free-Viewpoint Video
TL;DR: A method that learns a spatiotemporal neural irradiance field for dynamic scenes from a single video, using scene depth estimated by video depth estimation methods to aggregate content from individual frames into a single global representation.
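The TL;DR above mentions supervising the spatiotemporal field with depth estimated from the video. Here is a hedged sketch of one way such depth supervision could look, assuming standard NeRF volume-rendering weights and a per-ray monocular depth prior; the function names and shapes are illustrative, not taken from the paper.

```python
# Hypothetical sketch: penalize the gap between the expected ray-termination
# depth from volume rendering and a per-frame monocular depth estimate.
import torch

def expected_depth(sigmas, z_vals):
    """Expected termination depth along each ray from sampled densities."""
    deltas = z_vals[:, 1:] - z_vals[:, :-1]                        # (N, S-1)
    alphas = 1.0 - torch.exp(-sigmas[:, :-1] * deltas)             # (N, S-1)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alphas[:, :1]), 1.0 - alphas + 1e-10], -1), -1
    )[:, :-1]
    weights = alphas * trans                                        # (N, S-1)
    return (weights * z_vals[:, :-1]).sum(-1)                       # (N,)

def depth_loss(sigmas, z_vals, depth_prior):
    return torch.mean((expected_depth(sigmas, z_vals) - depth_prior) ** 2)

sigmas = torch.rand(2048, 64)                 # densities at samples along rays
z_vals = torch.sort(torch.rand(2048, 64))[0]  # sample depths along each ray
depth_prior = torch.rand(2048)                # monocular depth estimate per ray
print(depth_loss(sigmas, z_vals, depth_prior).item())
```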
Posted Content
Advances in Neural Rendering
Ayush Tewari, Justus Thies, Ben Mildenhall, Pratul P. Srinivasan, Edgar Tretschk, Yifan Wang, Christoph Lassner, Vincent Sitzmann, Ricardo Martin-Brualla, Stephen Lombardi, Tomas Simon, Christian Theobalt, Matthias Niessner, Jonathan T. Barron, Gordon Wetzstein, Michael Zollhoefer, Vladislav Golyanik
TL;DR: A state-of-the-art report on advances in neural rendering, focusing on methods that combine classical rendering principles with learned 3D scene representations, often referred to as neural scene representations.
Posted Content
NeuralDiff: Segmenting 3D objects that move in egocentric videos
TL;DR: The authors decompose the observed 3D scene into a static background and a dynamic foreground containing the objects that move in the video sequence, and further separate the dynamic component into objects and the actor that observes and moves them.
Posted Content
360MonoDepth: High-Resolution 360° Monocular Depth Estimation
TL;DR: A flexible framework for monocular depth estimation from high-resolution 360° images using tangent images, which projects the input image onto a set of tangent planes that produce perspective views.
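The 360MonoDepth TL;DR describes projecting an equirectangular 360° image onto tangent planes to obtain perspective-like views. A small illustrative sketch of that projection step via the inverse gnomonic mapping follows; the patch size, field of view, and nearest-neighbor sampling are assumptions, not the paper's implementation.

```python
# Hypothetical sketch: sample one tangent-plane patch out of an equirectangular
# panorama using the inverse gnomonic projection.
import numpy as np

def tangent_view(equirect, lat0, lon0, size=256, fov=np.pi / 3):
    """Sample a size x size tangent-plane patch centered at (lat0, lon0)."""
    h, w, _ = equirect.shape
    half = np.tan(fov / 2)
    x, y = np.meshgrid(np.linspace(-half, half, size),
                       np.linspace(-half, half, size))
    rho = np.sqrt(x ** 2 + y ** 2)
    c = np.arctan(rho)
    rho = np.where(rho == 0, 1e-9, rho)  # avoid division by zero at the center
    lat = np.arcsin(np.cos(c) * np.sin(lat0) + y * np.sin(c) * np.cos(lat0) / rho)
    lon = lon0 + np.arctan2(
        x * np.sin(c),
        rho * np.cos(lat0) * np.cos(c) - y * np.sin(lat0) * np.sin(c))
    # Convert spherical coordinates back to equirectangular pixel indices.
    u = ((lon + np.pi) / (2 * np.pi) * (w - 1)).astype(int) % w
    v = ((np.pi / 2 - lat) / np.pi * (h - 1)).astype(int).clip(0, h - 1)
    return equirect[v, u]

pano = np.random.rand(512, 1024, 3)          # stand-in equirectangular image
patch = tangent_view(pano, lat0=0.0, lon0=0.0)
print(patch.shape)                           # (256, 256, 3)
```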
Posted Content
T\"oRF: Time-of-Flight Radiance Fields for Dynamic Scene View Synthesis.
Benjamin Attal, Eliot Laidlaw, Aaron Gokaslan, Changil Kim, Christian Richardt, James Tompkin, Matthew O'Toole
TL;DR: In this paper, a neural representation based on an image formation model for continuous-wave ToF cameras is proposed to improve the robustness of dynamic scene reconstruction to erroneous calibration and large motions.
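The TöRF TL;DR refers to an image formation model for continuous-wave ToF cameras. The toy sketch below shows the standard four-bucket phase-to-depth relation such a model rests on; the modulation frequency and the simplified noiseless forward model are assumptions for illustration only.

```python
# Hypothetical sketch: continuous-wave ToF depth from four phase-shifted
# correlation samples (0/90/180/270 degrees), with a matching toy forward model.
import numpy as np

C = 3e8           # speed of light (m/s)
FREQ = 30e6       # assumed modulation frequency (Hz); unambiguous range = C/(2*FREQ)

def tof_depth(q0, q90, q180, q270):
    """Depth from four correlation samples via the recovered phase shift."""
    phase = np.arctan2(q90 - q270, q0 - q180) % (2 * np.pi)
    return C * phase / (4 * np.pi * FREQ)

def simulate_buckets(depth):
    """Forward model: phase induced by a given depth, sampled at four offsets."""
    phase = 4 * np.pi * FREQ * depth / C
    offsets = np.array([0.0, 0.5 * np.pi, np.pi, 1.5 * np.pi])
    return [np.cos(phase - o) for o in offsets]

true_depth = np.array([[1.0, 2.5], [4.0, 0.5]])     # toy 2x2 depth map in meters
q0, q90, q180, q270 = simulate_buckets(true_depth)
print(np.round(tof_depth(q0, q90, q180, q270), 3))  # recovers true_depth
```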