
Matthew Tancik

Researcher at University of California, Berkeley

Publications - 36
Citations - 6578

Matthew Tancik is an academic researcher at the University of California, Berkeley. His research focuses on view synthesis and rendering (computer graphics). He has an h-index of 16 and has co-authored 36 publications receiving 1491 citations. His previous affiliations include the Massachusetts Institute of Technology.

Papers
Posted Content

NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis

TL;DR: This work describes how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrates results that outperform prior work on neural rendering and view synthesis.
Book Chapter

NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis

TL;DR: A fully-connected (non-convolutional) deep network synthesizes novel views of complex scenes by optimizing an underlying continuous volumetric scene function from a sparse set of input views.
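A minimal sketch of the idea, assuming PyTorch and illustrative layer sizes (not the authors' released architecture): a fully-connected network maps a 5D input (3D position plus 2D viewing direction) to an emitted color and a volume density, which a renderer would then composite along camera rays.

```python
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    """Minimal NeRF-style MLP: 5D input (x, y, z, theta, phi) -> (RGB, density).

    Layer widths here are illustrative, not the paper's exact architecture.
    """
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(5, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)  # volume density sigma
        self.color_head = nn.Linear(hidden, 3)    # emitted RGB radiance

    def forward(self, x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        h = self.trunk(x)
        sigma = torch.relu(self.density_head(h))  # density must be non-negative
        rgb = torch.sigmoid(self.color_head(h))   # colors constrained to [0, 1]
        return rgb, sigma

# Query the field at a batch of 5D coordinates sampled along camera rays.
model = TinyNeRF()
coords = torch.rand(1024, 5)   # (position, viewing direction) samples
rgb, sigma = model(coords)
```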
Posted Content

Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains

TL;DR: This work suggests an approach for selecting problem-specific Fourier features that greatly improves the performance of MLPs on low-dimensional regression tasks relevant to the computer vision and graphics communities.
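A hedged sketch of the core mapping, assuming PyTorch; the size of the frequency matrix B and the scale hyperparameter below are illustrative choices, not values from the paper. Low-dimensional inputs are lifted with sines and cosines of random projections before being fed to an MLP.

```python
import torch

def fourier_features(x: torch.Tensor, B: torch.Tensor) -> torch.Tensor:
    """Map inputs x of shape (N, d) to [cos(2*pi*xB^T), sin(2*pi*xB^T)].

    B holds random frequencies, e.g. rows drawn from N(0, scale^2), shape (m, d).
    """
    proj = 2.0 * torch.pi * x @ B.T                                # (N, m)
    return torch.cat([torch.cos(proj), torch.sin(proj)], dim=-1)  # (N, 2m)

# Example: lift 2D coordinates to 512 Fourier features before an MLP.
scale = 10.0                         # assumed bandwidth hyperparameter
B = torch.randn(256, 2) * scale      # random frequency matrix
coords = torch.rand(1024, 2)         # pixel coordinates in [0, 1]^2
feats = fourier_features(coords, B)  # (1024, 512)
```

Intuitively, the scale of B controls the bandwidth of the mapping: larger scales let the downstream MLP fit higher-frequency detail.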
Posted Content

pixelNeRF: Neural Radiance Fields from One or Few Images

TL;DR: pixelNeRF predicts a continuous neural scene representation conditioned on one or few input images; it can be trained across multiple scenes to learn a scene prior, enabling novel view synthesis in a feed-forward manner from a sparse set of views.
Proceedings Article

pixelNeRF: Neural Radiance Fields from One or Few Images

TL;DR: pixelNeRF is a learning framework that predicts a continuous neural scene representation conditioned on one or few input images, enabling novel view synthesis in a feed-forward manner from a sparse set of views.
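A rough sketch of the conditioning idea, assuming PyTorch; the one-layer encoder, feature sizes, and the precomputed pixel projections `pix` are placeholders, not the paper's actual pipeline. Each query point is paired with an image feature sampled at its projection into the input view, so one network can generalize across scenes rather than being optimized per scene.

```python
import torch
import torch.nn as nn

class ConditionedField(nn.Module):
    """Sketch of an image-conditioned radiance field: every query point is
    concatenated with a feature vector sampled from an input-image encoding.
    """
    def __init__(self, feat_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Conv2d(3, feat_dim, kernel_size=3, padding=1)  # stand-in CNN
        self.mlp = nn.Sequential(
            nn.Linear(5 + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # RGB + density per query point
        )

    def forward(self, image: torch.Tensor, coords: torch.Tensor,
                pix: torch.Tensor) -> torch.Tensor:
        feat_map = self.encoder(image)               # (1, F, H, W)
        # Bilinearly sample the feature at each point's image projection.
        grid = pix.view(1, -1, 1, 2) * 2 - 1         # rescale to [-1, 1] for grid_sample
        feats = nn.functional.grid_sample(
            feat_map, grid, align_corners=True)      # (1, F, N, 1)
        feats = feats.squeeze(0).squeeze(-1).T       # (N, F)
        return self.mlp(torch.cat([coords, feats], dim=-1))  # (N, 4)

model = ConditionedField()
image = torch.rand(1, 3, 64, 64)  # single input view
coords = torch.rand(1024, 5)      # 5D query points sampled along rays
pix = torch.rand(1024, 2)         # assumed projections into the image, in [0, 1]
out = model(image, coords, pix)   # per-point RGB + density
```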