
Jiajun Wu

Researcher at Stanford University

Publications: 216
Citations: 13,655

Jiajun Wu is an academic researcher at Stanford University. He has contributed to research in the topics of Computer science and Object (computer science), has an h-index of 48, and has co-authored 169 publications receiving 9,618 citations. His previous affiliations include the Massachusetts Institute of Technology and Princeton University.

Papers
Proceedings ArticleDOI

Neural Scene De-rendering

TL;DR: This work proposes a new approach to learning an interpretable, distributed representation of scenes, using a deterministic rendering function as the decoder and an object-proposal-based encoder trained by minimizing both the supervised prediction and the unsupervised reconstruction errors.
Posted Content

Visual Object Networks: Image Generation with Disentangled 3D Representation

TL;DR: In this article, a new generative model, Visual Object Networks (VON), is proposed to synthesize natural images of objects with a disentangled 3D representation by unraveling the image formation process into three conditionally independent factors (shape, viewpoint, and texture).
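The factorized image formation described above can be illustrated with a toy sketch. This is not the paper's implementation (which uses learned 3D shape, differentiable projection, and texture networks); every function here is a hypothetical stand-in showing only the conditional-independence structure of the three factors.

```python
import random

def sample_shape(rng):
    # Factor 1: draw a coarse 3D shape code (toy stand-in for a learned shape prior)
    return rng.choice(["car", "chair"])

def sample_viewpoint(rng):
    # Factor 2: draw a camera azimuth, independently of the shape
    return rng.uniform(0.0, 360.0)

def add_texture(shape, viewpoint, rng):
    # Factor 3: "texture" the projected shape into a final image
    # (a string here, standing in for a rendered 2D image)
    texture = rng.choice(["red", "blue"])
    return f"{texture} {shape} at {viewpoint:.0f} deg"

def generate(seed=0):
    # Compose the three conditionally independent factors into one sample.
    rng = random.Random(seed)
    shape = sample_shape(rng)
    viewpoint = sample_viewpoint(rng)
    return add_texture(shape, viewpoint, rng)
```

Because each factor is sampled separately, any one of them can be varied while the others are held fixed, which is the sense in which the 3D representation is disentangled.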
Proceedings ArticleDOI

Look, Listen, and Act: Towards Audio-Visual Embodied Navigation

TL;DR: This paper presents an approach to audio-visual embodied navigation that takes advantage of both visual and audio evidence. It rests on three key ideas: a visual perception mapper module that constructs a spatial memory of the environment, a sound perception module that infers the relative location of the sound source from the agent, and a dynamic path planner that plans a sequence of actions toward the goal based on the audio-visual observations and the spatial memory.
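The three-module pipeline above can be sketched as a simple composition. All class and function names here are hypothetical stand-ins, not the paper's actual interfaces; the toy modules at the bottom exist only to make the control flow concrete.

```python
class AudioVisualNavigator:
    """Sketch of the pipeline: mapper -> sound localizer -> planner."""

    def __init__(self, mapper, sound_localizer, planner):
        self.mapper = mapper                  # builds spatial memory from vision
        self.sound_localizer = sound_localizer  # infers goal direction from audio
        self.planner = planner                # plans actions from memory + audio cue
        self.memory = []                      # accumulated spatial memory

    def step(self, visual_obs, audio_obs):
        self.memory.append(self.mapper(visual_obs))   # update spatial memory
        goal_bearing = self.sound_localizer(audio_obs)  # estimate source direction
        return self.planner(self.memory, goal_bearing)  # choose next action

# Toy stand-ins: the mapper records the frame, the localizer reads a bearing,
# and the planner turns toward the bearing or moves forward.
nav = AudioVisualNavigator(
    mapper=lambda v: v,
    sound_localizer=lambda a: a["bearing_deg"],
    planner=lambda mem, goal: "turn_left" if goal < 0 else "move_forward",
)
action = nav.step(visual_obs="frame_0", audio_obs={"bearing_deg": -30})
```

The point of the sketch is the division of labor: vision maintains persistent spatial memory, audio supplies a goal estimate each step, and the planner alone decides the action.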
Posted Content

Learning Shape Priors for Single-View 3D Completion and Reconstruction

TL;DR: The proposed ShapeHD pushes the limit of single-view shape completion and reconstruction by integrating deep generative models with adversarially learned shape priors, penalizing the model only if its output is unrealistic, not if it deviates from the ground truth.
Book ChapterDOI

Learning Shape Priors for Single-View 3D Completion and Reconstruction

TL;DR: In this article, the authors propose ShapeHD, which integrates deep generative models with adversarially learned shape priors, which serve as a regularizer, penalizing the model only if its output is unrealistic, not if it deviates from the ground truth.
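The "penalize only if unrealistic" behavior of the shape prior can be illustrated with a toy loss. This is a hedged sketch, not ShapeHD's actual objective: `realism_score` stands in for the output of an adversarially learned shape discriminator, and the hinge threshold and voxel lists are invented for illustration.

```python
def shape_loss(pred_voxels, target_voxels, realism_score, threshold=0.5, lam=1.0):
    # Supervised reconstruction term: mean squared error to the ground truth.
    recon = sum((p - t) ** 2 for p, t in zip(pred_voxels, target_voxels)) / len(pred_voxels)
    # Adversarially learned prior acting as a regularizer: a hinge that fires
    # only when the realism score falls below the threshold, i.e. only when
    # the predicted shape looks unrealistic -- not merely because it deviates
    # from the ground truth.
    penalty = max(0.0, threshold - realism_score)
    return recon + lam * penalty
```

A prediction judged realistic (score above the threshold) incurs no extra penalty even if it differs from the ground truth, which is how the prior regularizes without forcing a single "correct" completion.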