SciSpace (formerly Typeset)

Jiajun Wu

Researcher at Stanford University

Publications: 216
Citations: 13,655

Jiajun Wu is an academic researcher at Stanford University. He has contributed to research in topics including Computer science and Object (computer science). The author has an h-index of 48 and has co-authored 169 publications receiving 9,618 citations. Previous affiliations of Jiajun Wu include the Massachusetts Institute of Technology and Princeton University.

Papers
Proceedings Article

Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling

TL;DR: This work presents a novel framework, the 3D Generative Adversarial Network (3D-GAN), which generates 3D objects from a probabilistic space by leveraging recent advances in volumetric convolutional networks and generative adversarial nets, and yields a powerful 3D shape descriptor with wide applications in 3D object recognition.
Posted Content

Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling

TL;DR: This paper proposes the 3D Generative Adversarial Network (3D-GAN), which generates 3D objects from a probabilistic space by leveraging recent advances in volumetric convolutional networks and generative adversarial nets.
Journal ArticleDOI

Video Enhancement with Task-Oriented Flow

TL;DR: Task-Oriented Flow (TOFlow) is a motion representation for low-level video processing, learned in a self-supervised, task-specific manner.
Book ChapterDOI

Ambient Sound Provides Supervision for Visual Learning

TL;DR: This work trains a convolutional neural network to predict a statistical summary of the sound associated with a video frame, and shows that the resulting visual representation is comparable to those learned by other state-of-the-art unsupervised learning methods.
Posted Content

The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences From Natural Supervision

TL;DR: The Neuro-Symbolic Concept Learner (NS-CL), a model that learns visual concepts, words, and semantic parsing of sentences without explicit supervision on any of them; instead, the model learns by simply looking at images and reading paired questions and answers.