Kevin Chen
Researcher at Stanford University
Publications - 18
Citations - 2633
Kevin Chen is an academic researcher at Stanford University. He has contributed to research on recurrent neural networks and natural language. The author has an h-index of 10 and has co-authored 18 publications receiving 1,852 citations.
Papers
Book ChapterDOI
3D-R2N2: A Unified Approach for Single and Multi-view 3D Object Reconstruction
TL;DR: 3D-R2N2 proposes a 3D Recurrent Reconstruction Neural Network that learns a mapping from images of objects to their underlying 3D shapes from a large collection of synthetic data.
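The core idea summarized above, fusing one or more input views into a single evolving state via a recurrent update, can be illustrated with a toy sketch. This is not the paper's architecture (the actual network uses convolutional encoders and a 3D convolutional LSTM over a voxel grid); `encode_view` and `recurrent_fuse` are hypothetical stand-ins showing how a variable number of views can update one shared state:

```python
import numpy as np

def encode_view(image, w):
    # Toy "encoder": project a flattened image to a feature vector.
    # (A stand-in for the 2D CNN encoder in the real model.)
    return np.tanh(image.reshape(-1) @ w)

def recurrent_fuse(features_per_view, h_dim):
    # GRU-like update that fuses an arbitrary number of views into one
    # hidden state -- the recurrent mechanism that lets a single model
    # handle both single-view and multi-view reconstruction.
    h = np.zeros(h_dim)
    for f in features_per_view:
        z = 1.0 / (1.0 + np.exp(-f))       # toy update gate
        h = (1.0 - z) * h + z * np.tanh(f)  # blend old state with new view
    return h

rng = np.random.default_rng(0)
w = rng.standard_normal((16, 8))
views = [rng.random((4, 4)) for _ in range(3)]
state = recurrent_fuse([encode_view(v, w) for v in views], h_dim=8)
```

In the real model, the fused state would then be decoded into a 3D occupancy grid; here it is just a vector.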
Posted Content
3D-R2N2: A Unified Approach for Single and Multi-view 3D Object Reconstruction
TL;DR: The 3D-R2N2 reconstruction framework outperforms the state-of-the-art methods for single view reconstruction, and enables the 3D reconstruction of objects in situations when traditional SFM/SLAM methods fail (because of lack of texture and/or wide baseline).
Journal ArticleDOI
The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues
TL;DR: This work presents the first wearable VR display supporting high image resolution as well as focus cues, and demonstrates significant improvements in resolution and retinal blur quality over related near-eye displays.
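"Factored" light field displays of this kind typically compute the patterns shown on two stacked attenuating panels by a nonnegative factorization of the target light field. The following is a minimal sketch of that idea, assuming the light field has been flattened into a nonnegative matrix and using standard multiplicative-update NMF; the paper's actual factorization and display model are more involved:

```python
import numpy as np

def factor_light_field(L, rank=1, iters=200, seed=0):
    # Approximate a nonnegative light field matrix L as A @ B, where
    # A and B are nonnegative -- a stand-in for the transmittance
    # patterns of two stacked display layers (an assumption, not the
    # paper's exact formulation). Uses multiplicative-update NMF.
    rng = np.random.default_rng(seed)
    m, n = L.shape
    A = rng.random((m, rank)) + 0.1
    B = rng.random((rank, n)) + 0.1
    eps = 1e-9  # avoid division by zero
    for _ in range(iters):
        A *= (L @ B.T) / (A @ B @ B.T + eps)
        B *= (A.T @ L) / (A.T @ A @ B + eps)
    return A, B

# A rank-1 nonnegative "light field" is recovered almost exactly.
L = np.outer([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
A, B = factor_light_field(L, rank=1)
```

The low rank is what makes the display practical: two physical layers can only realize a low-rank approximation of the full light field.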
Proceedings ArticleDOI
Lattice Long Short-Term Memory for Human Action Recognition
TL;DR: Lattice-LSTM (L2STM) extends LSTM by learning independent hidden state transitions of memory cells for individual spatial locations. This enhances the ability to model dynamics across time and addresses the non-stationarity of long-term motion dynamics without significantly increasing model complexity.
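The distinguishing point in the summary above is that each spatial location gets its own transition parameters rather than sharing one set across the feature map. A toy step function, assuming simplified gates (not the paper's full LSTM equations), makes that structure concrete:

```python
import numpy as np

def l2stm_step(x, h, c, W_per_loc):
    # x, h, c: (num_locations, dim) -- per-location inputs and states.
    # W_per_loc: (num_locations, dim, dim) -- each spatial location has
    # its OWN transition weights, the key idea behind the lattice variant
    # (gates here are simplified toy versions, an assumption).
    new_h, new_c = [], []
    for i in range(x.shape[0]):
        g = np.tanh(x[i] + h[i] @ W_per_loc[i])   # candidate update
        f = 1.0 / (1.0 + np.exp(-x[i]))           # toy forget gate
        c_i = f * c[i] + (1.0 - f) * g            # per-location cell update
        new_c.append(c_i)
        new_h.append(np.tanh(c_i))
    return np.stack(new_h), np.stack(new_c)

rng = np.random.default_rng(1)
N, D = 4, 6  # 4 spatial locations, 6-dim states (illustrative sizes)
x = rng.standard_normal((N, D))
h0 = np.zeros((N, D)); c0 = np.zeros((N, D))
W = rng.standard_normal((N, D, D)) * 0.1
h1, c1 = l2stm_step(x, h0, c0, W)
```

A standard convolutional LSTM would use one shared `W` for every location; the per-location weights are what let the model adapt to motion that differs across the frame.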
Proceedings ArticleDOI
DeLay: Robust Spatial Layout Estimation for Cluttered Indoor Scenes
TL;DR: This paper presents a method that uses a fully convolutional neural network (FCNN) in conjunction with a novel optimization framework to generate layout estimates; the method is robust in the presence of clutter and handles a wide range of highly challenging scenes.
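In this kind of pipeline, the FCNN produces dense per-pixel scores over layout surface classes, which a subsequent optimization turns into a clean room layout. The sketch below covers only the first, per-pixel stage, assuming hypothetical class names and a pre-computed logit map (the network itself and the paper's optimization are omitted):

```python
import numpy as np

# Hypothetical layout surface classes (an assumption for illustration).
LAYOUT_CLASSES = ["floor", "ceiling", "left wall", "front wall", "right wall"]

def layout_label_map(logits):
    # logits: (H, W, C) per-pixel scores, as an FCNN head might output.
    # Softmax over the class axis, then argmax, yields a dense layout
    # segmentation; a real system would refine this with an optimization
    # step enforcing a consistent box layout.
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))  # stable softmax
    probs = e / e.sum(axis=-1, keepdims=True)
    return probs.argmax(axis=-1), probs

rng = np.random.default_rng(2)
logits = rng.standard_normal((6, 8, len(LAYOUT_CLASSES)))
labels, probs = layout_label_map(logits)
```

The robustness to clutter claimed in the summary comes from training the dense predictor, not from this decoding step, which is shown only to clarify the output representation.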