David Eigen
Researcher at New York University
Publications - 36
Citations - 11763
David Eigen is an academic researcher from New York University. The author has contributed to research in the topics of Deep learning & Task (project management). The author has an h-index of 18 and has co-authored 36 publications receiving 9765 citations. Previous affiliations of David Eigen include Brown University & the Courant Institute of Mathematical Sciences.
Papers
Proceedings Article
OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks
TL;DR: In this article, a multiscale, sliding-window approach is used to predict object boundaries; the resulting bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. OverFeat won the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013.
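The accumulation idea in the TL;DR above can be sketched as a greedy merge: overlapping detections are combined by score-weighted averaging and their confidences summed, rather than all but one being discarded as in non-maximum suppression. This is a simplified stand-in for OverFeat's actual merge rule; the `iou` threshold and the weighting scheme here are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def iou(a, b):
    # intersection-over-union for boxes given as (x1, y1, x2, y2)
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def accumulate_boxes(boxes, scores, thresh=0.5):
    """Merge overlapping detections by score-weighted averaging,
    summing their confidences (vs. NMS, which keeps only the best)."""
    merged = []  # list of [box (np.ndarray), accumulated score]
    for box, score in sorted(zip(boxes, scores), key=lambda t: -t[1]):
        box = np.asarray(box, dtype=float)
        for m in merged:
            if iou(m[0], box) >= thresh:
                w = m[1] + score
                m[0] = (m[0] * m[1] + box * score) / w  # weighted average
                m[1] = w                                # accumulate confidence
                break
        else:
            merged.append([box, score])
    return [(b.tolist(), s) for b, s in merged]
```

Two detections of the same object reinforce each other (their scores add), which is the sense in which accumulation "increases detection confidence" relative to suppression.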
Proceedings Article
Depth Map Prediction from a Single Image using a Multi-Scale Deep Network
TL;DR: In this article, two deep network stacks are employed: one that makes a coarse global prediction based on the entire image, and another that refines this prediction locally. The method achieves state-of-the-art results on both NYU Depth and KITTI.
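The two-stack structure described above can be sketched as a coarse global pass followed by a local refinement conditioned on both the image and the coarse output. `coarse_net` and `fine_net` below are hypothetical callables standing in for the paper's convolutional stacks; the pooling/upsampling factors are toy assumptions.

```python
import numpy as np

def downsample(img, k):
    # average-pool a 2-D array by factor k (assumes dims divisible by k)
    h, w = img.shape
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def upsample(x, k):
    # nearest-neighbour upsampling back to full resolution
    return np.repeat(np.repeat(x, k, axis=0), k, axis=1)

def predict_depth(img, coarse_net, fine_net, k=4):
    # Stack 1: coarse global prediction from the whole (downsampled) image
    coarse = upsample(coarse_net(downsample(img, k)), k)
    # Stack 2: local refinement that sees the image and the coarse map
    return coarse + fine_net(np.stack([img, coarse]))
```

The key design point is that the fine stack receives the coarse prediction as an extra input channel, so it only needs to learn local corrections.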
Proceedings ArticleDOI
Predicting Depth, Surface Normals and Semantic Labels with a Common Multi-scale Convolutional Architecture
David Eigen, Rob Fergus +1 more
TL;DR: This paper addresses three different computer vision tasks with a single basic architecture: depth prediction, surface normal estimation, and semantic labeling. A multiscale convolutional network adapts easily to each task with only small modifications.
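The "single basic architecture" idea can be sketched as a shared trunk run once per image, with a small per-task head producing each output. `trunk` and `heads` are hypothetical callables standing in for the paper's shared multiscale stages and task-specific output layers.

```python
def multi_task_forward(image, trunk, heads):
    """Run a shared trunk once, then apply small per-task heads.

    `trunk` computes shared features; each entry of `heads` maps those
    features to one task's output (depth, normals, or labels).
    """
    shared = trunk(image)
    return {task: head(shared) for task, head in heads.items()}
```

Sharing the trunk amortises most of the computation across tasks; adapting to a new task only means swapping in a small new head, which matches the "only small modifications" claim above.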
Posted Content
OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks
TL;DR: This integrated framework uses Convolutional Networks for classification, localization and detection; it won the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 and obtained very competitive results on the detection and classification tasks.
Posted Content
Depth Map Prediction from a Single Image using a Multi-Scale Deep Network
TL;DR: This paper employs two deep network stacks: one that makes a coarse global prediction based on the entire image, and another that refines this prediction locally, and applies a scale-invariant error to help measure depth relations rather than scale.
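The scale-invariant error mentioned above compares log-depth differences, $D(y, y^*) = \frac{1}{n}\sum_i d_i^2 - \frac{\lambda}{n^2}\big(\sum_i d_i\big)^2$ with $d_i = \log y_i - \log y^*_i$. A minimal numpy sketch follows; the default $\lambda = 0.5$ is an assumed setting, and full invariance to a global scaling of the prediction holds at $\lambda = 1$.

```python
import numpy as np

def scale_invariant_error(pred, target, lam=0.5):
    # d_i = log y_i - log y*_i  (element-wise log-depth differences)
    d = np.log(pred) - np.log(target)
    n = d.size
    # mean squared log difference, minus a penalty on the mean difference;
    # with lam = 1 the error ignores any global scaling of pred
    return (d ** 2).mean() - lam * (d.sum() ** 2) / n ** 2
```

Because a constant multiplicative error in depth becomes a constant additive shift in log space, the subtracted term cancels that shift, so the loss measures depth *relations* rather than absolute scale.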