Rob Fergus
Researcher at New York University
Publications - 175
Citations - 103,027
Rob Fergus is an academic researcher from New York University. The author has contributed to research topics including Object (computer science) and Reinforcement learning. The author has an h-index of 82 and has co-authored 165 publications receiving 85,690 citations. Previous affiliations of Rob Fergus include the California Institute of Technology and the University of Oxford.
Papers
Proceedings ArticleDOI
Image and depth from a conventional camera with a coded aperture
TL;DR: A simple modification to a conventional camera is proposed: a patterned occluder is inserted within the aperture of the camera lens, creating a coded aperture. A criterion for depth discriminability is introduced and used to design the preferred aperture pattern.
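A minimal sketch of the depth-from-defocus idea behind this paper, assuming a known binary aperture pattern and a small set of candidate blur scales. The toy Wiener deconvolution and the hyper-Laplacian gradient score below are illustrative stand-ins for the paper's actual sparse-prior deconvolution and depth-discriminability criterion; all names and shapes are placeholders.

```python
import numpy as np
from scipy.signal import fftconvolve


def blur_kernel(pattern: np.ndarray, scale: int) -> np.ndarray:
    """Scale the aperture pattern to the blur size induced by a candidate depth."""
    k = np.kron(pattern, np.ones((scale, scale)))
    return k / k.sum()


def wiener_deconv(observed: np.ndarray, kernel: np.ndarray, eps: float = 1e-3) -> np.ndarray:
    """Crude frequency-domain deconvolution with the given kernel."""
    K = np.fft.fft2(kernel, s=observed.shape)
    Y = np.fft.fft2(observed)
    X = np.conj(K) * Y / (np.abs(K) ** 2 + eps)
    return np.real(np.fft.ifft2(X))


def gradient_sparsity(img: np.ndarray, alpha: float = 0.8) -> float:
    """Heavy-tailed gradient score: sharp, ringing-free images score lowest."""
    gx = np.abs(np.diff(img, axis=1)) ** alpha
    gy = np.abs(np.diff(img, axis=0)) ** alpha
    return float(gx.sum() + gy.sum())


pattern = np.array([[1., 0., 1.],
                    [0., 1., 0.],
                    [1., 0., 1.]])                 # toy coded aperture
sharp = np.zeros((64, 64))
sharp[20:44, 20:44] = 1.0                          # piecewise-constant toy scene
observed = fftconvolve(sharp, blur_kernel(pattern, 3), mode="same")

# Deconvolve with the kernel for each candidate depth; the true depth's
# kernel should yield the most "natural" (sparse-gradient) image.
scores = {s: gradient_sparsity(wiener_deconv(observed, blur_kernel(pattern, s)))
          for s in (1, 2, 3, 4)}
print("estimated blur scale:", min(scores, key=scores.get))
```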
Proceedings Article
Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation
TL;DR: In this paper, the authors exploit the redundancy present within the convolutional filters to derive approximations that significantly reduce the required computation, while keeping the accuracy within 1% of the original model.
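A minimal sketch of the low-rank idea, assuming that flattening a convolutional layer's filter bank into a matrix exposes redundancy a truncated SVD can exploit. The shapes and the rank are illustrative; the paper explores several such tensor decompositions.

```python
import numpy as np

out_ch, in_ch, k = 64, 32, 3
W = np.random.randn(out_ch, in_ch * k * k)         # filter bank, flattened

U, S, Vt = np.linalg.svd(W, full_matrices=False)
rank = 16                                          # keep the top-16 components
A = U[:, :rank] * S[:rank]                         # (out_ch, rank)
B = Vt[:rank]                                      # (rank, in_ch*k*k)

x = np.random.randn(in_ch * k * k)                 # one flattened input patch
full = W @ x                                       # original cost: out_ch * in_ch*k*k
approx = A @ (B @ x)                               # factored cost: rank * (out_ch + in_ch*k*k)
print("relative error:", np.linalg.norm(full - approx) / np.linalg.norm(full))
```

With rank 16 the factored layer does roughly a quarter of the multiply-adds of the original; the accuracy of the approximation depends on how fast the singular values of the trained filters decay.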
Posted Content
Intriguing properties of neural networks
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, Rob Fergus
TL;DR: This article showed that the input-output mappings learned by deep neural networks are discontinuous to a significant extent, which suggests that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks.
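A minimal PyTorch sketch of the adversarial-example phenomenon this paper identified. The paper finds perturbations with box-constrained L-BFGS; the single gradient-sign step below is a later, simplified variant used here purely for illustration, and the model and tensors are placeholders.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)   # an "image"
y = torch.tensor([3])                              # its true label

loss = loss_fn(model(x), y)
loss.backward()                                    # gradient of the loss w.r.t. the input

eps = 0.05                                         # perturbation budget
x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()

# x_adv is visually near-indistinguishable from x, yet often flips the prediction
print(model(x).argmax(1).item(), model(x_adv).argmax(1).item())
```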
Proceedings ArticleDOI
Adaptive deconvolutional networks for mid and high level feature learning
TL;DR: A hierarchical model that learns image decompositions via alternating layers of convolutional sparse coding and max pooling, relying on a novel inference scheme that ensures each layer reconstructs the input, rather than just the output of the layer directly beneath, as is common with existing hierarchical approaches.
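A minimal ISTA sparse-coding sketch of the inference step such a model alternates with pooling, assuming a fixed dictionary D and a single patch x. The paper's model is convolutional and multi-layer, with inference that reconstructs the input through all layers; the patch-based soft-threshold update below is only the core sparse-coding ingredient in simplified form.

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))                 # dictionary: 64-dim patches, 128 atoms
D /= np.linalg.norm(D, axis=0)                     # unit-norm atoms
x = rng.standard_normal(64)                        # input patch

lam = 0.1                                          # sparsity weight
L = np.linalg.norm(D, 2) ** 2                      # Lipschitz constant of the gradient
z = np.zeros(128)                                  # sparse code

for _ in range(200):                               # ISTA: gradient step + soft threshold
    z = z + (D.T @ (x - D @ z)) / L
    z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

print(int((z != 0).sum()), "active atoms; residual:",
      float(np.linalg.norm(x - D @ z)))
```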
Posted Content
End-To-End Memory Networks
TL;DR: A neural network with a recurrent attention model over a possibly large external memory that is trained end-to-end, and hence requires significantly less supervision during training, making it more generally applicable in realistic settings.
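A minimal single-hop sketch of the memory read in such a network, assuming sentences are already embedded as vectors. The full model learns the embeddings and stacks several such hops (the "recurrent attention"); all shapes here are illustrative.

```python
import numpy as np


def softmax(v: np.ndarray) -> np.ndarray:
    e = np.exp(v - v.max())
    return e / e.sum()


d, n_mem = 32, 10
rng = np.random.default_rng(0)
memory_in = rng.standard_normal((n_mem, d))        # input embeddings m_i of the sentences
memory_out = rng.standard_normal((n_mem, d))       # output embeddings c_i
query = rng.standard_normal(d)                     # embedded question u

p = softmax(memory_in @ query)                     # soft attention over the memories
o = p @ memory_out                                 # weighted read from memory
answer_input = o + query                           # fed to the answer layer / next hop
print(answer_input.shape)
```

Because the attention is soft (a differentiable softmax rather than a hard lookup), the whole pipeline trains end-to-end from question-answer pairs alone, which is the source of the reduced supervision the summary mentions.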