
Rob Fergus

Researcher at New York University

Publications -  175
Citations -  103027

Rob Fergus is an academic researcher from New York University. The author has contributed to research in topics: Object (computer science) & Reinforcement learning. The author has an h-index of 82, co-authored 165 publications receiving 85690 citations. Previous affiliations of Rob Fergus include California Institute of Technology & University of Oxford.

Papers
Patent

Systems and methods for identifying users in media content based on poselets and neural networks

TL;DR: In this article, a first image including a first set of poselets can be input into a first instance of a neural network to generate a first multi-dimensional vector, which can then be input into a second instance of the neural network to generate a second multi-dimensional vector.
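
The pairing described above (two instances of a network, each producing a multi-dimensional vector that is then compared) resembles a Siamese-style embedding comparison. Below is a minimal sketch under that assumption; the shared-weight network, layer sizes, and cosine-similarity check are illustrative choices, not details taken from the patent.

```python
# Hedged sketch: two image crops (e.g. poselet regions) pass through two
# instances of the same embedding network and their vectors are compared.
# All layer sizes and the similarity criterion are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingNet(nn.Module):
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Produce a multi-dimensional embedding vector for one image crop.
        h = self.features(x).flatten(1)
        return F.normalize(self.fc(h), dim=1)

net = EmbeddingNet()
first_crop = torch.randn(1, 3, 64, 64)   # stand-in for a poselet crop
second_crop = torch.randn(1, 3, 64, 64)  # crop from a reference identity

# Two "instances" sharing weights: run the same network on each crop.
v1, v2 = net(first_crop), net(second_crop)
similarity = F.cosine_similarity(v1, v2)  # high similarity -> likely same user
print(similarity.item())
```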
Posted Content

Maximizing Kepler science return per telemetered pixel: Detailed models of the focal plane in the two-wheel era

TL;DR: In this article, the authors argue that image modeling can greatly improve the precision of Kepler in pointing-degraded two-wheel mode, and demonstrate that the expected drift or jitter in positions in the two-wheel era will help with constraining calibration parameters.
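
As a rough illustration of the pixel-level modeling idea, the sketch below fits a toy PSF model to frames of a star whose position drifts between exposures. The Gaussian PSF, grid size, and noise level are assumptions for illustration, not the paper's actual focal-plane model.

```python
# Hedged sketch: toy "image modeling" of a drifting star, in the spirit of
# fitting a focal-plane model rather than doing simple aperture photometry.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:11, 0:11].astype(float)

def psf_image(flux, x0, y0, sigma=1.2):
    # Simple Gaussian PSF rendered on the pixel grid (an assumed stand-in PSF).
    return flux * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))

# Simulate frames where the pointing drifts slightly between exposures.
true_flux, drift = 1000.0, np.linspace(-0.5, 0.5, 20)
frames = [psf_image(true_flux, 5 + d, 5 - d) + rng.normal(0, 3, xx.shape)
          for d in drift]

def residuals(params, frame):
    flux, x0, y0 = params
    return (psf_image(flux, x0, y0) - frame).ravel()

# Fit flux and centroid per frame; the recovered centroids trace the drift,
# which is the kind of extra information a pixel-level model can exploit.
fits = [least_squares(residuals, x0=[800.0, 5.0, 5.0], args=(f,)).x
        for f in frames]
print(np.array(fits)[:3])
```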
Journal Article

S4: A Spatial-Spectral model for Speckle Suppression

TL;DR: In this paper, a flexible data-driven model for the unocculted (and highly speckled) light in the P1640 spectroscopic coronagraph is introduced.
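
A common way to build such a data-driven speckle model is a low-rank decomposition of reference frames; the sketch below uses an SVD/PCA basis purely as a stand-in. The S4 model is spatial-spectral, whereas this sketch keeps only a spatial dimension, and all array sizes and signal levels are invented for illustration.

```python
# Hedged sketch: a low-rank, data-driven model of speckle structure built from
# reference frames and subtracted from a target frame. PCA/SVD is a stand-in;
# the actual S4 model and the P1640 data format are not reproduced here.
import numpy as np

rng = np.random.default_rng(1)
n_refs, n_pix = 50, 40 * 40

# Stand-in reference library: speckle patterns sharing a few spatial modes.
modes = rng.normal(size=(5, n_pix))
refs = rng.normal(size=(n_refs, 5)) @ modes \
       + 0.01 * rng.normal(size=(n_refs, n_pix))

# Build a rank-k basis for the speckle field from the reference library.
k = 5
mean = refs.mean(axis=0)
_, _, vt = np.linalg.svd(refs - mean, full_matrices=False)
basis = vt[:k]                      # top-k spatial modes (orthonormal rows)

# Target frame = speckles (in the span of the modes) + a faint point source.
target = rng.normal(size=5) @ modes + 0.01 * rng.normal(size=n_pix)
target[800] += 0.5                  # hypothetical planet signal at one pixel

# Project the target onto the speckle basis and subtract the model.
centered = target - mean
speckle_model = mean + (centered @ basis.T) @ basis
residual = target - speckle_model   # most of the planet signal survives
print(residual[800], np.abs(residual).mean())
```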
Posted Content

Improving Image Classification with Location Context

TL;DR: In this article, the authors tackle the problem of performing image classification with location context, in which they are given the GPS coordinates for images in both the train and test phases. They explore different ways of encoding and extracting features from GPS coordinates and show how to naturally incorporate these features into a CNN, the current state-of-the-art for most image classification and recognition problems.
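
Below is a minimal sketch of the fusion idea, assuming the simplest possible encoding: raw latitude/longitude passed through a small MLP and concatenated with CNN image features before the classifier. The paper itself explores richer GPS encodings; all layer sizes and class counts here are illustrative.

```python
# Hedged sketch: fusing image features with a GPS-derived feature vector by
# concatenation before the classifier. The raw lat/lon encoding and layer
# sizes are illustrative assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn

class LocationAwareClassifier(nn.Module):
    def __init__(self, num_classes: int = 10, loc_dim: int = 2):
        super().__init__()
        self.image_net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.loc_net = nn.Sequential(nn.Linear(loc_dim, 16), nn.ReLU())
        self.classifier = nn.Linear(16 + 16, num_classes)

    def forward(self, image: torch.Tensor, gps: torch.Tensor) -> torch.Tensor:
        img_feat = self.image_net(image)           # (B, 16) image features
        loc_feat = self.loc_net(gps)               # (B, 16) location features
        return self.classifier(torch.cat([img_feat, loc_feat], dim=1))

model = LocationAwareClassifier()
images = torch.randn(4, 3, 64, 64)
gps = torch.tensor([[40.7128, -74.0060]] * 4)      # (lat, lon) per image
logits = model(images, gps)
print(logits.shape)                                # torch.Size([4, 10])
```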