
Showing papers by Rob Fergus published in 2006


Journal ArticleDOI
TL;DR: It is found that on a database of more than 100 categories, the Bayesian approach produces informative models when the number of training examples is too small for other methods to operate successfully.
Abstract: Learning visual models of object categories notoriously requires hundreds or thousands of training examples. We show that it is possible to learn much information about a category from just one, or a handful, of images. The key insight is that, rather than learning from scratch, one can take advantage of knowledge coming from previously learned categories, no matter how different these categories might be. We explore a Bayesian implementation of this idea. Object categories are represented by probabilistic models. Prior knowledge is represented as a probability density function on the parameters of these models. The posterior model for an object category is obtained by updating the prior in the light of one or more observations. We test a simple implementation of our algorithm on a database of 101 diverse object categories. We compare category models learned by an implementation of our Bayesian approach to models learned by maximum likelihood (ML) and maximum a posteriori (MAP) methods. We find that on a database of more than 100 categories, the Bayesian approach produces informative models when the number of training examples is too small for other methods to operate successfully.

2,976 citations
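
To make the prior-to-posterior update concrete, here is a minimal sketch in Python (not the paper's constellation model): a one-dimensional Gaussian category model with a conjugate Normal prior on its mean, showing how a prior carried over from previously learned categories stabilises the estimate when only a single training example is available. All names and numeric values are illustrative assumptions.

```python
# Minimal sketch: ML vs. Bayesian (conjugate) estimation of a Gaussian mean.
import numpy as np

def ml_mean(x):
    """Maximum-likelihood estimate: just the sample mean."""
    return np.mean(x)

def bayes_posterior_mean(x, prior_mu, prior_var, noise_var):
    """Posterior mean of a Gaussian mean under a Normal prior (conjugate update)."""
    n = len(x)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    return post_var * (prior_mu / prior_var + np.sum(x) / noise_var)

rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.0, size=1)            # a single training example
print("ML estimate:   ", ml_mean(x))        # unreliable with one example
print("Bayes estimate:", bayes_posterior_mean(x, prior_mu=1.5,
                                              prior_var=0.5, noise_var=1.0))
```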


Journal ArticleDOI
01 Jul 2006
TL;DR: This work introduces a method to remove the effects of camera shake from seriously blurred images; the method assumes a uniform camera blur over the image and negligible in-plane camera rotation.
Abstract: Camera shake during exposure leads to objectionable image blur and ruins many photographs. Conventional blind deconvolution methods typically assume frequency-domain constraints on images, or overly simplified parametric forms for the motion path during camera shake. Real camera motions can follow convoluted paths, and a spatial domain prior can better maintain visually salient image characteristics. We introduce a method to remove the effects of camera shake from seriously blurred images. The method assumes a uniform camera blur over the image and negligible in-plane camera rotation. In order to estimate the blur from the camera shake, the user must specify an image region without saturation effects. We show results for a variety of digital photographs taken from personal photo collections.

1,919 citations
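
The paper's central contribution, estimating the blur kernel from a user-selected patch with a natural-image gradient prior and variational inference, is beyond a short sketch. The fragment below only illustrates the final non-blind stage: deconvolving the image once a kernel is available, using Richardson-Lucy iterations. The synthetic box kernel is a stand-in assumption, not an estimated shake kernel.

```python
# Minimal sketch of non-blind deconvolution with a known blur kernel.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, kernel, num_iters=30):
    """Iterative RL deconvolution; `blurred` and `kernel` are non-negative arrays."""
    estimate = np.full_like(blurred, 0.5)
    kernel_flipped = kernel[::-1, ::-1]
    for _ in range(num_iters):
        reblurred = fftconvolve(estimate, kernel, mode="same")
        ratio = blurred / np.maximum(reblurred, 1e-8)
        estimate *= fftconvolve(ratio, kernel_flipped, mode="same")
    return estimate

# Hypothetical usage with a synthetic blur:
img = np.zeros((64, 64)); img[20:40, 20:40] = 1.0
psf = np.ones((5, 5)) / 25.0                 # stand-in for an estimated shake kernel
blurred = fftconvolve(img, psf, mode="same")
restored = richardson_lucy(blurred, psf)
```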


02 Sep 2006
TL;DR: In this article, a random lens is defined as one for which the function relating the input light ray to the output sensor location is pseudo-random, and two machine learning methods are compared for both camera calibration and image reconstruction.
Abstract: We call a random lens one for which the function relating the input light ray to the output sensor location is pseudo-random. Imaging systems with random lenses can expand the space of possible camera designs, allowing new trade-offs in optical design and potentially adding new imaging capabilities. Machine learning methods are critical for both camera calibration and image reconstruction from the sensor data. We develop the theory and compare two different methods for calibration and reconstruction: an MAP approach, and basis pursuit from compressive sensing [5]. We show proof-of-concept experimental results from a random lens made from a multi-faceted mirror, showing successful calibration and image reconstruction. We illustrate the potential for super-resolution and 3D imaging.

108 citations
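
As a rough illustration of the compressive-sensing reconstruction route, the sketch below recovers a sparse signal from pseudo-random linear measurements (the role a calibrated random lens would play) via L1-regularised least squares. It uses plain iterative soft-thresholding (ISTA) rather than a basis-pursuit solver or the MAP formulation compared in the paper; matrix sizes and the regularisation weight are illustrative assumptions.

```python
# Minimal sketch: sparse recovery from pseudo-random measurements via ISTA.
import numpy as np

def ista(A, y, lam=0.05, num_iters=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(num_iters):
        grad = A.T @ (A @ x - y)
        x = x - grad / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 80, 8                          # signal length, measurements, sparsity
x_true = np.zeros(n); x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)      # stand-in for the calibrated random-lens map
y = A @ x_true
x_hat = ista(A, y)
```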


Patent
28 Jul 2006
TL;DR: In this patent, a computer method and system for deblurring an image is provided; it employs statistics on the distribution of intensity gradients from a known model, which is based on a natural image that may be unrelated to the subject image to be deblurred by the system.
Abstract: A computer method and system for deblurring an image is provided. The invention method and system of deblurring employs statistics on the distribution of intensity gradients of a known model. The known model is based on a natural image which may be unrelated to the subject image to be deblurred by the system. Given a subject image having blur, the invention method/system estimates a blur kernel and a solution image portion corresponding to a sample area of the subject image, by applying the statistics to intensity gradients of the sample area and solving for the most probable solution image. The estimation process is carried out at multiple scales and results in a blur kernel. In a last step, the subject image is deconvolved using the resulting blur kernel. The deconvolution generates a deblurred image corresponding to the subject image.

38 citations
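
A minimal sketch of the statistical ingredient described here, under simplifying assumptions: the gradient distribution of an unrelated natural image is summarised as an empirical log-histogram and used to score candidate deblurred images. The patent's parametric prior, multi-scale kernel estimation, and final deconvolution are not reproduced, and the random array used below is only a stand-in for a real natural image.

```python
# Minimal sketch: gradient statistics from one image used to score another.
import numpy as np

def gradient_log_histogram(image, bins=101, lo=-1.0, hi=1.0):
    """Empirical log-probability of horizontal/vertical gradients of an image."""
    gx = np.diff(image, axis=1).ravel()
    gy = np.diff(image, axis=0).ravel()
    hist, edges = np.histogram(np.concatenate([gx, gy]),
                               bins=bins, range=(lo, hi), density=True)
    return np.log(hist + 1e-8), edges

def prior_score(candidate, log_hist, edges):
    """Score a candidate deblurred image under the learned gradient statistics."""
    g = np.concatenate([np.diff(candidate, axis=1).ravel(),
                        np.diff(candidate, axis=0).ravel()])
    idx = np.clip(np.digitize(g, edges) - 1, 0, len(log_hist) - 1)
    return np.sum(log_hist[idx])

rng = np.random.default_rng(0)
natural = rng.random((128, 128))              # stand-in for an unrelated natural image
log_hist, edges = gradient_log_histogram(natural)
print(prior_score(natural, log_hist, edges))
```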


Journal Article
TL;DR: A parts and structure model for object category recognition that can be learnt efficiently in a weakly-supervised manner and, in recognition, applied in a complete manner, bypassing the need for feature detectors, to give the globally optimal match within a query image.
Abstract: We present a parts and structure model for object category recognition that can be learnt efficiently and in a weakly-supervised manner: the model is learnt from example images containing category instances, without requiring segmentation from background clutter. The model is a sparse representation of the object, and consists of a star topology configuration of parts modeling the output of a variety of feature detectors. The optimal choice of feature types (whose repertoire includes interest points, curves and regions) is made automatically. In recognition, the model may be applied efficiently in a complete manner, bypassing the need for feature detectors, to give the globally optimal match within a query image. The approach is demonstrated on a wide variety of categories, and delivers both successful classification and localization of the object within the image.

22 citations
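
To illustrate why a star-topology model admits efficient, detector-free matching, the sketch below densely scores landmark placements by combining per-part response maps shifted to their expected offsets from the landmark. Deformation is approximated with a crude box-shaped max filter rather than the exact efficient matching used in the paper, and all array names and offsets are hypothetical.

```python
# Minimal sketch: dense scoring of a star-topology parts model.
import numpy as np
from scipy.ndimage import maximum_filter, shift

def star_model_score(landmark_map, part_maps, offsets, slack=5):
    """Total model score for the landmark part placed at each pixel.

    landmark_map: HxW response map of the landmark part.
    part_maps:    list of HxW response maps, one per remaining part.
    offsets:      list of (dy, dx) ideal displacements from the landmark.
    """
    score = landmark_map.copy()
    for pm, (dy, dx) in zip(part_maps, offsets):
        relaxed = maximum_filter(pm, size=2 * slack + 1)   # tolerate deformation
        aligned = shift(relaxed, (-dy, -dx), order=0, mode="constant", cval=-np.inf)
        score += aligned
    return score

rng = np.random.default_rng(0)
H, W = 64, 64
landmark = rng.random((H, W))
parts = [rng.random((H, W)) for _ in range(3)]
offsets = [(10, 0), (-5, 8), (0, -12)]        # hypothetical ideal part displacements
best = np.unravel_index(np.argmax(star_model_score(landmark, parts, offsets)), (H, W))
```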