Proceedings ArticleDOI

Object recognition from local scale-invariant features

20 Sep 1999 · Vol. 2, pp. 1150–1157
TL;DR: Experimental results show that robust object recognition can be achieved in cluttered partially occluded images with a computation time of under 2 seconds.
Abstract: An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales. The keys are used as input to a nearest neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low residual least squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered partially occluded images with a computation time of under 2 seconds.
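The staged filtering described in the abstract can be illustrated with a toy difference-of-Gaussian (DoG) extrema detector. This is a hypothetical sketch, not Lowe's implementation: the paper builds an image pyramid and adds further keypoint filtering and orientation assignment, and the `sigma`, `k`, and `threshold` values below are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_extrema(image, sigma=1.6, k=2 ** 0.5, n_scales=4, threshold=1e-3):
    """Find scale-space extrema of the difference-of-Gaussian function.

    Each DoG level is G(x, y, k*sigma_i) - G(x, y, sigma_i); a pixel is kept
    when it is the extremum of its 3x3x3 scale-space neighbourhood, mirroring
    the staged filtering idea in the abstract.
    """
    blurred = [gaussian_filter(image.astype(float), sigma * k ** i)
               for i in range(n_scales + 1)]
    dog = np.stack([b2 - b1 for b1, b2 in zip(blurred, blurred[1:])])
    keypoints = []
    for s in range(1, dog.shape[0] - 1):        # interior scales only
        for y in range(1, dog.shape[1] - 1):
            for x in range(1, dog.shape[2] - 1):
                cube = dog[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
                v = dog[s, y, x]
                if abs(v) > threshold and (v == cube.max() or v == cube.min()):
                    keypoints.append((x, y, sigma * k ** s))
    return keypoints
```

Run on a blurred blob, the detector fires at the blob's centre at a scale roughly matching the blob's size, which is the scale-invariance property the abstract relies on.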


Citations
Journal ArticleDOI
TL;DR: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene and can robustly identify objects among clutter and occlusion while achieving near real-time performance.
Abstract: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.
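The matching step the abstract describes — accepting a feature only when its nearest database neighbour is clearly closer than the second nearest — can be sketched with a brute-force stand-in for the paper's fast nearest-neighbour search; the 0.8 ratio here is illustrative, and descriptors are assumed to be plain NumPy vectors.

```python
import numpy as np

def match_descriptors(query, database, ratio=0.8):
    """Match each query descriptor to its nearest database descriptor,
    keeping a match only when the nearest neighbour is much closer than
    the second nearest (the distance-ratio test)."""
    matches = []
    for qi, q in enumerate(query):
        d = np.linalg.norm(database - q, axis=1)  # distances to all entries
        nearest, second = np.argsort(d)[:2]
        if d[nearest] < ratio * d[second]:
            matches.append((qi, nearest))
    return matches
```

Ambiguous queries, roughly equidistant from two database entries, are rejected by the ratio test rather than matched arbitrarily — which is what makes single-feature matches reliable against a large database.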

46,906 citations


Cites background or methods from "Object recognition from local scale..."

  • ...The initial implementation of this approach (Lowe, 1999) simply located keypoints at the location and scale of the central sample point....

  • ...Earlier work by the author (Lowe, 1999) extended the local feature approach to achieve scale invariance....

  • ...More details on applications of these features to recognition are available in other papers (Lowe, 1999; Lowe, 2001; Se, Lowe and Little, 2002)....

  • ...To efficiently detect stable keypoint locations in scale space, we have proposed (Lowe, 1999) using scale-space extrema in the difference-of-Gaussian function convolved with the image, D(x, y, σ), which can be computed from the difference of two nearby scales separated by a constant multiplicative…...

  • ...More details on applications of these features to recognition are available in other papers (Lowe, 1999, 2001; Se et al., 2002)....

Proceedings ArticleDOI
27 Jun 2016
TL;DR: Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background, and outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.
Abstract: We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background. Finally, YOLO learns very general representations of objects. It outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.
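The "detection as regression" framing can be made concrete by decoding the network's grid output. The layout below follows the paper's description (S×S cells, each predicting B boxes of (x, y, w, h, confidence) plus C class probabilities, with S=7, B=2, C=20); the decoder itself is a simplified sketch and omits the non-max suppression the full system applies.

```python
import numpy as np

def decode_yolo(output, S=7, B=2, C=20, conf_threshold=0.2):
    """Turn an S x S x (B*5 + C) output grid into a list of detections.

    x, y are assumed relative to the grid cell; w, h relative to the
    whole image; each box's score is box confidence times the best
    class probability of its cell.
    """
    detections = []
    for row in range(S):
        for col in range(S):
            cell = output[row, col]
            class_probs = cell[B * 5:]
            for b in range(B):
                x, y, w, h, conf = cell[b * 5:(b + 1) * 5]
                score = conf * class_probs.max()  # class-specific confidence
                if score > conf_threshold:
                    cx = (col + x) / S            # box centre in image coords
                    cy = (row + y) / S
                    detections.append((cx, cy, w, h,
                                       int(class_probs.argmax()), score))
    return detections
```

Because the whole image is mapped to one output tensor in a single forward pass, decoding it is the only post-processing step — which is where the real-time frame rates quoted above come from.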

27,256 citations


Cites methods from "Object recognition from local scale..."

  • ...Detection pipelines generally start by extracting a set of robust features from input images (Haar [25], SIFT [23], HOG [4], convolutional features [6])....

01 Jan 2011
TL;DR: The Scale-Invariant Feature Transform (or SIFT) algorithm is a highly robust method to extract and consequently match distinctive invariant features from images that can then be used to reliably match objects in differing images.
Abstract: The Scale-Invariant Feature Transform (or SIFT) algorithm is a highly robust method to extract and consequently match distinctive invariant features from images. These features can then be used to reliably match objects in differing images. The algorithm was first proposed by Lowe [12] and further developed to increase performance, resulting in the classic paper [13] that served as foundation for SIFT, which has played an important role in robotic and machine vision in the past decade.

14,708 citations

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviews deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, and covers indirect search for short programs encoding deep and large networks.

14,635 citations


Cites methods from "Object recognition from local scale..."

  • ...al., 2013). In fact, deep MPCNNs pre-trained by SL can extract useful features from quite diverse off-training-set images, yielding better results than traditional, widely used features such as SIFT (Lowe, 1999, 2004) on many vision tasks (Razavian et al., 2014). To deal with changing datasets, slowly learning deep NNs were also combined with rapidly adapting “surface” NNs (Kak et al., 2010). Remarkably, in...

  • ...In fact, deep MPCNNs pre-trained by SL can extract useful features from quite diverse off-training-set images, yielding better results than traditional, widely used features such as SIFT (Lowe, 1999, 2004) on many vision tasks (Razavian, Azizpour, Sullivan, & Carlsson, 2014)....

Book ChapterDOI
07 May 2006
TL;DR: A novel scale- and rotation-invariant interest point detector and descriptor, coined SURF (Speeded Up Robust Features), which approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster.
Abstract: In this paper, we present a novel scale- and rotation-invariant interest point detector and descriptor, coined SURF (Speeded Up Robust Features). It approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster. This is achieved by relying on integral images for image convolutions; by building on the strengths of the leading existing detectors and descriptors (in casu, using a Hessian matrix-based measure for the detector, and a distribution-based descriptor); and by simplifying these methods to the essential. This leads to a combination of novel detection, description, and matching steps. The paper presents experimental results on a standard evaluation set, as well as on imagery obtained in the context of a real-life object recognition application. Both show SURF's strong performance.
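The integral-image trick the abstract credits for SURF's speed is easy to illustrate: once a summed-area table is built, the sum over any rectangle — and hence any box-filter response — costs four array lookups regardless of the box size. A minimal sketch (not the SURF implementation, just the underlying data structure):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[y, x] = img[:y, :x].sum()."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(0).cumsum(1)
    return ii

def box_sum(ii, top, left, height, width):
    """Sum of any rectangle in four lookups, independent of its size --
    the property SURF exploits to evaluate box filters in constant time."""
    return (ii[top + height, left + width] - ii[top, left + width]
            - ii[top + height, left] + ii[top, left])
```

SURF's Hessian-based detector is then built from differences of a few such box sums, which is why it can be evaluated at any scale without re-smoothing the image.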

13,011 citations


Cites methods from "Object recognition from local scale..."

  • ...Focusing on speed, Lowe [12] approximated the Laplacian of Gaussian (LoG) by a Difference of Gaussians (DoG) filter....

References
Journal ArticleDOI
TL;DR: The human homologue of the monkey inferotemporal cortex has been identified through use of new non-invasive techniques, and the close correlation between the activity of inferotemporal cells and the perception of object images has been revealed.

221 citations


"Object recognition from local scale..." refers to background in this paper

  • ...Recent research in neuroscience has shown that object recognition in primates makes use of features of intermediate complexity that are largely invariant to changes in scale, location, and illumination (Tanaka [21], Perrett & Oram [16])....

Proceedings ArticleDOI
25 Aug 1996
TL;DR: The method for histogram matching is extended to compute the probability of the presence of an object in an image; the results show that receptive field histograms provide a technique for object recognition that is robust, has low computational cost, and has computational complexity linear in the number of pixels.
Abstract: This paper describes a probabilistic object recognition technique which does not require correspondence matching of images. This technique is an extension of our earlier work (1996) on object recognition using matching of multi-dimensional receptive field histograms. In the earlier paper we have shown that multi-dimensional receptive field histograms can be matched to provide object recognition which is robust in the face of changes in viewing position and independent of image plane rotation and scale. In this paper we extend this method to compute the probability of the presence of an object in an image. The paper begins with a review of the method and previously presented experimental results. We then extend the method for histogram matching to obtain a genuine probability of the presence of an object. We present experimental results on a database of 100 objects showing that the approach is capable of recognizing all objects correctly by using only a small portion of the image. Our results show that receptive field histograms provide a technique for object recognition which is robust, has low computational cost, and has a computational complexity which is linear with the number of pixels.
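The correspondence-free idea above can be sketched with histogram intersection, a standard comparison measure for normalised histograms; note this is a simplified stand-in for the paper's genuine probability computation, and the object names are hypothetical.

```python
import numpy as np

def histogram_intersection(h1, h2):
    """Match score in [0, 1] between two normalised histograms."""
    return np.minimum(h1, h2).sum()

def recognise(query_hist, model_hists):
    """Return the model whose receptive-field histogram best matches
    the query, plus all scores -- no pixel correspondences needed."""
    scores = {name: histogram_intersection(query_hist, h)
              for name, h in model_hists.items()}
    return max(scores, key=scores.get), scores
```

Because only a global histogram is compared, the cost is linear in the number of pixels, matching the complexity claim in the abstract.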

194 citations

Journal ArticleDOI
TL;DR: Object perception may involve seeing, recognition, preparation of actions, and emotional responses--functions that human brain imaging and neuropsychology suggest are localized separately; perhaps because of this specialization, object perception is remarkably rapid and efficient.

192 citations


"Object recognition from local scale..." refers to background in this paper

  • ...It is also known that object recognition in the brain depends on a serial process of attention to bind features to object interpretations, determine pose, and segment an object from a cluttered background [22]....

Journal ArticleDOI
TL;DR: A method for recognizing partially occluded objects for bin-picking tasks using eigenspace analysis, referred to as the "eigen window" method, that stores multiple partial appearances of an object in an eIGenspace to reduce memory requirements.
Abstract: This paper describes a method for recognizing partially occluded objects for bin-picking tasks using eigenspace analysis, referred to as the "eigen window" method, that stores multiple partial appearances of an object in an eigenspace. Such partial appearances require a large amount of memory space. Three measurements, detectability, uniqueness, and reliability, on windows are developed to eliminate redundant windows and thereby reduce memory requirements. Using a pose clustering technique, the method determines the pose of an object and the object type itself. We have implemented the method and verified its validity.
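The eigenspace step underlying the eigen-window method can be sketched as follows — a minimal PCA projection of flattened image windows. The window-selection measures (detectability, uniqueness, reliability) and the pose-clustering stage are omitted, and `k` is an illustrative choice.

```python
import numpy as np

def build_eigenspace(windows, k=8):
    """Stack flattened training windows and keep the top-k principal
    directions; partial appearances are then compared in this
    low-dimensional space instead of raw pixel space."""
    X = np.asarray(windows, dtype=float)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]

def project(window, mean, basis):
    """Coordinates of one window in the learned eigenspace."""
    return basis @ (np.ravel(window) - mean)
```

Matching a small local window against the stored projections is what lets the method tolerate partial occlusion: only the visible windows need to find neighbours in the eigenspace.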

179 citations


"Object recognition from local scale..." refers to background in this paper

  • ...Ohba & Ikeuchi [15] successfully apply the eigenspace approach to cluttered images by using many small local eigen-windows, but this then requires expensive search for each window in a new image, as with template matching....

01 Jan 1997
TL;DR: Nearest-neighbor correlation-based similarity computation in the space of outputs of complex-type receptive fields can support robust recognition of 3D objects and has interesting implications for the design of a front end to an artificial object recognition system and the understanding of the faculty of object recognition in primate vision.
Abstract: Nearest-neighbor correlation-based similarity computation in the space of outputs of complex-type receptive fields can support robust recognition of 3D objects. Our experiments with four collections of objects resulted in mean recognition rates between 84% (for subordinate-level discrimination among 15 quadruped animal shapes) and 94% (for basic-level recognition of 20 everyday objects), over a 40° × 40° range of viewpoints, centered on a stored canonical view and related to it by rotations in depth (comparable figures were obtained for image-plane translations). This result has interesting implications for the design of a front end to an artificial object recognition system, and for the understanding of the faculty of object recognition in primate vision.

110 citations


"Object recognition from local scale..." refers to background in this paper

  • ...The feature responses have been shown to depend on previous visual learning from exposure to specific objects containing the features (Logothetis, Pauls & Poggio [10])....

  • ...Edelman, Intrator & Poggio [5] have performed experiments that simulated the responses of complex neurons to different 3D views of computer graphic models, and found that the complex cell outputs provided much better discrimination than simple correlation-based matching....
