Proceedings Article•DOI•

Object recognition from local scale-invariant features

20 Sep 1999 - Vol. 2, pp 1150-1157
TL;DR: Experimental results show that robust object recognition can be achieved in cluttered partially occluded images with a computation time of under 2 seconds.
Abstract: An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales. The keys are used as input to a nearest neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low residual least squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered partially occluded images with a computation time of under 2 seconds.
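
The staged filtering described in the abstract can be illustrated with a small sketch: build a Gaussian scale space, subtract adjacent levels to form a difference-of-Gaussian (DoG) stack, and keep points that are extrema across both space and scale. This is a minimal illustration of the idea, not the paper's implementation; the parameter names and thresholds (num_scales, sigma0, contrast_thresh) are assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter

    def dog_keypoints(image, num_scales=5, sigma0=1.6, k=2 ** 0.5, contrast_thresh=0.03):
        """Sketch of staged filtering: stable points are space-scale extrema of DoG."""
        image = image.astype(np.float64) / 255.0
        # Gaussian scale space: progressively stronger blur
        levels = [gaussian_filter(image, sigma0 * k ** i) for i in range(num_scales)]
        # Differences of adjacent levels approximate the scale-normalized Laplacian
        dog = np.stack([levels[i + 1] - levels[i] for i in range(num_scales - 1)])
        # Keep points that are maxima or minima over their 3x3x3 (scale, row, col) neighborhood
        is_max = dog == maximum_filter(dog, size=3)
        is_min = dog == minimum_filter(dog, size=3)
        stable = (is_max | is_min) & (np.abs(dog) > contrast_thresh)
        return np.argwhere(stable)  # rows of (scale_index, row, col)

Descriptor construction, nearest neighbor indexing, and the least squares verification stage described in the abstract would operate on these detections.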


Citations
Patent•
Steven Bathiche, William J. Westerinen, Miller T. Abel, Julio Estrada, Charles J. Migos
09 Jun 2010
TL;DR: In this paper, a mobile device such as a cell phone is used to remotely control an electronic appliance such as a television or personal computer; the associated user interface configuration and communication protocol are then implemented to control the appliance.
Abstract: A mobile device such as a cell phone is used to remotely control an electronic appliance such as a television or personal computer. In a setup phase, the mobile device captures an image of the electronic appliance and identifies and stores scale-invariant features of the image. A user interface configuration such as a virtual keypad configuration, and a communication protocol, can be associated with the stored data. Subsequently, in an implementation phase, another image of the electronic appliance is captured and compared to the stored features in a library to identify a match. In response, the associated user interface configuration and communication protocol are implemented to control the electronic appliance. In a polling and reply process, the mobile device captures a picture of a display of the electronic device and compares it to image data which is transmitted by the electronic appliance.
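
As a rough illustration of the setup and lookup flow described in this abstract, the sketch below stores SIFT-style descriptors per appliance and later matches a new capture against that library. OpenCV's generic SIFT detector and brute-force matcher are used as stand-ins; the function names, ratio threshold, and minimum match count are illustrative assumptions, not the patent's protocol.

    import cv2

    def build_library(appliance_images):
        """Setup phase sketch: store descriptors for each known appliance image."""
        sift = cv2.SIFT_create()
        return {name: sift.detectAndCompute(img, None)[1]
                for name, img in appliance_images.items()}

    def identify_appliance(query_img, library, ratio=0.75, min_matches=10):
        """Lookup phase sketch: match a new capture against the stored library."""
        sift = cv2.SIFT_create()
        _, query_desc = sift.detectAndCompute(query_img, None)
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        best_name, best_count = None, 0
        for name, desc in library.items():
            if desc is None or query_desc is None:
                continue
            pairs = matcher.knnMatch(query_desc, desc, k=2)
            # Ratio test keeps only distinctive correspondences
            good = [p for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]
            if len(good) > best_count:
                best_name, best_count = name, len(good)
        return best_name if best_count >= min_matches else None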

156 citations

Proceedings Article•
Piotr Bojanowski, Armand Joulin
06 Aug 2017
TL;DR: In this article, the authors propose to fix a set of target representations, called Noise As Targets (NAT), and constrain the deep features to align to them to avoid trivial solutions and collapsing of features.
Abstract: Convolutional neural networks provide visual features that perform well in many computer vision applications. However, training these networks requires large amounts of supervision; this paper introduces a generic framework to train such networks, end-to-end, with no supervision. We propose to fix a set of target representations, called Noise As Targets (NAT), and to constrain the deep features to align to them. This domain agnostic approach avoids the standard unsupervised learning issues of trivial solutions and collapsing of features. Thanks to a stochastic batch reassignment strategy and a separable square loss function, it scales to millions of images. The proposed approach produces representations that perform on par with state-of-the-art unsupervised methods on ImageNet and PASCAL VOC.
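
A minimal sketch of the Noise As Targets idea as summarized above: fix random unit-norm target vectors, and within each batch reassign which target each feature maps to before minimizing the square loss. A Hungarian solver is used here for the reassignment for clarity; the paper's cheaper stochastic batch reassignment and the network producing the features are omitted, so treat the names and shapes as assumptions.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    rng = np.random.default_rng(0)

    def make_targets(num_images, dim):
        """Fixed random targets, normalized to the unit sphere."""
        t = rng.standard_normal((num_images, dim))
        return t / np.linalg.norm(t, axis=1, keepdims=True)

    def reassign_and_loss(features, batch_targets):
        """Reassign targets within a batch, then compute the separable square loss."""
        # cost[i, j] = squared distance between feature i and candidate target j
        cost = ((features[:, None, :] - batch_targets[None, :, :]) ** 2).sum(axis=-1)
        rows, cols = linear_sum_assignment(cost)
        assigned = batch_targets[cols]
        loss = ((features - assigned) ** 2).mean()
        return assigned, loss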

155 citations

Journal Article•DOI•
TL;DR: This survey provides a comprehensive review of multimodal image matching methods, from handcrafted to deep learning approaches, for each research field according to imaging nature, including medical, remote sensing, and computer vision.

155 citations


Cites methods from "Object recognition from local scale..."

  • ...LoG is approximated by the difference of Gaussians (DoG) in the famous SIFT method [121,160] to reduce computation....

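For reference, the approximation mentioned in this quote is usually written as follows (a standard scale-space result, not taken from the survey itself):

    \[
    D(x, y, \sigma) \;=\; \bigl(G(x, y, k\sigma) - G(x, y, \sigma)\bigr) * I(x, y)
    \;\approx\; (k - 1)\,\sigma^{2}\,\nabla^{2}G * I(x, y),
    \]

which follows from the heat-equation identity \(\partial G / \partial \sigma = \sigma \nabla^{2} G\); with a fixed ratio \(k\) between adjacent scales, the DoG is a constant multiple of the scale-normalized LoG, so it can replace it at much lower cost.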

Journal Article•DOI•
TL;DR: A review of the literature on vehicle detection under varying environments is provided, covering appearance-based and motion-based methods for robust vehicle detection under various on-road conditions.

155 citations

Book Chapter•DOI•
TL;DR: Experimental results indicate that this generic model of local shape and deformation is applicable across a wide variety of object categories, providing good performance for object recognition and localization on a difficult object recognition benchmark.
Abstract: We address comparing related, but not identical shapes in images following a deformable template strategy. At the heart of this is the notion of an alignment between the shapes to be matched. The transformation necessary for alignment and the remaining differences after alignment are then used to make a comparison. A model determines what kind of deformations or alignments are acceptable, and what variation in appearance should remain after alignment. This ties strongly with the idea that the difference in shape is the residual difference, after some family of transformations has been applied for alignment. Finding an alignment of a model to a novel object involves search through the space of possible alignments. In many settings this search is quite difficult. This work shows that the search can be approximated by an easier discrete matching problem between key points on a model and a novel object. This is a departure from traditional approaches to deformable template matching that concentrate on analyzing differential models. This thesis presents theories and experiments on searching for, identifying, and using alignments found via discrete matchings. In particular we present a mathematical and ecological motivation for a medium scale descriptor of shape, geometric blur. Geometric blur is an average over transformations of a sparse signal or feature channel, and can be computed using a spatially varying convolution. The resulting shape descriptors are useful for evaluating local shape similarity. Experiments demonstrate their efficacy for image classification and shape correspondence. Finding alignments between shapes is formulated as an optimization problem over discrete matchings between feature points in images. Similarity between putative correspondences is measured using geometric blur, and the deformation in the configuration of points is measured by summing over deformations in pairwise relationships. The matching problem is formulated as an integer quadratic programming problem and approximated with a simple technique. Experimental results indicate that this generic model of local shape and deformation is applicable across a wide variety of object categories, providing good (currently the best known) performance for object recognition and localization on a difficult object recognition benchmark. Furthermore this generic object alignment strategy can be used to model variation in images of an object category, identifying the repeated object structures and providing automatic localization of the objects.
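
The spatially varying convolution behind geometric blur can be sketched as below: a sparse feature channel is blurred with a Gaussian whose width grows linearly with distance from the descriptor center, here approximated by picking from a few precomputed blur levels. The linear coefficients, level count, and patch radius are illustrative assumptions rather than the thesis's settings.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def geometric_blur_patch(channel, center, radius=20, alpha=0.5, beta=1.0, num_levels=6):
        """Sketch: sample a geometrically blurred patch around center = (row, col).
        Assumes center lies at least `radius` pixels inside the channel."""
        sigmas = [beta + alpha * radius * i / (num_levels - 1) for i in range(num_levels)]
        blurred = [gaussian_filter(channel.astype(float), s) for s in sigmas]
        ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        dist = np.sqrt(ys ** 2 + xs ** 2)
        # Pick, per offset, the precomputed level whose blur is closest to alpha*|x| + beta
        level = np.clip(np.round((num_levels - 1) * dist / radius).astype(int), 0, num_levels - 1)
        cy, cx = center
        patch = np.zeros_like(dist, dtype=float)
        for i in range(num_levels):
            mask = level == i
            patch[mask] = blurred[i][cy + ys[mask], cx + xs[mask]]
        return patch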

155 citations


Cites background from "Object recognition from local scale..."

  • ...SIFT is usually used in conjunction with a region of interest operator based on finding local maxima of a scale space operator based on the difference of Gaussians applied to pixel values....


  • ...We will use SIFT as an example to illustrate the differences.5 The first difference is the region of interest operator....


  • ...Generally the SIFT type descriptors are suited to identifying parts of the same object from multiple views, while the geometric blur based descriptor described here is designed to evaluate potential similarity under intra-class variation exploiting larger support and spatially varying blur....


  • ...Finally the underlying features for the geometric blur based descriptor described in this chapter and SIFT are both based on oriented edge maps, with slightly different details in engineering....


  • ...It is worth noting that in general the scale relative to the edge features is much larger than that commonly found with the SIFT region of interest operator....


References
Journal Article•DOI•
TL;DR: In this paper, color histograms of multicolored objects provide a robust, efficient cue for indexing into a large database of models, and they can differentiate among a large number of objects.
Abstract: Computer vision is moving into a new era in which the aim is to develop visual skills for robots that allow them to interact with a dynamic, unconstrained environment. To achieve this aim, new kinds of vision algorithms need to be developed which run in real time and subserve the robot's goals. Two fundamental goals are determining the identity of an object with a known location, and determining the location of a known object. Color can be successfully used for both tasks. This dissertation demonstrates that color histograms of multicolored objects provide a robust, efficient cue for indexing into a large database of models. It shows that color histograms are stable object representations in the presence of occlusion and over change in view, and that they can differentiate among a large number of objects. For solving the identification problem, it introduces a technique called Histogram Intersection, which matches model and image histograms and a fast incremental version of Histogram Intersection which allows real-time indexing into a large database of stored models. It demonstrates techniques for dealing with crowded scenes and with models with similar color signatures. For solving the location problem it introduces an algorithm called Histogram Backprojection which performs this task efficiently in crowded scenes.
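
Histogram Intersection as described above has a very small core: the match score is the sum of element-wise minima of the image and model histograms, normalized by the model's total count. The sketch below assumes histograms are already binned into flat arrays; the bin layout is an illustrative choice.

    import numpy as np

    def histogram_intersection(image_hist, model_hist):
        """Match score in [0, 1]: sum of bin-wise minima, normalized by the model."""
        return np.minimum(image_hist, model_hist).sum() / model_hist.sum()

    def rank_models(image_hist, model_hists):
        """Index into a database: rank stored model histograms by intersection score."""
        scores = {name: histogram_intersection(image_hist, h) for name, h in model_hists.items()}
        return sorted(scores, key=scores.get, reverse=True)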

5,672 citations

Journal Article•DOI•
TL;DR: It is shown how the boundaries of an arbitrary non-analytic shape can be used to construct a mapping between image space and Hough transform space, which makes the generalized Hough transform a kind of universal transform which can be used to find arbitrarily complex shapes.
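
As a rough sketch of the generalized Hough transform summarized above: an R-table maps gradient orientation to displacement vectors from boundary points to a reference point, and at detection time each edge point votes for candidate reference locations. The orientation quantization and integer pixel coordinates are simplifying assumptions.

    import numpy as np
    from collections import defaultdict

    def build_r_table(boundary_points, orientations, reference, num_bins=36):
        """boundary_points: (N, 2) ints (row, col); orientations: N angles in radians."""
        r_table = defaultdict(list)
        for (r, c), theta in zip(boundary_points, orientations):
            b = int((theta % (2 * np.pi)) / (2 * np.pi) * num_bins) % num_bins
            r_table[b].append((reference[0] - r, reference[1] - c))
        return r_table

    def vote(edge_points, edge_orientations, r_table, image_shape, num_bins=36):
        """Each edge point votes for possible reference-point positions."""
        accumulator = np.zeros(image_shape, dtype=int)
        for (r, c), theta in zip(edge_points, edge_orientations):
            b = int((theta % (2 * np.pi)) / (2 * np.pi) * num_bins) % num_bins
            for dr, dc in r_table.get(b, []):
                rr, cc = r + dr, c + dc
                if 0 <= rr < image_shape[0] and 0 <= cc < image_shape[1]:
                    accumulator[rr, cc] += 1
        return accumulator  # peaks mark candidate shape locations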

4,310 citations

Journal Article•DOI•
TL;DR: A near real-time recognition system with 20 complex objects in the database has been developed and a compact representation of object appearance is proposed that is parametrized by pose and illumination.
Abstract: The problem of automatically learning object models for recognition and pose estimation is addressed. In contrast to the traditional approach, the recognition problem is formulated as one of matching appearance rather than shape. The appearance of an object in a two-dimensional image depends on its shape, reflectance properties, pose in the scene, and the illumination conditions. While shape and reflectance are intrinsic properties and constant for a rigid object, pose and illumination vary from scene to scene. A compact representation of object appearance is proposed that is parametrized by pose and illumination. For each object of interest, a large set of images is obtained by automatically varying pose and illumination. This image set is compressed to obtain a low-dimensional subspace, called the eigenspace, in which the object is represented as a manifold. Given an unknown input image, the recognition system projects the image to eigenspace. The object is recognized based on the manifold it lies on. The exact position of the projection on the manifold determines the object's pose in the image. A variety of experiments are conducted using objects with complex appearance characteristics. The performance of the recognition and pose estimation algorithms is studied using over a thousand input images of sample objects. Sensitivity of recognition to the number of eigenspace dimensions and the number of learning samples is analyzed. For the objects used, appearance representation in eigenspaces with less than 20 dimensions produces accurate recognition results with an average pose estimation error of about 1.0 degree. A near real-time recognition system with 20 complex objects in the database has been developed. The paper is concluded with a discussion on various issues related to the proposed learning and recognition methodology.
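
A compact sketch of the eigenspace recognition described above: compress many (pose, illumination) views per object into a low-dimensional subspace via PCA, then recognize a new image by projecting it and finding the closest stored projection. The nearest-point lookup below stands in for the manifold parametrization in the paper, and the dimensionality is only illustrative.

    import numpy as np

    def build_eigenspace(training_images, num_dims=20):
        """training_images: list of (object_id, flattened image vector)."""
        X = np.stack([vec for _, vec in training_images]).astype(float)
        mean = X.mean(axis=0)
        _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
        basis = vt[:num_dims]                 # principal directions (eigenspace)
        coords = (X - mean) @ basis.T         # training images projected to eigenspace
        labels = [obj for obj, _ in training_images]
        return mean, basis, coords, labels

    def recognize(image_vec, mean, basis, coords, labels):
        """Project the unknown image and return the label of the nearest projection."""
        p = (image_vec.astype(float) - mean) @ basis.T
        return labels[int(np.argmin(np.linalg.norm(coords - p, axis=1)))]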

2,037 citations

Journal Article•DOI•
TL;DR: This paper addresses the problem of retrieving images from large image databases with a method based on local grayvalue invariants which are computed at automatically detected interest points and allows for efficient retrieval from a database of more than 1,000 images.
Abstract: This paper addresses the problem of retrieving images from large image databases. The method is based on local grayvalue invariants which are computed at automatically detected interest points. A voting algorithm and semilocal constraints make retrieval possible. Indexing allows for efficient retrieval from a database of more than 1,000 images. Experimental results show correct retrieval in the case of partial visibility, similarity transformations, extraneous features, and small perspective deformations.
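
The voting step described above can be sketched simply: each interest-point descriptor from the query casts a vote for the database image holding its nearest stored descriptor, and images are ranked by votes. The brute-force search and distance threshold below are simplifications of the paper's indexing and semilocal constraints.

    import numpy as np
    from collections import Counter

    def retrieve(query_descriptors, db_descriptors, db_image_ids, max_dist=0.4):
        """db_descriptors: (N, d) array; db_image_ids[i] names the image of row i."""
        votes = Counter()
        for q in query_descriptors:
            dists = np.linalg.norm(db_descriptors - q, axis=1)
            best = int(np.argmin(dists))
            if dists[best] < max_dist:
                votes[db_image_ids[best]] += 1
        return votes.most_common()  # (image_id, vote count), best first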

1,756 citations


"Object recognition from local scale..." refers background or methods in this paper

  • ...This allows for the use of more distinctive image descriptors than the rotation-invariant ones used by Schmid and Mohr, and the descriptor is further modified to improve its stability to changes in affine projection and illumination....


  • ...For the object recognition problem, Schmid & Mohr [19] also used the Harris corner detector to identify interest points, and then created a local image descriptor at each interest point from an orientation-invariant vector of derivative-of-Gaussian image measurements....



  • ...However, recent research on the use of dense local features (e.g., Schmid & Mohr [19]) has shown that efficient recognition can often be achieved by using local image descriptors sampled at a large number of repeatable locations....


Journal Article•DOI•
TL;DR: A robust approach to image matching by exploiting the only available geometric constraint, namely, the epipolar constraint, is proposed and a new strategy for updating matches is developed, which only selects those matches having both high matching support and low matching ambiguity.
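
In the spirit of the TL;DR above, the sketch below keeps only putative matches consistent with a robustly estimated epipolar geometry. OpenCV's RANSAC fundamental-matrix estimator stands in for the paper's own robust estimation and match-updating strategy, so this illustrates the constraint rather than the original algorithm.

    import numpy as np
    import cv2

    def filter_by_epipolar_constraint(points1, points2, threshold=1.0):
        """points1, points2: (N, 2) arrays of corresponding image points."""
        if len(points1) < 8:
            return np.zeros(len(points1), dtype=bool)  # too few points to estimate F
        F, mask = cv2.findFundamentalMat(points1.astype(np.float32),
                                         points2.astype(np.float32),
                                         cv2.FM_RANSAC, threshold)
        if F is None:
            return np.zeros(len(points1), dtype=bool)
        return mask.ravel().astype(bool)  # True where the match satisfies the constraint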

1,574 citations


"Object recognition from local scale..." refers methods in this paper

  • ...[23] used the Harris corner detector to identify feature locations for epipolar alignment of images taken from differing viewpoints....
