Proceedings ArticleDOI

Object recognition from local scale-invariant features

20 Sep 1999 - Vol. 2, pp 1150-1157
TL;DR: Experimental results show that robust object recognition can be achieved in cluttered, partially occluded images with a computation time of under 2 seconds.
Abstract: An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales. The keys are used as input to a nearest neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low residual least squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered, partially occluded images with a computation time of under 2 seconds.
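
The pipeline in this abstract has three stages: key detection, nearest-neighbor indexing, and least-squares verification. As a rough illustration of the last two stages, here is a minimal numpy sketch; detection is omitted, the ratio threshold is illustrative, and a 2-D similarity transform stands in for the paper's full model-parameter fit.

```python
import numpy as np

def match_keys(model_descs, image_descs, ratio=0.8):
    """Nearest-neighbor matching with a ratio test (assumption:
    descriptors are rows of float arrays, one row per key)."""
    matches = []
    for i, d in enumerate(image_descs):
        dists = np.linalg.norm(model_descs - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:   # closest much better than 2nd
            matches.append((j, i))
    return matches

def verify_similarity(model_pts, image_pts):
    """Low-residual least-squares fit of a 2-D similarity transform
    (scale, rotation, translation) to the candidate matches."""
    A, b = [], []
    for (x, y), (u, v) in zip(model_pts, image_pts):
        A += [[x, -y, 1, 0], [y, x, 0, 1]]
        b += [u, v]
    params, res, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return params, res   # params = (s*cos t, s*sin t, tx, ty)
```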


Citations
Proceedings ArticleDOI
27 Jun 2004
TL;DR: In this article, the authors introduce a new class of distinguished regions based on detecting the most salient convex local arrangements of contours in the image, which are used in a similar way to the local interest points extracted from gray-level images.
Abstract: We introduce a new class of distinguished regions based on detecting the most salient convex local arrangements of contours in the image. The regions are used in a similar way to the local interest points extracted from gray-level images, but they capture shape rather than texture. Local convexity is characterized by measuring the extent to which the detected image contours support circle or arc-like local structures at each position and scale in the image. Our saliency measure combines two cost functions defined on the tangential edges near the circle: a tangential-gradient energy term, and an entropy term that ensures local support from a wide range of angular positions around the circle. The detected regions are invariant to scale changes and rotations, and robust against clutter, occlusions and spurious edge detections. Experimental results show very good performance for both shape matching and recognition of object categories.
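
As a loose sketch of the saliency measure described above (a tangential-gradient energy term combined with an entropy term over angular positions), the following toy function scores one candidate circle. All names, the annulus width, and the exact combination rule are assumptions for illustration, not the paper's definitions.

```python
import numpy as np

def circle_saliency(edge_xy, edge_dir, cx, cy, r, n_bins=16, band=2.0):
    """Score one candidate circle (cx, cy, r) from edge points edge_xy
    (n x 2) with tangent orientations edge_dir (n,)."""
    d = edge_xy - np.array([cx, cy])
    dist = np.linalg.norm(d, axis=1)
    near = np.abs(dist - r) < band            # edges in a thin annulus
    if not near.any():
        return 0.0
    ang = np.arctan2(d[near, 1], d[near, 0])  # angular position on circle
    # tangential energy: edge tangent perpendicular to the radius
    align = np.abs(np.sin(edge_dir[near] - ang))
    energy = align.sum()
    # entropy term: support spread over many angular positions
    hist, _ = np.histogram(ang, bins=n_bins, range=(-np.pi, np.pi))
    p = hist / hist.sum()
    entropy = -(p[p > 0] * np.log(p[p > 0])).sum()
    return energy * entropy / np.log(n_bins)  # normalized combination
```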

218 citations


Cites background from "Object recognition from local scale..."

  • ...Local invariant features based on gray-level patches have proven very successful for matching and recognition of textured objects [14, 15, 20]....


Proceedings ArticleDOI
07 Jul 2001
TL;DR: A texture region descriptor is described and demonstrated which is invariant to affine geometric and photometric transformations, and insensitive to the shape of the texture region, resulting in richer and more stable descriptors than those computed at a point.
Abstract: We describe and demonstrate a texture region descriptor which is invariant to affine geometric and photometric transformations, and insensitive to the shape of the texture region. It is applicable to texture patches which are locally planar and have stationary statistics. The novelty of the descriptor is that it is based on statistics aggregated over the region, resulting in richer and more stable descriptors than those computed at a point. Two texture matching applications of this descriptor are demonstrated: (1) it is used to automatically identify regions of the same type of texture, but with varying surface pose, within a single image; (2) it is used to support wide baseline stereo, i.e. to enable the automatic computation of the epipolar geometry between two images acquired from quite separated viewpoints. Results are presented on several sets of real images.
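
To make the point-versus-region distinction concrete, here is a toy descriptor that aggregates a statistic over a whole patch rather than sampling at a point. The specific statistic is an assumption for illustration; the paper's actual affine-invariant texture statistics differ.

```python
import numpy as np

def region_descriptor(patch, n_bins=32):
    """Toy region descriptor: a histogram of a statistic aggregated
    over the whole patch, rather than a measurement at one point."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    # normalize so affine photometric changes (gain) cancel out
    mag = mag / (mag.mean() + 1e-9)
    hist, _ = np.histogram(mag, bins=n_bins, range=(0, 4), density=True)
    return hist
```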

218 citations


Additional excerpts

  • ...The wide baseline application is described in section 3....


Journal ArticleDOI
TL;DR: It is demonstrated that high precision can be achieved by combining multiple sources of information, both visual and textual; time-stamped character annotations are generated automatically by aligning subtitles and transcripts.

218 citations


Cites methods from "Object recognition from local scale..."

  • ...In the current exemplar framework slightly worse results on the naming task were obtained by using SIFT (compared to the simple pixel-based descriptor), but this might reasonably be attributed to the SIFT descriptor incorporating too much invariance to slight appearance changes relevant for discriminating faces....


  • ...It is natural to consider the use of more established image representations commonly used in face recognition, for example so-called Eigenfaces [26] or Fisherfaces [27], or alternative local feature representations such as SIFT [28] which have successfully been used in feature-matching tasks including face matching [4], especially considering the simplicity of the descriptor proposed here....


  • ...For example, replacing the pixel-based descriptor with a SIFT [28] descriptor or using Eigen facial-features would give some robustness to image deformation....


Journal ArticleDOI
TL;DR: The results indicate that the registration accuracy of ARRSI is comparable to that produced by a human expert and an improvement over the baseline and multimodal sum of squared differences registration techniques tested.
Abstract: This paper presents the Automatic Registration of Remote-Sensing Images (ARRSI), an automatic registration system built to register satellite and aerial remotely sensed images. The system is designed specifically to address the problems associated with the registration of remotely sensed images obtained at different times and/or from different sensors. The ARRSI system is capable of handling remotely sensed images geometrically distorted by various transformations such as translation, rotation, and shear. Global and local contrast issues associated with remotely sensed images are addressed in ARRSI using control-point detection and matching processes based on a phase-congruency model. Intensity-difference issues associated with multimodal registration of remotely sensed images are addressed in ARRSI through the use of features that are invariant to intensity mappings during the control-point matching process. An adaptive control-point matching scheme is employed in ARRSI to reduce the performance issues associated with the registration of large remotely sensed images. Finally, a variation on the Random Sample and Consensus algorithm called Maximum Distance Sample Consensus is introduced in ARRSI to improve the accuracy of the transformation model between two remotely sensed images while minimizing computational overhead. The ARRSI system has been tested using various satellite and aerial remotely sensed images and evaluated based on its accuracy and computational performance. The results indicate that the registration accuracy of ARRSI is comparable to that produced by a human expert and an improvement over the baseline and multimodal sum of squared differences registration techniques tested.
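
The paper's Maximum Distance Sample Consensus is a variation on RANSAC whose modified sampling and scoring are not spelled out in this abstract, so the sketch below shows plain RANSAC fitting an affine transform to matched control points, i.e. the baseline the variation builds on. The tolerance and iteration count are illustrative.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src (n x 2) to dst (n x 2)."""
    A = np.hstack([src, np.ones((len(src), 1))])   # rows [x y 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # 3 x 2 parameter matrix
    return M

def ransac_affine(src, dst, iters=500, tol=3.0, seed=0):
    """Plain RANSAC: repeatedly fit on a minimal sample of 3 matches,
    keep the model with the largest consensus set, then refit on it."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)
        M = fit_affine(src[idx], dst[idx])
        pred = np.hstack([src, np.ones((len(src), 1))]) @ M
        inliers = np.linalg.norm(pred - dst, axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_affine(src[best_inliers], dst[best_inliers]), best_inliers
```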

218 citations

Journal ArticleDOI
TL;DR: A semantic allocation level (SAL) multifeature fusion strategy based on PTM, namely, SAL-PTM (SAL-pLSA and SAL-LDA), for HSR imagery is proposed, and the experimental results confirmed that SAL-PTM is superior to the single-feature methods and CAT-PTM in the scene classification of HSR imagery.
Abstract: Scene classification has been proved to be an effective method for high spatial resolution (HSR) remote sensing image semantic interpretation. The probabilistic topic model (PTM) has been successfully applied to natural scenes by utilizing a single feature (e.g., the spectral feature); however, it is inadequate for HSR images due to the complex structure of the land-cover classes. Although several studies have investigated techniques that combine multiple features, the different features are usually quantized after simple concatenation (CAT-PTM). Unfortunately, due to the inadequate fusion capacity of k-means clustering, the words of the visual dictionary obtained by CAT-PTM are highly correlated. In this paper, a semantic allocation level (SAL) multifeature fusion strategy based on PTM, namely, SAL-PTM (SAL-pLSA and SAL-LDA) for HSR imagery is proposed. In SAL-PTM: 1) the complementary spectral, texture, and scale-invariant-feature-transform features are effectively combined; 2) the three features are extracted and quantized separately by k-means clustering, which can provide appropriate low-level feature descriptions for the semantic representations; and 3) the latent semantic allocations of the three features are captured separately by PTM, which follows the core idea of PTM-based scene classification. The probabilistic latent semantic analysis (pLSA) and latent Dirichlet allocation (LDA) models were compared to test the effect of different PTMs for HSR imagery. A U.S. Geological Survey data set and the UC Merced data set were utilized to evaluate SAL-PTM in comparison with the conventional methods. The experimental results confirmed that SAL-PTM is superior to the single-feature methods and CAT-PTM in the scene classification of HSR imagery.
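
The core of SAL-PTM, as the abstract describes it, is quantizing each feature type into its own visual dictionary, inferring topic allocations per feature, and fusing only at the semantic level. A minimal sketch of that flow, assuming scikit-learn's KMeans and LatentDirichletAllocation as stand-ins for the paper's pLSA/LDA implementations; dictionary and topic sizes are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation

def topic_vectors(feature_sets, n_words=200, n_topics=30):
    """feature_sets: list over feature types; each entry is a list of
    per-scene descriptor arrays (n_descriptors x dim)."""
    fused = []
    for scenes in feature_sets:                     # one feature type
        km = KMeans(n_clusters=n_words, n_init=10).fit(np.vstack(scenes))
        counts = np.zeros((len(scenes), n_words))
        for i, desc in enumerate(scenes):           # bag-of-words per scene
            words, n = np.unique(km.predict(desc), return_counts=True)
            counts[i, words] = n
        lda = LatentDirichletAllocation(n_components=n_topics)
        fused.append(lda.fit_transform(counts))     # per-feature topics
    return np.hstack(fused)                         # semantic-level fusion
```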

217 citations


Additional excerpts

  • ...The SIFT feature descriptor can overcome affine transformations, changes in the illumination, and changes in the 3-D viewpoint and has thus been widely applied in image analysis [14], [44], [51]....


References
Journal ArticleDOI
TL;DR: In this paper, color histograms of multicolored objects provide a robust, efficient cue for indexing into a large database of models, and they can differentiate among a large number of objects.
Abstract: Computer vision is moving into a new era in which the aim is to develop visual skills for robots that allow them to interact with a dynamic, unconstrained environment. To achieve this aim, new kinds of vision algorithms need to be developed which run in real time and subserve the robot's goals. Two fundamental goals are determining the identity of an object with a known location, and determining the location of a known object. Color can be successfully used for both tasks. This dissertation demonstrates that color histograms of multicolored objects provide a robust, efficient cue for indexing into a large database of models. It shows that color histograms are stable object representations in the presence of occlusion and over change in view, and that they can differentiate among a large number of objects. For solving the identification problem, it introduces a technique called Histogram Intersection, which matches model and image histograms and a fast incremental version of Histogram Intersection which allows real-time indexing into a large database of stored models. It demonstrates techniques for dealing with crowded scenes and with models with similar color signatures. For solving the location problem it introduces an algorithm called Histogram Backprojection which performs this task efficiently in crowded scenes.
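
Histogram Intersection itself is simple enough to state in a few lines: the match score between an image histogram I and a model histogram M is sum_j min(I_j, M_j), normalized by the model's total count. A minimal numpy version follows; the fast incremental variant and Histogram Backprojection are not shown.

```python
import numpy as np

def histogram_intersection(image_hist, model_hist):
    """Swain & Ballard's intersection score: overlap of the two
    histograms, normalized by the model histogram's total count."""
    return np.minimum(image_hist, model_hist).sum() / model_hist.sum()

def index_database(image_hist, model_hists):
    """Rank stored model histograms by intersection with the image."""
    scores = [histogram_intersection(image_hist, m) for m in model_hists]
    return np.argsort(scores)[::-1]   # best match first
```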

5,672 citations

Journal ArticleDOI
TL;DR: It is shown how the boundaries of an arbitrary non-analytic shape can be used to construct a mapping between image space and Hough transform space, which makes the generalized Hough transform a kind of universal transform which can be used to find arbitrarily complex shapes.
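
The image-space-to-Hough-space mapping works through an R-table: edge gradient orientation indexes displacement vectors to a reference point, learned from the model boundary and replayed as votes in a new image. A minimal sketch for translation only; the scale and rotation dimensions of the full transform are omitted, and the bin count is illustrative.

```python
import numpy as np
from collections import defaultdict

def build_r_table(boundary_xy, boundary_angles, ref, n_bins=36):
    """R-table: gradient-orientation bin -> displacements to ref point."""
    table = defaultdict(list)
    for (x, y), a in zip(boundary_xy, boundary_angles):
        b = int((a % (2 * np.pi)) / (2 * np.pi) * n_bins) % n_bins
        table[b].append((ref[0] - x, ref[1] - y))
    return table

def vote(edge_xy, edge_angles, table, shape, n_bins=36):
    """Accumulate votes for the reference point location; peaks in the
    returned accumulator (indexed by x, y) are candidate detections."""
    acc = np.zeros(shape, int)
    for (x, y), a in zip(edge_xy, edge_angles):
        b = int((a % (2 * np.pi)) / (2 * np.pi) * n_bins) % n_bins
        for dx, dy in table.get(b, ()):
            u, v = int(x + dx), int(y + dy)
            if 0 <= u < shape[0] and 0 <= v < shape[1]:
                acc[u, v] += 1
    return acc
```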

4,310 citations

Journal ArticleDOI
TL;DR: A near real-time recognition system with 20 complex objects in the database has been developed and a compact representation of object appearance is proposed that is parametrized by pose and illumination.
Abstract: The problem of automatically learning object models for recognition and pose estimation is addressed. In contrast to the traditional approach, the recognition problem is formulated as one of matching appearance rather than shape. The appearance of an object in a two-dimensional image depends on its shape, reflectance properties, pose in the scene, and the illumination conditions. While shape and reflectance are intrinsic properties and constant for a rigid object, pose and illumination vary from scene to scene. A compact representation of object appearance is proposed that is parametrized by pose and illumination. For each object of interest, a large set of images is obtained by automatically varying pose and illumination. This image set is compressed to obtain a low-dimensional subspace, called the eigenspace, in which the object is represented as a manifold. Given an unknown input image, the recognition system projects the image to eigenspace. The object is recognized based on the manifold it lies on. The exact position of the projection on the manifold determines the object's pose in the image. A variety of experiments are conducted using objects with complex appearance characteristics. The performance of the recognition and pose estimation algorithms is studied using over a thousand input images of sample objects. Sensitivity of recognition to the number of eigenspace dimensions and the number of learning samples is analyzed. For the objects used, appearance representation in eigenspaces with less than 20 dimensions produces accurate recognition results with an average pose estimation error of about 1.0 degree. A near real-time recognition system with 20 complex objects in the database has been developed. The paper is concluded with a discussion on various issues related to the proposed learning and recognition methodology.
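
The recognition procedure reduces to linear algebra: compress the training images to a low-dimensional eigenspace, then recognize a test image by where its projection falls relative to the stored manifolds. A minimal sketch, with a nearest-sample lookup standing in for the paper's continuous manifold interpolation.

```python
import numpy as np

def build_eigenspace(images, k=20):
    """images: n x d matrix, one vectorized, normalized image per row.
    Returns the mean, the top-k eigenimages, and the projected
    training samples (discrete samples of the appearance manifold)."""
    mean = images.mean(axis=0)
    u, s, vt = np.linalg.svd(images - mean, full_matrices=False)
    basis = vt[:k]
    coords = (images - mean) @ basis.T
    return mean, basis, coords

def recognize(test_image, mean, basis, coords, labels):
    """Project the test image into eigenspace and return the label of
    the nearest stored manifold sample."""
    p = (test_image - mean) @ basis.T
    return labels[np.argmin(np.linalg.norm(coords - p, axis=1))]
```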

2,037 citations

Journal ArticleDOI
TL;DR: This paper addresses the problem of retrieving images from large image databases with a method based on local grayvalue invariants computed at automatically detected interest points, which allows for efficient retrieval from a database of more than 1,000 images.
Abstract: This paper addresses the problem of retrieving images from large image databases. The method is based on local grayvalue invariants which are computed at automatically detected interest points. A voting algorithm and semilocal constraints make retrieval possible. Indexing allows for efficient retrieval from a database of more than 1,000 images. Experimental results show correct retrieval in the case of partial visibility, similarity transformations, extraneous features, and small perspective deformations.
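
The voting step can be sketched compactly: each query descriptor votes for the database images whose stored invariants lie within a distance threshold, and images are ranked by accumulated votes. The semilocal constraints and indexing structure of the paper are omitted here, and the threshold is illustrative.

```python
import numpy as np

def vote_retrieval(query_descs, db_descs, db_image_ids, thresh=0.4):
    """db_descs: stacked database descriptors; db_image_ids: array of
    the source image id for each row. Returns image ids ranked by votes."""
    votes = {}
    for q in query_descs:
        dists = np.linalg.norm(db_descs - q, axis=1)
        for img in set(db_image_ids[dists < thresh]):
            votes[img] = votes.get(img, 0) + 1
    return sorted(votes, key=votes.get, reverse=True)
```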

1,756 citations


"Object recognition from local scale..." refers background or methods in this paper

  • ...This allows for the use of more distinctive image descriptors than the rotation-invariant ones used by Schmid and Mohr, and the descriptor is further modified to improve its stability to changes in affine projection and illumination....


  • ...For the object recognition problem, Schmid & Mohr [19] also used the Harris corner detector to identify interest points, and then created a local image descriptor at each interest point from an orientation-invariant vector of derivative-of-Gaussian image measurements....


  • ...However, recent research on the use of dense local features (e.g., Schmid & Mohr [19]) has shown that efficient recognition can often be achieved by using local image descriptors sampled at a large number of repeatable locations....


Journal ArticleDOI
TL;DR: A robust approach to image matching by exploiting the only available geometric constraint, namely, the epipolar constraint, is proposed and a new strategy for updating matches is developed, which only selects those matches having both high matching support and low matching ambiguity.
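
The epipolar constraint the paper exploits says that corresponding points satisfy x'^T F x = 0 for the fundamental matrix F. Sketched below are a basic (unnormalized) eight-point estimate of F and a residual function for scoring candidate matches; the paper's robust estimation and match-updating strategy are not reproduced.

```python
import numpy as np

def eight_point(x1, x2):
    """x1, x2: n x 2 matched points (n >= 8), one row per match."""
    def hom(p):   # homogeneous coordinates
        return np.hstack([p, np.ones((len(p), 1))])
    a, b = hom(x1), hom(x2)
    # each match contributes one row of the linear system A f = 0
    A = np.einsum('ni,nj->nij', b, a).reshape(len(a), 9)
    _, _, vt = np.linalg.svd(A)
    F = vt[-1].reshape(3, 3)          # least-squares null vector
    u, s, v = np.linalg.svd(F)
    return u @ np.diag([s[0], s[1], 0]) @ v   # enforce rank 2

def epipolar_residuals(F, x1, x2):
    """Point-to-epipolar-line distances, usable to prune bad matches."""
    l = np.hstack([x1, np.ones((len(x1), 1))]) @ F.T   # lines F @ x1
    x2h = np.hstack([x2, np.ones((len(x2), 1))])
    return np.abs((x2h * l).sum(1)) / np.hypot(l[:, 0], l[:, 1])
```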

1,574 citations


"Object recognition from local scale..." refers methods in this paper

  • ...[23] used the Harris corner detector to identify feature locations for epipolar alignment of images taken from differing viewpoints....
