Proceedings Article · DOI

Object recognition from local scale-invariant features

20 Sep 1999-Vol. 2, pp 1150-1157
TL;DR: Experimental results show that robust object recognition can be achieved in cluttered partially occluded images with a computation time of under 2 seconds.
Abstract: An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales. The keys are used as input to a nearest neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low residual least squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered partially occluded images with a computation time of under 2 seconds.
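The staged filtering step — finding stable points as extrema of difference-of-Gaussian responses in scale space — can be sketched in miniature. The following is an illustrative 1-D toy, not the paper's actual 2-D pyramid implementation; the sigma values and test signal are arbitrary assumptions:

```python
import math

def gaussian_kernel(sigma, radius):
    """Sampled 1-D Gaussian, normalized to sum to 1."""
    vals = [math.exp(-0.5 * (x / sigma) ** 2) for x in range(-radius, radius + 1)]
    s = sum(vals)
    return [v / s for v in vals]

def blur(signal, sigma):
    """Convolve with a Gaussian, clamping indices at the borders."""
    radius = max(1, int(3 * sigma))
    k = gaussian_kernel(sigma, radius)
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(k):
            idx = min(max(i + j - radius, 0), n - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

def dog_keypoints(signal, sigmas=(1.0, 1.6, 2.56)):
    """Stable points = local extrema of difference-of-Gaussian responses."""
    blurred = [blur(signal, s) for s in sigmas]
    dogs = [[b2 - b1 for b1, b2 in zip(blurred[i], blurred[i + 1])]
            for i in range(len(blurred) - 1)]
    keys = []
    for d in dogs:
        for i in range(1, len(d) - 1):
            if (d[i] > d[i - 1] and d[i] > d[i + 1]) or \
               (d[i] < d[i - 1] and d[i] < d[i + 1]):
                keys.append(i)
    return sorted(set(keys))

# A single sharp bump: detected keypoints cluster around the bump at index 10.
signal = [0.0] * 20
signal[10] = 1.0
print(dog_keypoints(signal))
```

In the real method this search runs over a 2-D image pyramid and each extremum is then described by the multi-orientation, multi-scale keys the abstract mentions.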


Citations
Journal Article · DOI
TL;DR: Experimental results show that by introducing TLS-derived point clouds as GCPs, the accuracy of geo-positioning based on UAV imagery can be improved; the results also show that TLS-derived point clouds can be used as GCPs in areas, such as mountainous or high-risk environments, where it is difficult to conduct a GPS survey.
Abstract: This paper presents a practical framework for the integration of unmanned aerial vehicle (UAV) based photogrammetry and terrestrial laser scanning (TLS) with application to open-pit mine areas, which includes UAV image and TLS point cloud acquisition, image and point cloud processing and integration, object-oriented classification, and three-dimensional (3D) mapping and monitoring of open-pit mine areas. The proposed framework was tested in three open-pit mine areas in southwestern China. (1) To extract the conjugate points of the stereo pair of UAV images and the points between TLS point clouds and UAV images, feature points were first extracted by the scale-invariant feature transform (SIFT) operator, and the outliers were identified and eliminated by the RANdom SAmple Consensus (RANSAC) approach. (2) To improve the accuracy of geo-positioning based on UAV imagery, the ground control points (GCPs) surveyed from global positioning systems (GPS) and the feature points extracted from TLS were integrated in the bundle adjustment, and three scenarios were designed and compared. (3) To monitor and map the mine areas for land reclamation, an object-based image analysis approach was used for the classification of the accuracy-improved UAV ortho-image. The experimental results show that by introducing TLS-derived point clouds as GCPs, the accuracy of geo-positioning based on UAV imagery can be improved. At the same time, the accuracy of geo-positioning based on GCPs from the TLS-derived point clouds is close to that based on GCPs from the GPS survey. The results also show that TLS-derived point clouds can be used as GCPs in areas, such as mountainous or high-risk environments, where it is difficult to conduct a GPS survey.
The proposed framework achieved decimeter-level accuracy for the generated digital surface model (DSM) and digital orthophoto map (DOM), and an overall accuracy of 90.67% for classification of the land covers in the open-pit mine.

120 citations


Cites methods from "Object recognition from local scale..."

  • ...The conjugate points of a stereo pair of images are extracted by the scale-invariant feature transform (SIFT) operator [38] and the outliers are identified and eliminated by the RANdom SAmple Consensus (RANSAC) approach [39]....
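The SIFT-then-RANSAC outlier rejection described in this quote can be sketched with a simplified geometric model: a pure 2-D translation stands in for the real relative-orientation model, and the point pairs, threshold, and iteration count below are made-up toy values, not anything from the cited paper:

```python
import random

def ransac_translation(matches, n_iters=200, threshold=1.0, seed=0):
    """
    Estimate a 2-D translation from putative point matches and flag inliers.
    matches: list of ((x1, y1), (x2, y2)) correspondences.
    Returns (best_translation, inlier_indices).
    """
    rng = random.Random(seed)
    best_t, best_inliers = None, []
    for _ in range(n_iters):
        # Minimal sample: one correspondence fixes a translation hypothesis.
        (x1, y1), (x2, y2) = rng.choice(matches)
        tx, ty = x2 - x1, y2 - y1
        inliers = [i for i, ((a, b), (c, d)) in enumerate(matches)
                   if abs((a + tx) - c) <= threshold and abs((b + ty) - d) <= threshold]
        if len(inliers) > len(best_inliers):
            best_t, best_inliers = (tx, ty), inliers
    return best_t, best_inliers

# Four correct matches under translation (5, -2), plus one gross outlier.
good = [((x, y), (x + 5, y - 2)) for x, y in [(0, 0), (1, 3), (4, 1), (2, 2)]]
matches = good + [((0, 0), (40, 40))]
t, inliers = ransac_translation(matches)
print(t, inliers)  # → (5, -2) [0, 1, 2, 3]
```

Hypothesize from a minimal sample, score by consensus, keep the hypothesis with the most inliers: the same scheme applies unchanged when the model is a homography or epipolar geometry, only the minimal sample size grows.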


Journal Article · DOI
TL;DR: The proposed Q-MI has been validated and applied to the rigid registrations of clinical brain images, such as MR, CT and PET images, and can provide a smoother registration function with a relatively larger capture range.

120 citations

Journal Article · DOI
TL;DR: This paper investigates the applicability of a deep-learning-based matching concept for the generation of precise and accurate GCPs from SAR satellite images by matching optical and SAR images, and validates that NCC-, SIFT-, and BRISK-based matching benefits greatly in terms of matching accuracy and precision.
Abstract: Tasks such as the monitoring of natural disasters or the detection of change benefit highly from complementary information about an area or a specific object of interest. The required information is provided by fusing highly accurate coregistered and georeferenced datasets. Aligned high-resolution optical and synthetic aperture radar (SAR) data additionally enable an absolute geolocation accuracy improvement of the optical images by extracting accurate and reliable ground control points (GCPs) from the SAR images. In this paper, we investigate the applicability of a deep learning based matching concept for the generation of precise and accurate GCPs from SAR satellite images by matching optical and SAR images. To this end, conditional generative adversarial networks (cGANs) are trained to generate SAR-like image patches from optical images. For training and testing, optical and SAR image patches are extracted from TerraSAR-X and PRISM image pairs covering greater urban areas spread over Europe. The artificially generated patches are then used to improve the conditions for three known matching approaches based on normalized cross-correlation (NCC), scale-invariant feature transform (SIFT), and binary robust invariant scalable keypoints (BRISK), which are normally not usable for the matching of optical and SAR images. The results validate that NCC-, SIFT-, and BRISK-based matching benefits greatly, in terms of matching accuracy and precision, from the use of the artificial templates. The comparison with two state-of-the-art optical and SAR matching approaches shows the potential of the proposed method but also reveals some challenges and the necessity for further developments.
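The NCC score used as one of the three matching baselines can be sketched generically; this is a textbook formulation over flat patch lists, not the paper's implementation:

```python
import math

def ncc(patch_a, patch_b):
    """Normalized cross-correlation of two equally sized patches (flat lists)."""
    n = len(patch_a)
    mean_a = sum(patch_a) / n
    mean_b = sum(patch_b) / n
    da = [a - mean_a for a in patch_a]
    db = [b - mean_b for b in patch_b]
    denom = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    if denom == 0.0:
        return 0.0  # a flat patch carries no correlation signal
    return sum(x * y for x, y in zip(da, db)) / denom

# NCC is invariant to an affine brightness/contrast change of one patch:
p = [1.0, 2.0, 3.0, 4.0]
print(round(ncc(p, [2 * x + 10 for x in p]), 3))  # → 1.0
```

That invariance to linear intensity changes is why NCC works for template matching within one modality, yet still fails across optical and SAR statistics — hence the SAR-like artificial templates the abstract proposes.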

120 citations


Cites background or methods from "Object recognition from local scale..."

  • ...Therefore, it is more difficult to preserve image features, which... (Table II: influence of the artificially generated templates on the matching accuracy and precision of NCC [16], SIFT [17], and BRISK [18], and a comparison with two baseline methods)...


  • ...The evaluation focuses on one intensity-based, NCC [16], and on two feature-based matching approaches, SIFT [17] and binary robust invariant scalable key (BRISK) [18]....


  • ...(Table III: influence of the loss function on the matching accuracy and precision of NCC [16], SIFT [17], and BRISK [18])...


  • ...The two feature detectors utilized in this paper are the SIFT [17] and the BRISK [18]....


Journal Article · DOI
TL;DR: An algorithm for the on-board vision vehicle detection problem using a cascade of boosted classifiers is presented; the fusion combines the advantages of the other two detectors: generative classifiers easily eliminate negative examples in the early layers of the cascade, while in the later layers the discriminative classifiers generate a fine decision boundary removing the negative examples near the vehicle model.
Abstract: We present an algorithm for the on-board vision vehicle detection problem using a cascade of boosted classifiers. Three families of features are compared: the rectangular filters (Haar-like features), the histograms of oriented gradients (HoG), and their combination (a concatenation of the two preceding features). A comparative study of the results of the generative (HoG features) and discriminative (Haar-like features) detectors, and of their fusion, is presented. These results show that the fusion combines the advantages of the other two detectors: generative classifiers easily eliminate negative examples in the early layers of the cascade, while in the later layers, the discriminative classifiers generate a fine decision boundary removing the negative examples near the vehicle model. The best algorithm achieves good performance on a test set containing some 500 vehicle images: the detection rate is about 94% and the false-alarm rate per image is 0.0003.
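The early-rejection behavior the abstract attributes to the cascade can be sketched generically. The stage functions and thresholds below are invented toy stand-ins, not the Haar-like or HoG features of the paper:

```python
def cascade_detect(window, stages):
    """
    Evaluate a cascade: each stage is (score_fn, threshold).
    A window is rejected as soon as any stage's score falls below its
    threshold; only windows passing every stage are reported as detections.
    """
    for score_fn, threshold in stages:
        if score_fn(window) < threshold:
            return False  # early rejection: cheap stages discard easy negatives
    return True

# Toy stages on 1-D "windows": a cheap mean test, then a costlier spread test.
stages = [
    (lambda w: sum(w) / len(w), 0.5),   # coarse filter, removes easy negatives
    (lambda w: max(w) - min(w), 1.0),   # finer boundary near the model
]
print(cascade_detect([0.0, 0.1], stages))  # → False (rejected at stage 1)
print(cascade_detect([2.0, 2.0], stages))  # → False (fails stage 2)
print(cascade_detect([0.0, 2.0], stages))  # → True
```

The efficiency comes from the fact that most windows in a street scene are rejected by the first, cheapest stages, so the expensive fine-boundary stages run on only a few candidates.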

119 citations

Dissertation
01 Jan 2012

119 citations

References
Journal Article · DOI
TL;DR: In this paper, color histograms of multicolored objects provide a robust, efficient cue for indexing into a large database of models, and they can differentiate among a large number of objects.
Abstract: Computer vision is moving into a new era in which the aim is to develop visual skills for robots that allow them to interact with a dynamic, unconstrained environment. To achieve this aim, new kinds of vision algorithms need to be developed which run in real time and subserve the robot's goals. Two fundamental goals are determining the identity of an object with a known location, and determining the location of a known object. Color can be successfully used for both tasks. This dissertation demonstrates that color histograms of multicolored objects provide a robust, efficient cue for indexing into a large database of models. It shows that color histograms are stable object representations in the presence of occlusion and over change in view, and that they can differentiate among a large number of objects. For solving the identification problem, it introduces a technique called Histogram Intersection, which matches model and image histograms and a fast incremental version of Histogram Intersection which allows real-time indexing into a large database of stored models. It demonstrates techniques for dealing with crowded scenes and with models with similar color signatures. For solving the location problem it introduces an algorithm called Histogram Backprojection which performs this task efficiently in crowded scenes.
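Histogram Intersection itself is simple enough to sketch; the bin counts and object names below are made up for illustration:

```python
def histogram_intersection(image_hist, model_hist):
    """
    Histogram intersection: the fraction of the model's color mass
    also present in the image; 1.0 means a perfect histogram match.
    """
    matched = sum(min(i, m) for i, m in zip(image_hist, model_hist))
    total = sum(model_hist)
    return matched / total if total else 0.0

# Indexing: score the image against each model and take the best match.
image = [4, 0, 2, 2]
models = {"red_cup": [4, 0, 2, 2], "blue_book": [0, 6, 1, 1]}
scores = {name: histogram_intersection(image, h) for name, h in models.items()}
print(max(scores, key=scores.get))  # → red_cup
```

Because the score only counts overlapping mass per bin, occluding part of the object removes some mass but rarely adds competing mass, which is why the measure degrades gracefully under occlusion as the abstract claims.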

5,672 citations

Journal Article · DOI
TL;DR: It is shown how the boundaries of an arbitrary non-analytic shape can be used to construct a mapping between image space and Hough transform space, which makes the generalized Hough transform a kind of universal transform that can be used to find arbitrarily complex shapes.
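The R-table idea behind the generalized Hough transform can be sketched minimally. This toy ignores gradient orientation (which the full method indexes the R-table on) and uses invented integer coordinates:

```python
def build_r_table(boundary_points, reference_point):
    """R-table (orientation ignored): offset from each boundary point to the reference."""
    rx, ry = reference_point
    return [(rx - x, ry - y) for x, y in boundary_points]

def ght_vote(image_points, r_table):
    """Each image point votes, via every R-table offset, for a reference location."""
    votes = {}
    for x, y in image_points:
        for dx, dy in r_table:
            cell = (x + dx, y + dy)
            votes[cell] = votes.get(cell, 0) + 1
    return max(votes, key=votes.get)

# A toy "shape" and the same shape translated by (5, 5): the vote peak
# recovers the translated reference point.
shape = [(0, 0), (1, 0), (0, 1)]
r_table = build_r_table(shape, (1, 1))
shifted = [(x + 5, y + 5) for x, y in shape]
print(ght_vote(shifted, r_table))  # → (6, 6)
```

Every true boundary point votes for the same reference cell, so the correct location accumulates as many votes as there are boundary points, while spurious points scatter single votes.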

4,310 citations

Journal Article · DOI
TL;DR: A near real-time recognition system with 20 complex objects in the database has been developed and a compact representation of object appearance is proposed that is parametrized by pose and illumination.
Abstract: The problem of automatically learning object models for recognition and pose estimation is addressed. In contrast to the traditional approach, the recognition problem is formulated as one of matching appearance rather than shape. The appearance of an object in a two-dimensional image depends on its shape, reflectance properties, pose in the scene, and the illumination conditions. While shape and reflectance are intrinsic properties and constant for a rigid object, pose and illumination vary from scene to scene. A compact representation of object appearance is proposed that is parametrized by pose and illumination. For each object of interest, a large set of images is obtained by automatically varying pose and illumination. This image set is compressed to obtain a low-dimensional subspace, called the eigenspace, in which the object is represented as a manifold. Given an unknown input image, the recognition system projects the image to eigenspace. The object is recognized based on the manifold it lies on. The exact position of the projection on the manifold determines the object's pose in the image. A variety of experiments are conducted using objects with complex appearance characteristics. The performance of the recognition and pose estimation algorithms is studied using over a thousand input images of sample objects. Sensitivity of recognition to the number of eigenspace dimensions and the number of learning samples is analyzed. For the objects used, appearance representation in eigenspaces with less than 20 dimensions produces accurate recognition results with an average pose estimation error of about 1.0 degree. A near real-time recognition system with 20 complex objects in the database has been developed. The paper is concluded with a discussion on various issues related to the proposed learning and recognition methodology.
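The core step — projecting observations into a low-dimensional eigenspace — can be illustrated in 2-D with power iteration. This toy treats each "image" as a 2-D point and is only a stand-in for the paper's high-dimensional image eigenspaces; all numbers are assumptions:

```python
def principal_axis(points, n_iters=50):
    """Leading eigenvector of the 2x2 covariance of 2-D points, via power iteration."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    centered = [(x - mx, y - my) for x, y in points]
    cxx = sum(x * x for x, _ in centered) / n
    cxy = sum(x * y for x, y in centered) / n
    cyy = sum(y * y for _, y in centered) / n
    v = (1.0, 0.0)
    for _ in range(n_iters):
        # Repeated multiplication by the covariance converges to its top eigenvector.
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = (w[0] / norm, w[1] / norm)
    return v, (mx, my)

def project(point, axis, mean):
    """1-D eigenspace coordinate of a point (the 'manifold parameter')."""
    return (point[0] - mean[0]) * axis[0] + (point[1] - mean[1]) * axis[1]

# Points spread along y = x: the recovered axis is the diagonal.
pts = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (3.0, 3.0)]
axis, mean = principal_axis(pts)
print(round(axis[0], 4), round(axis[1], 4))  # → 0.7071 0.7071
```

In the paper's setting the "points" are whole images, the subspace has up to ~20 dimensions, and the projected coordinates trace a manifold parametrized by pose and illumination; recognition then reduces to finding the nearest manifold.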

2,037 citations

Journal Article · DOI
TL;DR: This paper addresses the problem of retrieving images from large image databases with a method based on local grayvalue invariants which are computed at automatically detected interest points and allows for efficient retrieval from a database of more than 1,000 images.
Abstract: This paper addresses the problem of retrieving images from large image databases. The method is based on local grayvalue invariants which are computed at automatically detected interest points. A voting algorithm and semilocal constraints make retrieval possible. Indexing allows for efficient retrieval from a database of more than 1,000 images. Experimental results show correct retrieval in the case of partial visibility, similarity transformations, extraneous features, and small perspective deformations.
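The voting scheme can be sketched generically: each query descriptor votes for every database image holding a sufficiently close descriptor. The low-dimensional tuples, Euclidean distance, and threshold below are simplifying assumptions, not the paper's grayvalue-invariant descriptors:

```python
def retrieve_by_voting(query_descriptors, database, max_dist=1.0):
    """
    Voting-based indexing: each query descriptor votes for every database
    image containing a descriptor within max_dist (Euclidean); the image
    with the most votes is the retrieval candidate.
    """
    votes = {image_id: 0 for image_id in database}
    for q in query_descriptors:
        for image_id, descriptors in database.items():
            if any(sum((a - b) ** 2 for a, b in zip(q, d)) ** 0.5 <= max_dist
                   for d in descriptors):
                votes[image_id] += 1
    best = max(votes, key=votes.get)
    return best, votes

# Two database "images"; the query shares (noisy) descriptors with image A.
db = {"A": [(0.0, 0.0), (5.0, 5.0)], "B": [(10.0, 10.0)]}
query = [(0.1, 0.1), (5.2, 4.9)]
print(retrieve_by_voting(query, db))  # → ('A', {'A': 2, 'B': 0})
```

Because votes accumulate over many local descriptors, partial visibility or extraneous features only reduce the winning margin rather than breaking retrieval outright, which matches the robustness the abstract reports.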

1,756 citations


"Object recognition from local scale..." refers background or methods in this paper

  • ...This allows for the use of more distinctive image descriptors than the rotation-invariant ones used by Schmid and Mohr, and the descriptor is further modified to improve its stability to changes in affine projection and illumination....


  • ...For the object recognition problem, Schmid & Mohr [19] also used the Harris corner detector to identify interest points, and then created a local image descriptor at each interest point from an orientation-invariant vector of derivative-of-Gaussian image measurements....



  • ...However, recent research on the use of dense local features (e.g., Schmid & Mohr [19]) has shown that efficient recognition can often be achieved by using local image descriptors sampled at a large number of repeatable locations....


Journal Article · DOI
TL;DR: A robust approach to image matching by exploiting the only available geometric constraint, namely, the epipolar constraint, is proposed and a new strategy for updating matches is developed, which only selects those matches having both high matching support and low matching ambiguity.

1,574 citations


"Object recognition from local scale..." refers methods in this paper

  • ...[23] used the Harris corner detector to identify feature locations for epipolar alignment of images taken from differing viewpoints....
