Proceedings ArticleDOI

Object recognition from local scale-invariant features

20 Sep 1999-Vol. 2, pp 1150-1157
TL;DR: Experimental results show that robust object recognition can be achieved in cluttered partially occluded images with a computation time of under 2 seconds.
Abstract: An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales. The keys are used as input to a nearest neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low residual least squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered partially occluded images with a computation time of under 2 seconds.
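The staged filtering described above — keeping only points that are extrema of a difference-of-Gaussian (DoG) pyramid in both space and scale — can be sketched in a few lines. This is an illustrative pure-NumPy sketch, not Lowe's implementation; the function names, the `sigmas` ladder, and the contrast threshold are assumptions chosen for the example.

```python
import numpy as np

def gaussian_blur(img, sigma):
    # Separable 1-D Gaussian convolution (pure NumPy).
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    rows = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, rows)

def dog_keypoints(img, sigmas=(1.0, 1.6, 2.56, 4.1), thresh=0.02):
    # Difference-of-Gaussian pyramid; keep points that are extrema over
    # a 3x3x3 neighbourhood in space AND scale ("stable points in scale space").
    blurred = [gaussian_blur(img, s) for s in sigmas]
    dogs = [b2 - b1 for b1, b2 in zip(blurred, blurred[1:])]
    keypoints = []
    for i in range(1, len(dogs) - 1):
        d = dogs[i]
        for y in range(1, d.shape[0] - 1):
            for x in range(1, d.shape[1] - 1):
                v = d[y, x]
                if abs(v) < thresh:          # reject low-contrast points
                    continue
                patch = np.stack([dogs[i - 1][y-1:y+2, x-1:x+2],
                                  d[y-1:y+2, x-1:x+2],
                                  dogs[i + 1][y-1:y+2, x-1:x+2]])
                if v == patch.max() or v == patch.min():
                    keypoints.append((x, y, i))  # position + scale index
    return keypoints
```

On a synthetic Gaussian blob, the surviving extremum sits at the blob centre, at the scale level whose DoG best matches the blob size.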


Citations
Journal ArticleDOI
TL;DR: Evaluated UAS/SfM for multitemporal 3D crop modelling and developed and assessed a methodology for estimating plant height data from point clouds generated using SfM, showing a potential path to reducing laborious manual height measurement and enhancing plant research programs through UAS and SfM.

136 citations

01 Jan 2008
TL;DR: This report contains a review of visual tracking in monocular video sequences and provides a commentary on the current state of the field as well as a comparative analysis of the various approaches.
Abstract: This report contains a review of visual tracking in monocular video sequences. For the purpose of this review, the majority of the visual trackers in the literature are divided into three tracking categories: discrete feature trackers, contour trackers, and region-based trackers. This categorization was performed based on the features used and the algorithms employed by the various visual trackers. The first class of trackers represents targets as discrete features (e.g. points, sets of points, lines) and performs data association using a distance metric that accommodates the particular feature. Contour trackers provide precise outlines of the target boundaries, meaning that they must not only uncover the position of the target, but its shape as well. Contour trackers often make use of gradient edge information during the tracking process. Region trackers represent the target with area-based descriptors that define its support and attempt to locate the image region in the current frame that best matches an object template. Trackers that are not in agreement with the abovementioned categorization, including those that combine methods from the three defined classes, are also considered in this review. In addition to categorizing and describing the various visual trackers in the literature, this review also provides a commentary on the current state of the field as well as a comparative analysis of the various approaches. The paper concludes with an outline of open problems in visual tracking.
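The first category above — discrete-feature trackers performing data association under a distance metric — reduces, in its simplest form, to gated nearest-neighbour matching. A minimal sketch (the `associate` name and the gating threshold are assumptions, not taken from any cited tracker):

```python
import numpy as np

def associate(prev_pts, curr_pts, gate=5.0):
    # Greedy nearest-neighbour data association with a gating distance.
    # Tracks whose nearest detection is already claimed, or farther than
    # the gate, simply go unmatched in this simple sketch.
    prev = np.asarray(prev_pts, float)
    curr = np.asarray(curr_pts, float)
    d = np.linalg.norm(prev[:, None, :] - curr[None, :, :], axis=2)
    matches, used = [], set()
    for i in np.argsort(d.min(axis=1)):      # most confident tracks first
        j = int(np.argmin(d[i]))
        if d[i, j] <= gate and j not in used:
            matches.append((int(i), j))
            used.add(j)
    return matches
```

A real tracker would replace the Euclidean metric with one suited to the feature (e.g. descriptor distance for points, Hausdorff distance for point sets).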

136 citations


Cites background from "Object recognition from local scale..."

  • ...Many trackers that rely on these discrete features leverage additional information from other sources, such as 3D object models (Verghese, Gale & Dyer, 1990; Koller, Daniilidis & Nagel, 1993; Lowe, 1991, 1992; Gennery, 1992; Vacchetti, Lepetit & Fua, 2004)....


  • ...Furthermore, as demonstrated in (Lowe, 1999), SIFT features...


  • ...Another popular type of interest point used throughout the field of computer vision are scale invariant feature transform (SIFT) points presented by Lowe (Lowe, 1999)....


  • ...This need for efficiency has contributed to the popularity of Adaboost-based detectors (Okuma, Taleghani, Freitas, Little & Lowe, 2004; Viola & Jones, 2001)....


  • ...In this case, a module is typically trained to detect a certain class of object, such as a face (Viola & Jones, 2001), a vehicle (Zhu, Zhao & Lu, 2007), a pedestrian (Wu, Yu & Hua, 2005), a hockey player (Okuma, Taleghani, Freitas, Little & Lowe, 2004), etc....


Proceedings ArticleDOI
10 Oct 2009
TL;DR: A combination of the Harris corner detector and the SIFT descriptor, which computes features with high repeatability and very good matching properties within approx. 20 ms.
Abstract: In the recent past, the recognition and localization of objects based on local point features has become a widely accepted and utilized method. Among the most popular features are currently the SIFT features, the more recent SURF features, and region-based features such as the MSER. For time-critical application of object recognition and localization systems operating on such features, the SIFT features are too slow (500–600 ms for images of size 640×480 on a 3GHz CPU). The faster SURF achieve a computation time of 150–240 ms, which is still too slow for active tracking of objects or visual servoing applications. In this paper, we present a combination of the Harris corner detector and the SIFT descriptor, which computes features with a high repeatability and very good matching properties within approx. 20 ms. While just computing the SIFT descriptors for computed Harris interest points would lead to an approach that is not scale-invariant, we will show how scale-invariance can be achieved without a time-consuming scale space analysis. Furthermore, we will present results of successful application of the proposed features within our system for recognition and localization of textured objects. An extensive experimental evaluation proves the practical applicability of our approach.
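The Harris corner response that this paper pairs with the SIFT descriptor is computed from the local structure tensor of the image gradients. A minimal pure-NumPy sketch of the standard detector (not the authors' optimized implementation; the box window, `k`, and threshold values are assumptions):

```python
import numpy as np

def harris_corners(img, k=0.04, thresh=1e-4):
    # Harris corner response from image gradients (pure NumPy sketch).
    iy, ix = np.gradient(img.astype(float))
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy

    def box(a, r=2):
        # Box-filter the structure tensor over a (2r+1)^2 window.
        # np.roll wraps at borders, which is acceptable for a sketch.
        out = np.zeros_like(a)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += np.roll(np.roll(a, dy, 0), dx, 1)
        return out

    sxx, syy, sxy = box(ixx), box(iyy), box(ixy)
    # det(M) - k * trace(M)^2: large and positive only where the gradient
    # distribution spreads in two directions (a corner, not an edge).
    resp = sxx * syy - sxy**2 - k * (sxx + syy) ** 2
    return np.argwhere(resp > thresh)  # (row, col) candidate corners
```

Computing SIFT descriptors only at such corners skips the DoG scale-space search, which is where the paper's speedup comes from; its separate contribution is recovering scale invariance without that search.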

136 citations


Cites background or methods from "Object recognition from local scale..."

  • ...Among the most popular features are currently the SIFT features (Scale Invariant Feature Transform) [1], [2], the more recent SURF features (Speeded Up Robust Features) [3], and region-based features such as the MSER (Maximally Stable Extremal Regions) [4]....


  • ...The approach is a variant of Lowe’s framework [1]; the main differences are the voting formula for the Hough transform and the final optimization step using a full homography....


  • ...Note that, in practice, often the limiting factor is the effective resolution of the object in the image, and not the theoretical scale invariance of the features....


Book ChapterDOI
Albert Haque, Boya Peng, Zelun Luo, Alexandre Alahi, Serena Yeung, Li Fei-Fei
08 Oct 2016
TL;DR: In this paper, a discriminative model embeds local regions into a learned viewpoint invariant feature space to selectively predict partial poses in the presence of noise and occlusion, which achieves competitive performance on frontal views while achieving state-of-the-art performance on alternate viewpoints.
Abstract: We propose a viewpoint invariant model for 3D human pose estimation from a single depth image. To achieve this, our discriminative model embeds local regions into a learned viewpoint invariant feature space. Formulated as a multi-task learning problem, our model is able to selectively predict partial poses in the presence of noise and occlusion. Our approach leverages a convolutional and recurrent network architecture with a top-down error feedback mechanism to self-correct previous pose estimates in an end-to-end manner. We evaluate our model on a previously published depth dataset and a newly collected human pose dataset containing 100 K annotated depth images from extreme viewpoints. Experiments show that our model achieves competitive performance on frontal views while achieving state-of-the-art performance on alternate viewpoints.

136 citations

Journal ArticleDOI
02 Apr 2020
TL;DR: Using systems of multiple UAVs is the next obvious step in the process of applying this technology for a variety of applications.
Abstract: Nowadays, Unmanned Aerial Vehicles (UAVs) are used in many different applications. Using systems of multiple UAVs is the next obvious step in the process of applying this technology for variety of ...

135 citations

References
Journal ArticleDOI
TL;DR: In this paper, color histograms of multicolored objects provide a robust, efficient cue for indexing into a large database of models, and they can differentiate among a large number of objects.
Abstract: Computer vision is moving into a new era in which the aim is to develop visual skills for robots that allow them to interact with a dynamic, unconstrained environment. To achieve this aim, new kinds of vision algorithms need to be developed which run in real time and subserve the robot's goals. Two fundamental goals are determining the identity of an object with a known location, and determining the location of a known object. Color can be successfully used for both tasks. This dissertation demonstrates that color histograms of multicolored objects provide a robust, efficient cue for indexing into a large database of models. It shows that color histograms are stable object representations in the presence of occlusion and over change in view, and that they can differentiate among a large number of objects. For solving the identification problem, it introduces a technique called Histogram Intersection, which matches model and image histograms and a fast incremental version of Histogram Intersection which allows real-time indexing into a large database of stored models. It demonstrates techniques for dealing with crowded scenes and with models with similar color signatures. For solving the location problem it introduces an algorithm called Histogram Backprojection which performs this task efficiently in crowded scenes.
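The Histogram Intersection score described above is the sum of element-wise minima of the two histograms, normalized by the model histogram's total count, giving a match value in [0, 1]. A minimal sketch following that definition:

```python
import numpy as np

def histogram_intersection(h_model, h_image):
    # Swain & Ballard match score: sum of element-wise minima,
    # normalized by the model histogram's total count.
    h_model = np.asarray(h_model, float)
    h_image = np.asarray(h_image, float)
    return np.minimum(h_model, h_image).sum() / h_model.sum()
```

Because only bins present in the model can contribute, extra background colors in the image do not inflate the score, which is what makes the cue robust in crowded scenes.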

5,672 citations

Journal ArticleDOI
TL;DR: It is shown how the boundaries of an arbitrary non-analytic shape can be used to construct a mapping between image space and Hough transform space, which makes the generalized Hough transform a kind of universal transform which can be used to find arbitrarily complex shapes.
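The R-table construction behind the generalized Hough transform can be sketched directly: each model boundary point stores, indexed by its gradient orientation, the displacement to a reference point; at detection time, matching image points vote for candidate reference locations. An illustrative sketch (function names, the orientation quantization, and the accumulator shape are assumptions):

```python
import numpy as np
from collections import defaultdict

def build_r_table(boundary_pts, ref_pt, gradients):
    # R-table: quantized gradient orientation -> displacements to reference.
    table = defaultdict(list)
    for (x, y), g in zip(boundary_pts, gradients):
        table[round(g, 1)].append((ref_pt[0] - x, ref_pt[1] - y))
    return table

def ght_vote(image_pts, gradients, table, shape):
    # Each image point votes, via its orientation's stored displacements,
    # for possible reference-point locations; peaks mark shape instances.
    acc = np.zeros(shape, int)
    for (x, y), g in zip(image_pts, gradients):
        for dx, dy in table.get(round(g, 1), []):
            cx, cy = x + dx, y + dy
            if 0 <= cx < shape[0] and 0 <= cy < shape[1]:
                acc[cx, cy] += 1
    return acc
```

Lowe's recognition paper uses a Hough transform in the same spirit, with keypoint matches voting over pose parameters rather than boundary points voting over position.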

4,310 citations

Journal ArticleDOI
TL;DR: A near real-time recognition system with 20 complex objects in the database has been developed and a compact representation of object appearance is proposed that is parametrized by pose and illumination.
Abstract: The problem of automatically learning object models for recognition and pose estimation is addressed. In contrast to the traditional approach, the recognition problem is formulated as one of matching appearance rather than shape. The appearance of an object in a two-dimensional image depends on its shape, reflectance properties, pose in the scene, and the illumination conditions. While shape and reflectance are intrinsic properties and constant for a rigid object, pose and illumination vary from scene to scene. A compact representation of object appearance is proposed that is parametrized by pose and illumination. For each object of interest, a large set of images is obtained by automatically varying pose and illumination. This image set is compressed to obtain a low-dimensional subspace, called the eigenspace, in which the object is represented as a manifold. Given an unknown input image, the recognition system projects the image to eigenspace. The object is recognized based on the manifold it lies on. The exact position of the projection on the manifold determines the object's pose in the image. A variety of experiments are conducted using objects with complex appearance characteristics. The performance of the recognition and pose estimation algorithms is studied using over a thousand input images of sample objects. Sensitivity of recognition to the number of eigenspace dimensions and the number of learning samples is analyzed. For the objects used, appearance representation in eigenspaces with less than 20 dimensions produces accurate recognition results with an average pose estimation error of about 1.0 degree. A near real-time recognition system with 20 complex objects in the database has been developed. The paper is concluded with a discussion on various issues related to the proposed learning and recognition methodology.
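The eigenspace construction described above amounts to PCA on the flattened training images, followed by nearest-neighbour lookup against the projected manifold points. A minimal NumPy sketch (function names and the subspace dimension are assumptions; the real system also interpolates along the manifold to recover a continuous pose estimate):

```python
import numpy as np

def build_eigenspace(images, dim=8):
    # images: (n, h*w) flattened training set; returns mean + top-dim basis.
    X = np.asarray(images, float)
    mean = X.mean(axis=0)
    # SVD of the centred data yields the principal axes (the eigenspace).
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:dim]

def project(img, mean, basis):
    # Project a flattened image into the low-dimensional eigenspace.
    return basis @ (np.ravel(img) - mean)

def recognize(img, mean, basis, manifold_pts, labels):
    # Nearest manifold point in eigenspace -> object identity (and pose).
    q = project(img, mean, basis)
    d = np.linalg.norm(np.asarray(manifold_pts) - q, axis=1)
    return labels[int(np.argmin(d))]
```

Compressing appearance rather than matching shape is what lets the system handle complex reflectance, at the cost of needing dense pose/illumination sampling at training time.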

2,037 citations

Journal ArticleDOI
TL;DR: This paper addresses the problem of retrieving images from large image databases with a method based on local grayvalue invariants which are computed at automatically detected interest points and allows for efficient retrieval from a database of more than 1,000 images.
Abstract: This paper addresses the problem of retrieving images from large image databases. The method is based on local grayvalue invariants which are computed at automatically detected interest points. A voting algorithm and semilocal constraints make retrieval possible. Indexing allows for efficient retrieval from a database of more than 1,000 images. Experimental results show correct retrieval in the case of partial visibility, similarity transformations, extraneous features, and small perspective deformations.

1,756 citations


"Object recognition from local scale..." refers background or methods in this paper

  • ...This allows for the use of more distinctive image descriptors than the rotation-invariant ones used by Schmid and Mohr, and the descriptor is further modified to improve its stability to changes in affine projection and illumination....


  • ...For the object recognition problem, Schmid & Mohr [19] also used the Harris corner detector to identify interest points, and then created a local image descriptor at each interest point from an orientation-invariant vector of derivative-of-Gaussian image measurements....



  • ...However, recent research on the use of dense local features (e.g., Schmid & Mohr [19]) has shown that efficient recognition can often be achieved by using local image descriptors sampled at a large number of repeatable locations....


Journal ArticleDOI
TL;DR: A robust approach to image matching by exploiting the only available geometric constraint, namely, the epipolar constraint, is proposed and a new strategy for updating matches is developed, which only selects those matches having both high matching support and low matching ambiguity.

1,574 citations


"Object recognition from local scale..." refers methods in this paper

  • ...[23] used the Harris corner detector to identify feature locations for epipolar alignment of images taken from differing viewpoints....
