Proceedings ArticleDOI

Object recognition from local scale-invariant features

20 Sep 1999-Vol. 2, pp 1150-1157
TL;DR: Experimental results show that robust object recognition can be achieved in cluttered, partially occluded images with a computation time of under 2 seconds.
Abstract: An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales. The keys are used as input to a nearest neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low residual least squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered, partially occluded images with a computation time of under 2 seconds.
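The staged filtering step described in the abstract, identifying stable points as extrema in scale space, can be sketched with a small difference-of-Gaussian detector. This is a minimal illustration only, not Lowe's actual implementation: the function name, sigma schedule, and threshold below are all assumptions of this sketch, and the paper's later stages (orientation assignment, multi-scale keys, indexing, verification) are omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter

def dog_keypoints(image, sigma=1.6, k=2 ** 0.5, n_scales=5, thresh=0.03):
    """Toy difference-of-Gaussian detector: returns (x, y, scale) triples
    where the DoG response is an extremum across space and scale."""
    blurred = [gaussian_filter(image.astype(float), sigma * k ** i)
               for i in range(n_scales)]
    # DoG stack: each slice is the difference of two adjacent blur levels.
    dog = np.stack([blurred[i + 1] - blurred[i] for i in range(n_scales - 1)])
    keypoints = []
    for s in range(1, dog.shape[0] - 1):
        cube = dog[s - 1:s + 2]
        # A point is a candidate if it beats all 26 neighbours in the
        # 3x3x3 cube around it (8 spatial + 9 in each adjacent scale).
        local_max = maximum_filter(cube, size=3)[1]
        local_min = minimum_filter(cube, size=3)[1]
        strong = np.abs(dog[s]) > thresh
        hits = ((dog[s] == local_max) | (dog[s] == local_min)) & strong
        ys, xs = np.nonzero(hits)
        keypoints.extend((int(x), int(y), sigma * k ** s) for x, y in zip(xs, ys))
    return keypoints
```

Run on an image containing a blob-like structure, this returns candidate points at the blob's position and characteristic scale; the real system additionally suppresses unstable candidates and builds orientation-plane descriptors at each surviving point.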


Citations
Journal ArticleDOI
TL;DR: A discriminative latent variable model for classification problems in structured domains where inputs can be represented by a graph of local observations and a hidden-state conditional random field framework learns a set of latent variables conditioned on local features.
Abstract: We present a discriminative latent variable model for classification problems in structured domains where inputs can be represented by a graph of local observations. A hidden-state conditional random field framework learns a set of latent variables conditioned on local features. Observations need not be independent and may overlap in space and time.

578 citations


Cites methods from "Object recognition from local scale..."

  • ...In the object recognition domain, patches x_{i,j} in each image are obtained using the SIFT detector [13]; each patch x_{i,j} is then represented by a feature vector φ(x_{i,j}) that incorporates a combination of SIFT descriptor and relative location and scale features....


Proceedings ArticleDOI
20 Jun 2011
TL;DR: This paper proposes a model-based approach that detects humans using a 2-D head contour model and a 3-D head surface model, together with a segmentation scheme that separates the human from the surroundings and extracts the figure's whole contour from the detection point.
Abstract: Conventional human detection is mostly done in images taken by visible-light cameras. These methods imitate the detection process that humans use. They use features based on gradients, such as histograms of oriented gradients (HOG), or extract interest points in the image, such as the scale-invariant feature transform (SIFT). In this paper, we present a novel human detection method using depth information taken by the Kinect for Xbox 360. We propose a model-based approach, which detects humans using a 2-D head contour model and a 3-D head surface model. We propose a segmentation scheme to segment the human from his/her surroundings and extract the whole contour of the figure based on our detection point. We also explore a tracking algorithm based on our detection result. The methods are tested on our database, taken by the Kinect in our lab, and show superior results.

574 citations


Cites methods from "Object recognition from local scale..."

  • ...Some methods involve statistical training based on local features, e.g. gradient-based features such as HOG [1], EOH [8], and some involve extracting interest points in the image, such as scale-invariant feature transform (SIFT) [9], etc....


Book
31 Mar 2015
TL;DR: This survey summarizes almost 50 years of research and development in the field of Augmented Reality (AR), provides an overview of the common definitions of AR, and shows how AR fits into taxonomies of other related technologies.
Abstract: This survey summarizes almost 50 years of research and development in the field of Augmented Reality (AR). From early research in the 1960s until widespread availability by the 2010s there has been steady progress towards the goal of being able to seamlessly combine real and virtual worlds. We provide an overview of the common definitions of AR, and show how AR fits into taxonomies of other related technologies. A history of important milestones in Augmented Reality is followed by sections on the key enabling technologies of tracking, display and input devices. We also review design guidelines and provide some examples of successful AR applications. Finally, we conclude with a summary of directions for future work and a review of some of the areas that are currently being researched.

573 citations

Journal ArticleDOI
TL;DR: To increase the robustness of the system, two semi-local constraints on combinations of region correspondences are derived (one geometric, the other photometric); they make it possible to test the consistency of correspondences and hence to reject falsely matched regions.
Abstract: ‘Invariant regions’ are self-adaptive image patches that automatically deform with changing viewpoint so as to keep covering identical physical parts of a scene. Such regions can be extracted directly from a single image. They are then described by a set of invariant features, which makes it relatively easy to match them between views, even under wide-baseline conditions. In this contribution, two methods to extract invariant regions are presented. The first one starts from corners and uses the nearby edges, while the second one is purely intensity-based. The goal is to build an opportunistic system that exploits several types of invariant regions as it sees fit. This yields more correspondences and a system that can deal with a wider range of images. To increase the robustness of the system, two semi-local constraints on combinations of region correspondences are derived (one geometric, the other photometric). They make it possible to test the consistency of correspondences and hence to reject falsely matched regions. Experiments on images of real-world scenes taken from substantially different viewpoints demonstrate the feasibility of the approach.

568 citations


Cites background or methods from "Object recognition from local scale..."

  • ...In summary, our system differs from other wide baseline stereo methods in that we do not apply a search between images but process each image and each local feature individually (Gruen, 1985; Super and Klarquist, 1997; Schaffalitzky and Zisserman, 2001), in that we fully take into account the affine deformations caused by the change in viewpoint (Lowe, 1999; Montesinos et al., 2000; Schmid and Mohr, 1997; Dufournaud et al., 2000) and in that we can deal with general 3D objects without assuming specific structures to be present in the image (Pritchett and Zisserman, 1998; Tell and Carlsson, 2000)....


  • ...For instance, Lowe (1999) uses extrema of a difference of Gaussians filter....


  • ...The consistency of the matches found is tested using semi-local constraints, followed by a test on the epipolar geometry using RANSAC. As shown in the experimental results, the feasibility of affine invariance even on a local scale has been demonstrated. Robust matching is quite a generic problem in vision and several other applications can be considered. Object recognition is one, where images of an object can be matched against a small set of reference images of the same object. The sample set can be kept small because of the invariance. Moreover, as the features are local, recognition against variable backgrounds and under occlusion is supported by this method. Another application is grouping, where symmetries can be found as repeated structures. Image database retrieval can also benefit from these regions, where other pictures of the same scene or object can be found. Here, the viewpoint and illumination invariance gives the system the capacity to generalize to a great extent from a single query image. Finally, being able to match a current view against learned views can allow robots to roam extended spaces, without the need for a 3D model. Initial results for such applications can be found in Tuytelaars and Van Gool (1999), Tuytelaars et al....



  • ...Lowe (1999) has extended these ideas to real scale-invariance, using circular regions that maximize the output of a difference of gaussian filters in scale space, while Hall et al. (1999) not only applied automatic scale selection (based on Lindeberg (1998)), but also retrieved the orientation of…...


Journal ArticleDOI
TL;DR: The typical workflow applied by SfM-MVS software packages is detailed, practical details of implementing SfM-MVS are reviewed, existing validation studies assessing practically achievable data quality are combined, and the range of applications in physical geography is reviewed.
Abstract: Accurate, precise and rapid acquisition of topographic data is fundamental to many sub-disciplines of physical geography. Technological developments over the past few decades have made fully distributed data sets of centimetric resolution and accuracy commonplace, yet the emergence of Structure from Motion (SfM) with Multi-View Stereo (MVS) in recent years has revolutionised three-dimensional topographic surveys in physical geography by democratising data collection and processing. SfM-MVS originates from the fields of computer vision and photogrammetry, requires minimal expensive equipment or specialist expertise and, under certain conditions, can produce point clouds of comparable quality to existing survey methods (e.g. Terrestrial Laser Scanning). Consequently, applications of SfM-MVS in physical geography have multiplied rapidly. There are many practical options available to physical geographers when planning a SfM-MVS survey (e.g. platforms, cameras, software), yet, many SfM-MVS end-users are uncert...

565 citations


Cites methods from "Object recognition from local scale..."

  • ...…(Morel and Yu, 2009), BRIEF (Calonder et al., 2010) and LDAHash (Strecha et al., 2012)), the Scale Invariant Feature Transform (SIFT) object recognition system is used most widely in SfM (Lowe, 1999, 2001, 2004) and has been shown by Lowe (2004) to perform well for changes in viewpoint of <40°....


References
Journal ArticleDOI
TL;DR: In this paper, color histograms of multicolored objects provide a robust, efficient cue for indexing into a large database of models, and they can differentiate among a large number of objects.
Abstract: Computer vision is moving into a new era in which the aim is to develop visual skills for robots that allow them to interact with a dynamic, unconstrained environment. To achieve this aim, new kinds of vision algorithms need to be developed which run in real time and subserve the robot's goals. Two fundamental goals are determining the identity of an object with a known location, and determining the location of a known object. Color can be successfully used for both tasks. This dissertation demonstrates that color histograms of multicolored objects provide a robust, efficient cue for indexing into a large database of models. It shows that color histograms are stable object representations in the presence of occlusion and over change in view, and that they can differentiate among a large number of objects. For solving the identification problem, it introduces a technique called Histogram Intersection, which matches model and image histograms and a fast incremental version of Histogram Intersection which allows real-time indexing into a large database of stored models. It demonstrates techniques for dealing with crowded scenes and with models with similar color signatures. For solving the location problem it introduces an algorithm called Histogram Backprojection which performs this task efficiently in crowded scenes.
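The Histogram Intersection measure at the heart of this approach is simple to state: the match score is the sum of bin-wise minima of the image and model histograms, normalized by the model histogram's total count. A minimal sketch (the function name is this sketch's own choice):

```python
import numpy as np

def histogram_intersection(image_hist, model_hist):
    """Swain & Ballard-style Histogram Intersection: the fraction of the
    model histogram 'covered' by the image histogram. Both inputs are
    arrays of bin counts with the same shape; returns a score in [0, 1]
    when the image contains at most the model's counts per bin."""
    return np.minimum(image_hist, model_hist).sum() / model_hist.sum()
```

Because each bin contributes only up to the model's count, background clutter adds little to the score, which is one reason the measure is stable under occlusion and crowded scenes.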

5,672 citations

Journal ArticleDOI
TL;DR: It is shown how the boundaries of an arbitrary non-analytic shape can be used to construct a mapping between image space and Hough transform space, which makes the generalized Hough transform a kind of universal transform which can be used to find arbitrarily complex shapes.

4,310 citations

Journal ArticleDOI
TL;DR: A near real-time recognition system with 20 complex objects in the database has been developed and a compact representation of object appearance is proposed that is parametrized by pose and illumination.
Abstract: The problem of automatically learning object models for recognition and pose estimation is addressed. In contrast to the traditional approach, the recognition problem is formulated as one of matching appearance rather than shape. The appearance of an object in a two-dimensional image depends on its shape, reflectance properties, pose in the scene, and the illumination conditions. While shape and reflectance are intrinsic properties and constant for a rigid object, pose and illumination vary from scene to scene. A compact representation of object appearance is proposed that is parametrized by pose and illumination. For each object of interest, a large set of images is obtained by automatically varying pose and illumination. This image set is compressed to obtain a low-dimensional subspace, called the eigenspace, in which the object is represented as a manifold. Given an unknown input image, the recognition system projects the image to eigenspace. The object is recognized based on the manifold it lies on. The exact position of the projection on the manifold determines the object's pose in the image. A variety of experiments are conducted using objects with complex appearance characteristics. The performance of the recognition and pose estimation algorithms is studied using over a thousand input images of sample objects. Sensitivity of recognition to the number of eigenspace dimensions and the number of learning samples is analyzed. For the objects used, appearance representation in eigenspaces with less than 20 dimensions produces accurate recognition results with an average pose estimation error of about 1.0 degree. A near real-time recognition system with 20 complex objects in the database has been developed. The paper is concluded with a discussion on various issues related to the proposed learning and recognition methodology.
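The eigenspace construction described above is essentially PCA over registered appearance images, followed by lookup in the low-dimensional subspace. A hedged sketch of that pipeline (the function names and the simple nearest-point rule are assumptions of this sketch; the paper interpolates a continuous pose/illumination manifold rather than matching discrete training samples):

```python
import numpy as np

def build_eigenspace(train_images, n_dims=20):
    """Compress a set of appearance images (same shape, registered)
    into a low-dimensional eigenspace via PCA."""
    X = np.stack([im.ravel() for im in train_images]).astype(float)
    mean = X.mean(axis=0)
    # Rows of Vt are the principal directions ('eigenimages').
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:n_dims]
    coords = (X - mean) @ basis.T  # training points in eigenspace
    return mean, basis, coords

def recognize(image, mean, basis, coords, labels):
    """Project an unknown image into eigenspace and return the label
    of the nearest training point (discrete stand-in for finding the
    closest point on the appearance manifold)."""
    q = (image.ravel().astype(float) - mean) @ basis.T
    return labels[np.argmin(np.linalg.norm(coords - q, axis=1))]
```

In the paper, the position of the projection along an object's manifold also yields a pose estimate; the sketch stops at identity.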

2,037 citations

Journal ArticleDOI
TL;DR: This paper addresses the problem of retrieving images from large image databases with a method based on local grayvalue invariants which are computed at automatically detected interest points and allows for efficient retrieval from a database of more than 1,000 images.
Abstract: This paper addresses the problem of retrieving images from large image databases. The method is based on local grayvalue invariants which are computed at automatically detected interest points. A voting algorithm and semilocal constraints make retrieval possible. Indexing allows for efficient retrieval from a database of more than 1,000 images. Experimental results show correct retrieval in the case of partial visibility, similarity transformations, extraneous features, and small perspective deformations.

1,756 citations


"Object recognition from local scale..." refers background or methods in this paper

  • ...This allows for the use of more distinctive image descriptors than the rotation-invariant ones used by Schmid and Mohr, and the descriptor is further modified to improve its stability to changes in affine projection and illumination....


  • ...For the object recognition problem, Schmid & Mohr [19] also used the Harris corner detector to identify interest points, and then created a local image descriptor at each interest point from an orientation-invariant vector of derivative-of-Gaussian image measurements....



  • ...However, recent research on the use of dense local features (e.g., Schmid & Mohr [19]) has shown that efficient recognition can often be achieved by using local image descriptors sampled at a large number of repeatable locations....


Journal ArticleDOI
TL;DR: A robust approach to image matching is proposed that exploits the only available geometric constraint, the epipolar constraint, together with a new strategy for updating matches that selects only those matches having both high matching support and low matching ambiguity.

1,574 citations


"Object recognition from local scale..." refers methods in this paper

  • ...[23] used the Harris corner detector to identify feature locations for epipolar alignment of images taken from differing viewpoints....
