Proceedings ArticleDOI

Object recognition from local scale-invariant features

20 Sep 1999, Vol. 2, pp. 1150-1157
TL;DR: Experimental results show that robust object recognition can be achieved in cluttered partially occluded images with a computation time of under 2 seconds.
Abstract: An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales. The keys are used as input to a nearest neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low residual least squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered partially occluded images with a computation time of under 2 seconds.
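The pipeline summarized in this abstract (stable keypoints from scale space, local image keys, nearest-neighbor indexing, least-squares verification) maps closely onto what modern libraries expose. The sketch below is illustrative only: it relies on OpenCV's SIFT and a RANSAC homography fit rather than the paper's exact multi-orientation-plane keys and affine solve, and the function name, file-path arguments, and 0.75 ratio threshold are assumptions.

```python
# Illustrative sketch only: OpenCV's SIFT stands in for the paper's original keys,
# and a RANSAC homography fit stands in for its affine least-squares verification.
import cv2
import numpy as np

def match_object(model_path: str, scene_path: str, ratio: float = 0.75):
    model = cv2.imread(model_path, cv2.IMREAD_GRAYSCALE)
    scene = cv2.imread(scene_path, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()                       # scale-space keypoints + descriptors
    kp_m, des_m = sift.detectAndCompute(model, None)
    kp_s, des_s = sift.detectAndCompute(scene, None)

    # Nearest-neighbor indexing of the keys, pruned with the distance-ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des_m, des_s, k=2)
    good = [m for m, n in knn if m.distance < ratio * n.distance]
    if len(good) < 4:
        return None                                # not enough evidence for a match

    # Final verification: low-residual geometric fit over the matched locations.
    src = np.float32([kp_m[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_s[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H, int(inliers.sum()) if inliers is not None else 0
```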


Citations
Proceedings ArticleDOI
07 Sep 2015
TL;DR: A new 60GHz imaging algorithm, RSS Series Analysis, images an object using only RSS measurements recorded along the device's trajectory and provides a basic primitive towards the construction of detailed environmental mapping systems.
Abstract: The future of mobile computing involves autonomous drones, robots and vehicles. To accurately sense their surroundings in a variety of scenarios, these mobile computers require a robust environmental mapping system. One attractive approach is to reuse millimeter-wave communication hardware in these devices, e.g. a 60GHz networking chipset, and capture signals reflected by the target surface. The devices can also move while collecting reflection signals, creating a large synthetic aperture radar (SAR) for high-precision RF imaging. Our experimental measurements, however, show that this approach provides poor precision in practice, as imaging results are highly sensitive to device positioning errors that translate into phase errors. We address this challenge by proposing a new 60GHz imaging algorithm, RSS Series Analysis, which images an object using only RSS measurements recorded along the device's trajectory. In addition to object location, our algorithm can discover a rich set of object surface properties at high precision, including object surface orientation, curvature, boundaries, and surface material. We tested our system on a variety of common household objects (between 5 cm and 30 cm in width). Results show that it achieves high accuracy (cm level) in a variety of dimensions, and is highly robust against noise in device position and trajectory tracking. We believe that this is the first practical mobile imaging system (re)using 60GHz networking devices, and provides a basic primitive towards the construction of detailed environmental mapping systems.
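As a rough, hedged illustration of imaging from RSS alone (this is not the paper's RSS Series Analysis algorithm), one can score candidate reflector positions on a grid by how well a simple two-way log-distance path-loss model reproduces the RSS series measured along a known trajectory. The model, the path-loss exponent, and all names below are assumptions.

```python
# Hedged illustration, not the paper's algorithm: grid search over candidate
# reflector positions, scored by how well a simple two-way path-loss model
# reproduces the measured RSS series along the device trajectory.
import numpy as np

def predict_rss(traj_xy: np.ndarray, obj_xy: np.ndarray, tx_power_db: float = 0.0) -> np.ndarray:
    # Assumed model: device -> object -> device, free-space-like exponent of 2,
    # so the loss grows with twice the log-distance.
    d = np.linalg.norm(traj_xy - obj_xy, axis=1)
    return tx_power_db - 2 * 20.0 * np.log10(np.maximum(d, 1e-3))

def locate_by_rss_series(traj_xy, rss_meas, grid_x, grid_y):
    best, best_err = None, np.inf
    for x in grid_x:
        for y in grid_y:
            pred = predict_rss(traj_xy, np.array([x, y]))
            err = np.var(rss_meas - pred)   # variance ignores any constant gain offset
            if err < best_err:
                best, best_err = (x, y), err
    return best, best_err
```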

115 citations


Additional excerpts

  • ...Camera is widely used for object recognition [16, 32, 33]....


Proceedings ArticleDOI
20 Jun 2009
TL;DR: This approach is among the first to propose a generative probabilistic framework for 3D object categorization and shows promising results in both the detection and viewpoint classification tasks on these two challenging datasets.
Abstract: We propose a novel probabilistic framework for learning visual models of 3D object categories by combining appearance information and geometric constraints. Objects are represented as a coherent ensemble of parts that are consistent under 3D viewpoint transformations. Each part is a collection of salient image features. A generative framework is used for learning a model that captures the relative position of parts within each of the discretized viewpoints. Contrary to most of the existing mixture of viewpoints models, our model establishes explicit correspondences of parts across different viewpoints of the object class. Given a new image, detection and classification are achieved by determining the position and viewpoint of the model that maximize recognition scores of the candidate objects. Our approach is among the first to propose a generative probabilistic framework for 3D object categorization. We test our algorithm on the detection task and the viewpoint classification task by using “car” category from both the Savarese et al. 2007 and PASCAL VOC 2006 datasets. We show promising results in both the detection and viewpoint classification tasks on these two challenging datasets.

115 citations


Cites methods from "Object recognition from local scale..."

  • ...A feature codebook of size 1000 is obtained by vector quantizing the SIFT descriptors computed over these detected regions [18]....


Patent
09 May 2013
TL;DR: In this patent, a method is described for mixing or compositing, in real time, computer-generated 3D objects with a video feed from a film camera whose body can be moved in 3D, with sensors in or attached to the camera providing real-time positioning data that define the camera's 3D position and 3D orientation.
Abstract: A method of mixing or compositing, in real time, computer-generated 3D objects and a video feed from a film camera, in which the body of the film camera can be moved in 3D and sensors in or attached to the camera provide real-time positioning data defining the 3D position and 3D orientation of the camera, or enabling the 3D position to be calculated.
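The core geometric step that any such real-time compositing needs, placing a computer-generated 3D point into the current video frame from the sensed camera pose, can be sketched under a pinhole-camera assumption. The intrinsics K and pose (R, t) below are hypothetical inputs, not values or methods from the patent.

```python
# Minimal pinhole-projection sketch: put a CG 3D point into the video frame
# using the camera pose reported by the tracking sensors.  K, R, t are hypothetical.
import numpy as np

def project_point(point_world: np.ndarray, K: np.ndarray,
                  R: np.ndarray, t: np.ndarray) -> tuple:
    """Project a 3D world point into pixel coordinates for the current frame."""
    p_cam = R @ point_world + t          # world -> camera coordinates
    if p_cam[2] <= 0:
        raise ValueError("point is behind the camera")
    p_img = K @ (p_cam / p_cam[2])       # perspective divide, then intrinsics
    return float(p_img[0]), float(p_img[1])

# Example: a 1920x1080 camera with ~1000 px focal length, identity orientation,
# placed 2 m from the world origin along the optical axis.
K = np.array([[1000.0, 0.0, 960.0], [0.0, 1000.0, 540.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 2.0])
print(project_point(np.array([0.1, 0.0, 0.0]), K, R, t))   # -> (1010.0, 540.0)
```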

115 citations

Journal ArticleDOI
TL;DR: To solve the general point correspondence problem in which the underlying transformation between image patches is represented by a homography, a solution based on extensive use of first order differential techniques is proposed.
Abstract: To solve the general point correspondence problem in which the underlying transformation between image patches is represented by a homography, a solution based on extensive use of first order differential techniques is proposed. We integrate in a single robust M-estimation framework the traditional optical flow method and matching of local color distributions. These distributions are computed with spatially oriented kernels in the 5D joint spatial/color space. The estimation process is initiated at the third level of a Gaussian pyramid, uses only local information, and the illumination changes between the two images are also taken into account. Subpixel matching accuracy is achieved under large projective distortions significantly exceeding the performance of any of the two components alone. As an application, the correspondence algorithm is employed in oriented tracking of objects.
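For flavor only, the snippet below runs a gradient-based direct homography alignment between two grayscale patches using OpenCV's ECC criterion. It is not the paper's joint optical-flow and color-distribution M-estimator, and it omits the Gaussian-pyramid initialization the abstract describes.

```python
# Gradient-based (first-order) direct homography alignment via OpenCV's ECC
# criterion; a simpler stand-in for the paper's robust M-estimation framework.
import cv2
import numpy as np

def align_patches_homography(template_gray: np.ndarray, target_gray: np.ndarray) -> np.ndarray:
    # Both inputs: single-channel uint8 or float32 images of the same size.
    warp = np.eye(3, dtype=np.float32)             # initial guess: identity homography
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    _, warp = cv2.findTransformECC(template_gray, target_gray, warp,
                                   cv2.MOTION_HOMOGRAPHY, criteria, None, 5)
    return warp                                    # maps template coordinates into the target
```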

115 citations


Cites background from "Object recognition from local scale..."

  • ...Index Terms—Correspondence problem, optical flow, color distribution matching, motion tracking, wide-baseline stereo....


Journal ArticleDOI
TL;DR: An efficient pattern recognition algorithm is presented to support automated detection and classification of pipe defects in images obtained from conventional CCTV inspection videos; it is applied to the problem of detecting tree root intrusions.

114 citations

References
Journal ArticleDOI
TL;DR: In this paper, color histograms of multicolored objects provide a robust, efficient cue for indexing into a large database of models, and they can differentiate among a large number of objects.
Abstract: Computer vision is moving into a new era in which the aim is to develop visual skills for robots that allow them to interact with a dynamic, unconstrained environment. To achieve this aim, new kinds of vision algorithms need to be developed which run in real time and subserve the robot's goals. Two fundamental goals are determining the identity of an object with a known location, and determining the location of a known object. Color can be successfully used for both tasks. This dissertation demonstrates that color histograms of multicolored objects provide a robust, efficient cue for indexing into a large database of models. It shows that color histograms are stable object representations in the presence of occlusion and over change in view, and that they can differentiate among a large number of objects. For solving the identification problem, it introduces a technique called Histogram Intersection, which matches model and image histograms and a fast incremental version of Histogram Intersection which allows real-time indexing into a large database of stored models. It demonstrates techniques for dealing with crowded scenes and with models with similar color signatures. For solving the location problem it introduces an algorithm called Histogram Backprojection which performs this task efficiently in crowded scenes.
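Histogram Intersection itself is compact enough to write out: the match value is the sum of bin-wise minima, normalized by the model histogram. The joint-RGB binning below (8 levels per channel) is an assumed discretization, not necessarily the one used in the dissertation.

```python
# Histogram Intersection: score = sum_j min(I_j, M_j) / sum_j M_j over color bins.
import numpy as np

def color_histogram(rgb: np.ndarray, bins: int = 8) -> np.ndarray:
    # Joint RGB histogram with `bins` levels per channel (bin count is an assumption).
    hist, _ = np.histogramdd(rgb.reshape(-1, 3).astype(float),
                             bins=(bins, bins, bins),
                             range=((0, 256), (0, 256), (0, 256)))
    return hist.ravel()

def histogram_intersection(image_hist: np.ndarray, model_hist: np.ndarray) -> float:
    return float(np.minimum(image_hist, model_hist).sum() / model_hist.sum())
```

Indexing a database then amounts to ranking the stored model histograms by this score against the image histogram.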

5,672 citations

Journal ArticleDOI
TL;DR: It is shown how the boundaries of an arbitrary non-analytic shape can be used to construct a mapping between image space and Hough transform space, which makes the generalized Hough transform a kind of universal transform which can be used to find arbitrarily complex shapes.
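A compact sketch of the voting scheme this describes: an R-table maps gradient orientation at template boundary points to offsets from a reference point, and each edge pixel in the test image then votes for candidate reference-point locations (the scale and rotation dimensions of the full transform are omitted here for brevity).

```python
# Generalized Hough transform, translation-only sketch: build an R-table from the
# template boundary, then accumulate votes for the reference point in the test image.
import numpy as np
from collections import defaultdict

def angle_bin(grad_angle: float, n_bins: int) -> int:
    return int(((grad_angle % (2 * np.pi)) / (2 * np.pi)) * n_bins) % n_bins

def build_r_table(edges: np.ndarray, grad_angle: np.ndarray, ref_xy, n_bins: int = 36):
    table = defaultdict(list)
    for y, x in zip(*np.nonzero(edges)):
        table[angle_bin(grad_angle[y, x], n_bins)].append((ref_xy[0] - x, ref_xy[1] - y))
    return table

def ght_accumulate(edges, grad_angle, table, shape, n_bins: int = 36):
    acc = np.zeros(shape, dtype=np.int32)
    for y, x in zip(*np.nonzero(edges)):
        for dx, dy in table.get(angle_bin(grad_angle[y, x], n_bins), ()):
            cx, cy = x + dx, y + dy
            if 0 <= cx < shape[1] and 0 <= cy < shape[0]:
                acc[cy, cx] += 1          # vote for a candidate reference-point location
    return acc                            # peaks mark likely object locations
```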

4,310 citations

Journal ArticleDOI
TL;DR: A near real-time recognition system with 20 complex objects in the database has been developed and a compact representation of object appearance is proposed that is parametrized by pose and illumination.
Abstract: The problem of automatically learning object models for recognition and pose estimation is addressed. In contrast to the traditional approach, the recognition problem is formulated as one of matching appearance rather than shape. The appearance of an object in a two-dimensional image depends on its shape, reflectance properties, pose in the scene, and the illumination conditions. While shape and reflectance are intrinsic properties and constant for a rigid object, pose and illumination vary from scene to scene. A compact representation of object appearance is proposed that is parametrized by pose and illumination. For each object of interest, a large set of images is obtained by automatically varying pose and illumination. This image set is compressed to obtain a low-dimensional subspace, called the eigenspace, in which the object is represented as a manifold. Given an unknown input image, the recognition system projects the image to eigenspace. The object is recognized based on the manifold it lies on. The exact position of the projection on the manifold determines the object's pose in the image. A variety of experiments are conducted using objects with complex appearance characteristics. The performance of the recognition and pose estimation algorithms is studied using over a thousand input images of sample objects. Sensitivity of recognition to the number of eigenspace dimensions and the number of learning samples is analyzed. For the objects used, appearance representation in eigenspaces with less than 20 dimensions produces accurate recognition results with an average pose estimation error of about 1.0 degree. A near real-time recognition system with 20 complex objects in the database has been developed. The paper is concluded with a discussion on various issues related to the proposed learning and recognition methodology.
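The recognition step can be sketched as PCA plus nearest neighbor in the compressed space; the version below skips the pose-manifold interpolation the paper performs and assumes flattened, brightness-normalized training images. The 20-dimension default follows the figure quoted in the abstract.

```python
# Appearance-based recognition sketch: compress training images into a low-dimensional
# eigenspace, then label a new image by its nearest projected training sample.
import numpy as np

def build_eigenspace(train_images: np.ndarray, n_dims: int = 20):
    # train_images: (N, H*W), one flattened, brightness-normalized image per row.
    mean = train_images.mean(axis=0)
    X = train_images - mean
    _, _, Vt = np.linalg.svd(X, full_matrices=False)   # principal directions
    basis = Vt[:n_dims]                                # (n_dims, H*W)
    coords = X @ basis.T                               # training samples in eigenspace
    return mean, basis, coords

def recognize(image: np.ndarray, mean, basis, coords, labels):
    z = (image - mean) @ basis.T                       # project the unknown image
    nearest = int(np.argmin(np.linalg.norm(coords - z, axis=1)))
    return labels[nearest]
```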

2,037 citations

Journal ArticleDOI
TL;DR: This paper addresses the problem of retrieving images from large image databases with a method based on local grayvalue invariants which are computed at automatically detected interest points and allows for efficient retrieval from a database of more than 1,000 images.
Abstract: This paper addresses the problem of retrieving images from large image databases. The method is based on local grayvalue invariants which are computed at automatically detected interest points. A voting algorithm and semilocal constraints make retrieval possible. Indexing allows for efficient retrieval from a database of more than 1,000 images. Experimental results show correct retrieval in the case of partial visibility, similarity transformations, extraneous features, and small perspective deformations.
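A stripped-down version of the voting idea: each query descriptor votes for the database image holding its nearest stored descriptor, and images are ranked by vote count. Brute-force search stands in for the paper's indexing, and its semilocal geometric constraints are omitted.

```python
# Descriptor-voting retrieval sketch: nearest-neighbor each query descriptor into the
# database and tally votes per image; geometric verification is left out.
import numpy as np
from collections import Counter

def retrieve(query_desc: np.ndarray, db_desc: np.ndarray, db_image_id: np.ndarray,
             top_k: int = 5) -> list:
    votes = Counter()
    for d in query_desc:
        dists = np.linalg.norm(db_desc - d, axis=1)        # brute-force nearest neighbor
        votes[int(db_image_id[int(np.argmin(dists))])] += 1
    return votes.most_common(top_k)                        # (image_id, votes) pairs
```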

1,756 citations


"Object recognition from local scale..." refers background or methods in this paper

  • ...This allows for the use of more distinctive image descriptors than the rotation-invariant ones used by Schmid and Mohr, and the descriptor is further modified to improve its stability to changes in affine projection and illumination....


  • ...For the object recognition problem, Schmid & Mohr [19] also used the Harris corner detector to identify interest points, and then created a local image descriptor at each interest point from an orientation-invariant vector of derivative-of-Gaussian image measurements....



  • ...However, recent research on the use of dense local features (e.g., Schmid & Mohr [19]) has shown that efficient recognition can often be achieved by using local image descriptors sampled at a large number of repeatable locations....


Journal ArticleDOI
TL;DR: A robust approach to image matching by exploiting the only available geometric constraint, namely, the epipolar constraint, is proposed and a new strategy for updating matches is developed, which only selects those matches having both high matching support and low matching ambiguity.
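In current toolkits the epipolar constraint is usually enforced through a robust fundamental-matrix fit. The sketch below uses OpenCV's RANSAC estimator to discard inconsistent correspondences; it does not reproduce the reference's relaxation-based strategy for resolving ambiguous matches.

```python
# Epipolar pruning sketch: fit a fundamental matrix with RANSAC and keep only the
# correspondences consistent with it.
import cv2
import numpy as np

def epipolar_filter(pts1: np.ndarray, pts2: np.ndarray, thresh: float = 1.0):
    # pts1, pts2: (N, 2) float arrays of corresponding points in the two views.
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, thresh, 0.99)
    if mask is None:
        return None, pts1, pts2            # estimation failed; keep everything
    keep = mask.ravel().astype(bool)
    return F, pts1[keep], pts2[keep]
```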

1,574 citations


"Object recognition from local scale..." refers methods in this paper

  • ...[23] used the Harris corner detector to identify feature locations for epipolar alignment of images taken from differing viewpoints....
