Proceedings Article•DOI•

Object recognition from local scale-invariant features

20 Sep 1999-Vol. 2, pp 1150-1157
TL;DR: Experimental results show that robust object recognition can be achieved in cluttered partially occluded images with a computation time of under 2 seconds.
Abstract: An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales. The keys are used as input to a nearest neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low residual least squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered partially occluded images with a computation time of under 2 seconds.
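The final verification step described above can be illustrated with a short sketch: given matched model/image keypoint pairs, the unknown affine model parameters are recovered by linear least squares and judged by the residual. This is a generic illustration of such a fit, not the paper's exact formulation; the function name and the example points are made up.

```python
import numpy as np

def fit_affine(model_pts, image_pts):
    """Least-squares affine fit (A, t) mapping model points to image points.

    Returns the six parameters [a11, a12, a21, a22, tx, ty] and the
    per-point RMS residual of the fit.
    """
    n = len(model_pts)
    # Each correspondence contributes two equations:
    #   u = a11*x + a12*y + tx,   v = a21*x + a22*y + ty
    M = np.zeros((2 * n, 6))
    b = np.asarray(image_pts, dtype=float).reshape(-1)
    for i, (x, y) in enumerate(model_pts):
        M[2 * i]     = [x, y, 0, 0, 1, 0]
        M[2 * i + 1] = [0, 0, x, y, 0, 1]
    params, *_ = np.linalg.lstsq(M, b, rcond=None)
    residual = np.linalg.norm(M @ params - b) / np.sqrt(n)
    return params, residual

# A pure translation by (2, 3) is recovered exactly (identity A, t = (2, 3)).
params, res = fit_affine([(0, 0), (1, 0), (0, 1)], [(2, 3), (3, 3), (2, 4)])
```

A match would then be accepted when the residual falls below some threshold chosen for the application.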


Citations
Journal Article•DOI•
TL;DR: Experimental results show that the proposed deformable model is more robust and accurate than other active shape models in segmenting the lung fields from serial chest radiographs.
Abstract: This paper presents a new deformable model that uses both population-based and patient-specific shape statistics to segment lung fields from serial chest radiographs. The proposed model has two novelties. First, a modified scale-invariant feature transform (SIFT) local descriptor, which is more distinctive than general intensity and gradient features, is used to characterize the image features in the vicinity of each pixel. Second, the deformable contour is constrained by both population-based and patient-specific shape statistics, which yields more robust and accurate segmentation of lung fields for serial chest radiographs. In particular, for segmenting the initial time-point images, the population-based shape statistics are used to constrain the deformable contour; as more images of the same patient are acquired, the patient-specific shape statistics, collected online from previous segmentation results, gradually take a larger role. These patient-specific shape statistics are updated each time a new segmentation result is obtained and are further used to refine the segmentation results of all available time-point images. Experimental results show that the proposed method is more robust and accurate than other active shape models in segmenting the lung fields from serial chest radiographs.

143 citations


Cites background from "Object recognition from local scale..."

  • ...Therefore, the complex local descriptors, such as SIFT [7], might be suitable to characterize the image features around each point along the boundaries of lung fields....


  • ...SIFT, as detailed in [7], consists of four major steps: (1) scale-space peak selection; (2) key point localization; (3) orientation assignment; (4) key point description....

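The scale-space peak selection in step (1) can be sketched with plain NumPy: build a difference-of-Gaussian stack and keep points that are extrema of their 3x3x3 neighborhood. This is a simplified illustration; the sigma ladder and threshold are arbitrary, and the real detector adds sub-pixel refinement, edge-response rejection, and octave resampling.

```python
import numpy as np

def gaussian_blur(img, sigma):
    # Separable Gaussian filter built from a truncated 1-D kernel.
    r = int(3 * sigma) + 1
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, r, mode='edge')
    tmp = np.apply_along_axis(lambda row: np.convolve(row, k, 'valid'), 1, pad)
    return np.apply_along_axis(lambda col: np.convolve(col, k, 'valid'), 0, tmp)

def dog_peaks(img, sigmas, thresh=0.01):
    # Difference-of-Gaussian stack across the scale ladder.
    blurred = [gaussian_blur(img, s) for s in sigmas]
    dog = np.stack([b1 - b0 for b0, b1 in zip(blurred, blurred[1:])])
    peaks = []
    for s in range(1, dog.shape[0] - 1):
        for y in range(1, dog.shape[1] - 1):
            for x in range(1, dog.shape[2] - 1):
                v = dog[s, y, x]
                nbhd = dog[s-1:s+2, y-1:y+2, x-1:x+2]
                # Keep points that are extrema of their 27-point neighborhood.
                if abs(v) > thresh and (v == nbhd.max() or v == nbhd.min()):
                    peaks.append((s, y, x))
    return peaks

# A constant image has no scale-space extrema.
peaks = dog_peaks(np.zeros((16, 16)), [1.0, 2.0, 4.0, 8.0])
```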

Journal Article•DOI•
TL;DR: A new effective noise level estimation method is proposed on the basis of the study of singular values of noise-corrupted images, which can reliably infer noise levels and show robust behavior over a wide range of visual content and noise conditions.
Abstract: Accurate estimation of the Gaussian noise level is of fundamental interest in a wide variety of vision and image processing applications, as it is critical to the processing techniques that follow. In this paper, a new effective noise level estimation method is proposed on the basis of the study of singular values of noise-corrupted images. Two novel aspects of this paper address the major challenges in noise estimation: 1) the use of the tail of singular values for noise estimation to alleviate the influence of the signal on the data basis for the noise estimation process and 2) the addition of known noise to estimate the content-dependent parameter, so that the proposed scheme is adaptive to visual signals, thereby enabling a wider application scope. The analysis and experimental results demonstrate that the proposed algorithm can reliably infer noise levels, shows robust behavior over a wide range of visual content and noise conditions, and outperforms relevant existing methods.
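The two ideas in the abstract above can be sketched together: read the noise level from the tail of the singular values, and add a known amount of noise to cancel the content-dependent offset. This is a simplified reconstruction, not the paper's exact estimator; the tail fraction, calibration procedure, and test image are all illustrative.

```python
import numpy as np

def tail_mean(img, tail_frac=0.75):
    # Mean of the smallest tail_frac of the singular values.
    s = np.linalg.svd(img, compute_uv=False)
    k = int(len(s) * (1 - tail_frac))
    return s[k:].mean()

def estimate_noise(img, sigma0=20.0, rng=None):
    """Noise-level estimate from the singular-value tail.

    Assumed model (a simplification): tail mean ~ alpha * sigma + beta,
    where beta depends on image content. alpha is calibrated on a
    pure-noise image of the same size; adding known noise sigma0 gives
    a second measurement that eliminates beta.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    h, w = img.shape
    # Singular values of pure noise scale linearly with sigma, so one
    # unit-variance sample calibrates the slope alpha.
    alpha = tail_mean(rng.normal(0.0, 1.0, (h, w)))
    m1 = tail_mean(img)
    m2 = tail_mean(img + rng.normal(0.0, sigma0, (h, w)))
    # m2 - m1 = alpha * (sqrt(sigma^2 + sigma0^2) - sigma); solve for sigma.
    d = (m2 - m1) / alpha
    return (sigma0**2 - d**2) / (2.0 * d)

# Low-rank "image" plus sigma = 10 noise; the estimate should land near 10.
rng = np.random.default_rng(1)
clean = 50.0 * np.outer(np.linspace(0, 1, 128), np.linspace(0, 1, 128))
est = estimate_noise(clean + rng.normal(0.0, 10.0, clean.shape))
```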

143 citations


Cites background from "Object recognition from local scale..."

  • ...Apart from denoising, other algorithms that can benefit from noise level estimates include motion estimation [25], super-resolution [26], shapefrom-shading [27], and feature extraction [28]....


Journal Article•DOI•
TL;DR: Experimental results show that positioning in three-dimensional space can be achieved with centimeter accuracy, making it possible to build high-resolution digital elevation maps from low-altitude stereovision.
Abstract: In this paper we provide a progress report on the LAAS-CNRS project of autonomous blimp robot development, in the context of field robotics. Hardware developments aimed at designing a generic and versatile experimental platform are first presented. On this basis, the flight control and terrain mapping issues, which constitute the main thrust of the research work, are presented in two parts. The first part, devoted to the automatic control study, is based on a rigorous modeling of the airship dynamics. Considering the decoupling of the lateral and longitudinal dynamics, several flight phases are identified for which appropriate control strategies are proposed. The description focuses on lateral steady navigation. In the second part of the paper, we present work on terrain mapping with low-altitude stereovision. A simultaneous localization and map building approach based on an extended Kalman filter is described, with details on the identification of the various errors involved in the process. Experimental...

143 citations

Journal Article•DOI•
TL;DR: A robust location-aware activity recognition approach for establishing ambient intelligence applications in a smart home that infers a single resident's interleaved activities by utilizing a generalized and enhanced Bayesian Network fusion engine with inputs from a set of the most informative features.
Abstract: This paper presents a robust location-aware activity recognition approach for establishing ambient intelligence applications in a smart home. With observations from a variety of multimodal and unobtrusive wireless sensors seamlessly integrated into ambient-intelligence compliant objects (AICOs), the approach infers a single resident's interleaved activities by utilizing a generalized and enhanced Bayesian Network fusion engine with inputs from a set of the most informative features. These features are collected by ranking their usefulness in estimating activities of interest. Additionally, each feature reckons its corresponding reliability to control its contribution in cases of possible device failure, therefore making the system more tolerant to inevitable device failure or interference commonly encountered in a wireless sensor network, and thus improving overall robustness. This work is part of an interdisciplinary Attentive Home pilot project with the goal of fulfilling real human needs by utilizing context-aware attentive services. We have also created a novel application called "Activity Map" to graphically display ambient-intelligence-related contextual information gathered from both humans and the environment in a more convenient and user-accessible way. All experiments were conducted in an instrumented living lab and their results demonstrate the effectiveness of the system.

142 citations

Proceedings Article•DOI•
06 Nov 2011
TL;DR: This is the first work showing that RBMs can be trained with almost no hyperparameter tuning to provide classification performance similar to or significantly better than mixture models (e.g., Gaussian mixture models).
Abstract: Informative image representations are important in achieving state-of-the-art performance in object recognition tasks. Among feature learning algorithms that are used to develop image representations, restricted Boltzmann machines (RBMs) have good expressive power and build effective representations. However, the difficulty of training RBMs has been a barrier to their wide use. To address this difficulty, we show the connections between mixture models and RBMs and present an efficient training method for RBMs that utilize these connections. To the best of our knowledge, this is the first work showing that RBMs can be trained with almost no hyperparameter tuning to provide classification performance similar to or significantly better than mixture models (e.g., Gaussian mixture models). Along with this efficient training, we evaluate the importance of convolutional training that can capture a larger spatial context with less redundancy, as compared to non-convolutional training. Overall, our method achieves state-of-the-art performance on both Caltech 101 / 256 datasets using a single type of feature.
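The training loop the paper builds on can be sketched generically. The following is plain one-step contrastive divergence (CD-1) for a Bernoulli RBM, not the paper's mixture-model-based initialization; sizes, learning rate, and the toy data are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli RBM trained with one-step contrastive divergence (CD-1)."""

    def __init__(self, n_vis, n_hid, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W = 0.01 * self.rng.standard_normal((n_vis, n_hid))
        self.b_v = np.zeros(n_vis)
        self.b_h = np.zeros(n_hid)

    def cd1_step(self, v0, lr=0.1):
        # Up-pass, one sampled down-pass, then another up-pass.
        h0 = sigmoid(v0 @ self.W + self.b_h)
        h_sample = (self.rng.random(h0.shape) < h0).astype(float)
        v1 = sigmoid(h_sample @ self.W.T + self.b_v)
        h1 = sigmoid(v1 @ self.W + self.b_h)
        # CD-1 update: positive-phase minus negative-phase statistics.
        n = v0.shape[0]
        self.W += lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b_v += lr * (v0 - v1).mean(axis=0)
        self.b_h += lr * (h0 - h1).mean(axis=0)
        return np.mean((v0 - v1) ** 2)  # reconstruction error

# Toy data: two binary prototypes; reconstruction error should fall.
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]], dtype=float)
rbm = RBM(n_vis=6, n_hid=4)
errors = [rbm.cd1_step(data) for _ in range(300)]
```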

142 citations


Cites background or methods from "Object recognition from local scale..."

  • ...In the last decades, many efforts have been made to develop feature representations that can provide useful low-level information from images (e.g., [1, 2])....


  • ...To address this difficulty, we show the connections between mixture models and RBMs and present an efficient training method for RBMs that utilize these connections....


References
Journal Article•DOI•
TL;DR: In this paper, color histograms of multicolored objects provide a robust, efficient cue for indexing into a large database of models, and they can differentiate among a large number of objects.
Abstract: Computer vision is moving into a new era in which the aim is to develop visual skills for robots that allow them to interact with a dynamic, unconstrained environment. To achieve this aim, new kinds of vision algorithms need to be developed which run in real time and subserve the robot's goals. Two fundamental goals are determining the identity of an object with a known location, and determining the location of a known object. Color can be successfully used for both tasks. This dissertation demonstrates that color histograms of multicolored objects provide a robust, efficient cue for indexing into a large database of models. It shows that color histograms are stable object representations in the presence of occlusion and over change in view, and that they can differentiate among a large number of objects. For solving the identification problem, it introduces a technique called Histogram Intersection, which matches model and image histograms and a fast incremental version of Histogram Intersection which allows real-time indexing into a large database of stored models. It demonstrates techniques for dealing with crowded scenes and with models with similar color signatures. For solving the location problem it introduces an algorithm called Histogram Backprojection which performs this task efficiently in crowded scenes.
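The Histogram Intersection score itself is essentially one line: the overlap of the image and model histograms, normalized by the model histogram's total count. A minimal sketch; the bin layout and example counts are up to the application.

```python
import numpy as np

def histogram_intersection(image_hist, model_hist):
    """Swain-Ballard match score in [0, 1]: sum of bin-wise minima,
    normalized by the model histogram's total count."""
    return np.minimum(image_hist, model_hist).sum() / model_hist.sum()

# Identical histograms match perfectly; partial overlap scores lower.
model = np.array([4, 0, 2])
perfect = histogram_intersection(model, model)
partial = histogram_intersection(np.array([0, 3, 2]), model)
```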

5,672 citations

Journal Article•DOI•
TL;DR: It is shown how the boundaries of an arbitrary non-analytic shape can be used to construct a mapping between image space and Hough transform space, which makes the generalized Hough transform a kind of universal transform that can be used to find arbitrarily complex shapes.
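The R-table construction at the heart of the generalized Hough transform can be sketched as follows. Synthetic boundary points and orientation bins stand in for a real edge detector, and all names are illustrative: each boundary point stores its offset to a reference point, indexed by orientation, and detection accumulates votes for the reference location.

```python
import numpy as np
from collections import defaultdict

def build_r_table(boundary_pts, orientations, reference):
    """R-table: for each orientation bin, the offsets from boundary
    points of that orientation to the shape's reference point."""
    table = defaultdict(list)
    rx, ry = reference
    for (x, y), o in zip(boundary_pts, orientations):
        table[o].append((rx - x, ry - y))
    return table

def ght_vote(edge_pts, orientations, table, shape):
    # Each edge point votes for every reference location its
    # orientation bin allows; the accumulator peak locates the shape.
    acc = np.zeros(shape, dtype=int)
    for (x, y), o in zip(edge_pts, orientations):
        for dx, dy in table[o]:
            u, v = x + dx, y + dy
            if 0 <= u < shape[0] and 0 <= v < shape[1]:
                acc[u, v] += 1
    return acc

# Template triangle with fake orientation bins, reference point (1, 1).
template = [(0, 0), (2, 0), (0, 2)]
bins = [0, 1, 2]
table = build_r_table(template, bins, (1, 1))

# The same shape translated by (5, 5): all votes pile up at (6, 6).
scene = [(5, 5), (7, 5), (5, 7)]
acc = ght_vote(scene, bins, table, (12, 12))
peak = np.unravel_index(acc.argmax(), acc.shape)
```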

4,310 citations

Journal Article•DOI•
TL;DR: A near real-time recognition system with 20 complex objects in the database has been developed and a compact representation of object appearance is proposed that is parametrized by pose and illumination.
Abstract: The problem of automatically learning object models for recognition and pose estimation is addressed. In contrast to the traditional approach, the recognition problem is formulated as one of matching appearance rather than shape. The appearance of an object in a two-dimensional image depends on its shape, reflectance properties, pose in the scene, and the illumination conditions. While shape and reflectance are intrinsic properties and constant for a rigid object, pose and illumination vary from scene to scene. A compact representation of object appearance is proposed that is parametrized by pose and illumination. For each object of interest, a large set of images is obtained by automatically varying pose and illumination. This image set is compressed to obtain a low-dimensional subspace, called the eigenspace, in which the object is represented as a manifold. Given an unknown input image, the recognition system projects the image to eigenspace. The object is recognized based on the manifold it lies on. The exact position of the projection on the manifold determines the object's pose in the image. A variety of experiments are conducted using objects with complex appearance characteristics. The performance of the recognition and pose estimation algorithms is studied using over a thousand input images of sample objects. Sensitivity of recognition to the number of eigenspace dimensions and the number of learning samples is analyzed. For the objects used, appearance representation in eigenspaces with less than 20 dimensions produces accurate recognition results with an average pose estimation error of about 1.0 degree. A near real-time recognition system with 20 complex objects in the database has been developed. The paper is concluded with a discussion on various issues related to the proposed learning and recognition methodology.
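The eigenspace scheme above can be sketched with a PCA built from an SVD. This is a generic illustration of the appearance-manifold idea, with random vectors standing in for flattened object images; recognizing by the nearest gallery projection stands in for locating the input on a stored manifold.

```python
import numpy as np

def build_eigenspace(images, k):
    """Top-k principal axes of the flattened training images (one per row)."""
    mean = images.mean(axis=0)
    _, _, Vt = np.linalg.svd(images - mean, full_matrices=False)
    return mean, Vt[:k]

def project(img, mean, basis):
    # Coordinates of the image in the low-dimensional eigenspace.
    return basis @ (img - mean)

def recognize(img, mean, basis, gallery):
    # Nearest stored projection identifies the object (and, with a
    # sampled pose manifold, its pose).
    d = np.linalg.norm(gallery - project(img, mean, basis), axis=1)
    return int(np.argmin(d))

# Random vectors stand in for flattened training images of 5 objects.
rng = np.random.default_rng(0)
images = rng.random((5, 64))
mean, basis = build_eigenspace(images, k=3)
gallery = np.stack([project(im, mean, basis) for im in images])

# A slightly perturbed view of object 2 should map back to object 2.
query = images[2] + 0.001 * rng.standard_normal(64)
match = recognize(query, mean, basis, gallery)
```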

2,037 citations

Journal Article•DOI•
TL;DR: This paper addresses the problem of retrieving images from large image databases with a method based on local grayvalue invariants which are computed at automatically detected interest points and allows for efficient retrieval from a database of more than 1,000 images.
Abstract: This paper addresses the problem of retrieving images from large image databases. The method is based on local grayvalue invariants which are computed at automatically detected interest points. A voting algorithm and semilocal constraints make retrieval possible. Indexing allows for efficient retrieval from a database of more than 1,000 images. Experimental results show correct retrieval in the case of partial visibility, similarity transformations, extraneous features, and small perspective deformations.
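The voting idea can be sketched directly: every query descriptor votes for the database image owning its nearest stored descriptor, and the image with the most votes wins. This is a simplified illustration without the semilocal constraints, with synthetic descriptors standing in for the grayvalue invariants.

```python
import numpy as np
from collections import Counter

def vote_retrieve(query_descs, db_descs, db_labels):
    """db_descs: (N, d) stacked descriptors from all database images;
    db_labels[i] is the id of the image descriptor i came from."""
    votes = Counter()
    for q in query_descs:
        nearest = np.argmin(np.linalg.norm(db_descs - q, axis=1))
        votes[db_labels[nearest]] += 1
    return votes.most_common(1)[0][0]

# Two synthetic "images", 10 descriptors each, in 8 dimensions.
rng = np.random.default_rng(0)
img0, img1 = rng.random((10, 8)), rng.random((10, 8))
db = np.vstack([img0, img1])
labels = [0] * 10 + [1] * 10

# A query built from image 1's descriptors, slightly perturbed.
query = img1 + 0.01 * rng.standard_normal((10, 8))
best = vote_retrieve(query, db, labels)
```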

1,756 citations


"Object recognition from local scale..." refers background or methods in this paper

  • ...This allows for the use of more distinctive image descriptors than the rotation-invariant ones used by Schmid and Mohr, and the descriptor is further modified to improve its stability to changes in affine projection and illumination....


  • ...For the object recognition problem, Schmid & Mohr [19] also used the Harris corner detector to identify interest points, and then created a local image descriptor at each interest point from an orientation-invariant vector of derivative-of-Gaussian image measurements....



  • ...However, recent research on the use of dense local features (e.g., Schmid & Mohr [19]) has shown that efficient recognition can often be achieved by using local image descriptors sampled at a large number of repeatable locations....


Journal Article•DOI•
TL;DR: A robust approach to image matching by exploiting the only available geometric constraint, namely, the epipolar constraint, is proposed and a new strategy for updating matches is developed, which only selects those matches having both high matching support and low matching ambiguity.
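Filtering putative matches with the epipolar constraint can be sketched directly: a correspondence (x1, x2) is kept only if x2ᵀ F x1 is near zero. A minimal illustration using the fundamental matrix of a pure horizontal translation; the threshold is illustrative, and real systems use a robustly estimated F and a geometric error such as the Sampson distance.

```python
import numpy as np

def epipolar_filter(matches, F, thresh=0.05):
    """Keep matches whose algebraic epipolar error |x2^T F x1| is small.

    matches: list of ((u1, v1), (u2, v2)) pixel correspondences.
    """
    kept = []
    for (u1, v1), (u2, v2) in matches:
        x1 = np.array([u1, v1, 1.0])  # homogeneous coordinates
        x2 = np.array([u2, v2, 1.0])
        if abs(x2 @ F @ x1) < thresh:
            kept.append(((u1, v1), (u2, v2)))
    return kept

# Fundamental matrix for a pure camera translation along the x-axis:
# corresponding points must lie on the same image row.
F = np.array([[0.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])
matches = [((3.0, 4.0), (8.0, 4.0)),   # consistent: same row
           ((3.0, 4.0), (8.0, 6.0))]   # inconsistent: row changed
good = epipolar_filter(matches, F)
```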

1,574 citations


"Object recognition from local scale..." refers methods in this paper

  • ...[23] used the Harris corner detector to identify feature locations for epipolar alignment of images taken from differing viewpoints....
