Proceedings ArticleDOI

Object recognition from local scale-invariant features

20 Sep 1999, Vol. 2, pp. 1150-1157
TL;DR: Experimental results show that robust object recognition can be achieved in cluttered, partially occluded images with a computation time of under 2 seconds.
Abstract: An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales. The keys are used as input to a nearest neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low residual least squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered, partially occluded images with a computation time of under 2 seconds.
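The staged filtering described in the abstract can be sketched with a difference-of-Gaussian stack and a local-extrema test. This is a simplified illustration rather than the paper's implementation: the sigma values, the threshold, and the use of scipy.ndimage are assumptions, and the later stages (accurate key localisation, orientation planes, descriptor construction) are omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter

def dog_extrema(image, sigmas=(1.0, 1.6, 2.56, 4.1), thresh=0.01):
    """Sketch of staged filtering: build a difference-of-Gaussian stack
    and keep pixels that are extrema across both space and scale."""
    blurred = np.stack([gaussian_filter(image.astype(float), s) for s in sigmas])
    dog = blurred[1:] - blurred[:-1]            # DoG stack, shape (S-1, H, W)
    # a point is kept if it equals the max (or min) of its 3x3x3 neighbourhood
    maxima = (dog == maximum_filter(dog, size=3)) & (dog > thresh)
    minima = (dog == minimum_filter(dog, size=3)) & (dog < -thresh)
    s, y, x = np.nonzero(maxima | minima)
    return list(zip(s, y, x))                   # (scale index, row, col) keypoints
```

On a synthetic blob image, the scale index of the detected extremum reflects the blob's characteristic scale, which is what makes the detected points stable under image rescaling.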


Citations
Proceedings ArticleDOI
10 Apr 2007
TL;DR: A principled probabilistic framework for navigation using only appearance data, which is particularly suitable for online loop closure detection in mobile robotics.
Abstract: This paper describes a probabilistic framework for navigation using only appearance data. By learning a generative model of appearance, we can compute not only the similarity of two observations, but also the probability that they originate from the same location, and hence compute a pdf over observer location. We do not limit ourselves to the kidnapped robot problem (localizing in a known map), but admit the possibility that observations may come from previously unvisited places. The principled probabilistic approach we develop allows us to explicitly account for the perceptual aliasing in the environment - identical but indistinctive observations receive a low probability of having come from the same place. Our algorithm complexity is linear in the number of places, and is particularly suitable for online loop closure detection in mobile robotics.
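The posterior over observer location can be illustrated with a deliberately tiny model. Everything here is a made-up stand-in for the paper's learned generative model of appearance: observations are binary visual-word vectors, each known place predicts its stored vector with a fixed per-word reliability, and one extra hypothesis covers previously unvisited places.

```python
import numpy as np

def place_posterior(obs, places, prior):
    """Toy Bayes update over known places plus an 'unvisited place' hypothesis.
    obs and each entry of places are binary visual-word occurrence vectors;
    the reliabilities below are arbitrary illustrative values."""
    p_agree, p_disagree = 0.9, 0.1
    likes = [np.prod(np.where(obs == p, p_agree, p_disagree)) for p in places]
    likes.append(0.5 ** obs.size)       # unvisited place: every word is a coin flip
    post = prior * np.array(likes)
    return post / post.sum()            # pdf over len(places) + 1 hypotheses
```

The extra hypothesis is what lets the system admit observations from previously unvisited places instead of forcing a match to the nearest known location.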

199 citations


Cites methods from "Object recognition from local scale..."

  • ...In our implementation we extract regions of interest from the images using the Harris-affine detector [10] then compute SIFT descriptors [9] around the regions of interest....

Journal ArticleDOI
TL;DR: In this article, an automated methodology is presented to orient a set of close-range images captured with a calibrated camera, and to extract dense and accurate point clouds starting from the estimated orientation parameters.
Abstract: In this paper an automated methodology is presented (i) to orient a set of close-range images captured with a calibrated camera, and (ii) to extract dense and accurate point clouds starting from the estimated orientation parameters. The whole procedure combines different algorithms and techniques in order to obtain accurate 3D reconstructions in an automatic way. The exterior orientation parameters are estimated using a photogrammetric bundle adjustment with the image correspondences detected using area- and feature-based matching algorithms. Surface measurements are then performed using advanced multi-image matching techniques based on multiple image primitives. To demonstrate the reliability, precision and robustness of the procedure, several tests on different kinds of free-form objects are illustrated and discussed in the paper. Three-dimensional comparisons with range-based data are also carried out.

198 citations


Cites background or methods from "Object recognition from local scale..."

  • ...For more details the reader is referred to Lowe (1999, 2004)....

  • ...In the field of CV, the surface measurement is generally performed using stereopair depth maps (Strecha et al., 2003; Pollefeys et al., 2004; Pénard et al., 2005) or multi-view methods aiming at reconstructing a surface which minimises a global photometric discrepancy function, regularised by…...

Journal ArticleDOI
TL;DR: A graph-cut-based detection approach is given to accurately extract a specified road region during the initialization stage and in the middle of the tracking process, and a fast homography-based road-tracking scheme is developed to automatically track road areas.
Abstract: An unmanned aerial vehicle (UAV) has many applications in a variety of fields. Detection and tracking of a specific road in UAV videos play an important role in automatic UAV navigation, traffic monitoring, and ground-vehicle tracking, and are also very helpful for constructing road networks for modeling and simulation. In this paper, an efficient road detection and tracking framework for UAV videos is proposed. In particular, a graph-cut-based detection approach is given to accurately extract a specified road region during the initialization stage and in the middle of the tracking process, and a fast homography-based road-tracking scheme is developed to automatically track road areas. The high efficiency of our framework is attributed to two aspects: road detection is performed only when necessary, and most of the work in locating the road is done rapidly via very fast homography-based tracking. Experiments are conducted on UAV videos of real road scenes that we captured or downloaded from the Internet. The promising results indicate the effectiveness of the proposed framework, with an average precision of 98.4% and a processing speed of 34 frames per second for 1046 × 595 videos.
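Homography-based tracking rests on estimating a 3x3 planar mapping between frames; a minimal direct linear transform (DLT) sketch is shown below. It assumes at least four point correspondences on the (roughly planar) road surface and omits the coordinate normalisation and outlier rejection a production tracker would need.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate H with dst ~ H @ src from >= 4 point pairs via the DLT:
    each correspondence contributes two linear constraints on H's 9 entries,
    and the SVD's last right singular vector gives the least-squares solution."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]                  # fix the projective scale

def warp_point(H, p):
    """Map a 2D point through the homography (homogeneous divide)."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```

Once H is known for a frame pair, the road region's boundary can be propagated by warping its points, which is why detection only needs to run occasionally.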

198 citations


Additional excerpts

  • ...7 shows the pipelines of road tracking....

Posted Content
TL;DR: In this paper, a single-shot approach for simultaneously detecting an object in an RGB image and predicting its 6D pose without requiring multiple stages or having to examine multiple hypotheses is proposed.
Abstract: We propose a single-shot approach for simultaneously detecting an object in an RGB image and predicting its 6D pose without requiring multiple stages or having to examine multiple hypotheses. Unlike a recently proposed single-shot technique for this task (Kehl et al., ICCV'17) that only predicts an approximate 6D pose that must then be refined, ours is accurate enough not to require additional post-processing. As a result, it is much faster - 50 fps on a Titan X (Pascal) GPU - and more suitable for real-time processing. The key component of our method is a new CNN architecture inspired by the YOLO network design that directly predicts the 2D image locations of the projected vertices of the object's 3D bounding box. The object's 6D pose is then estimated using a PnP algorithm. For single object and multiple object pose estimation on the LINEMOD and OCCLUSION datasets, our approach substantially outperforms other recent CNN-based approaches when they are all used without post-processing. During post-processing, a pose refinement step can be used to boost the accuracy of the existing methods, but at 10 fps or less, they are much slower than our method.
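The key quantity the network predicts, the 2D image locations of the 3D bounding box's projected vertices, can be illustrated with a plain pinhole model. The intrinsics K, pose R, t, and box size below are made-up example values; recovering the pose from these eight 2D-3D correspondences is then a standard PnP problem.

```python
import numpy as np
from itertools import product

def project_bbox(K, R, t, half_extents):
    """Project the 8 corners of an object-centred 3D bounding box into the
    image with a pinhole camera. Illustrative only: K, R, t are assumptions."""
    corners = np.array(list(product(*[(-h, h) for h in half_extents])))  # (8, 3)
    cam = corners @ R.T + t             # object coordinates -> camera coordinates
    uv = cam @ K.T                      # pinhole projection (homogeneous)
    return uv[:, :2] / uv[:, 2:3]       # (8, 2) pixel coordinates
```

In the paper's pipeline the CNN outputs these eight 2D points directly, and a PnP solver inverts exactly this projection to obtain R and t.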

198 citations

01 Jan 2005
TL;DR: A novel system for autonomous mobile robot navigation with only an omnidirectional camera as sensor is presented; it automatically and robustly builds accurate, topologically organised environment maps of a complex, natural environment.
Abstract: In this work we present a novel system for autonomous mobile robot navigation. With only an omnidirectional camera as sensor, this system is able to build, automatically and robustly, accurate topologically organised environment maps of a complex, natural environment. It can localise itself using such a map at each moment, both at startup (the kidnapped robot problem) and using knowledge of former localisations. The topological nature of the map is similar to the intuitive maps humans use, is memory-efficient, and enables fast and simple path planning towards a specified goal. We developed a real-time visual servoing technique to steer the system along the computed path. A key technology making this all possible is the novel fast wide baseline feature matching, which yields an efficient description of the scene, with a focus on man-made environments.
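The "fast and simple path planning" a topological map enables reduces to graph search over places and their connections. A breadth-first sketch with made-up place names:

```python
from collections import deque

def shortest_route(graph, start, goal):
    """Breadth-first path planning over a topological map: nodes are places,
    edges connect places the robot can travel between directly."""
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        node = frontier.popleft()
        if node == goal:                # reconstruct the path back to start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for nxt in graph[node]:
            if nxt not in came_from:    # visit each place at most once
                came_from[nxt] = node
                frontier.append(nxt)
    return None                         # goal unreachable from start
```

Because the map stores only places and adjacency rather than metric geometry, this search is cheap even for large environments.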

198 citations


Cites methods from "Object recognition from local scale..."

  • ...(Lowe, 1999) extended these ideas to real scale-invariance....

  • ...We use a combination of two different kinds of wide baseline features, namely a rotation reduced and colour enhanced form of Lowe’s SIFT features (Lowe, 1999), and the invariant column segments we developed (Goedemé et al., 2004)....

  • ...David Lowe presented the Scale Invariant Feature Transform (Lowe, 1999), which finds interest points around local extrema in a scale-space of difference-of-Gaussian (DoG) images....

References
Journal ArticleDOI
TL;DR: In this paper, color histograms of multicolored objects provide a robust, efficient cue for indexing into a large database of models, and they can differentiate among a large number of objects.
Abstract: Computer vision is moving into a new era in which the aim is to develop visual skills for robots that allow them to interact with a dynamic, unconstrained environment. To achieve this aim, new kinds of vision algorithms need to be developed which run in real time and subserve the robot's goals. Two fundamental goals are determining the identity of an object with a known location, and determining the location of a known object. Color can be successfully used for both tasks. This dissertation demonstrates that color histograms of multicolored objects provide a robust, efficient cue for indexing into a large database of models. It shows that color histograms are stable object representations in the presence of occlusion and over change in view, and that they can differentiate among a large number of objects. For solving the identification problem, it introduces a technique called Histogram Intersection, which matches model and image histograms and a fast incremental version of Histogram Intersection which allows real-time indexing into a large database of stored models. It demonstrates techniques for dealing with crowded scenes and with models with similar color signatures. For solving the location problem it introduces an algorithm called Histogram Backprojection which performs this task efficiently in crowded scenes.
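The Histogram Intersection measure described above takes the bin-wise minimum of the model and image histograms; a minimal sketch, normalising by the model histogram's total count so that scores fall in [0, 1]:

```python
import numpy as np

def histogram_intersection(model_hist, image_hist):
    """Swain & Ballard-style Histogram Intersection: sum of bin-wise minima,
    normalised by the model histogram's total count."""
    return np.minimum(model_hist, image_hist).sum() / model_hist.sum()
```

Because each bin contributes at most the model's own count, extra clutter in the image cannot inflate the score, which is part of what makes the measure usable in crowded scenes.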

5,672 citations

Journal ArticleDOI
TL;DR: It is shown how the boundaries of an arbitrary non-analytic shape can be used to construct a mapping between image space and Hough transform space, which makes the generalized Hough transform a kind of universal transform that can be used to find arbitrarily complex shapes.
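The mapping from boundary points to Hough space works through an R-table indexed by gradient orientation. The sketch below assumes edge points with quantised orientations are already given (edge and gradient extraction are omitted), and handles only translation, not the rotation and scale dimensions of the full transform.

```python
import numpy as np
from collections import defaultdict

def build_r_table(edge_points, reference):
    """R-table: for each template edge point's (quantised) gradient
    orientation, store the displacement to the shape's reference point."""
    table = defaultdict(list)
    for (y, x, theta) in edge_points:
        table[round(theta, 1)].append((reference[0] - y, reference[1] - x))
    return table

def ght_votes(edge_points, table, shape):
    """Each image edge point votes, via the R-table, for where the shape's
    reference point could be; accumulator peaks locate shape instances."""
    acc = np.zeros(shape, int)
    for (y, x, theta) in edge_points:
        for dy, dx in table.get(round(theta, 1), ()):
            vy, vx = y + dy, x + dx
            if 0 <= vy < shape[0] and 0 <= vx < shape[1]:
                acc[vy, vx] += 1
    return acc
```

A translated copy of the template produces a single sharp peak at the translated reference point, which is the "universal transform" behaviour the TL;DR refers to.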

4,310 citations

Journal ArticleDOI
TL;DR: A near real-time recognition system with 20 complex objects in the database has been developed and a compact representation of object appearance is proposed that is parametrized by pose and illumination.
Abstract: The problem of automatically learning object models for recognition and pose estimation is addressed. In contrast to the traditional approach, the recognition problem is formulated as one of matching appearance rather than shape. The appearance of an object in a two-dimensional image depends on its shape, reflectance properties, pose in the scene, and the illumination conditions. While shape and reflectance are intrinsic properties and constant for a rigid object, pose and illumination vary from scene to scene. A compact representation of object appearance is proposed that is parametrized by pose and illumination. For each object of interest, a large set of images is obtained by automatically varying pose and illumination. This image set is compressed to obtain a low-dimensional subspace, called the eigenspace, in which the object is represented as a manifold. Given an unknown input image, the recognition system projects the image to eigenspace. The object is recognized based on the manifold it lies on. The exact position of the projection on the manifold determines the object's pose in the image. A variety of experiments are conducted using objects with complex appearance characteristics. The performance of the recognition and pose estimation algorithms is studied using over a thousand input images of sample objects. Sensitivity of recognition to the number of eigenspace dimensions and the number of learning samples is analyzed. For the objects used, appearance representation in eigenspaces with less than 20 dimensions produces accurate recognition results with an average pose estimation error of about 1.0 degree. A near real-time recognition system with 20 complex objects in the database has been developed. The paper is concluded with a discussion on various issues related to the proposed learning and recognition methodology.
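The eigenspace compression step can be sketched with SVD-based PCA. This shows only the subspace construction and projection; the paper's parametrised appearance manifold over pose and illumination, and the nearest-manifold recognition step, are omitted.

```python
import numpy as np

def build_eigenspace(images, k):
    """Compress a set of appearance images (one flattened image per row)
    into a k-dimensional eigenspace via the SVD of the centred data."""
    mean = images.mean(axis=0)
    _, _, Vt = np.linalg.svd(images - mean, full_matrices=False)
    return mean, Vt[:k]                 # mean image, top-k eigenvectors (rows)

def project(image, mean, basis):
    """Project an image into the eigenspace; recognition and pose estimation
    then operate on these low-dimensional coefficients."""
    return basis @ (image - mean)
```

When the image set genuinely lies near a low-dimensional subspace, reconstruction from the k coefficients is nearly lossless, which is why fewer than 20 dimensions suffice in the paper's experiments.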

2,037 citations

Journal ArticleDOI
TL;DR: This paper addresses the problem of retrieving images from large image databases with a method based on local grayvalue invariants computed at automatically detected interest points; indexing allows for efficient retrieval from a database of more than 1,000 images.
Abstract: This paper addresses the problem of retrieving images from large image databases. The method is based on local grayvalue invariants which are computed at automatically detected interest points. A voting algorithm and semilocal constraints make retrieval possible. Indexing allows for efficient retrieval from a database of more than 1,000 images. Experimental results show correct retrieval in the case of partial visibility, similarity transformations, extraneous features, and small perspective deformations.

1,756 citations


"Object recognition from local scale..." refers background or methods in this paper

  • ...This allows for the use of more distinctive image descriptors than the rotation-invariant ones used by Schmid and Mohr, and the descriptor is further modified to improve its stability to changes in affine projection and illumination....

  • ...For the object recognition problem, Schmid & Mohr [19] also used the Harris corner detector to identify interest points, and then created a local image descriptor at each interest point from an orientation-invariant vector of derivative-of-Gaussian image measurements....

  • ...However, recent research on the use of dense local features (e.g., Schmid & Mohr [19]) has shown that efficient recognition can often be achieved by using local image descriptors sampled at a large number of repeatable locations....

Journal ArticleDOI
TL;DR: A robust approach to image matching by exploiting the only available geometric constraint, namely, the epipolar constraint, is proposed and a new strategy for updating matches is developed, which only selects those matches having both high matching support and low matching ambiguity.
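The epipolar constraint used to verify matches can be expressed as a point-to-line distance: a candidate match is supported when the second point lies close to the epipolar line induced by the first. The fundamental matrix F is assumed given in this sketch, whereas estimating it robustly from noisy matches is the paper's actual contribution.

```python
import numpy as np

def epipolar_distance(F, p1, p2):
    """Distance from p2 (in image 2) to the epipolar line F @ p1~ of its
    candidate match p1 (in image 1), with p1~ the homogeneous form of p1."""
    l = F @ np.array([p1[0], p1[1], 1.0])        # epipolar line (a, b, c)
    num = abs(l @ np.array([p2[0], p2[1], 1.0])) # |a*u + b*v + c|
    return num / np.hypot(l[0], l[1])            # normalise by the line's slope terms
```

Thresholding this distance is a common way to weed out false matches before or during the match-updating strategy the TL;DR describes.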

1,574 citations


"Object recognition from local scale..." refers methods in this paper

  • ...[23] used the Harris corner detector to identify feature locations for epipolar alignment of images taken from differing viewpoints....
