Proceedings ArticleDOI

Object recognition from local scale-invariant features

20 Sep 1999 - Vol. 2, pp. 1150-1157
TL;DR: Experimental results show that robust object recognition can be achieved in cluttered, partially occluded images with a computation time of under 2 seconds.
Abstract: An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales. The keys are used as input to a nearest neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low residual least squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered, partially occluded images with a computation time of under 2 seconds.
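
A minimal sketch of the staged filtering idea from the abstract, assuming a grayscale image as a float NumPy array in [0, 1]: stable points are found as local extrema of a difference-of-Gaussian scale space. All parameter values and the function name are illustrative, and the paper's key generation and verification stages are not reproduced here.

```python
# Hedged sketch: DoG scale-space extrema as candidate stable points.
# Parameters (num_scales, sigma, k, threshold) are illustrative only.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_keypoints(image, num_scales=5, sigma=1.6, k=2 ** 0.5, threshold=0.03):
    """Return (row, col, scale_index) triples at DoG scale-space extrema."""
    blurred = [gaussian_filter(image, sigma * k ** i) for i in range(num_scales)]
    dog = [blurred[i + 1] - blurred[i] for i in range(num_scales - 1)]
    keypoints = []
    for s in range(1, len(dog) - 1):
        below, here, above = dog[s - 1], dog[s], dog[s + 1]
        for r in range(1, here.shape[0] - 1):
            for c in range(1, here.shape[1] - 1):
                v = here[r, c]
                if abs(v) < threshold:      # discard low-contrast responses
                    continue
                patch = np.stack([layer[r - 1:r + 2, c - 1:c + 2]
                                  for layer in (below, here, above)])
                if v >= patch.max() or v <= patch.min():   # 3x3x3 extremum
                    keypoints.append((r, c, s))
    return keypoints
```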

Citations
Journal ArticleDOI
TL;DR: A CNN-based tracking algorithm is proposed that hedges deep features from different CNN layers to better distinguish target objects from background clutter, and a Siamese network is designed to define the loss of each weak tracker for the proposed hedge method.
Abstract: Convolutional Neural Networks (CNNs) have been applied to visual tracking with demonstrated success in recent years. Most CNN-based trackers utilize hierarchical features extracted from a certain layer to represent the target. However, features from a certain layer are not always effective for distinguishing the target object from the background, especially in the presence of complicated interfering factors (e.g., heavy occlusion, background clutter, illumination variation, and shape deformation). In this work, we propose a CNN-based tracking algorithm which hedges deep features from different CNN layers to better distinguish target objects from background clutter. Correlation filters are applied to the feature maps of each CNN layer to construct a weak tracker, and all weak trackers are hedged into a strong one. For robust visual tracking, we propose a hedge method that adaptively determines the weights of the weak trackers by considering both the difference between their historical and instantaneous performance and the differences among all weak trackers over time. In addition, we design a Siamese network to define the loss of each weak tracker for the proposed hedge method. Extensive experiments on large benchmark datasets demonstrate the effectiveness of the proposed algorithm against state-of-the-art tracking methods.
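
To make the hedging step concrete, here is a toy sketch of combining weak trackers with a multiplicative-weights (Hedge-style) update. The per-frame losses, the temperature eta, and the box values are placeholders; in the paper the losses come from a learned Siamese network, which is not reproduced here.

```python
# Toy sketch: hedge several weak trackers into one strong prediction.
import numpy as np

def hedge_update(weights, losses, eta=0.5):
    """Multiplicative-weights step: down-weight trackers with high loss."""
    w = weights * np.exp(-eta * losses)
    return w / w.sum()

def fuse_predictions(weights, boxes):
    """Strong-tracker output: weighted average bounding box (x, y, w, h)."""
    return (weights[:, None] * boxes).sum(axis=0)

weights = np.ones(3) / 3                       # three weak trackers, equal start
boxes = np.array([[10.0, 12, 40, 40],          # per-tracker box estimates
                  [11.0, 12, 42, 41],
                  [30.0, 5, 38, 39]])          # an outlying weak tracker
losses = np.array([0.1, 0.2, 0.9])             # placeholder per-frame losses
weights = hedge_update(weights, losses)
print(fuse_predictions(weights, boxes))        # dominated by low-loss trackers
```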

114 citations


Cites methods from "Object recognition from local scale..."

  • ...Empirical studies using a large object tracking benchmark [22] show that the CNN-based trackers perform well against methods using hand-crafted features such as SIFT [23], HOG [16], and color histogram [24]....

Proceedings ArticleDOI
17 Jun 2006
TL;DR: A novel approach named ReSPEC (Re-ranking Sets of Pictures by Exploiting Consistency) is proposed: a hybrid of keyword-based and content-based image retrieval that first retrieves the results of a keyword query from an existing image search engine, clusters the results based on extracted image features, returns the cluster inferred to be most relevant to the query, and ranks the remaining results in order of relevance.
Abstract: Current image search engines on the web rely purely on the keywords around the images and the filenames, which produces a lot of garbage in the search results. Alternatively, there exist methods for content-based image retrieval that require a user to submit a query image and return images that are similar in content. We propose a novel approach named ReSPEC (Re-ranking Sets of Pictures by Exploiting Consistency), which is a hybrid of the two methods. Our algorithm first retrieves the results of a keyword query from an existing image search engine, clusters the results based on extracted image features, and returns the cluster that is inferred to be the most relevant to the search query. Furthermore, it ranks the remaining results in order of relevance.
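
A hedged sketch of the pipeline the abstract describes, assuming each retrieved image has already been reduced to a feature vector: cluster the keyword-search results, keep the largest (most consistent) cluster, and re-rank everything by distance to its centroid. The feature representation and cluster count are assumptions, not the paper's choices.

```python
# Hedged sketch: re-rank keyword-search results by cluster consistency.
import numpy as np
from sklearn.cluster import KMeans

def respec_rerank(features, n_clusters=3):
    """features: (n_images, d) array of image features.
    Returns image indices ordered from most to least relevant."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
    dominant = np.bincount(labels).argmax()           # most populated cluster
    centroid = features[labels == dominant].mean(axis=0)
    dists = np.linalg.norm(features - centroid, axis=1)
    return np.argsort(dists)                          # nearest-to-centroid first
```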

114 citations


Cites background from "Object recognition from local scale..."

  • ...In particular, it would be interesting to see how our approach would perform if feature selection were changed into interest point detection, or maximally stable extremal region (MSER) [11] detection in combination with SIFT region descriptors [10]....

Proceedings ArticleDOI
01 Jan 2019
TL;DR: ToDayGAN, as proposed in this paper, uses a modified image-translation model to alter nighttime driving images to a more useful daytime representation, then compares the translated night images to geo-tagged daytime images to obtain a pose estimate for each night image from the known 6-DOF pose of the closest day image.
Abstract: Visual localization is a key step in many robotics pipelines, allowing the robot to (approximately) determine its position and orientation in the world. An efficient and scalable approach to visual localization is to use image retrieval techniques. These approaches identify the image most similar to a query photo in a database of geo-tagged images and approximate the query's pose via the pose of the retrieved database image. However, image retrieval across drastically different illumination conditions, e.g. day and night, is still a problem with unsatisfactory results, even in this age of powerful neural models. This is due to the lack of a suitably diverse dataset with true correspondences to perform end-to-end learning. A recent class of neural models allows for realistic translation of images among visual domains with relatively little training data and, most importantly, without ground-truth pairings. In this paper, we explore the task of accurately localizing images captured from two traversals of the same area in both day and night. We propose ToDayGAN – a modified image-translation model to alter nighttime driving images to a more useful daytime representation. We then compare the daytime and translated night images to obtain a pose estimate for the night image using the known 6-DOF position of the closest day image. Our approach improves localization performance by over 250% compared to the current state-of-the-art, in the context of standard metrics in multiple categories.
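
The retrieval step the abstract describes can be sketched as below. The generator and the global image descriptor are stubs here (the trained ToDayGAN model and the paper's descriptor are not reproduced); only the nearest-neighbor pose lookup is shown.

```python
# Hedged sketch: retrieval-based localization with a night-to-day translator.
import numpy as np

def night_to_day(image):
    """Stub for the trained ToDayGAN generator (not reproduced here)."""
    return image

def global_descriptor(image):
    """Stub for a retrieval descriptor, e.g., pooled CNN features."""
    return image.reshape(-1).astype(float)

def localize(night_image, day_images, day_poses):
    """Return the 6-DOF pose of the day image closest to the translated query."""
    query = global_descriptor(night_to_day(night_image))
    db = np.stack([global_descriptor(img) for img in day_images])
    nearest = np.linalg.norm(db - query, axis=1).argmin()
    return day_poses[nearest]      # approximate pose for the night image
```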

114 citations

Proceedings Article
02 Jun 2010
TL;DR: A model of multimodal meaning representation based on linguistic and visual context is developed; experimental results show that a closer correspondence to human data can be obtained by taking the visual modality into account.
Abstract: The question of how meaning might be acquired by young children and represented by adult speakers of a language is one of the most debated topics in cognitive science. Existing semantic representation models are primarily amodal, based on information provided by the linguistic input, despite ample evidence indicating that the cognitive system is also sensitive to perceptual information. In this work we exploit the vast resource of images and associated documents available on the web and develop a model of multimodal meaning representation which is based on the linguistic and visual context. Experimental results show that a closer correspondence to human data can be obtained by taking the visual modality into account.
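
One way to read the abstract's model, sketched under assumptions: a word's representation is a concatenation of a normalized linguistic vector and a normalized visual vector (for instance, a bag-of-visual-words histogram over SIFT descriptors from associated images). The mixing weight alpha and both inputs are illustrative; the paper's actual construction differs in detail.

```python
# Hedged sketch: fuse linguistic and visual vectors into one representation.
import numpy as np

def multimodal_vector(text_vec, visual_vec, alpha=0.5):
    """Concatenate unit-normalized modality vectors, weighted by alpha."""
    t = text_vec / (np.linalg.norm(text_vec) + 1e-12)
    v = visual_vec / (np.linalg.norm(visual_vec) + 1e-12)
    return np.concatenate([alpha * t, (1 - alpha) * v])

def cosine(a, b):
    """Similarity between two word representations, for comparison to human data."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
```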

114 citations


Cites methods from "Object recognition from local scale..."

  • ...We compute local image descriptors using the Scale Invariant Feature Transform (SIFT) algorithm (Lowe, 1999)....

  • ...Importantly, SIFT descriptors are designed to be invariant to small shifts in position, changes in illumination, noise, and viewpoint and can be used to perform reliable matching between different views of an object or scene (Mikolajczyk and Schmid, 2003; Lowe, 1999)....

Proceedings ArticleDOI
29 Sep 2014
TL;DR: Sequence Matching Across Route Traversals (SMART) is presented: a generally applicable sequence-based place recognition algorithm that provides invariance to changes in illumination and vehicle speed while also providing moderate pose invariance and robustness to environmental aliasing.
Abstract: This paper presents Sequence Matching Across Route Traversals (SMART), a generally applicable sequence-based place recognition algorithm. SMART provides invariance to changes in illumination and vehicle speed while also providing moderate pose invariance and robustness to environmental aliasing. We evaluate SMART on vehicles travelling at highly variable speeds in two challenging environments: first, on an all-terrain vehicle on an off-road forest track, and second, using a passenger car traversing an urban environment across day and night. We provide comparative results against the current state-of-the-art SeqSLAM algorithm and investigate the effects of altering SMART's image matching parameters. Additionally, we conduct an extensive study of the relationship between image sequence length and SMART's matching performance. Our results show viable place recognition performance in both environments with short 10-metre sequences, and up to 96% recall at 100% precision across extreme day-night cycles when longer image sequences are used.
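
A minimal sketch of the sequence-matching idea, assuming pre-resized, intensity-normalized frames: score each database offset by the summed frame-by-frame difference over a short aligned sequence and keep the lowest-cost alignment. SMART's speed normalization and local contrast enhancement are omitted.

```python
# Hedged sketch: sequence-based place recognition by summed frame differences.
import numpy as np

def sequence_match(query_seq, database, seq_len=10):
    """query_seq, database: lists of equally sized 2D arrays.
    Returns (database index matching the last query frame, its cost)."""
    q = query_seq[-seq_len:]
    best_cost, best_idx = np.inf, -1
    for start in range(len(database) - len(q) + 1):
        cost = sum(np.abs(qf - dbf).mean()            # per-frame difference
                   for qf, dbf in zip(q, database[start:start + len(q)]))
        if cost < best_cost:
            best_cost, best_idx = cost, start + len(q) - 1
    return best_idx, best_cost
```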

114 citations


Cites methods from "Object recognition from local scale..."

  • ...The use of features in such systems, such as SIFT [15] and SURF [16] allows invariance to scale and rotation, but leads to poor performance across extreme perceptual change [9]....

References
Journal ArticleDOI
TL;DR: In this paper, color histograms of multicolored objects provide a robust, efficient cue for indexing into a large database of models, and they can differentiate among a large number of objects.
Abstract: Computer vision is moving into a new era in which the aim is to develop visual skills for robots that allow them to interact with a dynamic, unconstrained environment. To achieve this aim, new kinds of vision algorithms need to be developed which run in real time and subserve the robot's goals. Two fundamental goals are determining the identity of an object with a known location, and determining the location of a known object. Color can be successfully used for both tasks. This dissertation demonstrates that color histograms of multicolored objects provide a robust, efficient cue for indexing into a large database of models. It shows that color histograms are stable object representations in the presence of occlusion and over change in view, and that they can differentiate among a large number of objects. For solving the identification problem, it introduces a technique called Histogram Intersection, which matches model and image histograms, and a fast incremental version of Histogram Intersection, which allows real-time indexing into a large database of stored models. It demonstrates techniques for dealing with crowded scenes and with models with similar color signatures. For solving the location problem it introduces an algorithm called Histogram Backprojection which performs this task efficiently in crowded scenes.
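
Histogram Intersection itself is simple enough to state directly: the match score of an image histogram I against a model histogram M is the sum of bin-wise minima, normalized by M's total count. The bin layout in this sketch (an 8x8x8 RGB histogram) is an illustrative choice.

```python
# Histogram Intersection: sum of bin-wise minima, normalized by the model.
import numpy as np

def histogram_intersection(image_hist, model_hist):
    return np.minimum(image_hist, model_hist).sum() / model_hist.sum()

# Toy usage with 8x8x8 RGB color histograms.
rng = np.random.default_rng(0)
image_hist = rng.integers(0, 50, size=(8, 8, 8))
model_hist = image_hist + rng.integers(0, 5, size=(8, 8, 8))  # similar object
print(histogram_intersection(image_hist, model_hist))         # close to 1.0
```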

5,672 citations

Journal ArticleDOI
TL;DR: It is shown how the boundaries of an arbitrary non-analytic shape can be used to construct a mapping between image space and Hough transform space, which makes the generalized Hough transform a kind of universal transform that can be used to find arbitrarily complex shapes.
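
A hedged sketch of the mapping the TL;DR describes: an R-table built from a model shape maps gradient orientation at each boundary point to the offset from a reference point, and at detection time every edge pixel votes for candidate reference locations; peaks in the accumulator locate the shape. Quantization into 36 orientation bins is an illustrative choice.

```python
# Hedged sketch: generalized Hough transform via an R-table and voting.
import numpy as np
from collections import defaultdict

def orientation_bin(theta, n_bins=36):
    """Quantize an angle in radians into one of n_bins bins."""
    return int(((theta % (2 * np.pi)) / (2 * np.pi)) * n_bins) % n_bins

def build_r_table(boundary_pts, orientations, ref_point, n_bins=36):
    """Map edge orientation -> list of offsets to the model's reference point."""
    table = defaultdict(list)
    for (r, c), theta in zip(boundary_pts, orientations):
        table[orientation_bin(theta, n_bins)].append(
            (ref_point[0] - r, ref_point[1] - c))
    return table

def ght_accumulate(edge_pts, orientations, table, shape, n_bins=36):
    """Vote for reference-point locations; the accumulator peak is the match."""
    acc = np.zeros(shape, dtype=int)
    for (r, c), theta in zip(edge_pts, orientations):
        for dr, dc in table.get(orientation_bin(theta, n_bins), ()):
            rr, cc = r + dr, c + dc
            if 0 <= rr < shape[0] and 0 <= cc < shape[1]:
                acc[rr, cc] += 1
    return acc
```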

4,310 citations

Journal ArticleDOI
TL;DR: A compact representation of object appearance, parametrized by pose and illumination, is proposed, and a near real-time recognition system with 20 complex objects in the database has been developed.
Abstract: The problem of automatically learning object models for recognition and pose estimation is addressed. In contrast to the traditional approach, the recognition problem is formulated as one of matching appearance rather than shape. The appearance of an object in a two-dimensional image depends on its shape, reflectance properties, pose in the scene, and the illumination conditions. While shape and reflectance are intrinsic properties and constant for a rigid object, pose and illumination vary from scene to scene. A compact representation of object appearance is proposed that is parametrized by pose and illumination. For each object of interest, a large set of images is obtained by automatically varying pose and illumination. This image set is compressed to obtain a low-dimensional subspace, called the eigenspace, in which the object is represented as a manifold. Given an unknown input image, the recognition system projects the image to eigenspace. The object is recognized based on the manifold it lies on. The exact position of the projection on the manifold determines the object's pose in the image. A variety of experiments are conducted using objects with complex appearance characteristics. The performance of the recognition and pose estimation algorithms is studied using over a thousand input images of sample objects. Sensitivity of recognition to the number of eigenspace dimensions and the number of learning samples is analyzed. For the objects used, appearance representation in eigenspaces with less than 20 dimensions produces accurate recognition results with an average pose estimation error of about 1.0 degree. A near real-time recognition system with 20 complex objects in the database has been developed. The paper is concluded with a discussion on various issues related to the proposed learning and recognition methodology.
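
A compressed sketch of the recognition path the abstract describes, assuming flattened, brightness-normalized training images with known labels and poses. PCA stands in for the eigenspace construction, and the paper's dense manifold interpolation is simplified to a nearest-training-sample lookup.

```python
# Hedged sketch: eigenspace projection and nearest-neighbor recognition.
import numpy as np

def build_eigenspace(train_images, dim=20):
    """train_images: (n, h*w) rows; returns (mean, top-dim eigenvector basis)."""
    mean = train_images.mean(axis=0)
    # Rows of vt are principal directions of the centered image set.
    _, _, vt = np.linalg.svd(train_images - mean, full_matrices=False)
    return mean, vt[:dim]

def recognize(image, mean, basis, train_proj, labels, poses):
    """Project to eigenspace; identity and pose come from the nearest sample."""
    z = basis @ (image - mean)
    i = np.linalg.norm(train_proj - z, axis=1).argmin()
    return labels[i], poses[i]

# train_proj would be precomputed as: (train_images - mean) @ basis.T
```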

2,037 citations

Journal ArticleDOI
TL;DR: This paper addresses the problem of retrieving images from large image databases with a method based on local grayvalue invariants computed at automatically detected interest points; indexing allows for efficient retrieval from a database of more than 1,000 images.
Abstract: This paper addresses the problem of retrieving images from large image databases. The method is based on local grayvalue invariants which are computed at automatically detected interest points. A voting algorithm and semilocal constraints make retrieval possible. Indexing allows for efficient retrieval from a database of more than 1,000 images. Experimental results show correct retrieval in the case of partial visibility, similarity transformations, extraneous features, and small perspective deformations.
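
A hedged sketch of the voting step the abstract mentions: each query descriptor is matched to its nearest database descriptor, and that match casts one vote for the database image it belongs to. The descriptor, distance threshold, and linear scan (instead of real indexing) are simplifications, and the paper's semilocal constraints are omitted.

```python
# Hedged sketch: image retrieval by descriptor voting.
import numpy as np

def retrieve(query_descs, db_descs, db_image_ids, max_dist=0.2):
    """query_descs: (q, d); db_descs: (n, d); db_image_ids: (n,) image index
    per descriptor. Returns (best image id, vote counts per image)."""
    votes = np.zeros(db_image_ids.max() + 1, dtype=int)
    for q in query_descs:
        dists = np.linalg.norm(db_descs - q, axis=1)   # linear scan, not an index
        j = dists.argmin()
        if dists[j] < max_dist:                        # accept close matches only
            votes[db_image_ids[j]] += 1
    return votes.argmax(), votes
```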

1,756 citations


"Object recognition from local scale..." refers background or methods in this paper

  • ...This allows for the use of more distinctive image descriptors than the rotation-invariant ones used by Schmid and Mohr, and the descriptor is further modified to improve its stability to changes in affine projection and illumination....

    [...]

  • ...For the object recognition problem, Schmid & Mohr [19] also used the Harris corner detector to identify interest points, and then created a local image descriptor at each interest point from an orientation-invariant vector of derivative-of-Gaussian image measurements....

  • ...However, recent research on the use of dense local features (e.g., Schmid & Mohr [19]) has shown that efficient recognition can often be achieved by using local image descriptors sampled at a large number of repeatable locations....

Journal ArticleDOI
TL;DR: A robust approach to image matching that exploits the only available geometric constraint, namely the epipolar constraint, is proposed, and a new strategy for updating matches is developed that selects only those matches having both high matching support and low matching ambiguity.
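
The epipolar constraint named in the TL;DR can be sketched as a match filter: given a fundamental matrix F relating the two views, a candidate correspondence (x, x') is kept only if x' lies near the epipolar line Fx. Here F is assumed to be estimated elsewhere (the paper recovers it robustly during matching), and the pixel threshold is illustrative.

```python
# Hedged sketch: filter candidate matches with the epipolar constraint.
import numpy as np

def epipolar_distance(x1, x2, F):
    """x1, x2: homogeneous image points (3,). Distance from x2 to the
    epipolar line of x1 in the second image."""
    line = F @ x1
    return abs(x2 @ line) / np.hypot(line[0], line[1])

def filter_matches(pts1, pts2, F, threshold=1.5):
    """pts1, pts2: lists of (x, y) candidate correspondences."""
    keep = []
    for p1, p2 in zip(pts1, pts2):
        x1, x2 = np.append(p1, 1.0), np.append(p2, 1.0)
        if epipolar_distance(x1, x2, F) < threshold:   # near the epipolar line
            keep.append((p1, p2))
    return keep
```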

1,574 citations


"Object recognition from local scale..." refers methods in this paper

  • ...[23] used the Harris corner detector to identify feature locations for epipolar alignment of images taken from differing viewpoints....
