Proceedings ArticleDOI

Object recognition from local scale-invariant features

20 Sep 1999-Vol. 2, pp 1150-1157
TL;DR: Experimental results show that robust object recognition can be achieved in cluttered, partially occluded images with a computation time of under 2 seconds.
Abstract: An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales. The keys are used as input to a nearest neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low-residual least squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered, partially occluded images with a computation time of under 2 seconds.
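The staged filtering the abstract describes can be sketched as follows: build a difference-of-Gaussian stack and keep pixels that are extrema over their 3x3x3 space/scale neighbourhood. This is a minimal sketch only; the sigma schedule, contrast threshold, and test parameters below are illustrative choices, not the paper's exact values, and a full system would add orientation assignment and the multi-orientation-plane keys before indexing.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_pyramid(image, sigma0=1.6, k=2 ** 0.5, levels=5):
    # Adjacent Gaussian blurs subtracted pairwise approximate the
    # scale-normalized Laplacian used to find stable scale-space points.
    blurred = [gaussian_filter(image, sigma0 * k ** i) for i in range(levels)]
    return [blurred[i + 1] - blurred[i] for i in range(levels - 1)]

def local_extrema(dogs, threshold=0.02):
    # First filtering stage: a candidate key location is a pixel whose
    # response is an extremum among its 26 neighbours in space and scale.
    stack = np.stack(dogs)  # shape (scales, H, W)
    keypoints = []
    for s in range(1, stack.shape[0] - 1):
        for y in range(1, stack.shape[1] - 1):
            for x in range(1, stack.shape[2] - 1):
                v = stack[s, y, x]
                if abs(v) < threshold:
                    continue  # reject low-contrast candidates
                patch = stack[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
                if v == patch.max() or v == patch.min():
                    keypoints.append((s, y, x))
    return keypoints
```

On a synthetic Gaussian blob the detector fires at the blob centre, at the scale level closest to the blob's own size.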

Citations
Proceedings ArticleDOI
19 May 2015
TL;DR: This work addresses the problem of facial spoofing detection against replay attacks based on the analysis of aliasing in spoof face videos and shows that the proposed approach is very effective in face spoof detection for both cross-database, and intra-database testing scenarios.
Abstract: With the wide deployment of face recognition systems in applications from border control to mobile device unlocking, combating face spoofing attacks requires increased attention; such attacks can be easily launched via printed photos, video replays and 3D masks. We address the problem of facial spoofing detection against replay attacks based on the analysis of aliasing in spoof face videos. The application domain of interest is mobile phone unlock. We analyze the moiré pattern aliasing that commonly appears during the recapture of video or photo replays on a screen in different channels (R, G, B and grayscale) and regions (the whole frame, detected face, and facial component between the nose and chin). Multi-scale LBP and DSIFT features are used to represent the characteristics of moiré patterns that differentiate a replayed spoof face from a live face (face present). Experimental results on the Idiap replay-attack and CASIA databases as well as a database collected in our laboratory (RAFS), which is based on the MSU-FSD database, show that the proposed approach is very effective in face spoof detection for both cross-database and intra-database testing scenarios.
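The multi-scale LBP features mentioned above build on the basic local binary pattern, where each pixel is encoded by thresholding its neighbours against the centre value. A minimal single-scale sketch (illustrative only, not the paper's exact multi-scale configuration):

```python
import numpy as np

def lbp_codes(image):
    # Basic 8-neighbour LBP: bit i of a pixel's code is set when
    # neighbour i is >= the centre value; the code describes local texture.
    c = image[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=np.int32)
    h, w = image.shape
    for bit, (dy, dx) in enumerate(offsets):
        n = image[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes += (n >= c).astype(np.int32) << bit
    return codes

def lbp_histogram(image):
    # The 256-bin normalized code histogram is the texture feature vector;
    # a multi-scale variant concatenates such histograms computed at
    # several neighbourhood radii or image scales.
    hist, _ = np.histogram(lbp_codes(image), bins=256, range=(0, 256))
    return hist / hist.sum()
```

A perfectly flat region produces the all-ones code 255 everywhere, since ties count as "greater or equal".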

99 citations


Cites methods from "Object recognition from local scale..."

  • ...The feature vectors used in our experiments include MLBP, DSIFT, and the concatenation of MLBP and DSIFT extracted from three different regions....

  • ...In most FR literature, the MLBP and SIFT features are usually extracted from the grayscale (intensity) images....

  • ...A concatenation of the MLBP and DSIFT features extracted from the intensity face image was used....

  • ...To show the robustness of the proposed approach against different texture descriptors, we also used densely sampled SIFT (DSIFT) features in our experiments....

  • ...This inspired us to capture moiré patterns using a number of well known texture descriptors, such as MLBP [12] and SIFT [10] to use for spoof detection....

Journal ArticleDOI
TL;DR: A low-cost technique for analysing cut mark micromorphology from a three-dimensional perspective is introduced, providing a high-resolution approach to cut mark characterisation (morphology, depth, width, angle estimation, and section determination) measured directly on the marks on bones.

99 citations

Proceedings ArticleDOI
01 Oct 2019
TL;DR: This work proposes a novel approach for estimating the difficulty and transferability of supervised classification tasks using an information-theoretic approach, treating training labels as random variables and exploring their statistics, and provides results showing that these hardness and transferability estimates are strongly correlated with empirical hardness and transferability.
Abstract: We propose a novel approach for estimating the difficulty and transferability of supervised classification tasks. Unlike previous work, our approach is solution agnostic and does not require or assume trained models. Instead, we estimate these values using an information theoretic approach: treating training labels as random variables and exploring their statistics. When transferring from a source to a target task, we consider the conditional entropy between two such variables (i.e., label assignments of the two tasks). We show analytically and empirically that this value is related to the loss of the transferred model. We further show how to use this value to estimate task hardness. We test our claims extensively on three large scale data sets---CelebA (40 tasks), Animals with Attributes~2 (85 tasks), and Caltech-UCSD Birds~200 (312 tasks)---together representing 437 classification tasks. We provide results showing that our hardness and transferability estimates are strongly correlated with empirical hardness and transferability. As a case study, we transfer a learned face recognition model to CelebA attribute classification tasks, showing state of the art accuracy for highly transferable attributes.
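The conditional entropy between two label assignments can be estimated directly from paired labels via the empirical joint distribution. A minimal sketch of that estimate (the function name and base-2 units are illustrative choices, not the paper's notation):

```python
import numpy as np
from collections import Counter

def conditional_entropy(source_labels, target_labels):
    # Empirical H(target | source) in bits, computed from the joint
    # distribution of (source, target) label pairs; the paper relates
    # lower values to an easier transfer from source to target task.
    n = len(source_labels)
    joint = Counter(zip(source_labels, target_labels))
    source_counts = Counter(source_labels)
    h = 0.0
    for (s, _), count in joint.items():
        p_joint = count / n
        p_target_given_source = count / source_counts[s]
        h -= p_joint * np.log2(p_target_given_source)
    return h
```

When the target labels are fully determined by the source labels the estimate is zero; when they are independent and uniform, a full bit of uncertainty remains.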

98 citations


Cites methods from "Object recognition from local scale..."

  • ...Our result even holds when the features are fixed, as when using tailored representations such as SIFT [35]....

Journal ArticleDOI
TL;DR: A spatially weighted pooling (SWP) strategy is proposed, which considerably improves the robustness and effectiveness of the feature representation of most dominant DCNNs and can achieve better performance than recent approaches in the literature.
Abstract: Fine-grained car recognition aims to recognize the category information of a car, such as car make, car model, or even the year of manufacture. A number of recent studies have shown that a deep convolutional neural network (DCNN) trained on a large-scale data set can achieve impressive results at a range of generic object classification tasks. In this paper, we propose a spatially weighted pooling (SWP) strategy, which considerably improves the robustness and effectiveness of the feature representation of most dominant DCNNs. More specifically, the SWP is a novel pooling layer, which contains a predefined number of spatially weighted masks or pooling channels. The SWP pools the extracted features of DCNNs with the guidance of its learnt masks, which measure the importance of the spatial units in terms of discriminative power. As with existing methods that apply uniform grid pooling to the convolutional feature maps of DCNNs, the proposed method can extract the convolutional features and generate the pooling channels from a single DCNN. Thus, minimal modification is needed in terms of implementation. Moreover, the parameters of the SWP layer can be learned in the end-to-end training process of the DCNN. By applying our method to several fine-grained car recognition data sets, we demonstrate that the proposed method can achieve better performance than recent approaches in the literature. We advance the state-of-the-art results by improving the accuracy from 92.6% to 93.1% on the Stanford Cars-196 data set and 91.2% to 97.6% on the recent CompCars data set. We have also tested the proposed method on two additional large-scale data sets with impressive results observed.
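The pooling operation itself reduces to a weighted sum over spatial positions, one per mask and channel. A sketch of that forward step only; in the paper the masks are parameters learned end-to-end, whereas here they are simply given arrays:

```python
import numpy as np

def spatially_weighted_pool(features, masks):
    # features: (C, H, W) convolutional feature maps from a DCNN.
    # masks:    (M, H, W) spatial weight masks (learned in the paper).
    # Each mask pools every channel as a weighted sum over spatial
    # positions, producing an (M * C,) feature vector.
    pooled = np.einsum('mhw,chw->mc', masks, features)
    return pooled.reshape(-1)
```

With a single uniform mask this degenerates to global average pooling, which makes the generalisation easy to check.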

98 citations


Cites background from "Object recognition from local scale..."

  • ...Another line of research focuses on the robust feature representation of images, such as the VLAD [9], Fisher vector [10] with SIFT features [11]....

Journal ArticleDOI
TL;DR: This new Zernike comparator provides a more accurate similarity measure together with the optimal rotation angle between the patterns, while keeping the same complexity as the classical approach.
Abstract: Zernike moments constitute a powerful shape descriptor in terms of robustness and description capability. However, the classical way of comparing two Zernike descriptors only takes into account the magnitude of the moments and loses the phase information. The novelty of our approach is to take advantage of the phase information in the comparison process while still preserving the invariance to rotation. This new Zernike comparator provides a more accurate similarity measure together with the optimal rotation angle between the patterns, while keeping the same complexity as the classical approach. This angle information is of particular interest for many applications, including 3D scene understanding through images. Experiments demonstrate that our comparator outperforms the classical one in terms of similarity measure. In particular, the robustness of the retrieval against noise and geometric deformation is greatly improved. Moreover, the rotation angle estimation is also more accurate than state of the art algorithms.
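The phase-based angle recovery rests on the fact that rotating a pattern by theta multiplies its Zernike moment of repetition m by exp(-1j*m*theta). A minimal sketch for a single moment; the paper combines all moments jointly for robustness, and the sign convention used here is one common choice, not necessarily the paper's:

```python
import numpy as np

def estimate_rotation(moment_ref, moment_rot, m):
    # Rotation by theta turns A_nm into A_nm * exp(-1j * m * theta), so the
    # phase difference divided by m recovers theta, up to the inherent
    # 2*pi/m ambiguity of using a single repetition m.
    return (np.angle(moment_ref) - np.angle(moment_rot)) / m
```

Feeding in a synthetic moment and its rotated counterpart recovers the rotation angle exactly.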

98 citations


Cites background or methods from "Object recognition from local scale..."

  • ...• For the geometric hashing, we need feature points, hence we have extracted Harris [20] and DoG keypoints [21], [12] in each pattern....

  • ...A lot of work has been done for angle/similarity recognition using keypoint-based local descriptors like SIFT [12], however, this kind of tools works only on textured objects and fails to describe smooth shapes or drawings (i.e. sketch) for instance....

  • ...The computation of Zernike moments is made faster by precomputing a set of 100×100 Zernike filters and by applying a fast smoothing approach along scales (like pyramids of Gaussian in [12]) to quickly sample each 100×100 window....

References
Journal ArticleDOI
TL;DR: Color histograms of multicolored objects provide a robust, efficient cue for indexing into a large database of models, and they can differentiate among a large number of objects.
Abstract: Computer vision is moving into a new era in which the aim is to develop visual skills for robots that allow them to interact with a dynamic, unconstrained environment. To achieve this aim, new kinds of vision algorithms need to be developed which run in real time and subserve the robot's goals. Two fundamental goals are determining the identity of an object with a known location, and determining the location of a known object. Color can be successfully used for both tasks. This dissertation demonstrates that color histograms of multicolored objects provide a robust, efficient cue for indexing into a large database of models. It shows that color histograms are stable object representations in the presence of occlusion and over change in view, and that they can differentiate among a large number of objects. For solving the identification problem, it introduces a technique called Histogram Intersection, which matches model and image histograms and a fast incremental version of Histogram Intersection which allows real-time indexing into a large database of stored models. It demonstrates techniques for dealing with crowded scenes and with models with similar color signatures. For solving the location problem it introduces an algorithm called Histogram Backprojection which performs this task efficiently in crowded scenes.
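Histogram Intersection itself is a one-line operation: sum the bin-wise minima of the image and model histograms. A minimal sketch; normalizing by the model histogram's total count follows the common formulation:

```python
import numpy as np

def histogram_intersection(image_hist, model_hist):
    # Bins present in both histograms contribute their overlap; the score
    # is 1.0 for a perfect match and degrades gracefully under occlusion,
    # since occluded pixels only remove counts rather than adding noise.
    return np.minimum(image_hist, model_hist).sum() / model_hist.sum()
```

Identical histograms score 1.0, disjoint ones 0.0, and partial overlap falls in between in proportion to the shared counts.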

5,672 citations

Journal ArticleDOI
TL;DR: It is shown how the boundaries of an arbitrary non-analytic shape can be used to construct a mapping between image space and Hough transform space, which makes the generalized Hough transform a kind of universal transform that can be used to find arbitrarily complex shapes.

4,310 citations

Journal ArticleDOI
TL;DR: A near real-time recognition system with 20 complex objects in the database has been developed and a compact representation of object appearance is proposed that is parametrized by pose and illumination.
Abstract: The problem of automatically learning object models for recognition and pose estimation is addressed. In contrast to the traditional approach, the recognition problem is formulated as one of matching appearance rather than shape. The appearance of an object in a two-dimensional image depends on its shape, reflectance properties, pose in the scene, and the illumination conditions. While shape and reflectance are intrinsic properties and constant for a rigid object, pose and illumination vary from scene to scene. A compact representation of object appearance is proposed that is parametrized by pose and illumination. For each object of interest, a large set of images is obtained by automatically varying pose and illumination. This image set is compressed to obtain a low-dimensional subspace, called the eigenspace, in which the object is represented as a manifold. Given an unknown input image, the recognition system projects the image to eigenspace. The object is recognized based on the manifold it lies on. The exact position of the projection on the manifold determines the object's pose in the image. A variety of experiments are conducted using objects with complex appearance characteristics. The performance of the recognition and pose estimation algorithms is studied using over a thousand input images of sample objects. Sensitivity of recognition to the number of eigenspace dimensions and the number of learning samples is analyzed. For the objects used, appearance representation in eigenspaces with less than 20 dimensions produces accurate recognition results with an average pose estimation error of about 1.0 degree. A near real-time recognition system with 20 complex objects in the database has been developed. The paper is concluded with a discussion on various issues related to the proposed learning and recognition methodology.
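The eigenspace construction described above amounts to PCA on flattened training images. A minimal sketch using an SVD-based variant, with illustrative function names (the paper additionally interpolates the projected training samples into pose/illumination manifolds, which is omitted here):

```python
import numpy as np

def build_eigenspace(images, k):
    # Stack flattened training images, centre them on the mean image, and
    # keep the top-k principal directions as the eigenspace basis.
    X = np.stack([im.ravel() for im in images])   # (N, D)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]                           # basis: (k, D)

def project(image, mean, basis):
    # Recognition projects an unknown image into the eigenspace; the
    # nearest manifold then identifies the object, and the position on
    # that manifold gives its pose.
    return basis @ (image.ravel() - mean)
```

As a sanity check, the mean image projects to the origin of the eigenspace.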

2,037 citations

Journal ArticleDOI
TL;DR: This paper addresses the problem of retrieving images from large image databases with a method based on local grayvalue invariants which are computed at automatically detected interest points and allows for efficient retrieval from a database of more than 1,000 images.
Abstract: This paper addresses the problem of retrieving images from large image databases. The method is based on local grayvalue invariants which are computed at automatically detected interest points. A voting algorithm and semilocal constraints make retrieval possible. Indexing allows for efficient retrieval from a database of more than 1,000 images. Experimental results show correct retrieval in the case of partial visibility, similarity transformations, extraneous features, and small perspective deformations.

1,756 citations


"Object recognition from local scale..." refers background or methods in this paper

  • ...This allows for the use of more distinctive image descriptors than the rotation-invariant ones used by Schmid and Mohr, and the descriptor is further modified to improve its stability to changes in affine projection and illumination....

  • ...For the object recognition problem, Schmid & Mohr [19] also used the Harris corner detector to identify interest points, and then created a local image descriptor at each interest point from an orientation-invariant vector of derivative-of-Gaussian image measurements....

  • ...However, recent research on the use of dense local features (e.g., Schmid & Mohr [19]) has shown that efficient recognition can often be achieved by using local image descriptors sampled at a large number of repeatable locations....

Journal ArticleDOI
TL;DR: A robust approach to image matching by exploiting the only available geometric constraint, namely, the epipolar constraint, is proposed and a new strategy for updating matches is developed, which only selects those matches having both high matching support and low matching ambiguity.

1,574 citations


"Object recognition from local scale..." refers methods in this paper

  • ...[23] used the Harris corner detector to identify feature locations for epipolar alignment of images taken from differing viewpoints....
