Proceedings ArticleDOI

Object recognition from local scale-invariant features

20 Sep 1999, Vol. 2, pp. 1150-1157
TL;DR: Experimental results show that robust object recognition can be achieved in cluttered, partially occluded images with a computation time of under 2 seconds.
Abstract: An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales. The keys are used as input to a nearest neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low residual least squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered, partially occluded images with a computation time of under 2 seconds.
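The staged filtering described in the abstract can be pictured with a short sketch: candidate keypoints are taken as local extrema of a difference-of-Gaussian (DoG) stack, compared against their 26 neighbors in space and scale. This is an illustrative reconstruction only (function names and parameters are assumptions); Lowe's full pipeline additionally works on image octaves and rejects low-contrast and edge responses.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_keypoints(image, num_scales=5, sigma0=1.6, k=2 ** 0.5):
    """Candidate keypoints as 3D extrema of a difference-of-Gaussian stack."""
    # Progressively blurred copies of the image.
    blurred = [gaussian_filter(image.astype(float), sigma0 * k ** i)
               for i in range(num_scales)]
    # Differences of adjacent scales approximate the Laplacian of Gaussian.
    dogs = np.stack([blurred[i + 1] - blurred[i] for i in range(num_scales - 1)])
    keypoints = []
    for s in range(1, dogs.shape[0] - 1):
        for y in range(1, dogs.shape[1] - 1):
            for x in range(1, dogs.shape[2] - 1):
                cube = dogs[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
                v = dogs[s, y, x]
                # Keep points that are extrema of their 3x3x3 neighborhood.
                if v == cube.max() or v == cube.min():
                    keypoints.append((x, y, s))
    return keypoints
```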


Citations
Proceedings ArticleDOI
17 Dec 2015
TL;DR: Experimental evaluation shows that the proposed method yields a recognition rate comparable to the state of the art, while its complexity is sub-linear in the number of templates.
Abstract: Despite their ubiquitous presence, texture-less objects present significant challenges to contemporary visual object detection and localization algorithms. This paper proposes a practical method for the detection and accurate 3D localization of multiple texture-less and rigid objects depicted in RGB-D images. The detection procedure adopts the sliding window paradigm, with an efficient cascade-style evaluation of each window location. A simple pre-filtering is performed first, rapidly rejecting most locations. For each remaining location, a set of candidate templates (i.e. trained object views) is identified with a voting procedure based on hashing, which makes the method's computational complexity largely unaffected by the total number of known objects. The candidate templates are then verified by matching feature points in different modalities. Finally, the approximate object pose associated with each detected template is used as a starting point for a stochastic optimization procedure that estimates accurate 3D pose. Experimental evaluation shows that the proposed method yields a recognition rate comparable to the state of the art, while its complexity is sub-linear in the number of templates.
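A toy sketch of the hashing-based voting step may help: measurements sampled at fixed positions in a window are coarsely quantized into a hash key, and every template stored under that key receives a vote. All names, the measurements, and the quantization scheme here are hypothetical; the paper's actual table layout differs.

```python
from collections import defaultdict

def quantize(values, step=0.25):
    """Coarse quantization so that similar measurement vectors share a key."""
    return tuple(int(v / step) for v in values)

class TemplateHashIndex:
    """Toy index: quantized measurement tuples vote for candidate templates."""

    def __init__(self):
        self.table = defaultdict(set)

    def add_template(self, template_id, measurement_tuples):
        # Each trained object view registers several measurement tuples.
        for m in measurement_tuples:
            self.table[quantize(m)].add(template_id)

    def candidates(self, window_measurements, min_votes=2):
        # Every hash hit is one vote; a window keeps templates with enough votes.
        votes = defaultdict(int)
        for m in window_measurements:
            for template_id in self.table.get(quantize(m), ()):
                votes[template_id] += 1
        return [t for t, v in votes.items() if v >= min_votes]
```

This keeps the per-window cost roughly independent of the number of known objects, which is the property the abstract highlights.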

112 citations


Cites background from "Object recognition from local scale..."

  • ...Instead, research in object recognition concentrated on approaches based on viewpoint-invariant local features obtained from objects rich in texture [5]....


Proceedings ArticleDOI
14 Apr 2008
TL;DR: The results show the improved parallel SIFT implementation can process general video images in super-real-time on a dual-socket, quad-core system, and is much faster than implementations on GPUs.
Abstract: Scale invariant feature transform (SIFT) is an approach for extracting distinctive invariant features from images, and it has been successfully applied to many computer vision problems (e.g. face recognition and object detection). However, SIFT feature extraction is compute-intensive, and real-time or even super-real-time processing capability is required in many emerging scenarios. Nowadays, with multi-core processors becoming mainstream, SIFT can be accelerated by fully utilizing the computing power of available multi-core processors. In this paper, we propose two parallel SIFT algorithms and present optimization techniques to improve the implementation's performance on multi-core systems. The results show our improved parallel SIFT implementation can process general video images in super-real-time on a dual-socket, quad-core system, much faster than implementations on GPUs. We also conduct a detailed scalability and memory performance analysis on the 8-core system and on a 32-core chip multiprocessor (CMP) simulator. The analysis helps us identify possible causes of bottlenecks, and we suggest avenues for scalability improvement to make this application more powerful on future large-scale multi-core systems.
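The paper's own parallel algorithms are not reproduced here, but the coarse-grained idea of spreading feature extraction across cores can be sketched as follows, assuming OpenCV's SIFT as the per-tile extractor (an assumption; the authors use their own implementation). Tiles overlap so keypoints near tile borders are not lost.

```python
import cv2
from concurrent.futures import ThreadPoolExecutor

def parallel_sift(frame, rows=2, cols=4, overlap=16):
    """Extract SIFT features from overlapping tiles of a frame in parallel."""
    h, w = frame.shape[:2]
    jobs = []
    for r in range(rows):
        for c in range(cols):
            # Tile bounds, padded by `overlap` pixels on each side.
            y0, y1 = max(r * h // rows - overlap, 0), min((r + 1) * h // rows + overlap, h)
            x0, x1 = max(c * w // cols - overlap, 0), min((c + 1) * w // cols + overlap, w)
            jobs.append((frame[y0:y1, x0:x1], x0, y0))

    def work(job):
        tile, ox, oy = job
        sift = cv2.SIFT_create()          # one detector per task, to stay thread-safe
        kps, descs = sift.detectAndCompute(tile, None)
        for kp in kps:                    # map keypoints back to frame coordinates
            kp.pt = (kp.pt[0] + ox, kp.pt[1] + oy)
        return kps, descs

    # OpenCV releases the GIL inside native calls, so threads can use all cores.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(work, jobs))
```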

112 citations

Journal ArticleDOI
TL;DR: This paper proposes a novel method, called Multi-view Dimensionality co-Reduction, which explores the correlations within each view independently, maximizes the dependence among different views with kernel matching jointly, and flexibly exploits the complementarity of multiple views during dimensionality reduction.
Abstract: Dimensionality reduction aims to map high-dimensional inputs onto a low-dimensional subspace, in which similar points are close to each other and vice versa. In this paper, we focus on unsupervised dimensionality reduction for data with multiple views, and propose a novel method called Multi-view Dimensionality co-Reduction. Our method flexibly exploits the complementarity of multiple views during dimensionality reduction and respects the similarity relationships between data points across these different views. The kernel matching constraint, based on the Hilbert-Schmidt Independence Criterion, enhances the correlations and penalizes the disagreement of different views. Specifically, our method explores the correlations within each view independently, and maximizes the dependence among different views with kernel matching jointly. Thus, the locality within each view and the consistency between different views are guaranteed in the subspaces corresponding to different views. More importantly, benefiting from the kernel matching, our method need not depend on a common low-dimensional subspace, which is critical for reducing the influence of the unbalanced dimensionalities of multiple views. In addition, our method explicitly produces individual low-dimensional projections for individual views, which can be applied to newly arriving data in an out-of-sample manner. Experiments on both clustering and recognition tasks demonstrate the advantages of the proposed method over state-of-the-art approaches.
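The kernel-matching constraint rests on the empirical Hilbert-Schmidt Independence Criterion, which fits in a few lines. This is a sketch of the criterion itself (RBF kernels and this estimator form are standard choices, not necessarily the authors' exact configuration), not their full optimization.

```python
import numpy as np

def hsic(X, Y, gamma=1.0):
    """Empirical HSIC between two views X, Y of shape (n, d_x) and (n, d_y).

    Larger values indicate stronger statistical dependence between the views.
    """
    n = X.shape[0]

    def rbf(A):
        # Pairwise squared distances, then a Gaussian (RBF) kernel matrix.
        sq = np.sum(A ** 2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * (A @ A.T)
        return np.exp(-gamma * d2)

    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    K, L = rbf(X), rbf(Y)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```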

112 citations


Cites methods from "Object recognition from local scale..."

  • ...Take images/videos for example, they are often described with different visual descriptors, such as SIFT [1], Gabor [2] and LBP [3]....


Journal ArticleDOI
01 Aug 2020
TL;DR: A methodology based on color image visualization and a deep convolutional neural network is proposed for in-depth malware analysis; experiments indicate that the proposed method surpasses previous machine learning and deep learning methods in prediction time and detection accuracy.
Abstract: Industrial Internet of Things (IIoT) devices can now be deployed to monitor the flow of data, its sources of collection, and its supervision across large, complex networks. They implement large networks of connected smart devices for sending and receiving data. Malware threats, which primarily target conventional computers linked to the Internet, can also target IoT machines. Therefore, a smart protection approach is needed to protect millions of IIoT users against malicious attacks. On the other hand, existing state-of-the-art malware identification methods remain unsatisfactory in terms of computational complexity. In this paper, we design an architecture to detect malware attacks on the Industrial Internet of Things (MD-IIOT). For an in-depth analysis of malware, a methodology is proposed based on color image visualization and a deep convolutional neural network. The findings of the proposed method are compared to former approaches to malware detection. The experimental results indicate that the proposed method's prediction time and detection accuracy improve on those of previous machine learning and deep learning methods.
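One simple way to realize the color-image visualization step is to let consecutive bytes of a binary fill the rows of a fixed-width RGB image, which is then fed to the CNN. The byte-to-pixel mapping below is an assumption for illustration, not necessarily the paper's exact scheme.

```python
import numpy as np

def malware_to_rgb(path, width=256):
    """Render a binary file as an RGB image for CNN-based classification."""
    with open(path, "rb") as f:
        data = np.frombuffer(f.read(), dtype=np.uint8)
    rows = len(data) // (width * 3)       # three bytes per pixel
    # Drop the trailing remainder and reshape into (rows, width, 3).
    return data[:rows * width * 3].reshape(rows, width, 3)
```

In practice the resulting images would be resized or padded to the network's fixed input size before training.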

112 citations

Journal ArticleDOI
TL;DR: The authors' land use classification is a 2-step approach that uses RGB and NIR images for an initial classification and the panchromatic images as well as a digital surface model (DSM) for a refined classification.
Abstract: This paper describes the fusion of information extracted from multispectral digital aerial images for highly automatic 3D map generation. The proposed approach integrates spectral classification and 3D reconstruction techniques. The multispectral digital aerial images consist of a high-resolution panchromatic channel as well as lower-resolution RGB and near infrared (NIR) channels, and form the basis for information extraction. Our land use classification is a 2-step approach that uses the RGB and NIR images for an initial classification and the panchromatic images as well as a digital surface model (DSM) for a refined classification. The DSM is generated from the high-resolution panchromatic images of a specific photo mission. Based on the aerial triangulation using area- and feature-based points of interest, the algorithms are able to generate a dense DSM through a dense image matching procedure. Afterwards, a true ortho image for classification can be computed from the panchromatic or color input images. In a last step, specific layers for buildings and vegetation are generated and the classification is updated.
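A simplified sketch of the initial classification step, assuming an NDVI threshold on the RGB and NIR channels to separate vegetation from other land cover (threshold, band layout, and class labels are assumptions; the refined step using the panchromatic band and the DSM is not shown):

```python
import numpy as np

def initial_land_use(rgb, nir, ndvi_threshold=0.3):
    """First-pass land use labels from RGB + NIR imagery."""
    red = rgb[..., 0].astype(float)
    nir = nir.astype(float)
    # Normalized difference vegetation index; small epsilon avoids division by zero.
    ndvi = (nir - red) / (nir + red + 1e-6)
    labels = np.where(ndvi > ndvi_threshold, 1, 0)  # 1 = vegetation, 0 = other
    return labels, ndvi
```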

112 citations

References
Journal ArticleDOI
TL;DR: In this paper, color histograms of multicolored objects provide a robust, efficient cue for indexing into a large database of models, and they can differentiate among a large number of objects.
Abstract: Computer vision is moving into a new era in which the aim is to develop visual skills for robots that allow them to interact with a dynamic, unconstrained environment. To achieve this aim, new kinds of vision algorithms need to be developed which run in real time and subserve the robot's goals. Two fundamental goals are determining the identity of an object with a known location, and determining the location of a known object. Color can be successfully used for both tasks. This dissertation demonstrates that color histograms of multicolored objects provide a robust, efficient cue for indexing into a large database of models. It shows that color histograms are stable object representations in the presence of occlusion and over change in view, and that they can differentiate among a large number of objects. For solving the identification problem, it introduces a technique called Histogram Intersection, which matches model and image histograms and a fast incremental version of Histogram Intersection which allows real-time indexing into a large database of stored models. It demonstrates techniques for dealing with crowded scenes and with models with similar color signatures. For solving the location problem it introduces an algorithm called Histogram Backprojection which performs this task efficiently in crowded scenes.
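Histogram Intersection itself is nearly a one-liner: the match score between image histogram I and model histogram M is sum_j min(I_j, M_j), normalized by the model histogram's total count.

```python
import numpy as np

def histogram_intersection(image_hist, model_hist):
    """Swain-Ballard Histogram Intersection; both histograms share one binning.

    The normalized score lies in [0, 1]; 1 means the image fully accounts
    for the model's colors.
    """
    return np.minimum(image_hist, model_hist).sum() / model_hist.sum()
```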

5,672 citations

Journal ArticleDOI
TL;DR: It is shown how the boundaries of an arbitrary non-analytic shape can be used to construct a mapping between image space and Hough transform space, which makes the generalized Hough transform a kind of universal transform that can be used to find arbitrarily complex shapes.
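A compact sketch of the R-table mechanism behind the generalized Hough transform: model boundary points, indexed by quantized gradient angle, store their offsets to a reference point; edge points in a target image then vote those offsets back into an accumulator. Variable names are illustrative, and angles are assumed to lie in [0, 2π).

```python
import numpy as np
from collections import defaultdict

def build_r_table(boundary_points, gradient_angles, reference, bins=36):
    """R-table: quantized gradient angle -> offsets to the reference point."""
    table = defaultdict(list)
    for (x, y), theta in zip(boundary_points, gradient_angles):
        key = int(theta / (2 * np.pi) * bins) % bins
        table[key].append((reference[0] - x, reference[1] - y))
    return table

def ght_accumulate(edge_points, edge_angles, table, shape, bins=36):
    """Each edge point votes for candidate reference-point locations."""
    acc = np.zeros(shape, dtype=int)
    for (x, y), theta in zip(edge_points, edge_angles):
        key = int(theta / (2 * np.pi) * bins) % bins
        for dx, dy in table.get(key, ()):
            u, v = x + dx, y + dy
            if 0 <= u < shape[1] and 0 <= v < shape[0]:
                acc[v, u] += 1
    return acc   # peaks mark likely instances of the shape
```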

4,310 citations

Journal ArticleDOI
TL;DR: A near real-time recognition system with 20 complex objects in the database has been developed and a compact representation of object appearance is proposed that is parametrized by pose and illumination.
Abstract: The problem of automatically learning object models for recognition and pose estimation is addressed. In contrast to the traditional approach, the recognition problem is formulated as one of matching appearance rather than shape. The appearance of an object in a two-dimensional image depends on its shape, reflectance properties, pose in the scene, and the illumination conditions. While shape and reflectance are intrinsic properties and constant for a rigid object, pose and illumination vary from scene to scene. A compact representation of object appearance is proposed that is parametrized by pose and illumination. For each object of interest, a large set of images is obtained by automatically varying pose and illumination. This image set is compressed to obtain a low-dimensional subspace, called the eigenspace, in which the object is represented as a manifold. Given an unknown input image, the recognition system projects the image to eigenspace. The object is recognized based on the manifold it lies on. The exact position of the projection on the manifold determines the object's pose in the image. A variety of experiments are conducted using objects with complex appearance characteristics. The performance of the recognition and pose estimation algorithms is studied using over a thousand input images of sample objects. Sensitivity of recognition to the number of eigenspace dimensions and the number of learning samples is analyzed. For the objects used, appearance representation in eigenspaces with less than 20 dimensions produces accurate recognition results with an average pose estimation error of about 1.0 degree. A near real-time recognition system with 20 complex objects in the database has been developed. The paper is concluded with a discussion on various issues related to the proposed learning and recognition methodology.
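The eigenspace idea can be sketched with plain PCA: training appearances are compressed into a low-dimensional basis, and an unknown image is recognized by the nearest stored manifold sample. This is a simplified reconstruction; the paper samples the manifolds densely over pose and illumination and interpolates along them.

```python
import numpy as np

def build_eigenspace(training_images, dims=20):
    """Compress appearance images into a `dims`-dimensional eigenspace."""
    X = np.stack([im.ravel().astype(float) for im in training_images])
    mean = X.mean(axis=0)
    # SVD of the centered data yields the principal appearance directions.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:dims]

def recognize(image, mean, basis, manifolds):
    """Project the input and return the object whose manifold lies closest.

    `manifolds` maps object id -> (m, dims) array of eigenspace points
    sampled over pose/illumination; the nearest point also encodes pose.
    """
    p = basis @ (image.ravel().astype(float) - mean)
    return min(manifolds,
               key=lambda obj: np.min(np.linalg.norm(manifolds[obj] - p, axis=1)))
```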

2,037 citations

Journal ArticleDOI
TL;DR: This paper addresses the problem of retrieving images from large image databases with a method based on local grayvalue invariants which are computed at automatically detected interest points and allows for efficient retrieval from a database of more than 1,000 images.
Abstract: This paper addresses the problem of retrieving images from large image databases. The method is based on local grayvalue invariants which are computed at automatically detected interest points. A voting algorithm and semilocal constraints make retrieval possible. Indexing allows for efficient retrieval from a database of more than 1,000 images. Experimental results show correct retrieval in the case of partial visibility, similarity transformations, extraneous features, and small perspective deformations.
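A minimal sketch of the voting step, omitting the paper's semi-local geometric constraints: each query descriptor votes for the database image containing its nearest neighbor, provided the match is close enough. The data structures and threshold are assumptions for illustration.

```python
import numpy as np

def retrieve(query_descriptors, database, max_dist=0.2):
    """Vote-based retrieval over local invariant descriptors.

    `database` maps image id -> (n_i, d) array of that image's descriptors.
    """
    votes = {image_id: 0 for image_id in database}
    for q in query_descriptors:
        best_id, best_dist = None, np.inf
        for image_id, descs in database.items():
            d = np.min(np.linalg.norm(descs - q, axis=1))  # nearest descriptor
            if d < best_dist:
                best_id, best_dist = image_id, d
        if best_dist < max_dist:
            votes[best_id] += 1
    return max(votes, key=votes.get)   # image with the most consistent votes
```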

1,756 citations


"Object recognition from local scale..." refers background or methods in this paper

  • ...This allows for the use of more distinctive image descriptors than the rotation-invariant ones used by Schmid and Mohr, and the descriptor is further modified to improve its stability to changes in affine projection and illumination....


  • ...For the object recognition problem, Schmid & Mohr [19] also used the Harris corner detector to identify interest points, and then created a local image descriptor at each interest point from an orientation-invariant vector of derivative-of-Gaussian image measurements....



  • ...However, recent research on the use of dense local features (e.g., Schmid & Mohr [19]) has shown that efficient recognition can often be achieved by using local image descriptors sampled at a large number of repeatable locations....


Journal ArticleDOI
TL;DR: A robust approach to image matching that exploits the only available geometric constraint, namely the epipolar constraint, is proposed, and a new strategy for updating matches is developed that selects only those matches having both high matching support and low matching ambiguity.
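The epipolar pruning idea in a few lines, with RANSAC standing in for the paper's robust estimator: tentative corner matches that violate the recovered epipolar geometry are discarded.

```python
import cv2
import numpy as np

def epipolar_filter(pts1, pts2, threshold=1.0):
    """Keep only matches consistent with a single epipolar geometry.

    pts1, pts2: (n, 2) arrays of tentative corner correspondences between
    two views; matches violating x2^T F x1 = 0 (beyond `threshold` pixels
    of epipolar distance) are rejected as outliers.
    """
    pts1 = np.asarray(pts1, dtype=np.float32)
    pts2 = np.asarray(pts2, dtype=np.float32)
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC,
                                     ransacReprojThreshold=threshold)
    inliers = mask.ravel().astype(bool)
    return F, pts1[inliers], pts2[inliers]
```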

1,574 citations


"Object recognition from local scale..." refers methods in this paper

  • ...[23] used the Harris corner detector to identify feature locations for epipolar alignment of images taken from differing viewpoints....
