Journal ArticleDOI

Distinctive Image Features from Scale-Invariant Keypoints

TLDR
This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene, and can robustly identify objects among clutter and occlusion while achieving near real-time performance.
Abstract
This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.
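As a rough illustration of the matching stage summarized above (a fast nearest-neighbor search followed by a distance-ratio filter, ahead of Hough-transform clustering and least-squares pose verification), here is a minimal sketch using OpenCV's SIFT implementation. OpenCV, the FLANN-based matcher, and the 0.75 ratio threshold are assumptions of this sketch, not details drawn from the paper itself.

# Minimal sketch, assuming OpenCV >= 4.4 (SIFT exposed as cv2.SIFT_create).
import cv2

def match_sift(img_path_a, img_path_b, ratio=0.75):
    """Detect SIFT keypoints in two images and return ratio-test matches."""
    img_a = cv2.imread(img_path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(img_path_b, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp_a, desc_a = sift.detectAndCompute(img_a, None)
    kp_b, desc_b = sift.detectAndCompute(img_b, None)

    # Approximate nearest-neighbor search with FLANN kd-trees, standing in for
    # the fast nearest-neighbor algorithm mentioned in the abstract.
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
    knn_matches = flann.knnMatch(desc_a, desc_b, k=2)

    # Distance-ratio test: keep a match only when the best neighbor is clearly
    # better than the second best.
    good = [m for m, n in knn_matches if m.distance < ratio * n.distance]
    return kp_a, kp_b, good

The surviving correspondences would then be clustered with a Hough transform over pose parameters and verified by a least-squares fit for consistent pose, as outlined in the abstract; those later stages are not shown here.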


Citations
Journal ArticleDOI

Object Detection in Optical Remote Sensing Images Based on Weakly Supervised Learning and High-Level Feature Learning

TL;DR: A novel and effective geospatial object detection framework is proposed that combines weakly supervised learning (WSL) with high-level feature learning, jointly integrating saliency, intraclass compactness, and interclass separability in a Bayesian framework.
Book ChapterDOI

The generalized patchmatch correspondence algorithm

TL;DR: This paper generalizes PatchMatch in three ways: to find k nearest neighbors rather than just one; to search across scales and rotations in addition to translations; and to match using arbitrary descriptors and distances, not just sum-of-squared differences on patch colors.
Proceedings ArticleDOI

SuperGlue: Learning Feature Matching With Graph Neural Networks

TL;DR: SuperGlue is introduced, a neural network that matches two sets of local features by jointly finding correspondences and rejecting non-matchable points; a flexible attention-based context aggregation mechanism enables SuperGlue to reason jointly about the underlying 3D scene and the feature assignments.
Journal ArticleDOI

LDAHash: Improved Matching with Smaller Descriptors

TL;DR: This work reduces the size of the descriptors by representing them as short binary strings and learns descriptor invariance from examples; extensive experimental validation demonstrates the advantage of the proposed approach.
Proceedings ArticleDOI

Do deep features generalize from everyday objects to remote sensing and aerial scenes domains?

TL;DR: ConvNets trained to recognize everyday objects were evaluated for classifying aerial and remote sensing images; they obtained the best results on aerial images, while on remote sensing images they performed well but were outperformed by low-level color descriptors such as BIC.
References
Proceedings ArticleDOI

Object recognition from local scale-invariant features

TL;DR: Experimental results show that robust object recognition can be achieved in cluttered, partially occluded images with a computation time of under 2 seconds.
Book

Multiple view geometry in computer vision

TL;DR: In this article, the authors provide comprehensive background material and explain how to apply the methods and implement the algorithms directly in a unified framework, including geometric principles and how to represent objects algebraically so they can be computed and applied.

Proceedings ArticleDOI

A Combined Corner and Edge Detector

TL;DR: The problem the authors are addressing in Alvey Project MMI149 is that of using computer vision to understand the unconstrained 3D world, in which the viewed scenes will in general contain too wide a diversity of objects for top-down recognition techniques to work.
Journal ArticleDOI

Robust wide-baseline stereo from maximally stable extremal regions

TL;DR: The high utility of MSERs, multiple measurement regions and the robust metric is demonstrated in wide-baseline experiments on image pairs from both indoor and outdoor scenes.