Open Access

Distinctive Image Features from Scale-Invariant Keypoints

TLDR
The Scale-Invariant Feature Transform (or SIFT) algorithm is a highly robust method to extract and subsequently match distinctive invariant features from images, which can then be used to reliably match objects in differing images.
Abstract
The Scale-Invariant Feature Transform (or SIFT) algorithm is a highly robust method to extract and subsequently match distinctive invariant features from images. These features can then be used to reliably match objects in differing images. The algorithm was first proposed by Lowe [12] and further developed to improve performance, resulting in the classic paper [13] that serves as the foundation for SIFT, which has played an important role in robotic and machine vision over the past decade.
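
The extract-and-match workflow the abstract describes can be sketched in a few lines. The snippet below uses OpenCV's SIFT implementation together with a nearest-neighbour ratio test; the image file names and the 0.75 ratio threshold are illustrative assumptions, not code from the original paper.

```python
import cv2

img1 = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)   # query image (assumed name)
img2 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)    # target image (assumed name)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)  # keypoints and 128-D descriptors
kp2, des2 = sift.detectAndCompute(img2, None)

# Nearest-neighbour matching with a ratio test to reject ambiguous matches:
# a match is kept only if its best descriptor distance is clearly smaller
# than the distance to the second-best candidate.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print(f"{len(good)} putative correspondences")
```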


Citations
Journal ArticleDOI

Rotational Projection Statistics for 3D Local Surface Description and Object Recognition

TL;DR: Rotational Projection Statistics (RoPS) as discussed by the authors is a feature descriptor that is obtained by rotationally projecting the neighboring points of a feature point onto 2D planes and calculating a set of statistics including low-order central moments and entropy of the distribution of these projected points.
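
A simplified sketch may help make the per-projection statistics concrete: the neighbourhood of a keypoint is rotated, projected onto a plane, binned into a distribution matrix, and summarised by low-order central moments and entropy. This is only an illustration under assumed parameters (bin count, rotation angles, a single projection plane) and omits the local reference frame construction of the full RoPS descriptor.

```python
import numpy as np

def projection_stats(points, angle, bins=5):
    """points: (N, 3) neighbourhood of a keypoint, already centred on it."""
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])  # rotation about z
    rotated = points @ rot.T                 # rotate the neighbourhood
    proj = rotated[:, :2]                    # project onto the xy plane
    hist, _, _ = np.histogram2d(proj[:, 0], proj[:, 1], bins=bins)
    d = hist / hist.sum()                    # distribution matrix of projected points
    i, j = np.meshgrid(np.arange(bins), np.arange(bins), indexing="ij")
    mi, mj = (d * i).sum(), (d * j).sum()    # means of the distribution
    mu11 = (d * (i - mi) * (j - mj)).sum()   # low-order central moments
    mu20 = (d * (i - mi) ** 2).sum()
    mu02 = (d * (j - mj) ** 2).sum()
    entropy = -(d[d > 0] * np.log(d[d > 0])).sum()   # Shannon entropy
    return np.array([mu11, mu20, mu02, entropy])

# Concatenate the statistics over a few rotation angles (assumed values).
neighbourhood = np.random.default_rng(0).normal(size=(200, 3))
descriptor = np.concatenate([projection_stats(neighbourhood, a)
                             for a in np.linspace(0, np.pi, 4, endpoint=False)])
```
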
Proceedings ArticleDOI

Better Exploiting Motion for Better Action Recognition

TL;DR: It is established that adequately decomposing visual motion into dominant and residual motions, both in the extraction of the space-time trajectories and for the computation of descriptors, significantly improves action recognition algorithms.
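
A rough sketch of that decomposition, assuming dense optical flow and a global affine model as a stand-in for the dominant (camera) motion, is shown below; the frame file names and the plain least-squares fit are illustrative choices rather than the paper's exact robust estimation.

```python
import cv2
import numpy as np

prev = cv2.imread("frame_t.png", cv2.IMREAD_GRAYSCALE)   # assumed file names
curr = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

# Dense optical flow between consecutive frames.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)

# Fit a global affine motion model flow(x, y) ~ A @ [x, y, 1] to the flow field.
h, w = prev.shape
ys, xs = np.mgrid[0:h, 0:w]
A = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)], axis=1)
params, _, _, _ = np.linalg.lstsq(A, flow.reshape(-1, 2), rcond=None)

dominant = (A @ params).reshape(h, w, 2)   # affine (camera-like) motion field
residual = flow - dominant                 # motion attributable to the action
```
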
Proceedings ArticleDOI

Recognition using regions

TL;DR: This paper presents a unified framework for object detection, segmentation, and classification using regions: a generalized Hough voting scheme generates hypotheses of object locations, scales, and support, followed by a verification classifier and a constrained segmenter on each hypothesis.
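
The voting step can be illustrated with a toy sketch: each matched region casts a weighted vote for an object centre, and peaks in the accumulator become hypotheses handed to the verification stage. The region data, vote weights, and bin size below are made-up placeholders, not the paper's actual pipeline.

```python
import numpy as np

BIN = 16  # accumulator cell size in pixels (assumed)

def hough_vote(regions, img_shape):
    """regions: list of dicts with 'pos', 'offset' (to the object centre), 'weight'."""
    acc = np.zeros((img_shape[0] // BIN + 1, img_shape[1] // BIN + 1))
    for r in regions:
        cy = int((r["pos"][0] + r["offset"][0]) // BIN)
        cx = int((r["pos"][1] + r["offset"][1]) // BIN)
        if 0 <= cy < acc.shape[0] and 0 <= cx < acc.shape[1]:
            acc[cy, cx] += r["weight"]              # weighted vote for the centre
    peak = np.unravel_index(acc.argmax(), acc.shape)
    return (peak[0] * BIN, peak[1] * BIN), acc[peak]  # hypothesis location and score

regions = [{"pos": (120, 200), "offset": (30, -40), "weight": 0.8},
           {"pos": (140, 180), "offset": (10, -20), "weight": 0.6}]
centre, score = hough_vote(regions, (480, 640))
```
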
Book ChapterDOI

Learning to Navigate for Fine-grained Classification

TL;DR: In this paper, a self-supervision mechanism is proposed to locate informative regions without the need for bounding-box/part annotations; it consists of a navigator agent, a teacher agent, and a scrutinizer agent.
Proceedings ArticleDOI

Multimodal semi-supervised learning for image classification

TL;DR: This work considers a scenario where keywords are associated with the training images, e.g. as found on photo-sharing websites; it learns a strong Multiple Kernel Learning (MKL) classifier using both the image content and the keywords, and uses it to score unlabeled images.
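
As a rough illustration of that scoring step, the sketch below replaces learned MKL kernel weights with a fixed equal-weight combination of an image kernel and a keyword kernel, which is a simplification of the paper's approach; all feature arrays are synthetic placeholders.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel
from sklearn.svm import SVC

def combined_kernel(Xa_img, Xb_img, Xa_kw, Xb_kw):
    # Fixed equal-weight sum of an RBF kernel over image features and a linear
    # kernel over keyword (e.g. bag-of-tags) features; true MKL would learn the weights.
    return 0.5 * rbf_kernel(Xa_img, Xb_img) + 0.5 * linear_kernel(Xa_kw, Xb_kw)

rng = np.random.default_rng(0)
img_lab, kw_lab = rng.normal(size=(40, 64)), rng.random((40, 20))   # labeled images
y = rng.integers(0, 2, 40)                                          # synthetic labels
img_unl, kw_unl = rng.normal(size=(10, 64)), rng.random((10, 20))   # unlabeled images

svm = SVC(kernel="precomputed").fit(combined_kernel(img_lab, img_lab, kw_lab, kw_lab), y)
# Score unlabeled images: kernel between unlabeled (rows) and labeled (columns) samples.
scores = svm.decision_function(combined_kernel(img_unl, img_lab, kw_unl, kw_lab))
```
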
References
Journal ArticleDOI

Distinctive Image Features from Scale-Invariant Keypoints

TL;DR: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene and can robustly identify objects among clutter and occlusion while achieving near real-time performance.
Proceedings ArticleDOI

Object recognition from local scale-invariant features

TL;DR: Experimental results show that robust object recognition can be achieved in cluttered partially occluded images with a computation time of under 2 seconds.
Proceedings ArticleDOI

A Combined Corner and Edge Detector

TL;DR: The problem the authors are addressing in Alvey Project MMI149 is that of using computer vision to understand the unconstrained 3D world, in which the viewed scenes will in general contain too wide a diversity of objects for top-down recognition techniques to work.
Journal ArticleDOI

A performance evaluation of local descriptors

TL;DR: It is observed that the ranking of the descriptors is mostly independent of the interest region detector, that the SIFT-based descriptors perform best, and that moments and steerable filters show the best performance among the low-dimensional descriptors.
Journal ArticleDOI

Robust wide-baseline stereo from maximally stable extremal regions

TL;DR: The high utility of MSERs, multiple measurement regions and the robust metric is demonstrated in wide-baseline experiments on image pairs from both indoor and outdoor scenes.