Open Access
Distinctive Image Features from Scale-Invariant Keypoints
TLDR
The Scale-Invariant Feature Transform (or SIFT) algorithm is a highly robust method to extract and subsequently match distinctive invariant features from images, which can then be used to reliably match objects in differing images.
Abstract
The Scale-Invariant Feature Transform (or SIFT) algorithm is a highly robust method to extract and subsequently match distinctive invariant features from images. These features can then be used to reliably match objects in differing images. The algorithm was first proposed by Lowe [12] and further developed to increase performance, resulting in the classic paper [13] that served as the foundation for SIFT, which has played an important role in robotic and machine vision in the past decade.
Citations
Journal ArticleDOI
Benchmarking HEp-2 Cells Classification Methods
TL;DR: The first edition of the HEp-2 Cells Classification contest aimed to bring together researchers interested in the performance evaluation of algorithms for IIF image analysis and evaluated 28 different recognition systems able to automatically recognize the staining pattern of cells within IIF images.
Proceedings ArticleDOI
Are Large-Scale 3D Models Really Necessary for Accurate Visual Localization?
Torsten Sattler, Akihiko Torii, Josef Sivic, Marc Pollefeys, Hajime Taira, Masatoshi Okutomi, Tomas Pajdla, et al.
TL;DR: It is demonstrated experimentally that large-scale 3D models are not strictly necessary for accurate visual localization, and it is shown that combining image-based methods with local reconstructions results in pose accuracy similar to that of state-of-the-art structure-based methods.
Proceedings ArticleDOI
FAB-MAP + RatSLAM: Appearance-based SLAM for multiple times of day
TL;DR: In this paper, the probabilistic local-feature-based data association method of FAB-MAP is combined with the pose cell filtering and experience mapping of RatSLAM to perform appearance-based mapping and localisation.
Book ChapterDOI
Object-Centric Spatial Pooling for Image Classification
TL;DR: A framework is proposed that learns object detectors using only image-level class labels (so-called weak labels); it is comparable in accuracy with state-of-the-art weakly supervised detection methods and significantly outperforms SPM-based pooling in image classification.
Journal ArticleDOI
A survey on heterogeneous transfer learning
Oscar Day, Taghi M. Khoshgoftaar
TL;DR: This paper contributes a comprehensive survey and analysis of current methods designed for performing heterogeneous transfer learning tasks to provide an updated, centralized outlook into current methodologies.
References
Journal ArticleDOI
Distinctive Image Features from Scale-Invariant Keypoints
TL;DR: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene and can robustly identify objects among clutter and occlusion while achieving near real-time performance.
Proceedings ArticleDOI
Object recognition from local scale-invariant features
TL;DR: Experimental results show that robust object recognition can be achieved in cluttered partially occluded images with a computation time of under 2 seconds.
Proceedings ArticleDOI
A Combined Corner and Edge Detector
Chris Harris, Mike Stephens
TL;DR: The problem the authors are addressing in Alvey Project MMI149 is that of using computer vision to understand the unconstrained 3D world, in which the viewed scenes will in general contain too wide a diversity of objects for top-down recognition techniques to work.
Journal ArticleDOI
A performance evaluation of local descriptors
TL;DR: It is observed that the ranking of the descriptors is mostly independent of the interest region detector and that the SIFT-based descriptors perform best; moments and steerable filters show the best performance among the low-dimensional descriptors.
Journal ArticleDOI
Robust wide-baseline stereo from maximally stable extremal regions
TL;DR: The high utility of MSERs, multiple measurement regions and the robust metric is demonstrated in wide-baseline experiments on image pairs from both indoor and outdoor scenes.