Open Access
Distinctive Image Features from Scale-Invariant Keypoints
Abstract:
The Scale-Invariant Feature Transform (SIFT) algorithm is a highly robust method for extracting and subsequently matching distinctive invariant features from images. These features can then be used to reliably match objects in differing images. The algorithm was first proposed by Lowe [12] and further developed to improve performance, resulting in the classic paper [13] that serves as the foundation for SIFT, which has played an important role in robotic and machine vision over the past decade.
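The abstract describes SIFT only at a high level; the detection stage it refers to works by building a difference-of-Gaussians (DoG) scale space and keeping points that are extrema across both space and scale. Below is a minimal illustrative sketch of that DoG step, not the paper's implementation: the sigma ladder and threshold here are arbitrary example choices, and real SIFT additionally uses octaves, subpixel refinement, edge-response rejection, orientation assignment, and gradient-histogram descriptors.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_keypoints(image, sigmas=(1.0, 1.6, 2.56, 4.1), threshold=0.03):
    """Sketch of SIFT's difference-of-Gaussians detection step:
    blur at successive scales, subtract adjacent blurs, and keep
    pixels that are extrema over a 3x3x3 space-and-scale neighborhood.
    The sigma ladder and threshold are illustrative, not Lowe's values."""
    blurred = [gaussian_filter(image.astype(float), s) for s in sigmas]
    # Adjacent-scale differences form the DoG stack (scale, y, x).
    dogs = np.stack([b2 - b1 for b1, b2 in zip(blurred, blurred[1:])])
    keypoints = []
    for s in range(1, dogs.shape[0] - 1):
        for y in range(1, dogs.shape[1] - 1):
            for x in range(1, dogs.shape[2] - 1):
                patch = dogs[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
                v = dogs[s, y, x]
                # Keep strong responses that are the max or min of
                # their 26 neighbors across space and scale.
                if abs(v) > threshold and (v == patch.max() or v == patch.min()):
                    keypoints.append((y, x, sigmas[s]))
    return keypoints
```

Running this on an image containing a single bright blob yields keypoints near the blob's center, at the scale whose DoG response is strongest.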
Citations
Proceedings Article
Multiple One-Shots for Utilizing Class Label Information.
TL;DR: This paper presents a system utilizing identity and pose information to improve facial image pair-matching performance using multiple One-Shot scores, and shows how separating pose and identity may lead to better face recognition rates in unconstrained, “wild” facial images.
Proceedings Article
Boosting Binary Keypoint Descriptors
TL;DR: A novel framework to learn an extremely compact binary descriptor called BinBoost that is highly robust to illumination and viewpoint changes; it significantly outperforms state-of-the-art binary descriptors and performs similarly to the best floating-point descriptors at a fraction of the matching time and memory footprint.
Proceedings Article
Understanding Indoor Scenes Using 3D Geometric Phrases
TL;DR: A hierarchical scene model for learning and reasoning about complex indoor scenes which is computationally tractable, can be learned from a reasonable amount of training data, and avoids oversimplification is presented.
Journal Article
Deformable MR Prostate Segmentation via Deep Feature Learning and Sparse Patch Matching
TL;DR: Proposes a new deformable MR prostate segmentation method that unifies deep feature learning with sparse patch matching, showing superior performance over other state-of-the-art segmentation methods.
Book Chapter
Disentangling factors of variation for facial expression recognition
TL;DR: A semi-supervised approach to solve the task of emotion recognition in 2D face images using recent ideas in deep learning for handling the factors of variation present in data, beating the state-of-the-art on a recently proposed dataset for facial expression recognition.
References
Journal Article
Distinctive Image Features from Scale-Invariant Keypoints
TL;DR: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene and can robustly identify objects among clutter and occlusion while achieving near real-time performance.
Proceedings Article
Object recognition from local scale-invariant features
TL;DR: Experimental results show that robust object recognition can be achieved in cluttered, partially occluded images with a computation time of under 2 seconds.
Proceedings Article
A Combined Corner and Edge Detector
Chris Harris, Mike Stephens
TL;DR: The problem addressed in Alvey Project MMI149 is that of using computer vision to understand the unconstrained 3D world, in which the viewed scenes will in general contain too wide a diversity of objects for top-down recognition techniques to work.
Journal Article
A performance evaluation of local descriptors
TL;DR: It is observed that the ranking of the descriptors is mostly independent of the interest region detector and that the SIFT-based descriptors perform best and Moments and steerable filters show the best performance among the low dimensional descriptors.
Journal Article
Robust wide-baseline stereo from maximally stable extremal regions
TL;DR: The high utility of MSERs, multiple measurement regions and the robust metric is demonstrated in wide-baseline experiments on image pairs from both indoor and outdoor scenes.