Open Access
Distinctive Image Features from Scale-Invariant Keypoints
TLDR
The Scale-Invariant Feature Transform (or SIFT) algorithm is a highly robust method for extracting and matching distinctive invariant features that can then be used to reliably match objects in differing images.

Abstract
The Scale-Invariant Feature Transform (or SIFT) algorithm is a highly robust method for extracting and subsequently matching distinctive invariant features from images. These features can then be used to reliably match objects in differing images. The algorithm was first proposed by Lowe [12] and further developed to improve performance, resulting in the classic paper [13] that serves as the foundation of SIFT, which has played an important role in robotic and machine vision over the past decade.
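The reliable matching the abstract refers to is typically decided with Lowe's distance-ratio test: a candidate match between two descriptor sets is kept only if the nearest descriptor in the other image is substantially closer than the second-nearest. A minimal pure-NumPy sketch of that test (the function name and the toy 2-D descriptors are illustrative; real SIFT descriptors are 128-dimensional):

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to desc_b using Lowe's ratio test:
    keep a match only when the nearest neighbour's distance is below
    `ratio` times the second-nearest neighbour's distance."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)   # Euclidean distances
        order = np.argsort(dists)                    # nearest first
        nearest, second = order[0], order[1]
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, int(nearest)))
    return matches
```

The 0.8 threshold is the value Lowe reports as a good trade-off between discarding false matches and keeping correct ones; ambiguous descriptors (nearly equidistant to two candidates) are rejected.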
Citations
Book Chapter
Kernel sparse representation for image classification and face recognition
TL;DR: Kernel sparse representation (KSR) is essentially sparse coding in a high-dimensional feature space induced by an implicit mapping function; it outperforms sparse coding and EMK, achieving state-of-the-art performance for image classification and face recognition on publicly available datasets.
Journal Article
Handcrafted vs. non-handcrafted features for computer vision classification
TL;DR: A computer vision system is presented that exploits trained deep Convolutional Neural Networks as a generic feature extractor and mixes these features with more traditional handcrafted features, demonstrating the generalizability of the proposed approach.
Journal Article
An efficient and robust line segment matching approach based on LBD descriptor and pairwise geometric consistency
Lilian Zhang, Reinhard Koch, et al.
TL;DR: A line matching algorithm is presented that utilizes both the local appearance of lines and their geometric attributes to address segment fragmentation and geometric variation; it remains accurate even for low-texture images thanks to the pairwise geometric consistency evaluation.
Journal Article
Object Detection Networks on Convolutional Feature Maps
TL;DR: In this article, a network on convolutional feature maps (NoC) is proposed, which uses shared, region-independent CNN features to improve object detection performance.
Book Chapter
Violence detection in video using computer vision techniques
TL;DR: A new video database containing 1000 sequences divided into two groups, fights and non-fights, is introduced, and experiments show that fights can be detected with near 90% accuracy.
References
Journal Article
Distinctive Image Features from Scale-Invariant Keypoints
TL;DR: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene and can robustly identify objects among clutter and occlusion while achieving near real-time performance.
Proceedings Article
Object recognition from local scale-invariant features
TL;DR: Experimental results show that robust object recognition can be achieved in cluttered, partially occluded images with a computation time of under 2 seconds.
Proceedings Article
A Combined Corner and Edge Detector
Chris Harris, Mike Stephens
TL;DR: The problem the authors address in Alvey Project MMI149 is that of using computer vision to understand the unconstrained 3D world, in which the viewed scenes will in general contain too wide a diversity of objects for top-down recognition techniques to work.
Journal Article
A performance evaluation of local descriptors
TL;DR: It is observed that the ranking of the descriptors is mostly independent of the interest region detector, that the SIFT-based descriptors perform best, and that moments and steerable filters show the best performance among the low-dimensional descriptors.
Journal Article
Robust wide-baseline stereo from maximally stable extremal regions
TL;DR: The high utility of MSERs, multiple measurement regions and the robust metric is demonstrated in wide-baseline experiments on image pairs from both indoor and outdoor scenes.