scispace - formally typeset
Author

Nabeel Younus Khan

Bio: Nabeel Younus Khan is an academic researcher from the University of Otago. The author has contributed to research in the topics Pattern recognition (psychology) and Feature extraction, has an h-index of 2, and has co-authored 2 publications receiving 192 citations.

Papers
Proceedings ArticleDOI
06 Dec 2011
TL;DR: This paper summarizes the performance of two robust feature detection algorithms namely Scale Invariant Feature Transform (SIFT) and Speeded up Robust Features (SURF) on several classification datasets.
Abstract: Scene classification in indoor and outdoor environments is a fundamental problem for the vision and robotics community. Scene classification benefits from image features which are invariant to image transformations such as rotation, illumination, scale, viewpoint, noise, etc. Selecting suitable features that exhibit such invariances plays a key part in classification performance. This paper summarizes the performance of two robust feature detection algorithms, namely Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), on several classification datasets. In this paper, we propose three shorter SIFT descriptors. Results show that the proposed 64D and 96D SIFT descriptors perform as well as the traditional 128D SIFT descriptor for image matching at a significantly reduced computational cost. SURF has also been observed to give good classification results on different datasets.
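One way to picture a shorter SIFT descriptor: collapse each spatial cell's 8 orientation bins into 4 by summing opposite directions, turning the standard 128-D vector (4×4 cells × 8 bins) into 64-D. The abstract does not spell out how the authors build their shorter descriptors, so this folding scheme is an illustrative assumption, not their method:

```python
import numpy as np

def fold_sift_128_to_64(desc128):
    """Reduce a 128-D SIFT descriptor (16 spatial cells x 8 orientation
    bins) to 64-D by summing each pair of opposite orientation bins
    (bin i and bin i+4) within every cell, then re-normalizing.
    Illustrative sketch only, not the paper's construction."""
    cells = np.asarray(desc128, dtype=float).reshape(16, 8)
    folded = cells[:, :4] + cells[:, 4:]   # merge opposite orientations
    out = folded.reshape(64)
    norm = np.linalg.norm(out)
    return out / norm if norm > 0 else out

# Example: any 128-D input yields a unit-norm 64-D descriptor
short = fold_sift_128_to_64(np.ones(128))
assert short.shape == (64,)
```

Matching cost then halves, since descriptor distance computations scale linearly with dimensionality.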

163 citations

Journal ArticleDOI
01 Aug 2015
TL;DR: This study compares the performance of several state-of-the art image descriptors including several recent binary descriptors and finds that SIFT is still the most accurate performer in both application settings.
Abstract: Independent evaluation of the performance of feature descriptors is an important part of the process of developing better computer vision systems. In this paper, we compare the performance of several state-of-the art image descriptors including several recent binary descriptors. We test the descriptors on an image recognition application and a feature matching application. Our study includes several recently proposed methods and, despite claims to the contrary, we find that SIFT is still the most accurate performer in both application settings. We also find that general purpose binary descriptors are not ideal for image recognition applications but perform adequately in a feature matching application.
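The comparison above hinges on matching each descriptor family under its appropriate metric: Euclidean distance for float descriptors such as SIFT, Hamming distance for binary descriptors stored as packed bytes. A minimal numpy sketch (function names are illustrative):

```python
import numpy as np

def l2_nearest(query, db):
    """Index of the nearest database descriptor under Euclidean distance
    (float descriptors such as SIFT)."""
    return int(np.argmin(np.linalg.norm(db - query, axis=1)))

def hamming_nearest(query, db):
    """Index of the nearest database descriptor under Hamming distance
    (binary descriptors, stored as packed uint8 rows)."""
    xor = np.bitwise_xor(db, query)             # differing bits per byte
    dists = np.unpackbits(xor, axis=1).sum(axis=1)
    return int(np.argmin(dists))

# Float example: query is closest to the second descriptor
db_f = np.array([[0.0, 0.0], [1.0, 1.0]])
assert l2_nearest(np.array([0.9, 1.0]), db_f) == 1
```

Hamming distance on packed binary strings reduces to XOR plus a popcount, which is why binary descriptors match so cheaply in practice.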

40 citations


Cited by
Proceedings ArticleDOI
03 Mar 2018
TL;DR: SIFT and BRISK are found to be the most accurate algorithms, while ORB and BRISK are the most efficient; the article sets a benchmark for researchers regardless of their particular application area.
Abstract: Image registration is the process of matching, aligning and overlaying two or more images of a scene which are captured from different viewpoints. It is extensively used in numerous vision-based applications. Image registration has five main stages: feature detection and description; feature matching; outlier rejection; derivation of the transformation function; and image reconstruction. The timing and accuracy of feature-based image registration depend mainly on the computational efficiency and robustness, respectively, of the selected feature detector-descriptor. Therefore, the choice of feature detector-descriptor is a critical decision in feature-matching applications. This article presents a comprehensive comparison of the SIFT, SURF, KAZE, AKAZE, ORB, and BRISK algorithms. It also addresses a critical question: which algorithm is more invariant to scale, rotation and viewpoint changes? To investigate this problem, image matching has been performed with these features to match scaled versions (5% to 500%), rotated versions (0° to 360°), and perspective-transformed versions of standard images against the originals. Experiments have been conducted on diverse images taken from benchmark datasets: University of Oxford, MATLAB, VLFeat, and OpenCV. The nearest-neighbor distance ratio has been used as the feature-matching strategy, while RANSAC has been applied for rejecting outliers and fitting the transformation models. Results are presented in terms of quantitative comparison, feature detection-description time, feature-matching time, time of outlier rejection and model fitting, repeatability, and error in recovered results compared to the ground truths. SIFT and BRISK are found to be the most accurate algorithms, while ORB and BRISK are the most efficient. The article comprises rich information that will be very useful for making important decisions in vision-based applications, and the main aim of this work is to set a benchmark for researchers, regardless of any particular area.
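The nearest-neighbor distance ratio strategy used above can be sketched in a few lines of numpy. The 0.8 ratio threshold is a common default from the SIFT literature, not necessarily the article's setting:

```python
import numpy as np

def nndr_match(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbor distance ratio matching: for each descriptor in
    desc_a, find its two nearest neighbors in desc_b and accept the match
    only if the closest distance is below `ratio` times the second
    closest. Returns accepted (i, j) index pairs."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, int(j1)))
    return matches

# A query far closer to one descriptor than to any other is accepted
a = np.array([[0.1, 0.0]])
b = np.array([[0.0, 0.0], [10.0, 10.0]])
assert nndr_match(a, b) == [(0, 0)]
```

The ratio test rejects ambiguous matches (two near-equal candidates) before RANSAC removes the remaining geometric outliers.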

339 citations

Posted Content
TL;DR: This paper compares the performance of three different image matching techniques, i.e., SIFT, SURF, and ORB, against different kinds of transformations and deformations such as scaling, rotation, noise, fish-eye distortion, and shearing, and shows which algorithm is the most robust against each kind of distortion.
Abstract: Fast and robust image matching is a very important task with various applications in computer vision and robotics. In this paper, we compare the performance of three different image matching techniques, i.e., SIFT, SURF, and ORB, against different kinds of transformations and deformations such as scaling, rotation, noise, fish-eye distortion, and shearing. For this purpose, we manually apply different types of transformations to the original images and compute matching evaluation parameters such as the number of key points in images, the matching rate, and the execution time required for each algorithm, and we show which algorithm is the most robust against each kind of distortion.

Index Terms: Image matching, scale invariant feature transform (SIFT), speeded up robust features (SURF), binary robust independent elementary features (BRIEF), oriented FAST and rotated BRIEF (ORB).
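A matching-rate style metric like the one computed above can be approximated as the fraction of putative matches consistent with the known ground-truth transformation (here a 3×3 homography). The pixel tolerance and names below are illustrative assumptions, not the paper's exact protocol:

```python
import numpy as np

def matching_rate(pts_a, pts_b, matches, H, tol=3.0):
    """Fraction of putative matches consistent with a known ground-truth
    homography H: a match (i, j) is correct if H applied to pts_a[i]
    lands within `tol` pixels of pts_b[j]."""
    if not matches:
        return 0.0
    ok = 0
    for i, j in matches:
        p = np.append(pts_a[i], 1.0)      # homogeneous coordinates
        q = H @ p
        q = q[:2] / q[2]                  # back to pixel coordinates
        if np.linalg.norm(q - pts_b[j]) <= tol:
            ok += 1
    return ok / len(matches)

# Under the identity transform, a coincident match is counted as correct
pts = np.array([[5.0, 5.0]])
assert matching_rate(pts, pts, [(0, 0)], np.eye(3)) == 1.0
```

Because the transformations are applied manually, the ground-truth H is known exactly, which is what makes this per-distortion robustness comparison possible.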

261 citations

Journal Article
TL;DR: Two different methods for scale- and rotation-invariant interest point/feature detection and description are presented: Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF).
Abstract: Accurate, robust and automatic image registration is a critical task in many applications. To perform image registration/alignment, the required steps are: feature detection, feature matching, derivation of a transformation function based on corresponding features in the images, and reconstruction of the images based on the derived transformation function. The accuracy of the registered image depends on accurate feature detection and matching, so these two intermediate steps are very important in many image applications: image registration, computer vision, image mosaicking, etc. This paper presents two different methods for scale- and rotation-invariant interest point/feature detection and description: Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF). It also presents a way to extract distinctive invariant features from images that can be used to perform reliable matching between different views of an object/scene.
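The "derivation of a transformation function" step can be illustrated with a least-squares fit of a similarity transform (scale, rotation, translation) to corresponding features. This is a generic sketch, not this paper's specific procedure; real pipelines often fit affine or homography models instead:

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares similarity transform mapping src points onto dst,
    solved as a linear system in (a, b, tx, ty) with a = s*cos(theta),
    b = s*sin(theta):  u = a*x - b*y + tx,  v = b*x + a*y + ty.
    Returns the 3x3 homogeneous transform matrix."""
    n = len(src)
    A = np.zeros((2 * n, 4))
    y = np.zeros(2 * n)
    for k, ((x, yv), (u, v)) in enumerate(zip(src, dst)):
        A[2 * k]     = [x, -yv, 1, 0]
        A[2 * k + 1] = [yv,  x, 0, 1]
        y[2 * k], y[2 * k + 1] = u, v
    a, b, tx, ty = np.linalg.lstsq(A, y, rcond=None)[0]
    return np.array([[a, -b, tx], [b, a, ty], [0.0, 0.0, 1.0]])

# A pure translation of (2, 3) is recovered exactly
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
M = fit_similarity(src, src + np.array([2.0, 3.0]))
assert np.allclose(M @ np.array([1.0, 1.0, 1.0]), [3.0, 4.0, 1.0])
```

With the transform in hand, the final reconstruction step is just warping one image into the other's coordinate frame.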

166 citations

Proceedings ArticleDOI
27 Jun 2016
TL;DR: This work shows how to train a Convolutional Neural Network to assign a canonical orientation to feature points given an image patch centered on the feature point, and proposes a new type of activation function for Neural Networks that generalizes the popular ReLU, maxout, and PReLU activation functions.
Abstract: We show how to train a Convolutional Neural Network to assign a canonical orientation to feature points given an image patch centered on the feature point. Our method improves feature point matching upon the state-of-the art and can be used in conjunction with any existing rotation sensitive descriptors. To avoid the tedious and almost impossible task of finding a target orientation to learn, we propose to use Siamese networks which implicitly find the optimal orientations during training. We also propose a new type of activation function for Neural Networks that generalizes the popular ReLU, maxout, and PReLU activation functions. This novel activation performs better for our task. We validate the effectiveness of our method extensively with four existing datasets, including two non-planar datasets, as well as our own dataset. We show that we outperform the state-of-the-art without the need of retraining for each dataset.
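The generalized activation described above can be pictured as a signed sum of max-pooled groups of linear inputs, which recovers ReLU (and can express PReLU- and maxout-like shapes) as special cases. The parameterization below is a sketch and may differ from the paper's exact formulation:

```python
import numpy as np

def ghh_activation(z, deltas):
    """Piecewise-linear activation: f(z) = sum_s deltas[s] * max_m z[s, m],
    i.e. a signed sum over groups, each group max-pooled. With a single
    group z = [[x, 0]] and deltas = [1] this reduces to ReLU; adding a
    negatively-signed second group yields maxout-minus-maxout shapes."""
    z = np.asarray(z, dtype=float)
    return float(np.dot(deltas, z.max(axis=1)))

# ReLU as a special case: one group containing (x, 0), delta = +1
assert ghh_activation([[2.0, 0.0]], [1.0]) == 2.0
assert ghh_activation([[-2.0, 0.0]], [1.0]) == 0.0
```

In a network, each group entry would be a learned linear projection of the layer input, so the signs and group sizes control how many linear regions the activation can carve out.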

128 citations

Journal ArticleDOI
17 Jun 2016-PLOS ONE
TL;DR: This paper presents a novel visual words integration of Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF), which adds the robustness of both features to image retrieval.
Abstract: With the recent evolution of technology, the number of image archives has increased exponentially. In Content-Based Image Retrieval (CBIR), high-level visual information is represented in the form of low-level features. The semantic gap between the low-level features and the high-level image concepts is an open research problem. In this paper, we present a novel visual words integration of Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). The two local features representations are selected for image retrieval because SIFT is more robust to the change in scale and rotation, while SURF is robust to changes in illumination. The visual words integration of SIFT and SURF adds the robustness of both features to image retrieval. The qualitative and quantitative comparisons conducted on Corel-1000, Corel-1500, Corel-2000, Oliva and Torralba and Ground Truth image benchmarks demonstrate the effectiveness of the proposed visual words integration.
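One plausible reading of the "visual words integration" (an assumption, since the abstract does not fix the fusion scheme): quantize SIFT and SURF descriptors against their own vocabularies and concatenate the resulting bag-of-visual-words histograms into a single image signature:

```python
import numpy as np

def bow_histogram(descs, vocab):
    """Hard-assign each descriptor to its nearest visual word (vocabulary
    centroid) and return the normalized word-frequency histogram."""
    words = [int(np.argmin(np.linalg.norm(vocab - d, axis=1))) for d in descs]
    hist = np.bincount(words, minlength=len(vocab)).astype(float)
    return hist / hist.sum()

def integrated_representation(sift_descs, surf_descs, sift_vocab, surf_vocab):
    """Concatenate per-feature-type bag-of-visual-words histograms into
    one signature. Illustrative fusion scheme, not necessarily the
    paper's exact method."""
    return np.concatenate([bow_histogram(sift_descs, sift_vocab),
                           bow_histogram(surf_descs, surf_vocab)])

# Two words per vocabulary -> a 4-D integrated signature
vocab = np.array([[0.0, 0.0], [10.0, 10.0]])
sig = integrated_representation(np.array([[0.1, 0.0]]),
                                np.array([[9.0, 9.0]]), vocab, vocab)
assert sig.shape == (4,)
```

Retrieval then reduces to comparing these fixed-length signatures (e.g. by histogram distance), with each half contributing the invariances of its feature type.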

95 citations