Journal ArticleDOI

Smartphone based visible iris recognition using deep sparse filtering

TL;DR: A new segmentation scheme is proposed and adapted to smartphone-based visible iris images, approximating the radius of the iris to achieve robust segmentation, and a new feature extraction method based on deep sparse filtering is proposed to obtain robust features for unconstrained iris images.
Abstract: Good biometric performance of iris recognition motivates its use in many large-scale security and access control applications. Recent works have identified visible spectrum iris recognition as a viable option with considerable performance. Key advantages of visible spectrum iris recognition include the possibility of iris imaging in on-the-move and at-a-distance scenarios, as compared to fixed-range imaging in near-infrared light. Unconstrained iris imaging captures images with largely varying radii of the iris and pupil. In this work, we propose a new segmentation scheme and adapt it to smartphone based visible iris images for approximating the radius of the iris to achieve robust segmentation. The proposed technique has shown improved segmentation accuracy of up to 85% with standard OSIRIS v4.1. This work also proposes a new feature extraction method based on deep sparse filtering to obtain robust features for unconstrained iris images. To evaluate the proposed segmentation scheme and feature extraction scheme, we employ a publicly available database and also compose a new iris image database. The newly composed iris image database (VSSIRIS) is acquired using two different smartphones - iPhone 5S and Nokia Lumia 1020 - under mixed illumination with unconstrained conditions in the visible spectrum. The biometric performance is benchmarked based on the equal error rate (EER) obtained from various state-of-the-art schemes and the proposed feature extraction scheme. An impressive EER of 1.62% is obtained on our VSSIRIS database, and an average gain of around 2% in EER is obtained on the public database as compared to well-known state-of-the-art schemes.
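The deep sparse filtering feature extraction referenced in the abstract builds on the sparse filtering objective of Ngiam et al.; a minimal single-layer sketch is given below (the function name, toy data, and layer sizes are illustrative, not the paper's implementation):

```python
import numpy as np

def sparse_filtering_objective(W, X, eps=1e-8):
    """Single-layer sparse filtering objective: soft-absolute features,
    L2-normalised per feature (rows) then per example (columns), summed
    as an L1 sparsity penalty to be minimised over W."""
    F = W @ X                                            # (features, examples)
    Fs = np.sqrt(F ** 2 + eps)                           # soft absolute value
    Fs = Fs / np.linalg.norm(Fs, axis=1, keepdims=True)  # row normalisation
    Fs = Fs / np.linalg.norm(Fs, axis=0, keepdims=True)  # column normalisation
    return np.abs(Fs).sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(25, 100))      # 25-dim image patches, 100 examples
W = rng.normal(size=(16, 25))       # 16 learned filters
loss = sparse_filtering_objective(W, X)
```

In practice W is optimised with a quasi-Newton method such as L-BFGS, and stacking such layers (feeding one layer's normalised features into the next) gives the "deep" variant.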
Citations
Journal ArticleDOI
TL;DR: A two-stage learning method for intelligent diagnosis of machines, inspired by unsupervised feature learning that uses artificial intelligence techniques to learn features from raw data; it reduces the need for human labor and makes intelligent fault diagnosis handle big data more easily.
Abstract: Intelligent fault diagnosis is a promising tool to deal with mechanical big data due to its ability in rapidly and efficiently processing collected signals and providing accurate diagnosis results. In traditional intelligent diagnosis methods, however, the features are manually extracted depending on prior knowledge and diagnostic expertise. Such processes take advantage of human ingenuity but are time-consuming and labor-intensive. Inspired by the idea of unsupervised feature learning that uses artificial intelligence techniques to learn features from raw data, a two-stage learning method is proposed for intelligent diagnosis of machines. In the first learning stage of the method, sparse filtering, an unsupervised two-layer neural network, is used to directly learn features from mechanical vibration signals. In the second stage, softmax regression is employed to classify the health conditions based on the learned features. The proposed method is validated by a motor bearing dataset and a locomotive bearing dataset. The results show that the proposed method obtains fairly high diagnosis accuracies and is superior to the existing methods for the motor bearing dataset. Because it learns features adaptively, the proposed method reduces the need for human labor and makes intelligent fault diagnosis handle big data more easily.
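The second stage described above, softmax regression on the learned features, can be sketched with scikit-learn's multinomial logistic regression; the synthetic two-class features below stand in for the sparse-filtering output (all names and data here are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stage 2: classify machine health conditions from stage-1 features.
# Two well-separated Gaussian clusters stand in for learned features.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, size=(50, 8)) for c in (0.0, 3.0)])
y = np.repeat([0, 1], 50)                  # e.g. healthy vs. faulty bearing
clf = LogisticRegression(max_iter=1000).fit(X, y)
train_acc = clf.score(X, y)
```

Softmax regression is the natural pairing here because it outputs calibrated class probabilities over multiple health conditions, not just a binary decision.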

915 citations


Cites background from "Smartphone based visible iris recog..."

  • ...Therefore, sparse filtering does not necessarily include the parameter tuning and easily converges to an optimal solution [30]....


Journal ArticleDOI
TL;DR: This article surveys 100 different approaches that explore deep learning for recognizing individuals using various biometric modalities and discusses how deep learning methods can benefit the field of biometrics and the potential gaps that deep learning approaches need to address for real-world biometric applications.
Abstract: In the recent past, deep learning methods have demonstrated remarkable success for supervised learning tasks in multiple domains including computer vision, natural language processing, and speech processing. In this article, we investigate the impact of deep learning in the field of biometrics, given its success in other domains. Since biometrics deals with identifying people by using their characteristics, it primarily involves supervised learning and can leverage the success of deep learning in other related domains. In this article, we survey 100 different approaches that explore deep learning for recognizing individuals using various biometric modalities. We find that most deep learning research in biometrics has been focused on face and speaker recognition. Based on inferences from these approaches, we discuss how deep learning methods can benefit the field of biometrics and the potential gaps that deep learning approaches need to address for real-world biometric applications.

201 citations

Proceedings ArticleDOI
01 Sep 2016
TL;DR: Experimental analysis reveals that the proposed DeepIrisNet can model the micro-structures of the iris very effectively and provides a robust, discriminative, compact, and easy-to-implement iris representation that obtains state-of-the-art accuracy.
Abstract: Despite significant advances in iris recognition (IR), efficient and robust IR at scale and in non-ideal conditions presents serious performance issues and is still an ongoing research topic. Deep Convolutional Neural Networks (DCNN) are powerful visual models that have reported state-of-the-art performance in several domains. In this paper, we propose a deep learning based method termed DeepIrisNet for iris representation. The proposed approach is based on a very deep architecture and various tricks from recent successful CNNs. Experimental analysis reveals that the proposed DeepIrisNet can model the micro-structures of the iris very effectively and provides a robust, discriminative, compact, and easy-to-implement iris representation that obtains state-of-the-art accuracy. Furthermore, we evaluate our iris representation for cross-sensor IR. The experimental results demonstrate that DeepIrisNet models obtain a significant improvement in cross-sensor recognition accuracy too.

200 citations


Cites background from "Smartphone based visible iris recog..."

  • ...Index Terms— CNN, iris recognition, cross-sensor iris recognition, deep iris representation, deep learning...


Journal ArticleDOI
TL;DR: This survey aims to provide a more comprehensive introduction to sensor-based human activity recognition (HAR) in terms of sensors, activities, data pre-processing, feature learning and classification, covering both conventional approaches and deep learning methods.
Abstract: Increased life expectancy coupled with declining birth rates is leading to an aging population structure. Aging-caused changes, such as physical or cognitive decline, could affect people's quality of life, result in injuries, mental health or the lack of physical activity. Sensor-based human activity recognition (HAR) is one of the most promising assistive technologies to support older people's daily life, which has enabled enormous potential in human-centred applications. Recent surveys in HAR either only focus on the deep learning approaches or one specific sensor modality. This survey aims to provide a more comprehensive introduction for newcomers and researchers to HAR. We first introduce the state-of-art sensor modalities in HAR. We look more into the techniques involved in each step of wearable sensor modality centred HAR in terms of sensors, activities, data pre-processing, feature learning and classification, including both conventional approaches and deep learning methods. In the feature learning section, we focus on both hand-crafted features and automatically learned features using deep networks. We also present the ambient-sensor-based HAR, including camera-based systems, and the systems which combine the wearable and ambient sensors. Finally, we identify the corresponding challenges in HAR to pose research problems for further improvement in HAR.

195 citations

Journal ArticleDOI
TL;DR: A novel approach for iris normalization, based on a non-geometric parameterization of contours, is proposed in the latest version, OSIRISV4.1, and is detailed in particular here.
Abstract: In this paper, we present the evolution of the open source iris recognition system OSIRIS through its most relevant versions: OSIRISV2, OSIRISV4, and OSIRISV4.1. We developed OSIRIS in the framework of the BioSecure Association as open source software aiming at providing a reference for the scientific community. The software is mainly composed of four key modules, namely segmentation, normalization, feature extraction and template matching, which are described in detail for each version. A novel approach for iris normalization, based on a non-geometric parameterization of contours, is proposed in the latest version, OSIRISV4.1, and is detailed in particular here. Improvements in performance through the different versions of OSIRIS are reported on two commonly used public databases, ICE2005 and CASIA-IrisV4-Thousand. We note the high verification rates obtained by the last version. For this reason, OSIRISV4.1 can be proposed as a baseline system for comparison to other algorithms, thus supplying a helpful research tool for the iris recognition community.

141 citations

References
Journal ArticleDOI
TL;DR: A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.
Abstract: We show how to use "complementary priors" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind.
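The greedy layer-wise idea can be illustrated with scikit-learn's `BernoulliRBM`: each layer is trained unsupervised on the hidden representation of the previous one. This is a toy sketch of the stacking scheme only, not Hinton et al.'s exact complementary-priors derivation or wake-sleep fine-tuning:

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)
X = rng.random((100, 64))           # toy data in [0, 1]

# Greedy layer-wise pretraining: each RBM fits the previous layer's output.
layers, h = [], X
for n_hidden in (32, 16):
    rbm = BernoulliRBM(n_components=n_hidden, n_iter=5, random_state=0)
    h = rbm.fit_transform(h)        # hidden activations feed the next layer
    layers.append(rbm)
```

After pretraining, the stacked weights would initialize a deep network that is then fine-tuned with a slower procedure, as the abstract describes.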

15,055 citations

Journal ArticleDOI
01 Mar 1973
TL;DR: This paper gives a tutorial exposition of the Viterbi algorithm and of how it is implemented and analyzed, and increasing use of the algorithm in a widening variety of areas is foreseen.
Abstract: The Viterbi algorithm (VA) is a recursive optimal solution to the problem of estimating the state sequence of a discrete-time finite-state Markov process observed in memoryless noise. Many problems in areas such as digital communications can be cast in this form. This paper gives a tutorial exposition of the algorithm and of how it is implemented and analyzed. Applications to date are reviewed. Increasing use of the algorithm in a widening variety of areas is foreseen.
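The recursion itself fits in a few lines for a discrete HMM in the log domain; the sketch below uses illustrative names and a toy two-state model:

```python
import numpy as np

def viterbi(log_pi, log_A, log_B, obs):
    """Most likely hidden state sequence of a discrete-time
    finite-state Markov process observed in memoryless noise."""
    T, N = len(obs), len(log_pi)
    delta = np.empty((T, N))               # best log-score ending in each state
    psi = np.zeros((T, N), dtype=int)      # backpointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A   # (from-state, to-state)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):          # backtrack through the pointers
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

# Two sticky states, each preferring its own observation symbol.
A = np.log([[0.8, 0.2], [0.2, 0.8]])       # transition probabilities
B = np.log([[0.9, 0.1], [0.1, 0.9]])       # emission probabilities
pi = np.log([0.5, 0.5])                    # initial distribution
best = viterbi(pi, A, B, [0, 0, 1, 1])     # -> [0, 0, 1, 1]
```

Working in log probabilities avoids the numerical underflow that plagues long observation sequences, which is why most implementations take this form.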

5,995 citations


"Smartphone based visible iris recog..." refers methods in this paper

  • ...The diffused image is used to detect the coarse boundaries by employing Viterbi search algorithm [8]....


Proceedings Article
04 Dec 2006
TL;DR: These experiments confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps the optimization, by initializing weights in a region near a good local minimum, giving rise to internal distributed representations that are high-level abstractions of the input, bringing better generalization.
Abstract: Complexity theory of circuits strongly suggests that deep architectures can be much more efficient (sometimes exponentially) than shallow architectures, in terms of computational elements required to represent some functions. Deep multi-layer neural networks have many levels of non-linearities allowing them to compactly represent highly non-linear and highly-varying functions. However, until recently it was not clear how to train such deep networks, since gradient-based optimization starting from random initialization appears to often get stuck in poor solutions. Hinton et al. recently introduced a greedy layer-wise unsupervised learning algorithm for Deep Belief Networks (DBN), a generative model with many layers of hidden causal variables. In the context of the above optimization problem, we study this algorithm empirically and explore variants to better understand its success and extend it to cases where the inputs are continuous or where the structure of the input distribution is not revealing enough about the variable to be predicted in a supervised task. Our experiments also confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps the optimization, by initializing weights in a region near a good local minimum, giving rise to internal distributed representations that are high-level abstractions of the input, bringing better generalization.

4,385 citations

Journal ArticleDOI
TL;DR: Algorithms developed by the author for recognizing persons by their iris patterns have now been tested in many field and laboratory trials, producing no false matches in several million comparison tests.
Abstract: Algorithms developed by the author for recognizing persons by their iris patterns have now been tested in many field and laboratory trials, producing no false matches in several million comparison tests. The recognition principle is the failure of a test of statistical independence on iris phase structure encoded by multi-scale quadrature wavelets. The combinatorial complexity of this phase information across different persons spans about 249 degrees of freedom and generates a discrimination entropy of about 3.2 bits/mm² over the iris, enabling real-time decisions about personal identity with extremely high confidence. The high confidence levels are important because they allow very large databases to be searched exhaustively (one-to-many "identification mode") without making false matches, despite so many chances. Biometrics that lack this property can only survive one-to-one ("verification") or few comparisons. The paper explains the iris recognition algorithms and presents results of 9.1 million comparisons among eye images from trials in Britain, the USA, Japan, and Korea.
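The "test of statistical independence" described above amounts to a masked fractional Hamming distance between binary iris codes: codes from different eyes disagree on roughly half their bits, while codes from the same eye disagree on far fewer. A minimal sketch (names and code length are illustrative, not Daugman's implementation):

```python
import numpy as np

def fractional_hamming(code_a, code_b, mask_a, mask_b):
    """Fraction of disagreeing bits, counted only where both codes are
    valid (i.e. not occluded by eyelids, lashes, or reflections)."""
    valid = mask_a & mask_b
    return ((code_a ^ code_b) & valid).sum() / valid.sum()

rng = np.random.default_rng(0)
a = rng.integers(0, 2, 2048).astype(bool)   # 2048-bit iris codes
b = rng.integers(0, 2, 2048).astype(bool)
mask = np.ones(2048, dtype=bool)

hd_same = fractional_hamming(a, a, mask, mask)   # identical eye: 0.0
hd_diff = fractional_hamming(a, b, mask, mask)   # independent eyes: ~0.5
```

A match is declared when the distance falls well below the ~0.5 expected for independent codes, which is what makes exhaustive one-to-many search feasible without false matches.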

2,829 citations


"Smartphone based visible iris recog..." refers methods in this paper

  • ...Under NIR imaging, iris features are generally obtained using 1D Gabor wavelet or 2D Gabor wavelet based features for successful recognition [15,6]....


  • ...The segmented iris is further normalized using the Daugman’s rubber sheet model [6]....


  • ...Given the iris image, the normalization technique unwraps the circular iris region into a rectangular image using Daugman's rubber sheet model [6]....

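The rubber sheet normalization quoted above maps the annular iris region to a fixed-size rectangle by sampling along rays between the pupil and iris boundaries. A nearest-neighbour sketch assuming concentric circular boundaries follows (real implementations handle non-concentric, non-circular contours; all names here are illustrative):

```python
import numpy as np

def rubber_sheet(image, cx, cy, r_pupil, r_iris, n_radial=64, n_angular=256):
    """Unwrap the annulus between the pupil and iris boundaries into an
    (n_radial x n_angular) rectangle via nearest-neighbour sampling."""
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angular, endpoint=False)
    out = np.empty((n_radial, n_angular), dtype=image.dtype)
    for i, rho in enumerate(np.linspace(0.0, 1.0, n_radial)):
        r = r_pupil + rho * (r_iris - r_pupil)   # pupil (rho=0) -> iris (rho=1)
        xs = np.clip(np.rint(cx + r * np.cos(thetas)).astype(int),
                     0, image.shape[1] - 1)
        ys = np.clip(np.rint(cy + r * np.sin(thetas)).astype(int),
                     0, image.shape[0] - 1)
        out[i] = image[ys, xs]
    return out

eye = np.arange(200 * 200, dtype=float).reshape(200, 200)   # synthetic image
strip = rubber_sheet(eye, cx=100, cy=100, r_pupil=20, r_iris=80)
```

Fixing the output size makes the representation invariant to pupil dilation and imaging distance, which is precisely why the model matters for the unconstrained, varying-radius captures discussed in the main paper.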

Proceedings ArticleDOI
10 Dec 2002
TL;DR: Algorithms developed by the author for recognizing persons by their iris patterns have now been tested in many field and laboratory trials, producing no false matches in several million comparison tests.
Abstract: The principle that underlies the recognition of persons by their iris patterns is the failure of a test of statistical independence on texture phase structure as encoded by multiscale quadrature wavelets. The combinatorial complexity of this phase information across different persons spans about 249 degrees of freedom and generates a discrimination entropy of about 3.2 bits/mm² over the iris, enabling real-time decisions about personal identity with extremely high confidence. Algorithms first described by the author in 1993 have now been tested in several independent field trials and are becoming widely licensed. This presentation reviews how the algorithms work and presents the results of 9.1 million comparisons among different eye images acquired in trials in Britain, the USA, Korea, and Japan.

2,437 citations