Author
James S. Doyle
Bio: James S. Doyle is an academic researcher from the University of Notre Dame. The author has contributed to research on topics including Iris recognition & Lens (optics). The author has an h-index of 6 and has co-authored 8 publications receiving 408 citations.
Papers
TL;DR: This paper presents a novel lens detection algorithm that can be used to reduce the effect of contact lenses; the approach outperforms other lens detection algorithms on the two databases and shows improved iris recognition performance.
Abstract: The presence of a contact lens, particularly a textured cosmetic lens, poses a challenge to iris recognition as it obfuscates the natural iris patterns. The main contribution of this paper is to present an in-depth analysis of the effect of contact lenses on iris recognition. Two databases, namely, the IIIT-D Iris Contact Lens database and the ND-Contact Lens database, are prepared to analyze the variations caused due to contact lenses. We also present a novel lens detection algorithm that can be used to reduce the effect of contact lenses. The proposed approach outperforms other lens detection algorithms on the two databases and shows improved iris recognition performance.
149 citations
TL;DR: This paper shows that a novel textured lens type may have a significant impact on the performance of textured lens detection, and that using a novel iris sensor can significantly degrade the correct classification rate of a detection algorithm trained with images from a different sensor.
Abstract: This paper considers three issues that arise in creating an algorithm for the robust detection of textured contact lenses in iris recognition images. The first issue is whether accurate segmentation of the iris region is required in order to achieve accurate detection of textured contact lenses. Our experimental results suggest that accurate iris segmentation is not required. The second issue is whether an algorithm trained on images acquired from one sensor will generalize well to images acquired from a different sensor. Our results suggest that using a novel iris sensor can significantly degrade the correct classification rate of a detection algorithm trained with images from a different sensor. The third issue is how well a detector generalizes to a brand of textured contact lenses not seen in the training data. This paper shows that a novel textured lens type may have a significant impact on the performance of textured lens detection.
94 citations
TL;DR: The goal of the Liveness Detection (LivDet) competitions is to compare software-based iris liveness detection methodologies using a standardized testing protocol and large quantities of spoof and live images.
Abstract: The use of an artificial replica of a biometric characteristic in an attempt to circumvent a system is an example of a biometric presentation attack. Liveness detection is one of the proposed countermeasures, and has been widely implemented in fingerprint and iris recognition systems in recent years to reduce the consequences of spoof attacks. The goal of the Liveness Detection (LivDet) competitions is to compare software-based iris liveness detection methodologies using a standardized testing protocol and large quantities of spoof and live images. Three submissions were received for Part 1 of the competition: the Biometric Recognition Group at Universidad Autonoma de Madrid, the University of Naples Federico II, and the Faculdade de Engenharia da Universidade do Porto. The best results across all three datasets were from Federico II, with a rate of falsely rejected live samples of 28.6% and a rate of falsely accepted fake samples of 5.7%.
71 citations
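The two figures quoted for the winning submission (falsely rejected live samples and falsely accepted fake samples) follow the standard LivDet error-reporting scheme. A minimal sketch of how such rates are computed from per-sample decisions; the function and variable names here are illustrative and not taken from the competition's own code:

```python
def livdet_error_rates(labels, predictions):
    """Compute the two LivDet-style error rates from ground-truth
    labels and classifier decisions, each "live" or "fake".

    Returns (ferr_live, ferr_fake):
      ferr_live -- fraction of live samples falsely rejected as fake
      ferr_fake -- fraction of fake samples falsely accepted as live
    """
    pairs = list(zip(labels, predictions))
    live = [p for truth, p in pairs if truth == "live"]
    fake = [p for truth, p in pairs if truth == "fake"]
    ferr_live = sum(p == "fake" for p in live) / len(live)
    ferr_fake = sum(p == "live" for p in fake) / len(fake)
    return ferr_live, ferr_fake
```

Because the two rates are computed over disjoint subsets (live vs. fake samples), a system can trade one off against the other by moving its decision threshold, which is why competitions report both.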
01 Sep 2013
TL;DR: Experimental results in this work show that accuracy of textured lens detection can drop dramatically when tested on a manufacturer of lenses not seen in the training data, or when the iris sensor in use varies between the training and test data.
Abstract: Automatic detection of textured contact lenses in images acquired for iris recognition has been studied by several researchers. However, to date, the experimental results in this area have all been based on the same manufacturer of contact lenses being represented in both the training data and the test data and only one previous work has considered images from more than one iris sensor. Experimental results in this work show that accuracy of textured lens detection can drop dramatically when tested on a manufacturer of lenses not seen in the training data, or when the iris sensor in use varies between the training and test data. These results suggest that the development of a fully general approach to textured lens detection is a problem that still requires attention.
66 citations
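The cross-manufacturer and cross-sensor experiments described above amount to a leave-one-group-out evaluation: hold out every image from one manufacturer (or sensor), train on the rest, and test on the held-out group. A hedged sketch of that protocol, with made-up names since the paper does not publish code:

```python
def leave_one_group_out(samples, group_of):
    """Yield (held_out_group, train, test) splits such that the test
    set contains only samples from a group (e.g. a lens manufacturer
    or an iris sensor) never seen during training.

    `samples` is any list; `group_of` maps a sample to its group key.
    """
    groups = sorted({group_of(s) for s in samples})
    for held_out in groups:
        train = [s for s in samples if group_of(s) != held_out]
        test = [s for s in samples if group_of(s) == held_out]
        yield held_out, train, test
```

With samples stored as (manufacturer, image_id) pairs, `leave_one_group_out(samples, lambda s: s[0])` produces one train/test split per manufacturer, so a detector's accuracy drop on unseen brands can be measured directly.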
04 Jun 2013
TL;DR: This work presents results of what the authors believe is the first attempt to solve the three-class classification problem for iris recognition, and shows that it is possible to identify with high accuracy the images in which a textured cosmetic contact lens is present.
Abstract: Textured cosmetic lenses have long been known to present a problem for iris recognition. It was once believed that clear, soft contact lenses did not impact iris recognition accuracy. However, it has recently been shown that persons wearing clear, soft contact lenses experience an increased false non-match rate relative to persons not wearing contact lenses. Iris recognition systems need the ability to automatically determine if a person is (a) wearing no contact lens, (b) wearing a clear prescription lens, or (c) wearing a textured cosmetic lens. This work presents results of the first attempt that we are aware of to solve this three-class classification problem. Results show that it is possible to identify with high accuracy (96.5%) the images in which a textured cosmetic contact lens is present, but that correctly distinguishing between no lenses and soft lenses is a challenging problem.
39 citations
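The problem above is a single classifier over three labels: no lens, soft lens, textured lens. A toy nearest-centroid sketch over assumed 2-D texture features makes the framing concrete; the paper's actual features and classifier differ, and every name below is illustrative:

```python
# Hypothetical labels for the three-class lens problem.
LENS_CLASSES = ("none", "soft", "textured")

def classify_lens(features, centroids):
    """Assign a feature vector to the nearest class centroid.

    `centroids` maps each label in LENS_CLASSES to a feature vector
    of the same length as `features`. This stands in for whatever
    texture descriptor and classifier a real system would use.
    """
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sq_dist(features, centroids[label]))
```

In this toy geometry, textured-lens images would sit far from the other two classes (matching the 96.5% result above), while "none" and "soft" centroids sit close together, which is exactly why that pair is hard to separate.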
Cited by
01 May 2017
TL;DR: This work introduces a new public face PAD database, OULU-NPU, aiming at evaluating the generalization of PAD methods in more realistic mobile authentication scenarios across three covariates: unknown environmental conditions, acquisition devices and presentation attack instruments.
Abstract: The vulnerabilities of face-based biometric systems to presentation attacks have finally been recognized, yet we still lack generalized software-based face presentation attack detection (PAD) methods that perform robustly in practical mobile authentication scenarios. This is mainly because the existing public face PAD datasets are only beginning to cover a variety of attack scenarios and acquisition conditions, and their standard evaluation protocols do not encourage researchers to assess the generalization capabilities of their methods across these variations. In the present work, we introduce a new public face PAD database, OULU-NPU, aiming at evaluating the generalization of PAD methods in more realistic mobile authentication scenarios across three covariates: unknown environmental conditions (namely illumination and background scene), acquisition devices, and presentation attack instruments (PAI). This publicly available database consists of 5940 videos corresponding to 55 subjects recorded in three different environments using high-resolution frontal cameras of six different smartphones. The high-quality print and video-replay attacks were created using two different printers and two different display devices. Each of the four unambiguously defined evaluation protocols introduces at least one previously unseen condition to the test set, which enables a fair comparison of the generalization capabilities of new and existing approaches. The baseline results using a color-texture-analysis-based face PAD method demonstrate the challenging nature of the database.
416 citations
TL;DR: The goal of this paper is to provide a comprehensive overview of the work that has been carried out over the last decade in the emerging field of antispoofing, with special attention to the mature and largely deployed face modality.
Abstract: In recent decades, we have witnessed the evolution of biometric technology from the first pioneering works in face and voice recognition to the current state of development, wherein a wide spectrum of highly accurate systems may be found, ranging from largely deployed modalities, such as fingerprint, face, or iris, to more marginal ones, such as signature or hand. This path of technological evolution has naturally led to a critical issue that has only started to be addressed recently: the resistance of this rapidly emerging technology to external attacks and, in particular, to spoofing. Spoofing, referred to by the term presentation attack in current standards, is a purely biometric vulnerability that is not shared with other IT security solutions. It refers to the ability to fool a biometric system into recognizing an illegitimate user as a genuine one by presenting a synthetic forged version of the original biometric trait to the sensor. The entire biometric community, including researchers, developers, standardizing bodies, and vendors, has thrown itself into the challenging task of proposing and developing efficient protection methods against this threat. The goal of this paper is to provide a comprehensive overview of the work that has been carried out over the last decade in the emerging field of antispoofing, with special attention to the mature and largely deployed face modality. The work covers theories, methodologies, state-of-the-art techniques, and evaluation databases, and also aims at providing an outlook into the future of this very active field of research.
366 citations
TL;DR: This work assumes very limited knowledge about biometric spoofing at the sensor and derives outstanding spoofing detection systems for the iris, face, and fingerprint modalities from two deep learning approaches based on convolutional networks.
Abstract: Biometrics systems have significantly improved person identification and authentication, playing an important role in personal, national, and global security. However, these systems might be deceived (or spoofed) and, despite the recent advances in spoofing detection, current solutions often rely on domain knowledge, specific biometric reading systems, and attack types. We assume a very limited knowledge about biometric spoofing at the sensor to derive outstanding spoofing detection systems for iris, face, and fingerprint modalities based on two deep learning approaches. The first approach consists of learning suitable convolutional network architectures for each domain, whereas the second approach focuses on learning the weights of the network via back propagation. We consider nine biometric spoofing benchmarks—each one containing real and fake samples of a given biometric modality and attack type—and learn deep representations for each benchmark by combining and contrasting the two learning approaches. This strategy not only provides better comprehension of how these approaches interplay, but also creates systems that exceed the best known results in eight out of the nine benchmarks. The results strongly indicate that spoofing detection systems based on convolutional networks can be robust to attacks already known and possibly adapted, with little effort, to image-based attacks that are yet to come.
353 citations