Proceedings ArticleDOI

Synthetic iris presentation attack using iDCGAN

TL;DR: In this paper, a novel iris presentation attack using deep learning based synthetic iris generation is presented, moving beyond previously studied attacks such as textured contact lenses and print attacks.
Abstract: The reliability and accuracy of the iris biometric modality have prompted its large-scale deployment for critical applications such as border control and national ID projects. The extensive growth of iris recognition systems has raised apprehensions about the susceptibility of these systems to various attacks. In the past, researchers have examined the impact of various iris presentation attacks such as textured contact lenses and print attacks. In this research, we present a novel presentation attack using deep learning based synthetic iris generation. Utilizing the generative capability of deep convolutional generative adversarial networks and iris quality metrics, we propose a new framework, named iDCGAN (iris deep convolutional generative adversarial network), for generating realistic-appearing synthetic iris images. We demonstrate the effect of these synthetically generated iris images as a presentation attack on iris recognition using a commercial system. The state-of-the-art presentation attack detection framework, DESIST, is utilized to analyze whether it can discriminate these synthetically generated iris images from real images. The experimental results illustrate that mitigating the proposed synthetic presentation attack is of paramount importance.
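The abstract does not include the iDCGAN architecture itself; as a rough illustration of the DCGAN-style generator it builds on, a minimal PyTorch sketch for 64x64 grayscale iris-like images might look as follows. Layer widths, image size, and the quality-filtering hook are assumptions, not the authors' exact design.

```python
# Minimal DCGAN-style generator sketch (assumed sizes, not the authors' exact iDCGAN).
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=100, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            # z_dim x 1 x 1 -> feat*8 x 4 x 4
            nn.ConvTranspose2d(z_dim, feat * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feat * 8), nn.ReLU(True),
            # -> feat*4 x 8 x 8
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 4), nn.ReLU(True),
            # -> feat*2 x 16 x 16
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 2), nn.ReLU(True),
            # -> feat x 32 x 32
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat), nn.ReLU(True),
            # -> 1 x 64 x 64 grayscale iris-like image in [-1, 1]
            nn.ConvTranspose2d(feat, 1, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), z.size(1), 1, 1))

# Usage: sample a batch of synthetic images; a quality metric (a hypothetical
# iris_quality_score) would then filter candidates, since iDCGAN couples the
# GAN with iris quality metrics.
g = Generator()
fake = g(torch.randn(8, 100))  # 8 x 1 x 64 x 64
```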
Citations
Proceedings ArticleDOI
15 Mar 2018
TL;DR: This work proposes a multi-task convolutional neural network learning approach that can simultaneously perform iris localization and presentation attack detection (PAD), and is believed to be the first work to perform both tasks simultaneously.
Abstract: In this work, we propose a multi-task convolutional neural network learning approach that can simultaneously perform iris localization and presentation attack detection (PAD). The proposed multi-task PAD (MT-PAD) is inspired by an object detection method which directly regresses the parameters of the iris bounding box and computes the probability of presentation attack from the input ocular image. Experiments involving both intra-sensor and cross-sensor scenarios suggest that the proposed method can achieve state-of-the-art results on publicly available datasets. To the best of our knowledge, this is the first work that performs iris detection and iris presentation attack detection simultaneously.
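The MT-PAD network itself is not given here; the sketch below only illustrates the general idea of a shared backbone with two heads, one regressing iris bounding-box parameters and one producing a presentation-attack probability. All layer sizes and names are assumptions.

```python
# Multi-task sketch: shared features -> bounding-box regression + PA probability.
# Layer sizes are illustrative assumptions, not the published MT-PAD architecture.
import torch
import torch.nn as nn

class MultiTaskPAD(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.bbox_head = nn.Linear(64, 4)   # (x, y, w, h) of the iris region
        self.pad_head = nn.Linear(64, 1)    # logit of "presentation attack"

    def forward(self, ocular):
        feats = self.backbone(ocular)
        return self.bbox_head(feats), torch.sigmoid(self.pad_head(feats))

model = MultiTaskPAD()
bbox, p_attack = model(torch.randn(2, 1, 128, 128))  # dummy ocular images
```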

67 citations

Proceedings ArticleDOI
18 Jun 2018
TL;DR: A novel algorithm for detecting iris presentation attacks that combines handcrafted Haralick texture features in the multi-level Redundant Discrete Wavelet Transform domain with deep VGG features to encode the textural variations between real and attacked iris images.
Abstract: Iris recognition systems may be vulnerable to presentation attacks such as textured contact lenses, print attacks, and synthetic iris images. Increasing applications of iris recognition have raised the importance of efficient presentation attack detection algorithms. In this paper, we propose a novel algorithm for detecting iris presentation attacks using a combination of handcrafted and deep learning based features. The proposed algorithm combines local and global Haralick texture features in multi-level Redundant Discrete Wavelet Transform domain with VGG features to encode the textural variations between real and attacked iris images. The proposed algorithm is extensively tested on a large iris dataset comprising more than 270,000 real and attacked iris images and yields a total error of 1.01%. The experimental evaluation demonstrates the superior presentation attack detection performance of the proposed algorithm as compared to state-of-the-art algorithms.
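As a hedged sketch of the kind of handcrafted-plus-deep fusion the abstract describes (not the authors' code), one could compute GLCM texture properties, a subset of the Haralick set, on redundant-DWT subbands and concatenate them with VGG features. The wavelet, the GLCM properties, and the VGG layer choice below are assumptions.

```python
# Sketch of handcrafted + deep feature fusion: GLCM texture properties on
# redundant-DWT (SWT) subbands, concatenated with VGG16 convolutional features.
import numpy as np
import pywt
import torch
from skimage.feature import graycomatrix, graycoprops
from torchvision.models import vgg16

def texture_features(img_u8, levels=2):
    """GLCM contrast/energy/homogeneity on each stationary-wavelet subband."""
    feats = []
    for cA, (cH, cV, cD) in pywt.swt2(img_u8.astype(float), 'haar', level=levels):
        for band in (cA, cH, cV, cD):
            b = np.uint8(255 * (band - band.min()) / (np.ptp(band) + 1e-8))
            glcm = graycomatrix(b, distances=[1], angles=[0], levels=256,
                                symmetric=True, normed=True)
            feats += [graycoprops(glcm, p)[0, 0]
                      for p in ('contrast', 'energy', 'homogeneity')]
    return np.array(feats)

def deep_features(img_u8):
    """VGG16 convolutional features (untrained weights here, just for shape)."""
    x = torch.tensor(img_u8, dtype=torch.float32).repeat(3, 1, 1)[None] / 255.0
    with torch.no_grad():
        return vgg16(weights=None).features(x).flatten().numpy()

iris = np.random.randint(0, 256, (128, 128), dtype=np.uint8)  # dummy iris crop
fused = np.concatenate([texture_features(iris), deep_features(iris)])
# `fused` would then feed a classifier (e.g. an SVM) for real-vs-attack decisions.
```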

55 citations

Journal ArticleDOI
TL;DR: A comprehensive review of the different aspects related to inverse biometrics: development of reconstruction algorithms for different characteristics; proposal of methodologies to assess the vulnerabilities of biometric systems to the aforementioned algorithms; development of countermeasures to reduce the possible effects of attacks.

54 citations

Proceedings ArticleDOI
16 Jun 2019
TL;DR: A new technique for generating synthetic iris images is designed and its potential for presentation attack detection (PAD) is demonstrated and the viability of using these synthetic images to train a PAD system that can generalize well to "unseen" attacks is demonstrated.
Abstract: In this work we design a new technique for generating synthetic iris images and demonstrate its potential for presentation attack detection (PAD). The proposed technique utilizes the generative capability of a Relativistic Average Standard Generative Adversarial Network (RaSGAN) to synthesize high quality images of the iris. Unlike traditional GANs, RaSGAN enhances the generative power of the network by introducing a "relativistic" discriminator (and generator), which aims to maximize the probability that the real input data is more realistic than the synthetic data (and vice-versa, respectively). The resultant generated images are observed to be very similar to real iris images. Furthermore, we demonstrate the viability of using these synthetic images to train a PAD system that can generalize well to "unseen" attacks, i.e., the PAD system is able to detect attacks that were not used during the training phase.
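The relativistic average standard GAN losses mentioned in the abstract can be sketched as follows; the helper name and batch handling are illustrative, but the loss structure follows the published RaSGAN formulation.

```python
# Relativistic average standard GAN (RaSGAN) losses on raw critic logits.
import torch
import torch.nn.functional as F

def rasgan_losses(real_logits, fake_logits):
    """real_logits, fake_logits: critic outputs C(x) for real and generated batches."""
    ones = torch.ones_like(real_logits)
    zeros = torch.zeros_like(real_logits)
    # Discriminator: real should look "more real than the average fake", and vice versa.
    d_loss = (F.binary_cross_entropy_with_logits(real_logits - fake_logits.mean(), ones) +
              F.binary_cross_entropy_with_logits(fake_logits - real_logits.mean(), zeros))
    # Generator: the symmetric objective with the roles swapped.
    g_loss = (F.binary_cross_entropy_with_logits(fake_logits - real_logits.mean(), ones) +
              F.binary_cross_entropy_with_logits(real_logits - fake_logits.mean(), zeros))
    return d_loss, g_loss

d_loss, g_loss = rasgan_losses(torch.randn(16, 1), torch.randn(16, 1))
```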

39 citations

Proceedings ArticleDOI
01 Mar 2020
TL;DR: This research proposes a presentation attack detection (PAD) method that utilizes a discriminator trained to distinguish between bonafide iris images and synthetically generated iris images, and hypothesizes that such a discriminator will form a tight boundary around the bonafide samples, better separating them from all types of PA samples.
Abstract: Iris based recognition systems are vulnerable to presentation attacks (PAs) where artifacts such as cosmetic contact lenses, artificial eyes and printed eyes can be used to fool the system. While many learning-based algorithms have been proposed to detect such attacks, very few are equipped to handle previously unseen or newly constructed PAs. In this research, we propose a presentation attack detection (PAD) method that utilizes a discriminator that is trained to distinguish between bonafide iris images and synthetically generated iris images. We hypothesize that such a discriminator will generate a tight boundary around the bonafide samples. This would allow the discriminator to better separate the bonafide samples from all types of PA samples. For generating synthetic irides, we train the Relativistic Average Standard Generative Adversarial Network (RaSGAN) that has been shown to generate higher resolution and better quality images than standard GANs. The relativistic discriminator (RD) component of the trained RaSGAN is then appropriated for PA detection and is referred to as RD-PAD. Experimental results convey the efficacy of the RD-PAD as a one-class anomaly detector.
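A hedged sketch of how a trained discriminator could be repurposed as a one-class PAD score, as the abstract describes: calibrate a threshold on bonafide validation scores and flag low-scoring inputs. The `discriminator` handle and the percentile choice are assumptions, not the paper's protocol.

```python
# One-class PAD sketch: threshold the discriminator score learned during GAN training.
import torch

@torch.no_grad()
def calibrate_threshold(discriminator, bonafide_images, quantile=0.05):
    # Scores of known-bonafide samples; allow a small false-reject rate.
    scores = discriminator(bonafide_images).squeeze()
    return torch.quantile(scores, quantile)

@torch.no_grad()
def is_presentation_attack(discriminator, image, threshold):
    # Inputs scoring below the bonafide band are flagged as presentation attacks.
    return discriminator(image.unsqueeze(0)).item() < threshold
```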

19 citations

References
Journal ArticleDOI
08 Dec 2014
TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are simultaneously trained: a generative model G that captures the data distribution and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
Abstract: We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.
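For reference, the two-player minimax game described above is usually written with the following value function (the standard GAN objective):

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```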

38,211 citations

Proceedings Article
01 Jan 2014
TL;DR: A stochastic variational inference and learning algorithm is introduced that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case.
Abstract: How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions are two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.
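A minimal sketch of the reparameterization trick and of the evidence lower bound for a diagonal Gaussian posterior, which is what makes the estimator optimizable with standard stochastic gradients; the encoder and decoder are omitted and the function names are illustrative.

```python
# Reparameterization trick: z = mu + sigma * eps makes the sampling step
# differentiable, so the variational lower bound can be optimized with SGD.
import torch

def reparameterize(mu, logvar):
    std = torch.exp(0.5 * logvar)
    eps = torch.randn_like(std)
    return mu + std * eps  # differentiable w.r.t. mu and logvar

def elbo(recon_log_likelihood, mu, logvar):
    # KL(q(z|x) || N(0, I)) in closed form for a diagonal Gaussian posterior.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_log_likelihood - kl
```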

20,769 citations

Posted Content
TL;DR: The conditional version of generative adversarial nets is introduced, which can be constructed by simply feeding the data, y, to the generator and discriminator, and it is shown that this model can generate MNIST digits conditioned on class labels.
Abstract: Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.
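A minimal sketch of the conditioning mechanism described above: the label y is simply concatenated with the generator's noise input and with the discriminator's input. The MLP sizes are assumptions, not the paper's configuration.

```python
# Conditional GAN sketch: feed the condition y to both generator and discriminator.
import torch
import torch.nn as nn

n_classes, z_dim, img_dim = 10, 100, 784  # e.g. flattened MNIST digits

G = nn.Sequential(nn.Linear(z_dim + n_classes, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim + n_classes, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

y = nn.functional.one_hot(torch.randint(0, n_classes, (8,)), n_classes).float()
z = torch.randn(8, z_dim)
fake = G(torch.cat([z, y], dim=1))        # generate conditioned on y
logit = D(torch.cat([fake, y], dim=1))    # discriminate conditioned on y
```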

7,987 citations

Posted Content
TL;DR: SRGAN, a generative adversarial network (GAN) for image super-resolution (SR), is presented; to the authors' knowledge it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors, and it uses a perceptual loss function consisting of an adversarial loss and a content loss.
Abstract: Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.
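A hedged sketch of the perceptual loss described above, combining a VGG feature-space content term with an adversarial term; the VGG layer cut-off and the adversarial weight are assumptions rather than the paper's exact settings.

```python
# SRGAN-style perceptual loss sketch: content loss on VGG feature maps plus an
# adversarial term that pushes the generator toward the natural image manifold.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

vgg_features = vgg19(weights=None).features[:36].eval()  # deep conv features

def perceptual_loss(sr, hr, d_logits_on_sr, adv_weight=1e-3):
    with torch.no_grad():
        hr_feat = vgg_features(hr)
    content = F.mse_loss(vgg_features(sr), hr_feat)        # feature-space MSE
    adversarial = F.binary_cross_entropy_with_logits(
        d_logits_on_sr, torch.ones_like(d_logits_on_sr))   # fool the discriminator
    return content + adv_weight * adversarial
```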

4,404 citations

Journal ArticleDOI
TL;DR: Algorithms developed by the author for recognizing persons by their iris patterns have now been tested in many field and laboratory trials, producing no false matches in several million comparison tests.
Abstract: Algorithms developed by the author for recognizing persons by their iris patterns have now been tested in many field and laboratory trials, producing no false matches in several million comparison tests. The recognition principle is the failure of a test of statistical independence on iris phase structure encoded by multi-scale quadrature wavelets. The combinatorial complexity of this phase information across different persons spans about 249 degrees of freedom and generates a discrimination entropy of about 3.2 bits/mm² over the iris, enabling real-time decisions about personal identity with extremely high confidence. The high confidence levels are important because they allow very large databases to be searched exhaustively (one-to-many "identification mode") without making false matches, despite so many chances. Biometrics that lack this property can only survive one-to-one ("verification") or few comparisons. The paper explains the iris recognition algorithms and presents results of 9.1 million comparisons among eye images from trials in Britain, the USA, Japan, and Korea.
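A small sketch of the statistical-independence test underlying this approach: the fractional Hamming distance between two binary iris codes over the bits both masks mark as valid (the Gabor phase encoding that produces the codes is omitted). Unrelated codes cluster near 0.5; genuine matches fall well below.

```python
# Fractional Hamming distance between masked binary iris codes.
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    valid = mask_a & mask_b                 # bits usable in both codes
    disagreeing = (code_a ^ code_b) & valid  # bits that differ among valid bits
    return disagreeing.sum() / max(valid.sum(), 1)

rng = np.random.default_rng(0)
a = rng.integers(0, 2, 2048).astype(bool)   # two unrelated dummy codes
b = rng.integers(0, 2, 2048).astype(bool)
m = np.ones(2048, dtype=bool)
# Expect a value near 0.5 here; a small distance would indicate a match,
# i.e. the "failure of the test of statistical independence".
print(hamming_distance(a, b, m, m))
```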

2,829 citations