Journal ArticleDOI

Ocular biometrics

01 Nov 2015-Information Fusion (Elsevier)-Vol. 26, pp 1-35
TL;DR: A path forward is proposed to advance research on ocular recognition by improving the sensing technology, heterogeneous recognition for addressing interoperability, utilizing advanced machine learning algorithms for better representation and classification, and developing algorithms for ocular recognition at a distance.
Abstract: A literature review of ocular modalities such as iris and periocular is presented. Information fusion approaches that combine ocular modalities with other modalities are reviewed. Future research directions are presented on sensing technologies, algorithms, and fusion approaches. Biometrics, an integral component of Identity Science, is widely used in several large-scale country-wide projects to provide a meaningful way of recognizing individuals. Among existing modalities, ocular biometric traits such as iris, periocular, retina, and eye movement have received significant attention in the recent past. Iris recognition is used in the Unique Identification Authority of India's Aadhaar Program and the United Arab Emirates' border-security programs, whereas periocular recognition is used to augment the performance of face or iris recognition when only the ocular region is present in the image. This paper reviews the research progression in these modalities. The paper discusses existing algorithms, the limitations of each biometric trait, and information fusion approaches that combine ocular modalities with other modalities. We also propose a path forward to advance research on ocular recognition by (i) improving the sensing technology, (ii) heterogeneous recognition for addressing interoperability, (iii) utilizing advanced machine learning algorithms for better representation and classification, (iv) developing algorithms for ocular recognition at a distance, (v) using multimodal ocular biometrics for recognition, and (vi) encouraging benchmarking standards and open-source software development.
Citations
Journal ArticleDOI
TL;DR: It is shown that the off-the-shelf CNN features, while originally trained for classifying generic objects, are also extremely good at representing iris images, effectively extracting discriminative visual features and achieving promising recognition results on two iris datasets: ND-CrossSensor-2013 and CASIA-Iris-Thousand.
Abstract: Iris recognition refers to the automated process of recognizing individuals based on their iris patterns. The seemingly stochastic nature of the iris stroma makes it a distinctive cue for biometric recognition. The textural nuances of an individual’s iris pattern can be effectively extracted and encoded by projecting them onto Gabor wavelets and transforming the ensuing phasor response into a binary code - a technique pioneered by Daugman. This textural descriptor has been observed to be a robust feature descriptor with very low false match rates and low computational complexity. However, recent advancements in deep learning and computer vision indicate that generic descriptors extracted using convolutional neural networks (CNNs) are able to represent complex image characteristics. Given the superior performance of CNNs on the ImageNet large scale visual recognition challenge and a large number of other computer vision tasks, in this paper, we explore the performance of state-of-the-art pre-trained CNNs on iris recognition. We show that the off-the-shelf CNN features, while originally trained for classifying generic objects, are also extremely good at representing iris images, effectively extracting discriminative visual features and achieving promising recognition results on two iris datasets: ND-CrossSensor-2013 and CASIA-Iris-Thousand. We also discuss the challenges and future research directions in leveraging deep learning methods for the problem of iris recognition.
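The matching stage implied by the abstract above can be sketched as nearest-neighbor search over CNN feature vectors under cosine similarity. The vectors below are synthetic stand-ins (the paper extracts real features from pre-trained CNNs); the dimensionality and names are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def match(probe, gallery):
    """Return the index of the gallery feature most similar to the probe."""
    sims = [cosine_similarity(probe, g) for g in gallery]
    return int(np.argmax(sims))

# Synthetic stand-ins for CNN feature vectors (e.g. 2048-D activations).
rng = np.random.default_rng(1)
gallery = [rng.standard_normal(2048) for _ in range(3)]
probe = gallery[2] + 0.05 * rng.standard_normal(2048)  # noisy view of identity 2

best = match(probe, gallery)
```

With near-orthogonal random vectors, the noisy probe stays far closer to its own identity than to the others, so `best` resolves to index 2.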

291 citations

Journal ArticleDOI
TL;DR: This paper reviews the state-of-the-art design and implementation of iris-recognition-at-a-distance (IAAD) systems and presents a complete solution to the design problem of an IAAD system, from both hardware and algorithmic perspectives.
Abstract: The term iris refers to the highly textured annular portion of the human eye that is externally visible. An iris recognition system exploits the richness of these textural patterns to distinguish individuals. Iris recognition systems are being used in a number of human recognition applications such as access control, national ID schemes, border control, etc. To capture the rich textural information of the iris pattern regardless of eye color, traditional iris recognition systems utilize near-infrared (NIR) sensors to acquire images of the iris. This, however, restricts the iris image acquisition distance to close quarters (less than 1 m). Over the last several years, there have been numerous attempts to design and implement iris recognition systems that operate at longer standoff distances, ranging from 1 m to 60 m. Such long-range iris acquisition and recognition systems can provide high user convenience and improved throughput. This paper reviews the state-of-the-art design and implementation of iris-recognition-at-a-distance (IAAD) systems. In this regard, the design of such a system from both the image acquisition (hardware) and image processing (algorithms) perspectives is presented. The major contributions of this paper include: (1) discussing the significance and applications of IAAD systems in the context of human recognition, (2) providing a review of existing IAAD systems, (3) presenting a complete solution to the design problem of an IAAD system, from both hardware and algorithmic perspectives, (4) discussing the use of additional ocular information, along with iris, for improving IAAD accuracy, and (5) discussing current research challenges and providing recommendations for future research in IAAD.

133 citations

Journal ArticleDOI
TL;DR: This study proposed a novel multi-modality segmentation method based on a 3D fully convolutional neural network (FCN), which is capable of taking account of both PET and CT information simultaneously for tumor segmentation, and achieved a significant performance gain over CNN-based methods and traditional methods.
Abstract: Automatic tumor segmentation from medical images is an important step for computer-aided cancer diagnosis and treatment. Recently, deep learning has been successfully applied to this task, leading to state-of-the-art performance. However, most existing deep learning segmentation methods only work for a single imaging modality. The PET/CT scanner is nowadays widely used in the clinic, and is able to provide both metabolic information and anatomical information by integrating PET and CT into the same utility. In this study, we proposed a novel multi-modality segmentation method based on a 3D fully convolutional neural network (FCN), which is capable of taking account of both PET and CT information simultaneously for tumor segmentation. The network started with a multi-task training module, in which two parallel sub-segmentation architectures constructed using deep convolutional neural networks (CNNs) were designed to automatically extract feature maps from PET and CT respectively. A feature fusion module was subsequently designed based on cascaded convolutional blocks, which re-extracted features from PET/CT feature maps using a weighted cross-entropy minimization strategy. The tumor mask was obtained as the output at the end of the network using a softmax function. The effectiveness of the proposed method was validated on a clinical PET/CT dataset of 84 patients with lung cancer. The results demonstrated that the proposed network was effective, fast, and robust, and achieved a significant performance gain over CNN-based methods and traditional methods using PET or CT only, two V-net based co-segmentation methods, two variational co-segmentation methods based on fuzzy set theory, and a deep learning co-segmentation method using W-net.
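The weighted cross-entropy strategy mentioned in the abstract can be illustrated in a much-reduced binary, per-voxel form. The foreground weight and toy data below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def weighted_cross_entropy(probs, labels, w_fg=2.0):
    """Per-voxel binary cross entropy with the foreground (tumor) class
    up-weighted by w_fg to counter background/foreground imbalance."""
    eps = 1e-7
    probs = np.clip(probs, eps, 1 - eps)  # avoid log(0)
    loss = -(w_fg * labels * np.log(probs)
             + (1 - labels) * np.log(1 - probs))
    return loss.mean()

# Toy 2x2 "tumor mask": a confident correct prediction vs. an uncertain one.
labels = np.array([[1., 0.], [0., 1.]])
good = weighted_cross_entropy(np.array([[0.9, 0.1], [0.1, 0.9]]), labels)
bad  = weighted_cross_entropy(np.array([[0.5, 0.5], [0.5, 0.5]]), labels)
```

The confident, correct prediction yields the lower loss, which is what gradient descent on this objective exploits during training.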

129 citations

Journal ArticleDOI
TL;DR: Several systems and architectures related to the combination of biometric systems, both unimodal and multimodal, are reviewed and classified according to a given taxonomy, and a case study for the experimental evaluation of methods for biometric fusion at score level is presented.
Abstract: The paper presents methodologies for information fusion in the biometric field. The methodologies, architectures, and benchmarks related to unimodal and multimodal fusion of biometric systems are discussed. The state of the art in the combination of biometric matchers is provided. A case study for the experimental evaluation of methods for biometric fusion at score level is presented. Some possible directions for future research are suggested. Biometric identity verification refers to technologies used to measure human physical or behavioral characteristics, which offer a radical alternative to passports, ID cards, driving licenses, or PIN numbers in authentication. Since biometric systems present several limitations in terms of accuracy, universality, distinctiveness, and acceptability, methods for combining biometric matchers have attracted increasing attention from researchers, with the aim of improving the ability of systems to handle poor-quality and incomplete data, achieving scalability to manage huge databases of users, ensuring interoperability, and protecting user privacy against attacks. The combination of biometric systems, also known as "biometric fusion", can be classified as unimodal if it is based on a single biometric trait and multimodal if it uses several biometric traits for person authentication. The main goal of this study is to analyze different techniques of information fusion applied in the biometric field. This paper overviews several systems and architectures related to the combination of biometric systems, both unimodal and multimodal, classifying them according to a given taxonomy.
Moreover, we deal with the problem of biometric system evaluation, discussing both performance indicators and existing benchmarks. As a case study on the combination of biometric matchers, we present an experimental comparison of many different approaches to fusion of matchers at score level, carried out on three very different benchmark databases of scores. Our experiments show that the best performance is obtained by mixed approaches based on the fusion of scores. The source code of all the methods implemented for this research is freely available for future comparisons (www.dei.unipd.it/node/2357). After a detailed analysis of the pros and cons of several existing approaches for the combination of biometric matchers, and after an experimental evaluation of some of them, we draw our conclusions and suggest some future directions of research, hoping that this work can be a useful starting point for further research.
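As a sketch of score-level fusion of the kind evaluated above, the snippet below applies min-max normalization followed by the sum rule. The matcher names, score values, and decision threshold are illustrative assumptions, not figures from the paper.

```python
import numpy as np

def min_max_normalize(scores):
    """Map one matcher's scores into [0, 1] so matchers become comparable."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo)

def fuse_sum(score_matrix):
    """Sum rule: average the normalized scores across all matchers."""
    return score_matrix.mean(axis=0)

# Hypothetical raw similarity scores from two matchers for four probes;
# note the matchers report on very different native scales.
face_scores = np.array([0.91, 0.12, 0.55, 0.30])
iris_scores = np.array([410., 95., 300., 120.])

normalized = np.vstack([min_max_normalize(face_scores),
                        min_max_normalize(iris_scores)])
fused = fuse_sum(normalized)
decisions = fused >= 0.5  # accept/reject at an illustrative threshold
```

Normalization is the crucial step here: without it, the iris matcher's larger numeric range would dominate the sum.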

123 citations


Cites background from "Ocular biometrics"

  • ...ocular biometric traits such as iris, periocular, retina, and eye movement [11], finger traits such as finger veins and fingerprints [12][13]....


Journal ArticleDOI
TL;DR: This work is expected to provide insight into the most relevant issues in periocular biometrics, giving comprehensive coverage of the existing literature and current state of the art.
Abstract: Review of the state of the art in periocular biometrics research, with comprehensive coverage of the existing literature. Summary of databases employed in periocular research. Summary of works proposed for detection and segmentation of the periocular region. Taxonomy of features used for recognition, with a brief description of each feature and its application to periocular recognition. Fusion of periocular with other modalities, use for soft biometrics, and impact of gender transformation and plastic surgery. Periocular refers to the facial region in the vicinity of the eye, including eyelids, lashes, and eyebrows. While the face and iris have been extensively studied, the periocular region has emerged as a promising trait for unconstrained biometrics, following demands for increased robustness of face or iris systems. With a surprisingly high discrimination ability, this region can be easily obtained with existing setups for face and iris, and the requirement of user cooperation can be relaxed, thus facilitating interaction with biometric systems. It is also available over a wide range of distances, even when the iris texture cannot be reliably obtained (low resolution) or under partial face occlusion (close distances). Here, we review the state of the art in periocular biometrics research. A number of aspects are described, including: (i) existing databases, (ii) algorithms for periocular detection and/or segmentation, (iii) features employed for recognition, (iv) identification of the most discriminative regions of the periocular area, (v) comparison with iris and face modalities, (vi) soft biometrics (gender/ethnicity classification), and (vii) impact of gender transformation and plastic surgery on recognition accuracy. This work is expected to provide insight into the most relevant issues in periocular biometrics, giving comprehensive coverage of the existing literature and current state of the art.

120 citations

References
Journal ArticleDOI
TL;DR: A method for rapid visual recognition of personal identity is described, based on the failure of a statistical test of independence, which implies a theoretical "cross-over" error rate of one in 131,000 when a decision criterion is adopted that would equalize the false accept and false reject error rates.
Abstract: A method for rapid visual recognition of personal identity is described, based on the failure of a statistical test of independence. The most unique phenotypic feature visible in a person's face is the detailed texture of each eye's iris. The visible texture of a person's iris in a real-time video image is encoded into a compact sequence of multi-scale quadrature 2-D Gabor wavelet coefficients, whose most-significant bits comprise a 256-byte "iris code". Statistical decision theory generates identification decisions from Exclusive-OR comparisons of complete iris codes at the rate of 4000 per second, including calculation of decision confidence levels. The distributions observed empirically in such comparisons imply a theoretical "cross-over" error rate of one in 131,000 when a decision criterion is adopted that would equalize the false accept and false reject error rates. In the typical recognition case, given the mean observed degree of iris code agreement, the decision confidence levels correspond formally to a conditional false accept probability of about one in 10^31.
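The Exclusive-OR comparison of 256-byte iris codes described above can be sketched as a masked fractional Hamming distance. The all-valid occlusion masks and random codes below are illustrative assumptions, not Daugman's exact implementation.

```python
import numpy as np

def fractional_hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fraction of disagreeing bits between two iris codes, counted only
    where both occlusion masks mark the bits as valid."""
    valid = mask_a & mask_b                    # bits usable in both codes
    disagreements = (code_a ^ code_b) & valid  # XOR flags differing bits
    n_valid = np.unpackbits(valid).sum()
    if n_valid == 0:
        raise ValueError("no overlapping valid bits")
    return np.unpackbits(disagreements).sum() / n_valid

# Two hypothetical 256-byte (2048-bit) iris codes with all bits valid.
rng = np.random.default_rng(0)
code_a = rng.integers(0, 256, 256, dtype=np.uint8)
code_b = rng.integers(0, 256, 256, dtype=np.uint8)
mask = np.full(256, 0xFF, dtype=np.uint8)

same = fractional_hamming_distance(code_a, code_a, mask, mask)  # 0.0
diff = fractional_hamming_distance(code_a, code_b, mask, mask)  # ~0.5 for unrelated codes
```

Genuine comparisons cluster near distance 0, while statistically independent (impostor) codes cluster tightly around 0.5, which is the basis of the independence test described in the abstract.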

3,399 citations


"Ocular biometrics" refers background or methods in this paper

  • ...Daugman’s IrisCode algorithm [14] has served as the basis for a number of efforts made by researchers in the biometrics community....


  • ...Daugman [14] proposed the use of the Near Infrared (NIR) spectrum in the wavelength range 750–950 nm for iris acquisition....


  • ...Daugman [14] initially proposed circular edge detection using an integro-differential operator to segment the iris boundaries....


  • ...Daugman [14] proposed the use of 2D Gabor filters to capture the textural information present in iris codes....


  • ...The principle driving the algorithm is the failure of a test of statistical independence on the iris sample image encoded by multi-scale quadrature wavelets as discussed in [14]....


Journal ArticleDOI
TL;DR: This paper introduces the database, describes the recording procedure, and presents results from baseline experiments using PCA and LDA classifiers to highlight similarities and differences between PIE and Multi-PIE.
Abstract: A close relationship exists between the advancement of face recognition algorithms and the availability of face databases varying factors that affect facial appearance in a controlled manner. The CMU PIE database has been very influential in advancing research in face recognition across pose and illumination. Despite its success the PIE database has several shortcomings: a limited number of subjects, a single recording session and only few expressions captured. To address these issues we collected the CMU Multi-PIE database. It contains 337 subjects, imaged under 15 view points and 19 illumination conditions in up to four recording sessions. In this paper we introduce the database and describe the recording procedure. We furthermore present results from baseline experiments using PCA and LDA classifiers to highlight similarities and differences between PIE and Multi-PIE.
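The PCA baseline mentioned above can be sketched as learning an eigen-subspace from mean-centered image vectors via SVD, then matching by nearest neighbor in that subspace. The synthetic "subjects", vector length, and component count below are assumptions for illustration, not the paper's experimental setup.

```python
import numpy as np

def fit_pca(X, n_components):
    """Learn a PCA subspace from row-wise image vectors via SVD."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]          # principal components as rows

def project(X, mean, components):
    """Project image vectors onto the learned subspace."""
    return (X - mean) @ components.T

# Synthetic "face" vectors: two subjects with small within-subject noise.
rng = np.random.default_rng(2)
subj_a, subj_b = rng.standard_normal(64), rng.standard_normal(64)
train = np.stack([subj_a + 0.1 * rng.standard_normal(64) for _ in range(5)]
                 + [subj_b + 0.1 * rng.standard_normal(64) for _ in range(5)])
labels = [0] * 5 + [1] * 5

mean, comps = fit_pca(train, n_components=4)
train_proj = project(train, mean, comps)

# A new image of subject 0 is matched to its nearest projected neighbor.
probe = project((subj_a + 0.1 * rng.standard_normal(64))[None, :], mean, comps)
nearest = int(np.argmin(np.linalg.norm(train_proj - probe, axis=1)))
predicted = labels[nearest]
```

Because the between-subject separation dominates the within-subject noise, the probe lands in subject 0's cluster. LDA differs from this sketch by choosing directions that maximize between-class over within-class scatter rather than raw variance.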

1,333 citations

Proceedings Article
05 Dec 2005
TL;DR: A model of bottom-up overt attention is proposed based on the principle of maximizing information sampled from a scene and is achieved in a neural circuit, which is demonstrated as having close ties with the circuitry existent in the primate visual cortex.
Abstract: A model of bottom-up overt attention is proposed based on the principle of maximizing information sampled from a scene. The proposed operation is based on Shannon's self-information measure and is achieved in a neural circuit, which is demonstrated as having close ties with the circuitry existent in the primate visual cortex. It is further shown that the proposed saliency measure may be extended to address issues that currently elude explanation in the domain of saliency-based models. Results on natural images are compared with experimental eye tracking data, revealing the efficacy of the model in predicting the deployment of overt attention as compared with existing efforts.
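A heavily simplified sketch of Shannon self-information as a saliency measure follows, using a raw intensity histogram in place of the model's learned feature representation (an assumption made for brevity): pixels whose values are rare in the scene carry more information and are therefore scored as more salient.

```python
import numpy as np

def self_information_map(image, bins=16):
    """Assign each pixel the self-information -log2 p(v) of its quantized
    intensity v; rare intensities yield high saliency."""
    quantized = np.clip((image * bins).astype(int), 0, bins - 1)
    counts = np.bincount(quantized.ravel(), minlength=bins)
    p = counts / counts.sum()                # empirical intensity distribution
    return -np.log2(p[quantized])            # per-pixel surprise, in bits

# Mostly dark image with one bright, statistically "surprising" pixel.
img = np.zeros((8, 8))
img[4, 4] = 0.99
saliency = self_information_map(img)
```

The lone bright pixel has probability 1/64 under the histogram and so scores 6 bits, while the common dark pixels score near zero, reproducing the pop-out behavior the full model captures with richer features.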

1,201 citations


"Ocular biometrics" refers methods in this paper

  • ...Experiments performed on the Eye Movement Dataset [198] datasets show an accuracy of 91.5% for the proposed algorithm....


  • ...The effectiveness of proposed algorithm is established based on extensive qualitative and quantitative experimental results performed on the York University Eye Tracking Dataset [206]....


  • ...Examine dynamic aspects of eye behaviors to assess eye movement patterns as soft biometric trait Sun et al. [205] York University Eye Tracking Dataset [206] [Emerging] Model saccadic eye movements and visual saliency based on SGC. Obtain SGC using projection pursuit and generate eye movements by selecting location with maximum SGC response with information from the pupil, the periocular region, and the sclera....


  • ...UBIRIS and NICE Datasets: Proença and Alexandre collected the UBIRIS v1 dataset [18] using a Nikon E5700 sensor....


  • ...Face Datasets used for Ocular Recognition: The data for the Face Recognition Grand Challenge v1 (FRGC) [64] consists of 50,000 recordings....


Proceedings Article
01 Sep 2008
TL;DR: The CMU Multi-PIE database contains 337 subjects, imaged under 15 view points and 19 illumination conditions in up to four recording sessions, addressing the shortcomings of the earlier PIE database: a limited number of subjects, a single recording session, and only a few expressions captured.
Abstract: A close relationship exists between the advancement of face recognition algorithms and the availability of face databases varying factors that affect facial appearance in a controlled manner. The CMU PIE database has been very influential in advancing research in face recognition across pose and illumination. Despite its success the PIE database has several shortcomings: a limited number of subjects, a single recording session and only few expressions captured. To address these issues we collected the CMU Multi-PIE database. It contains 337 subjects, imaged under 15 view points and 19 illumination conditions in up to four recording sessions. In this paper we introduce the database and describe the recording procedure. We furthermore present results from baseline experiments using PCA and LDA classifiers to highlight similarities and differences between PIE and Multi-PIE.

1,181 citations

Book
01 Jan 2006
TL;DR: Details multi-modal biometrics and its exceptional utility for increasingly reliable human recognition systems and the substantial advantages of multimodal systems over conventional identification methods.
Abstract: Details multimodal biometrics and its exceptional utility for increasingly reliable human recognition systems. Reveals the substantial advantages of multimodal systems over conventional identification methods.

1,068 citations


"Ocular biometrics" refers background in this paper

  • ...To address such instances, researchers and practitioners have proposed multi-modal fusion or selection of biometric modalities to improve the recognition performance [8]....
