
Showing papers on "Biometrics published in 2018"


Journal ArticleDOI
TL;DR: It is shown that the off-the-shelf CNN features, while originally trained for classifying generic objects, are also extremely good at representing iris images, effectively extracting discriminative visual features and achieving promising recognition results on two iris datasets: ND-CrossSensor-2013 and CASIA-Iris-Thousand.
Abstract: Iris recognition refers to the automated process of recognizing individuals based on their iris patterns. The seemingly stochastic nature of the iris stroma makes it a distinctive cue for biometric recognition. The textural nuances of an individual’s iris pattern can be effectively extracted and encoded by projecting them onto Gabor wavelets and transforming the ensuing phasor response into a binary code - a technique pioneered by Daugman. This textural descriptor has been observed to be a robust feature descriptor with very low false match rates and low computational complexity. However, recent advancements in deep learning and computer vision indicate that generic descriptors extracted using convolutional neural networks (CNNs) are able to represent complex image characteristics. Given the superior performance of CNNs on the ImageNet large scale visual recognition challenge and a large number of other computer vision tasks, in this paper, we explore the performance of state-of-the-art pre-trained CNNs on iris recognition. We show that the off-the-shelf CNN features, while originally trained for classifying generic objects, are also extremely good at representing iris images, effectively extracting discriminative visual features and achieving promising recognition results on two iris datasets: ND-CrossSensor-2013 and CASIA-Iris-Thousand. We also discuss the challenges and future research directions in leveraging deep learning methods for the problem of iris recognition.
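As a hedged illustration of the off-the-shelf CNN feature idea described above, the sketch below extracts an ImageNet-pretrained ResNet-50 embedding (a stand-in for the networks evaluated in the paper) from a normalized iris image and compares two images by cosine similarity. The preprocessing, network choice, and 224x224 input size are assumptions, not the authors' exact pipeline, and a recent torchvision is assumed.

```python
# Minimal sketch: off-the-shelf CNN features for iris matching (illustrative only).
# Assumes normalized iris regions as HxWx3 uint8 arrays; ResNet-50 stands in for the
# pre-trained networks evaluated in the paper.
import torch
import torchvision.models as models
import torchvision.transforms as T

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()          # drop the classifier, keep the 2048-D embedding
backbone.eval()

preprocess = T.Compose([
    T.ToTensor(),
    T.Resize((224, 224)),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def iris_embedding(img):
    """img: HxWx3 uint8 numpy array of a normalized (unrolled) iris region."""
    with torch.no_grad():
        x = preprocess(img).unsqueeze(0)
        return torch.nn.functional.normalize(backbone(x), dim=1).squeeze(0)

def match_score(img_a, img_b):
    # Cosine similarity between embeddings; a threshold on this score decides match/non-match.
    return float(iris_embedding(img_a) @ iris_embedding(img_b))
```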

291 citations


Journal ArticleDOI
TL;DR: This article surveys 100 different approaches that explore deep learning for recognizing individuals using various biometric modalities and discusses how deep learning methods can benefit the field of biometrics and the potential gaps that deep learning approaches need to address for real-world biometric applications.
Abstract: In the recent past, deep learning methods have demonstrated remarkable success for supervised learning tasks in multiple domains including computer vision, natural language processing, and speech processing. In this article, we investigate the impact of deep learning in the field of biometrics, given its success in other domains. Since biometrics deals with identifying people by using their characteristics, it primarily involves supervised learning and can leverage the success of deep learning in other related domains. In this article, we survey 100 different approaches that explore deep learning for recognizing individuals using various biometric modalities. We find that most deep learning research in biometrics has been focused on face and speaker recognition. Based on inferences from these approaches, we discuss how deep learning methods can benefit the field of biometrics and the potential gaps that deep learning approaches need to address for real-world biometric applications.

201 citations


Proceedings ArticleDOI
18 Jun 2018
TL;DR: In this article, the authors introduce CNN architectures for both binary and multi-way cross-modal face and audio matching, and compare dynamic and static testing with human testing as a baseline to calibrate the difficulty of the task.
Abstract: We introduce a seemingly impossible task: given only an audio clip of someone speaking, decide which of two face images is the speaker. In this paper we study this, and a number of related cross-modal tasks, aimed at answering the question: how much can we infer from the voice about the face and vice versa? We study this task "in the wild", employing the datasets that are now publicly available for face recognition from static images (VGGFace) and speaker identification from audio (VoxCeleb). These provide training and testing scenarios for both static and dynamic testing of cross-modal matching. We make the following contributions: (i) we introduce CNN architectures for both binary and multi-way cross-modal face and audio matching; (ii) we compare dynamic testing (where video information is available, but the audio is not from the same video) with static testing (where only a single still image is available); and (iii) we use human testing as a baseline to calibrate the difficulty of the task. We show that a CNN can indeed be trained to solve this task in both the static and dynamic scenarios, and is even well above chance on 10-way classification of the face given the voice. The CNN matches human performance on easy examples (e.g. different gender across faces) but exceeds human performance on more challenging examples (e.g. faces with the same gender, age and nationality).
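A hedged sketch of the binary (1-of-2) matching task described above: a small scoring head over concatenated voice and face embeddings picks the more likely speaker. The embedding extractors (VGGFace/VoxCeleb-style CNNs in the paper) are assumed to exist, and this head is illustrative rather than the paper's architecture.

```python
# Hedged sketch of 1-of-2 cross-modal matching: score each (voice, face) pair and pick the
# higher-scoring face. Embedding dimensions and the MLP head are placeholders.
import torch
import torch.nn as nn

class PairScorer(nn.Module):
    def __init__(self, dim_voice=512, dim_face=512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim_voice + dim_face, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, voice_emb, face_emb):
        return self.mlp(torch.cat([voice_emb, face_emb], dim=-1)).squeeze(-1)

def pick_face(scorer, voice_emb, face_a, face_b):
    # Returns 0 if face_a is judged to be the speaker, 1 otherwise.
    return int(scorer(voice_emb, face_b) > scorer(voice_emb, face_a))
```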

199 citations


Journal ArticleDOI
TL;DR: This paper proposes a new general framework for the evaluation of biometric templates’ unlinkability and applies it to assess the unlinkability of the four state-of-the-art techniques for biometric template protection: biometric salting, bloom filters, homomorphic encryption, and block re-mapping.
Abstract: The wide deployment of biometric recognition systems in the last two decades has raised privacy concerns regarding the storage and use of biometric data. As a consequence, the ISO/IEC 24745 international standard on biometric information protection has established two main requirements for protecting biometric templates: irreversibility and unlinkability. Numerous efforts have been directed to the development and analysis of irreversible templates. However, there is still no systematic quantitative manner to analyze the unlinkability of such templates. In this paper, we address this shortcoming by proposing a new general framework for the evaluation of biometric templates’ unlinkability. To illustrate the potential of the approach, it is applied to assess the unlinkability of the four state-of-the-art techniques for biometric template protection: biometric salting, bloom filters, homomorphic encryption, and block re-mapping. For the last technique, the proposed framework is compared with other existing metrics to show its advantages.
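The following is a hedged sketch of the score-distribution analysis such an unlinkability evaluation builds on: mated scores compare protected templates of the same subject generated with different keys, non-mated scores compare different subjects, and a likelihood-ratio-based local measure is aggregated into a global one. The exact constants and the prior weight omega should be taken from the paper; this is not a reference implementation.

```python
# Hedged sketch of an unlinkability analysis based on mated vs. non-mated linkage-score
# distributions. The local/global measures follow a likelihood-ratio formulation, but the
# precise definitions should be checked against the original framework.
import numpy as np

def unlinkability_measures(mated_scores, nonmated_scores, bins=100, omega=1.0):
    """mated: same subject, templates protected with different keys; non-mated: different subjects."""
    scores = np.concatenate([mated_scores, nonmated_scores])
    edges = np.linspace(scores.min(), scores.max(), bins + 1)
    p_m, _ = np.histogram(mated_scores, bins=edges, density=True)
    p_nm, _ = np.histogram(nonmated_scores, bins=edges, density=True)

    lr = p_m / np.maximum(p_nm, 1e-12)                      # likelihood ratio per score bin
    d_local = np.where(lr > 1, 2.0 * (omega * lr) / (1.0 + omega * lr) - 1.0, 0.0)

    width = edges[1] - edges[0]
    d_sys = float(np.sum(d_local * p_m) * width)            # global measure, roughly in [0, 1]
    return d_local, d_sys
```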

172 citations


Journal ArticleDOI
TL;DR: An efficient and real-time multimodal biometric system is proposed based on building deep learning representations for images of both the right and left irises of a person, and fusing the results obtained using a ranking-level fusion method.
Abstract: Multimodal biometric systems have been widely applied in many real-world applications due to their ability to deal with a number of significant limitations of unimodal biometric systems, including sensitivity to noise, population coverage, intra-class variability, non-universality, and vulnerability to spoofing. In this paper, an efficient and real-time multimodal biometric system is proposed based on building deep learning representations for images of both the right and left irises of a person, and fusing the results obtained using a ranking-level fusion method. The trained deep learning system proposed is called IrisConvNet, whose architecture is based on a combination of a Convolutional Neural Network (CNN) and a Softmax classifier to extract discriminative features from the input image (the localized iris region) without any domain knowledge and then classify it into one of N classes. In this work, a discriminative CNN training scheme based on a combination of the back-propagation algorithm and the mini-batch AdaGrad optimization method is proposed for weight updating and learning-rate adaptation, respectively. In addition, other training strategies (e.g., dropout method, data augmentation) are also proposed in order to evaluate different CNN architectures. The performance of the proposed system is tested on three public datasets collected under different conditions: the SDUMLA-HMT, CASIA-Iris-V3 Interval and IITD iris databases. The results obtained from the proposed system outperform other state-of-the-art approaches (e.g., Wavelet transform, Scattering transform, Local Binary Pattern and PCA) by achieving a Rank-1 identification rate of 100% on all the employed databases and a recognition time of less than one second per person.
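As a hedged sketch of the training setup described above (not the exact IrisConvNet architecture), the snippet below trains a small CNN with a softmax classifier over N identities using mini-batch AdaGrad, back-propagation, and dropout; the layer sizes and the 64x64 grayscale input are placeholders.

```python
# Hedged sketch: small CNN + softmax classifier trained with mini-batch AdaGrad and dropout.
import torch
import torch.nn as nn

class SmallIrisCNN(nn.Module):
    """Placeholder architecture (not the exact IrisConvNet) for 64x64 grayscale iris regions."""
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),                      # dropout regularization, as mentioned in the paper
            nn.Linear(64 * 16 * 16, 256), nn.ReLU(),
            nn.Linear(256, num_classes),          # softmax is implicit in CrossEntropyLoss below
        )

    def forward(self, x):
        return self.classifier(self.features(x))

def train(model, loader, epochs=10, lr=0.01):
    opt = torch.optim.Adagrad(model.parameters(), lr=lr)   # per-parameter learning-rate adaptation
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:                       # x: localized iris regions, y: identity labels (0..N-1)
            opt.zero_grad()
            loss_fn(model(x), y).backward()       # back-propagation
            opt.step()
```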

143 citations


Journal ArticleDOI
TL;DR: A ranking-based locality sensitive hashing inspired two-factor cancelable biometric scheme, dubbed "Index-of-Max" (IoM) hashing, is proposed for biometric template protection.
Abstract: In this paper, we propose a ranking-based locality sensitive hashing inspired two-factor cancelable biometric scheme, dubbed “Index-of-Max” (IoM) hashing, for biometric template protection. With externally generated random parameters, IoM hashing transforms a real-valued biometric feature vector into a discrete index (max-ranked) hashed code. We demonstrate two realizations of the IoM hashing notion, namely Gaussian Random Projection based and Uniformly Random Permutation based hashing schemes. The discrete index representation of IoM hashed codes enjoys several merits. Firstly, IoM hashing provides strong concealment of the biometric information, which gives solid ground for the non-invertibility guarantee. Secondly, IoM hashing is insensitive to the magnitude of features and hence is more robust against biometric feature variation. Thirdly, the magnitude-independence of IoM hashing makes the hash codes scale-invariant, which is critical for matching and feature alignment. The experimental results demonstrate favorable accuracy performance on the benchmark FVC2002 and FVC2004 fingerprint databases. The analyses justify its resilience to existing and newly introduced security and privacy attacks, and show that it satisfies the revocability and unlinkability criteria of cancelable biometrics.
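A minimal sketch of the Gaussian Random Projection realization of IoM hashing, under the assumption that each hash function projects the feature vector with its own key-seeded Gaussian matrix and keeps only the index of the maximum response; parameter names and the collision-based similarity are illustrative.

```python
# Minimal sketch of GRP-based IoM hashing: m hash functions, each keeping only the argmax
# index of q random projections, which makes the code magnitude-independent.
import numpy as np

def iom_grp_hash(feature, m=100, q=16, seed=0):
    """feature: 1-D real-valued biometric feature vector; returns m discrete indices."""
    rng = np.random.default_rng(seed)              # the seed plays the role of the user-specific key
    d = feature.shape[0]
    codes = np.empty(m, dtype=np.int64)
    for i in range(m):
        W = rng.standard_normal((q, d))            # q random Gaussian projections per hash function
        codes[i] = int(np.argmax(W @ feature))     # only the index of the maximum is stored
    return codes

def iom_similarity(codes_a, codes_b):
    # Fraction of hash functions whose max-indices collide (higher = more similar).
    return float(np.mean(codes_a == codes_b))
```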

132 citations


Journal ArticleDOI
TL;DR: A deep review and discussion of 93 state-of-the-art publications on their proposed methods, signal datasets, and publicly available ECG collections is conducted to present the fundamentals and the evolution of ECG biometrics, describe the current state of the art, and draw conclusions on prior art approaches and current challenges.
Abstract: Face and fingerprint are, currently, the most thoroughly explored biometric traits, promising reliable recognition in diverse applications. Commercial products using these traits for biometric identification or authentication are increasingly widespread, from smartphones to border control. However, increasingly smart techniques to counterfeit such traits raise the need for traits that are less vulnerable to stealthy trait measurement or spoofing attacks. This has sparked interest in the electrocardiogram (ECG), most commonly associated with medical diagnosis, whose hidden nature and inherent liveness information make it highly resistant to attacks. In recent years, the topic of ECG-based biometrics has quickly evolved toward commercial applications, mainly by addressing the reduced acceptability and comfort through new off-the-person, wearable, and seamless acquisition settings. Furthermore, researchers have recently started to address the issues of spoofing prevention and data security in ECG biometrics, as well as the potential of deep learning methodologies to enhance the recognition accuracy and robustness. In this paper, we conduct a deep review and discussion of 93 state-of-the-art publications on their proposed methods, signal datasets, and publicly available ECG collections. The extracted knowledge is used to present the fundamentals and the evolution of ECG biometrics, describe the current state of the art, and draw conclusions on prior art approaches and current challenges. With this paper, we aim to delve into the current opportunities as well as inspire and guide future research in ECG biometrics.

131 citations


Journal ArticleDOI
TL;DR: A novel technique based on the idea of best-feature selection is introduced in this article for an offline verification system, evaluated using three accuracy measures: FAR, FRR, and AER.

130 citations


Journal ArticleDOI
TL;DR: A deep feature fusion network that exploits the complementary information present in iris and periocular regions to enhance the performance of mobile identification, while requiring far less storage space and fewer computational resources than general CNNs.
Abstract: The quality of iris images on mobile devices is significantly degraded due to hardware limitations and less constrained environments. Traditional iris recognition methods cannot achieve a high identification rate using these low-quality images. To enhance the performance of mobile identification, we develop a deep feature fusion network that exploits the complementary information present in iris and periocular regions. The proposed method first applies maxout units in the convolutional neural networks (CNNs) to generate a compact representation for each modality and then fuses the discriminative features of the two modalities through a weighted concatenation. The parameters of the convolutional filters and the fusion weights are simultaneously learned to optimize the joint representation of iris and periocular biometrics. To promote iris recognition research on mobile devices under near-infrared (NIR) illumination, we publicly release the CASIA-Iris-Mobile-V1.0 database, which includes a total of 11,000 NIR iris images of both eyes from 630 Asian subjects. It is the largest NIR mobile iris database as far as we know. On the newly built CASIA-Iris-M1-S3 data set, the proposed method achieves a 0.60% equal error rate and a 2.32% false non-match rate at a false match rate of 10^-5, which are clearly better than unimodal biometrics as well as traditional fusion methods. Moreover, the proposed model requires far less storage space and fewer computational resources than general CNNs.
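A hedged sketch of the fusion step described above: unit-normalized iris and periocular embeddings are concatenated with per-modality weights that can be learned jointly with the networks. The maxout CNN feature extractors themselves are assumed to exist and are not shown.

```python
# Hedged sketch of weighted-concatenation fusion of iris and periocular embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedConcatFusion(nn.Module):
    def __init__(self):
        super().__init__()
        # One scalar weight per modality, learned jointly with the embedding networks.
        self.w_iris = nn.Parameter(torch.tensor(1.0))
        self.w_peri = nn.Parameter(torch.tensor(1.0))

    def forward(self, f_iris, f_peri):
        f_iris = F.normalize(f_iris, dim=-1)
        f_peri = F.normalize(f_peri, dim=-1)
        return torch.cat([self.w_iris * f_iris, self.w_peri * f_peri], dim=-1)
```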

125 citations


Book
17 Mar 2018
TL;DR: The effects of database size on both identification and verification performance of biometric algorithms, including face recognition, are examined.
Abstract: Two critical performance characterizations of biometric algorithms, including face recognition, are identification and verification. Identification performance of face recognition algorithms on the FERET tests has been previously reported. We report on verification performance obtained from the Sept96 FERET test. The databases used to develop and test algorithms are usually smaller than the databases that will be encountered in applications. We examine the effects of size of the database on performance for both identification and verification.

116 citations


Journal ArticleDOI
01 Apr 2018
TL;DR: A novel finger vein recognition algorithm using a secure biometric template scheme based on deep learning and random projections, named FVR-DLRP, which maintains the accuracy of biometric identification while enhancing the uncertainty of the transformation, providing better protection for biometric authentication.
Abstract: Leakage of unprotected biometric authentication data has become a high-risk threat for many applications. Many researchers are investigating and designing novel authentication schemes to prevent such attacks. However, the biggest challenge is how to protect biometric data while keeping the practical performance of identity verification systems. To tackle this problem, this paper presents a novel finger vein recognition algorithm, named FVR-DLRP, that uses a secure biometric template scheme based on deep learning and random projections. FVR-DLRP preserves the core biometric information even if the user’s password is cracked, while the original biometric information remains safe. The experimental results show that the algorithm FVR-DLRP can maintain the accuracy of biometric identification while enhancing the uncertainty of the transformation, which provides better protection for biometric authentication.
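The random-projection half of such a scheme might look like the hedged sketch below: a deep feature vector is projected with a matrix derived from the user's password, giving a revocable template. The deep feature extractor and the exact projection design used in FVR-DLRP are assumptions here.

```python
# Hedged sketch: password-seeded random projection of a deep feature vector into a
# revocable (cancelable) template. Dimensions and key derivation are illustrative only.
import hashlib
import numpy as np

def protected_template(deep_feature, password, out_dim=128):
    # Derive a reproducible seed from the password, then build a random projection matrix.
    seed = int.from_bytes(hashlib.sha256(password.encode()).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((out_dim, deep_feature.shape[0]))
    t = R @ deep_feature
    return t / np.linalg.norm(t)        # stored template; re-issued by changing the password
```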

Journal ArticleDOI
TL;DR: Recent trends and developments in multiple classifier systems (MCS) arising from multimodal biometrics that incorporate context information in an adaptive way are presented, and the methods are described in a general way so they can be applied to other information fusion problems as well.

Journal ArticleDOI
TL;DR: A methodology for the estimation of the main parameters of such schemes, based on a statistical analysis of the unprotected templates, is presented, and the soundness of the estimation methodologies is confirmed for face, iris, fingerprint, and finger vein over two different sets of publicly available databases.

Journal ArticleDOI
TL;DR: Three types of sensors are combined for gait data collection and gait recognition, which can be used for important identification applications, such as identity recognition to access a restricted building or area.
Abstract: Gait has been considered as a promising and unique biometric for person identification. Traditionally, gait data are collected using either color sensors, such as a CCD camera, depth sensors, such as a Microsoft Kinect, or inertial sensors, such as an accelerometer. However, a single type of sensors may only capture part of the dynamic gait features and make the gait recognition sensitive to complex covariate conditions, leading to fragile gait-based person identification systems. In this paper, we propose to combine all three types of sensors for gait data collection and gait recognition, which can be used for important identification applications, such as identity recognition to access a restricted building or area. We propose two new algorithms, namely EigenGait and TrajGait, to extract gait features from the inertial data and the RGBD (color and depth) data, respectively. Specifically, EigenGait extracts general gait dynamics from the accelerometer readings in the eigenspace and TrajGait extracts more detailed subdynamics by analyzing 3-D dense trajectories. Finally, both extracted features are fed into a supervised classifier for gait recognition and person identification. Experiments on 50 subjects, with comparisons to several other state-of-the-art gait-recognition approaches, show that the proposed approach can achieve higher recognition accuracy and robustness.
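A hedged sketch of the EigenGait idea (general gait dynamics in an eigenspace learned from accelerometer readings); the window length and number of components are placeholders, and TrajGait's 3-D dense trajectories are not shown.

```python
# Hedged sketch: PCA (eigenspace) features from accelerometer windows, EigenGait-style.
import numpy as np
from sklearn.decomposition import PCA

def accel_windows(signal, win=128, step=64):
    """signal: (T, 3) accelerometer stream -> (n_windows, win*3) flattened windows."""
    return np.array([signal[s:s + win].ravel()
                     for s in range(0, len(signal) - win + 1, step)])

def fit_eigengait(train_signals, n_components=20):
    X = np.vstack([accel_windows(s) for s in train_signals])
    return PCA(n_components=n_components).fit(X)

def eigengait_features(pca, signal):
    # Mean projection over windows gives one descriptor per walking sequence,
    # which is then fed to a supervised classifier for identification.
    return pca.transform(accel_windows(signal)).mean(axis=0)
```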

Journal ArticleDOI
TL;DR: An overview of steganography techniques applied in the protection of biometric data in fingerprints is presented, and the strengths and weaknesses of targeted and blind steganalysis strategies for breaking steganography techniques are discussed.
Abstract: Identification of persons by way of biometric features is an emerging phenomenon. Over the years, biometric recognition has received much attention due to its need for security. Amongst the many existing biometrics, fingerprints are considered to be one of the most practical ones. Techniques such as watermarking and steganography have been used in an attempt to improve the security of biometric data. Watermarking is the process of embedding information into a carrier file for the protection of ownership/copyright of music, video or image files, whilst steganography is the art of hiding information. This paper presents an overview of steganography techniques applied in the protection of biometric data in fingerprints. It is novel in that we also discuss the strengths and weaknesses of targeted and blind steganalysis strategies for breaking steganography techniques.
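To make the idea of hiding biometric data inside a carrier image concrete, here is a textbook least-significant-bit (LSB) embedding sketch; it is not one of the surveyed schemes and offers no robustness against the steganalysis strategies discussed in the paper.

```python
# Textbook LSB embedding: hide a bitstream (e.g., a serialized fingerprint template) in the
# least significant bits of a cover image. Illustrative only.
import numpy as np

def lsb_embed(cover, payload_bits):
    """cover: uint8 image array; payload_bits: iterable of 0/1."""
    flat = cover.flatten().copy()
    bits = np.asarray(list(payload_bits), dtype=np.uint8)
    assert bits.size <= flat.size, "payload too large for this cover image"
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite the least significant bit
    return flat.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    return stego.flatten()[:n_bits] & 1
```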

Journal ArticleDOI
TL;DR: Robust thin-plate spline (RTPS) is developed to more accurately model elastic fingerprint deformations using splines and RTPS-based generalized fingerprint deformation correction model (DCM) is proposed, which results in accurate alignment of key minutiae features observed on the contactless and contact-based fingerprints.
Abstract: Vast databases of billions of contact-based fingerprints have been developed to protect national borders and support e-governance programs. Emerging contactless fingerprint sensors offer better hygiene, security, and accuracy. However, the adoption/success of such contactless fingerprint technologies largely depends on advanced capability to match contactless 2D fingerprints with legacy contact-based fingerprint databases. This paper investigates this problem and develops a new approach to accurately match such fingerprint images. A robust thin-plate spline (RTPS) is developed to more accurately model elastic fingerprint deformations using splines. In order to correct such deformations on the contact-based fingerprints, an RTPS-based generalized fingerprint deformation correction model (DCM) is proposed. The use of the DCM results in accurate alignment of key minutiae features observed on the contactless and contact-based fingerprints. Further improvement in such cross-matching performance is investigated by incorporating minutiae-related ridges. We also develop a new database of 1800 contactless 2D fingerprints and the corresponding contact-based fingerprints acquired from 300 clients, which is made publicly accessible for further research. The experimental results presented in this paper, using two publicly available databases, validate our approach and achieve superior results for matching contactless 2D and contact-based fingerprint images.
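A hedged sketch of plain thin-plate spline (TPS) deformation correction using SciPy's thin-plate RBF interpolator: corresponding minutiae pairs define a warp that maps contact-based coordinates toward the contactless ones. The robust TPS variant and the generalized DCM proposed in the paper add outlier handling that is not reproduced here.

```python
# Hedged sketch: standard TPS warp between corresponding minutiae sets (not the robust RTPS).
import numpy as np
from scipy.interpolate import RBFInterpolator

def fit_tps_warp(src_pts, dst_pts):
    """src_pts, dst_pts: (n, 2) arrays of corresponding minutiae coordinates."""
    return RBFInterpolator(src_pts, dst_pts, kernel="thin_plate_spline")

# Usage:
#   warp = fit_tps_warp(contact_minutiae, contactless_minutiae)
#   corrected = warp(contact_minutiae)   # deformation-corrected minutiae positions
```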

Journal ArticleDOI
15 Jul 2018-Sensors
TL;DR: Finger-vein and finger shape multimodal biometrics using near-infrared (NIR) light camera sensor based on a deep convolutional neural network (CNN) are proposed in this research.
Abstract: Finger-vein recognition, one of the conventional biometrics, resists fake attacks, is cheaper, and offers a higher level of user convenience than other biometrics because it uses miniaturized devices. However, the recognition performance of finger-vein recognition methods may decrease due to a variety of factors, such as image misalignment caused by finger position changes during image acquisition or illumination variation caused by non-uniform near-infrared (NIR) light. To solve such problems, multimodal biometric systems that are able to simultaneously recognize both finger-veins and fingerprints have been researched. However, because the image-acquisition positions for finger-veins and fingerprints differ, and finger-vein images must be acquired under NIR light whereas fingerprints require visible light, either two sensors must be used or the size of the image acquisition device must be enlarged. Hence, multimodal biometrics based on finger-veins and finger shape have been proposed. However, such methods recognize individuals based on handcrafted features, which presents certain limitations in terms of performance improvement. To solve these problems, finger-vein and finger shape multimodal biometrics using an NIR light camera sensor and a deep convolutional neural network (CNN) are proposed in this research. Experimental results obtained using two types of open databases, the Shandong University homologous multi-modal traits (SDUMLA-HMT) and the Hong Kong Polytechnic University Finger Image Database (version 1), revealed that the proposed method features superior performance to conventional methods.

Journal ArticleDOI
TL;DR: This paper proposes the first method in the literature able to extract the coordinates of the pores from touch-based, touchless, and latent fingerprint images, and uses specifically designed and trained Convolutional Neural Networks to estimate and refine the centroid of each pore.

Journal ArticleDOI
01 Feb 2018
TL;DR: Results show that biometrics authentication significantly influences the individual's security concern, perceived usefulness, and trust of the online store, as well as the willingness to continue using the website account associated with the payment authentication method.
Abstract: Biometrics authentication for electronic payment is generally viewed as a quicker, more convenient, and more secure means to identify and authenticate users for online payment. This view is mostly anecdotal and conceptual in nature. The aim of the paper is to shed light on the comparison of perceptions and beliefs of different authentication methods for electronic payment (i.e., credit card, credit card with PIN, and fingerprint biometrics authentication) in an e-commerce context. As a theoretical foundation, the valence framework is used to understand and explain the individual's evaluation of benefit and risk concerning the payment methods. We propose a research model with hypotheses that evaluate and compare the individual's perceptions of the payment authentication methods, trust of the online store, and the willingness to continue using the website account associated with the payment authentication method. An experiment is used to test the hypotheses. The results show that biometrics authentication significantly influences the individual's security concern, perceived usefulness, and trust of the online store. Theoretically, through the study's context (biometrics versus credit card authentication), evidence is provided for the importance of the individual's perceptions, concerns, and beliefs in the use of biometrics for electronic payments. Managerial implications include shedding light on the perceptions and concerns of secure authentication and the need for implementing biometrics authentication for electronic payments.

Journal ArticleDOI
TL;DR: A comprehensive survey of state-of-the-art super-resolution approaches for face (2D+3D), iris, fingerprint, and gait recognition can be found in this paper.

Journal ArticleDOI
TL;DR: Comprehensive analysis over a number of subjects, setups, and analysis features demonstrates the feasibility of the proposed ear-EEG biometrics, and its potential in resolving the critical collectability, robustness, and reproducibility issues associated with current EEG biometric systems.
Abstract: The use of electroencephalogram (EEG) as a biometrics modality has been investigated for about a decade; however, its feasibility in real-world applications is not yet conclusively established, mainly due to the issues with collectability and reproducibility. To this end, we propose a readily deployable EEG biometrics system based on a “one-fits-all” viscoelastic generic in-ear EEG sensor (collectability), which does not require skilled assistance or cumbersome preparation. Unlike most existing studies, we consider data recorded over multiple recording days and for multiple subjects (reproducibility) while, for rigour, the training and test segments are not taken from the same recording days. A robust approach is considered based on the resting state with eyes closed paradigm, the use of both parametric (autoregressive model) and non-parametric (spectral) features, and supported by simple and fast cosine distance, linear discriminant analysis, and support vector machine classifiers. Both the verification and identification forensics scenarios are considered and the achieved results are on par with the studies based on impractical on-scalp recordings. Comprehensive analysis over a number of subjects, setups, and analysis features demonstrates the feasibility of the proposed ear-EEG biometrics, and its potential in resolving the critical collectability, robustness, and reproducibility issues associated with current EEG biometrics.
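A hedged sketch of the feature pipeline described above: autoregressive (Yule-Walker) coefficients and Welch spectral features are extracted from a resting-state ear-EEG segment and compared with a cosine distance; the model order, segment length, band limits, and sampling rate are placeholders.

```python
# Hedged sketch: parametric (AR) + non-parametric (Welch PSD) features with cosine distance.
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import welch

def ar_coeffs(x, order=6):
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order] / len(x)
    return solve_toeplitz((r[:-1], r[:-1]), r[1:])       # Yule-Walker estimate of AR coefficients

def eeg_features(segment, fs=250, order=6):
    f, psd = welch(segment, fs=fs, nperseg=fs)           # spectral features at ~1 Hz resolution
    band = psd[(f >= 1) & (f <= 40)]                     # keep the 1-40 Hz band
    feat = np.concatenate([ar_coeffs(segment, order), np.log(band + 1e-12)])
    return feat / np.linalg.norm(feat)

def cosine_distance(a, b):
    return 1.0 - float(a @ b)                            # features are unit-normalized above
```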

Journal ArticleDOI
TL;DR: This survey first presents biometric demographic analysis from the standpoint of human perception, then provides a comprehensive overview of state-of-the-art advances in automated estimation from both academia and industry.
Abstract: Biometrics is the technique of automatically recognizing individuals based on their biological or behavioral characteristics. Various biometric traits have been introduced and widely investigated, including fingerprint, iris, face, voice, palmprint, gait and so forth. Apart from identity, biometric data may convey various other personal information, covering affect, age, gender, race, accent, handedness, height, weight, etc. Among these, analysis of demographics (age, gender, and race) has received tremendous attention owing to its wide real-world applications, with significant efforts devoted and great progress achieved. This survey first presents biometric demographic analysis from the standpoint of human perception, then provides a comprehensive overview of state-of-the-art advances in automated estimation from both academia and industry. Despite these advances, a number of challenging issues continue to inhibit its full potential. We second discuss these open problems, and finally provide an outlook into the future of this very active field of research by sharing some promising opportunities.

Posted Content
TL;DR: This paper provides a comprehensive and up-to-date literature review of popular face recognition methods including both traditional (geometry-based, holistic, feature-based and hybrid methods) and deep learning methods.
Abstract: Starting in the seventies, face recognition has become one of the most researched topics in computer vision and biometrics. Traditional methods based on hand-crafted features and traditional machine learning techniques have recently been superseded by deep neural networks trained with very large datasets. In this paper we provide a comprehensive and up-to-date literature review of popular face recognition methods including both traditional (geometry-based, holistic, feature-based and hybrid methods) and deep learning methods.

Journal ArticleDOI
TL;DR: This article reviews the various systems proposed over the past few years with a focus on the shortcomings that have prevented wide-scale implementation, including issues pertaining to temporal stability, psychological and physiological changes, protocol design, equipment and performance evaluation.
Abstract: The emergence of the digital world has greatly increased the number of accounts and passwords that users must remember. It has also increased the need for secure access to personal information in the cloud. Biometrics is one approach to person recognition, which can be used in identification as well as authentication. Among the various modalities that have been developed, electroencephalography (EEG)-based biometrics features unparalleled universality, distinctiveness and collectability, while minimizing the risk of circumvention. However, commercializing EEG-based person recognition poses a number of challenges. This article reviews the various systems proposed over the past few years with a focus on the shortcomings that have prevented wide-scale implementation, including issues pertaining to temporal stability, psychological and physiological changes, protocol design, equipment and performance evaluation. We also examine several directions for the further development of usable EEG-based recognition systems as well as the niche markets to which they could be applied. It is expected that rapid advancements in EEG instrumentation, on-device processing and machine learning techniques will lead to the emergence of commercialized person recognition systems in the near future.

Proceedings ArticleDOI
24 Apr 2018
TL;DR: Automated morph detection algorithms based on general purpose pattern recognition algorithms are benchmarked for two scenarios relevant in the context of fraud detection for electronic travel documents, i.e. single image (no-reference) and image pair (differential) morph detection.
Abstract: The vulnerability of face recognition systems to attacks based on morphed biometric samples has been established in the recent past. Such attacks pose a severe security threat to a biometric recognition system, in particular within the widely deployed border control applications. However, so far, reliable detection of morphed images has remained an unsolved research challenge. In this work, automated morph detection algorithms based on general purpose pattern recognition algorithms are benchmarked for two scenarios relevant in the context of fraud detection for electronic travel documents, i.e. single image (no-reference) and image pair (differential) morph detection. In the latter scenario, a trusted live capture from an authentication attempt serves as an additional source of information and, hence, the difference between features obtained from this face image and a potential morph can be estimated. A dataset of 2,206 ICAO-compliant bona fide face images of the FRGCv2 face database is used to automatically generate 4,808 morphs. It is shown that in a differential scenario, morph detectors that utilize score-level fusion of detection scores obtained from a single image and differences between image pairs generally outperform no-reference morph detectors with regard to the employed algorithms and used parameters. On average, a relative improvement of more than 25% in terms of detection equal error rate is achieved.
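A hedged sketch of the differential, score-level fusion idea: a no-reference detector scores the suspected document image alone, a differential detector also uses the trusted live capture, and the two scores are fused before thresholding. The detectors and the fusion weight are assumptions to be tuned on a development set, not the benchmarked algorithms themselves.

```python
# Hedged sketch: score-level fusion of no-reference and differential morph-detection scores.
import numpy as np

def fused_morph_score(doc_image, live_image, single_detector, diff_detector, alpha=0.5):
    s_single = single_detector(doc_image)                 # no-reference morph-detection score
    s_diff = diff_detector(doc_image, live_image)         # differential score using the live capture
    return alpha * s_single + (1.0 - alpha) * s_diff      # higher fused score -> more likely a morph

def decide(scores, threshold):
    return np.asarray(scores) >= threshold                # threshold chosen e.g. at the detection EER
```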

Posted Content
TL;DR: A robust general framework for arbitrary biometric matching scenarios, free of the limitations of alignment and input size, is proposed and demonstrated by experiments on three person re-identification datasets, two partial person datasets and two partial face datasets.
Abstract: Biometric recognition on partially captured targets is challenging, where only several partial observations of objects are available for matching. In this area, deep learning based methods are widely applied to match partially captured objects caused by occlusions, variations of posture, or parts being out of view in person re-identification and partial face recognition. However, most current methods are not able to identify an individual when some parts of the object are not obtainable, while the rest are specialized to certain constrained scenarios. To this end, we propose a robust general framework for arbitrary biometric matching scenarios without the limitations of alignment or input size. We introduce a feature post-processing step to handle the feature maps from an FCN and a dictionary learning based Spatial Feature Reconstruction (SFR) to match different sized feature maps in this work. Moreover, the batch hard triplet loss function is applied to optimize the model. The applicability and effectiveness of the proposed method are demonstrated by the results from experiments on three person re-identification datasets (Market1501, CUHK03, DukeMTMC-reID), two partial person datasets (Partial REID and Partial iLIDS) and two partial face datasets (CASIA-NIR-Distance and Partial LFW), on which state-of-the-art performance is achieved in comparison with several existing approaches. The code is released online and can be found on the website: this https URL.

Journal ArticleDOI
TL;DR: This paper proposes an efficient matching algorithm that is based on a secondary calculation of the Fisher vector and uses three biometric modalities (face, fingerprint, and finger vein), and shows that the designed framework can achieve an excellent recognition rate and provide higher security than a unimodal biometric-based system.
Abstract: Biometric systems have been actively emerging in various industries in the past few years and continue to provide higher-security features for access control systems. Many types of unimodal biometric systems have been developed. However, these systems are only capable of providing low- to mid-range security features. Thus, for higher-security features, the combination of two or more unimodal biometrics (multiple modalities) is required. In this paper, we propose a multimodal biometric system for person recognition using face, fingerprint, and finger vein images. Addressing this problem, we propose an efficient matching algorithm that is based on a secondary calculation of the Fisher vector and uses three biometric modalities: face, fingerprint, and finger vein. The three modalities are combined, and fusion is performed at the feature level. Furthermore, building on the feature-fusion method, the paper studies the fake features that appear in practical scenes. Liveness detection is added to the system: it determines whether an image is real or fake based on the DCT, then removes fake images to reduce their influence on the accuracy rate and to increase the robustness of the system. The experimental results showed that the designed framework can achieve an excellent recognition rate and provide higher security than a unimodal biometric-based system, which is very important for an IoMT platform.
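Two pieces of such a pipeline, sketched under assumptions: feature-level fusion by concatenating per-modality descriptors, and a naive DCT-based liveness cue comparing low- and high-frequency energy. The paper's secondary Fisher vector calculation and its exact DCT decision rule are more involved; the cutoff here is a placeholder.

```python
# Hedged sketch: (i) feature-level fusion by concatenation, (ii) a naive DCT liveness cue.
import numpy as np
from scipy.fft import dctn

def fuse_features(face_fv, fingerprint_fv, fingervein_fv):
    fused = np.concatenate([face_fv, fingerprint_fv, fingervein_fv])
    return fused / (np.linalg.norm(fused) + 1e-12)        # feature-level fusion

def dct_liveness_score(gray_image, cutoff=16):
    coeffs = np.abs(dctn(gray_image.astype(float), norm="ortho"))
    low = coeffs[:cutoff, :cutoff].sum()                  # low-frequency energy
    total = coeffs.sum() + 1e-12
    # Recaptured/printed fakes tend to lose high-frequency detail, so a low score hints at a fake.
    return 1.0 - low / total
```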

Journal ArticleDOI
TL;DR: A new biometric authentication system for human identification that uses ECG signals as a biometric trait and integrates a generalized S-transformation and a convolutional neural network (CNN) is proposed.

Journal ArticleDOI
TL;DR: This paper proposes a user-centric biometric authentication scheme (PassBio) that enables end-users to encrypt their own templates with the proposed lightweight encryption scheme, and shows that TPE can be utilized as a flexible building block to evaluate different distance metrics, such as Hamming distance and Euclidean distance, over encrypted data.
Abstract: The proliferation of online biometric authentication has raised security requirements for biometric templates. The existing secure biometric authentication schemes feature a server-centric model, where a service provider maintains a biometric database and is fully responsible for the security of the templates. The end-users have to fully trust the server in storing, processing, and managing their private templates. As a result, the end-users’ templates could be compromised by outside attackers or even the service provider itself. In this paper, we propose a user-centric biometric authentication scheme (PassBio) that enables end-users to encrypt their own templates with our proposed lightweight encryption scheme. During authentication, all the templates remain encrypted such that the server will never see them directly. However, the server is able to determine whether the distance of two encrypted templates is within a pre-defined threshold. Our security analysis shows that no critical information of the templates can be revealed under both passive and active attacks. PassBio follows a “compute-then-compare” computational model over encrypted data. More specifically, our proposed threshold predicate encryption (TPE) scheme can encrypt two vectors x and y in such a manner that the inner product of x and y can be evaluated and compared to a pre-defined threshold. TPE guarantees that only the comparison result is revealed and no key information about x and y can be learned. Furthermore, we show that TPE can be utilized as a flexible building block to evaluate different distance metrics, such as Hamming distance and Euclidean distance, over encrypted data. Such a compute-then-compare computational model, enabled by TPE, can be widely applied in many interesting applications, such as searching over encrypted data while ensuring data security and privacy.
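Below is a plaintext mock of the compute-then-compare interface that TPE enables, i.e., the verifier learns only whether the Hamming distance (expressed through an inner product) is within a threshold. No encryption is performed in this sketch; in the actual scheme both templates are TPE ciphertexts and the evaluation happens over encrypted data.

```python
# Plaintext mock of the compute-then-compare model: only a one-bit comparison result is exposed.
import numpy as np

def within_hamming_threshold(template_x, template_y, threshold):
    """template_x, template_y: binary vectors (0/1). Returns only the comparison result."""
    x = np.asarray(template_x, dtype=np.int64)
    y = np.asarray(template_y, dtype=np.int64)
    # Hamming distance via an inner product, the operation TPE evaluates under encryption:
    # d_H(x, y) = sum(x) + sum(y) - 2 * <x, y>
    hamming = int(x.sum() + y.sum() - 2 * (x @ y))
    return hamming <= threshold
```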

Journal ArticleDOI
TL;DR: This paper proposes a deep neural network (DNN) for representation learning to predict image quality using very limited knowledge for finger-vein biometrics and significantly outperforms existing approaches in terms of the impact on equal error-rate decrease.
Abstract: Finger-vein biometrics has been extensively investigated for personal authentication. One of the open issues in finger-vein verification is the lack of robustness against image-quality degradation. Spurious and missing features in poor-quality images may degrade the system’s performance. Despite recent advances in finger-vein quality assessment, current solutions depend on domain knowledge. In this paper, we propose a deep neural network (DNN) for representation learning to predict image quality using very limited knowledge. Driven by the primary target of biometric quality assessment, i.e., verification error minimization, we assume that low-quality images are falsely rejected in a verification system. Based on this assumption, the low- and high-quality images are labeled automatically. We then train a DNN on the resulting data set to predict the image quality. To further improve the DNN’s robustness, the finger-vein image is divided into various patches, on which a patch-based DNN is trained. The deepest layers associated with the patches form together a complementary and an over-complete representation. Subsequently, the quality of each patch from a testing image is estimated and the quality scores from the image patches are conjointly input to probabilistic support vector machines (P-SVM) to boost quality-assessment performance. To the best of our knowledge, this is the first proposed work of deep learning-based quality assessment, not only for finger-vein biometrics, but also for other biometrics in general. The experimental results on two public finger-vein databases show that the proposed scheme accurately identifies high- and low-quality images and significantly outperforms existing approaches in terms of the impact on equal error-rate decrease.