
Showing papers in "IET Biometrics in 2014"


Journal ArticleDOI
TL;DR: This study is a survey of presentation attack detection methods for fingerprints, both in terms of liveness detection and alteration detection.
Abstract: Nowadays, fingerprint biometrics is widely used in various applications, ranging from forensic investigations and migration control to access control for security-sensitive environments. Any biometric system is potentially vulnerable to a fake biometric characteristic, and spoofing of fingerprint systems is one of the most widely researched areas. State-of-the-art sensors can often be spoofed by an accurate imitation of the ridge/valley structure of a fingerprint. An individual may also try to avoid identification by altering his own fingerprint pattern. This study is a survey of presentation attack detection methods for fingerprints, both in terms of liveness detection and alteration detection.

183 citations


Journal ArticleDOI
TL;DR: A publicly available PHOTO-ATTACK database is introduced, and a new technique of counter-measure solely based on foreground/background motion correlation using Optical Flow that outperforms all other algorithms achieving nearly perfect scoring with an equal-error rate of 1.52% is proposed.
Abstract: Identity spoofing is a contender for high-security face recognition applications. With the advent of social media and globalised search, our face images and videos are widespread on the internet and can potentially be used to attack biometric systems without prior user consent. Yet, research to counter these threats is just in its infancy – we lack public standard databases, protocols to measure spoofing vulnerability and baseline methods to detect these attacks. The contributions of this work to the area are three-fold: firstly, we introduce a publicly available PHOTO-ATTACK database with associated protocols to measure the effectiveness of counter-measures. Based on the data available, we conduct a study of current state-of-the-art spoofing detection algorithms based on motion analysis, showing that they fail in the light of this new dataset. Lastly, we propose a new counter-measure technique based solely on foreground/background motion correlation using optical flow that outperforms all other algorithms, achieving nearly perfect scoring with an equal-error rate of 1.52% on the available test data. The source code leading to the reported results is made available for the replicability of the findings in this article.
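As a rough illustration of the motion-correlation idea described above (not the authors' released code), the following Python sketch correlates average dense optical-flow magnitude inside a face bounding box with that of the background over a video; the face box is assumed to come from a separate detector, and all parameter values are placeholders.

```python
import cv2
import numpy as np

def motion_correlation(frames, face_box):
    """Correlate foreground (face) and background motion energy.

    frames   : list of 8-bit grayscale frames (numpy arrays)
    face_box : (x, y, w, h) bounding box of the face, assumed to be
               provided by a separate face detector
    """
    x, y, w, h = face_box
    fg_energy, bg_energy = [], []
    for prev, curr in zip(frames[:-1], frames[1:]):
        # Dense optical flow (Farneback); parameters are typical defaults.
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag = np.linalg.norm(flow, axis=2)
        face = mag[y:y + h, x:x + w]
        bg = mag.copy()
        bg[y:y + h, x:x + w] = np.nan          # exclude the face region
        fg_energy.append(np.nanmean(face))
        bg_energy.append(np.nanmean(bg))
    # High correlation suggests face and background move together,
    # as expected for a hand-held photo attack; a live face moves
    # largely independently of the background.
    return np.corrcoef(fg_energy, bg_energy)[0, 1]
```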

153 citations


Journal ArticleDOI
TL;DR: The application of adaptive Bloom filters to binary iris biometric feature vectors achieves rotation-invariant cancellable templates maintaining biometric performance, a compression of templates down to 20-40% of original size and a reduction of bit-comparisons to less than 5% leading to a substantial speed-up of the biometric system in identification mode.
Abstract: In this study, the application of adaptive Bloom filters to binary iris biometric feature vectors, that is, iris-codes, is proposed. Bloom filters, which have been established as a powerful tool in various fields of computer science, are applied in order to transform iris-codes to a rotation-invariant feature representation. Properties of the proposed Bloom filter-based transform concurrently enable (i) biometric template protection, (ii) compression of biometric data and (iii) acceleration of biometric identification, whereas at the same time no significant degradation of biometric performance is observed. According to these fields of application, detailed investigations are presented. Experiments are conducted on the CASIA-v3 iris database for different feature extraction algorithms. Confirming the soundness of the proposed approach, the application of adaptive Bloom filters achieves rotation-invariant cancellable templates maintaining biometric performance, a compression of templates down to 20-40% of original size and a reduction of bit-comparisons to less than 5% leading to a substantial speed-up of the biometric system in identification mode.
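A minimal Python sketch of the Bloom-filter idea, assuming a binary iris-code arranged as a 2-D array of 0/1 integers; the block sizes and the dissimilarity measure below are illustrative choices, not necessarily those used in the paper.

```python
import numpy as np

def bloom_transform(iris_code, word_height=10, block_width=32):
    """Map a binary iris-code (rows x cols of 0/1 ints) to Bloom filters.

    Each block of `word_height` rows and `block_width` columns yields one
    Bloom filter of length 2**word_height; every column of the block is
    read as an integer and sets the corresponding filter bit.  Column
    order is discarded, which is what makes the representation tolerant
    to (small) rotations of the iris-code.
    """
    rows, cols = iris_code.shape
    filters = []
    for r in range(0, rows - word_height + 1, word_height):
        for c in range(0, cols - block_width + 1, block_width):
            block = iris_code[r:r + word_height, c:c + block_width]
            f = np.zeros(2 ** word_height, dtype=bool)
            for col in block.T:                       # one word per column
                idx = int("".join(map(str, col)), 2)
                f[idx] = True
            filters.append(f)
    return filters

def bloom_dissimilarity(fa, fb):
    """Average normalised Hamming-style distance between filter sets."""
    dists = [np.count_nonzero(a ^ b) / float(a.sum() + b.sum())
             for a, b in zip(fa, fb)]
    return np.mean(dists)
```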

115 citations


Journal ArticleDOI
TL;DR: It is found that one of the main causes of performance degradation on handheld devices is the absence of pen-up trajectory information (i.e. data acquired when the pen tip is not in contact with the writing surface).
Abstract: In this study, the effects of using handheld devices on the performance of automatic signature verification systems are studied. The authors compare the discriminative power of global and local signature features between mobile devices and pen tablets, which are the prevalent acquisition device in the research literature. Individual feature discriminant ratios and feature selection techniques are used for comparison. Experiments are conducted on standard signature benchmark databases (BioSecure database) and a state-of-the-art device (Samsung Galaxy Note). Results show a decrease in the feature discriminative power and a higher verification error rate on handheld devices. It is found that one of the main causes of performance degradation on handheld devices is the absence of pen-up trajectory information (i.e. data acquired when the pen tip is not in contact with the writing surface).
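The per-feature discriminative power compared between devices can be quantified with a Fisher-style discriminant ratio; the sketch below, with synthetic numbers, is one plausible form of such a ratio and is not taken from the paper.

```python
import numpy as np

def discriminant_ratio(genuine, impostor):
    """Fisher-style discriminant ratio of a single scalar feature.

    genuine, impostor : 1-D arrays of the feature measured on genuine and
    impostor comparisons.  Larger values mean the feature separates the
    two populations better.
    """
    num = (np.mean(genuine) - np.mean(impostor)) ** 2
    den = np.var(genuine) + np.var(impostor)
    return num / den if den > 0 else 0.0

# Example: the same feature captured on a pen tablet vs. a handheld
# device (synthetic numbers for illustration only).
rng = np.random.default_rng(0)
tablet_ratio = discriminant_ratio(rng.normal(1.0, 0.2, 500),
                                  rng.normal(0.0, 0.2, 500))
mobile_ratio = discriminant_ratio(rng.normal(0.6, 0.3, 500),
                                  rng.normal(0.0, 0.3, 500))
print(tablet_ratio, mobile_ratio)
```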

92 citations


Journal ArticleDOI
TL;DR: A new face image quality index (FQI) is proposed that combines multiple quality measures, and classifies a face image based on this index, and conducts statistical significance Z-tests that demonstrate the advantages of the proposed FQI in face recognition applications.
Abstract: The performance of an automated face recognition system can be significantly influenced by face image quality. Designing an effective image quality index is necessary in order to provide real-time feedback for reducing the number of poor-quality face images acquired during enrolment and authentication, thereby improving matching performance. In this study, the authors first evaluate techniques that can measure image quality factors such as contrast, brightness, sharpness, focus and illumination in the context of face recognition. Second, they determine whether using a combination of techniques for measuring each quality factor is more beneficial, in terms of face recognition performance, than using a single independent technique. Third, they propose a new face image quality index (FQI) that combines multiple quality measures, and classifies a face image based on this index. In the authors' studies, they evaluate the benefit of using FQI as an alternative index to independent measures. Finally, they conduct statistical significance Z-tests that demonstrate the advantages of the proposed FQI in face recognition applications.
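The idea of fusing several normalised quality measures into a single index can be sketched as follows; the individual measures, normalisations and weights are placeholders, not the published FQI definition.

```python
import cv2
import numpy as np

def face_quality_index(gray_face, weights=(1.0, 1.0, 1.0)):
    """Toy face-quality index: weighted combination of normalised
    contrast, brightness and sharpness scores.  The measures and weights
    here are illustrative placeholders, not the published FQI.
    """
    img = gray_face.astype(np.float64)
    contrast = img.std() / 128.0                        # spread of grey levels
    brightness = 1.0 - abs(img.mean() - 128.0) / 128.0  # best near mid-grey
    sharpness = cv2.Laplacian(img, cv2.CV_64F).var()
    sharpness = min(sharpness / 1000.0, 1.0)            # crude normalisation
    scores = np.array([contrast, brightness, sharpness])
    w = np.asarray(weights, dtype=np.float64)
    return float(np.dot(w, scores) / w.sum())
```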

74 citations


Journal ArticleDOI
TL;DR: It is found that the local texture patterns proposed in this study can be adapted to the vein description task for biometric recognition and that the LDP operator consistently outperforms the LBP operator in palm vein recognition.
Abstract: Biometric recognition using the palm vein characteristics is emerging as a touchless and spoof-resistant hand-based means to identify individuals or to verify their identity. One of the open challenges in this field is the creation of fast and modality-dependent feature extractors for recognition. This article investigates features using local texture description methods. The local binary pattern (LBP) operator as well as the local derivative pattern (LDP) operator and the fusion of the two are studied in order to create efficient descriptors for palm vein recognition by systematically adapting their parameters to fit palm vein structures. Results of experiments are reported on the CASIA multi-spectral palm print image database V1.0 (CASIA database). It is found that the local texture patterns proposed in this study can be adapted to the vein description task for biometric recognition and that the LDP operator consistently outperforms the LBP operator in palm vein recognition.
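A hedged sketch of the basic LBP descriptor (using scikit-image) applied to a palm-vein region of interest; the paper additionally studies the LDP operator and systematically tunes the parameters to vein structures, neither of which is reproduced here.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(vein_image, n_points=8, radius=2, grid=(4, 4)):
    """Spatially enhanced LBP descriptor for a palm-vein ROI.

    The image is divided into a grid of cells; a uniform-LBP histogram is
    computed per cell and the histograms are concatenated.  Parameter
    values are placeholders, not the ones tuned in the paper.
    """
    lbp = local_binary_pattern(vein_image, n_points, radius, method="uniform")
    n_bins = n_points + 2                      # uniform patterns + "other"
    h_cells, w_cells = grid
    cell_h = vein_image.shape[0] // h_cells
    cell_w = vein_image.shape[1] // w_cells
    hists = []
    for i in range(h_cells):
        for j in range(w_cells):
            cell = lbp[i * cell_h:(i + 1) * cell_h,
                       j * cell_w:(j + 1) * cell_w]
            hist, _ = np.histogram(cell, bins=n_bins, range=(0, n_bins),
                                   density=True)
            hists.append(hist)
    return np.concatenate(hists)
```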

61 citations


Journal ArticleDOI
TL;DR: This study examines what in Denmark may constitute evidence based on forensic anthropological gait analyses, in the sense of pointing to a match (or not) between a perpetrator and a suspect, based on video and photographic imagery.
Abstract: This study examines what in Denmark may constitute evidence based on forensic anthropological gait analyses, in the sense of pointing to a match (or not) between a perpetrator and a suspect, based on video and photographic imagery. Gait and anthropometric measures can be used when direct facial comparison is not possible because of perpetrators masking their faces. The nature of judicial and natural scientific forms of evidence is discussed, and rulings dealing with the admissibility of video footage and forensic evidence in general are given. Technical issues of video materials are discussed, and the study also discusses how such evidence may be presented, both in written statements and in court.

57 citations


Journal ArticleDOI
TL;DR: One of the big challenges of this research was to discover if the handwritten signature modality in mobile devices should be split into two different modalities, one for those cases when the signature is performed with a stylus, and another when the fingertip is used for signing.
Abstract: The utilisation of biometrics in mobile scenarios is increasing remarkably. At the same time, handwritten signature recognition is one of the modalities with the highest potential for use in applications where customers are accustomed to signing in traditional processes. However, several improvements have to be made in order to reach acceptable levels of performance, reliability and interoperability. The evaluation carried out in this study contributes multiple results obtained from 43 users signing 60 times, divided into three sessions, on eight different capture devices, six of them being mobile devices and the other two digitisers specially made for signing, used as a baseline. At each session, a total of 20 signatures per user is captured by each device, so that the evaluation reported here comprises a total of 20 640 signatures, stored in ISO/IEC 19794-7 format. The algorithm applied is DTW-based, specifically modified for mobile environments. The results analysed include interoperability, visual feedback and modality tests. One of the big challenges of this research was to discover whether the handwritten signature modality on mobile devices should be split into two different modalities: one for cases where the signature is performed with a stylus, and another where the fingertip is used for signing. Many relevant conclusions have been collected and, overall, multiple improvements have been achieved, contributing to future deployments of biometrics in mobile environments.
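Since the evaluated algorithm is DTW-based, a textbook dynamic-time-warping distance between two sampled signatures is sketched below; the mobile-specific modifications mentioned in the abstract are not included.

```python
import numpy as np

def dtw_distance(sig_a, sig_b):
    """Plain dynamic-time-warping distance between two signatures.

    sig_a, sig_b : arrays of shape (n, d) and (m, d), e.g. sequences of
    (x, y) or (x, y, pressure) samples.  Returns the accumulated cost of
    the optimal alignment (lower = more similar).
    """
    n, m = len(sig_a), len(sig_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(sig_a[i - 1] - sig_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]
```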

49 citations


Journal ArticleDOI
TL;DR: This study adopts the root mean square value, the nonlinear Lyapunov exponent and the correlation dimension to analyse ECG data, and uses a support vector machine (SVM) for classification, identifying the best feature combination and the most appropriate SVM kernel function.
Abstract: An electrocardiogram (ECG) records changes in the electric potential of cardiac cells using a non-invasive method. Previous studies have shown that each person's cardiac signal possesses unique characteristics. Thus, researchers have attempted to use ECG signals for personal identification. However, most studies verify results using ECG signals taken from databases that were obtained from subjects at rest. Therefore, the extraction and analysis of a subject's ECG typically occurs in the resting state. This study presents experiments that involve recording ECG information after the heart rate of the subjects was increased through exercise. This study adopts the root mean square value, the nonlinear Lyapunov exponent and the correlation dimension to analyse ECG data, and uses a support vector machine (SVM) for classification, identifying the best feature combination and the most appropriate SVM kernel function. Results show that the successful recognition rate exceeds 80% when using the nonlinear SVM with a polynomial kernel function. This study confirms the existence of unique ECG features in each person. Even under exercise conditions, chaos theory can be used to extract specific biological characteristics, confirming the feasibility of using ECG signals for biometric verification.
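A hedged sketch of the classification stage with scikit-learn: an SVM with a polynomial kernel trained on per-segment feature vectors (RMS plus pre-computed nonlinear features). The feature values below are random placeholders, and the Lyapunov-exponent and correlation-dimension estimators are assumed to be computed elsewhere.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def rms(segment):
    """Root-mean-square value of an ECG segment."""
    return np.sqrt(np.mean(np.square(segment)))

# feature_matrix: one row per ECG segment with columns such as
# [RMS, largest Lyapunov exponent, correlation dimension];
# labels: subject identities.  Both are random placeholders here.
rng = np.random.default_rng(0)
feature_matrix = rng.normal(size=(200, 3))
labels = rng.integers(0, 10, size=200)

clf = make_pipeline(StandardScaler(),
                    SVC(kernel="poly", degree=3, C=1.0))
clf.fit(feature_matrix, labels)
print(clf.score(feature_matrix, labels))   # training accuracy only
```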

46 citations


Journal ArticleDOI
TL;DR: BGM's matching performance is competitive with state-of-the-art algorithms on the databases despite using small and concise templates and the size of the common subgraph of a pair of graphs is the most significant discriminating measure between genuine and imposter comparisons.
Abstract: This study proposes an automatic dorsal hand vein verification system using a novel algorithm called biometric graph matching (BGM). The dorsal hand vein image is segmented using the K-means technique and the region of interest is extracted based on the morphological analysis operators and normalised using adaptive histogram equalisation. Veins are extracted using a maximum curvature algorithm. The locations and vascular connections between crossovers, bifurcations and terminations in a hand vein pattern define a hand vein graph. The matching performance of BGM for hand vein graphs is tested with two cost functions and compared with the matching performance of two standard point patterns matching algorithms, iterative closest point (ICP) and modified Hausdorff distance. Experiments are conducted on two public databases captured using far infrared and near infrared (NIR) cameras. BGM's matching performance is competitive with state-of-the-art algorithms on the databases despite using small and concise templates. For both databases, BGM performed at least as well as ICP. For the small sized graphs from the NIR database, BGM significantly outperformed point pattern matching. The size of the common subgraph of a pair of graphs is the most significant discriminating measure between genuine and imposter comparisons.
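One of the baselines compared against BGM is the modified Hausdorff distance between feature-point sets; a minimal version is sketched below.

```python
import numpy as np
from scipy.spatial.distance import cdist

def modified_hausdorff(points_a, points_b):
    """Modified Hausdorff distance between two 2-D point sets.

    points_a, points_b : arrays of shape (n, 2) and (m, 2), e.g. the
    crossovers, bifurcations and terminations of two vein patterns.
    """
    d = cdist(points_a, points_b)            # pairwise Euclidean distances
    a_to_b = d.min(axis=1).mean()            # mean nearest-neighbour A -> B
    b_to_a = d.min(axis=0).mean()            # mean nearest-neighbour B -> A
    return max(a_to_b, b_to_a)
```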

44 citations


Journal ArticleDOI
TL;DR: This study presents a new online signature verification system based on fuzzy modelling of shape and dynamic features extracted from online signature data that is segmented at the points of geometric extrema followed by the feature extraction and fuzzy modelling of each segment thus obtained.
Abstract: This study presents a new online signature verification system based on fuzzy modelling of shape and dynamic features extracted from online signature data. Instead of extracting these features from the signature as a whole, the signature is segmented at the points of geometric extrema, followed by feature extraction and fuzzy modelling of each segment thus obtained. A minimum-distance alignment between two samples is made using the dynamic time warping technique, which provides a segment-to-segment correspondence. Fuzzy modelling of the extracted features is carried out in the next step. A user-dependent threshold is used to classify a test sample as either genuine or forged. The accuracy of the proposed system is evaluated using both skilled and random forgeries. For this, several experiments are carried out on two publicly available benchmark databases, SVC2004 and SUSIG. The experimental results obtained on these databases demonstrate the effectiveness of this system.

Journal ArticleDOI
TL;DR: The proposed chord moments coupled with the support vector machine classifier lead to a writer dependent off-line signature verification system that achieves state-of-the-art performance on the noisy Center of Excellence for Document Analysis and Recognition database.
Abstract: Signature is an important and useful behavioural biometric which exhibits significant amount of non-linear variability. In this study, the authors concentrate on finding an envelope shape feature known as ‘chord moments’. Central moments such as the variance, skewness and kurtosis along with the first moment (mean) are computed from sets of chord lengths and angles for each envelope reference point. The proposed chord moments adequately quantify the spatial inter-relationship among upper and lower envelope points. The moment-based approach significantly reduces the dimension of highly detailed chord sets and is experimentally found to be robust in handling non-linear variability from signature images. The proposed chord moments coupled with the support vector machine classifier lead to a writer dependent off-line signature verification system that achieves state-of-the-art performance on the noisy Center of Excellence for Document Analysis and Recognition database.
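A hedged sketch of how chord moments could be computed for one envelope reference point; the exact chord sets and envelope extraction used in the paper may differ.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def chord_moments(envelope_points, ref_index):
    """Mean, variance, skewness and kurtosis of chord lengths and angles
    from one envelope reference point to all other envelope points.

    envelope_points : array of shape (n, 2) with (x, y) envelope samples.
    Returns an 8-dimensional vector (4 moments for lengths, 4 for angles).
    """
    ref = envelope_points[ref_index]
    others = np.delete(envelope_points, ref_index, axis=0)
    vectors = others - ref
    lengths = np.hypot(vectors[:, 0], vectors[:, 1])
    angles = np.arctan2(vectors[:, 1], vectors[:, 0])
    feats = []
    for values in (lengths, angles):
        feats.extend([values.mean(), values.var(),
                      skew(values), kurtosis(values)])
    return np.array(feats)
```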

Journal ArticleDOI
TL;DR: This study shows that synthetic fingerprints generated by the current state of the art can easily be discriminated from real fingerprints, and proposes a method based on second-order extended minutiae histograms (MHs) which can distinguish between real and synthetic prints with very high accuracy.
Abstract: In this study we show that synthetic fingerprints generated by the current state of the art can easily be discriminated from real fingerprints. We propose a method based on second-order extended minutiae histograms (MHs) which can distinguish between real and synthetic prints with very high accuracy. MHs provide a fixed-length feature vector for a fingerprint which is invariant under rotation and translation. This ‘test of realness’ can be applied to synthetic fingerprints produced by any method. In this work, tests are conducted on the 12 publicly available databases of FVC2000, FVC2002 and FVC2004, which are well-established benchmarks for evaluating the performance of fingerprint recognition algorithms; 3 of these 12 databases consist of artificial fingerprints generated by the SFinGe software. Additionally, we evaluate the discriminative performance on a database of synthetic fingerprints generated by the software of Bicz versus real fingerprint images. We conclude with suggestions for the improvement of synthetic fingerprint generation.

Journal ArticleDOI
TL;DR: This study shows the contribution made by each set to the recognition performance and demonstrates the feasibility of achieving 100% correct recognition by combining the three sets, based on the experiments conducted using more than 2000 dorsal hand vein images.
Abstract: This paper presents a biometric identification system based on near-infrared imaging of dorsal hand veins and matching of the keypoints that are extracted from the dorsal hand vein images by the scale-invariant feature transform. The whole system is covered in detail, which includes the imaging device used, image processing methods proposed for geometric correction, region-of-interest extraction, image enhancement and vein pattern segmentation, as well as image classification by extraction and matching of keypoints. In addition to several constraints introduced to minimise incorrectly matched keypoints, a particular focus is placed on the use of multiple training images of each hand class to improve the recognition performance for a large database with more than 200 hand classes. By organising multiple keypoint sets extracted from multiple training images of each hand class into three sets, namely, the union, the intersection and the exclusion, based on their inter-class and intra-class relationships, this study shows the contribution made by each set to the recognition performance and demonstrates the feasibility of achieving 100% correct recognition by combining the three sets, based on the experiments conducted using more than 2000 dorsal hand vein images.
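A hedged sketch of the keypoint extraction and matching step with OpenCV's SIFT and Lowe's ratio test; the geometric correction, enhancement and the union/intersection/exclusion organisation of training keypoint sets described in the abstract are not reproduced.

```python
import cv2

def count_sift_matches(img_a, img_b, ratio=0.75):
    """Number of SIFT keypoint matches between two vein images that
    survive Lowe's ratio test; a simple proxy for the matching stage.
    """
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des_a, des_b, k=2)
    good = [m for m, n in (p for p in pairs if len(p) == 2)
            if m.distance < ratio * n.distance]
    return len(good)
```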

Journal ArticleDOI
TL;DR: The results obtained from the challenging mobile biometrics and surveillance camera face databases indicate that linearly calibrated face recognition scores are less misleading in their likelihood ratio interpretation than uncalibrated scores.
Abstract: An evaluation of the verification and calibration performance of a face recognition system based on inter-session variability modelling is presented. As an extension to calibration through linear transformation of scores, categorical calibration is introduced as a way to include additional information about images for calibration. The cost of likelihood ratio, which is a well-known measure in the speaker recognition field, is used as a calibration performance metric. The results obtained from the challenging mobile biometrics and surveillance camera face databases indicate that linearly calibrated face recognition scores are less misleading in their likelihood ratio interpretation than uncalibrated scores. In addition, the categorical calibration experiments show that calibration can be used not only to improve the likelihood ratio interpretation of scores, but also to improve the verification performance of a face recognition system.
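Linear calibration of scores can be sketched with logistic regression, whose logit (after removing the log prior odds of the training set) serves as a calibrated log-likelihood ratio; this is a generic sketch under that assumption, not the paper's implementation, and the scores below are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_linear_calibration(genuine_scores, impostor_scores):
    """Fit s -> a*s + b so that calibrated scores behave as log-likelihood
    ratios.  The logistic-regression logit equals LLR plus the log prior
    odds of the training set, so the prior-odds term is subtracted.
    """
    scores = np.concatenate([genuine_scores, impostor_scores]).reshape(-1, 1)
    labels = np.concatenate([np.ones(len(genuine_scores)),
                             np.zeros(len(impostor_scores))])
    lr = LogisticRegression()
    lr.fit(scores, labels)
    log_prior_odds = np.log(len(genuine_scores) / len(impostor_scores))
    a, b = lr.coef_[0, 0], lr.intercept_[0] - log_prior_odds
    return lambda s: a * np.asarray(s) + b     # calibrated LLR(s)

# Usage with synthetic scores (placeholders for real system output):
rng = np.random.default_rng(1)
to_llr = train_linear_calibration(rng.normal(2, 1, 1000),
                                  rng.normal(0, 1, 1000))
print(to_llr([0.0, 2.0, 4.0]))
```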

Journal ArticleDOI
TL;DR: A novel pose-invariant face recognition method is proposed by combining curvelet-invariant moments with a curvelet neural network, which achieves higher accuracy for face recognition across pose and converges more rapidly than standard back-propagation neural networks.
Abstract: A novel pose-invariant face recognition method is proposed by combining curvelet-invariant moments with a curvelet neural network. First, a special set of statistical coefficients using higher-order curvelet moments is extracted as the feature vector, and then the invariant features are fed into curvelet neural networks. Finally, supervised invariant face recognition is achieved by converging the neural network using the curvelet as the activation function of the hidden-layer neurons. The experimental results demonstrate that curvelet higher-order moments and curvelet neural networks achieve higher accuracy for face recognition across pose and converge more rapidly than standard back-propagation neural networks.

Journal ArticleDOI
TL;DR: Here, acquisition device identification is studied using ‘sketches of features’ as intrinsic device characteristics; state-of-the-art classifiers, such as a sparse representation-based classifier or support vector machines, yield an identification accuracy exceeding 94% on a set of eight landline telephone handsets from the Lincoln-Labs Handset Database.
Abstract: Speech recordings carry useful information about the devices used to capture them. Here, acquisition device identification is studied using ‘sketches of features’ as intrinsic device characteristics. That is, starting from large-size raw feature vectors, obtained by either averaging the log-spectrogram of a speech recording along the time axis or stacking the parameters of each component of a Gaussian mixture model modelling the speech recorded by a specific device, features of reduced size are extracted by mapping these raw feature vectors into a low-dimensional space. The mapping preserves the ‘distance properties’ of the raw feature vectors. It is obtained by taking the inner product of the raw feature vector with a vector of independent identically distributed random variables drawn from a p-stable distribution. State-of-the-art classifiers, such as a sparse representation-based classifier or support vector machines, applied to the sketches yield an identification accuracy exceeding 94% on a set of eight landline telephone handsets from the Lincoln-Labs Handset Database. Perfect identification is reported for a set of 21 cell-phones of various models from seven different brands.
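A hedged sketch of the feature 'sketching' step: projecting a large raw feature vector onto i.i.d. draws from a p-stable distribution (Cauchy for p = 1, Gaussian for p = 2); the dimensions are placeholders and the classifier stage is omitted.

```python
import numpy as np

def make_sketcher(raw_dim, sketch_dim, p=1, seed=0):
    """Return a function mapping raw feature vectors of length `raw_dim`
    to `sketch_dim`-dimensional sketches via inner products with i.i.d.
    p-stable random variables (Cauchy for p=1, Gaussian for p=2).
    """
    rng = np.random.default_rng(seed)
    if p == 1:
        projection = rng.standard_cauchy(size=(sketch_dim, raw_dim))
    elif p == 2:
        projection = rng.standard_normal(size=(sketch_dim, raw_dim))
    else:
        raise ValueError("only p = 1 or p = 2 supported in this sketch")
    return lambda x: projection @ np.asarray(x)

# Example: reduce an 8192-dimensional averaged log-spectrogram to 64 numbers.
sketch = make_sketcher(raw_dim=8192, sketch_dim=64)
raw_vector = np.random.rand(8192)          # placeholder for a real feature
print(sketch(raw_vector).shape)            # (64,)
```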

Journal ArticleDOI
TL;DR: A ‘globally coherent’ variant of EGM (GC-EGM) is proposed that avoids sudden local angular movements of vertices while maintaining the ability to faithfully model non-linear distortions in the periocular region.
Abstract: In biometrics research, the periocular region has been regarded as an interesting trade-off between the face and the iris, particularly in unconstrained data acquisition setups. As in other biometric traits, the current challenge is the development of more robust recognition algorithms. Having investigated the suitability of the ‘elastic graph matching’ (EGM) algorithm to handle non-linear distortions in the periocular region caused by facial expressions, the authors observed that vertex locations often do not correspond to displacements in the biological tissue. Hence, they propose a ‘globally coherent’ variant of EGM (GC-EGM) that avoids sudden local angular movements of vertices while maintaining the ability to faithfully model non-linear distortions. Two main adaptations were carried out: (i) a new term for measuring vertex similarity and (ii) a new term in the edge-cost function that penalises changes in orientation between the model and test graphs. Experiments were carried out on both synthetic and real data and point to the advantages of the proposed algorithm. Also, the recognition performance when using the EGM and GC-EGM was compared, and statistically significant improvements in the error rates were observed when using the GC-EGM variant.

Journal ArticleDOI
TL;DR: The authors obtain results on region-to-region comparison which show that the hypothenar and interdigital regions outperform the thenar region and achieve significant performance improvements by regional fusion using regions segmented both manually and automatically.
Abstract: The spectral minutiae representation has recently been proposed as a novel method for minutiae-based fingerprint recognition, which is invariant to minutiae translation and rotation and presents low computational complexity. As high-resolution palmprint recognition is also mainly based on minutiae sets, spectral minutiae matching has been applied to palmprints and used in full-to-full palmprint matching. However, the performance of that approach was still limited. As one of the main reasons for this is the much bigger size of a palmprint compared with a fingerprint, the authors propose a division of the palmprint into smaller regions. Then, to further improve the performance of spectral minutiae-based palmprint matching, in this work the authors present anatomically inspired regional fusion using spectral minutiae matching for palmprints. Firstly, the authors consider three regions of the palm, namely interdigital, thenar and hypothenar, which are inspired by anatomic cues. Then, the authors apply spectral minutiae matching to region-to-region palmprint comparison and study regional discriminability when using the method. After that, the authors implement regional fusion at score level by combining the scores of different regional comparisons in the palm with two fusion methods, that is, the sum rule and logistic regression. The authors evaluate region-to-region comparison and regional fusion based on spectral minutiae matching on a public high-resolution palmprint database, THUPALMLAB. Both manual segmentation and automatic segmentation are performed to obtain the three palm regions for each palm. Essentially using the complex spectral minutiae representation (SMC), the authors obtain results on region-to-region comparison which show that the hypothenar and interdigital regions outperform the thenar region. More importantly, the authors achieve significant performance improvements by regional fusion using regions segmented both manually and automatically. One main advantage of the authors' approach is that human examiners can segment the palm into the three regions without prior knowledge of the system, which makes the segmentation process easy to incorporate into protocols such as those used in forensic science.
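A hedged sketch of score-level regional fusion with the two combiners named in the abstract, the sum rule and logistic regression; the region scores below are synthetic placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def sum_rule(region_scores):
    """Sum-rule fusion: add the per-region comparison scores.
    region_scores : array of shape (n_comparisons, n_regions)."""
    return np.asarray(region_scores).sum(axis=1)

def train_lr_fusion(region_scores, labels):
    """Logistic-regression fusion trained on labelled comparisons
    (label 1 = same palm, 0 = different palms).  Returns a function that
    maps per-region scores to a fused score."""
    lr = LogisticRegression()
    lr.fit(region_scores, labels)
    return lambda s: lr.decision_function(np.atleast_2d(s))

# Placeholder scores for (interdigital, thenar, hypothenar) comparisons:
rng = np.random.default_rng(2)
train_scores = rng.normal(size=(300, 3))
train_labels = rng.integers(0, 2, size=300)
fuse = train_lr_fusion(train_scores, train_labels)
print(sum_rule(train_scores[:5]), fuse(train_scores[:5]))
```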

Journal ArticleDOI
TL;DR: A 10-year effort by Standards Committee 37 to create a systematic vocabulary for the field of `biometrics' based on international standards for vocabulary development is discussed, which conceptualises and defines 121 terms that are most central to the proposed field.
Abstract: This study discusses a 10-year effort by Standards Committee 37 of the International Organisation for Standardisation/International Electrotechnical Commission Joint Technical Committee 1 (ISO/IEC JTC1 SC37) to create a systematic vocabulary for the field of `biometrics' based on international standards for vocabulary development. That process has now produced a new International Standard (ISO/IEC 2382-37:2012), which conceptualises and defines 121 terms that are most central to the proposed field. This study will review some of the philosophical and operational principles of vocabulary development within SC37, present 11 of the most commonly used standardised terms with their definitions and discuss some of the conceptual changes implicit in the new vocabulary.

Journal ArticleDOI
TL;DR: Thorough experiments show that the proposed local SR approach obtains considerably better and more robust performance than several previous state-of-the-art holistic SR methods, in both verification and closed-set identification problems.
Abstract: In the field of face recognition, sparse representation (SR) has received considerable attention during the past few years, with a focus on holistic descriptors in closed-set identification applications. The underlying assumption in such SR-based methods is that each class in the gallery has sufficient samples and the query lies on the subspace spanned by the gallery of the same class. Unfortunately, such an assumption is easily violated in the face verification scenario, where the task is to determine if two faces (where one or both have not been seen before) belong to the same person. In this study, the authors propose an alternative approach to SR-based face verification, where SR encoding is performed on local image patches rather than the entire face. The obtained sparse signals are pooled via averaging to form multiple region descriptors, which then form an overall face descriptor. Owing to the deliberate loss of spatial relations within each region (caused by averaging), the resulting descriptor is robust to misalignment and various image deformations. Within the proposed framework, they evaluate several SR encoding techniques: ℓ1-minimisation, Sparse Autoencoder Neural Network (SANN) and an implicit probabilistic technique based on Gaussian mixture models. Thorough experiments on the AR, FERET, exYaleB, BANCA and ChokePoint datasets show that the local SR approach obtains considerably better and more robust performance than several previous state-of-the-art holistic SR methods, on both the traditional closed-set identification task and the more applicable face verification task. The experiments also show that ℓ1-minimisation-based encoding has a considerably higher computational cost when compared with SANN-based and probabilistic encoding, but leads to higher recognition rates.

Journal ArticleDOI
TL;DR: Building on the relaxation-labelling-based approach of Chen et al. (later improved by Feng et al. for conventionally developed latent samples), a context-based separation approach for high-resolution samples of overlapped latent fingerprints is suggested, yielding an enhanced separation algorithm with optimised parameters.
Abstract: Overlapped latent fingerprints occurring at crime scenes challenge forensic investigations, as they cannot be properly processed unless separated. Addressing this, Chen et al. proposed a relaxation-labelling-based approach on simulated samples, improved by Feng et al. for conventionally developed latent ones. As the development of advanced contactless nanometre-range sensing technology keeps broadening the vision of forensics, the authors use a chromatic white light sensor for contactless non-invasive acquisition. This preserves the fingerprints for further investigations and enhances existing separation techniques. Motivated by the trend in dactyloscopy that investigations now not only aim at identifications but also at retrieving further context of the fingerprints (e.g. chemical composition, age), a context-based separation approach is suggested for high-resolution samples of overlapped latent fingerprints. The authors' conception of context-aware data processing is introduced to analyse the context in this forensic scenario, yielding an enhanced separation algorithm with optimised parameters. Two test sets are generated for evaluation, one consisting of 60 authentic overlapped fingerprints on three substrates and the other of 100 conventionally developed latent samples from the work of Feng et al. An equal error rate of 5.7% is achieved on the first test set, which shows improvement over their previous work, and 17.9% on the second.

Journal ArticleDOI
TL;DR: One of the key features of the authors' framework is that each classifier in the ensemble can be designed to use a different modality, thus providing the advantages of a truly multimodal biometric recognition system.
Abstract: A practically viable multi-biometric recognition system should not only be stable, robust and accurate but should also adhere to real-time processing speed and memory constraints. This study proposes a cascaded classifier-based framework for use in biometric recognition systems. The proposed framework utilises a set of weak classifiers to reduce the enrolled users' dataset to a small list of candidate users. This list is then used by a strong classifier set as the final stage of the cascade to formulate the decision. At each stage, the candidate list is generated by a Mahalanobis distance-based match score quality measure. One of the key features of the authors' framework is that each classifier in the ensemble can be designed to use a different modality, thus providing the advantages of a truly multimodal biometric recognition system. In addition, it is one of the first truly multimodal cascaded classifier-based approaches for biometric recognition. The performance of the proposed system is evaluated for both single and multiple modalities to demonstrate the effectiveness of the approach.

Journal ArticleDOI
TL;DR: This study proposes a gait recognition method for extremely low-quality videos, which have a frame rate of one frame per second (1 fps) and a resolution of 32 × 22 pixels, and finds that the performance improvement is directly proportional to the average disagreement level of the weak classifiers.
Abstract: Nowadays, surveillance cameras are widely installed in public places for security and law enforcement, but the video quality may be low because of limited transmission bandwidth and storage capacity. In this study, the authors propose a gait recognition method for extremely low-quality videos, which have a frame rate of one frame per second (1 fps) and a resolution of 32 × 22 pixels. Different from popular temporal reconstruction-based methods, the proposed method uses the average gait image (AGI) over the whole sequence as the appearance-based feature description. Based on the AGI description, the authors employ a large number of weak classifiers to reduce the generalisation errors. The performance can be further improved by incorporating model-based information into the classifier ensemble. The authors found that the performance improvement is directly proportional to the average disagreement level of the weak classifiers (i.e. diversity), which can be increased by using the model-based information. The authors evaluated the proposed method on both indoor and outdoor databases (i.e. the low-quality versions of the OU-ISIR-D and USF databases), and the results suggest that the method is more general and effective than other state-of-the-art algorithms.
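The appearance feature itself is simple to state: the average gait image is the pixel-wise mean of the aligned binary silhouettes over the sequence, as in the sketch below (the classifier ensemble and the model-based information are not shown).

```python
import numpy as np

def average_gait_image(silhouettes):
    """Average gait image: pixel-wise mean of aligned binary silhouettes.

    silhouettes : array of shape (n_frames, height, width) with values in
    {0, 1}; for a 1 fps, 32 x 22 pixel sequence this is only a handful of
    very small frames, which is exactly the regime the paper targets.
    """
    stack = np.asarray(silhouettes, dtype=np.float64)
    return stack.mean(axis=0)
```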

Journal ArticleDOI
TL;DR: An improved iris normalisation model applied after a precise iris segmentation process defines a new reference space for iris features and is compared with Daugman's reference system for assessing performance improvement.
Abstract: Iris recognition is among the best-performing biometric systems. Characterised by the iris's uniqueness, universality, distinctiveness, permanence and collectability, an iris recognition system achieves high performance and real-time response. In this study, the authors propose an improved iris normalisation model applied after a precise iris segmentation process. The normalisation model defines a new reference space for iris features. It normalises the iris using radial strips whose shape changes between the pupil's boundary and the circular approximation of the iris's outer boundary. Moreover, the effect of the centres of the normalisation strips is evaluated by assessing the recognition performance when comparing three different centre configurations. The approach is tested on 2491 images from the CASIA V3 database. The system's performance is measured at the matching stage. Higher decidability and recognition accuracy at the equal error rate are obtained. Detection error trade-off curves are estimated by using the proposed model and compared with Daugman's reference system for assessing the performance improvement.

Journal ArticleDOI
TL;DR: The authors observe that the use of a cancellable transformation in the multi-biometric dataset increased the accuracy of the ensemble systems, mainly when using FS methods.
Abstract: The concept of cancellable biometrics has been introduced as a way to overcome privacy concerns surrounding the management of biometric data. The goal is to transform a biometric trait into a new but revocable representation for enrolment and identification/verification. Thus, if compromised, a new representation of the original biometric data can be generated. In addition, multi-biometric systems are increasingly being deployed in various biometric-based applications because of their advantages over uni-biometric systems. In this study, the authors specifically investigate the use of ensemble systems and cancellable transformations in the multi-biometric context, using as examples two different biometric modalities (fingerprint and handwritten signature), both separately and in the multi-modal (multi-biometric) context. The datasets used in this analysis were FVC2004 (fingerprint verification competition) for fingerprint and an in-house database for signature. To increase the effectiveness of the proposed ensemble systems, two feature selection (FS) methods are used to distribute the attributes among the individual classifiers of an ensemble, increasing the diversity and performance of such systems. As a result of this analysis, the authors observe that the use of a cancellable transformation in the multi-biometric dataset increased the accuracy of the ensemble systems, mainly when FS methods were used.


Journal ArticleDOI
TL;DR: Two likelihood ratio values which differ in the nature of training scores they use and therefore consider slightly different interpretations of the two hypotheses are observed in the context of evidence evaluation from a face, a fingerprint and a speaker recognition system.
Abstract: For an automatic comparison of a pair of biometric specimens, a similarity metric called a ‘score’ is computed by the employed biometric recognition system. In forensic evaluation, it is desirable to convert this score into a likelihood ratio. This process is referred to as calibration. A likelihood ratio is the probability of the score given that the prosecution hypothesis (which states that the pair of biometric specimens originated from the suspect) is true, divided by the probability of the score given that the defence hypothesis (which states that the pair of biometric specimens did not originate from the suspect) is true. In practice, a set of scores (called training scores) obtained from within-source and between-sources comparisons is needed to compute a likelihood ratio value for a score. In likelihood ratio computation, the within-source and between-sources conditions can be anchored to a specific suspect in a forensic case, or they can be generic within-source and between-sources comparisons independent of the suspect involved in the case. This results in two likelihood ratio values which differ in the nature of the training scores they use and therefore consider slightly different interpretations of the two hypotheses. The goal of this study is to quantify the differences in these two likelihood ratio values in the context of evidence evaluation from a face, a fingerprint and a speaker recognition system. For each biometric modality, a simple forensic case is simulated by randomly selecting a small subset of biometric specimens from a large database. In order to be able to carry out a comparison across the three biometric modalities, the same protocol is followed for training score set generation. It is observed that there is a significant variation in the two likelihood ratio values.
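One common way to turn training scores into a likelihood ratio is to estimate the two score densities with kernel density estimation and evaluate their ratio at the evidence score, as sketched below; supplying suspect-anchored or generic training scores gives the two LR variants compared in the study. The density-estimation choice here is an assumption, not necessarily the one used by the authors.

```python
import numpy as np
from scipy.stats import gaussian_kde

def likelihood_ratio(evidence_score, within_scores, between_scores):
    """LR = p(score | same source) / p(score | different sources),
    with both densities estimated by Gaussian kernel density estimation
    from the supplied training scores.
    """
    p_same = gaussian_kde(within_scores)(evidence_score)[0]
    p_diff = gaussian_kde(between_scores)(evidence_score)[0]
    return p_same / p_diff

# Synthetic example (placeholder scores, not real casework data):
rng = np.random.default_rng(3)
lr = likelihood_ratio(1.5,
                      rng.normal(2.0, 1.0, 500),    # within-source scores
                      rng.normal(0.0, 1.0, 5000))   # between-sources scores
print(lr)
```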

Journal ArticleDOI
TL;DR: The first partial countermeasure to targeted impersonation attacks is presented, using client-specific Z-score normalisation to provide a more consistent false acceptance rate across all enrolled subjects.
Abstract: This study is concerned with the reliability of biometric verification systems when used in forensic applications, in particular when such systems are subjected to targeted impersonation attacks. The authors expand on the existing work on targeted impersonation, focusing on how best to measure the reliability of verification systems in forensic contexts. It identifies two scenarios in which targeted impersonation effects may occur: (i) the forensic investigation of criminal activity involving identity theft; and (ii) implicit targeting as a result of the forensic investigation process. Also, the first partial countermeasure to such attacks is presented. The countermeasure uses client-specific Z-score normalisation to provide a more consistent false acceptance rate across all enrolled subjects. This reduces the effectiveness of targeted impersonation without impairing the system's accuracy under random zero-effort attacks.
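Client-specific Z-score normalisation can be sketched as follows: each raw score is standardised by the mean and standard deviation of that client's zero-effort impostor scores, so that a single global threshold gives a more uniform false acceptance rate.

```python
import numpy as np

def client_z_norm(raw_score, client_impostor_scores):
    """Client-specific Z-score normalisation.

    client_impostor_scores : scores obtained by comparing a cohort of
    other people's samples against this client's model.  Normalising by
    their mean and standard deviation makes one global threshold give a
    more uniform false-acceptance rate across clients.
    """
    mu = np.mean(client_impostor_scores)
    sigma = np.std(client_impostor_scores)
    return (raw_score - mu) / sigma if sigma > 0 else raw_score - mu
```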

Journal ArticleDOI
Hai Yang, Yunfei Xu, Houjun Huang, Ruohua Zhou, Yonghong Yan
TL;DR: A linear Gaussian model-based framework for voice biometrics that worked well on the core-extended conditions of the NIST 2010 Speaker Recognition Evaluation, and is competitive compared with the Gaussian probabilistic linear discriminant analysis, in terms of normalised decision cost function.
Abstract: This study introduces a linear Gaussian model-based framework for voice biometrics. The model works with discrete-time linear dynamical systems. The study's motivation is to use the linear Gaussian modelling method in voice biometrics and to show that the accuracy offered by the linear Gaussian modelling method is comparable with other state-of-the-art methods such as probabilistic linear discriminant analysis and the two-covariance model. An expectation-maximisation algorithm is derived to train the model, and a Bayesian solution is used to calculate the log-likelihood ratio score of all speaker trials. This approach performed well on the core-extended conditions of the NIST 2010 Speaker Recognition Evaluation, and is competitive with Gaussian probabilistic linear discriminant analysis in terms of the normalised decision cost function.