Journal ArticleDOI

QFuse: Online learning framework for adaptive biometric system

01 Nov 2015-Pattern Recognition (Pergamon)-Vol. 48, Iss: 11, pp 3428-3439
TL;DR: This paper presents an adaptive context switching algorithm coupled with online learning to address scalability and accommodate variations in the data distribution of biometric systems.
About: This article was published in Pattern Recognition on 2015-11-01 and has received 27 citations to date. It focuses on the topic: Biometrics.
Citations
Journal ArticleDOI
TL;DR: This paper presents a comprehensive review of techniques that incorporate ancillary information in the biometric recognition pipeline, providing an overview of the role of information fusion in biometrics.

151 citations

Journal ArticleDOI
TL;DR: This paper overviews several systems and architectures related to the combination of biometric systems, both unimodal and multimodal, classifying them according to a given taxonomy, and presents a case study on the experimental evaluation of methods for biometric fusion at score level.

123 citations


Cites background or methods from "QFuse: Online learning framework fo..."

  • ...Table 2 (multimodal biometric databases cited):

    Ref.  Name        Year  Biometric traits                                                              Users
    [61]  WVU         2007  Fingerprint, Face, Iris, Palmprint, Hand geometry, Voice                      270
    [62]  MBGC        2009  Face, Iris                                                                    >146
    [63]  BiosecurID  2010  Face, Speech, Iris, Signature, Handwriting, Fingerprints, Hand, Keystroking   400
    [60]  BMDB        2010  Face, Speech, Signature, Fingerprints, Hand, Iris                             >600
    [64]  SDUMLA-HMT  2011  Face, Gait, Iris, Fingerprints                                                106
    [65]  MOBio       2012  Face, Speech                                                                  152
    [66]  MMU GASPFA  2013  Gait, Speech, Face                                                            82
    [67]  MobBIO      2014  Face, Iris, Voice                                                             105
    [59]  gb2sμMOD    2015  Hand, Iris and Face                                                           60
    [40]  LEA         2015  Fingerprint, Face, Iris                                                       18,000

  • ...LEA [40] is a multimodal database provided by a Law Enforcement Agency and captured in unconstrained, real-world conditions with uncooperative users....

  • ...QFuse [40] is an online learning algorithm for adaptive biometric fusion that incorporates image quality in the dynamic selection of unimodal classifiers and their fusion....


  • ...To deal with this problem, combination methods based on online learning [40] are required, which provide an efficient way to sustain the performance by addressing the variations in data (match score and quality score) distribution introduced by the newly enrolled individuals....

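The point in the last snippet, that online learning lets a combination method track shifts in the match-score and quality-score distribution as new users enrol, can be illustrated with a toy sketch. This is not the QFuse algorithm itself: it is a minimal online logistic regression over (match score, quality score) pairs, updated one labelled comparison at a time, with made-up scores and learning rate.

```python
import math

class OnlineScoreFusion:
    """Toy online fusion rule over a match score and a quality score."""

    def __init__(self, lr=0.1):
        self.w = [0.0, 0.0]   # weights for match score and quality score
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, match_score, quality_score):
        # probability that the comparison is genuine
        z = self.w[0] * match_score + self.w[1] * quality_score + self.b
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, match_score, quality_score, is_genuine):
        # one SGD step on the logistic loss for a single labelled comparison
        p = self.predict_proba(match_score, quality_score)
        err = p - (1.0 if is_genuine else 0.0)
        self.w[0] -= self.lr * err * match_score
        self.w[1] -= self.lr * err * quality_score
        self.b -= self.lr * err

fusion = OnlineScoreFusion()
# illustrative stream: genuine comparisons have higher match scores
for _ in range(200):
    fusion.update(0.9, 0.8, True)
    fusion.update(0.2, 0.7, False)
```

Because each update is a single gradient step, comparisons involving newly enrolled users adjust the fused decision boundary incrementally, without retraining from scratch.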

Journal ArticleDOI
TL;DR: In this article, a deep learning-based multimodal fusion architecture for classification tasks is proposed, which guarantees compatibility with any kind of learning model, handles cross-modal information carefully, and prevents performance degradation due to the partial absence of data.

82 citations

Journal ArticleDOI
TL;DR: The Group Sparse Representation based Classifier (GSRC) is proposed which removes the requirement for a separate feature-level fusion mechanism and integrates multi-feature representation seamlessly into classification.

63 citations


Cites background or methods from "QFuse: Online learning framework fo..."

  • ...8% is obtained using the context switching algorithm [37]....


  • ...Figure 5 (CMC curves, Rank vs. Identification Accuracy (%)): identification on the LEA multimodal database with individual features, SRC and GSRC on (a) face, (b) fingerprint, and (c) iris....

  • ...Figure 6 (CMC curves, Rank vs. Identification Accuracy (%)): identification on the WVU multimodal database, comparing bimodal SRC and GSRC combinations of face, iris, and fingerprint with sum rule fusion, combined SRC, context switching, and combined GSRC....

  • ...Identification experiments are performed on both the WVU and LEA databases and the performance of the proposed GSRC algorithm is evaluated in four scenarios and major observations are noted below....


  • ...On the LEA database, the context switching algorithm outperforms both sum rule fusion and traditional SRC with an identification performance of 55.8% whereas the GSRC algorithm performs 6.5% better and achieves 62.3% rank-1 accuracy....


Journal ArticleDOI
TL;DR: This paper combines ECG with a fingerprint liveness detection algorithm and proposes a stopping criterion that reduces the average waiting time for signal acquisition and examines automatic template updating using ECG and fingerprint.
Abstract: Fingerprints have been extensively used for biometric recognition around the world. However, fingerprints are not secrets, and an adversary can synthesize a fake finger to spoof a biometric system. Most current fingerprint spoof detection methods are binary classifiers trained on real and fake samples. While they perform well on detecting fake samples created with the same methods used for training, their performance degrades when encountering fake samples created by a novel spoofing method. In this paper, we approach the problem from a different perspective by incorporating the electrocardiogram (ECG). Compared with conventional biometrics, stealing someone's ECG is far more difficult, if not impossible. Considering that ECG is a vital signal, and motivated by its inherent liveness, we propose to combine it with a fingerprint liveness detection algorithm. The combination is natural, as both ECG and fingerprints can be captured from fingertips. In the proposed framework, ECG and fingerprint are combined not only for authentication but also for liveness detection. We also examine automatic template updating using ECG and fingerprint. In addition, we propose a stopping criterion that reduces the average waiting time for signal acquisition. We have performed extensive experiments on the LivDet2015 database, presently the latest available liveness detection database, and compare the proposed method with six liveness detection methods as well as 12 participants of the LivDet2015 competition. The proposed system achieves a liveness detection equal error rate (EER) of 4.2% incorporating only 5 s of ECG. Extending the recording time to 30 s reduces the liveness detection EER to 2.6%, which is about 4 times better than the best of the six comparison methods and about 2 times better than the best results achieved by participants of the LivDet2015 competition.

48 citations


Cites background from "QFuse: Online learning framework fo..."

  • ...While multimodal biometric systems based on conventional traits such as face and fingerprint have been extensively investigated in [32] and [33], there exist only a few works about a multimodal biometric system that includes ECG....


References
BookDOI
01 Dec 2001
TL;DR: Learning with Kernels provides an introduction to SVMs and related kernel methods, covering all of the concepts necessary to enable a reader equipped with some basic mathematical knowledge to enter the world of machine learning using theoretically well-founded yet easy-to-use kernel algorithms.
Abstract: From the Publisher: In the 1990s, a new type of learning algorithm was developed, based on results from statistical learning theory: the Support Vector Machine (SVM). This gave rise to a new class of theoretically elegant learning machines that use a central concept of SVMs, kernels, for a number of learning tasks. Kernel machines provide a modular framework that can be adapted to different tasks and domains by the choice of the kernel function and the base algorithm. They are replacing neural networks in a variety of fields, including engineering, information retrieval, and bioinformatics. Learning with Kernels provides an introduction to SVMs and related kernel methods. Although the book begins with the basics, it also includes the latest research. It provides all of the concepts necessary to enable a reader equipped with some basic mathematical knowledge to enter the world of machine learning using theoretically well-founded yet easy-to-use kernel algorithms and to understand and apply the powerful algorithms that have been developed over the last few years.

7,880 citations

Journal ArticleDOI
TL;DR: A common theoretical framework for combining classifiers which use distinct pattern representations is developed and it is shown that many existing schemes can be considered as special cases of compound classification where all the pattern representations are used jointly to make a decision.
Abstract: We develop a common theoretical framework for combining classifiers which use distinct pattern representations and show that many existing schemes can be considered as special cases of compound classification where all the pattern representations are used jointly to make a decision. An experimental comparison of various classifier combination schemes demonstrates that the combination rule developed under the most restrictive assumptions (the sum rule) outperforms the other classifier combination schemes. A sensitivity analysis of the various schemes to estimation errors is carried out to show that this finding can be justified theoretically.
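The sum rule described in this abstract can be stated compactly in code. The sketch below averages per-matcher posterior estimates and picks the best-supported class; the two-matcher numbers are illustrative only, not taken from the paper's experiments.

```python
def sum_rule(posteriors_per_matcher):
    """Average per-matcher posteriors and return the best-supported class.

    posteriors_per_matcher: list of dicts mapping class -> P(class | matcher).
    """
    classes = posteriors_per_matcher[0].keys()
    n = len(posteriors_per_matcher)
    fused = {c: sum(p[c] for p in posteriors_per_matcher) / n for c in classes}
    return max(fused, key=fused.get)

# two matchers disagree; the sum rule picks the class with the
# highest average support (here 0.55 for "impostor" vs 0.45)
decision = sum_rule([
    {"genuine": 0.6, "impostor": 0.4},   # matcher 1 leans genuine
    {"genuine": 0.3, "impostor": 0.7},   # matcher 2 leans impostor
])
```

Averaging, rather than multiplying, posteriors is what gives the sum rule its robustness to estimation errors in any single matcher.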

5,670 citations


"QFuse: Online learning framework fo..." refers methods in this paper

  • ...In matcher fusion, all the constituent matchers are used and their evidences are combined using fusion rules [4], [5], [6], [7]....


Journal ArticleDOI
TL;DR: This paper presents a novel and efficient facial image representation based on local binary pattern (LBP) texture features that is assessed in the face recognition problem under different challenges.
Abstract: This paper presents a novel and efficient facial image representation based on local binary pattern (LBP) texture features. The face image is divided into several regions from which the LBP feature distributions are extracted and concatenated into an enhanced feature vector to be used as a face descriptor. The performance of the proposed method is assessed in the face recognition problem under different challenges. Other applications and several extensions are also discussed.
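The regional-histogram construction described above is easy to sketch. The following minimal illustration uses plain 3×3 LBP codes (not the paper's exact variant, and without uniform-pattern bucketing): a code is computed per pixel, codes are histogrammed per grid region, and the regional histograms are concatenated into one descriptor.

```python
def lbp_code(img, y, x):
    # threshold the 8 neighbours of (y, x) against the centre pixel,
    # packing the comparison results into an 8-bit code
    c = img[y][x]
    neigh = [img[y-1][x-1], img[y-1][x], img[y-1][x+1], img[y][x+1],
             img[y+1][x+1], img[y+1][x], img[y+1][x-1], img[y][x-1]]
    return sum(1 << i for i, v in enumerate(neigh) if v >= c)

def lbp_descriptor(img, grid=2):
    # histogram LBP codes per grid region, then concatenate the histograms
    h, w = len(img), len(img[0])
    hists = [[0] * 256 for _ in range(grid * grid)]
    for y in range(1, h - 1):          # interior pixels only
        for x in range(1, w - 1):
            cell = (y * grid // h) * grid + (x * grid // w)
            hists[cell][lbp_code(img, y, x)] += 1
    return [count for hist in hists for count in hist]

# toy 8x8 "face": 4 regions x 256 bins = 1024-value descriptor
face = [[(x * y) % 7 for x in range(8)] for y in range(8)]
desc = lbp_descriptor(face, grid=2)
```

Concatenating regional histograms, rather than pooling one global histogram, is what preserves the spatial layout of the face in the descriptor.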

5,563 citations


"QFuse: Online learning framework fo..." refers methods in this paper

  • ...To match two corresponding UCLBP features or SURF descriptors, the χ² distance measure is used....

  • ...For face, two matchers are used: Uniform Circular Local Binary Pattern (UCLBP) [44] as face matcher1 and Speeded Up Robust Features (SURF) [45] as face matcher2....


  • ...UCLBP is computed with circular encoding of eight neighboring pixels evenly positioned on a circle of radius two....

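The χ² matching step mentioned in these snippets can be sketched as follows. This is one common form of the chi-squared histogram distance; the histograms are toy values, and the small epsilon guarding empty bins is an implementation choice, not something specified in the paper.

```python
def chi_square_distance(h1, h2, eps=1e-10):
    # chi-squared distance between two histograms; 0 for identical
    # inputs, larger for more dissimilar ones
    return sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h1, h2))

d_same = chi_square_distance([4, 2, 0, 1], [4, 2, 0, 1])  # identical histograms
d_diff = chi_square_distance([4, 2, 0, 1], [0, 2, 4, 1])  # mass moved between bins
```

The per-bin normalisation by (a + b) makes the measure more sensitive to differences in sparsely populated bins than a plain Euclidean distance would be.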

Journal ArticleDOI
TL;DR: Algorithms developed by the author for recognizing persons by their iris patterns have now been tested in many field and laboratory trials, producing no false matches in several million comparison tests.
Abstract: Algorithms developed by the author for recognizing persons by their iris patterns have now been tested in many field and laboratory trials, producing no false matches in several million comparison tests. The recognition principle is the failure of a test of statistical independence on iris phase structure encoded by multi-scale quadrature wavelets. The combinatorial complexity of this phase information across different persons spans about 249 degrees of freedom and generates a discrimination entropy of about 3.2 b/mm/sup 2/ over the iris, enabling real-time decisions about personal identity with extremely high confidence. The high confidence levels are important because they allow very large databases to be searched exhaustively (one-to-many "identification mode") without making false matches, despite so many chances. Biometrics that lack this property can only survive one-to-one ("verification") or few comparisons. The paper explains the iris recognition algorithms and presents results of 9.1 million comparisons among eye images from trials in Britain, the USA, Japan, and Korea.
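The iris-code comparison underlying this test of statistical independence is, in essence, a masked fractional Hamming distance between binary codes. The sketch below illustrates the idea on toy bit lists; the 0.32 threshold is a typical operating point used here for illustration, not a value taken from this abstract.

```python
def fractional_hamming(code_a, code_b, mask_a, mask_b):
    # compare only bits that are valid (unoccluded) in both codes
    valid = [ma and mb for ma, mb in zip(mask_a, mask_b)]
    disagree = sum(1 for a, b, v in zip(code_a, code_b, valid) if v and a != b)
    return disagree / sum(valid)

def same_iris(code_a, code_b, mask_a, mask_b, threshold=0.32):
    # codes from the same iris disagree in few bits; codes from
    # different irises disagree in about half of them
    return fractional_hamming(code_a, code_b, mask_a, mask_b) < threshold

code_a = [0, 1, 1, 0, 1, 0, 1, 0]
code_b = [0, 1, 1, 0, 1, 0, 1, 1]   # one bit differs from code_a
mask = [True] * 8                   # toy case: every bit is valid
```

Real iris codes have thousands of bits, which is why a simple threshold on this distance can support exhaustive one-to-many search with very few false matches.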

2,829 citations

Proceedings ArticleDOI
10 Dec 2002
TL;DR: Algorithms developed by the author for recognizing persons by their iris patterns have now been tested in many field and laboratory trials, producing no false matches in several million comparison tests.
Abstract: The principle that underlies the recognition of persons by their iris patterns is the failure of a test of statistical independence on texture phase structure as encoded by multiscale quadrature wavelets. The combinatorial complexity of this phase information across different persons spans about 249 degrees of freedom and generates a discrimination entropy of about 3.2 bits/mm/sup 2/ over the iris, enabling real-time decisions about personal identity with extremely high confidence. Algorithms first described by the author in 1993 have now been tested in several independent field trials and are becoming widely licensed. This presentation reviews how the algorithms work and presents the results of 9.1 million comparisons among different eye images acquired in trials in Britain, the USA, Korea, and Japan.

2,437 citations


"QFuse: Online learning framework fo..." refers methods in this paper


  • ...[47] J. Daugman, How iris recognition works, IEEE Transactions on Circuits and Systems for Video Technology 14 (11) (2004) 21–30....


  • ...An 8×8 convolution filter from the pre-processing step of Daugman [47] is used to measure the 2D spectral power of the image, as given by Equation A.9: f(x) = 100 x² / (x² + c²), where x is the total spectral power measured by the convolution filter and c is the half-power point, at which f(x) = 50....
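The quality measure in the last snippet is a one-liner in code. Here x is assumed to be the total 2D spectral power produced by the 8×8 filter and c the half-power point; both inputs below are made-up values.

```python
def spectral_quality(x, c):
    # Daugman-style focus score: rises from 0 toward 100 as spectral
    # power x grows, and equals exactly 50 at the half-power point c
    return 100.0 * x * x / (x * x + c * c)

q_half = spectral_quality(3.0, 3.0)   # at x == c the score is exactly 50
q_sharp = spectral_quality(6.0, 3.0)  # more spectral power -> higher score
```

Bounding the score between 0 and 100 makes it easy to use as a normalised quality input to the fusion stage.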