Book Chapter DOI

Intelligent Biometric Information Fusion using Support Vector Machine

01 Jan 2007, pp. 325-349
About: The article was published on 2007-01-01 and has received 27 citations to date. It focuses on the topics: Support vector machine and Biometrics.
Citations
Proceedings Article DOI
13 Jun 2010
TL;DR: A novel score-level transformation technique for fusing multiple classifiers is proposed. It is based on a quantile transform of the genuine and impostor score distributions, followed by a power transform that further reshapes the score distribution to aid linear classification.
Abstract: In biometrics authentication systems, it has been shown that fusion of more than one modality (e.g., face and finger) and fusion of more than one classifier (two different algorithms) can improve the system performance. Often a score level fusion is adopted as this approach doesn't require the vendors to reveal much about their algorithms and features. Many score level transformations have been proposed in the literature to normalize the scores which enable fusion of more than one classifier. In this paper, we propose a novel score level transformation technique that helps in fusion of multiple classifiers. The method is based on two components: quantile transform of the genuine and impostor score distributions and a power transform which further changes the score distribution to help linear classification. After the scores are normalized using the novel quantile power transform, several linear classifiers are proposed to fuse the scores of multiple classifiers. Using the NIST BSSR-1 dataset, we have shown that the results obtained by the proposed method far exceed the results published so far in the literature.
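
The quantile-plus-power transform described above can be sketched as follows. This is a minimal illustration of the idea, not the authors' exact method; the choice of reference distribution, the shaping parameter `gamma`, and the function names are all assumptions made for illustration.

```python
import numpy as np

def quantile_transform(scores, reference):
    """Map each score to its empirical quantile within a reference
    score distribution (values land in [0, 1])."""
    reference = np.sort(reference)
    ranks = np.searchsorted(reference, scores, side="right")
    return ranks / len(reference)

def quantile_power_transform(scores, reference, gamma=2.0):
    """Quantile transform followed by a power transform; gamma is a
    hypothetical shaping parameter, not a value from the paper."""
    q = quantile_transform(scores, reference)
    return q ** gamma

# Illustrative scores from a hypothetical matcher, normalized against
# a synthetic impostor score distribution.
rng = np.random.default_rng(0)
impostor_ref = rng.normal(0.0, 1.0, 1000)
probe_scores = np.array([-1.0, 0.0, 2.0])

t = quantile_power_transform(probe_scores, impostor_ref, gamma=2.0)
print(t)  # monotonically increasing values in [0, 1]
```

After such a normalization, scores from different matchers live on a common [0, 1] scale, so a linear classifier (or even a simple weighted sum) can fuse them.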

10 citations

Proceedings Article DOI
01 Jan 2007
TL;DR: This paper formulates an evidence-theoretic multimodal fusion approach using belief functions that accounts for variability in image characteristics; the approach is computationally efficient, and verification accuracy is not compromised even when conflicting decisions are encountered.
Abstract: This paper formulates an evidence theoretic multimodal fusion approach using belief functions that takes into account the variability in image characteristics. When processing non-ideal images the variation in the quality of features at different levels of abstraction may cause individual classifiers to generate conflicting genuine-impostor decisions. Existing fusion approaches are non-adaptive and do not always guarantee optimum performance improvements. We propose a contextual unification framework to dynamically select the most appropriate evidence theoretic fusion algorithm for a given scenario. The effectiveness of our approach is experimentally validated by fusing match scores from level-2 and level-3 fingerprint features. Compared to existing fusion algorithms, the proposed approach is computationally efficient, and the verification accuracy is not compromised even when conflicting decisions are encountered.
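
The belief-function machinery behind such evidence-theoretic fusion is Dempster-Shafer theory. A minimal sketch of Dempster's rule of combination for two matchers over the frame {genuine, impostor} — the mass values and the function itself are illustrative, not the paper's algorithm:

```python
def dempster_combine(m1, m2):
    """Combine two basic belief assignments over the frame
    {'g' (genuine), 'i' (impostor), 'theta' (ignorance)}.
    Masses in each assignment must sum to 1."""
    focal = ('g', 'i', 'theta')
    out = {f: 0.0 for f in focal}
    conflict = 0.0
    for a in focal:
        for b in focal:
            mass = m1[a] * m2[b]
            if a == 'theta':
                inter = b
            elif b == 'theta' or a == b:
                inter = a
            else:  # {g} ∩ {i} is empty: this is conflicting mass
                conflict += mass
                continue
            out[inter] += mass
    # Dempster's rule normalizes by 1 - K (the total conflict).
    return {f: v / (1.0 - conflict) for f, v in out.items()}

# Two matchers: one fairly confident 'genuine', one weakly so.
m_face   = {'g': 0.7, 'i': 0.1, 'theta': 0.2}
m_finger = {'g': 0.5, 'i': 0.2, 'theta': 0.3}
fused = dempster_combine(m_face, m_finger)
print(fused)  # mass concentrates on 'g'
```

The normalization step is what makes conflicting decisions expensive, which is why adaptive selection among fusion rules (as the paper proposes) matters when classifiers disagree.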

8 citations

Proceedings Article DOI
01 Aug 2017
TL;DR: Including coherence information within a dynamic weighting scheme reduced the equal error rate by 45% to 85% relative to the baseline static-weighting solution across the different test scenarios, and the approach maintained high performance on noisy data.
Abstract: Multi-biometrics aims at building more accurate unified biometric decisions based on the information provided by multiple biometric sources. Information fusion is used to optimize the process of creating this unified decision. In previous works dealing with score-level multi-biometric fusion, the scores of the different biometric sources belonging to the comparison of interest are used to create the fused score. This is usually achieved by assigning static weights to the different biometric sources, with more advanced solutions considering supplementary dynamic information such as sample quality and neighbours distance ratio. This work proposes embedding score coherence information in the fusion process. It is based on the assumption that a minority of biometric sources, which points towards a different decision than the majority, might have reached faulty conclusions and should be given a relatively smaller role in the final decision. The evaluation was performed on the BioSecure multimodal biometric database with different levels of simulated noise. The proposed solution incorporates, and was compared to, three baseline static weighting approaches. Including the coherence information within a dynamic weighting scheme reduced the equal error rate by 45% to 85% over the different test scenarios compared to the baseline solutions, and the approach proved to maintain high performance when dealing with noisy data.
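
The coherence idea can be sketched as dynamically down-weighting sources whose scores disagree with the majority tendency. The distance-to-median coherence measure below is a hypothetical stand-in, not the paper's formula:

```python
import numpy as np

def coherence_weighted_fusion(scores, base_weights):
    """Fuse normalized comparison scores (higher = more genuine-like).
    Each source's dynamic weight is its base weight scaled down by how
    far its score lies from the median of all sources — a simple,
    illustrative coherence measure."""
    scores = np.asarray(scores, dtype=float)
    w = np.asarray(base_weights, dtype=float)
    coherence = 1.0 / (1.0 + np.abs(scores - np.median(scores)))
    dyn = w * coherence
    dyn /= dyn.sum()          # renormalize the dynamic weights
    return float(np.dot(dyn, scores))

# Three sources agree around 0.8; one outlier points the other way.
fused = coherence_weighted_fusion([0.82, 0.78, 0.80, 0.15],
                                  [0.25, 0.25, 0.25, 0.25])
print(fused)  # pulled toward the coherent majority, above the plain mean
```

The outlier source still contributes, but less than under static equal weighting, which is the qualitative behaviour the abstract describes.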

4 citations


Cites background from "Intelligent Biometric Information F..."

  • ...vector machines (SVM) [4][5][6], neural networks [7], and the...


  • ...Different types of classifiers were used to perform multi-biometric fusion, some of those are support vector machines (SVM) [4][5][6], neural networks [7], and the likelihood ratio methods [8]....


Reference Entry DOI
16 Jun 2014
TL;DR: Describes the use of biometric technology in forensic science for developing new methods and tools, improving current forensic biometric applications, and enabling the creation of new ones.
Abstract: This article describes the use of biometric technology in forensic science, for the development of new methods and tools, improving the current forensic biometric applications, and allowing for the creation of new ones. The article begins with a definition and a summary of the development of this field. It then describes the data and automated biometric modalities of interest in forensic science and the forensic applications embedding biometric technology. On this basis, it describes the solutions and limitations of the current practice regarding the data, the technology, and the inference models. Finally, it proposes research orientations for the improvement of the current forensic biometric applications and suggests some ideas for the development of some new forensic biometric applications.

4 citations


Additional excerpts

  • ...…combine results at different levels (feature, match score, and decision), using rule-based approaches (majority voting, sum rule, product rules), or algorithms based on Support Vector Machine, fuzzy clustering, radial basis neural networks, or even to fuse information between different levels [36]....


Proceedings Article DOI
02 Nov 2015
TL;DR: This work presents an approach to integrate biometric source weighting in the calculation of neighbors distance ratios to be used within a classification-based multi-biometric fusion process.
Abstract: This work presents an approach to integrate biometric source weighting into the calculation of neighbors distance ratios to be used within a classification-based multi-biometric fusion process. The neighbors distance ratio represents the elevation of the top-ranked identification match over the following ranks. Using biometric source weighting can help achieve the more accurate initial identity ranking necessary for neighbors distance ratios. It also influences the effect of each biometric source on the ratio values. The proposed approach is developed and evaluated using the Biometric Scores Set BSSR1 database. The results are presented in the verification scenario as receiver operating characteristic (ROC) curves. The achieved performance is compared to a number of baseline solutions; satisfactory and stable performance was achieved, with a clear benefit from integrating the biometric source weights.

4 citations


Cites methods from "Intelligent Biometric Information F..."

  • ...Different types of classifiers were used to perform multi-biometric fusion, some of those are support vector machines (SVM) [5][6][7], neural networks [8], and the likelihood ratio methods [9]....


References
Book
Vladimir Vapnik
01 Jan 1995
TL;DR: Covers the setting of the learning problem; consistency of learning processes; bounds on the rate of convergence of learning processes; controlling the generalization ability of learning processes; constructing learning algorithms; and what is important in learning theory.
Abstract: Setting of the learning problem; consistency of learning processes; bounds on the rate of convergence of learning processes; controlling the generalization ability of learning processes; constructing learning algorithms; what is important in learning theory?

40,147 citations


"Intelligent Biometric Information F..." refers background or methods in this paper

  • ...and C is the factor used to control the violation of safety margin rule [ 33 ]....


  • ...Support Vector Machine, proposed by [ 33 ], is a powerful methodology for solving problems in nonlinear classification, function estimation and density....


Journal Article DOI
TL;DR: A face recognition algorithm which is insensitive to large variation in lighting direction and facial expression is developed, based on Fisher's linear discriminant and produces well separated classes in a low-dimensional subspace, even under severe variations in lighting and facial expressions.
Abstract: We develop a face recognition algorithm which is insensitive to large variation in lighting direction and facial expression. Taking a pattern classification approach, we consider each pixel in an image as a coordinate in a high-dimensional space. We take advantage of the observation that the images of a particular face, under varying illumination but fixed pose, lie in a 3D linear subspace of the high dimensional image space-if the face is a Lambertian surface without shadowing. However, since faces are not truly Lambertian surfaces and do indeed produce self-shadowing, images will deviate from this linear subspace. Rather than explicitly modeling this deviation, we linearly project the image into a subspace in a manner which discounts those regions of the face with large deviation. Our projection method is based on Fisher's linear discriminant and produces well separated classes in a low-dimensional subspace, even under severe variation in lighting and facial expressions. The eigenface technique, another method based on linearly projecting the image space to a low dimensional subspace, has similar computational requirements. Yet, extensive experimental results demonstrate that the proposed "Fisherface" method has error rates that are lower than those of the eigenface technique for tests on the Harvard and Yale face databases.
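
The standard Fisherface recipe the abstract describes — project with PCA first (to avoid a singular within-class scatter matrix), then apply Fisher's linear discriminant — can be sketched on synthetic stand-in data; the data, dimensions, and use of scikit-learn are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic stand-in for face images: 3 classes ("subjects"),
# 20 samples each, 100-dimensional "pixel" vectors.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 1.0, size=(20, 100)) for c in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 20)

# Fisherface recipe: PCA down to N - c dimensions, then Fisher's
# linear discriminant, which yields at most c - 1 directions.
pca = PCA(n_components=len(X) - 3)   # N - c components
lda = LinearDiscriminantAnalysis()
Z = lda.fit_transform(pca.fit_transform(X), y)
print(Z.shape)  # (60, 2): c - 1 discriminant directions for c = 3
```

In the projected space, within-class scatter is minimized relative to between-class scatter, which is what makes the classes well separated even under the nuisance variation the paper discusses.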

11,674 citations


"Intelligent Biometric Information F..." refers methods in this paper

  • ...These plots show that the performance of both the phase and amplitude features are comparable and they outperform the standard PCA and LDA based face recognition algorithms [ 46 ]....


Journal Article DOI
TL;DR: A common theoretical framework for combining classifiers which use distinct pattern representations is developed and it is shown that many existing schemes can be considered as special cases of compound classification where all the pattern representations are used jointly to make a decision.
Abstract: We develop a common theoretical framework for combining classifiers which use distinct pattern representations and show that many existing schemes can be considered as special cases of compound classification where all the pattern representations are used jointly to make a decision. An experimental comparison of various classifier combination schemes demonstrates that the combination rule developed under the most restrictive assumptions-the sum rule-outperforms other classifier combinations schemes. A sensitivity analysis of the various schemes to estimation errors is carried out to show that this finding can be justified theoretically.
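
The sum and product rules compared in this framework can be sketched directly on per-classifier posterior estimates (the posterior values below are illustrative):

```python
import numpy as np

def sum_rule(posteriors):
    """posteriors: (n_classifiers, n_classes) array of per-classifier
    class posteriors. The sum rule averages them and picks the argmax."""
    return int(np.argmax(np.mean(posteriors, axis=0)))

def product_rule(posteriors):
    """The product rule multiplies posteriors; it is sensitive to any
    single classifier assigning a near-zero probability."""
    return int(np.argmax(np.prod(posteriors, axis=0)))

# Two classifiers over the classes {genuine, impostor}.
P = np.array([[0.6, 0.4],
              [0.7, 0.3]])
print(sum_rule(P), product_rule(P))  # both pick class 0 (genuine)
```

The paper's finding that the sum rule is the most robust combiner follows from this sensitivity analysis: averaging dampens individual estimation errors, while the product amplifies any one classifier's mistake.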

5,670 citations


"Intelligent Biometric Information F..." refers background or methods or result in this paper

  • ...In [ 1 ], Kittler proposed a set of matching score fusion rules to combine the classifier which includes majority voting, sum rule, and product rule....


  • ...Here the parameter C is replaced by another parameter ν ∈ [0, 1], which is the lower bound on the fraction of support vectors and the upper bound on the fraction of margin errors....


  • ...Many researchers claim that when two or more biometric information is combined, recognition accuracy increases [ 1 ] - [23]....


  • ...It has been suggested that the fusion of match scores of two or more classifiers gives better performance over a single classifier [ 1 , 2]. In general, match score fusion is performed using sum rule, product rule or other statistical rules....


  • ...This plot also compares the results with min/max rule based expert fusion [ 1 ]....


Journal Article DOI
TL;DR: Two of the most critical requirements in support of producing reliable face-recognition systems are a large database of facial images and a testing procedure to evaluate systems.
Abstract: Two of the most critical requirements in support of producing reliable face-recognition systems are a large database of facial images and a testing procedure to evaluate systems. The Face Recognition Technology (FERET) program has addressed both issues through the FERET database of facial images and the establishment of the FERET tests. To date, 14,126 images from 1,199 individuals are included in the FERET database, which is divided into development and sequestered portions of the database. In September 1996, the FERET program administered the third in a series of FERET face-recognition tests. The primary objectives of the third test were to 1) assess the state of the art, 2) identify future areas of research, and 3) measure algorithm performance.

4,816 citations


"Intelligent Biometric Information F..." refers methods in this paper

  • ...To study the performance of various levels of fusion, experiments are performed using two face databases: • Frontal face images from the colored FERET database [ 43 ]....


Journal Article DOI
TL;DR: A new class of support vector algorithms for regression and classification that eliminates one of the other free parameters of the algorithm: the accuracy parameter in the regression case, and the regularization constant C in the classification case.
Abstract: We propose a new class of support vector algorithms for regression and classification. In these algorithms, a parameter ν lets one effectively control the number of support vectors. While this can be useful in its own right, the parameterization has the additional benefit of enabling us to eliminate one of the other free parameters of the algorithm: the accuracy parameter epsilon in the regression case, and the regularization constant C in the classification case. We describe the algorithms, give some theoretical results concerning the meaning and the choice of ν, and report experimental results.
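
The role of ν can be sketched with an off-the-shelf ν-SVM implementation such as scikit-learn's `NuSVC`; the toy data and the chosen ν are illustrative. ν lower-bounds the fraction of support vectors and upper-bounds the fraction of margin errors:

```python
import numpy as np
from sklearn.svm import NuSVC

# Two well-separated classes in 2-D.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.repeat([0, 1], 50)

# nu replaces the regularization constant C of the standard soft-margin
# SVM and directly controls the number of support vectors.
clf = NuSVC(nu=0.2, kernel="linear").fit(X, y)
frac_sv = clf.support_vectors_.shape[0] / len(X)
print(frac_sv)  # at least nu of the training points are support vectors
```

This is why ν is often easier to tune than C: it has a direct interpretation as a fraction of the training set rather than an abstract penalty weight.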

2,737 citations


"Intelligent Biometric Information F..." refers methods in this paper

  • ...One alternative and intuitive approach to solve this problem is the use of ν-SVM, a soft-margin variant of the optimal hyperplane that uses the ν-parameterization [ 35 ] and [36]....
