Integrating image quality in 2nu-SVM biometric match score fusion.
01 Oct 2007 - International Journal of Neural Systems (World Scientific Publishing Company) - Vol. 17, Iss. 5, pp. 343-351
TL;DR: An intelligent 2ν-support vector machine based match score fusion algorithm that improves face and iris recognition performance by integrating image quality, assessed by applying the redundant discrete wavelet transform.
Abstract: This paper proposes an intelligent 2ν-support vector machine based match score fusion algorithm to improve the performance of face and iris recognition by integrating the quality of images. The proposed algorithm applies the redundant discrete wavelet transform to evaluate the underlying linear and non-linear features present in the image. A composite quality score is computed to determine the extent of smoothness, sharpness, noise, and other pertinent features present in each subband of the image. The match score and the corresponding quality score of an image are fused using a 2ν-support vector machine to improve the verification performance. The proposed algorithm is experimentally validated using the FERET face database and the CASIA iris database. The verification performance and statistical evaluation show that the proposed algorithm outperforms existing fusion algorithms.
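As a rough, hedged illustration of the fusion step, the sketch below trains an SVM on quality-augmented match scores. It is not the authors' implementation: scikit-learn exposes the single-parameter ν-SVM (NuSVC) rather than the 2ν variant with per-class error control, so NuSVC with balanced class weights stands in for it, and all match and quality scores are synthetic placeholders.

```python
# Minimal sketch: fuse match scores and image-quality scores with a nu-SVM.
# NuSVC approximates the paper's 2nu-SVM (which allows separate nu values
# per class); all data below are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import NuSVC

rng = np.random.default_rng(0)
n = 1000
labels = rng.integers(0, 2, n)                    # 1 = genuine pair, 0 = impostor
face_score = rng.normal(0.2 + 0.6 * labels, 0.1)  # matcher outputs, roughly in [0, 1]
iris_score = rng.normal(0.1 + 0.7 * labels, 0.1)
face_quality = rng.uniform(0.3, 1.0, n)           # composite quality scores (assumed given)
iris_quality = rng.uniform(0.3, 1.0, n)

# Each fusion input carries the match scores *and* their quality scores,
# letting the SVM learn to discount evidence from low-quality images.
X = np.column_stack([face_score, iris_score, face_quality, iris_quality])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)

fusion = NuSVC(nu=0.1, kernel="rbf", gamma="scale", class_weight="balanced")
fusion.fit(X_tr, y_tr)
print(f"verification accuracy: {fusion.score(X_te, y_te):.3f}")
```

In the paper, the quality score itself comes from an RDWT subband analysis of smoothness, sharpness, and noise; here it is simply assumed to be available.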
Citations
[...]
TL;DR: The PNN model presented in this paper complements the recurrent neural network model developed by the authors previously, where good results were reported for predicting earthquakes with magnitude greater than 6.0.
Abstract: A probabilistic neural network (PNN) is presented for predicting the magnitude of the largest earthquake in a pre-defined future time period in a seismic region using eight mathematically computed parameters known as seismicity indicators. The indicators considered are the time elapsed during a particular number (n) of significant seismic events before the month in question, the slope of the Gutenberg-Richter inverse power law curve for the n events, the mean square deviation about the regression line based on the Gutenberg-Richter inverse power law for the n events, the average magnitude of the last n events, the difference between the observed maximum magnitude among the last n events and that expected through the Gutenberg-Richter relationship known as the magnitude deficit, the rate of square root of seismic energy released during the n events, the mean time or period between characteristic events, and the coefficient of variation of the mean time. Prediction accuracies of the model are evaluated using three different statistical measures: the probability of detection, the false alarm ratio, and the true skill score or R score. The PNN model is trained and tested using data for the Southern California region. The model yields good prediction accuracies for earthquakes of magnitude between 4.5 and 6.0. The PNN model presented in this paper complements the recurrent neural network model developed by the authors previously, where good results were reported for predicting earthquakes with magnitude greater than 6.0.
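The decision rule underlying a PNN is a Gaussian Parzen-window classifier: each class scores a test pattern by the average kernel response of that class's training patterns. The sketch below illustrates that rule with eight synthetic inputs standing in for the seismicity indicators; the smoothing width, the data, and the two magnitude bins are illustrative assumptions, not the paper's configuration.

```python
# Compact sketch of the Gaussian Parzen-window rule behind a PNN: each
# class's score is the average RBF kernel response between the test pattern
# and that class's training patterns. Eight features mimic the seismicity
# indicators; the data, bins, and sigma are illustrative.
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    """Classify each test row by the class with the largest mean Gaussian kernel."""
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        d2 = np.sum((X_train - x) ** 2, axis=1)             # squared distances
        k = np.exp(-d2 / (2.0 * sigma ** 2))                # pattern-layer outputs
        scores = [k[y_train == c].mean() for c in classes]  # summation layer
        preds.append(classes[int(np.argmax(scores))])       # decision layer
    return np.array(preds)

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 8))                    # 8 seismicity indicators (synthetic)
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)    # 2 magnitude bins (illustrative)
pred = pnn_predict(X[:200], y[:200], X[200:])
print(f"hit rate: {(pred == y[200:]).mean():.2f}")
```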
269 citations
[...]
TL;DR: Presents the methodology employed to extract HOS features from normal, interictal, and epileptic EEG segments and to use the significant features in classifiers to detect these three classes, establishing the possibility of effective EEG segment classification with the proposed technique.
Abstract: The unpredictability of the occurrence of epileptic seizures makes it difficult to detect and treat this condition effectively. An automatic system that characterizes epileptic activities in EEG signals would allow patients or the people near them to take appropriate precautions, would allow clinicians to better manage the condition, and could provide more insight into these phenomena thereby revealing important clinical information. Various methods have been proposed to detect epileptic activity in EEG recordings. Because of the nonlinear and dynamic nature of EEG signals, the use of nonlinear Higher Order Spectra (HOS) features is a seemingly promising approach. This paper presents the methodology employed to extract HOS features (specifically, cumulants) from normal, interictal, and epileptic EEG segments and to use significant features in classifiers for the detection of these three classes. In this work, 300 sets of EEG data belonging to the three classes were used for feature extraction and classifier development and evaluation. The results show that the HOS based measures have unique ranges for the different classes with high confidence level (p-value < 0.0001). On evaluating several classifiers with the significant features, it was observed that the Support Vector Machine (SVM) presented a high detection accuracy of 98.5% thereby establishing the possibility of effective EEG segment classification using the proposed technique.
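As a hedged sketch of the HOS feature idea, the snippet below computes third-order cumulant values C3(t1, t2) = E[x(n)·x(n+t1)·x(n+t2)] of a zero-mean segment over a small lag grid and feeds them to an RBF SVM. The synthetic two-class "EEG" signals, the lag grid, and the classifier settings are illustrative assumptions; the paper works with three classes, real recordings, and feature selection.

```python
# Hedged sketch of HOS-based classification: third-order cumulants as
# features for an SVM. The synthetic signals below are placeholders; the
# skewed "seizure" class has non-Gaussian structure that third-order
# statistics detect and second-order statistics miss.
import numpy as np
from sklearn.svm import SVC

def third_order_cumulants(x, max_lag=4):
    """C3(t1, t2) = E[x(n) x(n+t1) x(n+t2)] on the zero-mean signal, over a small lag grid."""
    x = x - x.mean()
    n = len(x) - max_lag
    feats = []
    for t1 in range(max_lag + 1):
        for t2 in range(t1, max_lag + 1):
            feats.append(np.mean(x[:n] * x[t1:t1 + n] * x[t2:t2 + n]))
    return np.array(feats)

rng = np.random.default_rng(2)
def make_segment(seizure):
    # Illustrative stand-in: "seizure" segments carry skewed (non-Gaussian) activity.
    noise = rng.normal(size=512)
    return (noise + (rng.gamma(2.0, 1.0, 512) - 2.0)) if seizure else noise

y = np.array([0, 1] * 100)
X = np.array([third_order_cumulants(make_segment(label)) for label in y])
clf = SVC(kernel="rbf", gamma="scale").fit(X[:150], y[:150])
print(f"accuracy: {clf.score(X[150:], y[150:]):.2f}")
```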
164 citations
[...]
TL;DR: A comprehensive review of techniques that incorporate ancillary information in the biometric recognition pipeline, giving readers an overview of the role of information fusion in biometrics.
Abstract: The performance of a biometric system that relies on a single biometric modality (e.g., fingerprints only) is often stymied by various factors such as poor data quality or limited scalability. Multibiometric systems utilize the principle of fusion to combine information from multiple sources in order to improve recognition accuracy whilst addressing some of the limitations of single-biometric systems. The past two decades have witnessed the development of a large number of biometric fusion schemes. This paper presents an overview of biometric fusion with specific focus on three questions: what to fuse, when to fuse, and how to fuse. A comprehensive review of techniques incorporating ancillary information in the biometric recognition pipeline is also presented. In this regard, the following topics are discussed: (i) incorporating data quality in the biometric recognition pipeline; (ii) combining soft biometric attributes with primary biometric identifiers; (iii) utilizing contextual information to improve biometric recognition accuracy; and (iv) performing continuous authentication using ancillary information. In addition, the use of information fusion principles for presentation attack detection and multibiometric cryptosystems is also discussed. Finally, some of the research challenges in biometric fusion are enumerated. The purpose of this article is to provide readers a comprehensive overview of the role of information fusion in biometrics.
72 citations
[...]
TL;DR: The proposed plausible and paradoxical reasoning approach effectively mitigates conflicting decisions obtained from classifiers, especially when the evidence is imprecise due to poor image quality or limited fingerprint features.
Abstract: Existing algorithms that fuse level-2 and level-3 fingerprint match scores perform well when the number of features is adequate and the quality of the images is acceptable. In practice, fingerprints collected in unconstrained environments guarantee neither the requisite image quality nor the minimum number of features required. This paper presents a novel fusion algorithm that combines fingerprint match scores to provide high accuracy under non-ideal conditions. The match scores obtained from level-2 and level-3 classifiers are first augmented with a quality score that is quantitatively determined by applying the redundant discrete wavelet transform to a fingerprint image. We next apply the generalized belief functions of Dezert-Smarandache theory to effectively fuse the quality-augmented match scores obtained from level-2 and level-3 classifiers. Unlike statistical and learning based fusion techniques, the proposed plausible and paradoxical reasoning approach effectively mitigates conflicting decisions obtained from classifiers, especially when the evidence is imprecise due to poor image quality or limited fingerprint features. The proposed quality-augmented fusion algorithm is validated using a comprehensive database that comprises rolled and partial fingerprint images of varying quality with an arbitrary number of features. The performance is compared with existing fusion approaches across several challenging, realistic scenarios.
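To make the combination step concrete, the sketch below applies the classic DSm rule over the two-element frame {genuine (G), impostor (I)}, whose hyper-power set adds the paradoxical hypothesis G∩I and the ignorance G∪I. Assigning conflicting mass to G∩I rather than renormalizing it away is what distinguishes DSmT from Dempster-Shafer combination. The quality-discounted mass assignment and the numbers are illustrative assumptions, not the paper's model.

```python
# Hedged sketch of the classic DSm rule of combination over {genuine, impostor}.
# The paradoxical mass m(G & I) absorbs conflict between two quality-weighted
# score sources; masses and discounting below are illustrative.
from itertools import product

# Hyper-power set of a 2-element frame: G, I, G&I (paradox), G|I (ignorance).
INTERSECT = {
    ("G", "G"): "G", ("I", "I"): "I",
    ("G", "I"): "G&I", ("G", "G&I"): "G&I", ("I", "G&I"): "G&I",
    ("G&I", "G&I"): "G&I", ("G&I", "G|I"): "G&I",
    ("G", "G|I"): "G", ("I", "G|I"): "I", ("G|I", "G|I"): "G|I",
}
def intersect(a, b):
    return INTERSECT.get((a, b)) or INTERSECT[(b, a)]

def dsm_combine(m1, m2):
    """Classic DSm rule: m(C) = sum of m1(A)*m2(B) over all A, B with A intersect B = C."""
    out = {k: 0.0 for k in ("G", "I", "G&I", "G|I")}
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        out[intersect(a, b)] += wa * wb
    return out

def mass_from_score(score, quality):
    """Quality-discounted mass: low quality shifts belief to ignorance (G|I)."""
    return {"G": quality * score, "I": quality * (1 - score), "G&I": 0.0, "G|I": 1 - quality}

m_level2 = mass_from_score(score=0.85, quality=0.9)  # minutiae-based matcher
m_level3 = mass_from_score(score=0.30, quality=0.4)  # pore-based matcher, poor image
fused = dsm_combine(m_level2, m_level3)
print({k: round(v, 3) for k, v in fused.items()})    # conflict lands on "G&I"
```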
50 citations
Cites background or methods from "Integrating image quality in 2nu-SV..."
[...]
[...]
[...]
Posted Content
[...]
TL;DR: The purpose of this article is to provide readers a comprehensive overview of the role of information fusion in biometrics, with specific focus on three questions: what to fuse, when to fuse, and how to fuse.
Abstract: The performance of a biometric system that relies on a single biometric modality (e.g., fingerprints only) is often stymied by various factors such as poor data quality or limited scalability. Multibiometric systems utilize the principle of fusion to combine information from multiple sources in order to improve recognition accuracy whilst addressing some of the limitations of single-biometric systems. The past two decades have witnessed the development of a large number of biometric fusion schemes. This paper presents an overview of biometric fusion with specific focus on three questions: what to fuse, when to fuse, and how to fuse. A comprehensive review of techniques incorporating ancillary information in the biometric recognition pipeline is also presented. In this regard, the following topics are discussed: (i) incorporating data quality in the biometric recognition pipeline; (ii) combining soft biometric attributes with primary biometric identifiers; (iii) utilizing contextual information to improve biometric recognition accuracy; and (iv) performing continuous authentication using ancillary information. In addition, the use of information fusion principles for presentation attack detection and multibiometric cryptosystems is also discussed. Finally, some of the research challenges in biometric fusion are enumerated. The purpose of this article is to provide readers a comprehensive overview of the role of information fusion in biometrics.
47 citations
References
Book
[...]
TL;DR: This monograph covers the what, why, and how of wavelets: the continuous and discrete wavelet transforms, frames, time-frequency density, and orthonormal bases of compactly supported wavelets constructed through multiresolution analysis.
Abstract: Contents: Introduction; Preliminaries and notation; The what, why, and how of wavelets; The continuous wavelet transform; Discrete wavelet transforms: frames; Time-frequency density and orthonormal bases; Orthonormal bases of wavelets and multiresolution analysis; Orthonormal bases of compactly supported wavelets; More about the regularity of compactly supported wavelets; Symmetry for compactly supported wavelet bases; Characterization of functional spaces by means of wavelets; Generalizations and tricks for orthonormal wavelet bases; References; Indexes.
16,065 citations
[...]
TL;DR: Discusses the regularity of compactly supported wavelets and the symmetry of wavelet bases, focusing on orthonormal wavelet bases rather than the continuous wavelet transform.
Abstract: Contents: Introduction; Preliminaries and notation; The what, why, and how of wavelets; The continuous wavelet transform; Discrete wavelet transforms: frames; Time-frequency density and orthonormal bases; Orthonormal bases of wavelets and multiresolution analysis; Orthonormal bases of compactly supported wavelets; More about the regularity of compactly supported wavelets; Symmetry for compactly supported wavelet bases; Characterization of functional spaces by means of wavelets; Generalizations and tricks for orthonormal wavelet bases; References; Indexes.
14,139 citations
[...]
TL;DR: A common theoretical framework for combining classifiers which use distinct pattern representations is developed and it is shown that many existing schemes can be considered as special cases of compound classification where all the pattern representations are used jointly to make a decision.
Abstract: We develop a common theoretical framework for combining classifiers that use distinct pattern representations and show that many existing schemes can be considered special cases of compound classification in which all the pattern representations are used jointly to make a decision. An experimental comparison of various classifier combination schemes demonstrates that the combination rule developed under the most restrictive assumptions, the sum rule, outperforms the other combination schemes. A sensitivity analysis of the various schemes to estimation errors shows that this finding can be justified theoretically.
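The fixed rules compared in that framework are easy to reproduce on toy posterior estimates; the sketch below (with illustrative numbers) shows the veto effect that makes the product, max, and min rules fragile, while the sum and median rules absorb a single bad estimate.

```python
# Small sketch contrasting fixed classifier-combination rules (sum, product,
# max, min, median) on noisy posterior estimates for one test pattern.
# The numbers are illustrative: two classifiers favor class 0, one is badly wrong.
import numpy as np

# Rows: classifiers, columns: classes. Each row is one classifier's
# estimate of P(class | x) for the same test pattern.
posteriors = np.array([
    [0.90, 0.10],
    [0.80, 0.20],
    [0.02, 0.98],   # one badly wrong estimate
])

rules = {
    "sum":     posteriors.sum(axis=0),
    "product": posteriors.prod(axis=0),
    "max":     posteriors.max(axis=0),
    "min":     posteriors.min(axis=0),
    "median":  np.median(posteriors, axis=0),
}
for name, scores in rules.items():
    print(f"{name:7s} -> class {int(np.argmax(scores))}  scores={np.round(scores, 3)}")
# sum and median still pick class 0; product, max, and min are flipped by the
# single near-zero value (the veto effect), matching the paper's sensitivity
# analysis of why the sum rule is the most resilient to estimation errors.
```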
5,535 citations
[...]
TL;DR: Two of the most critical requirements in support of producing reliable face-recognition systems are a large database of facial images and a testing procedure to evaluate systems.
Abstract: Two of the most critical requirements in support of producing reliable face-recognition systems are a large database of facial images and a testing procedure to evaluate systems. The Face Recognition Technology (FERET) program has addressed both issues through the FERET database of facial images and the establishment of the FERET tests. To date, 14,126 images from 1,199 individuals are included in the FERET database, which is divided into development and sequestered portions of the database. In September 1996, the FERET program administered the third in a series of FERET face-recognition tests. The primary objectives of the third test were to 1) assess the state of the art, 2) identify future areas of research, and 3) measure algorithm performance.
4,690 citations
[...]
TL;DR: A brief overview of the field of biometrics is given and some of its advantages, disadvantages, strengths, limitations, and related privacy concerns are summarized.
Abstract: A wide variety of systems requires reliable personal recognition schemes to either confirm or determine the identity of an individual requesting their services. The purpose of such schemes is to ensure that the rendered services are accessed only by a legitimate user and no one else. Examples of such applications include secure access to buildings, computer systems, laptops, cellular phones, and ATMs. In the absence of robust personal recognition schemes, these systems are vulnerable to the wiles of an impostor. Biometric recognition, or, simply, biometrics, refers to the automatic recognition of individuals based on their physiological and/or behavioral characteristics. By using biometrics, it is possible to confirm or establish an individual's identity based on "who she is", rather than by "what she possesses" (e.g., an ID card) or "what she remembers" (e.g., a password). We give a brief overview of the field of biometrics and summarize some of its advantages, disadvantages, strengths, limitations, and related privacy concerns.
4,384 citations
"Integrating image quality in 2nu-SV..." refers background in this paper
[...]
Related Papers (5)
[...]