Dissertation

An Integrated Soft Computing Approach to a Multi Biometric Security Model

01 Jan 2015
TL;DR: In this thesis, a universal soft computing framework is proposed for adaptively fusing biometric and biographical information by making a real-time decision, after consideration of each individual identifier, as to whether further fusion is required.
Abstract: The abstract of the thesis consists of three sections, videlicet, Motivation, Chapter Organization, and Salient Contributions. The complete abstract is included with the thesis; the final section, Salient Contributions, is reproduced below. The research presents the following salient contributions:

i. A novel technique has been developed for comparing biographical information by combining the average impact of the Levenshtein, Damerau-Levenshtein, and editor distances. The impact is calculated as the ratio of the edit distance to the maximum possible edit distance between two strings of the same lengths as the given pair of strings. This impact lies in the range [0, 1] and can easily be converted to a similarity (matching) score by subtracting the impact from unity (a code sketch of this measure follows the contribution list).

ii. A universal soft computing framework is proposed for adaptively fusing biometric and biographical information by making real-time decisions, after consideration of each individual identifier, on whether computation of matching scores and subsequent fusion of additional identifiers, including biographical information, is required. The proposed framework not only improves the accuracy of the system by fusing less reliable information (e.g. biographical information) only for instances where such fusion is required, but also improves its efficiency by computing matching scores for the various available identifiers only when this computation is considered necessary.

iii. A scientific method for comparing the efficiency of fusion strategies through a predicted effort-to-error trade-off curve.
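Contribution (i) defines the impact of an edit distance as the distance divided by the maximum distance possible for strings of the given lengths, and the similarity score as one minus the average impact. The sketch below is a minimal illustration of that idea, implementing only the Levenshtein and restricted Damerau-Levenshtein distances; the thesis's third measure, the "editor distance", is not defined in the abstract and is therefore omitted, and normalising by max(len(a), len(b)) is an assumption about what "maximum possible edit distance" means.

    def levenshtein(a, b):
        # classic dynamic-programming edit distance (insert, delete, substitute)
        m, n = len(a), len(b)
        d = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            d[i][0] = i
        for j in range(n + 1):
            d[0][j] = j
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = 0 if a[i - 1] == b[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,        # deletion
                              d[i][j - 1] + 1,        # insertion
                              d[i - 1][j - 1] + cost) # substitution
        return d[m][n]

    def damerau_levenshtein(a, b):
        # restricted (optimal string alignment) variant: also allows adjacent transpositions
        m, n = len(a), len(b)
        d = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            d[i][0] = i
        for j in range(n + 1):
            d[0][j] = j
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = 0 if a[i - 1] == b[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,
                              d[i][j - 1] + 1,
                              d[i - 1][j - 1] + cost)
                if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                    d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
        return d[m][n]

    def biographical_similarity(a, b):
        # impact = edit distance / maximum possible distance for strings of these lengths;
        # similarity = 1 - average impact, as described in contribution (i)
        max_dist = max(len(a), len(b))
        if max_dist == 0:
            return 1.0
        impacts = [levenshtein(a, b) / max_dist,
                   damerau_levenshtein(a, b) / max_dist]
        return 1.0 - sum(impacts) / len(impacts)

    print(biographical_similarity("Jonathan Smith", "Jonathon Smyth"))  # ~0.857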
Citations
Journal Article
TL;DR: This paper proposes an approach for standardizing facial image quality and develops facial-symmetry-based methods for assessing it by measuring facial asymmetries caused by non-frontal lighting and improper facial pose.
Abstract: The performance of biometric systems depends on the quality of the acquired biometric samples. Poor sample quality is a major cause of matching errors in biometric systems and may be the main weakness of some implementations. This paper proposes an approach for standardizing facial image quality and develops facial-symmetry-based methods for assessing it by measuring facial asymmetries caused by non-frontal lighting and improper facial pose. Experimental results are provided to illustrate the concepts, definitions, and effectiveness of the methods.
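The abstract above assesses quality through facial asymmetry caused by non-frontal lighting and pose. The following is a rough sketch of one plausible asymmetry measure (mirroring the right half of an aligned grayscale face onto the left and comparing intensities and horizontal gradients); the measures actually used in the paper are likely defined differently, so treat this only as an illustration of the idea.

    import numpy as np

    def asymmetry_scores(face):
        # face: 2-D grayscale array, assumed roughly centred and eye-aligned
        h, w = face.shape
        half = w // 2
        left = face[:, :half].astype(float)
        right = np.fliplr(face[:, w - half:]).astype(float)
        # intensity asymmetry: mainly sensitive to non-frontal lighting
        intensity_asym = np.mean(np.abs(left - right)) / 255.0
        # horizontal-gradient asymmetry: mainly sensitive to non-frontal pose
        gl = np.abs(np.diff(left, axis=1))
        gr = np.abs(np.diff(right, axis=1))
        edge_asym = np.mean(np.abs(gl - gr)) / 255.0
        return intensity_asym, edge_asym

    rng = np.random.default_rng(0)
    face = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)  # stand-in for a real face crop
    print(asymmetry_scores(face))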

28 citations


References
Journal ArticleDOI
TL;DR: High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated and the performance of the support- vector network is compared to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
Abstract: The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimensional feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensure high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
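As a concrete illustration of a soft-margin support-vector machine with a polynomial input transformation (applied implicitly through the kernel trick), the sketch below uses scikit-learn's SVC on the small digits dataset; it is not the authors' original formulation or OCR benchmark.

    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = load_digits(return_X_y=True)  # small OCR-style benchmark
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)

    # C controls the soft margin (tolerance for non-separable data);
    # degree sets the polynomial transformation realised via the kernel
    clf = SVC(kernel="poly", degree=3, C=1.0)
    clf.fit(X_train, y_train)
    print("test accuracy:", clf.score(X_test, y_test))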

37,861 citations

Journal ArticleDOI
Simon Haykin1
TL;DR: Following the discussion of interference temperature as a new metric for the quantification and management of interference, the paper addresses three fundamental cognitive tasks: radio-scene analysis, channel-state estimation and predictive modeling, and the emergent behavior of cognitive radio.
Abstract: Cognitive radio is viewed as a novel approach for improving the utilization of a precious natural resource: the radio electromagnetic spectrum. The cognitive radio, built on a software-defined radio, is defined as an intelligent wireless communication system that is aware of its environment and uses the methodology of understanding-by-building to learn from the environment and adapt to statistical variations in the input stimuli, with two primary objectives in mind: (1) highly reliable communication whenever and wherever needed; (2) efficient utilization of the radio spectrum. Following the discussion of interference temperature as a new metric for the quantification and management of interference, the paper addresses three fundamental cognitive tasks: 1) radio-scene analysis; 2) channel-state estimation and predictive modeling; 3) transmit-power control and dynamic spectrum management. This work also discusses the emergent behavior of cognitive radio.
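The interference-temperature metric mentioned above is commonly defined as T_I(f_c, B) = P_I(f_c, B) / (k B), where P_I is the average interference power in a band of width B centred on f_c and k is Boltzmann's constant. A minimal sketch of that formula follows; the power level used is an illustrative assumption, not a measurement from the paper.

    BOLTZMANN = 1.380649e-23  # J/K

    def interference_temperature(interference_power_watts, bandwidth_hz):
        # T_I = P_I / (k * B), in kelvin
        return interference_power_watts / (BOLTZMANN * bandwidth_hz)

    # e.g. an assumed -100 dBm of interference measured over a 1 MHz band
    p_watts = 10 ** ((-100 - 30) / 10)  # dBm -> W
    print(interference_temperature(p_watts, 1e6), "K")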

12,172 citations

Proceedings Article
Ron Kohavi1
20 Aug 1995
TL;DR: The results indicate that for real-world datasets similar to the authors', the best method to use for model selection is ten-fold stratified cross-validation, even if computation power allows using more folds.
Abstract: We review accuracy estimation methods and compare the two most common methods, cross-validation and bootstrap. Recent experimental results on artificial data and theoretical results in restricted settings have shown that for selecting a good classifier from a set of classifiers (model selection), ten-fold cross-validation may be better than the more expensive leave-one-out cross-validation. We report on a large-scale experiment (over half a million runs of C4.5 and a Naive-Bayes algorithm) to estimate the effects of different parameters on these algorithms on real-world datasets. For cross-validation we vary the number of folds and whether the folds are stratified or not; for bootstrap, we vary the number of bootstrap samples. Our results indicate that for real-world datasets similar to ours, the best method to use for model selection is ten-fold stratified cross-validation, even if computation power allows using more folds.
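A short sketch of the ten-fold stratified cross-validation recommended in the conclusion, using scikit-learn; the original study used C4.5 and Naive Bayes on a different collection of datasets, so the classifier and dataset here are only stand-ins.

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.naive_bayes import GaussianNB

    X, y = load_breast_cancer(return_X_y=True)
    # stratification keeps the class proportions roughly equal in every fold
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = cross_val_score(GaussianNB(), X, y, cv=cv)
    print("mean accuracy: %.3f (+/- %.3f)" % (scores.mean(), scores.std()))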

11,185 citations

Journal ArticleDOI
TL;DR: This article provides an overview of progress and represents the shared views of four research groups that have had recent successes in using DNNs for acoustic modeling in speech recognition.
Abstract: Most current speech recognition systems use hidden Markov models (HMMs) to deal with the temporal variability of speech and Gaussian mixture models (GMMs) to determine how well each state of each HMM fits a frame or a short window of frames of coefficients that represents the acoustic input. An alternative way to evaluate the fit is to use a feed-forward neural network that takes several frames of coefficients as input and produces posterior probabilities over HMM states as output. Deep neural networks (DNNs) that have many hidden layers and are trained using new methods have been shown to outperform GMMs on a variety of speech recognition benchmarks, sometimes by a large margin. This article provides an overview of this progress and represents the shared views of four research groups that have had recent successes in using DNNs for acoustic modeling in speech recognition.
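A minimal sketch of the hybrid DNN-HMM idea described above: a feed-forward network that takes a spliced window of acoustic feature frames as input and outputs posterior probabilities over HMM states. The layer sizes, the 11-frame window, and the 40 coefficients per frame are illustrative assumptions rather than values from the article; PyTorch is used only for convenience.

    import torch
    import torch.nn as nn

    n_frames, n_coeffs, n_hmm_states = 11, 40, 2000   # context frames, features per frame, tied HMM states

    model = nn.Sequential(
        nn.Linear(n_frames * n_coeffs, 1024), nn.ReLU(),  # several hidden layers ("deep")
        nn.Linear(1024, 1024), nn.ReLU(),
        nn.Linear(1024, n_hmm_states),                    # one output per HMM state
    )

    frames = torch.randn(8, n_frames * n_coeffs)          # a batch of spliced feature windows
    posteriors = torch.softmax(model(frames), dim=1)      # P(state | acoustic input)
    print(posteriors.shape, posteriors.sum(dim=1))        # each row sums to 1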

9,091 citations