Journal ArticleDOI

Incremental granular relevance vector machine

TL;DR: The proposed iGRVM, which incorporates incremental and granular learning in RVM, can be a good alternative for biometric score classification with faster testing time.
Abstract: This paper focuses on extending the capabilities of the relevance vector machine (RVM), a probabilistic, sparse, and linearly parameterized classifier. It has been shown that the relevance vector machine and the support vector machine have similar generalization performance, but the RVM requires significantly fewer relevance vectors. However, the RVM has certain limitations that restrict its application to several pattern recognition problems, including biometrics: (1) a slow training process, (2) difficulty in training with large numbers of samples, and (3) possible unsuitability for handling large class imbalance. To address these limitations, we propose iGRVM, which incorporates incremental and granular learning in RVM. The proposed classifier is evaluated in the context of multimodal biometric score classification using the NIST BSSR1, CASIA-Iris-Distance V4, and Biosecure DS2 databases. The experimental analysis illustrates that the proposed classifier can be a good alternative for biometric score classification with faster testing time.
Highlights:
  • The proposed iGRVM incorporates incremental and granular learning in RVM.
  • Experiments are performed on the NIST BSSR1, CASIA-Iris-Distance V4, and Biosecure DS2 databases.
  • Results illustrate that iGRVM can be a good alternative for biometric score classification.
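To make the incremental and granular ideas concrete, the sketch below partitions the majority (impostor) scores into granules, trains one probabilistic base learner per granule, and adds new base learners as fresh scores arrive. It is a minimal interpretation of the abstract, not the authors' iGRVM: scikit-learn's LogisticRegression stands in for the RVM, and all function names are hypothetical.

```python
# Conceptual sketch of granular + incremental learning for imbalanced score data.
# Assumption: LogisticRegression is a placeholder for the RVM base classifier;
# genuine_scores and impostor_scores are arrays of shape (n_samples, n_matchers).
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_granular(genuine_scores, impostor_scores, n_granules=4, seed=0):
    """Split the majority (impostor) class into granules, pair each granule with
    the full genuine set, and train one probabilistic base classifier per granule."""
    rng = np.random.default_rng(seed)
    granules = np.array_split(rng.permutation(impostor_scores), n_granules)
    models = []
    for granule in granules:
        X = np.vstack([genuine_scores, granule])
        y = np.concatenate([np.ones(len(genuine_scores)), np.zeros(len(granule))])
        models.append(LogisticRegression().fit(X, y))
    return models

def incremental_update(models, new_genuine, new_impostor):
    """Naive incremental step: train an extra base classifier on newly arrived
    scores only, instead of retraining on the whole accumulated training set."""
    X = np.vstack([new_genuine, new_impostor])
    y = np.concatenate([np.ones(len(new_genuine)), np.zeros(len(new_impostor))])
    models.append(LogisticRegression().fit(X, y))
    return models

def predict_genuine_probability(models, X):
    """Fuse the per-granule posterior probabilities by simple averaging."""
    return np.mean([m.predict_proba(X)[:, 1] for m in models], axis=0)
```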
Citations
Journal ArticleDOI
TL;DR: A secure multimodal biometric system is proposed that uses a convolutional neural network (CNN) and a Q-Gaussian multi support vector machine (QG-MSVM) with fusion at different levels, applying a cancelable biometric technique to protect the templates and increase the security of the proposed system.
Abstract: A multimodal biometric system integrates information from more than one biometric modality to improve the performance of each individual biometric system and to make the system robust to spoof attacks. In this paper, we propose a secure multimodal biometric system that uses a convolutional neural network (CNN) and a Q-Gaussian multi support vector machine (QG-MSVM) based on fusion at different levels. We developed two authentication systems with two different fusion algorithms: feature-level fusion and decision-level fusion. Feature extraction for the individual modalities is performed using a CNN. In this step, we selected the two CNN layers that achieved the highest accuracy, treating each layer as a separate feature descriptor. We then combined them using the proposed internal fusion to generate the biometric templates. In the next step, we applied a cancelable biometric technique to protect these templates and increase the security of the proposed system. In the authentication stage, we applied QG-MSVM as the classifier to improve performance. Our systems were tested on several publicly available ECG and fingerprint databases. The experimental results show that the proposed multimodal systems are more efficient, robust, and reliable than existing multimodal authentication systems.
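The feature-level pipeline described above (CNN features, internal fusion, a cancelable transform, then classification) can be sketched roughly as follows. This is an illustration under stated assumptions, not the paper's implementation: random arrays stand in for the CNN feature descriptors, a random projection stands in for the cancelable technique, and an RBF SVC stands in for the QG-MSVM.

```python
# Rough sketch of feature-level fusion with template protection.
# Assumptions: random arrays replace real CNN feature descriptors, a random
# projection replaces the paper's cancelable technique, and sklearn's SVC
# replaces the QG-MSVM classifier.
import numpy as np
from sklearn.svm import SVC

def fuse_features(descriptor_a, descriptor_b):
    """Internal fusion: concatenate two per-sample feature descriptors."""
    return np.hstack([descriptor_a, descriptor_b])

def cancelable_transform(features, key_seed, out_dim=128):
    """Key-dependent random projection; revoking the key yields new templates."""
    rng = np.random.default_rng(key_seed)
    projection = rng.standard_normal((features.shape[1], out_dim))
    return features @ projection

rng = np.random.default_rng(0)
ecg_features = rng.standard_normal((200, 64))          # placeholder ECG descriptors
fingerprint_features = rng.standard_normal((200, 64))  # placeholder fingerprint descriptors
labels = rng.integers(0, 2, 200)

templates = cancelable_transform(fuse_features(ecg_features, fingerprint_features), key_seed=42)
classifier = SVC(kernel="rbf", probability=True).fit(templates, labels)
```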

153 citations


Cites background from "Incremental granular relevance vect..."

  • ...Several multimodal biometric systems based on conventional traits such as fingerprint and iris have been developed during past decades [2]–[4]; there are only a few works about a multimodal biometric system that includes ECG....


Journal ArticleDOI
TL;DR: Several systems and architectures related to the combination of biometric systems, both unimodal and multimodal, are overviewed and classified according to a given taxonomy, and a case study on the experimental evaluation of methods for biometric fusion at score level is presented.
Abstract: The paper presents the methodologies of information fusion in the biometric field. The methodologies, architectures, and benchmarks related to unimodal and multimodal fusion of biometric systems are discussed. The state of the art in the combination of biometric matchers is provided. A case study for the experimental evaluation of methods for biometric fusion at score level is presented. Some possible directions for future research are suggested. Biometric identity verification refers to technologies used to measure human physical or behavioral characteristics, which offer a radical alternative to passports, ID cards, driving licenses, or PIN numbers in authentication. Since biometric systems present several limitations in terms of accuracy, universality, distinctiveness, and acceptability, methods for combining biometric matchers have attracted increasing attention from researchers, with the aim of improving the ability of systems to handle poor-quality and incomplete data, achieving scalability to manage huge databases of users, ensuring interoperability, and protecting user privacy against attacks. The combination of biometric systems, also known as "biometric fusion", can be classified as unimodal if it is based on a single biometric trait and multimodal if it uses several biometric traits for person authentication. The main goal of this study is to analyze different techniques of information fusion applied in the biometric field. This paper overviews several systems and architectures related to the combination of biometric systems, both unimodal and multimodal, classifying them according to a given taxonomy. Moreover, we deal with the problem of biometric system evaluation, discussing both performance indicators and existing benchmarks. As a case study on the combination of biometric matchers, we present an experimental comparison of many different approaches to the fusion of matchers at score level, carried out on three very different benchmark databases of scores. Our experiments show that the best performance is obtained by mixed approaches based on the fusion of scores. The source code of all the methods implemented for this research is freely available for future comparisons at www.dei.unipd.it/node/2357. After a detailed analysis of the pros and cons of several existing approaches for the combination of biometric matchers and an experimental evaluation of some of them, we draw our conclusions and suggest some future directions of research, hoping that this work can be a useful starting point for further research.
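As a quick illustration of score-level fusion, the sketch below contrasts a fixed rule (min-max normalization followed by the sum rule) with a trained fusion classifier over simulated matcher scores. The data layout and function names are assumptions for illustration, not code from the cited survey.

```python
# Two common score-level fusion strategies: a fixed rule (min-max + sum rule)
# and a trained classifier. Matcher scores are simulated placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

def min_max_normalize(scores):
    """Map each matcher's scores to [0, 1], column by column."""
    lo, hi = scores.min(axis=0), scores.max(axis=0)
    return (scores - lo) / (hi - lo)

def sum_rule(scores):
    """Fixed-rule fusion: average the normalized scores of all matchers."""
    return min_max_normalize(scores).mean(axis=1)

# Simulated scores from two matchers (rows = comparisons, columns = matchers).
rng = np.random.default_rng(0)
genuine = rng.normal(0.7, 0.1, (300, 2))
impostor = rng.normal(0.4, 0.1, (700, 2))
scores = np.vstack([genuine, impostor])
labels = np.concatenate([np.ones(300), np.zeros(700)])

fused_fixed = sum_rule(scores)  # rule-based fusion
# Classifier-based fusion (fit and scored on the same data purely for illustration).
fused_trained = LogisticRegression().fit(scores, labels).predict_proba(scores)[:, 1]
```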

123 citations


Additional excerpts

  • ...[39] 2015 Score level fusion Multimodal (general approach tested on fingerprint and iris) iGRVM is a novel classifier which incorporates incremental and granular learning in relevance vector machines....


Journal ArticleDOI
TL;DR: This paper combines ECG with a fingerprint liveness detection algorithm, proposes a stopping criterion that reduces the average waiting time for signal acquisition, and examines automatic template updating using ECG and fingerprint.
Abstract: Fingerprints have been extensively used for biometric recognition around the world. However, fingerprints are not secrets, and an adversary can synthesize a fake finger to spoof the biometric system. Most current fingerprint spoof detection methods are essentially binary classifiers trained on some real and fake samples. While they perform well at detecting fake samples created with the same methods used for training, their performance degrades when encountering fake samples created by a novel spoofing method. In this paper, we approach the problem from a different perspective by incorporating the electrocardiogram (ECG). Compared with conventional biometrics, stealing someone's ECG is far more difficult, if not impossible. Considering that ECG is a vital signal and motivated by its inherent liveness, we propose to combine it with a fingerprint liveness detection algorithm. The combination is natural, as both ECG and fingerprints can be captured from the fingertips. In the proposed framework, the ECG and fingerprint are combined not only for authentication but also for liveness detection. We also examine automatic template updating using ECG and fingerprint. In addition, we propose a stopping criterion that reduces the average waiting time for signal acquisition. We have performed extensive experiments on the LivDet2015 database, which is presently the latest available liveness detection database, and compared the proposed method with six liveness detection methods as well as the 12 participants of the LivDet2015 competition. The proposed system achieves a liveness detection equal error rate (EER) of 4.2% using only 5 s of ECG. By extending the recording time to 30 s, the liveness detection EER reduces to 2.6%, which is about 4 times better than the best of the six comparison methods and about 2 times better than the best results achieved by the participants of the LivDet2015 competition.
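The stopping criterion mentioned above can be pictured as a sequential test: keep accumulating short ECG segments and stop as soon as the match confidence crosses a threshold, rather than always waiting for the full recording. The sketch below is a hypothetical illustration of that idea only; the matcher and the confidence curve are placeholders, not the paper's method.

```python
# Hypothetical early-stopping rule for ECG acquisition. The matcher is a toy
# placeholder whose confidence grows with the amount of accumulated evidence.
import numpy as np

def match_confidence(segments):
    """Placeholder ECG matcher: confidence increases with the number of segments."""
    return 1.0 - np.exp(-0.3 * len(segments))

def acquire_with_stopping(stream, threshold=0.9, max_segments=30):
    """Consume 1 s ECG segments until the confidence threshold or time limit is hit."""
    segments = []
    for segment in stream:
        segments.append(segment)
        if match_confidence(segments) >= threshold or len(segments) >= max_segments:
            break
    return segments, match_confidence(segments)

# Example: a stream of thirty 1 s segments sampled at 500 Hz.
stream = (np.random.default_rng(0).standard_normal(500) for _ in range(30))
segments, confidence = acquire_with_stopping(stream)
print(f"stopped after {len(segments)} s with confidence {confidence:.2f}")
```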

48 citations


Cites background from "Incremental granular relevance vect..."

  • ...While multimodal biometric systems based on conventional traits such as face and fingerprint have been extensively investigated in [32] and [33], there exist only a few works about a multimodal biometric system that includes ECG....


Journal ArticleDOI
01 Dec 2016 - Energies
TL;DR: The experimental results are superior to those of several state-of-the-art benchmark methods in terms of root mean squared error (RMSE), mean absolute percent error (MAPE), and directional statistic (Dstat), showing that the proposed EEMD-APSO-RVM is promising for forecasting crude oil price.
Abstract: Crude oil, as one of the most important energy sources in the world, plays a crucial role in global economic events. Accurate prediction of the crude oil price is an interesting and challenging task for enterprises, governments, investors, and researchers. To cope with this issue, in this paper we propose a method integrating ensemble empirical mode decomposition (EEMD), adaptive particle swarm optimization (APSO), and the relevance vector machine (RVM), namely EEMD-APSO-RVM, to predict the crude oil price based on the “decomposition and ensemble” framework. Specifically, the raw time series of the crude oil price is first decomposed into several intrinsic mode functions (IMFs) and one residue by EEMD. Then, an RVM with combined kernels is applied to predict the target value for the residue and each IMF individually. To improve the prediction performance of each component, an extended particle swarm optimization (PSO) is utilized to simultaneously optimize the weights and parameters of the single kernels in the combined kernel of the RVM. Finally, simple addition is used to aggregate the predicted results of all components into an ensemble result as the final prediction. Extensive experiments were conducted on the West Texas Intermediate (WTI) crude oil spot price to illustrate and evaluate the proposed method. The experimental results are superior to those of several state-of-the-art benchmark methods in terms of root mean squared error (RMSE), mean absolute percent error (MAPE), and directional statistic (Dstat), showing that the proposed EEMD-APSO-RVM is promising for forecasting the crude oil price.
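The "decomposition and ensemble" framework can be sketched as: decompose the series into components, fit one regressor per component, and sum the component forecasts. In the sketch below, a trivial moving-average band split stands in for EEMD, kernel ridge regression stands in for the RVM with combined kernels, and the APSO tuning step is omitted; all names are illustrative.

```python
# Sketch of the "decomposition and ensemble" framework, with placeholders:
# a moving-average band split instead of EEMD and kernel ridge regression
# instead of the RVM; no parameter optimization is performed.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def toy_decompose(series):
    """Placeholder for EEMD: split the series into 'fast' detail components and a
    smooth residue whose sum exactly reconstructs the original series."""
    components, residue = [], series.astype(float)
    for window in (2, 8):  # progressively smoother moving averages
        smooth = np.convolve(residue, np.ones(window) / window, mode="same")
        components.append(residue - smooth)
        residue = smooth
    components.append(residue)
    return components

def fit_and_forecast(component, lag=5):
    """Fit a regressor on lagged values of one component and predict one step ahead."""
    X = np.array([component[i - lag:i] for i in range(lag, len(component))])
    y = component[lag:]
    model = KernelRidge(kernel="rbf").fit(X, y)
    return model.predict(component[-lag:].reshape(1, -1))[0]

# Toy price series; the final forecast is the sum of the per-component forecasts.
prices = np.cumsum(np.random.default_rng(0).standard_normal(500)) + 50.0
forecast = sum(fit_and_forecast(c) for c in toy_decompose(prices))
```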

48 citations

Journal ArticleDOI
TL;DR: A novel ensemble approach based on Switching is introduced, with a new technique to select the switched examples based on Nearest Enemy Distance; the resulting SwitchingNED is compared with five distinctive ensemble-based approaches using different combinations of sampling techniques.
Abstract: Imbalanced data classification has been deeply studied by machine learning practitioners over the years, and it is one of the most challenging problems in the field. In many real-life situations, the under-representation of one class relative to the rest commonly produces a tendency to ignore the minority class, which is normally the target of the problem. Consequently, many different techniques have been proposed. Among them, ensemble approaches have proved to be very reliable. New ways of generating ensembles have also been studied for standard classification. In particular, Class Switching, as a mechanism to produce perturbed training sets, has been shown to perform well in slightly imbalanced scenarios. In this paper, we analyze its potential to deal with highly imbalanced problems, addressing its major limitations. We introduce a novel ensemble approach based on Switching with a new technique to select the switched examples based on Nearest Enemy Distance. We compare the resulting SwitchingNED with five distinctive ensemble-based approaches, with different combinations of sampling techniques. With better performance, SwitchingNED establishes itself as one of the best approaches in the field.
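The nearest-enemy-distance selection can be illustrated as follows: for each majority example, compute the distance to its closest minority-class neighbour (its "nearest enemy") and preferentially switch the labels of majority examples that lie close to the minority class, producing one perturbed label vector per ensemble member. This is an interpretation of the abstract, not the authors' SwitchingNED implementation, and the names and weighting scheme are assumptions.

```python
# Interpretation of the Nearest Enemy Distance idea: majority examples closest to
# the minority class are the most likely to have their labels switched, and each
# perturbed label vector would train one ensemble member.
import numpy as np
from scipy.spatial.distance import cdist

def switch_by_nearest_enemy(X, y, rng, switch_fraction=0.1, minority_label=1):
    """Return a copy of y where some majority labels are flipped, favouring the
    majority examples whose nearest enemy (closest minority example) is nearby."""
    majority_idx = np.where(y != minority_label)[0]
    minority_idx = np.where(y == minority_label)[0]
    nearest_enemy = cdist(X[majority_idx], X[minority_idx]).min(axis=1)
    weights = 1.0 / (nearest_enemy + 1e-9)  # closer enemies -> higher switch chance
    n_switch = int(switch_fraction * len(majority_idx))
    to_switch = rng.choice(majority_idx, size=n_switch, replace=False,
                           p=weights / weights.sum())
    y_perturbed = y.copy()
    y_perturbed[to_switch] = minority_label
    return y_perturbed

# Build several perturbed label vectors; a base classifier would be trained on each.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (900, 2)), rng.normal(2, 1, (100, 2))])
y = np.concatenate([np.zeros(900, dtype=int), np.ones(100, dtype=int)])
perturbed = [switch_by_nearest_enemy(X, y, rng, switch_fraction=0.05) for _ in range(5)]
```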

34 citations

References
Book
Vladimir Vapnik
01 Jan 1995
TL;DR: Setting of the learning problem; consistency of learning processes; bounds on the rate of convergence of learning processes; controlling the generalization ability of learning processes; constructing learning algorithms; what is important in learning theory?
Abstract: Setting of the learning problem; consistency of learning processes; bounds on the rate of convergence of learning processes; controlling the generalization ability of learning processes; constructing learning algorithms; what is important in learning theory?
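For context on the "bounds on the rate of convergence" topic, the VC generalization bound associated with this book is commonly quoted in the classical form below (stated here from general knowledge, not taken from this listing), for a sample of size \ell, confidence 1 - \eta, and a hypothesis class of VC dimension h:

```latex
% Classical VC bound for binary classification with 0-1 loss: with probability
% at least 1 - \eta, simultaneously for all functions f in the class,
\[
  R(f) \;\le\; R_{\mathrm{emp}}(f)
  + \sqrt{\frac{h\left(\ln\tfrac{2\ell}{h} + 1\right) - \ln\tfrac{\eta}{4}}{\ell}}
\]
```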

40,147 citations

Journal ArticleDOI
TL;DR: In this article, a method of over-sampling the minority class by creating synthetic minority class examples is described and evaluated using the area under the Receiver Operating Characteristic curve (AUC) and the ROC convex hull strategy.
Abstract: An approach to the construction of classifiers from imbalanced datasets is described. A dataset is imbalanced if the classification categories are not approximately equally represented. Often real-world data sets are predominately composed of "normal" examples with only a small percentage of "abnormal" or "interesting" examples. It is also the case that the cost of misclassifying an abnormal (interesting) example as a normal example is often much higher than the cost of the reverse error. Under-sampling of the majority (normal) class has been proposed as a good means of increasing the sensitivity of a classifier to the minority class. This paper shows that a combination of our method of over-sampling the minority (abnormal) class and under-sampling the majority (normal) class can achieve better classifier performance (in ROC space) than only under-sampling the majority class. This paper also shows that a combination of our method of over-sampling the minority class and under-sampling the majority class can achieve better classifier performance (in ROC space) than varying the loss ratios in Ripper or class priors in Naive Bayes. Our method of over-sampling the minority class involves creating synthetic minority class examples. Experiments are performed using C4.5, Ripper and a Naive Bayes classifier. The method is evaluated using the area under the Receiver Operating Characteristic curve (AUC) and the ROC convex hull strategy.
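The core of the over-sampling method described above is interpolation between a minority example and one of its k nearest minority neighbours. A minimal from-scratch sketch of that idea is given below; names and defaults are illustrative, and a production system would more likely use an established implementation such as imbalanced-learn's SMOTE.

```python
# Minimal sketch of synthetic minority over-sampling: each synthetic example is an
# interpolation between a minority example and one of its k nearest minority neighbours.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_like_oversample(X_minority, n_synthetic, k=5, seed=0):
    rng = np.random.default_rng(seed)
    # k + 1 neighbours because each point is its own nearest neighbour.
    neighbors = NearestNeighbors(n_neighbors=k + 1).fit(X_minority)
    _, idx = neighbors.kneighbors(X_minority)
    synthetic = np.empty((n_synthetic, X_minority.shape[1]))
    for i in range(n_synthetic):
        base = rng.integers(len(X_minority))
        neighbor = X_minority[rng.choice(idx[base][1:])]  # skip the point itself
        gap = rng.random()
        synthetic[i] = X_minority[base] + gap * (neighbor - X_minority[base])
    return synthetic

# Example: triple the size of a small minority class.
X_min = np.random.default_rng(1).normal(size=(20, 4))
X_new = smote_like_oversample(X_min, n_synthetic=40)
```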

17,313 citations


BookDOI
01 Dec 2001
TL;DR: Learning with Kernels provides an introduction to SVMs and related kernel methods, covering all of the concepts necessary to enable a reader equipped with some basic mathematical knowledge to enter the world of machine learning using theoretically well-founded yet easy-to-use kernel algorithms.
Abstract: From the Publisher: In the 1990s, a new type of learning algorithm was developed, based on results from statistical learning theory: the Support Vector Machine (SVM). This gave rise to a new class of theoretically elegant learning machines that use a central concept of SVMs (kernels) for a number of learning tasks. Kernel machines provide a modular framework that can be adapted to different tasks and domains by the choice of the kernel function and the base algorithm. They are replacing neural networks in a variety of fields, including engineering, information retrieval, and bioinformatics. Learning with Kernels provides an introduction to SVMs and related kernel methods. Although the book begins with the basics, it also includes the latest research. It provides all of the concepts necessary to enable a reader equipped with some basic mathematical knowledge to enter the world of machine learning using theoretically well-founded yet easy-to-use kernel algorithms and to understand and apply the powerful algorithms that have been developed over the last few years.
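The modularity described above (same base algorithm, different kernel) is easy to demonstrate; the short example below swaps kernels in a standard SVM on a toy dataset. The dataset and parameters are arbitrary placeholders.

```python
# Same base algorithm (an SVM), adapted to the task simply by swapping the kernel.
from sklearn.datasets import make_moons
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_moons(n_samples=400, noise=0.25, random_state=0)

for kernel in ("linear", "poly", "rbf"):
    score = cross_val_score(SVC(kernel=kernel), X, y, cv=5).mean()
    print(f"{kernel:>6} kernel: mean CV accuracy = {score:.3f}")
```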

7,880 citations

Journal ArticleDOI
TL;DR: A critical review of the nature of the problem, the state-of-the-art technologies, and the current assessment metrics used to evaluate learning performance under the imbalanced learning scenario is provided.
Abstract: With the continuous expansion of data availability in many large-scale, complex, and networked systems, such as surveillance, security, Internet, and finance, it becomes critical to advance the fundamental understanding of knowledge discovery and analysis from raw data to support decision-making processes. Although existing knowledge discovery and data engineering techniques have shown great success in many real-world applications, the problem of learning from imbalanced data (the imbalanced learning problem) is a relatively new challenge that has attracted growing attention from both academia and industry. The imbalanced learning problem is concerned with the performance of learning algorithms in the presence of underrepresented data and severe class distribution skews. Due to the inherent complex characteristics of imbalanced data sets, learning from such data requires new understandings, principles, algorithms, and tools to transform vast amounts of raw data efficiently into information and knowledge representation. In this paper, we provide a comprehensive review of the development of research in learning from imbalanced data. Our focus is to provide a critical review of the nature of the problem, the state-of-the-art technologies, and the current assessment metrics used to evaluate learning performance under the imbalanced learning scenario. Furthermore, in order to stimulate future research in this field, we also highlight the major opportunities and challenges, as well as potential important research directions for learning from imbalanced data.
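Since the review emphasizes that plain accuracy is misleading under severe class skew, the sketch below computes a few of the assessment metrics it discusses (AUC, F-measure, and the geometric mean of class-wise recalls) on synthetic placeholder predictions.

```python
# Assessment metrics commonly used for imbalanced learning: accuracy can look high
# while the minority class is ignored, so AUC, F1, and G-mean are preferred.
# Labels and scores below are synthetic placeholders.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, recall_score, roc_auc_score

rng = np.random.default_rng(0)
y_true = np.concatenate([np.zeros(950, dtype=int), np.ones(50, dtype=int)])
scores = np.clip(rng.normal(0.2 + 0.5 * y_true, 0.2), 0, 1)  # toy classifier scores
y_pred = (scores >= 0.5).astype(int)

g_mean = np.sqrt(recall_score(y_true, y_pred) * recall_score(y_true, y_pred, pos_label=0))
print("accuracy:", accuracy_score(y_true, y_pred))
print("AUC     :", roc_auc_score(y_true, scores))
print("F1      :", f1_score(y_true, y_pred))
print("G-mean  :", g_mean)
```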

6,320 citations