# Performance evaluation of a single-mode biometric access control system

TL;DR: A regression model whose dependent variable is qualitative (whether the new individual taking the entrance test is positive or negative) is used to build a classifier that diagnoses whether fingerprints will be accepted.

Abstract: This work evaluates the performance of a fingerprint-based access control system for securing premises. To evaluate this performance, we used a regression model in which the dependent variable is qualitative, namely whether the new individual who takes the entrance test is positive or negative. Our model is therefore a classifier that diagnoses whether fingerprints will be accepted or not. The performance evaluation relies on the confusion matrix, the calculation of the evaluation parameters (sensitivity, specificity, positive predictive value, negative predictive value and false negative rate), and a plot of sensitivity against 1 − specificity (the ROC curve). On a sample of six hundred individuals, 470 enrolled and 130 not enrolled, the access control system produced 456 true positives, 14 false negatives, 10 false positives and 120 true negatives; these counts form our confusion matrix, from which we evaluated the system's performance by computing the evaluation parameters.
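The evaluation parameters named in the abstract can be computed directly from the four confusion-matrix counts it reports (456 TP, 14 FN, 10 FP, 120 TN). A minimal sketch, using standard definitions of these metrics rather than any formula specific to the paper:

```python
# Evaluation parameters for the fingerprint system's confusion matrix.
# Counts are taken from the abstract: 456 TP, 14 FN, 10 FP, 120 TN.

def confusion_metrics(tp, fn, fp, tn):
    """Return the standard binary-classification evaluation parameters."""
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "fnr": fn / (fn + tp),          # false negative rate
    }

metrics = confusion_metrics(tp=456, fn=14, fp=10, tn=120)
for name, value in metrics.items():
    print(f"{name}: {value:.4f}")
```

With these counts, sensitivity works out to 456/470 ≈ 0.970 and specificity to 120/130 ≈ 0.923, consistent with a system that rejects few enrolled users and admits few non-enrolled ones.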

## References



TL;DR: This study supports the use of confusion matrix analysis in validation since it is robust to any data distribution and type of relationship, it makes a stringent evaluation of validity, and it offers extra information on the type and sources of errors.

120 citations


TL;DR: This paper discusses how the selection of the most appropriate measures depends on the characteristics of the problem and the various ways it can be implemented.

Abstract: We study the problem of evaluating different classification models used in machine learning. The purpose of model evaluation is to find the optimal solution among the various classification models generated in an iterated and complex model-building process. Depending on the method of observation, there are different measures for evaluating a model's performance. The most direct criterion that can be measured quantitatively is classification accuracy. The main disadvantages of accuracy as an evaluation measure are that it neglects the differences between the types of errors and that it depends on the class distribution in the dataset. In this paper, we discuss how the selection of the most appropriate measure depends on the characteristics of the problem and the various ways it can be implemented.
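The accuracy pitfall this abstract describes is easy to demonstrate: on imbalanced data, a trivial classifier that always predicts the majority class scores high accuracy while missing every positive. A small illustration (the 95/5 class split is an assumption chosen for the example, not from the paper):

```python
# Accuracy's dependence on class distribution: with 95% negatives, a
# classifier that always predicts "negative" looks accurate but detects
# no positives at all.

labels = [1] * 5 + [0] * 95   # 5 positives, 95 negatives
predictions = [0] * 100       # trivial majority-class classifier

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
sensitivity = sum(p == y == 1 for p, y in zip(predictions, labels)) / 5

print(accuracy)     # 0.95 -- looks good
print(sensitivity)  # 0.0  -- but every positive is missed
```

This is why measures such as sensitivity, specificity, and the ROC curve, which separate the error types, are preferred when the classes are imbalanced.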

84 citations


31 Mar 2018

TL;DR: This work develops a general approach for solving constrained classification problems, where the loss and constraints are defined in terms of a general function of the confusion matrix, and reduces the constrained learning problem to a sequence of cost-sensitive learning tasks.

Abstract: We develop a general approach for solving constrained classification problems, where the loss and constraints are defined in terms of a general function of the confusion matrix. We are able to handle complex, non-linear loss functions such as the F-measure, G-mean or H-mean, and constraints ranging from budget limits, to constraints for fairness, to bounds on complex evaluation metrics. Our approach builds on the framework of Narasimhan et al. (2015) for unconstrained classification with complex losses, and reduces the constrained learning problem to a sequence of cost-sensitive learning tasks. We provide algorithms for two broad families of problems, involving convex and fractional-convex losses, subject to convex constraints. Our algorithms are statistically consistent, generalize an existing approach for fair classification, and readily apply to multiclass problems. Experiments on a variety of tasks demonstrate the efficacy of our methods.

77 citations


29 Jun 2021

TL;DR: In this paper, a method is presented for reducing a multi-class Confusion Matrix into a 2 × 2 version, enabling the use of the relevant performance metrics and methods like the Receiver Operator Characteristic and the Area Under the Curve for the assessment of different classification algorithms.

Abstract: The paper presents a novel method for reducing a multi-class Confusion Matrix into a 2 × 2 version enabling the use of the relevant performance metrics and methods like the Receiver Operator Characteristic and the Area Under the Curve for the assessment of different classification algorithms. The reduction method is based on class grouping and leads to a specific Confusion Matrix type. The developed method is then exploited for the assessment of several state-of-the-art machine learning algorithms applied on a customer experience metric.
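The class-grouping reduction this abstract describes can be sketched as follows: declare one set of classes "positive", treat the rest as "negative", and sum the corresponding cells of the multi-class matrix. The grouping choice and helper below are illustrative assumptions, not the paper's specific scheme:

```python
# Collapse an n x n confusion matrix (rows = true class, cols = predicted
# class) into a 2 x 2 (tp, fn, fp, tn) tuple for a chosen group of
# "positive" classes.

def reduce_confusion(matrix, positive_classes):
    n = len(matrix)
    tp = fn = fp = tn = 0
    for true in range(n):
        for pred in range(n):
            count = matrix[true][pred]
            if true in positive_classes and pred in positive_classes:
                tp += count   # positive-group sample kept in the group
            elif true in positive_classes:
                fn += count   # positive-group sample pushed outside
            elif pred in positive_classes:
                fp += count   # outside sample pulled into the group
            else:
                tn += count   # outside sample kept outside

    return tp, fn, fp, tn

# 3-class example: classes 0 and 1 grouped as "positive", class 2 negative.
m = [[50, 3, 2],
     [4, 40, 6],
     [1, 5, 60]]
print(reduce_confusion(m, {0, 1}))  # (97, 8, 6, 60)
```

Once reduced, the resulting 2 × 2 matrix supports the usual binary metrics (sensitivity, specificity, ROC/AUC) mentioned in the abstract.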

69 citations


11 Apr 2019

TL;DR: This work formalizes the problem of metric elicitation by exploiting key geometric properties of the space of confusion matrices and obtains provably query-efficient algorithms for eliciting linear and linear-fractional performance metrics from pairwise feedback.

Abstract: Given a binary prediction problem, which performance metric should the classifier optimize? We address this question by formalizing the problem of Metric Elicitation. The goal of metric elicitation is to discover the performance metric of a practitioner, which reflects her innate rewards (costs) for correct (incorrect) classification. In particular, we focus on eliciting binary classification performance metrics from pairwise feedback, where a practitioner is queried to provide relative preference between two classifiers. By exploiting key geometric properties of the space of confusion matrices, we obtain provably query efficient algorithms for eliciting linear and linear-fractional performance metrics. We further show that our method is robust to feedback and finite sample noise.

6 citations