A Framework for Quality-based Biometric Classifier Selection
Himanshu S. Bhatt, Samarth Bharadwaj, Mayank Vatsa, Richa Singh
IIIT Delhi, India
{himanshub, samarthb, mayank, rsingh}@iiitd.ac.in
Arun Ross, Afzel Noore
West Virginia University, USA
{arun.ross, afzel.noore}@mail.wvu.edu
Abstract
Multibiometric systems fuse the evidence (e.g., match scores) pertaining to multiple biometric modalities or classifiers. Most score-level fusion schemes discussed in the literature require the processing (i.e., feature extraction and matching) of every modality prior to invoking the fusion scheme. This paper presents a framework for dynamic classifier selection and fusion based on the quality of the gallery and probe images associated with each modality with multiple classifiers. The quality assessment algorithm for each biometric modality computes a quality vector for the gallery and probe images that is used for classifier selection. These vectors are used to train Support Vector Machines (SVMs) for decision making. In the proposed framework, the biometric modalities are arranged sequentially such that the stronger biometric modality has higher priority for being processed. Since fusion is required only when all unimodal classifiers are rejected by the SVM classifiers, the average computational time of the proposed framework is significantly reduced. Experimental results on different multi-modal databases involving face and fingerprint show that the proposed quality-based classifier selection framework yields good performance even when the quality of the biometric sample is sub-optimal.
1. Introduction
Multibiometrics-based verification systems use two or more classifiers pertaining to the same biometric modality or different biometric modalities. As discussed by Woods et al. [19], there are two general approaches to fusion: (1) classifier fusion and (2) dynamic classifier selection. In classifier fusion, all constituent classifiers are used and their decisions are combined using fusion rules [10], [14]. On the other hand, in dynamic selection, the most appropriate classifier or a subset of specific classifiers is selected [8], [16] for decision making. In the biometrics literature, classifier fusion has been extensively studied [14], whereas dynamic classifier selection has been relatively less explored. Marcialis et al. [11] designed a serial fusion scheme for combining face and fingerprint classifiers and achieved significant reduction in verification time and the required degree of user cooperation. Alonso-Fernandez et al. [3] proposed a method where quality information was used to switch between different system modules depending on the data source. Veeramachaneni et al. [17] proposed a Bayesian framework to fuse decisions pertaining to multiple biometric sensors. Particle Swarm Optimization (PSO) was used to determine the "optimal" sensor operating points in order to achieve the desired security level by switching between different fusion rules. Vatsa et al. [15] proposed a case-based context switching framework for incorporating biometric image quality. Further, they proposed a sequential match score fusion and quality-based dynamic selection algorithm to optimize both verification accuracy and computational cost [16]. Recently, a sequential score fusion strategy was designed using the sequential probability ratio test [2]. Though existing approaches improve the performance, in general, it is necessary to capture all biometric modalities prior to processing them.
This research focuses on developing a dynamic selection approach for a multi-classifier biometric system that can yield high verification performance even when operating on moderate-to-poor quality probe images. The case study considered in this work has two biometric modalities (face and fingerprint) and two classifiers per modality. It is generally accepted that the quality of a biometric sample is an important factor that can affect matching performance. Therefore, the proposed approach utilizes image quality to dynamically select one or more classifiers for verifying if a given gallery-probe pair belongs to the genuine class or the impostor class. Experiments on a multimodal database involving face and fingerprint, with variations in probe quality, suggest that the proposed approach provides significant improvements in recognition accuracy compared to individual classifiers and the classical sum-rule fusion scheme.
Proc. of International Joint Conference on Biometrics (IJCB), (Washington DC, USA), October 2011
2. Quality Assessment Algorithm
In the proposed approach, different quality assessment techniques are used to generate a composite quality vector for a given biometric sample. The quality vector used in this study comprises four quality attributes (scores): no-reference quality, edge spread, spectral energy, and modality-specific image quality. Details of each quality attribute are provided below:
∙ No-reference quality: Wang et al. [18] used blockiness and activity estimation in both horizontal and vertical directions in an image to compute a no-reference quality score. Blockiness is estimated by the average intensity difference between block boundaries in the image. Activity is used to measure the effect of compression and blur on the image. These individual estimates are combined to give a composite no-reference quality score.
∙ Edge spread: Marziliano et al. [7] used edge spread to estimate motion and off-focus blurriness in images based on edges and adjacent regions. Their technique computes the effect of blur in an image based on the difference in image intensity with respect to the local maxima and minima of pixel intensity in every row of the image.
∙ Spectral energy: It describes abrupt changes in illumination and specular reflection [13]. The image is tessellated into several non-overlapping blocks and the spectral energy is computed for each block. The value is computed as the magnitude of Fourier transform components in both horizontal and vertical directions.
∙ Modality-specific image quality: Along with the above-mentioned general image quality attributes, the quality assessment algorithm also computes "usability" quality measures specific to each biometric modality.
Face quality: For face images, pose is a major covariate that determines the usability of the face image. Even a good quality face image may not be useful during recognition due to pose variations. Pose is estimated based on the geometric relationship between face, eyes, and mouth. Depending upon the yaw, pitch, and roll values of the estimated pose, a composite score is computed for denoting face quality.
Fingerprint quality: For fingerprint images, Chen et al. [5] measured the quality of ridge samples by computing the Fourier energy spectral density concentration in particular frequency bands. Such a measure is global in nature and encodes the overall quality of fingerprint ridges. This quality measure, referred to as global entropy, is used in this work.
Table 1. Range of quality attributes over the images used in this research.
Face images:
  Spectral energy: [1.09, 1.34]
  No-reference quality: [12.43, 13.50]
  Edge spread: [8.51, 16.88]
  Pose: [302.31, 466.12]
Fingerprint images:
  Spectral energy: [0.96, 1.15]
  No-reference quality: [8.10, 11.50]
  Edge spread: [3.94, 6.68]
  Global entropy: [0.91, 1.16]
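As an illustration, the blockwise spectral energy attribute described above can be sketched as follows. This is a minimal interpretation of the description in the text (tessellate into non-overlapping blocks, take Fourier transform magnitudes); the exact block size and normalization used in [13] are not given in the paper, so those choices here are assumptions.

```python
import numpy as np

def spectral_energy(image, block=32):
    """Blockwise spectral energy: tessellate the image into
    non-overlapping blocks, take the 2-D Fourier transform of each
    block, and average the magnitudes of its components.
    Block size and normalization are illustrative assumptions."""
    h, w = image.shape
    energies = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            tile = image[r:r + block, c:c + block]
            energies.append(np.abs(np.fft.fft2(tile)).mean())
    return float(np.mean(energies))
```

A flat image yields only a DC component, so sharp intensity changes (illumination edges, specular highlights) raise the measure, consistent with the attribute's intent.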
For a given image, a quality vector comprising the four aforementioned quality scores is generated. Table 1 shows the range of values obtained by the quality attributes over the face-fingerprint images used in this research (details are available in Section 4.2). The spectral energy is considered good if its value is close to 1. For no-reference quality, the higher the value, the better the quality of the image. For a frontal face image, the value of the pose attribute is 400. Therefore, a face is right aligned if pose is less than 400; otherwise, the face is aligned to the left. For edge spread, the lower the value, the better the quality of the image. For global entropy, the higher the value, the better the quality of the fingerprint image. For a given gallery-probe pair, the quality vectors of both the gallery and probe images are concatenated to form a quality vector of eight quality scores, represented as q = [q_g, q_p], where q_g and q_p are the quality vectors of the gallery and probe images, respectively.
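The construction of the eight-score vector q = [q_g, q_p] can be sketched as below. The attribute values are illustrative only (chosen to fall inside the Table 1 face ranges), and the function name is ours.

```python
import numpy as np

def quality_vector(spectral_energy, no_ref_quality, edge_spread, modality_quality):
    """Four-attribute quality vector for one image, ordered as in Table 1.
    For a face image the modality-specific attribute is pose; for a
    fingerprint it is global entropy."""
    return np.array([spectral_energy, no_ref_quality, edge_spread, modality_quality])

# q = [q_g, q_p]: the eight-score vector fed to the modality's SVM
q_g = quality_vector(1.20, 13.1, 10.2, 410.0)   # gallery face image (illustrative values)
q_p = quality_vector(1.12, 12.8, 14.5, 380.0)   # probe face image (illustrative values)
q = np.concatenate([q_g, q_p])
```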
3. Quality Driven Classifier Selection Framework
The proposed framework utilizes the quality vector for classifier selection. As shown in Figure 1, in a face-fingerprint bimodal setting, the individual modalities are processed sequentially. It starts from the strongest modality, such that the system has a higher chance of correctly classifying the gallery-probe pair using the first biometric modality, obviating the need for processing the second modality. Since classifier selection can also be posed as a classification problem, a Support Vector Machine (SVM) is used for classification. One SVM is trained for each biometric modality to select the best classifier for that modality using quality vectors.
Figure 1. Illustrating the proposed quality-based classifier selection framework for face-fingerprint biometrics.
In this paper, the classifier selection framework is presented for a two-classifier two-modality setting involving face and fingerprint. However, the framework can be easily extended to accommodate more choices as it provides the flexibility to add new biometric modalities and to add/remove classifiers for each modality. The framework is divided into two stages: (1) training the SVMs and (2) dynamic classifier selection for probe verification.
3.1. SVM Training
The SVM corresponding to each biometric modality is trained independently using a labeled training database.
Training SVM for Fingerprints: SVM1 is trained for three classes using the labeled training data {x_1i, y_1i}. Here, the input x_1i = [q_g, q_p] is the quality vector of the i-th gallery-probe fingerprint image pair in the training set and the output y_1i ∈ {−1, 0, +1}. The labels are assigned based on the match score distribution of genuine and impostor scores and the likelihood ratio of the two fingerprint classifiers.
As shown in Figure 2, for each modality, distance scores are computed using the training data and the two fingerprint verification algorithms. If the impostor score computed using classifier1 is greater than the maximum genuine score (confidently classified as impostor) or if the genuine score computed using classifier1 is less than the minimum impostor score (confidently classified as genuine), the {−1} label is assigned to indicate that classifier1 can correctly classify the gallery-probe pair. Label {0} is assigned when the impostor score computed using classifier2 is greater than the maximum genuine score (confidently classified as impostor) or when the genuine score computed using classifier2 is less than the minimum impostor score (confidently classified as genuine).
Figure 2. Illustrating the process of assigning labels: the genuine-impostor match score distributions are used to assign labels to the input gallery-probe quality vector q = [q_g, q_p] during SVM training.
If the score lies within the conflicting region for both the verification algorithms, the {+1} label is assigned, which signifies that for the given gallery-probe pair the individual fingerprint classifiers are not able to classify the pair and that another modality, i.e., face, is required. If both the verification algorithms correctly classify the gallery-probe pair based on the score distribution, then the likelihood ratio is used to make a decision (genuine or impostor). The quality vector of the gallery-probe pair is assigned the label corresponding to the verification algorithm that classifies it with higher confidence (based on the accuracy computed using training samples). Under a Gaussian assumption, the likelihood ratio is computed from the estimated densities f_gen(x) and f_imp(x) as LR(x) = f_gen(x) / f_imp(x).
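A minimal sketch of this label-assignment rule follows. The function and data layout are ours, and the tie-break between two confident classifiers is simplified to the training-accuracy comparison described above (the paper additionally consults the likelihood ratio for the genuine/impostor decision itself).

```python
def assign_label(s1, s2, is_genuine, stats, acc1, acc2):
    """Assign the three-class training label for one gallery-probe pair:
    -1 -> classifier1 suffices, 0 -> classifier2 suffices, +1 -> neither
    (fall through to the next modality / fusion). s1 and s2 are distance
    scores from the two classifiers; `stats` holds, per classifier, the
    maximum genuine and minimum impostor distances observed on training
    data; acc1/acc2 are the classifiers' training accuracies.
    All names here are illustrative, not the authors'."""
    def confident(k, s):
        # an impostor pair scoring above every genuine score, or a genuine
        # pair scoring below every impostor score, is confidently classified
        return ((not is_genuine and s > stats[k]['max_gen']) or
                (is_genuine and s < stats[k]['min_imp']))

    c1, c2 = confident(1, s1), confident(2, s2)
    if c1 and c2:
        return -1 if acc1 >= acc2 else 0   # both confident: more accurate one
    if c1:
        return -1
    if c2:
        return 0
    return +1                              # conflicting region for both
```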
Training SVM for Face: Similar to SVM1, SVM2 is also a three-class SVM trained using the labeled training data {x_2i, y_2i}, where x_2i = [q_g, q_p] is the quality vector of the i-th gallery-probe face image pair in the training set. The labels are assigned in a similar manner as for SVM1. The only variation here is with the {+1} label. If the score lies within the conflicting region for both the face verification algorithms, the {+1} label is assigned, which signifies that for the given gallery-probe pair the individual classifiers are not able to classify the pair and that match score fusion is required.
3.2. Classifier Selection for Verification
During verification, the trained SVMs are used to select the most appropriate classifier for each modality based only on quality. The biometric modalities are used one at a time, and the second modality is selected only when the individual classifiers pertaining to the first modality are not able to classify the given gallery-probe pair.
The quality vector of the gallery-probe pair for the first modality is computed and provided as input to the trained SVM1. Based on the quality vector, SVM1 makes a prediction. If SVM1 predicts that one of the classifiers of the first modality can be used to correctly classify the given gallery-probe pair, then the framework selects the classifier predicted by SVM1. Otherwise, the quality vector for the gallery-probe pair corresponding to the second modality is computed and provided as input to SVM2. If SVM2 predicts that one of the classifiers of the second modality can correctly classify the gallery-probe pair, then the framework selects the classifier predicted by SVM2. Otherwise, if both SVMs predict that the individual classifiers of both modalities are unable to classify the gallery-probe pair, the sum rule-based score-level fusion of the classifiers across both modalities is used to generate the final score. It should be noted that since the SVMs are based only on the quality of the gallery-probe pair, the framework does not require computing the scores for all the modalities and classifiers.
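The verification-time cascade described above can be sketched as follows. The object layout (a `pair` with per-modality quality vectors, SVMs with a `predict` method, and a matcher table) is our assumption; the control flow mirrors the text, so match scores are computed only for the selected classifier unless both SVMs reject.

```python
def verify(pair, svm1, svm2, matchers, fuse):
    """Quality-driven sequential selection at verification time.
    svm1/svm2 map an eight-score quality vector to a prediction in
    {-1, 0, +1}; `matchers` maps (modality, classifier-index) to a
    score function. A sketch under the assumptions stated above."""
    pred1 = svm1.predict(pair.quality('fingerprint'))
    if pred1 in (-1, 0):                        # a fingerprint classifier suffices
        return matchers[('fingerprint', 1 if pred1 == -1 else 2)](pair)
    pred2 = svm2.predict(pair.quality('face'))  # fingerprint rejected: try face
    if pred2 in (-1, 0):
        return matchers[('face', 1 if pred2 == -1 else 2)](pair)
    # both SVMs reject: sum-rule fusion across all four classifiers
    return fuse(m(pair) for m in matchers.values())
```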
4. Experimental Results
To evaluate the effectiveness of the proposed framework, experiments are performed on two different multimodal databases using two face classifiers and two fingerprint classifiers. Details about the feature extractors and matchers used for each modality, the databases, the experimental protocol, and key observations are presented in this section.
4.1. Unimodal Algorithms
Fingerprint: The two fingerprint classifiers used in this study are the NIST Biometric Image Software (NBIS)¹ and a commercial² fingerprint matching software. NBIS consists of a minutiae detector called MINDTCT and a fingerprint matching algorithm known as BOZORTH3. The second classifier, a commercial fingerprint matching software, is also based on extracting and matching minutiae points.
Face: The two face classifiers used in this research are Uniform Circular Local Binary Patterns (UCLBP) [1] and Speeded Up Robust Features (SURF) [4]. UCLBP is a widely used texture-based operator, whereas SURF is a point-based descriptor that is invariant to scale and rotation. The χ² distance measure is used to compare two UCLBP feature histograms and two SURF descriptors.
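For reference, the χ² distance between two feature histograms is commonly computed as below. The paper does not spell out which χ² variant it uses, so this is one standard form, not necessarily the authors' exact implementation.

```python
import numpy as np

def chi_square_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two feature histograms (e.g., UCLBP):
    sum over bins of (h1 - h2)^2 / (h1 + h2). The small eps guards
    against empty bins."""
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    return float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))
```

Identical histograms yield a distance of zero; the measure grows with per-bin disagreement, weighted more heavily in sparsely populated bins.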
4.2. Database
The evaluation is performed on two different databases. The first is the WVU multimodal database [6], from which 270 subjects that have at least 6 fingerprint and face images each are selected. For each modality, two images per subject are placed in the gallery and the remaining images are used as probes.
To evaluate the scalability of the proposed approach, a large multimodal (chimeric) database is used. The WVU multimodal database consists of fingerprint images from four fingers per subject. Assuming that the four fingers are independent, a database of 1068 virtual subjects with six or more samples per subject is prepared. For associating face with fingerprint images, a face database of 1068 subjects is created containing 446 subjects from the MBGC Version 2 database³, 270 subjects from the WVU database [6], 233 from the CMU Multi-PIE database [9], and 119 subjects from the AR face database [12].
¹ http://www.nist.gov/itl/iad/ig/nbis.cfm
² The license agreement does not allow us to name the software in any comparative study.
Table 2. Parameters of noise and blur kernels used to create the synthetic degraded database.
  Gaussian noise: σ = 0.05
  Poisson noise: λ = 1
  Salt & pepper noise: d = 0.05
  Speckle noise: v = 0.05
  Gaussian blur: σ = 1
  Motion blur: angle 5°, length 1-10 pixels
  Unsharp blur: α = 0.1 to 1
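Several of the Table 2 degradations can be sketched as below using their standard textbook forms; these are not necessarily the authors' implementations, and Poisson noise, motion blur, and unsharp masking are omitted for brevity. Images are assumed to be floating-point arrays in [0, 1].

```python
import numpy as np

rng = np.random.default_rng(0)

def _gaussian_kernel(sigma, radius=3):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def degrade(image, kind):
    """Apply one of the Table 2 degradations to an image in [0, 1].
    Parameter values follow Table 2; implementations are standard
    forms chosen for illustration."""
    if kind == 'gaussian_noise':                  # sigma = 0.05
        out = image + rng.normal(0.0, 0.05, image.shape)
    elif kind == 'salt_pepper':                   # density d = 0.05
        out = image.copy()
        mask = rng.random(image.shape)
        out[mask < 0.025] = 0.0                   # pepper
        out[mask > 0.975] = 1.0                   # salt
    elif kind == 'speckle':                       # variance v = 0.05
        out = image + image * rng.normal(0.0, np.sqrt(0.05), image.shape)
    elif kind == 'gaussian_blur':                 # sigma = 1, separable
        k = _gaussian_kernel(1.0)
        out = np.apply_along_axis(lambda r: np.convolve(r, k, 'same'), 1, image)
        out = np.apply_along_axis(lambda c: np.convolve(c, k, 'same'), 0, out)
    else:
        raise ValueError(f'unsupported degradation: {kind}')
    return np.clip(out, 0.0, 1.0)
```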
4.3. Experimental Protocol
In all the experiments, 40% of the subjects in the database are used for training and the remaining 60% are used for performance evaluation. During training, the SVMs are trained as explained in Section 3.1. The 40%-60% partitioning was done five times (repeated random sub-sampling validation) and verification accuracies are computed at 0.01% false accept rate (FAR). Two experiments are performed as explained below:
Experiment 1: In this experiment, with two biometric modalities (face and fingerprint) and four classifiers, the proposed quality-based classifier selection framework selects the most appropriate unimodal classifier to process the gallery-probe pair based on quality. In this experiment, both gallery and probe images are of good quality (unaltered/original images).
Experiment 2: In this experiment, the quality of the probe images is synthetically degraded. A synthetic poor-quality database is prepared where probe images are corrupted by adding different types of noise and blur, as shown in Figure 3. Table 2 shows the parameters of the noise and blur kernels used to create the synthetic database. Experiments are performed for each type of degradation introduced in both fingerprint and face images. It should be noted that for Experiment 2, training is done on good-quality gallery-probe pairs and performance is evaluated on non-overlapping subjects from the synthetically corrupted database.
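The accuracy metric used throughout (verification accuracy at 0.01% FAR) can be computed from genuine and impostor similarity scores roughly as follows; this is a sketch of the standard procedure, since the authors do not give their exact implementation.

```python
import numpy as np

def gar_at_far(genuine, impostor, far=1e-4):
    """Genuine accept rate at a fixed false accept rate (0.01% = 1e-4
    in this protocol): choose the threshold that only a `far` fraction
    of impostor scores exceeds, then count genuine scores above it."""
    impostor = np.sort(np.asarray(impostor, dtype=float))
    # index of the score leaving at most `far` of impostors above it
    idx = min(int(np.ceil((1.0 - far) * len(impostor))) - 1, len(impostor) - 1)
    threshold = impostor[idx]
    return float(np.mean(np.asarray(genuine, dtype=float) > threshold))
```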
³ http://www.nist.gov/itl/iad/ig/mbgc-presentations.cfm

Figure 3. Sample images from the database that are degraded using different types of noise and blur.
Figure 4. Sample decisions of the proposed algorithm when (a) Fingerprint classifier 1 is selected, (b) Fingerprint classifier 2 is selected, and (c) Face classifier 2 is selected.
4.4. Results and Analysis
Figure 4 illustrates sample decisions of the proposed algorithm. Figures 5 and 6 show the Receiver Operating Characteristic (ROC) curves for Experiment 1. Table 3 summarizes the verification accuracy for the different types of degradation introduced in the probe set. The key results are listed below:
Figure 5. ROC curves of the individual classifiers, sum-rule fusion, and the proposed quality-based classifier selection framework on the WVU multimodal database with good gallery-probe quality.
Figure 6. ROC curves of the individual classifiers, sum-rule fusion, and the proposed quality-based classifier selection framework on the large-scale chimeric database with good gallery-probe quality.
∙ The ROC curves in Figures 5 and 6 show that for Experiment 1, the proposed quality-based classifier selection framework outperforms the unimodal classifiers and sum-rule fusion by at least 1.05% and 1.57% on the WVU multimodal database and the large-scale chimeric database, respectively.
∙ It is observed that when the quality of the probe images is degraded, the performance of the individual classifiers is affected. However, the quality-based classifier selection framework still performs better than the individual classifiers and sum-rule fusion. This improvement is attributed to the fact that the proposed framework can dynamically determine when to use the most appropriate single classifier and when to perform fusion based on the quality of gallery-probe image pairs. Table 3 reports the performance of all the algorithms when probe images are of sub-optimal quality.
∙ In Experiment 1 with the WVU database, 27.95% of the gallery-probe pairs were processed by fingerprint classifier1 (NBIS), 25.33% of the pairs by fingerprint classifier2 (the commercial matcher), 18.99% by face classifier1 (UCLBP), and 15.51% by face classifier2 (SURF). The remaining 12.19% of the pairs were processed using weighted sum-rule fusion. Similarly for the

References (partial, as rendered)
- Bay, H., Tuytelaars, T., Van Gool, L.: SURF: Speeded Up Robust Features.
- Kittler, J., Hatef, M., Duin, R.P.W., Matas, J.: On Combining Classifiers.
- Ahonen, T., Hadid, A., Pietikäinen, M.: Face Description with Local Binary Patterns: Application to Face Recognition.