
Showing papers by "Nalini K. Ratha published in 2023"


Proceedings ArticleDOI
01 Jan 2023
TL;DR: In this article, the authors present a rigorous study on gender bias in iris presentation attack detection algorithms using a large-scale and gender-balanced database, which can help in building future presentation attack detection algorithms with the aim of fair treatment of each demographic group.
Abstract: One of the critical steps in the biometrics pipeline is the detection of presentation attacks, i.e., physical adversarial attacks. Several presentation attack detection (PAD) algorithms, including iris PAD, have been proposed and have shown superlative performance. However, a recent study on a small-scale database has highlighted that iris PAD may exhibit gender bias. In this research, we present a rigorous study on gender bias in iris presentation attack detection algorithms using a large-scale and gender-balanced database. The paper provides several interesting observations which can help in building future presentation attack detection algorithms with the aim of fair treatment of each demographic group. In addition, we also present a robust iris presentation attack detection algorithm that combines gender-covariate based classifiers. The proposed robust classifier not only reduces the difference in accuracy between genders but also improves the overall performance of the PAD system.
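The abstract does not specify how the gender-covariate classifiers are combined. Below is a minimal, hedged sketch of one plausible scheme: train a separate PAD classifier per gender sub-population and fuse their attack scores by averaging. The synthetic features, the RandomForest model, and all names here are illustrative assumptions, not the paper's actual method.

```python
# Illustrative sketch only (assumed fusion scheme, not the paper's exact method):
# one PAD classifier per gender covariate, fused by score averaging.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for iris PAD data: feature vectors X,
# bonafide/attack labels y, and a binary gender covariate g.
X = rng.normal(size=(1000, 32))
y = rng.integers(0, 2, size=1000)   # 0 = bonafide, 1 = presentation attack
g = rng.integers(0, 2, size=1000)   # gender covariate per sample

# Train one PAD classifier per gender sub-population.
clf_by_gender = {}
for gv in (0, 1):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[g == gv], y[g == gv])
    clf_by_gender[gv] = clf

def fused_attack_score(x):
    """Average the attack probability over the gender-covariate classifiers."""
    scores = [clf.predict_proba(x.reshape(1, -1))[0, 1]
              for clf in clf_by_gender.values()]
    return float(np.mean(scores))

print(fused_attack_score(X[0]))
```

Score-level fusion is only one option; the authors' classifier may instead use covariate-aware gating or feature-level combination, which the abstract does not detail.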




DOI
TL;DR: In this article, the behavior of face recognition models is evaluated to understand whether, similar to humans, models also encode group-specific features for face recognition, along with where bias is encoded in these models.
Abstract: Humans are known to favor other individuals who belong to the same groups as them, exhibiting biased behavior that is termed in-group bias. The groups could be formed on the basis of ethnicity, age, or even a favorite sports team. Taking cues from the aforementioned observation, we inspect whether deep learning networks also mimic this human behavior and are affected by in-group and out-group biases. In this first-of-its-kind research, the behavior of face recognition models is evaluated to understand if, similar to humans, models also encode group-specific features for face recognition, along with where bias is encoded in these models. Analysis has been performed for two use cases of bias: age and ethnicity in face recognition models. Thorough experimental evaluation leads us to several insights: (i) deep learning models focus on different facial regions for different ethnic groups and age groups, and (ii) large variation in face verification performance is also observed across different sub-groups, for both known and our own trained deep networks. Based on these observations, a novel bias index is presented for evaluating a trained model’s level of bias. We believe that a better understanding of how deep learning models work and encode bias, along with the proposed bias index, would enable researchers to address the challenge of bias in AI and develop more robust and fairer models.
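The abstract reports large variation in verification performance across sub-groups but does not give the formulation of the proposed bias index. The sketch below shows one generic way to summarize such cross-subgroup disparity (max-min gap and standard deviation of per-group accuracy); it is an assumed, illustrative measure, not the paper's bias index, and the group names and numbers are hypothetical.

```python
# Illustrative sketch only: a simple cross-subgroup disparity summary.
# This is NOT the bias index proposed in the paper (not given in the abstract).
import numpy as np

# Hypothetical per-subgroup face verification accuracies
# (e.g., grouped by ethnicity or age).
subgroup_accuracy = {
    "group_a": 0.97,
    "group_b": 0.93,
    "group_c": 0.89,
}

acc = np.array(list(subgroup_accuracy.values()))
gap = acc.max() - acc.min()   # worst-case disparity between sub-groups
spread = acc.std()            # overall variation across sub-groups

print(f"max-min gap: {gap:.3f}, std across groups: {spread:.3f}")
```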