Open Access · Journal Article · DOI

A Comprehensive Study on Face Recognition Biases Beyond Demographics

TL;DR
In this paper, the influence of an extended range of facial attributes on the verification performance of two popular face recognition models is investigated. The results demonstrate that many non-demographic attributes, such as accessories, hair styles and colors, face shapes, or facial anomalies, also strongly affect recognition performance.
Abstract
Face recognition (FR) systems have a growing effect on critical decision-making processes. Recent works have shown that FR solutions exhibit strong performance differences based on the user’s demographics. However, to enable trustworthy FR technology, it is essential to know the influence of an extended range of facial attributes on FR beyond demographics. Therefore, in this work, we analyse FR bias over a wide range of attributes. We investigate the influence of 47 attributes on the verification performance of two popular FR models. The experiments were performed on the publicly available MAAD-Face attribute database with over 120M high-quality attribute annotations. To prevent misleading statements about biased performance, we introduced control-group-based validity values to decide whether unbalanced test data causes the performance differences. The results demonstrate that many non-demographic attributes also strongly affect recognition performance, such as accessories, hair styles and colors, face shapes, or facial anomalies. The observations of this work show the strong need for further advances in making FR systems more robust, explainable, and fair. Moreover, our findings might help to better understand how FR networks work, to enhance the robustness of these networks, and to develop more generalized bias-mitigating face recognition solutions.
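The abstract only describes the control-group-based validity check at a high level. As an illustration of the general idea, and not the authors' implementation, the following minimal Python sketch (hypothetical function names) compares the verification error of pairs carrying an attribute against equally sized random control groups drawn from the same test data; if the attribute error lies far outside the control spread, test-set imbalance alone is unlikely to explain the difference.

```python
import numpy as np

def fnmr(similarities, labels, threshold):
    """False non-match rate: fraction of genuine pairs (label 1) scored below the threshold."""
    genuine = similarities[labels == 1]
    return float(np.mean(genuine < threshold))

def attribute_vs_control(similarities, labels, has_attribute, threshold,
                         n_controls=100, seed=0):
    """Compare the error on pairs carrying an attribute with the error spread of
    equally sized random control groups sampled from all test pairs."""
    rng = np.random.default_rng(seed)
    attr_idx = np.flatnonzero(has_attribute)
    attr_error = fnmr(similarities[attr_idx], labels[attr_idx], threshold)

    control_errors = []
    for _ in range(n_controls):
        ctrl_idx = rng.choice(similarities.size, size=attr_idx.size, replace=False)
        control_errors.append(fnmr(similarities[ctrl_idx], labels[ctrl_idx], threshold))
    control_errors = np.asarray(control_errors)
    return attr_error, control_errors.mean(), control_errors.std()

# Example usage with synthetic scores (illustrative only):
# sims = np.random.rand(10000); labs = np.random.randint(0, 2, 10000)
# attr = np.random.rand(10000) < 0.1
# print(attribute_vs_control(sims, labs, attr, threshold=0.5))
```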



Citations
Journal Article · DOI

Sensitive loss: Improving accuracy and fairness of face representations with discrimination-aware deep learning

TL;DR: In this paper, a discriminative learning method based on a triplet loss function and a sensitive triplet generator is proposed to improve both the accuracy and fairness of biased face recognition algorithms.
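The cited work's sensitive triplet generator is not detailed in the summary above. For orientation only, a generic triplet loss term (not the authors' sensitive loss) can be sketched as:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Generic triplet loss on embedding vectors: the anchor should be closer to the
    positive (same identity) than to the negative (different identity) by at least
    `margin` in squared Euclidean distance."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)
```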
Posted Content

MAAD-Face: A Massively Annotated Attribute Dataset for Face Images

TL;DR: The large number of high-quality annotations in MAAD-Face is used to study the viability of soft biometrics for recognition, providing insights into which attributes support genuine and impostor decisions.
Journal Article · DOI

Symbolic AI for XAI: Evaluating LFIT Inductive Programming for Explaining Biases in Machine Learning

TL;DR: In this paper, the authors use inductive logic programming (ILP) to learn a propositional logic theory equivalent to a given black-box system under certain conditions, and check the viability of learning from interpretation transition (LFIT) in a specific AI application scenario.
Journal Article · DOI

A survey of automated data augmentation algorithms for deep learning-based image classification tasks

TL;DR: In this article, the authors provide a taxonomy of existing image AutoDA approaches, discuss their pros and cons, propose several potential directions for future improvement, and identify three key components of a standard AutoDA model: a search space, a search algorithm, and an evaluation function.
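As a rough illustration of the three components named in that summary (search space, search algorithm, evaluation function), the following hypothetical sketch uses plain random search and a placeholder evaluation; it is not taken from any of the surveyed methods.

```python
import random

# Hypothetical search space of augmentation operations and their magnitudes.
SEARCH_SPACE = {
    "rotate_deg": [5, 10, 15],
    "flip_prob": [0.0, 0.5],
    "brightness": [0.9, 1.0, 1.1],
}

def sample_policy(space):
    """Search algorithm (here: random search) drawing one candidate policy."""
    return {op: random.choice(values) for op, values in space.items()}

def evaluate_policy(policy):
    """Evaluation function: in a real AutoDA system this would train or fine-tune a
    model under `policy` and return validation accuracy. Placeholder score here."""
    return random.random()

def search(n_trials=20):
    best_policy, best_score = None, float("-inf")
    for _ in range(n_trials):
        policy = sample_policy(SEARCH_SPACE)
        score = evaluate_policy(policy)
        if score > best_score:
            best_policy, best_score = policy, score
    return best_policy, best_score
```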
Journal Article · DOI

Towards Racially Unbiased Skin Tone Estimation via Scene Disambiguation

TL;DR: In this paper, the authors propose a new evaluation dataset (FAIR) and an algorithm (TRUST) to improve albedo estimation and, hence, fairness, by conditioning on both the face region and a global illumination signal obtained from the scene image.