Open Access Proceedings Article

Inherent Trade-Offs in the Fair Determination of Risk Scores

TLDR
The paper shows some of the ways in which key notions of fairness are incompatible with each other, and thereby provides a framework for thinking about the trade-offs between them.
Abstract
Recent discussion in the public sphere about algorithmic classification has involved tension between competing notions of what it means for a probabilistic classification to be fair to different groups. We formalize three fairness conditions that lie at the heart of these debates, and we prove that except in highly constrained special cases, there is no method that can satisfy these three conditions simultaneously. Moreover, even satisfying all three conditions approximately requires that the data lie in an approximate version of one of the constrained special cases identified by our theorem. These results suggest some of the ways in which key notions of fairness are incompatible with each other, and hence provide a framework for thinking about the trade-offs between them.
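
To make the three conditions concrete (in the paper they are calibration within groups, balance for the negative class, and balance for the positive class), here is a minimal Python sketch that computes empirical versions of them for a binary outcome, scores in [0, 1], and two groups. The function and variable names are placeholders, not code from the paper.

import numpy as np

def fairness_diagnostics(scores, labels, groups, bins=10):
    """Per-group calibration error plus the positive- and negative-class balance gaps."""
    scores, labels, groups = map(np.asarray, (scores, labels, groups))
    bin_ids = np.minimum((scores * bins).astype(int), bins - 1)

    # (A) Calibration within groups: inside each score bin, the observed fraction of
    # positives should match the average score assigned, for every group separately.
    calibration = {}
    for grp in np.unique(groups):
        in_grp = groups == grp
        err = 0.0
        for b in range(bins):
            in_bin = in_grp & (bin_ids == b)
            if in_bin.any():
                err += abs(labels[in_bin].mean() - scores[in_bin].mean()) * in_bin.sum()
        calibration[grp] = err / in_grp.sum()

    # (B)/(C) Balance for the negative / positive class: members of the two groups who
    # truly belong to the same class should receive the same average score.
    def mean_score(grp, cls):
        mask = (groups == grp) & (labels == cls)
        return scores[mask].mean() if mask.any() else float("nan")

    g0, g1 = np.unique(groups)[:2]
    negative_gap = abs(mean_score(g0, 0) - mean_score(g1, 0))
    positive_gap = abs(mean_score(g0, 1) - mean_score(g1, 1))
    return calibration, negative_gap, positive_gap

On data where the two groups have different base rates and the score is not a (near-)perfect predictor, the paper's theorem implies that at least one of these three quantities must stay bounded away from zero.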


Citations
Proceedings Article

Equality of opportunity in supervised learning

TL;DR: This work proposes a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features and shows how to optimally adjust any learned predictor so as to remove discrimination according to this definition.
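
As a rough illustration of the criterion described above, the sketch below measures the equal-opportunity gap of a binary predictor, i.e., the difference in true-positive rates between two groups. The names are hypothetical, not code from the paper.

import numpy as np

def equal_opportunity_gap(y_pred, y_true, groups):
    """Absolute difference in true-positive rates between two groups."""
    y_pred, y_true, groups = map(np.asarray, (y_pred, y_true, groups))
    tprs = []
    for grp in np.unique(groups):
        mask = (groups == grp) & (y_true == 1)
        tprs.append(y_pred[mask].mean())      # P(prediction = 1 | truly positive, group)
    return abs(tprs[0] - tprs[1])

The paper's post-processing result shows that such a gap can be removed from an existing score by choosing group-specific (possibly randomized) decision thresholds.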
Journal Article

Dissecting racial bias in an algorithm used to manage the health of populations

TL;DR: It is suggested that the choice of convenient, seemingly effective proxies for ground truth can be an important source of algorithmic bias in many contexts.
Posted Content

A Survey on Bias and Fairness in Machine Learning

TL;DR: This survey examines real-world applications that have exhibited bias in various ways and builds a taxonomy of the fairness definitions that machine learning researchers have proposed to avoid bias in AI systems.
Proceedings Article

Counterfactual fairness

TL;DR: This paper develops a framework for modeling fairness using tools from causal inference and demonstrates the framework on a real-world problem of fair prediction of success in law school.
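
To make the causal idea concrete, here is a minimal sketch under an assumed toy linear structural model (not the paper's law-school example): a predictor built only on the part of an observed feature that is not causally downstream of the protected attribute is unaffected by counterfactual changes to that attribute.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000
A = rng.integers(0, 2, n)                 # protected attribute
U = rng.normal(size=n)                    # exogenous factor, independent of A
X = 1.5 * A + U                           # observed feature, causally affected by A
Y = 2.0 * U + rng.normal(scale=0.1, size=n)

# A predictor trained on X directly would change if A were counterfactually flipped.
# Removing A's estimated effect from X recovers an estimate of U, and a predictor
# built on that residual is invariant to interventions on A in this toy model.
effect_of_A = np.polyfit(A, X, 1)[0]
U_hat = X - effect_of_A * A
slope = np.polyfit(U_hat, Y, 1)[0]
Y_hat = slope * U_hat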
Proceedings Article

Algorithmic Decision Making and the Cost of Fairness

TL;DR: This work reformulates algorithmic fairness as constrained optimization: the objective is to maximize public safety while satisfying formal fairness constraints designed to reduce racial disparities; the analysis also applies to human decision makers carrying out structured decision rules.
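
A small sketch of that constrained-optimization framing (all names and the toy utility are assumptions, not the paper's code): search over per-group thresholds on a risk score to maximize a simple benefit-minus-cost utility while keeping the groups' rates of positive decisions within a tolerance.

import numpy as np

def best_thresholds(scores, labels, groups, cost=0.5, max_gap=0.02):
    """Grid-search per-group thresholds; groups are assumed to be coded 0 and 1."""
    scores, labels, groups = map(np.asarray, (scores, labels, groups))
    g0, g1 = groups == 0, groups == 1
    best, best_util = None, -np.inf
    for t0 in np.linspace(0, 1, 101):
        for t1 in np.linspace(0, 1, 101):
            decide = np.where(g0, scores >= t0, scores >= t1)     # flag/detain decision
            if abs(decide[g0].mean() - decide[g1].mean()) > max_gap:
                continue                                          # violates the parity constraint
            utility = (decide & (labels == 1)).sum() - cost * decide.sum()
            if utility > best_util:
                best, best_util = (t0, t1), utility
    return best, best_util

Without the constraint the optimum is a single threshold applied to everyone; the constraint typically forces group-specific thresholds, which is the kind of trade-off the paper quantifies.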
References
Proceedings Article

Equality of opportunity in supervised learning

TL;DR: This work proposes a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features and shows how to optimally adjust any learned predictor so as to remove discrimination according to this definition.
Journal Article

Discrimination and racial disparities in health: evidence and needed research

TL;DR: Advancing the understanding of the relationship between perceived discrimination and health will require more attention to situating discrimination within the context of other health-relevant aspects of racism, measuring it comprehensively and accurately, assessing its stressful dimensions, and identifying the mechanisms that link discrimination to health.
Posted Content

Fairness Through Awareness

TL;DR: In this article, the authors proposed a framework for fair classification comprising a task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand, and an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly.
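
As a concrete reading of that constraint, the sketch below audits a deterministic score for violations of the Lipschitz condition |f(x) - f(y)| <= d(x, y); the task-specific metric d is taken as a given input, as in the paper's setup, and for a deterministic score the absolute difference stands in for the statistical distance between outcome distributions used there. Names are placeholders.

import numpy as np

def lipschitz_violations(X, scores, metric):
    """Count pairs whose score difference exceeds the task-specific similarity metric."""
    scores = np.asarray(scores, dtype=float)
    bad = 0
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            if abs(scores[i] - scores[j]) > metric(X[i], X[j]):
                bad += 1          # two similar individuals are treated too differently
    return bad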
Journal Article

Big Data's Disparate Impact

TL;DR: In the absence of a demonstrable intent to discriminate, the best doctrinal hope for data mining's victims lies in disparate impact doctrine, which allows a practice to be justified as a business necessity when its outcomes are predictive of future employment outcomes; data mining is specifically designed to find such statistical correlations.
Proceedings Article

Learning Fair Representations

TL;DR: A learning algorithm for fair classification is proposed that achieves both group fairness (the proportion of members of a protected group receiving positive classification is identical to the proportion in the population as a whole) and individual fairness (similar individuals should be treated similarly).
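
The two notions in that TL;DR can be measured directly. The sketch below (with assumed names, not the paper's code) computes the statistical-parity gap as a group-fairness measure and a k-nearest-neighbour consistency score as an individual-fairness measure.

import numpy as np

def statistical_parity_gap(y_pred, groups):
    """Difference in positive-classification rates between two groups."""
    y_pred, groups = np.asarray(y_pred), np.asarray(groups)
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return abs(rates[0] - rates[1])

def consistency(X, y_pred, k=5):
    """1 minus the average disagreement between each point's decision and its k nearest neighbours."""
    X, y_pred = np.asarray(X, dtype=float), np.asarray(y_pred, dtype=float)
    diffs = 0.0
    for i in range(len(X)):
        dist = np.linalg.norm(X - X[i], axis=1)
        nn = np.argsort(dist)[1:k + 1]        # skip the point itself
        diffs += abs(y_pred[i] - y_pred[nn].mean())
    return 1.0 - diffs / len(X)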