Open Access
Posted Content

Fairness Through Awareness

TLDR
In this article, the authors propose a framework for fair classification comprising a task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand and an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly.
Abstract
We study fairness in classification, where individuals are classified, e.g., admitted to a university, and the goal is to prevent discrimination against individuals based on their membership in some group, while maintaining utility for the classifier (the university). The main conceptual contribution of this paper is a framework for fair classification comprising (1) a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand; (2) an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly. We also present an adaptation of our approach to achieve the complementary goal of "fair affirmative action," which guarantees statistical parity (i.e., the demographics of the set of individuals receiving any classification are the same as the demographics of the underlying population), while treating similar individuals as similarly as possible. Finally, we discuss the relationship of fairness to privacy: when fairness implies privacy, and how tools developed in the context of differential privacy may be applied to fairness.
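For binary decisions, the constraint in (2) takes the form of a small linear program: choose a probability p[i] of a positive outcome for each individual and require |p[i] - p[j]| <= d(i, j). The sketch below is a minimal illustration of that formulation; the utilities, the metric values, and the use of scipy are our own illustrative choices, not artifacts of the paper.

```python
# Minimal sketch: maximize expected utility subject to the Lipschitz
# fairness constraint |p[i] - p[j]| <= d(i, j), where p[i] is the
# probability that individual i receives the positive classification.
# Utilities and metric values below are hypothetical.
import numpy as np
from scipy.optimize import linprog

u = np.array([0.9, -0.5, 0.8])        # vendor utility per positive decision
d = np.array([[0.0, 0.7, 0.1],        # task-specific similarity metric d(i, j)
              [0.7, 0.0, 0.6],
              [0.1, 0.6, 0.0]])
n = len(u)

# Encode |p[i] - p[j]| <= d(i, j) as two linear inequalities per pair.
A_ub, b_ub = [], []
for i in range(n):
    for j in range(i + 1, n):
        row = np.zeros(n)
        row[i], row[j] = 1.0, -1.0
        A_ub += [row, -row]
        b_ub += [d[i, j], d[i, j]]

# linprog minimizes, so negate u to maximize expected utility.
res = linprog(-u, A_ub=np.array(A_ub), b_ub=b_ub, bounds=[(0.0, 1.0)] * n)
print("fair decision probabilities:", res.x)
```

In this toy run, individual 1 would receive probability 0 on utility alone, but the metric ties it to individuals the classifier favors, so the constraint pulls its probability up.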


Citations
Proceedings Article

Equality of opportunity in supervised learning

TL;DR: This work proposes a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features, and shows how to optimally adjust any learned predictor so as to remove discrimination according to this definition.
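In the binary case, one member of this family of criteria (equal opportunity) requires equal true positive rates across groups. A minimal sketch of the check; the labels, predictions, and group assignments are illustrative arrays, not data from the paper.

```python
# Minimal sketch: equal opportunity holds when P(y_hat=1 | y=1, group)
# is the same for every group. All arrays are illustrative.
import numpy as np

y     = np.array([1, 1, 0, 1, 0, 1, 1, 0])   # true labels
y_hat = np.array([1, 0, 0, 1, 1, 1, 0, 0])   # predictions
g     = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # group membership

for group in np.unique(g):
    mask = (g == group) & (y == 1)           # qualified members of the group
    tpr = y_hat[mask].mean()                 # true positive rate
    print(f"group {group}: TPR = {tpr:.2f}") # equal TPRs => no violation
```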
Posted Content

Towards A Rigorous Science of Interpretable Machine Learning

TL;DR: This position paper defines interpretability and describes when interpretability is needed (and when it is not), and suggests a taxonomy for rigorous evaluation and exposes open questions towards a more rigorous science of interpretable machine learning.
Journal Article

Semantics derived automatically from language corpora contain human-like biases

TL;DR: This article shows that applying machine learning to ordinary human language results in human-like semantic biases, and replicates a spectrum of known biases, as measured by the Implicit Association Test, using a widely used, purely statistical machine-learning model trained on a standard corpus of text from the World Wide Web.
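The measurement compares cosine similarities in the learned embedding: a target word's association score is its mean similarity to one attribute word set minus its mean similarity to another. A minimal sketch, with toy two-dimensional vectors standing in for embeddings trained on web text:

```python
# Minimal sketch of the embedding association test: bias is the difference
# in mean cosine similarity between a target word and two attribute sets.
# The 2-d vectors are toy stand-ins for real word embeddings.
import numpy as np

emb = {
    "flower":     np.array([0.9, 0.1]),
    "insect":     np.array([0.1, 0.9]),
    "pleasant":   np.array([0.8, 0.2]),
    "unpleasant": np.array([0.2, 0.8]),
}

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word, attrs_a, attrs_b):
    """Mean similarity to set A minus mean similarity to set B."""
    sim_a = np.mean([cos(emb[word], emb[x]) for x in attrs_a])
    sim_b = np.mean([cos(emb[word], emb[x]) for x in attrs_b])
    return sim_a - sim_b

# Positive score: the word leans toward the pleasant attributes.
print(association("flower", ["pleasant"], ["unpleasant"]))
print(association("insect", ["pleasant"], ["unpleasant"]))
```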
Posted Content

A Survey on Bias and Fairness in Machine Learning

TL;DR: This survey investigates real-world applications that have exhibited bias in various ways, and creates a taxonomy of the fairness definitions that machine learning researchers have proposed to avoid existing bias in AI systems.
Posted Content

Concrete Problems in AI Safety

TL;DR: A list of five practical research problems related to accident risk is presented, categorized according to whether the problem originates from having the wrong objective function, an objective function that is too expensive to evaluate frequently, or undesirable behavior during the learning process.
References
Book Chapter

Calibrating noise to sensitivity in private data analysis

TL;DR: In this article, the authors show that for several particular applications substantially less noise is needed than was previously understood to be the case, and also prove separation results showing the increased value of interactive sanitization mechanisms over non-interactive ones.
Posted Content

Incorporating Fairness into Game Theory and Economics

TL;DR: In this article, it is shown that every mutual-max or mutual-min Nash equilibrium is a fairness equilibrium; if payoffs are small, fairness equilibria are roughly the set of mutual-max and mutual-min outcomes, and if payoffs are large, fairness equilibria are roughly the set of Nash equilibria.
Journal Article

Calibrating noise to sensitivity in private data analysis

TL;DR: The study is extended to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f, which is the amount that any single argument to f can change its output.
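Concretely, the calibration described here adds Laplace noise with scale Δf/ε, where Δf is the sensitivity of f. A minimal sketch, with a counting query and an illustrative ε:

```python
# Minimal sketch of sensitivity-calibrated noise: return f(data) plus
# Laplace noise of scale sensitivity / epsilon. The query and epsilon
# are illustrative choices.
import numpy as np

def laplace_mechanism(data, f, sensitivity, epsilon,
                      rng=np.random.default_rng()):
    """f(data) perturbed with noise calibrated to f's sensitivity."""
    return f(data) + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# A counting query changes by at most 1 when one record changes,
# so its sensitivity is 1.
data = [0, 1, 1, 0, 1]
print("noisy count:", laplace_mechanism(data, f=sum, sensitivity=1.0,
                                        epsilon=0.5))
```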
Proceedings Article

Mechanism Design via Differential Privacy

TL;DR: It is shown that the recent notion of differential privacy, in addition to its own intrinsic virtue, can ensure that participants have limited effect on the outcome of the mechanism, and as a consequence have limited incentive to lie.
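The paper's central tool is the exponential mechanism, which samples an outcome with probability proportional to exp(ε·q / 2Δ) for a quality score q with sensitivity Δ, so no single participant can shift the outcome distribution by much. A minimal sketch; the auction setting and quality scores below are hypothetical.

```python
# Minimal sketch of the exponential mechanism: sample an outcome with
# probability proportional to exp(epsilon * quality / (2 * sensitivity)).
# Outcomes, quality scores, and epsilon are illustrative.
import numpy as np

def exponential_mechanism(outcomes, quality, epsilon, sensitivity=1.0,
                          rng=np.random.default_rng()):
    scores = np.array([quality[o] for o in outcomes], dtype=float)
    weights = np.exp(epsilon * scores / (2.0 * sensitivity))
    return rng.choice(outcomes, p=weights / weights.sum())

# Hypothetical auction: a price's quality is the revenue it would raise.
prices = [1.0, 2.0, 3.0]
revenue = {1.0: 10.0, 2.0: 14.0, 3.0: 9.0}
print("chosen price:", exponential_mechanism(prices, revenue, epsilon=0.5))
```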