Open Access Proceedings Article
Optimized Pre-Processing for Discrimination Prevention
Flavio P. Calmon, Dennis Wei, Bhanukiran Vinzamuri, Karthikeyan Natesan Ramamurthy, Kush R. Varshney
Advances in Neural Information Processing Systems, Vol. 30, pp. 3992–4001
TL;DR: This paper proposes a convex optimization for learning a data transformation with three goals: controlling discrimination, limiting distortion in individual data samples, and preserving utility, and it characterizes the impact of limited sample size on accomplishing this objective.

Abstract: Non-discrimination is a recognized objective in algorithmic decision making. In this paper, we introduce a novel probabilistic formulation of data pre-processing for reducing discrimination. We propose a convex optimization for learning a data transformation with three goals: controlling discrimination, limiting distortion in individual data samples, and preserving utility. We characterize the impact of limited sample size in accomplishing this objective. Two instances of the proposed optimization are applied to datasets, including one on real-world criminal recidivism. Results show that discrimination can be greatly reduced at a small cost in classification accuracy.
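The discrimination-control goal described in the abstract can be illustrated with a small numerical check (a minimal sketch; the function name, toy data, and the exact probability-ratio form of the constraint are illustrative assumptions rather than the paper's full formulation):

```python
import numpy as np

def discrimination_control(y_hat, d, eps):
    """Largest probability-ratio gap between the outcome distribution
    within each protected group and the overall outcome distribution:
        max_{y, g} | P(Y_hat = y | D = g) / P(Y_hat = y) - 1 |,
    together with whether it stays below the tolerance eps."""
    worst = 0.0
    for y in np.unique(y_hat):
        p_y = np.mean(y_hat == y)                 # marginal P(Y_hat = y)
        for g in np.unique(d):
            p_y_g = np.mean(y_hat[d == g] == y)   # conditional P(Y_hat = y | D = g)
            worst = max(worst, abs(p_y_g / p_y - 1.0))
    return worst, worst <= eps

# toy data: binary outcome mildly correlated with group membership
rng = np.random.default_rng(0)
d = rng.integers(0, 2, size=10_000)
y_hat = (rng.random(10_000) < 0.4 + 0.05 * d).astype(int)
gap, ok = discrimination_control(y_hat, d, eps=0.2)
```

The paper's optimization searches over randomized data transformations so that the transformed data satisfies a constraint of this kind while the distortion and utility objectives are controlled at the same time.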
Citations
Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification
Joy Buolamwini, Timnit Gebru
TL;DR: In commercial API-based classifiers of gender from facial images, including IBM Watson Visual Recognition, the highest error rates occur on images of darker-skinned women, while the most accurate results are for lighter-skinned men.
Posted Content
A Survey on Bias and Fairness in Machine Learning
TL;DR: This survey investigates real-world applications that have exhibited bias in various ways and creates a taxonomy of the fairness definitions that machine learning researchers have proposed to avoid bias in AI systems.
Posted Content
AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias
Rachel K. E. Bellamy, Kuntal Dey, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Kalapriya Kannan, Pranay Lohia, Jacquelyn A. Martino, Sameep Mehta, Aleksandra Mojsilovic, Seema Nagar, Karthikeyan Natesan Ramamurthy, John T. Richards, Diptikalyan Saha, Prasanna Sattigeri, Moninder Singh, Kush R. Varshney, Yunfeng Zhang
TL;DR: A new open-source Python toolkit for algorithmic fairness, AI Fairness 360 (AIF360), released under an Apache v2.0 license to facilitate the transition of fairness research algorithms into industrial settings and to provide a common framework for fairness researchers to share and evaluate algorithms.
Proceedings Article (DOI)
A comparative study of fairness-enhancing interventions in machine learning
Sorelle A. Friedler, Carlos Scheidegger, Suresh Venkatasubramanian, Sonam Choudhary, Evan P. Hamilton, Derek Roth
TL;DR: It is found that fairness-preserving algorithms tend to be sensitive to fluctuations in dataset composition and to different forms of preprocessing, indicating that fairness interventions might be more brittle than previously thought.
Posted Content
A Reductions Approach to Fair Classification
TL;DR: The key idea is to reduce fair classification to a sequence of cost-sensitive classification problems, whose solutions yield a randomized classifier with the lowest (empirical) error subject to the desired constraints.
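The reduction described above can be hinted at with a toy cost-sensitive subproblem (an illustrative sketch, not the authors' algorithm; in the actual method the per-example costs come from Lagrange multipliers on the fairness constraints, and the subproblem is re-solved as those multipliers are updated):

```python
import numpy as np

def cost_sensitive_threshold(scores, y, weights):
    """Solve one cost-sensitive subproblem of the reduction: choose the
    score threshold that minimizes the weighted 0/1 error.  Re-solving
    this with fairness-derived weights yields the randomized classifier
    with the lowest empirical error subject to the constraints."""
    best_t, best_err = None, np.inf
    for t in np.unique(scores):
        pred = (scores >= t).astype(int)          # classify by thresholding the score
        err = np.sum(weights * (pred != y))       # weighted misclassification cost
        if err < best_err:
            best_t, best_err = t, err
    return best_t

scores = np.array([0.1, 0.4, 0.6, 0.9])
y = np.array([0, 0, 1, 1])
t = cost_sensitive_threshold(scores, y, np.ones(4))   # unweighted (uniform-cost) case
```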
References
Proceedings Article (DOI)
t-Closeness: Privacy Beyond k-Anonymity and l-Diversity
TL;DR: t-closeness requires that the distribution of a sensitive attribute in any equivalence class be close to the distribution of that attribute in the overall table (i.e., the distance between the two distributions should be no more than a threshold t).
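The t-closeness condition can be checked directly for a single equivalence class (a minimal sketch with hypothetical toy data; total variation distance is used, which matches the Earth Mover's Distance of the paper when every pair of distinct categories is at ground distance 1):

```python
import numpy as np

def t_close(class_values, table_values, t):
    """Check t-closeness for one equivalence class: the distance between
    the sensitive-attribute distribution inside the class and the
    distribution over the whole table must be at most the threshold t."""
    vals = np.unique(table_values)
    p = np.array([np.mean(class_values == v) for v in vals])  # class distribution
    q = np.array([np.mean(table_values == v) for v in vals])  # table distribution
    dist = 0.5 * np.abs(p - q).sum()                          # total variation distance
    return dist, dist <= t

table = np.array(["flu"] * 5 + ["hiv"] * 5)   # sensitive column of the whole table
eq_class = np.array(["flu", "flu", "hiv"])    # sensitive values in one equivalence class
dist, ok = t_close(eq_class, table, t=0.2)
```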
Proceedings Article
Equality of opportunity in supervised learning
TL;DR: This work proposes a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features, and shows how to optimally adjust any learned predictor so as to remove discrimination according to this definition.
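The criterion above compares true-positive rates across protected groups, which is easy to sketch (illustrative function and toy data, not the authors' code):

```python
import numpy as np

def equal_opportunity_gap(y_true, y_pred, group):
    """Equality of opportunity asks that the true-positive rate
    P(Y_hat = 1 | Y = 1, D = g) be equal across protected groups;
    return the largest gap between group TPRs (0 means it holds)."""
    tprs = [np.mean(y_pred[(group == g) & (y_true == 1)])
            for g in np.unique(group)]
    return max(tprs) - min(tprs)

y_true = np.array([1, 1, 1, 1, 1, 1])
y_pred = np.array([1, 1, 0, 1, 0, 0])
group  = np.array([0, 0, 0, 1, 1, 1])
gap = equal_opportunity_gap(y_true, y_pred, group)   # group TPRs: 2/3 vs 1/3
```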
Proceedings Article (DOI)
Fairness through awareness
TL;DR: A framework for fair classification is presented, comprising a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand, and an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly.
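The fairness constraint above is a Lipschitz condition on the classifier's output distributions, which can be checked pairwise (a minimal sketch; the metrics and toy data here are hypothetical stand-ins, just as the task-specific metric in the paper is):

```python
from itertools import combinations

def lipschitz_fair(X, out_probs, d_x, d_out, tol=1e-12):
    """Individual fairness as a Lipschitz condition: for every pair of
    individuals x, x', the distance between the classifier's output
    distributions must not exceed the distance between the individuals,
    i.e. d_out(M(x), M(x')) <= d_x(x, x')."""
    return all(d_out(out_probs[i], out_probs[j]) <= d_x(X[i], X[j]) + tol
               for i, j in combinations(range(len(X)), 2))

d = lambda a, b: abs(a - b)   # stand-in metric on features and on acceptance probabilities
X = [0.0, 0.5, 1.0]
fair = lipschitz_fair(X, [0.2, 0.4, 0.6], d, d)     # smooth outputs satisfy the condition
unfair = lipschitz_fair(X, [0.0, 0.9, 1.0], d, d)   # a jump between similar individuals violates it
```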