Open Access Posted Content
Flexibly Fair Representation Learning by Disentanglement
Elliot Creager, David Madras, Jörn-Henrik Jacobsen, Marissa A. Weis, Kevin Swersky, Toniann Pitassi, Richard S. Zemel
TL;DR: This work proposes an algorithm for learning compact representations of datasets that are useful for reconstruction and prediction, but are also flexibly fair, meaning they can be easily modified at test time to achieve subgroup demographic parity.

Abstract:
We consider the problem of learning representations that achieve group and subgroup fairness with respect to multiple sensitive attributes. Taking inspiration from the disentangled representation learning literature, we propose an algorithm for learning compact representations of datasets that are useful for reconstruction and prediction, but are also flexibly fair, meaning they can be easily modified at test time to achieve subgroup demographic parity with respect to multiple sensitive attributes and their conjunctions. We show empirically that the resulting encoder, which does not require the sensitive attributes for inference, enables the adaptation of a single representation to a variety of fair classification tasks with new target labels and subgroup definitions.
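As a quick illustration of the demographic-parity criterion the abstract targets, the subgroup gap can be measured as the spread in positive-prediction rates across sensitive groups (and their conjunctions). This is a minimal sketch; the function name and the toy data are illustrative, not from the paper:

```python
import numpy as np

def demographic_parity_gap(y_pred, groups):
    """Max difference in positive-prediction rate across subgroups."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# toy binary predictions and a binary sensitive attribute
y_pred = np.array([1, 1, 0, 1, 0, 0, 1, 0])
a      = np.array([0, 0, 0, 0, 1, 1, 1, 1])
gap = demographic_parity_gap(y_pred, a)  # |3/4 - 1/4| = 0.5
```

For conjunctions of multiple sensitive attributes, `groups` can encode each attribute combination (e.g. `2 * a1 + a2`) and the same gap applies to the resulting subgroups.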
Citations
Posted Content
A Survey on Bias and Fairness in Machine Learning
TL;DR: This survey investigates real-world applications that have exhibited bias in various ways, and creates a taxonomy of the fairness definitions machine learning researchers have proposed to mitigate such bias in AI systems.
Journal ArticleDOI
Shortcut learning in deep neural networks
Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard S. Zemel, Wieland Brendel, Matthias Bethge, Felix A. Wichmann
TL;DR: A set of recommendations for model interpretation and benchmarking is developed, highlighting recent advances in machine learning to improve robustness and transferability from the lab to real-world applications.
Proceedings Article
Minimax Pareto Fairness: A Multi Objective Perspective
TL;DR: In this paper, the authors formulate and formally characterize group fairness as a multi-objective optimization problem, where each sensitive group risk is a separate objective, and propose a fairness criterion where a classifier achieves minimax risk and is Pareto-efficient w.r.t. all groups.
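The minimax criterion described in this TL;DR can be illustrated on toy numbers: among candidate classifiers, select the one whose worst sensitive-group risk is smallest. A hedged sketch; the risk table is invented for illustration:

```python
import numpy as np

# risks[i, g]: risk of candidate classifier i on sensitive group g (toy numbers)
risks = np.array([
    [0.10, 0.40],   # low average risk but much worse for group 1
    [0.20, 0.25],   # more balanced across groups
    [0.30, 0.30],
])
worst = risks.max(axis=1)          # worst-group risk of each candidate
minimax_idx = int(worst.argmin())  # candidate 1: worst-group risk 0.25
```

Note the minimax choice (candidate 1) differs from the average-risk minimizer (candidate 0), which is the tension the paper formalizes.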
Posted Content
Fairness in Deep Learning: A Computational Perspective
TL;DR: In this paper, the authors review recent progress on algorithmic fairness problems in deep learning from a computational perspective, showing that interpretability can serve as a useful ingredient for diagnosing the causes of algorithmic discrimination.
References
Proceedings Article
Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba
TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
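The adaptive moment estimates this TL;DR refers to can be sketched in a few lines. The update below follows the standard Adam recurrences with bias correction; the toy objective f(x) = x² is ours, not from the paper:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update using adaptive estimates of first and second moments."""
    m = b1 * m + (1 - b1) * grad          # biased first-moment estimate
    v = b2 * v + (1 - b2) * grad**2       # biased second-moment estimate
    m_hat = m / (1 - b1**t)               # bias-corrected moments
    v_hat = v / (1 - b2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# minimize f(x) = x^2, so grad = 2x, starting from x = 1.0
x, m, v = 1.0, 0.0, 0.0
for t in range(1, 501):
    x, m, v = adam_step(x, 2 * x, m, v, t, lr=0.05)
```

Because the step is normalized by the second-moment estimate, the early updates move at roughly the learning rate regardless of the gradient's scale.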
Proceedings Article
Auto-Encoding Variational Bayes
Diederik P. Kingma, Max Welling
TL;DR: A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
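A key device in this algorithm is the reparameterization trick: a Gaussian sample is written as a deterministic function of the variational parameters plus independent noise, which keeps the sample differentiable with respect to those parameters. A minimal NumPy sketch with toy dimensions of our choosing:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var, rng):
    """Sample z ~ N(mu, sigma^2) as mu + sigma * eps with eps ~ N(0, I),
    so z is differentiable w.r.t. mu and log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL(N(mu, sigma^2) || N(0, I)) for a diagonal Gaussian."""
    return -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))

mu, log_var = np.zeros(4), np.zeros(4)
z = reparameterize(mu, log_var, rng)
kl = kl_to_standard_normal(mu, log_var)  # 0 when q equals the prior
```

In a full VAE, the ELBO combines this KL term with a reconstruction likelihood, and the reparameterized sample lets gradients flow through the sampling step.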
Journal ArticleDOI
Independent component analysis, a new concept?
TL;DR: An efficient algorithm is proposed that computes the ICA of a data matrix in polynomial time; the method may be seen as an extension of principal component analysis (PCA).
Journal ArticleDOI
Independent component analysis: algorithms and applications
Aapo Hyvärinen, Erkki Oja
TL;DR: The basic theory and applications of ICA are presented, and the goal is to find a linear representation of non-Gaussian data so that the components are statistically independent, or as independent as possible.
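The linear generative model this TL;DR describes, x = A s with statistically independent non-Gaussian sources s, can be sketched as follows. Note the unmixing here uses the true inverse of the mixing matrix purely for illustration; actual ICA algorithms such as FastICA estimate the unmixing matrix from x alone:

```python
import numpy as np

rng = np.random.default_rng(0)

# two independent non-Gaussian (uniform) sources, linearly mixed: x = A @ s
s = rng.uniform(-1, 1, size=(2, 1000))
A = np.array([[1.0, 0.5],
              [0.3, 1.0]])
x = A @ s

# ICA seeks an unmixing matrix W with W @ x ≈ s (up to scale and permutation);
# here we use the true inverse just to illustrate the model
W = np.linalg.inv(A)
s_hat = W @ x
```

Non-Gaussianity of the sources is essential: for Gaussian sources, any rotation of the mixture is equally independent, so W would not be identifiable.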