Open Access Proceedings Article

Empirical Risk Minimization Under Fairness Constraints

TLDR
This work presents an approach based on empirical risk minimization, which incorporates a fairness constraint into the learning problem, and derives both risk and fairness bounds that support the statistical consistency of the approach.
Abstract
We address the problem of algorithmic fairness: ensuring that sensitive information does not unfairly influence the outcome of a classifier. We present an approach based on empirical risk minimization, which incorporates a fairness constraint into the learning problem. It encourages the conditional risk of the learned classifier to be approximately constant with respect to the sensitive variable. We derive both risk and fairness bounds that support the statistical consistency of our methodology. We specify our approach to kernel methods and observe that the fairness requirement implies an orthogonality constraint which can be easily added to these methods. We further observe that for linear models the constraint translates into a simple data preprocessing step. Experiments indicate that the method is empirically effective and performs favorably against state-of-the-art approaches.
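
The abstract notes that, for linear models, the fairness constraint reduces to an orthogonality condition on the weight vector that can be enforced by a simple data preprocessing step. Below is a minimal sketch of what such a step might look like; the variable names, the binary encoding of the sensitive attribute, and the choice of pivot feature are illustrative assumptions rather than the paper's exact recipe.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative data: X features, y labels in {0, 1}, s sensitive attribute in {0, 1}.
rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n)
s = rng.integers(0, 2, size=n)

# Difference of the group-conditional feature means among positively labeled examples.
# For a linear model, requiring <w, u> = 0 encourages the conditional risk to be
# (approximately) constant across the two sensitive groups.
u = X[(y == 1) & (s == 0)].mean(axis=0) - X[(y == 1) & (s == 1)].mean(axis=0)

# Preprocessing step: drop one pivot coordinate and shift the rest so that any
# linear model trained on the transformed data automatically satisfies <w, u> = 0.
i = int(np.argmax(np.abs(u)))                 # pivot feature (assumed nonzero in u)
keep = [j for j in range(d) if j != i]
X_fair = X[:, keep] - np.outer(X[:, i], u[keep] / u[i])

clf = LogisticRegression().fit(X_fair, y)
```

After this transformation, any weight vector learned on `X_fair` corresponds to an original-space weight vector orthogonal to `u`, so the fairness constraint is satisfied by construction rather than through an explicit constrained optimizer.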


Citations
Posted Content

Out-of-Distribution Generalization via Risk Extrapolation (REx)

TL;DR: This work introduces the principle of Risk Extrapolation (REx), shows conceptually how this principle enables out-of-distribution extrapolation, and demonstrates the effectiveness and scalability of REx instantiations on various OoD generalization tasks.
Book Chapter DOI

Fairness in Machine Learning

TL;DR: It is shown how causal Bayesian networks can play an important role in reasoning about and dealing with fairness, especially in complex unfairness scenarios, and how optimal transport theory can be leveraged to develop methods that impose constraints on the full shapes of the distributions corresponding to different sensitive attributes.
Posted Content

Fairness in Machine Learning: A Survey.

TL;DR: An overview of the different schools of thought and approaches to mitigating (social) biases and increasing fairness in the Machine Learning literature; it organises approaches into the widely accepted framework of pre-processing, in-processing, and post-processing methods, subcategorizing them further into 11 method areas.
Posted Content

Explainable Deep Learning: A Field Guide for the Uninitiated

TL;DR: A field guide to the space of explainable deep learning, aimed at those in the AI/ML field who are new to the area and intended as a starting point for those embarking on this research field.
Posted Content

Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty

TL;DR: This work describes how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems; it outlines methods for displaying uncertainty to stakeholders and recommends how to collect the information required for incorporating uncertainty into existing ML pipelines.