AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias
Citations
1,571 citations
Cites background or methods from "AI Fairness 360: An Extensible Tool..."
...If the algorithm is allowed to modify the training data, then pre-processing can be used [11]....
[...]
...AI Fairness 360 (AIF360) is another toolkit, developed by IBM, that aims to help move fairness research algorithms into industrial settings, to provide a benchmark on which fairness algorithms can be evaluated, and to create an environment in which fairness researchers can share their ideas [11]....
[...]
...allowed to change the learning procedure of a machine learning model, then in-processing can be used during training, either by incorporating changes into the objective function or by imposing a constraint [11, 14]....
[...]
...If the algorithm can only treat the learned model as a black box, with no ability to modify the training data or the learning algorithm, then only post-processing can be used: the labels initially assigned by the black-box model are reassigned by a function during the post-processing phase [11, 14]....
[...]
...In addition, IBM’s AI Fairness 360 (AIF360) toolkit [11] implements many current fair learning algorithms and demonstrates some of their results as demos, which interested users can run to compare different methods with respect to different fairness measures....
[...]
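The excerpts above distinguish pre-, in-, and post-processing interventions. As an illustration of the pre-processing case, here is a minimal stdlib sketch of reweighing (Kamiran & Calders, 2012), one of the pre-processing algorithms shipped with AIF360: each (group, label) cell receives the weight P(group)·P(label) / P(group, label), which makes the protected attribute and the label statistically independent under the weighted distribution. Function and variable names here are our own, not the AIF360 API.

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Instance weights w(a, y) = P(a) * P(y) / P(a, y).

    Under these weights the protected attribute `a` and the label `y`
    become statistically independent (Kamiran & Calders, 2012).
    Illustrative sketch; not the AIF360 implementation.
    """
    n = len(labels)
    p_a = Counter(groups)                 # marginal counts of the protected attribute
    p_y = Counter(labels)                 # marginal counts of the label
    p_ay = Counter(zip(groups, labels))   # joint counts per (group, label) cell
    return {(a, y): (p_a[a] / n) * (p_y[y] / n) / (p_ay[(a, y)] / n)
            for (a, y) in p_ay}
```

On a dataset where group and label are already independent, every cell gets weight 1.0; departures from 1.0 show which (group, label) cells are over- or under-represented.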
257 citations
Cites background or methods from "AI Fairness 360: An Extensible Tool..."
...Researchers have accompanied the proliferation of AI ethics principles by creating mathematical methods and software toolkits for developing fairer [8, 75, 76], more interpretable [72], and privacy-preserving AI systems [45]....
[...]
...Both concurrently and in response to these principles, researchers have created mathematical methods and software toolkits for developing fairer [8, 75, 76], more interpretable [72], and privacy-preserving AI systems [45]....
[...]
References
2,690 citations
"AI Fairness 360: An Extensible Tool..." refers to methods in this paper
...Post-processing algorithms: Equalized odds postprocessing (Hardt et al., 2016) solves a linear program to find probabilities with which to change output labels to optimize equalized odds....
[...]
..., 2012). Post-processing algorithms: Equalized odds post-processing (Hardt et al., 2016); Calibrated eq....
[...]
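As the excerpt notes, Hardt et al.'s post-processor solves a linear program over label-flipping probabilities; the quantities it equalizes are the group-conditional true- and false-positive rates. A minimal sketch (our own function names, not the AIF360 API) of the two gaps that method drives toward zero:

```python
def group_rates(y_true, y_pred, groups, g):
    """True-positive and false-positive rates restricted to group g."""
    tp = fp = fn = tn = 0
    for yt, yp, a in zip(y_true, y_pred, groups):
        if a != g:
            continue
        if yt == 1 and yp == 1:
            tp += 1
        elif yt == 0 and yp == 1:
            fp += 1
        elif yt == 1:
            fn += 1
        else:
            tn += 1
    return tp / (tp + fn), fp / (fp + tn)

def equalized_odds_gaps(y_true, y_pred, groups, unpriv=0, priv=1):
    """(TPR gap, FPR gap) between groups; equalized odds holds iff both are 0."""
    tpr_u, fpr_u = group_rates(y_true, y_pred, groups, unpriv)
    tpr_p, fpr_p = group_rates(y_true, y_pred, groups, priv)
    return tpr_u - tpr_p, fpr_u - fpr_p
```

The linear program itself then chooses, per group and per predicted label, a probability of flipping the label so that both gaps shrink with minimal loss of accuracy.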
1,667 citations
"AI Fairness 360: An Extensible Tool..." refers to methods in this paper
...We currently provide an interface to seven popular datasets: Adult Census Income (Kohavi, 1996), German Credit (Dheeru & Karra Taniskidou, 2017), ProPublica Recidivism (COMPAS) (Angwin et al., 2016), Bank Marketing (Moro et al., 2014), and three versions of Medical Expenditure Panel Surveys (AHRQ, 2015; 2016)....
[...]
...The processed Adult Census Income, German Credit, and COMPAS datasets contain 45,222, 1,000 and 6,167 records respectively....
[...]
...An example result for Adult Census Income dataset with race as protected attribute is shown in Figure 5....
[...]
...C.1 Datasets C.1.1 Adult Census Income For protected attribute sex, Male is privileged, and Female is unprivileged....
[...]
1,444 citations
"AI Fairness 360: An Extensible Tool..." refers to background in this paper
...The metrics therein are the group fairness measures of disparate impact (DI) and statistical parity difference (SPD) (the ratio and difference, respectively, of the base rate conditioned on the protected attribute) and the individual fairness measure of consistency defined by Zemel et al. (2013)....
[...]
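The two group metrics quoted above have direct closed forms: SPD is the difference, and DI the ratio, of the base rates of the unprivileged and privileged groups. A minimal stdlib sketch under those definitions (function names are our own, not the AIF360 API):

```python
def base_rate(labels, groups, g):
    """P(y = 1 | group = g): fraction of favorable outcomes in group g."""
    ys = [y for y, a in zip(labels, groups) if a == g]
    return sum(ys) / len(ys)

def statistical_parity_difference(labels, groups, unpriv=0, priv=1):
    """SPD: base-rate difference, unprivileged minus privileged (0 is parity)."""
    return base_rate(labels, groups, unpriv) - base_rate(labels, groups, priv)

def disparate_impact(labels, groups, unpriv=0, priv=1):
    """DI: base-rate ratio, unprivileged over privileged (1 is parity)."""
    return base_rate(labels, groups, unpriv) / base_rate(labels, groups, priv)
```

By convention, SPD near 0 and DI near 1 indicate parity; the common "80% rule" flags DI below 0.8 as potential disparate impact.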
..., 2017). Algorithms: Learning fair representations (Zemel et al., 2013); Disparate impact remover (Feldman et al....
[...]
...Learning fair representations (Zemel et al., 2013) finds a latent representation that encodes the data well but obfuscates information about protected attributes....
[...]
1,434 citations
"AI Fairness 360: An Extensible Tool..." refers to background or methods in this paper
...Disparate impact remover (Feldman et al., 2015) edits feature values to increase group fairness while preserving rank-ordering within groups....
[...]
...It includes several bias detection metrics as well as bias mitigation methods, including disparate impact remover (Feldman et al., 2015), prejudice remover (Kamishima et al., 2012), and two-Naive Bayes (Calders & Verwer, 2010)....
[...]
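The excerpt describes disparate impact remover as editing feature values while preserving rank order within each group. A simplified full-repair sketch in the spirit of Feldman et al. (2015): each value is mapped to a "median" distribution evaluated at that value's within-group quantile. This is an illustrative reconstruction under our own names, not AIF360's implementation; ties and single-member groups are not handled.

```python
def _quantile(sorted_vals, q):
    """Linear-interpolation quantile of an ascending list, q in [0, 1]."""
    pos = q * (len(sorted_vals) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(sorted_vals) - 1)
    return sorted_vals[lo] + (pos - lo) * (sorted_vals[hi] - sorted_vals[lo])

def repair_feature(values, groups):
    """Full repair of one numeric feature.

    Every value is replaced by the median distribution evaluated at the
    value's within-group quantile rank, so rank order inside each group
    is preserved while the groups' distributions become identical.
    Sketch only: assumes distinct values and groups of size > 1.
    """
    by_group = {}
    for v, a in zip(values, groups):
        by_group.setdefault(a, []).append(v)
    sorted_groups = {a: sorted(vs) for a, vs in by_group.items()}
    repaired = []
    for v, a in zip(values, groups):
        vs = sorted_groups[a]
        q = vs.index(v) / (len(vs) - 1)  # within-group quantile rank
        # median across groups of each group's value at the same quantile
        med = sorted(_quantile(s, q) for s in sorted_groups.values())
        k = len(med) // 2
        m = med[k] if len(med) % 2 else 0.5 * (med[k - 1] + med[k])
        repaired.append(m)
    return repaired
```

With two groups this collapses both group distributions onto their quantile-wise midpoint, removing the group's disparate impact on this feature while keeping each group's internal ordering intact.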