Economics Of Discrimination

About: The article was published on 2016-01-01 and is open access. It has received 1,631 citations to date.
Citations
Posted ContentDOI
TL;DR: Using PSID microdata over 1980-2010, the authors provide new empirical evidence on the extent of and trends in the gender wage gap and show that women's work force interruptions and shorter hours remain significant in high-skilled occupations, possibly due to compensating differentials.
Abstract: Using PSID microdata over 1980-2010, we provide new empirical evidence on the extent of and trends in the gender wage gap, which declined considerably over this period. By 2010, conventional human capital variables taken together explained little of the gender wage gap, while gender differences in occupation and industry continued to be important. Moreover, the gender pay gap declined much more slowly at the top of the wage distribution than at the middle or the bottom and by 2010 was noticeably higher at the top. We then survey the literature to identify what has been learned about the explanations for the gap. We conclude that many of the traditional explanations continue to have salience. Although human capital factors are now relatively unimportant in the aggregate, women's work force interruptions and shorter hours remain significant in high-skilled occupations, possibly due to compensating differentials. Gender differences in occupations and industries, as well as differences in gender roles and the gender division of labor, remain important, and research based on experimental evidence strongly suggests that discrimination cannot be discounted. Psychological attributes or noncognitive skills comprise one of the newer explanations for gender differences in outcomes. Our effort to assess the quantitative evidence on the importance of these factors suggests that they account for a small to moderate portion of the gender pay gap, considerably smaller than, say, occupation and industry effects, though they appear to modestly contribute to these differences.
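
The paper's central exercise, asking how much of the raw gender gap survives successive sets of controls (human capital alone versus human capital plus occupation and industry), can be illustrated with a schematic regression. The sketch below runs on synthetic data: the variable names, coefficients, and the use of statsmodels are assumptions for the example, not the authors' PSID specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf  # assumption: statsmodels is installed

# Synthetic stand-in for worker-level microdata; names and magnitudes are invented.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "education": rng.normal(13, 2, n),
    "experience": rng.uniform(0, 30, n),
    "occupation": rng.integers(0, 5, n),
    "industry": rng.integers(0, 5, n),
})
df["log_wage"] = (2.0 + 0.08 * df["education"] + 0.02 * df["experience"]
                  - 0.15 * df["female"] + rng.normal(0, 0.3, n))

def adjusted_gap(data, controls=""):
    """Coefficient on `female`: the log-wage gap remaining after the listed controls."""
    formula = "log_wage ~ female" + (" + " + controls if controls else "")
    return smf.ols(formula, data=data).fit().params["female"]

raw_gap  = adjusted_gap(df)                                          # unadjusted gap
hc_gap   = adjusted_gap(df, "education + experience")                # human capital only
full_gap = adjusted_gap(df, "education + experience + C(occupation) + C(industry)")
print(raw_gap, hc_gap, full_gap)
```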

984 citations

Proceedings ArticleDOI
04 Aug 2017
TL;DR: This work reformulates algorithmic fairness as constrained optimization: the objective is to maximize public safety while satisfying formal fairness constraints designed to reduce racial disparities. The principles discussed apply to other domains and to human decision makers carrying out structured decision rules.
Abstract: Algorithms are now regularly used to decide whether defendants awaiting trial are too dangerous to be released back into the community. In some cases, black defendants are substantially more likely than white defendants to be incorrectly classified as high risk. To mitigate such disparities, several techniques have recently been proposed to achieve algorithmic fairness. Here we reformulate algorithmic fairness as constrained optimization: the objective is to maximize public safety while satisfying formal fairness constraints designed to reduce racial disparities. We show that for several past definitions of fairness, the optimal algorithms that result require detaining defendants above race-specific risk thresholds. We further show that the optimal unconstrained algorithm requires applying a single, uniform threshold to all defendants. The unconstrained algorithm thus maximizes public safety while also satisfying one important understanding of equality: that all individuals are held to the same standard, irrespective of race. Because the optimal constrained and unconstrained algorithms generally differ, there is tension between improving public safety and satisfying prevailing notions of algorithmic fairness. By examining data from Broward County, Florida, we show that this trade-off can be large in practice. We focus on algorithms for pretrial release decisions, but the principles we discuss apply to other domains, and also to human decision makers carrying out structured decision rules.
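
The contrast the paper draws between the unconstrained optimum (one uniform risk threshold for everyone) and the group-specific thresholds that several fairness constraints turn out to require can be sketched in a few lines. The risk scores, group labels, and threshold values below are hypothetical, chosen only to make the contrast concrete.

```python
import numpy as np

def detain_uniform(risk, threshold=0.5):
    """Apply one shared risk threshold to every defendant."""
    return risk >= threshold

def detain_group_specific(risk, groups, thresholds):
    """Apply a separate threshold per group, as some fairness constraints imply."""
    cutoffs = np.array([thresholds[g] for g in groups])
    return risk >= cutoffs

# Hypothetical risk estimates and group labels, for illustration only.
risk = np.array([0.35, 0.45, 0.55, 0.65])
groups = ["a", "b", "a", "b"]

print(detain_uniform(risk))                                          # same standard for all
print(detain_group_specific(risk, groups, {"a": 0.40, "b": 0.60}))   # standard varies by group
```

The difference between these two rules is the tension the paper quantifies: the group-specific rule can satisfy a formal parity constraint, while the uniform rule holds every individual to the same standard.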

959 citations

Posted Content
TL;DR: It is argued that it is often preferable to treat similarly risky people similarly, based on the most statistically accurate estimates of risk that one can produce, rather than requiring that algorithms satisfy popular mathematical formalizations of fairness.
Abstract: The nascent field of fair machine learning aims to ensure that decisions guided by algorithms are equitable. Over the last several years, three formal definitions of fairness have gained prominence: (1) anti-classification, meaning that protected attributes---like race, gender, and their proxies---are not explicitly used to make decisions; (2) classification parity, meaning that common measures of predictive performance (e.g., false positive and false negative rates) are equal across groups defined by the protected attributes; and (3) calibration, meaning that conditional on risk estimates, outcomes are independent of protected attributes. Here we show that all three of these fairness definitions suffer from significant statistical limitations. Requiring anti-classification or classification parity can, perversely, harm the very groups they were designed to protect; and calibration, though generally desirable, provides little guarantee that decisions are equitable. In contrast to these formal fairness criteria, we argue that it is often preferable to treat similarly risky people similarly, based on the most statistically accurate estimates of risk that one can produce. Such a strategy, while not universally applicable, often aligns well with policy objectives; notably, this strategy will typically violate both anti-classification and classification parity. In practice, it requires significant effort to construct suitable risk estimates. One must carefully define and measure the targets of prediction to avoid retrenching biases in the data. But, importantly, one cannot generally address these difficulties by requiring that algorithms satisfy popular mathematical formalizations of fairness. By highlighting these challenges in the foundation of fair machine learning, we hope to help researchers and practitioners productively advance the area.
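
Two of the definitions named above lend themselves to a quick empirical check. The sketch below, on purely synthetic data, compares false positive rates across groups (one version of classification parity) and outcome rates within score bins (calibration); the column names, the 0.5 cutoff, and the binning are illustrative assumptions, not the authors' code.

```python
import numpy as np
import pandas as pd

def false_positive_rate(y_true, y_pred):
    """Share of actual negatives that the model flags as positive."""
    negatives = y_true == 0
    return y_pred[negatives].mean()

# Purely synthetic data: a risk score, a thresholded prediction, and an outcome.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "group": ["a"] * 500 + ["b"] * 500,
    "score": rng.uniform(size=1000),
})
df["pred"] = (df["score"] >= 0.5).astype(int)
df["y"] = (rng.uniform(size=1000) < df["score"]).astype(int)

# Classification parity: are false positive rates equal across groups?
fpr_by_group = {
    g: false_positive_rate(sub["y"].to_numpy(), sub["pred"].to_numpy())
    for g, sub in df.groupby("group")
}

# Calibration: within score bins, are observed outcome rates similar across groups?
df["bin"] = pd.cut(df["score"], bins=5)
calibration = df.groupby(["bin", "group"], observed=True)["y"].mean().unstack("group")

print(fpr_by_group)
print(calibration)
```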

685 citations


Cites background from "Economics Of Discrimination"

  • ...notes that taste-based discrimination is independent of intent, and covers situations in which a decision maker acts “not because he is prejudiced against them but because he is ignorant of their true efficiency” (Becker, 1957)....

    [...]

  • ..., 1973; Phelps, 1972) and taste-based (Becker, 1957), both of which focus on utility....

    [...]

Journal ArticleDOI
TL;DR: This paper finds that Airbnb guests with distinctively African-American names are 16% less likely to be accepted than identical guests with distinctively White names, and that the effect is most pronounced among hosts who have never had an African-American guest, suggesting that only a subset of hosts discriminate.
Abstract: In an experiment on Airbnb, we find that applications from guests with distinctively African-American names are 16% less likely to be accepted relative to identical guests with distinctively White names. Discrimination occurs among landlords of all sizes, including small landlords sharing the property and larger landlords with multiple properties. It is most pronounced among hosts who have never had an African-American guest, suggesting only a subset of hosts discriminate. While rental markets have achieved significant reductions in discrimination in recent decades, our results suggest that Airbnb’s current design choices facilitate discrimination and raise the possibility of erasing some of these civil rights gains.

581 citations


Cites background from "Economics Of Discrimination"

  • ...Bertrand and Mullainathan (2004) find a 10% to 6% gap in callback rates for jobs....

    [...]

  • ...Bertrand and Mullainathan (2004) find a 10% to 6% gap in callback rates for jobs. Pope & Sydnor (2011) find a 9% to 6% gap in lending rates in an online lending market....

    [...]

  • ...For example, Becker (1957) formalizes racial discrimination as distaste for interactions with individuals of a certain race....

    [...]

  • ...Becker (1957) suggests that this loss will help drive discriminating firms out of competitive markets....

    [...]

Journal ArticleDOI
TL;DR: In this article, the authors used longitudinal data on the hourly wages of Portuguese workers matched with balance sheet information for firms to show that the wages of both men and women contain firm-specific premiums that are strongly correlated with employer productivity.
Abstract: There is growing evidence that firm-specific pay premiums are an important source of wage inequality. These premiums will contribute to the gender wage gap if women are less likely to work at high-paying firms or if women negotiate worse wage bargains with their employers than men. Using longitudinal data on the hourly wages of Portuguese workers matched with balance sheet information for firms, we show that the wages of both men and women contain firm-specific premiums that are strongly correlated with employer productivity. We then show how the impact of these firm-specific pay differentials on the gender wage gap can be decomposed into a combination of bargaining and sorting effects. Consistent with the bargaining literature, we find that women receive only 90% of the firm-specific pay premiums earned by men. Notably, we obtain very similar estimates of the relative bargaining power ratio from our analysis of between-firm wage premiums and from analyzing changes in

433 citations
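
The split described in the abstract above, between sorting (women working at lower-premium firms) and bargaining (women capturing a smaller share of a given firm's premium), can be written as a simple accounting identity. The premium values below are invented for illustration; only the 0.90 bargaining ratio echoes the figure quoted in the abstract, and this is a stylized sketch rather than the authors' estimator.

```python
# psi_* are hypothetical average firm wage premiums (in log points) on a common male scale;
# theta is the relative bargaining ratio (the abstract reports women receive ~90%).
psi_men_firms   = 0.20   # average premium at the firms where men work
psi_women_firms = 0.17   # average premium at the firms where women work
theta           = 0.90   # share of the firm premium that women capture

total_contribution = psi_men_firms - theta * psi_women_firms   # firm-premium gap between men and women
sorting_effect     = psi_men_firms - psi_women_firms           # women sort into lower-premium firms
bargaining_effect  = (1 - theta) * psi_women_firms             # women capture a smaller share within firms
assert abs(total_contribution - (sorting_effect + bargaining_effect)) < 1e-12
```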

References

Journal ArticleDOI
TL;DR: The authors explored data from a field test of how an algorithm delivered ads promoting job opportunities in the science, technology, engineering, and math fields; the ad was explicitly intended to be gender neutral.
Abstract: We explore data from a field test of how an algorithm delivered ads promoting job opportunities in the science, technology, engineering and math fields. This ad was explicitly intended to be gender neutral.

418 citations