Journal Article

The Scored Society: Due Process for Automated Predictions

01 Mar 2014 · Washington Law Review (Washington Law Review Association) · Vol. 89, Iss. 1, p. 1
TL;DR: Drawing on Dave Eggers's novel The Circle, the authors describe persistent surveillance technologies that score people in every imaginable way, comparing, for example, high school students' test results, their class rank, their school's relative academic strength, and a number of other factors.
Abstract: [Jennifer is] ranked 1,396 out of 179,827 high school students in Iowa. . . . Jennifer's score is the result of comparing her test results, her class rank, her school's relative academic strength, and a number of other factors. . . . [C]an this be compared against all the other students in the country, and maybe even the world? . . . That's the idea. . . . That sounds very helpful. . . . And would eliminate a lot of doubt and stress out there.
– Dave Eggers, The Circle [1]

INTRODUCTION TO THE SCORED SOCIETY

In his novel The Circle, Dave Eggers imagines persistent surveillance technologies that score people in every imaginable way. Employees receive rankings for their participation in social media. [2] Retinal apps allow police officers to see career criminals in distinct colors: yellow for low-level offenders, orange for slightly more dangerous but still nonviolent offenders, and red for the truly violent. [3] Intelligence agencies can create a web of all of a suspect's contacts, so that criminals' associates are tagged in the same color scheme as the criminals themselves. [4]

Eggers's imagination is not far from current practices. Although predictive algorithms may not yet be ranking high school students nationwide, or tagging criminals' associates with color-coded risk assessments, they are increasingly rating people in countless aspects of their lives.

Consider these examples. Job candidates are ranked by what their online activities say about their creativity and leadership. [5] Software engineers are assessed for their contributions to open source projects, with points awarded when others use their code. [6] Individuals are assessed as likely to vote for a candidate based on their cable-usage patterns. [7] Recently released prisoners are scored on their likelihood of recidivism. [8]

How are these scores developed? Predictive algorithms mine personal information to make guesses about individuals' likely actions and risks. [9] A person's on- and offline activities are turned into scores that rate them above or below others. [10] Private and public entities rely on predictive algorithmic assessments to make important decisions about individuals. [11]

Sometimes, individuals can score the scorers, so to speak. Landlords can report bad tenants to data brokers, while tenants can check abusive landlords on sites like ApartmentRatings.com. On sites like Rate My Professors, students can score professors, who can respond to critiques via video. In many online communities, commenters can in turn rank the interplay between the rated, the raters, and the raters of the rated, in an effort to make sense of it all (or at least award the most convincing or popular with points or "karma"). [12]

Although mutual-scoring opportunities among formally equal subjects exist in some communities, the realm of management and business more often features powerful entities who turn individuals into ranked and rated objects. [13] While scorers often characterize their work as an oasis of opportunity for the hardworking, the following are examples of ranking systems that are used to individuals' detriment.
A credit card company's behavioral-scoring algorithms rate consumers as greater credit risks because they used their cards to pay for marriage counseling, therapy, or tire-repair services. [14] Automated systems rank candidates' talents by looking at how others rate their online contributions. [15] Threat assessments result in arrests or the inability to fly, even though they are based on erroneous information. [16] Political activists are designated as "likely" to commit crimes. [17]

And there is far more to come. Algorithmic predictions about health risks, based on information that individuals share with mobile apps about their caloric intake, may soon result in higher insurance premiums. [18] Sites soliciting feedback on "bad drivers" may aggregate the information, and could possibly share it with insurance companies who score the risk potential of insured individuals. …


Citations
Journal Article
TL;DR: The authors argue that concerns about the legitimacy of these techniques are not satisfactorily resolved through reliance on individual notice and consent, and touch on the troubling implications for democracy and human flourishing if Big Data analytic techniques driven by commercial self-interest continue their onward march unchecked by effective and legitimate constraints.
Abstract: This paper draws on regulatory governance scholarship to argue that the analytic phenomenon currently known as 'Big Data' can be understood as a mode of 'design-based' regulation. Although Big Data decision-making technologies can take the form of automated decision-making systems, this paper focuses on algorithmic decision-guidance techniques. By highlighting correlations between data items that would not otherwise be observable, these techniques are being used to shape the informational choice context in which individual decision-making occurs, with the aim of channelling attention and decision-making in directions preferred by the 'choice architect'. By relying upon the use of 'nudge' – a particular form of choice architecture that alters people's behaviour in a predictable way without forbidding any options or significantly changing their economic incentives – these techniques constitute a 'soft' form of design-based control. But, unlike the static Nudges popularised by Thaler and Sunstein (20…

426 citations

Proceedings Article
29 Jan 2019
TL;DR: The authors contrast different schools of thought in philosophy and sociology on what makes an explanation, and suggest that machine learning might benefit from viewing the problem more broadly.
Abstract: Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it's important to remember Box's maxim that "All models are wrong but some are useful." We focus on the distinction between these models and explanations in philosophy and sociology. These models can be understood as a "do it yourself kit" for explanations, allowing a practitioner to directly answer "what if" questions or generate contrastive explanations without external assistance. Although a valuable ability, giving these models as explanations appears more difficult than necessary, and other forms of explanation may not have the same trade-offs. We contrast the different schools of thought on what makes an explanation, and suggest that machine learning might benefit from viewing the problem more broadly.
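The "do it yourself kit" framing lends itself to a short illustration. The sketch below is not the paper's code; the synthetic dataset, the random-forest black box, and the depth-3 surrogate tree are all illustrative assumptions. It fits a shallow, human-readable decision tree to a black box's predictions, then uses it to answer a "what if" query:

```python
# Sketch of a simplified surrogate ("do it yourself kit") for a black-box model.
# Dataset, models, and depth limit are illustrative assumptions, not the paper's setup.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Fit a shallow, human-readable tree to the black box's *predictions*, not the
# ground truth: the tree approximates the criteria the complex system uses.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the simplified model agrees with the black box.
print("fidelity:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(6)]))

# A "what if" query: perturb one feature and compare predicted classes,
# yielding a contrastive explanation without external assistance.
instance = X[0].copy()
variant = instance.copy()
variant[2] += 2.0
print(surrogate.predict([instance])[0], "->", surrogate.predict([variant])[0])
```

The fidelity score reports how closely the "kit" reproduces the black box; in Box's terms, the tree is wrong but useful, and handing it to a practitioner lets them generate their own contrastive explanations.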

392 citations

Journal Article
TL;DR: An overview of available technical solutions to enhance fairness, accountability, and transparency in algorithmic decision-making is provided, and the Open Algorithms (OPAL) project is described as a step towards realizing the vision of a world where data and algorithms are used as lenses and levers in support of democracy and development.
Abstract: The combination of increased availability of large amounts of fine-grained human behavioral data and advances in machine learning is presiding over a growing reliance on algorithms to address complex societal problems. Algorithmic decision-making processes might lead to more objective and thus potentially fairer decisions than those made by humans who may be influenced by greed, prejudice, fatigue, or hunger. However, algorithmic decision-making has been criticized for its potential to enhance discrimination, information and power asymmetry, and opacity. In this paper, we provide an overview of available technical solutions to enhance fairness, accountability, and transparency in algorithmic decision-making. We also highlight the criticality and urgency of engaging multi-disciplinary teams of researchers, practitioners, policy-makers, and citizens to co-develop, deploy, and evaluate, in the real world, algorithmic decision-making processes designed to maximize fairness and transparency. In doing so, we describe the Open Algorithms (OPAL) project as a step towards realizing the vision of a world where data and algorithms are used as lenses and levers in support of democracy and development.
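To make one of the surveyed fairness tools concrete, the sketch below computes a common group-fairness diagnostic, statistical (demographic) parity difference. It is not drawn from the OPAL project; the scores, the group labels, and the decision threshold are all invented for illustration:

```python
# Sketch of a basic group-fairness diagnostic of the kind such overviews survey:
# statistical (demographic) parity difference. Data and threshold are invented
# for illustration; nothing here comes from the OPAL project itself.
import numpy as np

rng = np.random.default_rng(seed=0)
scores = rng.uniform(size=1000)           # model scores for 1,000 people
group = rng.integers(0, 2, size=1000)     # 0 = protected group, 1 = everyone else
accepted = scores >= 0.7                  # automated accept/reject decision

rate_protected = accepted[group == 0].mean()
rate_other = accepted[group == 1].mean()

# 0.0 means equal acceptance rates; large absolute values flag potential
# disparate impact that merits human review -- one input to accountability.
print(f"acceptance rates: {rate_protected:.3f} vs {rate_other:.3f}")
print(f"statistical parity difference: {rate_protected - rate_other:+.3f}")
```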

330 citations

Proceedings Article
27 Jun 2017
TL;DR: The authors develop a data generation procedure that allows them to systematically control the degree of unfairness in a ranked output, apply their proposed fairness measures for ranked outputs to several real datasets, and show potential for improving the fairness of ranked outputs while maintaining accuracy.
Abstract: Ranking and scoring are ubiquitous. We consider the setting in which an institution, called a ranker, evaluates a set of individuals based on demographic, behavioral or other characteristics. The final output is a ranking that represents the relative quality of the individuals. While automatic and therefore seemingly objective, rankers can, and often do, discriminate against individuals and systematically disadvantage members of protected groups. This warrants a careful study of the fairness of a ranking scheme, to enable data science for social good applications, among others.

In this paper we propose fairness measures for ranked outputs. We develop a data generation procedure that allows us to systematically control the degree of unfairness in the output, and study the behavior of our measures on these datasets. We then apply our proposed measures to several real datasets, and detect cases of bias. Finally, we show preliminary results of incorporating our ranked fairness measures into an optimization framework, and show potential for improving fairness of ranked outputs while maintaining accuracy.

The code implementing all parts of this work is publicly available at https://github.com/DataResponsibly/FairRank
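For a concrete sense of what a fairness measure for ranked outputs looks like, here is a rough sketch in the spirit of the paper's normalized discounted difference (rND). The cut-off step and the normalizer are simplifying assumptions based on one reading of the paper; the authors' reference implementation lives in the FairRank repository linked above:

```python
# Rough sketch of a set-based ranking fairness measure in the spirit of the
# paper's normalized discounted difference (rND). The cut-off step and the
# normalizer are simplifying assumptions, not a reference implementation.
import math

def nd(ranking, step=10):
    """Discounted gap between the protected-group share in each top-i prefix
    and its share overall. `ranking` lists 0/1 flags (1 = protected member),
    ordered from best-ranked to worst-ranked."""
    n = len(ranking)
    overall = sum(ranking) / n
    return sum(
        abs(sum(ranking[:i]) / i - overall) / math.log2(i)
        for i in range(step, n + 1, step)
    )

def rnd(ranking, step=10):
    """Normalize by a maximally unfair ordering (here: all protected members
    ranked last, a simplification) so the score lies in [0, 1]; 0 is fairest."""
    z = nd(sorted(ranking), step)
    return nd(ranking, step) / z if z else 0.0

# 100 candidates: protected group confined to the bottom half vs interleaved.
print(f"rND = {rnd([0] * 50 + [1] * 50):.3f}")  # 1.000: maximally unfair
print(f"rND = {rnd([0, 1] * 50):.3f}")          # 0.000: proportional at every cut-off
```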

325 citations

Journal Article
TL;DR: In this article, the authors propose a new theoretical framework for understanding the development of modern organizations that follow an institutional data imperative to collect as much data as possible; as a result of the analysis and use of this data, individuals accrue a form of capital flowing from their positions as measured by various digital scoring and ranking methods.
Abstract: [Published as: Marion Fourcade and Kieran Healy, "Seeing like a market", Socio-Economic Review, 2017, Vol. 15, No. 1, pp. 9–29, doi: 10.1093/ser/mww033.]

What do markets see when they look at people? Information dragnets increasingly yield huge quantities of individual-level data, which are analyzed to sort and slot people into categories of taste, riskiness or worth. These tools deepen the reach of the market and define new strategies of profit-making. We present a new theoretical framework for understanding their development. We argue that (a) modern organizations follow an institutional data imperative to collect as much data as possible; (b) as a result of the analysis and use of this data, individuals accrue a form of capital flowing from their positions as measured by various digital scoring and ranking methods; and (c) the facticity of these scoring methods makes them organizational devices with potentially stratifying effects. They offer firms new opportunities to structure and price offerings to consumers. For individuals, they create classification situations that identify shared life-chances in product and service markets. We discuss the implications of these processes and argue that they tend toward a new economy of moral judgment, where outcomes are experienced as morally deserved positions based on prior good actions and good tastes, as measured and classified by this new infrastructure of data collection and analysis.

Key words: classification, big data, technology, markets, institutions, morality. JEL classification: A13, O33.

Across institutional domains, tracking and measurement is expanding and becoming ever more fine-grained (Limn, 2012; Gillespie et al., 2014; Pasquale, 2015). We see it in everyday consumption, in housing and credit markets, in health, employment, education (Cottom, 2016), social relations, including intimate ones (Levy, 2015), legal services, and even into political life (Ziewitz, 2016) and the private sphere (Neff and Nafus, 2016). Sociologists studying the state, technology and the market have sought to describe and understand these trends in different ways. This article proposes a framework to analytically unify their concerns, and to grasp the implications of contemporary technological developments for processes of inequality and stratification.

303 citations