Posted Content

The Claims of Official Reason: Administrative Guidance on Social Inclusion

TL;DR: The legal validity and effect of recent administrative actions concerning civil rights and social inclusion are examined in this paper, where the author argues that guidance can provide a privileged reason for an agency to act, but cannot categorically mandate or prohibit any course of public or private conduct.
Abstract: This Article examines the legal validity and effect of recent administrative actions concerning civil rights and social inclusion. Agencies under the Obama Administration issued “guidance” concerning sexual assault and harassment on college campuses, transgender rights, the use of arrest and conviction records in employment decisions, and deferral of deportation proceedings against undocumented immigrants. These actions have either been set aside by circuit courts or rescinded under the Trump Administration, in part on the grounds that they were issued without notice-and-comment rulemaking. Nonetheless, two district courts have blocked the Trump Administration’s rescission of the deferred action program because the government failed to take into account the “serious reliance interests” the program had generated. I explore these controversies over guidance on social inclusion in order to address some of the most difficult and long-disputed questions of administrative law: what is the appropriate scope of the “guidance exception” to notice-and-comment rulemaking, and what kinds of legal effects, if any, can such guidance generate? Drawing on the philosophy of law to interpret the case law, I argue that guidance can provide a privileged reason for an agency to act, but cannot categorically mandate or prohibit any course of public or private conduct. I show how such non-binding actions can nonetheless generate legally cognizable interests when individuals and institutions rely on the guidance to make plans and investments, or to see their status or the harms they suffer recognized. These reliance interests need to be taken into account if the policy is to be rescinded. My argument has concrete consequences for the staying power of the policies federal agencies put in place during the Obama Administration. More broadly, it sheds light on problems of internal administrative procedure and judicial review of administrative action, as well as fundamental issues in jurisprudence concerning “the force and effect of law”.
Citations
Journal ArticleDOI
TL;DR: Brayne and Christin conducted ethnographic fieldwork in a large urban police department and a midsized criminal court to assess the impact of predictive technologies at different stages of the criminal justice process.
Abstract: The number of predictive technologies used in the U.S. criminal justice system is on the rise. Yet there is little research to date on the reception of algorithms in criminal justice institutions. We draw on ethnographic fieldwork conducted within a large urban police department and a midsized criminal court to assess the impact of predictive technologies at different stages of the criminal justice process. We first show that similar arguments are mobilized to justify the adoption of predictive algorithms in law enforcement and criminal courts. In both cases, algorithms are described as more objective and efficient than humans’ discretionary judgment. We then study how predictive algorithms are used, documenting similar processes of professional resistance among law enforcement and legal professionals. In both cases, resentment toward predictive algorithms is fueled by fears of deskilling and heightened managerial surveillance. Two practical strategies of resistance emerge: foot-dragging and data obfuscation. We conclude by discussing how predictive technologies do not replace, but rather displace discretion to less visible—and therefore less accountable—areas within organizations, a shift which has important implications for inequality and the administration of justice in the age of big data. KEYWORDS: algorithms; prediction; policing; criminal courts; ethnography. In recent years, algorithms and artificial intelligence have attracted a great deal of scholarly and journalistic attention. Of particular interest is the development of predictive technologies designed to estimate the likelihood of a future event, such as the probability that an individual will default on a loan, the likelihood that a consumer will buy a specific product online, or the odds that a job candidate will have a long tenure in an organization. Predictive algorithms capture the imagination of scholars and journalists alike, in part because they raise the question of automated judgment: the replacement – or at least the augmentation – of human discretion by mechanical procedures. Nowhere are these questions more salient than in the context of criminal justice. Over recent decades, the U.S. criminal justice system has witnessed a proliferation of algorithmic technologies.
Police departments now increasingly rely on predictive software programs to target potential victims and offenders and predict when and where future crimes are likely to occur (Brayne 2017; Ferguson 2017). Likewise, criminal courts use multiple predictive instruments, called “risk-assessment tools,” to assess the risk of recidivism or failure to appear in court among defendants (Hannah-Moffat 2018; Harcourt 2006; Monahan and Skeem 2016). Predictive technologies, in turn, raise many questions about fairness and inequality in criminal justice. On the positive side, advocates emphasize the benefits of using “smart statistics” to reduce crime and improve a dysfunctional criminal justice system characterized by racial discrimination and mass incarceration (Brantingham, Valasik, and Mohler 2018; Milgram 2012). On the negative side, critics argue that algorithms tend to embed bias and reinforce social and racial inequalities, rather than reducing them (Benjamin 2019; Eubanks 2018; O’Neil 2016). They note that predictive algorithms draw on variables or proxies that are unfair and may be unconstitutional (Ferguson 2017; Starr 2014). Many point out that predictive algorithms may lead individuals to be surveilled and detained based on crimes they have not committed yet, frequently comparing these technologies to the science-fiction story Minority Report by Philip K. Dick and its movie adaptation, which evoke a dystopian future. To date, studies of criminal justice algorithms share three main characteristics. First, existing work tends to focus on the construction of algorithms, highlighting the proprietary aspect of most of these tools (which are often built by private companies) and criticizing their opacity (Angwin et al. 2016; Pasquale 2015; Wexler 2017). Second, they tend to treat the criminal justice system as a monolith, lumping together the cases of law enforcement, adjudication, sentencing, and community supervision (O’Neil 2016; Scannell 2016). Third, and most importantly, most studies fail to analyze contexts of reception, implicitly assuming – usually without empirical data – that police officers, judges, and prosecutors rely uncritically on what algorithms direct them to do in their daily routines (Harcourt 2006; Hvistendahl 2016; Mohler et al. 2015; Uchida and Swatt 2013). In this article, we adopt a different perspective. Building on a growing body of literature that analyzes the impact of big data in criminal justice (Hannah-Moffat, Maurutto, and Turnbull 2009; Lageson 2017; Lum, Koper, and Willis 2017; Sanders, Weston, and Schott 2015; Stevenson and Doleac 2018), as well as existing ethnographic work on the uses of algorithms (Brayne 2017; Christin 2017; Levy 2015; Rosenblat and Stark 2016; Shestakovsky 2017), we focus on the reception of predictive algorithms in different segments of the criminal justice system. Drawing on two in-depth ethnographic studies – one conducted in a police department and the other in a criminal court – we examine two questions. First, to what extent does the adoption of predictive algorithms affect work practices in policing and criminal courts? Second, how do practitioners respond to algorithmic technologies (i.e., do they embrace or contest them)? Based on this ethnographic material, this article provides several key findings. First, we document a widespread – albeit uneven – use of big data technologies on the ground.
In policing, big data are used for both person-based and place-based predictive identification, in addition to risk management, crime analysis, and investigations. In criminal courts, multiple predictive instruments, complemented by digital case management systems, are employed to quantify the risk of the defendants. Second, similar arguments are used in policing and courts to justify the use of predictive technologies. In both cases, algorithms are presented as more rational and objective than “gut feelings” or discretionary judgments. Third, we find similar strategies of resistance, fueled by fears of experiential devaluation and increased managerial surveillance, among law enforcement and legal professionals—most importantly, foot-dragging and data obfuscation. Despite these resemblances, we document important differences between our two cases. In particular, law enforcement officers were under more direct pressure to use the algorithms, whereas the legal professionals under consideration were able to keep their distance and ignore predictive technologies without consequences, a finding we relate to the distinct hierarchical structures and levels of managerial oversight of the police department and criminal court we compared. We conclude by discussing the implications of these findings for research on technology and inequality in criminal justice. Whereas the current wave of critical scholarship on algorithmic bias often leans upon technological deterministic narratives in order to make social justice claims, here we focus on the social and institutional contexts within which such predictive systems are deployed and negotiated. In the process, we show that these tools acquire political nuance and meaning through practice, which can lead to unanticipated or undesirable outcomes: forms of workplace surveillance and the displacement of discretion to less accountable places. We argue that this sheds new light on the transformations of police and judicial discretion – with important consequences for social and racial inequality – in the age of big data. DECISION MAKING ACROSS A VARIETY OF DOMAINS: As a growing number of daily activities now take place online, an unprecedented amount of digital information is being collected, stored, and analyzed, making it possible to aggregate data across previously separate institutional settings. Harnessing this rapidly expanding corpus of digitized information, algorithms – broadly defined here as “[a] formally specified sequence(s) of logical operations that provides step-by-step instructions for computers to act on data and thus automate decisions” (Barocas et al. 2014) – are being used to guide decision-making across institutional domains as varied as education, journalism, credit, and criminal justice (Brayne 2017; Christin 2018; Fourcade and Healy 2017; O’Neil 2016; Pasquale 2015). Advocates for algorithmic technologies argue that by relying on “unbiased” assessments, algorithms may help deploy resources more efficiently and objective…
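To make concrete what a points-based “risk-assessment tool” of the kind described above might look like, here is a minimal, purely illustrative Python sketch. The features, weights, and thresholds are invented for this example; they are not drawn from the Brayne and Christin study or from any real instrument.

```python
# Hypothetical sketch of a points-based pretrial risk score, loosely in the style of
# the checklist instruments the abstract describes. Every feature name, weight, and
# threshold below is invented for illustration and not taken from any real tool.

def pretrial_risk_category(defendant: dict) -> str:
    points = 0
    if defendant.get("prior_failures_to_appear", 0) > 0:
        points += 2   # prior failure to appear weighted most heavily (assumed)
    if defendant.get("prior_convictions", 0) >= 2:
        points += 1
    if defendant.get("pending_charge", False):
        points += 1
    if defendant.get("age", 99) < 23:
        points += 1

    # Map the raw score to the coarse category shown to the decision-maker.
    if points >= 4:
        return "high"
    if points >= 2:
        return "moderate"
    return "low"

# Example: one prior failure to appear and age 21 -> 3 points -> "moderate"
print(pretrial_risk_category({"prior_failures_to_appear": 1, "age": 21}))
```

The point of such a sketch is only that the instrument reduces a defendant’s record to a score and a category; whether the inputs and cutoffs are fair is exactly what the literature cited above contests.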

85 citations

Journal ArticleDOI
TL;DR: Pretrial risk assessment instruments are used in many jurisdictions to inform decisions regarding pretrial release and conditions as mentioned in this paper, and many are concerned that the use of pretrial risk assessment instru...
Abstract: Pretrial risk assessment instruments are used in many jurisdictions to inform decisions regarding pretrial release and conditions. Many are concerned that the use of pretrial risk assessment instru...

10 citations

Journal ArticleDOI
TL;DR: The authors argue that a feature of recent advances in artificial intelligence (AI) is the ability to make uncertainty visible to refugee status decision-makers, which could help to reduce their confidence in their conclusions.
Abstract: Deciding refugee claims is a paradigm case of an inherently uncertain judgment and prediction exercise. Yet refugee status decision-makers may underestimate the uncertainty inherent in their decisions. A feature of recent advances in artificial intelligence (AI) is the ability to make uncertainty visible. By making clear to refugee status decision-makers how uncertain their predictions are, AI and related statistical tools could help to reduce their confidence in their conclusions. Currently, this would only hurt claimants, since many countries around the world have designed their refugee status determination systems using inductive inference, which distorts risk assessment. Increasing uncertainty would therefore contribute to mistaken rejections. If, however, international refugee law were to recognize an obligation under the UN Convention to resolve decision-making doubt in the claimant’s favour and use abductive inference, as Evans Cameron has advocated, then by making uncertainty visible, AI could help reduce the number of wrongly denied claims.
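As a rough illustration of what “making uncertainty visible” could mean in practice, the following hypothetical Python sketch reports an approximate interval around a predicted proportion rather than a bare point estimate. The case counts and the normal-approximation interval are assumptions made for this example, not anything proposed in the paper.

```python
# Minimal sketch of "making uncertainty visible": report an approximate interval
# around a predicted proportion instead of a bare point estimate. The counts and the
# normal-approximation interval are illustrative assumptions, not the paper's method.

def estimate_with_uncertainty(successes: int, trials: int, z: float = 1.96):
    """Return a point estimate and an approximate 95% interval for a proportion."""
    p = successes / trials
    se = (p * (1 - p) / trials) ** 0.5          # standard error of the proportion
    low, high = max(0.0, p - z * se), min(1.0, p + z * se)
    return p, (low, high)

# e.g. 7 of 12 comparable past cases were well founded (numbers are made up)
p, (low, high) = estimate_with_uncertainty(7, 12)
print(f"estimate {p:.2f}, roughly between {low:.2f} and {high:.2f}")
```

With so few cases the interval is wide, which is precisely the kind of display that could temper a decision-maker’s confidence in a bare prediction.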

9 citations

Journal ArticleDOI
TL;DR: In this paper, the authors analyze whether the discretion exercised by government officials during the COVID-19 pandemic is consistent with the relevant legal concepts and legislation, and whether a Circular can legitimize the handling of COVID-19 under statutory regulations.
Abstract: This legal research analyzes whether the discretion exercised by government officials during the COVID-19 pandemic is consistent with the relevant legal concepts and legislation, and whether a Circular can legitimize the handling of COVID-19 under statutory regulations. The research is carried out by compiling an inventory of primary and secondary legal materials in order to produce a relevant and critical study of the legal issues discussed. The results are that discretion exercised by government officials can be legally justified if it conforms to the provisions of legislation aimed at realizing good emergency governance, and that a circular letter is legally valid if it accords with the laws and regulations and the General Principles of Good Governance, on the understanding that a circular is not a regulatory instrument within the hierarchy of national legislation. A circular therefore lacks strong and binding legal force. The researcher accordingly recommends that criteria and parameters for discretion exercised through circulars be set out in a Supreme Court Regulation, so that discretionary power is not abused by government officials and general legal principles are respected.

8 citations

Book ChapterDOI
07 Nov 2019
TL;DR: Examples and activities that promote consumer protection through the adoption of non-discriminatory algorithms in surveillance, social profiling, and business intelligence are discussed.
Abstract: This paper discusses examples and activities that promote consumer protection through the adoption of non-discriminatory algorithms. The casual observer of data, from smartphones to artificial intelligence, believes in technological determinism: data reveal real trends, and the decision-makers are neutral and unprejudiced. However, machine learning technologies are created by people, so their creators’ biases can appear in decisions based on algorithms used for surveillance, social profiling, and business intelligence.
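One hedged illustration of how a non-discriminatory algorithm might be audited is the following toy Python check of favorable-outcome rates across groups, in the style of a disparate-impact ratio. The decisions, group labels, and 0.8 threshold are fabricated for illustration and are not taken from the chapter.

```python
# Toy sketch of one common check for discriminatory outcomes: compare the rate of
# favorable decisions across groups (a disparate-impact-style ratio). The decisions
# and group labels below are fabricated purely for illustration.

def favorable_rate(decisions, groups, target_group):
    in_group = [d for d, g in zip(decisions, groups) if g == target_group]
    return sum(in_group) / len(in_group)

decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # 1 = favorable outcome
groups    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

rate_a = favorable_rate(decisions, groups, "a")
rate_b = favorable_rate(decisions, groups, "b")
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"group a: {rate_a:.2f}, group b: {rate_b:.2f}, ratio: {ratio:.2f}")  # flag if ratio < 0.8
```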

5 citations
