Author

Christopher T. Lowenkamp

Bio: Christopher T. Lowenkamp is an academic researcher from the University of Missouri–Kansas City. The author has contributed to research on topics including recidivism and risk assessment. The author has an h-index of 33 and has co-authored 83 publications receiving 4,108 citations. Previous affiliations of Christopher T. Lowenkamp include the University of Cincinnati and the Government of the United States of America.


Papers
Journal Article
TL;DR: The authors argued that ProPublica's report was based on faulty statistics and data analysis, and that the report failed to show that the COMPAS itself is racially biased, let alone that other risk instruments are biased.
Abstract: "The validity and intellectual honesty of conducting and reporting analysis are critical, since the ramifications of published data, accurate or misleading, may have consequences for years to come." (Marco and Larkin, 2000, p. 692)

ProPublica recently released a much-heralded investigative report claiming that a risk assessment tool (known as the COMPAS) used in criminal justice is biased against black defendants. The report heavily implied that such bias is inherent in all actuarial risk assessment instruments (ARAIs). We think ProPublica's report was based on faulty statistics and data analysis, and that the report failed to show that the COMPAS itself is racially biased, let alone that other risk instruments are biased. Not only do ProPublica's results contradict several comprehensive existing studies concluding that actuarial risk can be predicted free of racial and/or gender bias, but a correct analysis of the underlying data (which we provide below) sharply undermines ProPublica's approach.

Our reasons for writing are simple. It might be that the existing justice system is biased against poor minorities for a wide variety of reasons (including economic factors, policing patterns, prosecutorial behavior, and judicial biases), and therefore, regardless of the degree of bias, risk assessment tools informed by objective data can help reduce racial bias from its current level. It would be a shame if policymakers mistakenly thought that risk assessment tools were somehow worse than the status quo. Because we are at a time in history when there appears to be bipartisan political support for criminal justice reform, one poorly executed study that makes such absolute claims of bias should not go unchallenged. The gravity of this study's erroneous conclusions is exacerbated by the large-market outlet in which it was published (ProPublica). Before we expand further on our criticisms of the ProPublica piece, we describe some context and characteristics of the American criminal justice system and risk assessments.

Mass Incarceration and ARAIs. The United States is clearly the worldwide leader in imprisonment. The prison population in the United States has declined by small percentages in recent years, and at year-end 2014 the prison population was the smallest it had been since 2004. Yet we still incarcerated 1,561,500 individuals in federal and state correctional facilities (Carson, 2015). By sheer numbers, or by rates per 100,000 inhabitants, the United States incarcerates more people than just about any country in the world that reports reliable incarceration statistics (Wagner & Walsh, 2016). Further, there appears to be a fair amount of racial disproportion when comparing the composition of the general population with the composition of the prison population. The 2014 United States Census population projection estimates that the racial breakdown of the 318 million U.S. residents was 62.1 percent white, 13.2 percent black or African American, and 17.4 percent Hispanic. In comparison, 37 percent of the prison population was categorized as black, 32 percent as white, and 22 percent as Hispanic (Carson, 2015).
Carson (2015, p. 15) states: "As a percentage of residents of all ages at yearend 2014, 2.7 percent of black males (or 2,724 per 100,000 black male residents) and 1.1 percent of Hispanic males (1,090 per 100,000 Hispanic males) were serving sentences of at least 1 year in prison, compared to less than 0.5 percent of white males (465 per 100,000 white male residents)."

Aside from the negative effects caused by imprisonment, there is a massive financial cost that extends beyond official correctional budgets. A recent report by The Vera Institute of Justice (Henrichson & Delaney, 2012) indicated that the cost of prison operations (including such things as pension and insurance contributions, capital costs, legal fees, and administrative fees) in the 40 states participating in their study was 39. …
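The article's central empirical claim is that a proper reanalysis of the underlying data does not show predictive bias. As a rough illustration of what such a check can look like (a sketch, not necessarily the authors' exact analysis), the snippet below fits a standard differential-prediction model, regressing recidivism on risk score, group, and their interaction; the file and column names are hypothetical placeholders.

```python
# Illustrative differential-prediction check (a sketch, not the authors' code):
# regress recidivism on risk score, group, and their interaction.
# "compas_scores.csv", "two_year_recid", "decile_score", and "race" are
# assumed names for illustration only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("compas_scores.csv")
df = df[df["race"].isin(["African-American", "Caucasian"])]

# Near-zero coefficients on C(race) and on the interaction term would indicate
# that a given score level carries a similar recidivism probability for both groups.
model = smf.logit("two_year_recid ~ decile_score * C(race)", data=df).fit()
print(model.summary())
```

Under this kind of specification, a significant group main effect or score-by-group interaction is the usual statistical signature of differential prediction.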

679 citations

Journal Article
TL;DR: In this paper, the authors investigated how adherence to the risk principle (targeting higher-risk offenders and varying length of stay and services by level of risk) affects program effectiveness in reducing recidivism.
Abstract: Over the recent past there have been several meta-analyses and primary studies that support the importance of the risk principle. Oftentimes these studies, particularly the meta-analyses, are limited in their ability to assess how the actual implementation of the risk principle by correctional agencies affects effectiveness in reducing recidivism. Furthermore, primary studies are typically limited to the assessment of one or two programs, which again limits the types of analyses conducted. This study, using data from two independent studies of 97 correctional programs, investigates how adherence to the risk principle by targeting offenders who are higher risk and varying length of stay and services by level of risk affects program effectiveness in reducing recidivism. Overall, this research indicates that for residential and nonresidential programs, adhering to the risk principle has a strong relationship with a program’s ability to reduce recidivism.

459 citations

Journal Article
TL;DR: In this paper, the authors analyzed data on 3,237 offenders placed in 1 of 38 community-based residential programs as part of their parole or other post-release control, and found significant and substantial relationships between program characteristics and program effectiveness.
Abstract: Research Summary: This study analyzed data on 3,237 offenders placed in 1 of 38 community-based residential programs as part of their parole or other post-release control. Offenders terminated from these programs were matched to, and compared with, a group of offenders (N = 3,237) under parole or other post-release control who were not placed in residential programming. Data on program characteristics and treatment integrity were obtained through staff surveys and interviews with program directors. This information on program characteristics was then related to the treatment effects associated with each program. Policy Implications: Significant and substantial relationships between program characteristics and program effectiveness were noted. This research provides information that is relevant to the development of correctional programs, and it can be used by funding agencies when awarding contracts for services.

336 citations

Journal Article
TL;DR: In this article, the authors analyzed data on 7,306 offenders placed in 1 of 53 community-based residential programs as part of their parole, post-release control, or probation, and found significant and substantial differences in the effectiveness of programming on the basis of various risk levels.
Abstract: Research Summary: This study analyzed data on 7,306 offenders placed in 1 of 53 community-based residential programs as part of their parole, post-release control, or probation. Offenders who successfully completed residential programming were compared with a group of offenders (n = 5801) under parole/post-release control who were not placed in residential programming. Analyses of program effectiveness were conducted, controlling for risk and a risk-by-group (treatment versus comparison) interaction term. Policy Implications: Significant and substantial differences in the effectiveness of programming were found on the basis of various risk levels. This research challenges the referral and acceptance policies and procedures of many states’ departments of corrections, local probation departments and courts, and social service agencies that provide offender services.
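The specification described here (risk, treatment group, and a risk-by-group interaction) can be sketched as a regression. The outline below is illustrative only; the file and variable names ("program_outcomes.csv", "recidivated", "treated", "risk_score") are assumptions, not the study's actual code or data.

```python
# Sketch of an outcome model with a risk-by-group interaction, the kind of
# specification the abstract describes. All names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("program_outcomes.csv")

# The treated:risk_score coefficient indicates whether the program's effect
# on recidivism differs by offender risk level.
model = smf.logit("recidivated ~ treated * risk_score", data=df).fit()
print(model.summary())
```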

253 citations

Journal Article
TL;DR: This article replicated Sampson and Groves's findings with data from the 1994 British Crime Survey and found that similar models with similar measures yield results consistent with social disorganization theory and with the results presented by Sampson and Groves.
Abstract: Using data from the British Crime Survey conducted in 1982, Sampson and Groves provided a convincing test of social disorganization theory. Although macro-level theory was in the midst of a revival when this investigation appeared, no single article did more to polish the previously tarnished image of social disorganization theory than Sampson and Groves's analysis; in fact, this work has become a criminological classic. Subsequent research, however, has not systematically replicated this study. Questions thus remain as to whether Sampson and Groves uncovered enduring empirical realities or idiosyncratic relationships reflecting the time period from which the data were drawn. In this context, the current research seeks to replicate Sampson and Groves's findings with data from the 1994 British Crime Survey. Analyses of similar models with similar measures yield results consistent with social disorganization theory and consistent with the results presented by Sampson and Groves. Our study suggests, therefore, …

205 citations


Cited by
Journal Article
Cynthia Rudin
TL;DR: This Perspective clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare, and computer vision.
Abstract: Black box machine learning models are currently being used for high-stakes decision making throughout society, causing problems in healthcare, criminal justice and other domains. Some people hope that creating methods for explaining these black box models will alleviate some of the problems, but trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practice and can potentially cause great harm to society. The way forward is to design models that are inherently interpretable. This Perspective clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare and computer vision. There has been a recent rise of interest in developing methods for 'explainable AI', where models are created to explain how a first 'black box' machine learning model arrives at a specific decision. It can be argued that efforts should instead be directed at building inherently interpretable models in the first place, particularly in applications that directly affect human lives, such as healthcare and criminal justice.
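As a toy illustration of the Perspective's core contrast (offered as a sketch on synthetic data, not code from the article), the snippet below trains a black-box ensemble and a small decision tree: the tree's entire decision logic can be printed and audited, while the ensemble's cannot.

```python
# Minimal sketch contrasting a black-box model with an inherently
# interpretable one whose decision rules can be read directly.
# Synthetic data and scikit-learn are assumptions for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
interpretable = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

print("black-box accuracy:     ", black_box.score(X_te, y_te))
print("interpretable accuracy: ", interpretable.score(X_te, y_te))
# The shallow tree's full decision logic is a handful of human-readable rules:
print(export_text(interpretable, feature_names=[f"x{i}" for i in range(6)]))
```

On many tabular problems the accuracy gap between the two is small, which is part of the Perspective's argument for preferring the auditable model in high-stakes settings.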

3,609 citations

Journal Article
TL;DR: Reviewing Garland (2001), the author argues that late modernity, that distinctive pattern of social, economic, and cultural relations, brought with it a set of risks, insecurities, and problems of social control that gave a specific configuration to our responses to crime, underwriting the high costs of criminal policies, maximal sentence lengths, and excessive incarceration rates.
Abstract: Over the past thirty years there have been profound changes in how we understand crime and criminal justice. Crime has become a symbolic event, a genuine test of the social order and of government policy, and a challenge for civil society, for democracy, and for human rights. According to David Garland, a professor at the New York University law school and one of the leading authors in the field of the sociology of punishment (with an article published in Revista de Sociologia e Política, no. 13), late modernity witnessed a genuine obsession with security, steering criminal policy toward harsher penalties and greater intolerance of offenders. Thirty years ago, in the United States and in England, this trend was unsuspected. The book shows that the two countries share intriguing similarities in their criminal practices, despite the racial divide, the economic inequalities, and the lethal violence that strongly mark the American scene. According to Garland, both countries exhibit the "same kinds of risks and insecurities, the same perception of the problems of ineffective social control, the same critiques of traditional criminal justice, and the same recurring anxieties about social change and order" (Garland, 2001, p. 2). The book's main argument is the following: late modernity, that distinctive pattern of social, economic, and cultural relations, brought with it a set of risks, insecurities, and problems of social control that gave a specific configuration to our responses to crime, underwriting the high costs of criminal policies, maximal sentence lengths, and excessive incarceration rates.

2,183 citations

Journal Article
14 Apr 2017 - Science
TL;DR: This article showed that applying machine learning to ordinary human language results in human-like semantic biases and replicated a spectrum of known biases, as measured by the Implicit Association Test, using a widely used, purely statistical machine-learning model trained on a standard corpus of text from the World Wide Web.
Abstract: Machine learning is a means to derive artificial intelligence by discovering patterns in existing data. Here, we show that applying machine learning to ordinary human language results in human-like semantic biases. We replicated a spectrum of known biases, as measured by the Implicit Association Test, using a widely used, purely statistical machine-learning model trained on a standard corpus of text from the World Wide Web. Our results indicate that text corpora contain recoverable and accurate imprints of our historic biases, whether morally neutral as toward insects or flowers, problematic as toward race or gender, or even simply veridical, reflecting the status quo distribution of gender with respect to careers or first names. Our methods hold promise for identifying and addressing sources of bias in culture, including technology.
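The bias measurements described here rest on vector-space similarity between word sets. The sketch below shows a WEAT-style differential association score built from cosine similarity; the tiny random "embeddings" are placeholders standing in for pretrained vectors (e.g., GloVe), and the function names are mine, not the article's.

```python
# WEAT-style association sketch with cosine similarity. The random vectors
# below are placeholders (assumptions); a real test would use pretrained
# embeddings for the actual target and attribute word sets.
import numpy as np

def cos(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    # s(w, A, B): mean similarity to attribute set A minus mean similarity to B
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    # Standardized difference in association between target sets X and Y
    sx = [association(x, A, B) for x in X]
    sy = [association(y, A, B) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy)

rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in
       ["flower", "insect", "pleasant", "unpleasant"]}  # placeholder vectors
X, Y = [emb["flower"]], [emb["insect"]]
A, B = [emb["pleasant"]], [emb["unpleasant"]]
print("effect size:", weat_effect_size(X, Y, A, B))
```

With real embeddings and full word lists, a large positive effect size indicates that the first target set is more strongly associated with the first attribute set, which is how the paper quantifies IAT-like biases in text corpora.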

1,874 citations

Posted Content
TL;DR: This survey investigated different real-world applications that have shown biases in various ways, and created a taxonomy of the fairness definitions that machine learning researchers have proposed to avoid bias in AI systems.
Abstract: With the widespread use of AI systems and applications in our everyday lives, it is important to take fairness issues into consideration while designing and engineering these types of systems. Such systems can be used in many sensitive environments to make important and life-changing decisions; thus, it is crucial to ensure that the decisions do not reflect discriminatory behavior toward certain groups or populations. We have recently seen work in machine learning, natural language processing, and deep learning that addresses such challenges in different subdomains. With the commercialization of these systems, researchers are becoming aware of the biases that these applications can contain and have attempted to address them. In this survey we investigated different real-world applications that have shown biases in various ways, and we listed different sources of biases that can affect AI applications. We then created a taxonomy for fairness definitions that machine learning researchers have defined in order to avoid the existing bias in AI systems. In addition to that, we examined different domains and subdomains in AI showing what researchers have observed with regard to unfair outcomes in the state-of-the-art methods and how they have tried to address them. There are still many future directions and solutions that can be taken to mitigate the problem of bias in AI systems. We are hoping that this survey will motivate researchers to tackle these issues in the near future by observing existing work in their respective fields.
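Two of the most common group-fairness definitions catalogued in surveys like this one, statistical (demographic) parity and equalized odds, reduce to simple rate comparisons. The sketch below computes both for binary predictions on synthetic data; it is an illustrative rendering of the standard definitions, not code from the survey.

```python
# Group-fairness metrics sketch: demographic parity difference and an
# equalized-odds gap for binary predictions. Data are synthetic placeholders.
import numpy as np

def demographic_parity_diff(y_pred, group):
    # Difference in positive-prediction rates between the two groups
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_diff(y_true, y_pred, group):
    # Largest gap across groups in true-positive and false-positive rates
    gaps = []
    for label in (1, 0):  # label 1 -> TPR gap, label 0 -> FPR gap
        rates = [y_pred[(group == g) & (y_true == label)].mean() for g in (0, 1)]
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)
y_true = rng.integers(0, 2, size=1000)
y_pred = rng.integers(0, 2, size=1000)
print("demographic parity difference:", demographic_parity_diff(y_pred, group))
print("equalized odds difference:    ", equalized_odds_diff(y_true, y_pred, group))
```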

1,571 citations

Journal Article
01 Jun 2017
TL;DR: It is demonstrated that the criteria cannot all be simultaneously satisfied when recidivism prevalence differs across groups, and it is shown how disparate impact can arise when an RPI fails to satisfy the criterion of error rate balance.
Abstract: Recidivism prediction instruments (RPIs) provide decision-makers with an assessment of the likelihood that a criminal defendant will reoffend at a future point in time. Although such instruments are gaining increasing popularity across the country, their use is attracting tremendous controversy. Much of the controversy concerns potential discriminatory bias in the risk assessments that are produced. This article discusses several fairness criteria that have recently been applied to assess the fairness of RPIs. We demonstrate that the criteria cannot all be simultaneously satisfied when recidivism prevalence differs across groups. We then show how disparate impact can arise when an RPI fails to satisfy the criterion of error rate balance.
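The incompatibility follows from an algebraic identity linking the error rates and the positive predictive value through prevalence. A minimal rendering of that identity, using standard confusion-matrix notation (which may differ from the article's), is:

```latex
% With prevalence p, positive predictive value PPV, false negative rate FNR,
% and false positive rate FPR, the confusion-matrix definitions imply
\[
  \mathrm{FPR} \;=\; \frac{p}{1-p}\cdot\frac{1-\mathrm{PPV}}{\mathrm{PPV}}\cdot\bigl(1-\mathrm{FNR}\bigr).
\]
% Hence, if two groups have equal PPV and equal FNR but different prevalence p,
% their FPRs cannot also be equal.
```

So if an instrument satisfies predictive parity across groups whose recidivism prevalence differs, its false positive rates must differ, which is the mechanism by which disparate impact can arise even without error rate imbalance being intended.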

1,452 citations