Author

Christopher T. Lowenkamp

Bio: Christopher T. Lowenkamp is an academic researcher from the University of Missouri–Kansas City. The author has contributed to research in topics: Recidivism & Risk assessment. The author has an h-index of 33 and has co-authored 83 publications receiving 4108 citations. Previous affiliations of Christopher T. Lowenkamp include the University of Cincinnati & the Government of the United States of America.


Papers
Journal Article
TL;DR: The decision to release or detain a defendant pretrial represents a crucial component within the criminal justice process and can potentially affect case outcomes by increasing the likelihood of conviction, the length of an imposed sentence, and the probability of future recidivism.
Abstract: AFTER A PERSON is arrested and accused of a crime in the federal system, a judicial official must determine whether the accused person (that is, the defendant) will be released back into the community or detained until the case is disposed (American Bar Association, 2007). The decision to release or detain a defendant pretrial represents a crucial component within the criminal justice process (Eskridge, 1983; Goldkamp, 1985). In addition to curtailing a defendant’s liberty, the decision to detain a defendant pretrial can potentially affect case outcomes by increasing the likelihood of conviction, the length of an imposed sentence, and the probability of future recidivism (Heaton, Mayson, & Stevenson, 2017; Lowenkamp, VanNostrand, & Holsinger, 2013; Oleson, VanNostrand, Lowenkamp, Cadigan, & Wooldredge, 2014). Given the importance of the pretrial release …

10 citations

Journal Article
TL;DR: In 2010, the Administrative Office of the U.S. Courts developed the Post-Conviction Risk Assessment (PCRA) tool as a means to assess offender risk in an effort to reduce future criminal behavior.
Abstract: ONE OF THE primary goals of the federal probation and pretrial services system is to protect the community through the use of controlling and correctional strategies designed to assess and manage risk. In 2010, the Administrative Office of the U.S. Courts (AO) developed the Post-Conviction Risk Assessment (PCRA) tool as a means to assess offender risk in an effort to reduce future criminal behavior. Arguably, the best chances for reducing future criminal behavior occur when officers not only have a reliable way of identifying high-risk offenders but also can intervene in the criminogenic needs of those offenders (Andrews et al., 1990; Lowenkamp & Latessa, 2004; Bonta & Andrews, 2007; Campbell, French, & Gendreau, 2007; Johnson et al., 2011). Clients with higher PCRA scores have poorer probation outcomes, compelling evidence of the PCRA's predictive accuracy (Johnson, Lowenkamp, VanBenschoten, & Robinson, 2011; Lowenkamp, Johnson, Holsinger, VanBenschoten, & Robinson, 2013). Half of the 18 PCRA points reflect criminal history factors, while the other half reflect viable case planning targets indicative of criminogenic needs (Bonta & Andrews, 2016). Moreover, clients with similar PCRA scores can have different point elevations across the subscales (i.e., education/employment, substance abuse, social networks, and cognitions) that identify different case planning needs for different clients. Furthermore, PCRA score changes over time are related to client outcomes: increases in PCRA scores lead to increased client failure, while decreases in PCRA scores lead to lower rates of recidivism (Cohen, Lowenkamp, & VanBenschoten, 2016; Luallen, Radakrishnan, & Rhodes, 2016). Because the PCRA has the ability to predict client outcomes for both baseline and change scores, probation officers are better equipped to identify intervention strategies for individual clients.
Nonetheless, while the PCRA predicts client rearrests as well as informing case planning and risk management, this process is not completely intuitive for some officers. Therefore, the purpose of this paper is to make the process more explicit, especially regarding violent rearrest. Revisions to the PCRA have led to the creation of PCRA 2.0, which reflects improved client normative data, clarifications of scoring rules, removal of some unscored test questions that did not substantially enhance predictive power, inclusion of static risk factor questions, and Psychological Inventory of Criminal Thinking Styles (PICTS) scales predictive of violent arrest. Despite evidence that probation officers in some jurisdictions ignore or override statistical risk assessments (Miller & Maloney, 2013), the importance of the PCRA is embedded within federal probation policy. Future training is intended to assist officers in recognizing the predictive validity that PCRA 2.0 provides, while also highlighting the limitations of unstructured assessments (i.e., ignoring or overriding PCRA risk categories based on professional judgment or intuition). The expectation is that officers will incorporate PCRA 2.0 assessments into their correctional practices, thereby improving decisional accuracy, case planning, and risk management. Increased scrutiny of sentinel events (e.g., a sensational community failure; see Sheil, Doyle, & Lowenkamp, 2016, in this issue of Federal Probation) sparked interest within federal probation in including within the PCRA a violence risk assessment and interventions. Central to a consideration of sentinel events is the inclusion of acute dynamic risk factors that could signify the potential imminence of an event within a higher-risk group. Before the violence assessment was included in the PCRA, only one item was violence-specific, raising the question of whether the utility of the PCRA could be augmented through the rating of violence flags as a second level of risk assessment.
The inclusion of validated violence flags is intended not only to insulate officers and the agency from undue criticism in the wake of an offender committing a serious violent offense, but also to reduce risk of harm to the community and further enhance officer safety. …

9 citations

Journal ArticleDOI
TL;DR: In this paper, a randomized experimental trial was designed to test the effects of court notification strategies, using failure to appear (FTA) as the primary outcome of interest; the results reinforce the utility of an actuarial method of risk classification when predicting the likelihood of FTA.
Abstract: Jurisdictions at every level throughout the US are paying an increasing amount of attention to pretrial case processing. The primary areas of attention appear to be risk assessment development and classification, the effects of pretrial detention, and the effectiveness of various strategies that may impact a defendant’s failure to appear for their assigned court dates. The current study is a randomized experimental trial designed to test the effects of court notification strategies, using failure to appear (FTA) as the primary outcome of interest. Our findings do not reveal a palpable effect for court notification strategies (telephone calls and text messaging, with other conditions layered in), but do indicate and reinforce the utility of an actuarial method of risk classification when predicting likelihood of FTA.

9 citations


Cited by
Journal ArticleDOI
Cynthia Rudin
TL;DR: This Perspective clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications whereinterpretable models could potentially replace black box models in criminal justice, healthcare and computer vision.
Abstract: Black box machine learning models are currently being used for high-stakes decision making throughout society, causing problems in healthcare, criminal justice and other domains. Some people hope that creating methods for explaining these black box models will alleviate some of the problems, but trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practice and can potentially cause great harm to society. The way forward is to design models that are inherently interpretable. This Perspective clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare and computer vision. There has been a recent rise of interest in developing methods for ‘explainable AI’, where models are created to explain how a first ‘black box’ machine learning model arrives at a specific decision. It can be argued that instead efforts should be directed at building inherently interpretable models in the first place, in particular where they are applied in applications that directly affect human lives, such as in healthcare and criminal justice.

3,609 citations

Journal ArticleDOI
TL;DR: Garland (2001, p. 2) argues that late modernity, this distinctive pattern of social, economic, and cultural relations, brought with it a set of risks, insecurities, and problems of social control that gave a specific shape to our responses to crime, underwriting the high costs of …
Abstract: Over the last thirty years, there have been profound changes in how we understand crime and criminal justice. Crime has become a symbolic event, a true test for the social order and for government policies, a challenge for civil society, for democracy, and for human rights. According to David Garland, professor at the New York University School of Law, one of the leading authors in the field of the Sociology of Punishment, with an article published in Revista de Sociologia e Política, issue 13, late modernity saw a true obsession with security, steering criminal policies toward harsher penalties and greater intolerance of the criminal. Thirty years ago, in the US and in England, this trend was unsuspected. The book shows that the two countries share intriguing similarities in their criminal practices, despite the racial divide, the economic inequalities, and the lethal violence that strongly mark the American scene. According to David Garland, both countries exhibit the “same kinds of risks and insecurities, the same perception of the problems of ineffective social control, the same critiques of traditional criminal justice, and the same recurring anxieties about social change and order”1 (GARLAND, 2001, p. 2). The book's main argument is the following: late modernity, this distinctive pattern of social, economic, and cultural relations, brought with it a set of risks, insecurities, and problems of social control that gave a specific shape to our responses to crime, underwriting the high costs of criminal policies, maximum sentence lengths, and excessive incarceration rates.

2,183 citations

Journal ArticleDOI
14 Apr 2017-Science
TL;DR: This article showed that applying machine learning to ordinary human language results in human-like semantic biases and replicated a spectrum of known biases, as measured by the Implicit Association Test, using a widely used, purely statistical machine-learning model trained on a standard corpus of text from the World Wide Web.
Abstract: Machine learning is a means to derive artificial intelligence by discovering patterns in existing data. Here, we show that applying machine learning to ordinary human language results in human-like semantic biases. We replicated a spectrum of known biases, as measured by the Implicit Association Test, using a widely used, purely statistical machine-learning model trained on a standard corpus of text from the World Wide Web. Our results indicate that text corpora contain recoverable and accurate imprints of our historic biases, whether morally neutral as toward insects or flowers, problematic as toward race or gender, or even simply veridical, reflecting the status quo distribution of gender with respect to careers or first names. Our methods hold promise for identifying and addressing sources of bias in culture, including technology.
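The bias-measurement approach described in this abstract is based on differential cosine-similarity associations between word embeddings (the Word-Embedding Association Test). The sketch below is a minimal, self-contained illustration of that idea using hypothetical 2-D toy vectors rather than real web-corpus embeddings; the function names and numbers are assumptions for illustration only.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, attr_a, attr_b):
    """Mean similarity of word vector w to attribute set A minus set B."""
    return (np.mean([cosine(w, a) for a in attr_a])
            - np.mean([cosine(w, b) for b in attr_b]))

def weat_effect(targets_x, targets_y, attr_a, attr_b):
    """WEAT-style effect size: differential association of two target
    sets with two attribute sets, normalized by the pooled std dev."""
    s_x = [association(w, attr_a, attr_b) for w in targets_x]
    s_y = [association(w, attr_a, attr_b) for w in targets_y]
    return (np.mean(s_x) - np.mean(s_y)) / np.std(s_x + s_y, ddof=1)

# Toy 2-D "embeddings": flower words cluster near the 'pleasant'
# direction, insect words near the 'unpleasant' direction.
flowers = [np.array([1.0, 0.1]), np.array([0.9, 0.2])]
insects = [np.array([0.1, 1.0]), np.array([0.2, 0.9])]
pleasant = [np.array([1.0, 0.0])]
unpleasant = [np.array([0.0, 1.0])]

print(weat_effect(flowers, insects, pleasant, unpleasant))  # positive effect size
```

A positive effect size indicates that the first target set (flowers) is more closely associated with the first attribute set (pleasant) than the second target set is, which is the embedding analogue of an Implicit Association Test result.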

1,874 citations

Posted Content
TL;DR: This survey investigated different real-world applications that have shown biases in various ways, and created a taxonomy for fairness definitions that machine learning researchers have defined to avoid the existing bias in AI systems.
Abstract: With the widespread use of AI systems and applications in our everyday lives, it is important to take fairness issues into consideration while designing and engineering these types of systems. Such systems can be used in many sensitive environments to make important and life-changing decisions; thus, it is crucial to ensure that the decisions do not reflect discriminatory behavior toward certain groups or populations. We have recently seen work in machine learning, natural language processing, and deep learning that addresses such challenges in different subdomains. With the commercialization of these systems, researchers are becoming aware of the biases that these applications can contain and have attempted to address them. In this survey we investigated different real-world applications that have shown biases in various ways, and we listed different sources of biases that can affect AI applications. We then created a taxonomy for fairness definitions that machine learning researchers have defined in order to avoid the existing bias in AI systems. In addition to that, we examined different domains and subdomains in AI showing what researchers have observed with regard to unfair outcomes in the state-of-the-art methods and how they have tried to address them. There are still many future directions and solutions that can be taken to mitigate the problem of bias in AI systems. We are hoping that this survey will motivate researchers to tackle these issues in the near future by observing existing work in their respective fields.

1,571 citations

Journal ArticleDOI
01 Jun 2017
TL;DR: This article demonstrates that the fairness criteria cannot all be simultaneously satisfied when recidivism prevalence differs across groups, and shows how disparate impact can arise when an RPI fails to satisfy the criterion of error rate balance.
Abstract: Recidivism prediction instruments (RPIs) provide decision-makers with an assessment of the likelihood that a criminal defendant will reoffend at a future point in time. Although such instruments are gaining increasing popularity across the country, their use is attracting tremendous controversy. Much of the controversy concerns potential discriminatory bias in the risk assessments that are produced. This article discusses several fairness criteria that have recently been applied to assess the fairness of RPIs. We demonstrate that the criteria cannot all be simultaneously satisfied when recidivism prevalence differs across groups. We then show how disparate impact can arise when an RPI fails to satisfy the criterion of error rate balance.
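The incompatibility result described in this abstract can be illustrated numerically. This line of work uses an identity relating prevalence, positive predictive value (PPV), false negative rate (FNR), and false positive rate (FPR): if PPV and FNR are held equal across two groups, the FPR is forced to differ whenever prevalence differs. The sketch below is a minimal demonstration with hypothetical numbers, not data from any real instrument.

```python
def implied_fpr(prevalence, ppv, fnr):
    """FPR implied by the identity
        FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR)
    where p is the group's recidivism prevalence. Holding PPV and FNR
    fixed across groups, FPR must vary with prevalence."""
    return prevalence / (1 - prevalence) * (1 - ppv) / ppv * (1 - fnr)

ppv, fnr = 0.7, 0.2                # hypothetical values, equal for both groups
fpr_a = implied_fpr(0.3, ppv, fnr)  # group with 30% recidivism prevalence
fpr_b = implied_fpr(0.5, ppv, fnr)  # group with 50% recidivism prevalence
print(fpr_a, fpr_b)                 # the two FPRs cannot be equal
```

Because the implied false positive rates differ, an instrument satisfying predictive parity and equal false negative rates across these groups necessarily violates error rate balance, which is how disparate impact can arise even without any explicit group information in the model.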

1,452 citations