Author

Christopher T. Lowenkamp

Bio: Christopher T. Lowenkamp is an academic researcher from the University of Missouri–Kansas City. The author has contributed to research on topics including recidivism and risk assessment. The author has an h-index of 33 and has co-authored 83 publications receiving 4,108 citations. Previous affiliations of Christopher T. Lowenkamp include the University of Cincinnati and the Government of the United States of America.


Papers
Journal ArticleDOI
TL;DR: In the federal supervision system, officers have discretion to depart from the risk designations provided by the Post Conviction Risk Assessment (PCRA) instrument, as mentioned in this paper, and this component of the risk classi...
Abstract: In the federal supervision system, officers have discretion to depart from the risk designations provided by the Post Conviction Risk Assessment (PCRA) instrument. This component of the risk classi...

7 citations

Journal Article
TL;DR: The PCRA is a risk and need instrument similar to other measures such as the LS/CMI, the COMPAS, and the ORAS, as mentioned in this paper, and it has comparable or superior predictive accuracy to these other instruments.
Abstract: RISK FACTORS HAVE commonly been distinguished as being either static (e.g., age at first arrest, number of prior convictions) or dynamic (e.g., substance use, employment status). In the early days of risk assessment (1970s), static factors were most commonly incorporated into risk measures. They were easy to code and readily available; most importantly, these initial static risk measures demonstrated accuracy equal to or greater than unstructured assessments (Grove, Zald, Lebow, Snitz, & Nelson, 2000). Importantly, by the early 1980s, opposition to measures with exclusively static risk factors was beginning to develop, primarily because these scales could not identify intervention targets, and if scores could change, the range of potential change was greatly restricted and unidirectional (i.e., clients could only be rated worse; Bonta, 1996; Wong & Gordon, 2006). Notably, involvement in treatment could not improve scores, leading to the problematic practice of treatment completion having no impact on an individual's predicted outcome.

Andrews and Bonta (2010) presented a hierarchy of risk factors intended to identify appropriate targets for rehabilitation programs; their choice of variables was consistent with a conceptualization of dynamic risk factors as relatively slow-evolving features. Their description of these targets as criminogenic needs came to be considered synonymous with the concept of dynamic risk and led to the risk and need principles. Indeed, these stable dynamic risks were increasingly common in risk and need measures; their inclusion was intended to inform both levels of risk and case planning requirements for clients. Clients with a greater number of stable dynamic risks (i.e., criminogenic needs) were considered higher risk, warranting more intensive intervention and level of service. Encouragingly, targeting these criminogenic needs leads to improved client outcomes (Aos, Miller, & Drake, 2006; Smith, Gendreau, & Swartz, 2009).

The PCRA is a contemporary risk and need instrument similar to other measures such as the LS/CMI, the COMPAS, and the ORAS. Validity research indicates the PCRA has comparable or superior predictive accuracy to these other instruments (Desmarais & Singh, 2013). Importantly, even though the PCRA assessment is done at baseline, at 6 months, and then yearly thereafter, change scores across time on the PCRA are related to client outcome (Cohen, Lowenkamp, & VanBenschoten, 2016; Luallen, Radakrishnan, & Rhodes, 2016). The odds of client failure can be predicted by changes from one PCRA assessment to the next. For instance, in a case where the client's PCRA score is 3 points lower, the probability of violent rearrest is decreased by 19 percent. In contrast, in a case where the client's PCRA score is 3 points higher, the probability of violent rearrest is increased by 31 percent. Clearly, change on criminogenic needs, as measured by the PCRA, is important in understanding client outcome.

Increasingly, experts in the risk assessment field have argued that accuracy regarding the timing of client outcome can be enhanced by considering changes in acute dynamic risk factors (Douglas & Skeem, 2005; Serin, Chadwick, & Lloyd, 2016). Specifically, the expectation is that acute risks flag imminence of problematic outcomes for clients and augment risk assessment beyond static factors. As well, elevations in acute risk should mean that clients with similar crimes and PCRA scores could be managed differently from clients without such acute risks.

Several examples illustrate this viewpoint. Consider a client for whom employment has been a concern in that, when unemployed, the client commonly turns to criminal behavior to generate income. Hence, when that client advises you that he or she has just been fired, this should be a flag that increased monitoring (e.g., efforts to secure a new job, assistance with job search, access to and association with criminal peers, etc.) is in order. …
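To make the quoted relationship between score changes and outcomes concrete, the sketch below shows how a change of a few points on a total risk score maps to a change in predicted probability under a simple logistic model. The model form, intercept, and coefficient are hypothetical illustrations, not parameters of the PCRA or estimates from the cited studies; the only point is that a decrease and an increase of the same size can produce asymmetric relative changes in probability, as in the 19 percent versus 31 percent figures quoted above.

```python
import numpy as np

def rearrest_probability(score, intercept=-3.0, coef=0.15):
    """Hypothetical logistic model linking a total risk score to the
    probability of violent rearrest. Intercept and coefficient are
    illustrative only, not estimated from PCRA data."""
    return 1.0 / (1.0 + np.exp(-(intercept + coef * score)))

baseline = rearrest_probability(10)       # probability at the initial assessment
improved = rearrest_probability(10 - 3)   # score drops 3 points at reassessment
worsened = rearrest_probability(10 + 3)   # score rises 3 points at reassessment

print(f"baseline:     {baseline:.3f}")
print(f"3 pts lower:  {improved:.3f} ({(improved - baseline) / baseline:+.0%} relative change)")
print(f"3 pts higher: {worsened:.3f} ({(worsened - baseline) / baseline:+.0%} relative change)")
```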

6 citations

Journal ArticleDOI
TL;DR: The risk principle directs correctional practitioners to provide greater amounts of correctional treatment to higher-risk offenders, while the responsivity principle directs practitioners to target higher-risk offenders, as discussed by the authors; however, the risk principle is not applicable to mental health patients.
Abstract: The risk principle directs correctional practitioners to provide greater amounts of correctional treatment to higher-risk offenders, while the responsivity principle directs practitioners to target...

6 citations

Journal Article
TL;DR: Cohen et al. as discussed by the authors explored how changes in offender risk influence the likelihood of recidivism (i.e., arrests for either felony or misdemeanor offenses within one year after the second PCRA assessment) by tracking a sample of 64,716 offenders placed on federal supervision and found that offenders initially classified at the highest risk levels moved to a lower risk category in their second assessment and that offenders tended to improve the most in the PCRA risk domains of employment and substance abuse.
Abstract: THE POST CONVICTION Risk Assessment (PCRA) is a correctional assessment tool used by federal probation officers that identifies offenders most likely to commit new crimes and the criminogenic characteristics that, if changed, could reduce the likelihood of recidivism. Implementation of the PCRA allows federal probation officers to measure whether the criminogenic factors of offenders are changing over time and the relationship of these changes to subsequent reoffending behavior. We explored how changes in offender risk influence the likelihood of recidivism (i.e., arrests for either felony or misdemeanor offenses within one year after the second PCRA assessment) by tracking a sample of 64,716 offenders placed on federal supervision. The study found that many offenders initially classified at the highest risk levels moved to a lower risk category in their second assessment and that offenders tended to improve the most in the PCRA risk domains of employment and substance abuse.

The study also found that high, moderate, and low-moderate risk offenders witnessing decreases in either their risk classifications (i.e., going from high to moderate risk) or overall PCRA scores (i.e., going from 18 to 15 points) were less likely to recidivate compared to their counterparts whose risk levels or scores remained unchanged or increased. Conversely, increases in offender risk were associated with higher rates of arrests irrespective of whether the increase in risk involved higher risk levels or overall PCRA scores. For the most part, offenders with decreasing scores in any of the dynamic risk domains were consistently less likely to be rearrested. Finally, offenders in the lowest risk category saw no recidivism reduction if either their overall score or the score of any of their risk domains decreased.

This is a synopsis of key findings from our study examining federally supervised offenders with multiple PCRA assessments, which was published in the journal Criminology and Public Policy (Cohen et al., 2016). The PCRA is a dynamic fourth-generation risk assessment tool that predicts an offender's likelihood of recidivism at multiple time points. This instrument identifies offenders who are most likely to recidivate, ascertains crime-supporting characteristics that will benefit from supervision intervention, and provides information on barriers to successful offender re-integration and/or treatment (AOUSC, 2011).

With the implementation of the PCRA, we can for the first time investigate how much the risk levels of offenders are decreasing between assessments, which risk domains are most likely to get better, and whether offenders with declining risk levels are being arrested less frequently compared to their counterparts with stable or increasing risk levels. These issues are explored in this study using a sample of federally supervised offenders with multiple PCRA assessments. Before discussing this study's findings and implications, we briefly provide an overview of the PCRA risk tool, discuss previous research on the PCRA's capacity to assess change in offender recidivism risk, and detail the methodological approaches utilized in this study.

Using the PCRA to Examine Changes in Offender Risk

The PCRA is a dynamic risk assessment instrument that was developed for United States probation officers (Johnson, Lowenkamp, VanBenschoten, & Robinson, 2011; Lowenkamp, Johnson, VanBenschoten, Robinson, & Holsinger, 2013). The instrument uses five general domains that have been shown to be both theoretically and statistically predictive of offender recidivism: criminal history, education/employment, substance abuse, social networks, and cognitions (i.e., attitudes towards supervision) (Johnson et al., 2011; Lowenkamp et al., 2013). The PCRA has been shown to be highly predictive of whether an offender will reoffend after the commencement of his or her supervision term. For details of studies describing the construction and validation of the PCRA, see Johnson et al. …
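The central comparison in this study, rearrest rates grouped by whether an offender's risk level decreased, stayed the same, or increased between the first and second assessment, can be sketched roughly as below. The column names and the tiny dataset are hypothetical stand-ins, not the study's data or variable names.

```python
import pandas as pd

# Hypothetical records: risk level at the first and second PCRA-style assessment,
# plus a one-year rearrest indicator following the second assessment.
df = pd.DataFrame({
    "risk_t1":      ["high", "high", "moderate", "low/moderate", "low", "high"],
    "risk_t2":      ["moderate", "high", "moderate", "low", "low", "high"],
    "rearrest_1yr": [0, 1, 1, 0, 0, 1],
})

# Order the categories so the direction of change can be computed.
order = {"low": 0, "low/moderate": 1, "moderate": 2, "high": 3}
delta = df["risk_t2"].map(order) - df["risk_t1"].map(order)
df["direction"] = delta.apply(
    lambda d: "decreased" if d < 0 else ("increased" if d > 0 else "unchanged")
)

# Compare one-year rearrest rates by direction of change in risk level.
print(df.groupby("direction")["rearrest_1yr"].mean())
```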

6 citations

Journal ArticleDOI
TL;DR: In this article, a study was conducted to determine whether there is a statistically significant relationship between a sex offender's probability of reoffending and his registration and notification assignment in an Ohio sample of male sex offenders.
Abstract: Many sex offender registration and notification procedures use an assignment process that places offenders into a lower, middle, or upper tier. This implies that the offenders on the lowest tier pose less risk than those on the highest tier; yet empirical testing of this assumption is lacking. As a first step to determining whether this approach correctly identifies the dangerousness of sex offenders, this study seeks to determine whether there is a statistically significant relationship between a sex offender's probability of reoffending and his registration and notification assignment in an Ohio sample of male sex offenders. Chi-square results showed no significant relationship between a sex offender's probability of reoffending and his registration and notification assignment. Regression results demonstrated only two variables to be predictive of registration assignment—prior sex offenses and current first-degree felony offense—while other variables shown to be correlated with sex offending were not predictive of registration assignment.
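A minimal sketch of the kind of chi-square test described above, using SciPy and a made-up contingency table of tier assignment by reoffense, might look like the following; the counts are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts: rows are registration tiers (lowest, middle, upper),
# columns are reoffense outcome (no, yes).
table = np.array([
    [180, 20],   # lowest tier
    [150, 22],   # middle tier
    [ 90, 14],   # upper tier
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")
# A large p-value here would mirror the study's finding: no significant
# association between tier assignment and the probability of reoffending.
```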

5 citations


Cited by
Journal ArticleDOI
Cynthia Rudin
TL;DR: This Perspective clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare and computer vision.
Abstract: Black box machine learning models are currently being used for high-stakes decision making throughout society, causing problems in healthcare, criminal justice and other domains. Some people hope that creating methods for explaining these black box models will alleviate some of the problems, but trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practice and can potentially cause great harm to society. The way forward is to design models that are inherently interpretable. This Perspective clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare and computer vision. There has been a recent rise of interest in developing methods for ‘explainable AI’, where models are created to explain how a first ‘black box’ machine learning model arrives at a specific decision. It can be argued that instead efforts should be directed at building inherently interpretable models in the first place, in particular where they are applied in applications that directly affect human lives, such as in healthcare and criminal justice.
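As a generic illustration of what "inherently interpretable" can mean in this setting (not the specific models proposed in the paper), a sparse logistic regression over a handful of named features yields a scoring rule whose every coefficient can be read and audited. The feature names and synthetic data below are invented for the sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic binary features standing in for a high-stakes prediction task;
# names and data are hypothetical.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 3))  # e.g., prior_arrests>2, age<25, unemployed
y = (X @ np.array([1.2, 0.8, 0.5]) + rng.normal(0, 1, 500) > 1.0).astype(int)

# An L1-penalized logistic regression produces a small, readable model
# rather than a black box: each weight can be inspected directly.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
for name, w in zip(["prior_arrests>2", "age<25", "unemployed"], model.coef_[0]):
    print(f"{name:>16s}: {w:+.2f}")
print(f"{'intercept':>16s}: {model.intercept_[0]:+.2f}")
```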

3,609 citations

Journal ArticleDOI
TL;DR: The author argues that late modernity, this distinctive pattern of social, economic, and cultural relations, brought with it a set of risks, insecurities, and problems of social control that gave a specific configuration to our responses to crime, by guaranteeing the high costs of criminal policies (GARLAND, 2001, p. 2).
Abstract: Over the last thirty years, there have been profound changes in how we understand crime and criminal justice. Crime has become a symbolic event, a true test of the social order and of government policy, a challenge for civil society, for democracy, and for human rights. According to David Garland, professor at the New York University School of Law, one of the leading authors in the field of the sociology of punishment, who also has an article published in the Revista de Sociologia e Política, number 13, late modernity brought a genuine obsession with security, steering criminal policy toward harsher penalties and greater intolerance of the offender. Thirty years ago, in the United States and in England, this trend was unsuspected. The book shows that the two countries share intriguing similarities in their criminal-justice practices, despite the racial divide, the economic inequalities, and the lethal violence that strongly mark the American scene. According to David Garland, both countries exhibit the "same kinds of risks and insecurities, the same perception of the problems of ineffective social control, the same criticisms of traditional criminal justice, and the same recurring anxieties about social change and order" (GARLAND, 2001, p. 2). The book's main argument is the following: late modernity, this distinctive pattern of social, economic, and cultural relations, brought with it a set of risks, insecurities, and problems of social control that gave a specific configuration to our responses to crime, underwriting the high costs of criminal policies, maximum sentence lengths, and excessive incarceration rates.

2,183 citations

Journal ArticleDOI
14 Apr 2017-Science
TL;DR: This article showed that applying machine learning to ordinary human language results in human-like semantic biases and replicated a spectrum of known biases, as measured by the Implicit Association Test, using a widely used, purely statistical machine-learning model trained on a standard corpus of text from the World Wide Web.
Abstract: Machine learning is a means to derive artificial intelligence by discovering patterns in existing data. Here, we show that applying machine learning to ordinary human language results in human-like semantic biases. We replicated a spectrum of known biases, as measured by the Implicit Association Test, using a widely used, purely statistical machine-learning model trained on a standard corpus of text from the World Wide Web. Our results indicate that text corpora contain recoverable and accurate imprints of our historic biases, whether morally neutral as toward insects or flowers, problematic as toward race or gender, or even simply veridical, reflecting the status quo distribution of gender with respect to careers or first names. Our methods hold promise for identifying and addressing sources of bias in culture, including technology.
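The bias measurement in this work pairs target word sets (e.g., flowers vs. insects) with attribute sets (pleasant vs. unpleasant) and compares cosine similarities between their embeddings. The sketch below computes a WEAT-style test statistic on toy random vectors; with real pretrained embeddings (e.g., GloVe word vectors) the same computation recovers the associations the paper reports.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    """Mean cosine similarity of word vector w to attribute set A minus
    attribute set B (the per-word statistic in WEAT-style tests)."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

# Toy 4-dimensional vectors stand in for trained word embeddings.
rng = np.random.default_rng(1)
flowers    = [rng.normal(size=4) + np.array([1, 0, 0, 0]) for _ in range(5)]
insects    = [rng.normal(size=4) - np.array([1, 0, 0, 0]) for _ in range(5)]
pleasant   = [rng.normal(size=4) + np.array([1, 0, 0, 0]) for _ in range(5)]
unpleasant = [rng.normal(size=4) - np.array([1, 0, 0, 0]) for _ in range(5)]

# Positive statistic: "flowers" lean toward pleasant, "insects" toward unpleasant.
effect = (sum(association(w, pleasant, unpleasant) for w in flowers)
          - sum(association(w, pleasant, unpleasant) for w in insects))
print(f"WEAT-style test statistic: {effect:.2f}")
```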

1,874 citations

Posted Content
TL;DR: This survey investigated different real-world applications that have shown biases in various ways, and created a taxonomy for fairness definitions that machine learning researchers have defined to avoid the existing bias in AI systems.
Abstract: With the widespread use of AI systems and applications in our everyday lives, it is important to take fairness issues into consideration while designing and engineering these types of systems. Such systems can be used in many sensitive environments to make important and life-changing decisions; thus, it is crucial to ensure that the decisions do not reflect discriminatory behavior toward certain groups or populations. We have recently seen work in machine learning, natural language processing, and deep learning that addresses such challenges in different subdomains. With the commercialization of these systems, researchers are becoming aware of the biases that these applications can contain and have attempted to address them. In this survey we investigated different real-world applications that have shown biases in various ways, and we listed different sources of biases that can affect AI applications. We then created a taxonomy for fairness definitions that machine learning researchers have defined in order to avoid the existing bias in AI systems. In addition to that, we examined different domains and subdomains in AI showing what researchers have observed with regard to unfair outcomes in the state-of-the-art methods and how they have tried to address them. There are still many future directions and solutions that can be taken to mitigate the problem of bias in AI systems. We are hoping that this survey will motivate researchers to tackle these issues in the near future by observing existing work in their respective fields.

1,571 citations

Journal ArticleDOI
01 Jun 2017
TL;DR: It is demonstrated that the criteria cannot all be simultaneously satisfied when recidivism prevalence differs across groups, and it is shown how disparate impact can arise when an RPI fails to satisfy the criterion of error rate balance.
Abstract: Recidivism prediction instruments (RPIs) provide decision-makers with an assessment of the likelihood that a criminal defendant will reoffend at a future point in time. Although such instruments are gaining increasing popularity across the country, their use is attracting tremendous controversy. Much of the controversy concerns potential discriminatory bias in the risk assessments that are produced. This article discusses several fairness criteria that have recently been applied to assess the fairness of RPIs. We demonstrate that the criteria cannot all be simultaneously satisfied when recidivism prevalence differs across groups. We then show how disparate impact can arise when an RPI fails to satisfy the criterion of error rate balance.
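The incompatibility result can be illustrated numerically: holding positive predictive value and false-negative rate equal across two groups, the false-positive rate is pinned down by each group's prevalence, so it cannot also be equal when prevalences differ. The sketch below uses hypothetical numbers only.

```python
def fpr_from(prevalence, ppv, fnr):
    """With positive predictive value (PPV) and false-negative rate (FNR)
    held fixed, the false-positive rate is determined by the group's
    prevalence: FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR)."""
    return (prevalence / (1 - prevalence)) * ((1 - ppv) / ppv) * (1 - fnr)

# Hypothetical values: both groups receive the same PPV and FNR from the RPI,
# but their base rates of recidivism differ.
ppv, fnr = 0.6, 0.3
for group, prevalence in [("group A", 0.5), ("group B", 0.3)]:
    print(f"{group}: prevalence={prevalence:.2f} -> FPR={fpr_from(prevalence, ppv, fnr):.3f}")
# The false-positive rates necessarily differ, so predictive parity and
# error-rate balance cannot both hold when prevalence differs across groups.
```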

1,452 citations