Author

Jordan Axt

Bio: Jordan Axt is an academic researcher from McGill University. His research focuses on implicit attitudes and the Implicit Association Test. He has an h-index of 14 and has co-authored 48 publications receiving 6,725 citations. Previous affiliations include Duke University and the Center for Open Science.

[Chart: papers published on a yearly basis]

Papers
Journal Article · DOI
28 Aug 2015 · Science
TL;DR: A large-scale assessment of 100 replication attempts suggests that the reproducibility of psychological research is lower than often assumed; replication success was better predicted by the strength of the original evidence than by characteristics of the original and replication teams.
Abstract: Reproducibility is a defining feature of science, but the extent to which it characterizes current research is unknown. We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available. Replication effects were half the magnitude of original effects, representing a substantial decline. Ninety-seven percent of original studies had statistically significant results. Thirty-six percent of replications had statistically significant results; 47% of original effect sizes were in the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original result; and if no bias in original results is assumed, combining original and replication results left 68% with statistically significant effects. Correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.
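One of the key numbers above (47% of original effect sizes inside the replication's 95% confidence interval) rests on a simple interval check. Below is a minimal sketch of that check for correlation effect sizes, using the Fisher z transformation; the sample values are hypothetical, not taken from the paper.

```python
import math

def original_in_replication_ci(r_orig: float, r_rep: float, n_rep: int) -> bool:
    """Check whether the original correlation falls inside the
    replication's 95% CI, computed on the Fisher-z scale."""
    z_rep = math.atanh(r_rep)           # Fisher z transform of replication r
    se = 1.0 / math.sqrt(n_rep - 3)     # approximate standard error of z
    lo, hi = z_rep - 1.96 * se, z_rep + 1.96 * se
    return lo <= math.atanh(r_orig) <= hi

# Hypothetical values: original r = .40, replication r = .15 with n = 120
print(original_in_replication_ci(0.40, 0.15, 120))  # False: original outside CI
```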

5,532 citations

Journal Article · DOI
TL;DR: A substantial number of white laypeople and white medical students and residents hold false beliefs about biological differences between blacks and whites, and these beliefs predict racial bias in pain perception and in the accuracy of treatment recommendations.
Abstract: Black Americans are systematically undertreated for pain relative to white Americans. We examine whether this racial bias is related to false beliefs about biological differences between blacks and whites (e.g., “black people’s skin is thicker than white people’s skin”). Study 1 documented these beliefs among white laypersons and revealed that participants who more strongly endorsed false beliefs about biological differences reported lower pain ratings for a black (vs. white) target. Study 2 extended these findings to the medical context and found that half of a sample of white medical students and residents endorsed these beliefs. Moreover, participants who endorsed these beliefs rated the black (vs. white) patient’s pain as lower and made less accurate treatment recommendations. Participants who did not endorse these beliefs rated the black (vs. white) patient’s pain as higher, but showed no bias in treatment recommendations. These findings suggest that individuals with at least some medical training hold and may use false beliefs about biological differences between blacks and whites to inform medical judgments, which may contribute to racial disparities in pain assessment and treatment.

1,253 citations

Journal Article · DOI
24 Dec 2018
TL;DR: The authors conducted preregistered replications of 28 classic and contemporary published findings, with protocols peer reviewed in advance, to examine variation in effect magnitudes across samples and settings; very little heterogeneity was attributable to the order in which the tasks were performed or to whether the tasks were administered in the lab or online.
Abstract: We conducted preregistered replications of 28 classic and contemporary published findings, with protocols that were peer reviewed in advance, to examine variation in effect magnitudes across samples and settings. Each protocol was administered to approximately half of 125 samples that comprised 15,305 participants from 36 countries and territories. Using the conventional criterion of statistical significance (p < .05), we found that 15 (54%) of the replications provided evidence of a statistically significant effect in the same direction as the original finding. With a strict significance criterion (p < .0001), 14 (50%) of the replications still provided such evidence, a reflection of the extremely high-powered design. Seven (25%) of the replications yielded effect sizes larger than the original ones, and 21 (75%) yielded effect sizes smaller than the original ones. The median comparable Cohen’s ds were 0.60 for the original findings and 0.15 for the replications. The effect sizes were small (< 0.20) in 16 of the replications (57%), and 9 effects (32%) were in the direction opposite the direction of the original effect. Across settings, the Q statistic indicated significant heterogeneity in 11 (39%) of the replication effects, and most of those were among the findings with the largest overall effect sizes; only 1 effect that was near zero in the aggregate showed significant heterogeneity according to this measure. Only 1 effect had a tau value greater than .20, an indication of moderate heterogeneity. Eight others had tau values near or slightly above .10, an indication of slight heterogeneity. Moderation tests indicated that very little heterogeneity was attributable to the order in which the tasks were performed or whether the tasks were administered in lab versus online. Exploratory comparisons revealed little heterogeneity between Western, educated, industrialized, rich, and democratic (WEIRD) cultures and less WEIRD cultures (i.e., cultures with relatively high and low WEIRDness scores, respectively). Cumulatively, variability in the observed effect sizes was attributable more to the effect being studied than to the sample or setting in which it was studied.
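The Q statistic and tau reported above are standard meta-analytic heterogeneity measures. As a rough sketch of how they are computed from per-site effect estimates and standard errors (the DerSimonian-Laird estimator; the numbers below are invented, not Many Labs data):

```python
import numpy as np

def heterogeneity(effects, ses):
    """Cochran's Q and the DerSimonian-Laird estimate of tau."""
    effects, ses = np.asarray(effects), np.asarray(ses)
    w = 1.0 / ses**2                       # inverse-variance weights
    pooled = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - pooled)**2)  # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)          # truncated at zero
    return q, np.sqrt(tau2)

# Hypothetical per-site Cohen's d values and their standard errors
q, tau = heterogeneity([0.10, 0.35, 0.22, 0.05], [0.08, 0.09, 0.07, 0.10])
print(f"Q = {q:.2f}, tau = {tau:.2f}")  # tau near .10 = slight heterogeneity
```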

495 citations

Journal Article · DOI
TL;DR: Implicit measures can be changed, but the effects are often relatively weak (|ds| < .30), and changes in implicit measures did not mediate changes in explicit measures or behavior.
Abstract: Using a novel technique known as network meta-analysis, we synthesized evidence from 492 studies (87,418 participants) to investigate the effectiveness of procedures in changing implicit measures, which we define as response biases on implicit tasks. We also evaluated these procedures' effects on explicit and behavioral measures. We found that implicit measures can be changed, but effects are often relatively weak (|ds| < .30). Most studies focused on producing short-term changes with brief, single-session manipulations. Procedures that associate sets of concepts, invoke goals or motivations, or tax mental resources changed implicit measures the most, whereas procedures that induced threat, affirmation, or specific moods/emotions changed implicit measures the least. Bias tests suggested that implicit effects could be inflated relative to their true population values. Procedures changed explicit measures less consistently and to a smaller degree than implicit measures and generally produced trivial changes in behavior. Finally, changes in implicit measures did not mediate changes in explicit measures or behavior. Our findings suggest that changes in implicit measures are possible, but those changes do not necessarily translate into changes in explicit measures or behavior. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
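For context on the |ds| < .30 benchmark, Cohen's d is the standardized mean difference between two conditions. A minimal sketch with illustrative numbers (not taken from the meta-analysis):

```python
import math

def cohens_d(m1, m2, sd1, sd2, n1, n2):
    """Standardized mean difference using a pooled standard deviation."""
    pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled

# Hypothetical implicit-measure scores for a treatment vs. control condition
d = cohens_d(m1=0.42, m2=0.52, sd1=0.40, sd2=0.38, n1=150, n2=150)
print(f"d = {d:.2f}")  # about -0.26: a "relatively weak" effect, |d| < .30
```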

312 citations

Journal Article · DOI
TL;DR: The authors tested 9 interventions (8 real and 1 sham) for reducing implicit racial preferences over time; none remained effective after a delay of several hours to several days, the interventions did not change explicit racial preferences, and their effects were not reliably moderated by motivations to respond without prejudice.
Abstract: Implicit preferences are malleable, but does that change last? We tested 9 interventions (8 real and 1 sham) to reduce implicit racial preferences over time. In 2 studies with a total of 6,321 participants, all 9 interventions immediately reduced implicit preferences. However, none were effective after a delay of several hours to several days. We also found that these interventions did not change explicit racial preferences and were not reliably moderated by motivations to respond without prejudice. Short-term malleability in implicit preferences does not necessarily lead to long-term change, raising new questions about the flexibility and stability of implicit preferences.

298 citations


Cited by
Journal Article · DOI
26 May 2016 · Nature

2,609 citations

Journal Article · DOI
25 Oct 2019 · Science
TL;DR: The choice of convenient, seemingly effective proxies for ground truth can be an important source of algorithmic bias in many contexts.
Abstract: Health systems rely on commercial prediction algorithms to identify and help patients with complex health needs. We show that a widely used algorithm, typical of this industry-wide approach and affecting millions of patients, exhibits significant racial bias: At a given risk score, Black patients are considerably sicker than White patients, as evidenced by signs of uncontrolled illnesses. Remedying this disparity would increase the percentage of Black patients receiving additional help from 17.7 to 46.5%. The bias arises because the algorithm predicts health care costs rather than illness, but unequal access to care means that we spend less money caring for Black patients than for White patients. Thus, despite health care cost appearing to be an effective proxy for health by some measures of predictive accuracy, large racial biases arise. We suggest that the choice of convenient, seemingly effective proxies for ground truth can be an important source of algorithmic bias in many contexts.
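The mechanism described (training on cost rather than illness under unequal access to care) can be reproduced in a toy simulation. Everything below is hypothetical and mimics only the structure of the argument, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
group_b = rng.random(n) < 0.5        # hypothetical group membership
illness = rng.gamma(2.0, 2.0, n)     # latent health need (higher = sicker)

# Unequal access: group B generates less cost at the same level of illness
cost = illness * np.where(group_b, 0.6, 1.0) * rng.lognormal(0.0, 0.3, n)

# An "algorithm" that flags the highest predicted-cost patients for extra help
flagged = cost >= np.quantile(cost, 0.97)

# At the same cost-based risk tier, group B patients are considerably sicker
print("mean illness, flagged group B:", illness[flagged & group_b].mean())
print("mean illness, flagged others: ", illness[flagged & ~group_b].mean())
```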

2,003 citations

Journal Article
TL;DR: Qualitative research in mobile health clinics has found that patients value the informal, familiar environment in a convenient location, with staff who “are easy to talk to,” and that the staff’s “marriage of professional and personal discourses” gives patients the space to disclose information about themselves.
Abstract: New research shows that mobile health clinics improve health outcomes for hard-to-reach populations in cost-effective and culturally competent ways. A Harvard Medical School study determined that for every dollar invested in a mobile health clinic, the US healthcare system saves $30 on average. Mobile health clinics, which offer a range of services from preventive screenings to asthma treatment, leverage their mobility to treat people in the convenience of their own communities. For example, a mobile health clinic in Baltimore, MD, has documented savings of $3,500 per child seen due to reduced asthma-related hospitalizations. The estimated 2,000 mobile health clinics across the country provide similarly cost-effective access to healthcare for a wide range of populations. Many successful mobile health clinics cite their ability to foster trusting relationships. Qualitative research in such clinics has found that patients value the informal, familiar environment in a convenient location, with staff who “are easy to talk to,” and that the staff’s “marriage of professional and personal discourses” gives patients the space to disclose information about themselves. A communications scholar has argued that mobile health clinics’ unique use of space is important in facilitating these relationships: they park in the heart of the community in familiar spaces, like shopping centers or bus stations, which lend themselves to the local community atmosphere.

2,003 citations

Journal Article · DOI
Daniel J. Benjamin, James O. Berger, Magnus Johannesson, Brian A. Nosek, Eric-Jan Wagenmakers, Richard A. Berk, Kenneth A. Bollen, Björn Brembs, Lawrence D. Brown, Colin F. Camerer, David Cesarini, Christopher D. Chambers, Merlise A. Clyde, Thomas D. Cook, Paul De Boeck, Zoltan Dienes, Anna Dreber, Kenny Easwaran, Charles Efferson, Ernst Fehr, Fiona Fidler, Andy P. Field, Malcolm R. Forster, Edward I. George, Richard Gonzalez, Steven N. Goodman, Edwin J. Green, Donald P. Green, Anthony G. Greenwald, Jarrod D. Hadfield, Larry V. Hedges, Leonhard Held, Teck-Hua Ho, Herbert Hoijtink, Daniel J. Hruschka, Kosuke Imai, Guido W. Imbens, John P. A. Ioannidis, Minjeong Jeon, James Holland Jones, Michael Kirchler, David Laibson, John A. List, Roderick J. A. Little, Arthur Lupia, Edouard Machery, Scott E. Maxwell, Michael A. McCarthy, Don A. Moore, Stephen L. Morgan, Marcus R. Munafò, Shinichi Nakagawa, Brendan Nyhan, Timothy H. Parker, Luis R. Pericchi, Marco Perugini, Jeffrey N. Rouder, Judith Rousseau, Victoria Savalei, Felix D. Schönbrodt, Thomas Sellke, Betsy Sinclair, Dustin Tingley, Trisha Van Zandt, Simine Vazire, Duncan J. Watts, Christopher Winship, Robert L. Wolpert, Yu Xie, Cristobal Young, Jonathan Zinman, Valen E. Johnson
Affiliations: University of Southern California; Duke University; Stockholm School of Economics; University of Virginia; Center for Open Science; University of Amsterdam; University of Pennsylvania; University of North Carolina at Chapel Hill; University of Regensburg; California Institute of Technology; New York University; Research Institute of Industrial Economics; Cardiff University; Northwestern University; Mathematica Policy Research; Ohio State University; University of Sussex; Texas A&M University; Royal Holloway, University of London; University of Zurich; University of Melbourne; University of Wisconsin-Madison; University of Michigan; Stanford University; Rutgers University; Columbia University; University of Washington; University of Edinburgh; National University of Singapore; Utrecht University; Arizona State University; Princeton University; University of California, Los Angeles; Imperial College London; University of Innsbruck; Harvard University; University of Chicago; University of Pittsburgh; University of Notre Dame; University of California, Berkeley; Johns Hopkins University; University of Bristol; University of New South Wales; Dartmouth College; Whitman College; University of Puerto Rico; University of Milan; University of California, Irvine; Paris Dauphine University; University of British Columbia; Ludwig Maximilian University of Munich; Purdue University; Washington University in St. Louis; University of California, Davis; Microsoft
TL;DR: The authors propose changing the default P-value threshold for statistical significance from 0.05 to 0.005 for claims of new discoveries, in order to reduce the rate of false-positive findings.
Abstract: We propose to change the default P-value threshold for statistical significance from 0.05 to 0.005 for claims of new discoveries.
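In operational terms, the proposal tightens the two-sided critical value a test statistic must exceed. A quick illustration for a standard normal test statistic, using scipy:

```python
from scipy.stats import norm

for alpha in (0.05, 0.005):
    z = norm.ppf(1 - alpha / 2)   # two-sided critical value
    print(f"alpha = {alpha}: |z| must exceed {z:.2f}")
# alpha = 0.05:  |z| must exceed 1.96
# alpha = 0.005: |z| must exceed 2.81
```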

1,586 citations