scispace - formally typeset
Author

Rebecca S. Frazier

Bio: Rebecca S. Frazier is an academic researcher at the University of Virginia. Her research focuses on social perception and the Implicit Association Test. She has an h-index of 7, having co-authored 9 publications that have received 5,718 citations. Previous affiliations include the University of California, San Francisco.

Papers
Journal ArticleDOI
28 Aug 2015 - Science
TL;DR: A large-scale assessment of 100 replications suggests that experimental reproducibility in psychology is substantially lower than the original literature implies; correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.
Abstract: Reproducibility is a defining feature of science, but the extent to which it characterizes current research is unknown. We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available. Replication effects were half the magnitude of original effects, representing a substantial decline. Ninety-seven percent of original studies had statistically significant results. Thirty-six percent of replications had statistically significant results; 47% of original effect sizes were in the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original result; and if no bias in original results is assumed, combining original and replication results left 68% with statistically significant effects. Correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.
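One of the paper's replication criteria, whether the original effect size falls inside the replication's 95% confidence interval, can be sketched for correlational effects using the Fisher z-transform. The numbers in the example are hypothetical, not taken from the study:

```python
import math

def fisher_z(r):
    """Fisher z-transform of a correlation coefficient."""
    return 0.5 * math.log((1 + r) / (1 - r))

def original_in_replication_ci(r_orig, r_rep, n_rep, z_crit=1.96):
    """Check whether the original correlation lies inside the
    replication's 95% CI, computed on the Fisher-z scale."""
    se = 1 / math.sqrt(n_rep - 3)                 # standard error of z
    z_rep = fisher_z(r_rep)
    lo, hi = z_rep - z_crit * se, z_rep + z_crit * se
    return lo <= fisher_z(r_orig) <= hi

# Hypothetical numbers: original r = .40, replication r = .21 with n = 120
print(original_in_replication_ci(0.40, 0.21, 120))
```

With these made-up numbers the original effect falls outside the replication interval, which is the kind of outcome observed for roughly half of the effect sizes in the study.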

5,532 citations

Journal ArticleDOI
TL;DR: The present state of knowledge provides a foundation for the next age of implicit social cognition: clarification of the mechanisms underlying implicit measurement and how the measured constructs influence behavior.

463 citations

Journal ArticleDOI
TL;DR: Eight of 17 interventions were effective at reducing implicit preferences for Whites compared with Blacks, particularly ones that provided experience with counterstereotypical exemplars, used evaluative conditioning methods, and provided strategies to override biases.
Abstract: Many methods for reducing implicit prejudice have been identified, but little is known about their relative effectiveness. We held a research contest to experimentally compare interventions for reducing the expression of implicit racial prejudice. Teams submitted seventeen interventions that were tested an average of 3.70 times each in four studies (total N = 17,021), with rules for revising interventions between studies. Eight of seventeen interventions were effective at reducing implicit preferences for Whites compared to Blacks, particularly ones that provided experience with counterstereotypical exemplars, used evaluative conditioning methods, and provided strategies to override biases. The other nine interventions were ineffective, particularly ones that engaged participants with others’ perspectives, asked participants to consider egalitarian values, or induced a positive emotion. The most potent interventions were ones that invoked high self-involvement or linked Black people with positivity and White people with negativity. No intervention consistently reduced explicit racial preferences. Furthermore, intervention effectiveness only weakly extended to implicit preferences for Asians and Hispanics.

375 citations

Journal ArticleDOI
TL;DR: The authors conducted a research contest to compare interventions for reducing the expression of implicit racial prejudice and found that the most potent interventions were those that invoked high self-involvement or linked Black people with positivity and White people with negativity.
Abstract: Many methods for reducing implicit prejudice have been identified, but little is known about their relative effectiveness. We held a research contest to experimentally compare interventions for reducing the expression of implicit racial prejudice. Teams submitted 17 interventions that were tested an average of 3.70 times each in 4 studies (total N = 17,021), with rules for revising interventions between studies. Eight of 17 interventions were effective at reducing implicit preferences for Whites compared with Blacks, particularly ones that provided experience with counterstereotypical exemplars, used evaluative conditioning methods, and provided strategies to override biases. The other 9 interventions were ineffective, particularly ones that engaged participants with others' perspectives, asked participants to consider egalitarian values, or induced a positive emotion. The most potent interventions were ones that invoked high self-involvement or linked Black people with positivity and White people with negativity. No intervention consistently reduced explicit racial preferences. Furthermore, intervention effectiveness only weakly extended to implicit preferences for Asians and Hispanics.

370 citations

Journal ArticleDOI
TL;DR: The authors introduce complementary impact indicators that are unrelated to the number of years since PhD, and illustrate cumulative and career-stage approaches for assessing scientific impact across a discipline.
Abstract: Number of citations and the h-index are popular metrics for indexing scientific impact. These, and other existing metrics, are strongly related to scientists’ seniority. This article introduces complementary indicators that are unrelated to the number of years since PhD. To illustrate cumulative and career-stage approaches for assessing the scientific impact across a discipline, citations for 611 scientists from 97 U.S. and Canadian social psychology programs are amassed and analyzed. Results provide benchmarks for evaluating impact across the career span in psychology and other disciplines with similar citation patterns. Career-stage indicators provide a very different perspective on individual and program impact than cumulative impact, and may predict emerging scientists and programs. Comparing social groups, Whites and men had higher impact than non-Whites and women, respectively. However, average differences in career stage accounted for most of the difference for both groups.
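The h-index discussed above (and reported as 7 in the author bio) is simple to compute: it is the largest h such that the author has h papers with at least h citations each. A minimal sketch, using the citation counts of the five papers listed on this profile plus hypothetical counts for the remaining ones:

```python
def h_index(citations):
    """h-index: largest h such that the author has h papers
    with at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:          # paper at this rank still clears the bar
            h = rank
        else:
            break
    return h

# First five counts are from this profile; the last four are invented
# to fill out the nine co-authored publications.
print(h_index([5532, 463, 375, 370, 92, 12, 9, 3, 1]))  # → 7
```

The career-stage indicators the paper proposes would then normalize such a value against years since PhD rather than report it raw.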

92 citations


Cited by
Journal ArticleDOI
TL;DR: The Human Side of Enterprise, as discussed in this paper, is one of the most influential works in the management literature; it has been taught in business schools, industrial relations schools, psychology departments, and professional development seminars for over four decades.
Abstract: "What are your assumptions (implicit as well as explicit) about the most effective way to manage people?" So began Douglas McGregor in this 1960 management classic. It was a seemingly simple question he asked, yet it led to a fundamental revolution in management. Today, with the rise of the global economy, the information revolution, and the growth of knowledge-driven work, McGregor's simple but provocative question continues to resonate, perhaps more powerfully than ever before. Heralded as one of the most important pieces of management literature ever written, a touchstone for scholars and a handbook for practitioners, The Human Side of Enterprise continues to receive the highest accolades nearly half a century after its initial publication. Influencing such major management thinkers as Peter Drucker and Warren Bennis, McGregor's revolutionary Theory Y, which contends that individuals are self-motivated and self-directed, and Theory X, in which employees must be commanded and controlled, have been widely taught in business schools, industrial relations schools, psychology departments, and professional development seminars for over four decades. In this special annotated edition of the worldwide management classic, Joel Cutcher-Gershenfeld, Senior Research Scientist in MIT's Sloan School of Management and Engineering Systems Division, shows us how today's leaders have successfully incorporated McGregor's methods into modern management styles and practices. The added quotes and commentary bring the content right into today's debates and business models. Now more than ever, the timeless wisdom of Douglas McGregor can light the path toward a management style that nurtures leadership capability, creates effective teams, ensures internal alignment, achieves high performance, and cultivates an authentic, value-driven workplace: lessons we all need to learn as we make our way in this brave new world of the 21st century.

3,373 citations

Journal ArticleDOI
TL;DR: In this editorial, Ajzen discusses the nature of intentions and the limits of predictive validity; rationality, affect, and emotions; past behaviour and habit; the prototype/willingness model; and the role of such background factors as the Big Five personality traits and social comparison tendency.
Abstract: The seven articles in this issue, and the accompanying meta-analysis in Health Psychology Review [McEachan, R.R.C., Conner, M., Taylor, N., & Lawton, R.J. (2011). Prospective prediction of health-related behaviors with the theory of planned behavior: A meta-analysis. Health Psychology Review, 5, 97–144], illustrate the wide application of the theory of planned behaviour [Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50, 179–211] in the health domain. In this editorial, Ajzen reflects on some of the issues raised by the different authors. Among the topics addressed are the nature of intentions and the limits of predictive validity; rationality, affect and emotions; past behaviour and habit; the prototype/willingness model; and the role of such background factors as the big five personality traits and social comparison tendency.

2,902 citations

Journal ArticleDOI
26 May 2016 - Nature

2,609 citations

Journal ArticleDOI
Daniel J. Benjamin, James O. Berger, Magnus Johannesson, Brian A. Nosek, Eric-Jan Wagenmakers, Valen E. Johnson, and coauthors (72 authors in total, affiliated with 55 institutions including the University of Southern California, Duke University, the Stockholm School of Economics, the University of Virginia, Stanford University, and Harvard University)
TL;DR: The authors propose changing the default P-value threshold for claims of new discoveries from 0.05 to 0.005, to reduce the rate of false-positive findings.
Abstract: We propose to change the default P-value threshold for statistical significance from 0.05 to 0.005 for claims of new discoveries.
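The proposal can be illustrated with a small sketch that classifies a two-sided test result under both thresholds. The "suggestive" label for results with 0.005 &lt; P &lt; 0.05 follows the authors' proposal; the function names are our own:

```python
import math

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

def verdict(z, alpha_new=0.005, alpha_old=0.05):
    """Classify a result under the proposed 0.005 threshold:
    p < 0.005 is 'significant'; 0.005 <= p < 0.05 is downgraded
    to 'suggestive', per the proposal; otherwise 'not significant'."""
    p = two_sided_p(z)
    if p < alpha_new:
        return "significant"
    if p < alpha_old:
        return "suggestive"
    return "not significant"

print(verdict(2.2))   # p ≈ 0.028 → "suggestive"
print(verdict(3.0))   # p ≈ 0.0027 → "significant"
```

A result that clears the conventional 0.05 bar (z = 2.2) would no longer count as a discovery under the proposed threshold, which requires roughly z > 2.8.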

1,586 citations

Journal ArticleDOI
TL;DR: In this article, the authors review the current state of the art in network estimation and introduce two novel statistical methods: the correlation stability coefficient and the bootstrapped difference test for edge-weights and centrality indices.
Abstract: The use of psychological networks that conceptualize behavior as a complex interplay of psychological and other components has gained increasing popularity in various research fields. While prior publications have tackled the topics of estimating and interpreting such networks, little work has been conducted to check how accurately networks are estimated (i.e., how prone they are to sampling variation) and how stable inferences from the network structure (such as centrality indices) are (i.e., whether interpretation remains similar with fewer observations). In this tutorial paper, we aim to introduce the reader to this field and tackle the problem of accuracy under sampling variation. We first introduce the current state of the art in network estimation. Second, we provide a rationale for why researchers should investigate the accuracy of psychological networks. Third, we describe how bootstrap routines can be used to (A) assess the accuracy of estimated network connections, (B) investigate the stability of centrality indices, and (C) test whether network connections and centrality estimates for different variables differ from each other. We introduce two novel statistical methods: for (B) the correlation stability coefficient, and for (C) the bootstrapped difference test for edge-weights and centrality indices. We conducted and present simulation studies to assess the performance of both methods. Finally, we developed the free R package bootnet, which allows for estimating psychological networks in a generalized framework in addition to implementing the proposed bootstrap methods. We showcase bootnet in a tutorial, accompanied by R syntax, in which we analyze a dataset of 359 women with posttraumatic stress disorder available online.

1,584 citations