Author

Neil A. Lewis

Bio: Neil A. Lewis is an academic researcher from Cornell University. The author has contributed to research in topics: Identity (social science) & Social cognition. The author has an h-index of 14 and has co-authored 38 publications receiving 834 citations. Previous affiliations of Neil A. Lewis include Max Planck Society & University of Michigan.

Papers
Journal ArticleDOI
24 Dec 2018
TL;DR: The authors conducted preregistered replications of 28 classic and contemporary published findings, with protocols that were peer reviewed in advance, to examine variation in effect magnitudes across samples and settings, and found that very little heterogeneity was attributable to the order in which the tasks were performed or to whether the tasks were administered in lab versus online.
Abstract: We conducted preregistered replications of 28 classic and contemporary published findings, with protocols that were peer reviewed in advance, to examine variation in effect magnitudes across samples and settings. Each protocol was administered to approximately half of 125 samples that comprised 15,305 participants from 36 countries and territories. Using the conventional criterion of statistical significance (p < .05), we found that 15 (54%) of the replications provided evidence of a statistically significant effect in the same direction as the original finding. With a strict significance criterion (p < .0001), 14 (50%) of the replications still provided such evidence, a reflection of the extremely high-powered design. Seven (25%) of the replications yielded effect sizes larger than the original ones, and 21 (75%) yielded effect sizes smaller than the original ones. The median comparable Cohen’s ds were 0.60 for the original findings and 0.15 for the replications. The effect sizes were small (< 0.20) in 16 of the replications (57%), and 9 effects (32%) were in the direction opposite the direction of the original effect. Across settings, the Q statistic indicated significant heterogeneity in 11 (39%) of the replication effects, and most of those were among the findings with the largest overall effect sizes; only 1 effect that was near zero in the aggregate showed significant heterogeneity according to this measure. Only 1 effect had a tau value greater than .20, an indication of moderate heterogeneity. Eight others had tau values near or slightly above .10, an indication of slight heterogeneity. Moderation tests indicated that very little heterogeneity was attributable to the order in which the tasks were performed or whether the tasks were administered in lab versus online. Exploratory comparisons revealed little heterogeneity between Western, educated, industrialized, rich, and democratic (WEIRD) cultures and less WEIRD cultures (i.e., cultures with relatively high and low WEIRDness scores, respectively). Cumulatively, variability in the observed effect sizes was attributable more to the effect being studied than to the sample or setting in which it was studied.
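
To make the abstract's heterogeneity measures concrete, here is a minimal sketch of Cochran's Q and the DerSimonian-Laird tau estimate computed across replication samples. The effect sizes and variances are hypothetical and this is a textbook formulation, not the study's analysis code.

```python
# Illustrative only: Cochran's Q and DerSimonian-Laird tau across
# replication samples. Inputs are hypothetical, not the study's data.
import numpy as np

def q_and_tau(effects, variances):
    """Heterogeneity of per-sample effect sizes (e.g., Cohen's d)."""
    d = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)  # inverse-variance weights
    d_bar = np.sum(w * d) / np.sum(w)             # fixed-effect pooled estimate
    q = np.sum(w * (d - d_bar) ** 2)              # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c)       # DerSimonian-Laird tau^2
    return q, np.sqrt(tau2)

# Five hypothetical replication samples of the same effect
q, tau = q_and_tau([0.10, 0.25, 0.05, 0.40, 0.15],
                   [0.01, 0.02, 0.015, 0.01, 0.02])
print(f"Q = {q:.2f}, tau = {tau:.2f}")  # tau near .10: slight heterogeneity
```

Q compares the observed dispersion of effects with what sampling error alone would produce, while tau expresses between-sample variability on the effect-size scale, which is why the abstract reads tau values near .10 as slight heterogeneity.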

495 citations

Journal ArticleDOI
TL;DR: The authors ask whether behavioural research on COVID-19 is suitable for making policy decisions, offer a taxonomy of 'evidence readiness levels' for judging when it is, and caution practitioners to take extreme care when translating findings to applications.
Abstract: Social and behavioural scientists have attempted to speak to the COVID-19 crisis. But is behavioural research on COVID-19 suitable for making policy decisions? We offer a taxonomy that lets our science advance in ‘evidence readiness levels’ to be suitable for policy. We caution practitioners to take extreme care translating our findings to applications.

103 citations

Journal ArticleDOI
TL;DR: In this paper, the authors propose an agenda for adopting open science practices in communication, which includes the following seven suggestions: (1) publish materials, data, and code; (2) preregister studies and submit registered reports; (3) conduct replications; (4) collaborate; (5) foster open science skills; (6) implement Transparency and Openness Promotion Guidelines; and (7) incentivize open science practices.
Abstract: In the last 10 years, many canonical findings in the social sciences appear unreliable. This so-called “replication crisis” has spurred calls for open science practices, which aim to increase the reproducibility, replicability, and generalizability of findings. Communication research is subject to many of the same challenges that have caused low replicability in other fields. As a result, we propose an agenda for adopting open science practices in Communication, which includes the following seven suggestions: (1) publish materials, data, and code; (2) preregister studies and submit registered reports; (3) conduct replications; (4) collaborate; (5) foster open science skills; (6) implement Transparency and Openness Promotion Guidelines; and (7) incentivize open science practices. Although in our agenda we focus mostly on quantitative research, we also reflect on open science practices relevant to qualitative research. We conclude by discussing potential objections and concerns associated with open science practices.

92 citations

Journal ArticleDOI
TL;DR: The authors ask whether you will go to a networking lunch, be tempted by a donut at 4 p.m., or do homework at 9 p.m., and observe that, like many people, you likely answer based on a gut sense of who you are.
Abstract: Will you be going to that networking lunch? Will you be tempted by a donut at 4 p.m.? Will you be doing homework at 9 p.m.? If, like many people, your responses are based on your gut sense of who you are...

92 citations

Journal ArticleDOI
TL;DR: The authors use identity-based motivation theory as an organizing framework to understand how macrolevel social stratification factors, including minority ethnic group membership and low socioeconomic position (e.g., parental education, income), and the stigma they carry, matter.
Abstract: African Americans, Latinos, and Native Americans aspire to do well in school but often fall short of this goal. We use identity-based motivation theory as an organizing framework to understand how macrolevel social stratification factors including minority–ethnic group membership and low socioeconomic position (e.g., parental education, income) and the stigma they carry, matter. Macrolevel social stratification differentially exposes students to contexts in which choice and control are limited and stigma is evoked, shaping identity-based motivation in three ways. Stratification influences which behaviors likely feel congruent with important identities, undermines belief that one's actions and effort matter, and skews chronic interpretation of one's experienced difficulties with schoolwork from interpreting experienced difficulty as implying importance (e.g., “it's for me”) toward implying “impossibility.” Because minority students have high aspirations, policies should invest in destigmatizing, scalable, universal, identity-based motivation-bolstering institutions and interventions.

86 citations


Cited by
Posted Content
TL;DR: The Difference is a landmark book about how we think in groups and how our collective wisdom exceeds the sum of its parts, showing that groups that display a range of perspectives outperform groups of like-minded experts.
Abstract: In this landmark book, Scott Page redefines the way we understand ourselves in relation to one another. The Difference is about how we think in groups--and how our collective wisdom exceeds the sum of its parts. Why can teams of people find better solutions than brilliant individuals working alone? And why are the best group decisions and predictions those that draw upon the very qualities that make each of us unique? The answers lie in diversity--not what we look like outside, but what we look like within, our distinct tools and abilities. The Difference reveals that progress and innovation may depend less on lone thinkers with enormous IQs than on diverse people working together and capitalizing on their individuality. Page shows how groups that display a range of perspectives outperform groups of like-minded experts. Diversity yields superior outcomes, and Page proves it using his own cutting-edge research. Moving beyond the politics that cloud standard debates about diversity, he explains why difference beats out homogeneity, whether you're talking about citizens in a democracy or scientists in the laboratory. He examines practical ways to apply diversity's logic to a host of problems, and along the way offers fascinating and surprising examples, from the redesign of the Chicago "El" to the truth about where we store our ketchup. Page changes the way we understand diversity--how to harness its untapped potential, how to understand and avoid its traps, and how we can leverage our differences for the benefit of all.

779 citations

Journal ArticleDOI
TL;DR: It is found that peer beliefs of replicability are strongly related to replicability, suggesting that the research community could predict which results would replicate and that failures to replicate were not the result of chance alone.
Abstract: Being able to replicate scientific findings is crucial for scientific progress. We replicate 21 systematically selected experimental studies in the social sciences published in Nature and Science between 2010 and 2015. The replications follow analysis plans reviewed by the original authors and pre-registered prior to the replications. The replications are high powered, with sample sizes on average about five times higher than in the original studies. We find a significant effect in the same direction as the original study for 13 (62%) studies, and the effect size of the replications is on average about 50% of the original effect size. Replicability varies between 12 (57%) and 14 (67%) studies for complementary replicability indicators. Consistent with these results, the estimated true-positive rate is 67% in a Bayesian analysis. The relative effect size of true positives is estimated to be 71%, suggesting that both false positives and inflated effect sizes of true positives contribute to imperfect reproducibility. Furthermore, we find that peer beliefs of replicability are strongly related to replicability, suggesting that the research community could predict which results would replicate and that failures to replicate were not the result of chance alone.
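
The abstract's two headline indicators are straightforward to compute from paired original/replication estimates. The sketch below uses invented numbers purely to show the bookkeeping (same-direction significance and the replication-to-original effect-size ratio); it is not the authors' code.

```python
# Illustrative bookkeeping for two replication indicators; the effects
# and p-values below are invented, not data from the 21 studies.
original_r = [0.40, 0.25, -0.30]       # original standardized effect sizes
replication_r = [0.22, 0.02, -0.18]    # replication estimates
replication_p = [0.001, 0.40, 0.003]   # replication p-values

# Indicator 1: significant effect in the same direction as the original
replicated = [p < .05 and (o > 0) == (r > 0)
              for o, r, p in zip(original_r, replication_r, replication_p)]

# Indicator 2: relative effect size of the replication
relative = [r / o for o, r in zip(original_r, replication_r)]

print(f"{sum(replicated)} of {len(replicated)} replicate")  # 2 of 3
print([f"{x:.0%}" for x in relative])  # ratios near 50% echo the paper
```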

759 citations

Journal ArticleDOI
TL;DR: Certain biases have caused a dramatic inflation in published effects, making it difficult to compare an actual effect with the real population effects (as these are unknown), and there were very large differences in the mean effects between psychological sub-disciplines and between different study designs, making it impossible to apply any global benchmarks.
Abstract: Effect sizes are the currency of psychological research. They quantify the results of a study to answer the research question and are used to calculate statistical power. The interpretation of effect sizes—when is an effect small, medium, or large?—has been guided by the recommendations Jacob Cohen gave in his pioneering writings starting in 1962: Either compare an effect with the effects found in past research or use certain conventional benchmarks. The present analysis shows that neither of these recommendations is currently applicable. From past publications without pre-registration, 900 effects were randomly drawn and compared with 93 effects from publications with pre-registration, revealing a large difference: Effects from the former (median r = .36) were much larger than effects from the latter (median r = .16). That is, certain biases, such as publication bias or questionable research practices, have caused a dramatic inflation in published effects, making it difficult to compare an actual effect with the real population effects (as these are unknown). In addition, there were very large differences in the mean effects between psychological sub-disciplines and between different study designs, making it impossible to apply any global benchmarks. Many more pre-registered studies are needed in the future to derive a reliable picture of real population effects.
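
To see the scale of the reported inflation, one can convert the median correlations to Cohen's d with the standard formula d = 2r / sqrt(1 - r^2); the snippet below is a textbook conversion, not code from the paper.

```python
# Converting the reported median effects (r) to Cohen's d via the
# standard formula d = 2r / sqrt(1 - r^2); illustration only.
import math

def r_to_d(r):
    return 2 * r / math.sqrt(1 - r ** 2)

for label, r in [("without pre-registration", 0.36),
                 ("with pre-registration", 0.16)]:
    print(f"median r = {r:.2f} {label} -> d = {r_to_d(r):.2f}")
# d drops from roughly 0.77 to roughly 0.32: a medium-to-large
# published effect shrinks to a small one once bias is removed.
```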

354 citations

Journal ArticleDOI
Marc Brysbaert1
19 Jul 2019
TL;DR: In this article, the authors describe reference sample sizes for the designs most often used by psychologists, including single-variable between-groups and repeated-measures designs with two and three levels, and two-factor designs involving two repeated-measures variables or one between-groups variable and one repeated-measures variable (split-plot designs).
Abstract: Given that an effect size of d = .4 is a good first estimate of the smallest effect size of interest in psychological research, we already need over 50 participants for a simple comparison of two within-participants conditions if we want to run a study with 80% power. This is more than current practice. In addition, as soon as a between-groups variable or an interaction is involved, numbers of 100, 200, and even more participants are needed. As long as we do not accept these facts, we will keep on running underpowered studies with unclear results. Addressing the issue requires a change in the way research is evaluated by supervisors, examiners, reviewers, and editors. The present paper describes reference numbers needed for the designs most often used by psychologists, including single-variable between-groups and repeated-measures designs with two and three levels, two-factor designs involving two repeated-measures variables or one between-groups variable and one repeated-measures variable (split-plot design). The numbers are given for the traditional, frequentist analysis with p < .05 and for Bayesian analysis with Bayes factors larger than 10. These numbers provide researchers with a standard to determine (and justify) the sample size of an upcoming study. The article also describes how researchers can improve the power of their study by including multiple observations per condition per participant.
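
The abstract's frequentist reference numbers can be reproduced with any standard power calculator. Assuming statsmodels is available, a quick check for d = .4, alpha = .05, and 80% power might look like this (an illustration, not the paper's own tables):

```python
# Sample sizes for d = .4, alpha = .05, power = .80 (illustration).
from statsmodels.stats.power import TTestPower, TTestIndPower

# Two within-participants conditions: a paired t-test on the
# difference scores (d here is on the difference-score scale).
n_within = TTestPower().solve_power(effect_size=0.4, alpha=0.05, power=0.8)
print(f"within-participants: n ~ {n_within:.0f}")  # ~51, i.e., "over 50"

# One between-groups variable with two levels: independent-samples t-test.
n_between = TTestIndPower().solve_power(effect_size=0.4, alpha=0.05, power=0.8)
print(f"between-groups: n ~ {n_between:.0f} per group")  # ~100 per group
```

The jump from about 50 participants total to about 100 per group is exactly the point the abstract makes about between-groups variables.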

314 citations