Author

Danielle J. Navarro

Bio: Danielle J. Navarro is an academic researcher from the University of New South Wales. The author has contributed to research on topics including inductive reasoning and inference. The author has an h-index of 10 and has co-authored 32 publications receiving 387 citations.

Papers
Journal ArticleDOI
TL;DR: This work describes the collection of word associations for over 12,000 cue words, currently the largest such English-language resource in the world, and shows that measures based on a mechanism of spreading activation derived from this new resource are highly predictive of direct judgments of similarity.
Abstract: Word associations have been used widely in psychology, but the validity of their application strongly depends on the number of cues included in the study and the extent to which they probe all associations known by an individual. In this work, we address both issues by introducing a new English word association dataset. We describe the collection of word associations for over 12,000 cue words, currently the largest such English-language resource in the world. Our procedure allowed subjects to provide multiple responses for each cue, which permits us to measure weak associations. We evaluate the utility of the dataset in several different contexts, including lexical decision and semantic categorization. We also show that measures based on a mechanism of spreading activation derived from this new resource are highly predictive of direct judgments of similarity. Finally, a comparison with existing English word association sets further highlights systematic improvements provided through these new norms.

192 citations
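The spreading-activation measure referred to in the abstract above is not spelled out on this page. As a rough illustration only, the following Python sketch (the toy association counts, decay parameter, and function names are assumptions, not taken from the paper) row-normalises cue-response counts into a transition matrix, lets activation spread through the network for a few steps, and scores word similarity as the cosine between the resulting activation profiles.

```python
import numpy as np

# Toy cue-response counts: assoc[i, j] = how often word j was given as a
# response to cue word i. In the real norms this would be built from the
# responses collected for ~12,000 cues; here it is a tiny illustrative matrix.
words = ["dog", "cat", "bone", "fur", "purr"]
assoc = np.array([
    [0, 10, 8, 4, 0],   # dog -> cat, bone, fur
    [10, 0, 1, 6, 9],   # cat -> dog, fur, purr
    [7, 1, 0, 0, 0],    # bone -> dog
    [3, 6, 0, 0, 1],    # fur -> dog, cat
    [0, 9, 0, 1, 0],    # purr -> cat
], dtype=float)

# Row-normalise counts into transition probabilities P(response | cue).
P = assoc / assoc.sum(axis=1, keepdims=True)

def spread_activation(P, start, steps=3, decay=0.8):
    """Activate one word and let activation spread through the network,
    attenuated by `decay` at each step; return the summed activation vector."""
    act = np.zeros(P.shape[0])
    act[start] = 1.0
    total = act.copy()
    for _ in range(steps):
        act = decay * (act @ P)
        total += act
    return total

def similarity(P, i, j, **kw):
    """Cosine similarity between the activation profiles of two words."""
    a, b = spread_activation(P, i, **kw), spread_activation(P, j, **kw)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(f"{words[0]} ~ {words[1]}: {similarity(P, 0, 1):.3f}")
print(f"{words[0]} ~ {words[4]}: {similarity(P, 0, 4):.3f}")
```

With a full set of norms the same idea applies unchanged; only the size of the association matrix grows.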

Posted ContentDOI
01 Mar 2019
TL;DR: The authors discuss these issues from a scientific perspective; they do not offer answers, but aim to highlight why psychological researchers cannot avoid asking such questions.
Abstract: Discussions of model selection in the psychological literature typically frame the issues as a question of statistical inference, with the goal being to determine which model makes the best predictions about data. Within this setting, advocates of leave-one-out cross-validation and Bayes factors disagree on precisely which prediction problem model selection questions should aim to answer. In this comment, I discuss some of these issues from a scientific perspective. What goal does model selection serve when all models are known to be systematically wrong? How might “toy problems” tell a misleading story? How does the scientific goal of explanation align with (or differ from) traditional statistical concerns? I do not offer answers to these questions, but hope to highlight the reasons why psychological researchers cannot avoid asking them.

66 citations
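The comment above contrasts leave-one-out cross-validation with Bayes factors without working an example. As a hedged sketch of the kind of comparison at issue (the toy data, the candidate models, and the BIC approximation to the Bayes factor are my choices, not the paper's), the snippet below scores a linear and a quadratic regression by LOO predictive error and by an approximate Bayes factor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a mildly quadratic trend plus noise. Both candidate models are
# "wrong" in different ways, which is the situation the comment highlights.
n = 40
x = np.linspace(-2, 2, n)
y = 0.5 * x + 0.3 * x**2 + rng.normal(0, 0.5, n)

def design(x, degree):
    return np.vander(x, degree + 1, increasing=True)  # columns [1, x, x^2, ...]

def loo_mse(x, y, degree):
    """Leave-one-out cross-validation: refit with each point held out."""
    errs = []
    for i in range(len(x)):
        keep = np.arange(len(x)) != i
        beta, *_ = np.linalg.lstsq(design(x[keep], degree), y[keep], rcond=None)
        pred = design(x[i:i+1], degree) @ beta
        errs.append((y[i] - pred[0]) ** 2)
    return np.mean(errs)

def bic(x, y, degree):
    """BIC for a Gaussian linear model; exp((BIC_a - BIC_b) / 2) gives a rough
    approximation to the Bayes factor in favour of model b over model a."""
    X = design(x, degree)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    k = X.shape[1] + 1  # coefficients plus the noise variance
    return len(x) * np.log(rss / len(x)) + k * np.log(len(x))

for d in (1, 2):
    print(f"degree {d}: LOO MSE = {loo_mse(x, y, d):.3f}, BIC = {bic(x, y, d):.1f}")
print(f"approx BF (quadratic vs linear) = {np.exp((bic(x, y, 1) - bic(x, y, 2)) / 2):.1f}")
```

The two scores usually agree on this toy problem; the comment's point is about what either score means when all candidate models are misspecified.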

Posted ContentDOI
28 Apr 2020-bioRxiv
TL;DR: A formal statistical analysis of three popular claims in the metascientific literature is presented, showing how such formalism can inform and shape debates about methodological claims.
Abstract: Current attempts at methodological reform in sciences come in response to an overall lack of rigor in methodological and scientific practices in experimental sciences. However, most methodological reform attempts suffer from similar mistakes and over-generalizations to the ones they aim to address. We argue that this can be attributed in part to lack of formalism and first principles. Considering the costs of allowing false claims to become canonized, we argue for formal statistical rigor and scientific nuance in methodological reform. To attain this rigor and nuance, we propose a five-step formal approach for solving methodological problems. To illustrate the use and benefits of such formalism, we present a formal statistical analysis of three popular claims in the metascientific literature: (a) that reproducibility is the cornerstone of science; (b) that data must not be used twice in any analysis; and (c) that exploratory projects imply poor statistical practice. We show how our formal approach can inform and shape debates about such methodological claims.

51 citations
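One of the three claims analysed above, that data must not be used twice, can be probed with a small simulation. The sketch below is not the paper's analysis; the selection rule, sample sizes, and simulation settings are illustrative assumptions. It selects the variable with the largest observed mean among several null variables and then tests it either on the same sample or on a fresh one, showing how one specific form of double use inflates the false-positive rate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def false_positive_rate(reuse_data, n=30, n_vars=5, sims=5000, alpha=0.05):
    """Pick the variable with the largest observed mean out of `n_vars` null
    variables, then t-test it either on the same sample (data used twice)
    or on a fresh sample of the same size."""
    hits = 0
    for _ in range(sims):
        sample = rng.normal(0, 1, (n, n_vars))   # all true means are 0
        best = np.argmax(sample.mean(axis=0))    # selection step
        test_data = sample[:, best] if reuse_data else rng.normal(0, 1, n)
        p = stats.ttest_1samp(test_data, 0).pvalue
        hits += p < alpha
    return hits / sims

print(f"same data for selection and test: {false_positive_rate(True):.3f}")
print(f"fresh data for the test:          {false_positive_rate(False):.3f}")
```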

Journal ArticleDOI
16 Mar 2018-Glossa
TL;DR: The authors evaluate within- and between-participant test-retest reliability on a wide range of measures of sentence acceptability, including Likert scales, forced-choice judgments, magnitude estimation, and a novel measure based on Thurstonian approaches in psychophysics.
Abstract: Understanding and measuring sentence acceptability is of fundamental importance for linguists, but although many measures for doing so have been developed, relatively little is known about some of their psychometric properties. In this paper we evaluate within- and between-participant test-retest reliability on a wide range of measures of sentence acceptability. Doing so allows us to estimate how much of the variability within each measure is due to factors including participant-level individual differences, sample size, response styles, and item effects. The measures examined include Likert scales, two versions of forced-choice judgments, magnitude estimation, and a novel measure based on Thurstonian approaches in psychophysics. We reproduce previous findings of high between-participant reliability within and across measures, and extend these results to a generally high reliability within individual items and individual people. Our results indicate that Likert scales and the Thurstonian approach produce the most stable and reliable acceptability measures and do so with smaller sample sizes than the other measures. Moreover, their agreement with each other suggests that the limitation of a discrete Likert scale does not impose a significant degree of structure on the resulting acceptability judgments.

25 citations
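The reliability analyses described above can be mimicked on simulated data. The following sketch is a minimal illustration, not the study's data or model: the generative settings, rating-scale bounds, and sample sizes are arbitrary assumptions. It computes between-participant test-retest reliability as the correlation of item means across two sessions, and within-participant reliability as the average per-person correlation across sessions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated Likert acceptability ratings (1-7) for two test sessions:
# a shared item effect plus trial-level noise.
n_subj, n_items = 40, 50
item_effect = rng.normal(4, 1.2, n_items)  # same "true" acceptability in both sessions

def one_session():
    noise = rng.normal(0, 1.0, (n_subj, n_items))
    return np.clip(np.round(item_effect + noise), 1, 7)

session1, session2 = one_session(), one_session()

# Between-participant reliability: correlate item means across sessions.
between = np.corrcoef(session1.mean(axis=0), session2.mean(axis=0))[0, 1]

# Within-participant reliability: each person's ratings correlated across
# sessions, then averaged over people.
within = np.mean([np.corrcoef(session1[s], session2[s])[0, 1]
                  for s in range(n_subj)])

print(f"between-participant test-retest r = {between:.2f}")
print(f"mean within-participant test-retest r = {within:.2f}")
```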


Cited by

01 Jan 2016
Handbook of Psychological Testing

1,177 citations

01 Jan 2016
Knowledge of Language: Its Nature, Origin, and Use

586 citations

Journal ArticleDOI
Marc Brysbaert
19 Jul 2019
TL;DR: In this article, the author describes reference numbers needed for the designs most often used by psychologists, including single-variable between-groups and repeated-measures designs with two and three levels, and two-factor designs involving either two repeated-measures variables or one between-groups and one repeated-measures variable (split-plot designs).
Abstract: Given that an effect size of d = .4 is a good first estimate of the smallest effect size of interest in psychological research, we already need over 50 participants for a simple comparison of two within-participants conditions if we want to run a study with 80% power. This is more than current practice. In addition, as soon as a between-groups variable or an interaction is involved, numbers of 100, 200, and even more participants are needed. As long as we do not accept these facts, we will keep on running underpowered studies with unclear results. Addressing the issue requires a change in the way research is evaluated by supervisors, examiners, reviewers, and editors. The present paper describes reference numbers needed for the designs most often used by psychologists, including single-variable between-groups and repeated-measures designs with two and three levels, two-factor designs involving two repeated-measures variables or one between-groups variable and one repeated-measures variable (split-plot design). The numbers are given for the traditional, frequentist analysis with p < .05 and for Bayesian analyses with Bayes factors larger than 10. These numbers provide researchers with a standard to determine (and justify) the sample size of an upcoming study. The article also describes how researchers can improve the power of their study by including multiple observations per condition per participant.

314 citations
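The reference numbers described in the abstract above can be approximated with standard power routines. The sketch below uses statsmodels (an assumed tool; the paper's own calculations and designs may differ) for the two simplest cases, treating d = 0.4 as the standardized effect on the difference scores for the within-participants comparison.

```python
from statsmodels.stats.power import TTestPower, TTestIndPower

# Smallest effect size of interest and conventional error rates from the
# abstract: d = 0.4, 80% power, two-sided alpha = .05.
d, power, alpha = 0.4, 0.80, 0.05

# Within-participants comparison of two conditions: a paired t-test on the
# difference scores (one-sample test against zero), with d read as d_z.
n_within = TTestPower().solve_power(effect_size=d, power=power, alpha=alpha)

# Between-groups comparison of two independent groups (sample size per group).
n_between = TTestIndPower().solve_power(effect_size=d, power=power, alpha=alpha)

print(f"within-participants design: n ~ {n_within:.0f} participants")
print(f"between-groups design:      n ~ {n_between:.0f} per group ({2 * n_between:.0f} total)")
```

These figures come out at roughly 52 participants for the within-participants design and about 100 per group for the between-groups design, consistent with the numbers quoted in the abstract.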