Journal of Memory and Language
About: Journal of Memory and Language is an academic journal published by Elsevier BV. It publishes mainly in the areas of sentence processing and recall, and its ISSN is 0749-596X. Over its lifetime the journal has published 2,190 papers, which have received 249,114 citations. The journal is also known as Memory and Language.
TL;DR: It is argued that researchers using LMEMs for confirmatory hypothesis testing should minimally adhere to the standards that have been in place for many decades, and it is shown that LMEMs generalize best when they include the maximal random effects structure justified by the design.
Abstract: Linear mixed-effects models (LMEMs) have become increasingly prominent in psycholinguistics and related areas. However, many researchers do not seem to appreciate how random effects structures affect the generalizability of an analysis. Here, we argue that researchers using LMEMs for confirmatory hypothesis testing should minimally adhere to the standards that have been in place for many decades. Through theoretical arguments and Monte Carlo simulation, we show that LMEMs generalize best when they include the maximal random effects structure justified by the design. The generalization performance of LMEMs including data-driven random effects structures strongly depends upon modeling criteria and sample size, yielding reasonable results on moderately sized samples when conservative criteria are used, but with little or no power advantage over maximal models. Finally, random-intercepts-only LMEMs used on within-subjects and/or within-items data from populations where subjects and/or items vary in their sensitivity to experimental manipulations always generalize worse than separate F1 and F2 tests, and in many cases, even worse than F1 alone. Maximal LMEMs should be the ‘gold standard’ for confirmatory hypothesis testing in psycholinguistics and beyond.
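The core point of the abstract, that subjects can vary in their sensitivity to a manipulation, can be illustrated with a small simulation. The sketch below uses only hypothetical numbers (a 50 ms condition effect, a 40 ms between-subject slope SD, 20 ms trial noise) and pure-Python random draws; it shows that per-subject effect estimates spread far more widely than trial noise alone would predict, which is exactly the variance component a random-intercepts-only model omits:

```python
import random
import statistics

random.seed(1)

FIXED_EFFECT = 50.0   # hypothetical true condition effect (ms)
SLOPE_SD = 40.0       # hypothetical between-subject variability in that effect
NOISE_SD = 20.0       # hypothetical trial-level noise
N_SUBJECTS, N_TRIALS = 30, 40

per_subject_effects = []
for _ in range(N_SUBJECTS):
    # Each subject has their own condition effect (a random slope).
    subj_slope = random.gauss(FIXED_EFFECT, SLOPE_SD)
    # Observed per-trial condition differences for this subject.
    diffs = [subj_slope + random.gauss(0, NOISE_SD) - random.gauss(0, NOISE_SD)
             for _ in range(N_TRIALS)]
    per_subject_effects.append(statistics.mean(diffs))

# The spread of per-subject effect estimates is dominated by SLOPE_SD,
# not by trial noise. A random-intercepts-only model folds this spread
# into residual error, which is what inflates Type I error rates.
between_sd = statistics.stdev(per_subject_effects)
print(f"between-subject SD of the estimated effect: {between_sd:.1f} ms")
```

With these settings the between-subject SD comes out near 40 ms, roughly ten times the SD that averaging 40 noisy trials per subject would produce on its own; a maximal model assigns this variability its own random-slope term.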
TL;DR: In this article, the authors provide an introduction to mixed-effects models for the analysis of repeated-measurement data with subjects and items as crossed random effects, and give a worked example of how to use recent software for mixed-effects modeling.
Abstract: This paper provides an introduction to mixed-effects models for the analysis of repeated measurement data with subjects and items as crossed random effects. A worked-out example of how to use recent software for mixed-effects modeling is provided. Simulation studies illustrate the advantages offered by mixed-effects analyses compared to traditional analyses based on quasi-F tests, by-subjects analyses, combined by-subjects and by-items analyses, and random regression. Applications and possibilities across a range of domains of inquiry are discussed.
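The "crossed random effects" design the abstract describes, in which every subject responds to every item, can be sketched as a data-generation step. The group sizes, intercept SDs, and grand mean below are all hypothetical illustration values, not figures from the paper:

```python
import random

random.seed(0)

# Hypothetical fully crossed design: every subject sees every item once.
N_SUBJ, N_ITEM = 8, 12
GRAND_MEAN = 600.0  # hypothetical baseline RT in ms

# Independent random intercepts for the two crossed grouping factors.
subj_intercept = {s: random.gauss(0, 50) for s in range(N_SUBJ)}
item_intercept = {i: random.gauss(0, 30) for i in range(N_ITEM)}

rows = [(s, i, GRAND_MEAN + subj_intercept[s] + item_intercept[i]
               + random.gauss(0, 25))
        for s in range(N_SUBJ) for i in range(N_ITEM)]

# 8 subjects x 12 items = 96 observations; subjects and items are crossed,
# not nested, so each observation carries one draw from each factor.
print(len(rows))
```

In a mixed-effects fit both intercept terms are estimated simultaneously (in R's lme4 notation, something like rt ~ 1 + (1|subject) + (1|item)), which is what replaces the separate by-subjects and by-items analyses the abstract compares against.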
TL;DR: In this article, a process dissociation procedure is proposed that separates the contributions of automatic and intentional processes to performance of a task, rather than equating processes with tasks.
Abstract: This paper begins by considering problems that have plagued investigations of automatic or unconscious influences of perception and memory. A process dissociation procedure that provides an escape from those problems is introduced. The process dissociation procedure separates the contributions of different types of processes to performance of a task, rather than equating processes with tasks. Using that procedure, I provide new evidence in favor of a two-factor theory of recognition memory; one factor relies on automatic processes and the other relies on intentional processes. Recollection (an intentional use of memory) is hampered when attention is divided, rather than full, at the time of test. In contrast, the use of familiarity as a basis for recognition memory judgments (an automatic use of memory) is shown to be invariant across full versus divided attention, manipulated at test. Process dissociation procedures provide a general framework for separating automatic from intentional forms of processing in a variety of domains; including perception, memory, and thought.
TL;DR: For instance, the authors found that recollection is more sensitive than familiarity to response speeding, division of attention, generation, semantic encoding, the effects of aging, and the amnestic effects of benzodiazepines, but less sensitive than familiarity to shifts in response criterion, fluency manipulations, forgetting over short retention intervals, and some perceptual manipulations.
Abstract: To account for dissociations observed in recognition memory tests, several dual-process models have been proposed that assume that recognition judgments can be based on the recollection of details about previous events or on the assessment of stimulus familiarity. In the current article, these models are examined, along with the methods that have been developed to measure recollection and familiarity. The relevant empirical literature from behavioral, neuropsychological, and neuroimaging studies is then reviewed in order to assess model predictions. Results from a variety of measurement methods, including task-dissociation and process-estimation methods, are found to lead to remarkably consistent conclusions about the nature of recollection and familiarity, particularly when ceiling effects are avoided. For example, recollection is found to be more sensitive than familiarity to response speeding, division of attention, generation, semantic encoding, the effects of aging, and the amnestic effects of benzodiazepines, but it is less sensitive than familiarity to shifts in response criterion, fluency manipulations, forgetting over short retention intervals, and some perceptual manipulations. Moreover, neuropsychological and neuroimaging results indicate that the two processes rely on partially distinct neural substrates and provide support for models that assume that recollection relies on the hippocampus and prefrontal cortex, whereas familiarity relies on regions surrounding the hippocampus. Double dissociations produced by experimental manipulations at time of test indicate that the two processes are independent at retrieval, and single dissociations produced by study manipulations indicate that they are partially independent during encoding. Recollection is similar but not identical to free recall, whereas familiarity is similar to conceptual implicit memory, but is dissociable from perceptual implicit memory. 
Finally, the results indicate that recollection reflects a threshold-like retrieval process that supports novel learning, whereas familiarity reflects a signal-detection process that can support novel learning only under certain conditions. The results verify a number of model predictions and prove useful in resolving several theoretical disagreements.
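The contrast the abstract draws between a signal-detection process and a threshold-like process corresponds to two standard ways of scoring recognition data. A minimal sketch, using hypothetical hit and false-alarm rates (the specific correction formulas below are generic textbook forms, not taken from this paper):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Familiarity modeled as equal-variance signal detection:
    sensitivity d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def threshold_estimate(hit_rate, fa_rate):
    """Recollection modeled as an all-or-none (high-threshold) process:
    the probability of crossing the retrieval threshold, corrected
    for guessing."""
    return (hit_rate - fa_rate) / (1.0 - fa_rate)

# Hypothetical recognition performance: 84% hits, 16% false alarms.
print(round(d_prime(0.84, 0.16), 2))          # continuous-strength view
print(round(threshold_estimate(0.84, 0.16), 2))  # all-or-none view
```

The two models make different predictions about ROC shape (curvilinear for signal detection, linear segments for threshold processes), which is one of the measurement methods the review draws on.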
TL;DR: This paper identifies several serious problems with the widespread use of ANOVAs for the analysis of categorical outcome variables, and introduces ordinary logit models (i.e. logistic regression), which are well-suited to analyze categorical data and offer many advantages over ANOVA.
Abstract: This paper identifies several serious problems with the widespread use of ANOVAs for the analysis of categorical outcome variables such as forced-choice variables, question-answer accuracy, choice in production (e.g. in syntactic priming research), et cetera. I show that even after applying the arcsine-square-root transformation to proportional data, ANOVA can yield spurious results. I discuss conceptual issues underlying these problems and alternatives provided by modern statistics. Specifically, I introduce ordinary logit models (i.e. logistic regression), which are well-suited to analyze categorical data and offer many advantages over ANOVA. Unfortunately, ordinary logit models do not include random-effects modeling. To address this issue, I describe mixed logit models (Generalized Linear Mixed Models for binomially distributed outcomes, Breslow and Clayton [Breslow, N. E. & Clayton, D. G. (1993). Approximate inference in generalized linear mixed models. Journal of the American Statistical Association 88 (421), 9–25]), which combine the advantages of ordinary logit models with the ability to account for random subject and item effects in one step of analysis. Throughout the paper, I use a psycholinguistic data set to compare the different statistical methods.
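The two transformations the abstract contrasts can be compared directly. The sketch below (with arbitrary illustration proportions) shows that a fixed 4-point accuracy difference maps onto very different distances depending on where it sits on the scale, and that the logit scale, on which logistic and mixed logit models operate, stretches differences near the 0/1 boundaries far more strongly than the arcsine-square-root transform does:

```python
import math

def arcsine_sqrt(p):
    """Arcsine-square-root transform traditionally applied before ANOVA."""
    return math.asin(math.sqrt(p))

def logit(p):
    """Log-odds transform: the natural scale for binomially
    distributed outcomes."""
    return math.log(p / (1.0 - p))

# The same 4-point difference in accuracy, at mid-scale vs near ceiling.
mid_arc  = arcsine_sqrt(0.52) - arcsine_sqrt(0.48)
edge_arc = arcsine_sqrt(0.99) - arcsine_sqrt(0.95)
edge_log = logit(0.99) - logit(0.95)

print(round(mid_arc, 3), round(edge_arc, 3), round(edge_log, 3))
```

Near ceiling, the 0.95 vs 0.99 difference corresponds to a fivefold change in error odds, which the logit scale reflects and the arcsine scale largely flattens; this mismatch between the analysis scale and the data-generating scale is one source of the spurious ANOVA results the paper demonstrates.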