Showing papers in "Journal of Memory and Language in 2009"
TL;DR: The authors found that content words are shorter when more frequent and shorter when repeated, while function words are not so affected; after controlling for frequency and predictability, function words have shorter pronunciations, and both content and function words are strongly affected by predictability from the word following them.
Abstract: In a regression study of conversational speech, we show that frequency, contextual predictability, and repetition have separate contributions to word duration, despite their substantial correlations. We also found that content- and function-word durations are affected differently by their frequency and predictability. Content words are shorter when more frequent, and shorter when repeated, while function words are not so affected. Function words have shorter pronunciations, after controlling for frequency and predictability. While both content and function words are strongly affected by predictability from the word following them, sensitivity to predictability from the preceding word is largely limited to very frequent function words. The results support the view that content and function words are accessed differently in production. We suggest a lexical-access-based model of our results, in which frequency or repetition leads to shorter or longer word durations by causing faster or slower lexical access, mediated by a general mechanism that coordinates the pace of higher-level planning and the execution of the articulatory plan.
TL;DR: The retrieval effort hypothesis, as discussed by the authors, states that difficult but successful retrievals are better for memory than easier successful retrievals; results showed that as the difficulty of retrieval during practice increased, final test performance increased.
Abstract: Although substantial research has demonstrated the benefits of retrieval practice for promoting memory, very few studies have tested theoretical accounts of this effect. Across two experiments, we tested a hypothesis that follows from the desirable difficulty framework [Bjork, R. A. (1994). Memory and metamemory considerations in the training of human beings. In J. Metcalfe, A. Shimamura, (Eds.), Metacognition: Knowing about knowing (pp. 185–205). Cambridge, MA: MIT Press], the retrieval effort hypothesis, which states that difficult but successful retrievals are better for memory than easier successful retrievals. To test the hypothesis, we set up conditions under which retrieval during practice was successful but differentially difficult. Interstimulus interval (ISI) and criterion level (number of times items were required to be correctly retrieved) were manipulated to vary the difficulty of retrieval. In support of the retrieval effort hypothesis, results indicated that as the difficulty of retrieval during practice increased, final test performance increased. Longer versus shorter ISIs led to more difficulty retrieving items, but higher levels of final test performance. Additionally, as criterion level increased, retrieval was less difficult, and diminishing returns for final test performance were observed.
TL;DR: The authors showed that attraction effects are limited to ungrammatical sentences, which would be unexpected if the representation of subject number were inherently prone to error, and argued that agreement attraction in comprehension instead reflects a cue-based retrieval mechanism that is subject to retrieval errors.
Abstract: Much work has demonstrated so-called attraction errors in the production of subject–verb agreement (e.g., ‘The key to the cabinets are on the table’, [Bock, J. K., & Miller, C. A. (1991). Broken agreement. Cognitive Psychology, 23, 45–93]), in which a verb erroneously agrees with an intervening noun. Six self-paced reading experiments examined the online mechanisms underlying the analogous attraction effects that have been shown in comprehension; namely reduced disruption for subject–verb agreement violations when these ‘attractor’ nouns intervene. One class of theories suggests that these effects are rooted in faulty representation of the number of the subject, while another class of theories suggests instead that such effects arise in the process of re-accessing subject number at the verb. Two main findings provide evidence against the first class of theories. First, attraction also occurs in relative clause configurations in which the attractor noun does not intervene between subject and verb and is not in a direct structural relationship with the subject head (e.g., ‘The drivers who the runner wave to each morning’). Second, we observe a ‘grammatical asymmetry’: attraction effects are limited to ungrammatical sentences, which would be unexpected if the representation of subject number were inherently prone to error. We argue that agreement attraction in comprehension instead reflects a cue-based retrieval mechanism that is subject to retrieval errors. The grammatical asymmetry can be accounted for under one implementation that we propose, or if the mechanism is only called upon when the predicted agreement features fail to be instantiated on the verb.
TL;DR: This paper used hierarchical regression techniques to examine the effects of standard variables (phonological onsets, stress pattern, length, orthographic N, phonological N, word frequency) and additional variables (number of syllables, feedforward and feedback phonological consistency, novel orthographic and phonological similarity measures, semantics) on the pronunciation and lexical decision latencies of 6115 monomorphemic multisyllabic words.
Abstract: The visual word recognition literature has been dominated by the study of monosyllabic words in factorial experiments, computational models, and megastudies. However, it is not yet clear whether the behavioral effects reported for monosyllabic words generalize reliably to multisyllabic words. Hierarchical regression techniques were used to examine the effects of standard variables (phonological onsets, stress pattern, length, orthographic N, phonological N, word frequency) and additional variables (number of syllables, feedforward and feedback phonological consistency, novel orthographic and phonological similarity measures, semantics) on the pronunciation and lexical decision latencies of 6115 monomorphemic multisyllabic words. These predictors accounted for 61.2% and 61.6% of the variance in pronunciation and lexical decision latencies, respectively, higher than the estimates reported by previous monosyllabic studies. The findings we report represent a well-specified set of benchmark phenomena for constraining nascent multisyllabic models of English word recognition.
TL;DR: This article explored whether chunking in short-term memory for verbal materials depends on attentionally limited executive processes, concluding that executive processes are not crucial for the sentence chunking advantage, and discusses implications for the episodic buffer and other theoretical accounts of working memory and chunking.
Abstract: A series of experiments explored whether chunking in short-term memory for verbal materials depends on attentionally limited executive processes. Secondary tasks were used to disrupt components of working memory and chunking was indexed by the sentence superiority effect, whereby immediate recall is better for sentences than word lists. To facilitate comparisons and maximise demands on working memory, materials were constrained by re-sampling a small set of words. Experiment 1 confirmed a reliable sentence superiority effect with constrained materials. Experiment 2 showed that secondary tasks of concurrent articulation and visual choice reaction impaired recall, but did not remove or reduce the sentence superiority effect. This was also the case with visual and verbal n-back concurrent tasks (Experiment 3), and with concurrent backward counting (Experiment 4). Backward counting did however interact with mode of presenting the memory materials, suggesting that our failure to find interactions between concurrent task and materials was not attributable to our methodology. We conclude that executive processes are not crucial for the sentence chunking advantage and we discuss implications for the episodic buffer and other theoretical accounts of working memory and chunking.
TL;DR: This article investigated the role of experience in perception and representation of cross-dialect variation in spoken word recognition and found that experience strongly affects a listener's ability to recognize and represent spoken words.
Abstract: The task of recognizing spoken words is notoriously difficult. Once dialectal variation is considered, the difficulty of this task increases. When living in a new dialect region, however, processing difficulties associated with dialectal variation dissipate over time. Through a series of primed lexical decision tasks (form priming, semantic priming, and long-term repetition priming), we examine the general issue of dialectal variation in spoken word recognition, while investigating the role of experience in perception and representation. The main questions we address are: (1) how are cross-dialect variants recognized and stored, and (2) how are these variants accommodated by listeners with different levels of exposure to the dialect? Three claims are made based on the results: (1) dialect production is not always representative of dialect perception and representation, (2) experience strongly affects a listener’s ability to recognize and represent spoken words, and (3) there is a general benefit for variants that are not regionally-marked.
TL;DR: The data show that, regardless of lexical status, attempts at semantic access for orthographic neighbors of expected words are facilitated relative to the processing of orthographically unrelated items.
Abstract: Two related questions critical to understanding the predictive processes that come online during sentence comprehension are (1) what information is included in the representation created through prediction and (2) at what functional stage does top-down, predicted information begin to affect bottom-up word processing? We investigated these questions by recording event-related potentials (ERPs) as participants read sentences that ended with expected words or with unexpected items (words, pseudowords, or illegal strings) that were either orthographically unrelated to the expected word or were one of its orthographic neighbors. The data show that, regardless of lexical status, attempts at semantic access (N400) for orthographic neighbors of expected words are facilitated relative to the processing of orthographically unrelated items. Our findings support a view of sentence processing wherein orthographically organized information is brought online by prediction and interacts with input prior to any filter on lexical status.
TL;DR: In this article, the authors evaluated the interplay between two mechanisms of maintenance of verbal information in working memory, namely articulatory rehearsal as described in Baddeley's model, and attentional refreshing as postulated in Barrouillet and Camos's Time-Based Resource-Sharing (TBRS) model.
Abstract: The present study evaluated the interplay between two mechanisms of maintenance of verbal information in working memory, namely articulatory rehearsal as described in Baddeley’s model, and attentional refreshing as postulated in Barrouillet and Camos’s Time-Based Resource-Sharing (TBRS) model. In four experiments using a complex span paradigm, we manipulated the degree of articulatory suppression and the attentional load of the processing component to affect orthogonally the two mechanisms of maintenance. In line with previous neurophysiological evidence reported in the literature, behavioral results suggest that articulatory rehearsal and attentional refreshing are two independent mechanisms that operate jointly on the maintenance of verbal information. It is suggested that these two mechanisms affect different features that result from various levels of encoding. Moreover, time parameters should be carefully considered in any study on maintenance of verbal information in working memory.
TL;DR: Chan et al., as discussed by the authors, showed that retrieval practice can sometimes improve later recall of nontested material, a phenomenon termed retrieval-induced facilitation, in which initially nontested material benefits from prior testing of related material.
Abstract: Retrieval practice can enhance long-term retention of the tested material (the testing effect), but it can also impair later recall of the nontested material – a phenomenon known as retrieval-induced forgetting (Anderson, M. C., Bjork, R. A., & Bjork, E. L. (1994). Remembering can cause forgetting: retrieval dynamics in long-term memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20(5), 1063–1087). Recent research, however, has shown that retrieval practice can sometimes improve later recall of the nontested material – a phenomenon termed retrieval-induced facilitation (Chan, J. C. K., McDermott, K. B., & Roediger, H. L. (2006). Retrieval-induced facilitation: initially nontested material can benefit from prior testing of related material. Journal of Experimental Psychology: General, 135, 553–571). What drives these different effects? Two experiments were designed to examine the conditions under which retrieval induces forgetting and facilitation. Two variables, the level of integration invoked during encoding and the length of delay between retrieval practice and final test, were revealed as critical factors in determining whether testing facilitated or hindered later retrieval of the nontested information. A text processing framework is advanced to account for these findings.
TL;DR: In this paper, the effects of associative strength and gist relations on rates of children's and adults' true and false memories were examined in three experiments, and the results showed that true recall was higher than false recall for all ages.
Abstract: The effects of associative strength and gist relations on rates of children’s and adults’ true and false memories were examined in three experiments. Children aged 5–11 and university-aged adults participated in a standard Deese/Roediger–McDermott false memory task using DRM and category lists in two experiments and in the third, children memorized lists that differed in associative strength and semantic cohesion. In the first two experiments, half of the participants were primed before list presentation with gist-relevant cues and the results showed that: (1) both true and false memories increased with age, (2) true recall was higher than false recall for all ages, (3) at all ages, false memory rates were determined by backward associative strength, and (4) false memories varied predictably with changes in associative strength but were unaffected by gist manipulations (category structure or gist priming). In the third experiment, both gist and associative strength were varied orthogonally and the results showed that regardless of age, children’s (5) true recall was affected by gist manipulations (semantic cohesion) and (6) false recall was affected by backward associative strength. These findings are discussed in the context of models of false memory illusions and continuities in memory development more generally.
TL;DR: The results demonstrate an early, on-line partner-specific effect for the interpretation of entrained terms, as well as preliminary evidence for an early, partner-specific effect for new terms, consistent with a large body of work demonstrating that the language processing system uses a rich source of contextual and pragmatic representations to guide on-line processing decisions.
Abstract: In dialog settings, conversational partners converge on similar names for referents. These lexically entrained terms [Garrod, S., & Anderson, A. (1987). Saying what you mean in dialog: A study in conceptual and semantic co-ordination. Cognition, 27, 181–218] are part of the common ground between the particular individuals who established the entrained term [Brennan, S. E., & Clark, H. H. (1996). Conceptual pacts and lexical choice in conversation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22, 1482–1493], and are thought to be encoded in memory with a partner-specific cue. Thus far, analyses of the time-course of interpretation suggest that partner-specific information may not constrain the initial interpretation of referring expressions [Barr, D. J., & Keysar, B. (2002). Anchoring comprehension in linguistic precedents. Journal of Memory and Language, 46, 391–418; Kronmuller, E., & Barr, D. J. (2007). Perspective-free pragmatics: Broken precedents and the recovery-from-preemption hypothesis. Journal of Memory and Language, 56, 436–455]. However, these studies used non-interactive paradigms, which may limit the use of partner-specific representations. This article presents the results of three eye-tracking experiments. Experiment 1a used an interactive conversation methodology in which the experimenter and participant jointly established entrained terms for various images. On critical trials, the same experimenter, or a new experimenter described a critical image using an entrained term, or a new term. The results demonstrated an early, on-line partner-specific effect for interpretation of entrained terms, as well as preliminary evidence for an early, partner-specific effect for new terms. Experiment 1b used a non-interactive paradigm in which participants completed the same task by listening to image descriptions recorded during Experiment 1a; the results showed that partner-specific effects were eliminated. 
Experiment 2 replicated the partner-specific findings of Experiment 1a with an interactive paradigm and scenes that contained previously unmentioned images. The results suggest that partner-specific interpretation is most likely to occur in interactive dialog settings; the number of critical trials and stimulus characteristics may also play a role. The results are consistent with a large body of work demonstrating that the language processing system uses a rich source of contextual and pragmatic representations to guide on-line processing decisions.
TL;DR: This paper found that the integration of a compound's constituents draws on both linguistic and conceptual knowledge about the constituents and the compound word; ease of processing is affected by the lemma frequency of the whole compound, as well as by each constituent's positional family frequency.
Abstract: Although previous research has suggested that the processing of compound words involves the integration of the constituents, not much is known about what integration entails. Three experiments suggest that integration draws on both linguistic and conceptual knowledge about the constituents and the compound word; ease of processing (as reflected by RT in a sense/nonsense judgment task) is affected by the lemma frequency of the whole compound, as well as by each constituent’s positional family frequency. In addition, the data demonstrate that a compound’s constituents are not just conjointly activated but are bound together in a particular way; responses to a compound (e.g., snowball) were faster when the compound was preceded by a compound using the same relational structure (e.g., snowfort—MADE OF) than when preceded by a compound using a different relational structure (e.g., snowshovel—FOR). This finding suggests that the conceptual representation of a compound word might be based on a relational structure.
TL;DR: Effects of word initial VOT on lexical garden-path recovery are inconsistent with inhibition at the phoneme level and support models of spoken word recognition in which sub-phonetic detail is preserved throughout the processing system.
Abstract: Spoken word recognition shows gradient sensitivity to within-category voice onset time (VOT), as predicted by several current models of spoken word recognition, including TRACE (McClelland, J., & Elman, J. (1986). The TRACE model of speech perception. Cognitive Psychology, 18, 1–86). It remains unclear, however, whether this sensitivity is short-lived or whether it persists over multiple syllables. VOT continua were synthesized for pairs of words like barricade and parakeet, which differ in the voicing of their initial phoneme, but otherwise overlap for at least four phonemes, creating an opportunity for “lexical garden-paths” when listeners encounter the phonemic information consistent with only one member of the pair. Simulations established that phoneme-level inhibition in TRACE eliminates sensitivity to VOT too rapidly to influence recovery. However, in two Visual World experiments, look-contingent and response-contingent analyses demonstrated effects of word initial VOT on lexical garden-path recovery. These results are inconsistent with inhibition at the phoneme level and support models of spoken word recognition in which sub-phonetic detail is preserved throughout the processing system.
TL;DR: A series of four experiments used a two-choice response time (RT) paradigm to investigate how the latency of correct agreement decisions is modulated by the presence of a number attractor, and to investigate the relative latency of errors and correct agreement decisions.
Abstract: Speakers frequently make subject–verb number agreement errors in the presence of a local noun with a different number from the head of the subject phrase. A series of four experiments used a two-choice response time (RT) paradigm to investigate how the latency of correct agreement decisions is modulated by the presence of a number attractor, and to investigate the relative latency of errors and correct agreement decisions. The presence of a number attractor reliably increased correct RT, and the size of this RT effect was consistently larger in conditions that also had larger effects on accuracy. Number attraction errors, however, were similar in RT to correct responses in the same experimental condition. These results are interpreted as supporting a model according to which an intervening number attractor makes the agreement computation process more difficult in general [Eberhard, K. M., Cutting, J. C., & Bock, K. (2005). Making sense of syntax: Number agreement in sentence production. Psychological Review 112, 531–559], with errors arising probabilistically. However, attraction from a non-intervening noun resulted in only mildly inflated correct RT, but dramatically inflated error RT, suggesting that non-intervening attraction errors may reflect confusion about the structure of the subject phrase.
TL;DR: The authors argue that the semantic analysis of task-irrelevant stimuli is modulated by feature-specific attention allocation, finding that semantic priming of pronunciation responses depends upon the extent to which participants focused their attention upon specific semantic stimulus dimensions.
Abstract: We argue that the semantic analysis of task-irrelevant stimuli is modulated by feature-specific attention allocation. In line with this hypothesis, we found semantic priming of pronunciation responses to depend upon the extent to which participants focused their attention upon specific semantic stimulus dimensions. In Experiment 1, we examined the impact of feature-specific attention allocation upon affective priming. In Experiment 2, we examined the impact of feature-specific attention allocation upon nonaffective semantic priming. In Experiment 3, affective relatedness and nonaffective semantic relatedness were manipulated orthogonally under conditions that either promoted selective attention for affective stimulus information or selective attention for nonaffective semantic stimulus information. In each of these experiments, significant semantic priming emerged only for stimulus information that was selectively attended to. Implications for the hypothesis that the extraction of word meaning proceeds in an automatic, unconditional fashion are discussed.
TL;DR: The relationship between nonword repetition ability and vocabulary size and vocabulary learning has been a topic of intense research interest and investigation over the last two decades, following the demonstration that nonword repetition accuracy is predictive of vocabulary size, as discussed in this paper.
Abstract: The relationship between nonword repetition ability and vocabulary size and vocabulary learning has been a topic of intense research interest and investigation over the last two decades, following the demonstration that nonword repetition accuracy is predictive of vocabulary size (Gathercole & Baddeley, 1989). However, the nature of this relationship is not well understood. One prominent account posits that phonological short-term memory (PSTM) is a causal determinant both of nonword repetition ability and of phonological vocabulary learning, with the observed correlation between the two reflecting the effect of this underlying third variable (e.g., Baddeley, Gathercole, & Papagno, 1998). An alternative account proposes the opposite causality: that it is phonological vocabulary size that causally determines nonword repetition ability (e.g., Snowling, Chiat, & Hulme, 1991). We present a theory of phonological vocabulary learning, instantiated as a computational model. The model offers a precise account of the construct of PSTM, of performance in the nonword repetition task, of novel word form learning, and of the relationship between all of these. We show through simulation not only that PSTM causally affects both nonword repetition accuracy and phonological vocabulary size, but also that phonological vocabulary size causally affects nonword repetition ability. The plausibility of the model is supported by the fact that its nonword repetition accuracy displays effects of phonotactic probability and of nonword length, which have been taken as evidence for causal effects on nonword repetition accuracy of phonological vocabulary knowledge and PSTM, respectively. Thus the model makes explicit how the causal links posited by the two theoretical perspectives are both valid, in the process reconciling the two perspectives, and indicating that an opposition between them is unnecessary.
TL;DR: This article showed that TP-based learning leaves participants no more familiar with items heard 600 times than with "phantom-words" never heard at all, provided the phantom-words have the same statistical structure as the occurring items; participants were even more familiar with phantom-words than with frequent syllable combinations.
Abstract: Word-segmentation, that is, the extraction of words from fluent speech, is one of the first problems language learners have to master. It is generally believed that statistical processes, in particular those tracking “transitional probabilities” (TPs), are important to word-segmentation. However, there is evidence that word forms are stored in memory formats differing from those that can be constructed from TPs, i.e. in terms of the positions of phonemes and syllables within words. In line with this view, we show that TP-based processes leave learners no more familiar with items heard 600 times than with “phantom-words” not heard at all if the phantom-words have the same statistical structure as the occurring items. Moreover, participants are more familiar with phantom-words than with frequent syllable combinations. In contrast, minimal prosody-like perceptual cues allow learners to recognize actual items. TPs may well signal co-occurring syllables; this, however, does not seem to lead to the extraction of word-like units. We review other, in particular prosodic, cues to word-boundaries which may allow the construction of positional memories while not requiring language-specific knowledge, and suggest that their contributions to word-segmentation need to be reassessed.
TL;DR: A connectionist model is presented that learns the print to sound mappings of Chinese characters using the same functional architecture and learning rules that have been applied to English, and predicts an interaction between item frequency and print-to-sound consistency analogous to what has been found for English.
Abstract: Many theoretical models of reading assume that different writing systems require different processing assumptions. For example, it is often claimed that print-to-sound mappings in Chinese are not represented or processed sub-lexically. We present a connectionist model that learns the print to sound mappings of Chinese characters using the same functional architecture and learning rules that have been applied to English. The model predicts an interaction between item frequency and print-to-sound consistency analogous to what has been found for English, as well as a language-specific regularity effect particular to Chinese. Behavioral naming experiments using the same test items as the model confirmed these predictions. Corpus properties and the analyses of internal representations that evolved over training revealed that the model was able to capitalize on information in "phonetic components" - sub-lexical structures of variable size that convey probabilistic information about pronunciation. The results suggest that adult reading performance across very different writing systems may be explained as the result of applying the same learning mechanisms to the particular input statistics of writing systems shaped by both culture and the exigencies of communicating spoken language in a visual medium.
TL;DR: Detailed analysis of the timing of adults' and children's eye movements provided clear evidence for incremental interpretation of the speech signal, and the results demonstrate accurate encoding of consonants even in words children cannot yet say.
Abstract: Previous tests of toddlers’ phonological knowledge of familiar words using word recognition tasks have examined syllable onsets but not word-final consonants (codas). However, there are good reasons to suppose that children’s knowledge of coda consonants might be less complete than their knowledge of onset consonants. To test this hypothesis, the present study examined 14–22-month-old children’s knowledge of the phonological forms of familiar words by measuring their comprehension of correctly pronounced and mispronounced instances of those words using a visual fixation task. Mispronunciations substituted onset or coda consonants. Adults were tested in the same task for comparison with children. Children and adults fixated named targets more upon hearing correct pronunciations than upon hearing mispronunciations, whether those mispronunciations involved the word’s initial or final consonant. In addition, detailed analysis of the timing of adults’ and children’s eye movements provided clear evidence for incremental interpretation of the speech signal. Children’s responses were slower and less accurate overall, but children and adults showed nearly identical temporal effects of the placement of phonological substitutions. The results demonstrate accurate encoding of consonants even in words children cannot yet say.
TL;DR: This paper examined how semantic convergence affects the centers and boundaries of lexical categories for common household objects for Dutch-French bilinguals and found evidence for converging category centers for bilinguals.
Abstract: Bilinguals’ lexical mappings for their two languages have been found to converge toward a common naming pattern. The present paper investigates in more detail how semantic convergence is manifested in bilingual lexical knowledge. We examined how semantic convergence affects the centers and boundaries of lexical categories for common household objects for Dutch–French bilinguals. We found evidence for converging category centers for bilinguals: (1) correlations were higher between their typicality ratings for roughly corresponding categories in the two languages than between typicality ratings of monolinguals in each language, and (2) in a geometrical representation, category centers derived from their naming data in the two languages were situated closer to each other than were the corresponding monolingual category centers. We also found evidence for less complex category boundaries for bilinguals: (1) bilinguals needed fewer dimensions than monolinguals to separate their categories linearly and (2) fewer violations of similarity-based naming were observed for bilinguals than for monolinguals. Implications for theories of the bilingual lexicon are discussed.
TL;DR: The authors presented a learning-based account of word order biases in the form of a connectionist model of syntax acquisition that can learn the distinct grammatical properties of English and Japanese while, at the same time, accounting for the cross-linguistic variability in processing biases in sentence production.
Abstract: Languages differ from one another and must therefore be learned. Processing biases in word order can also differ across languages. For example, heavy noun phrases tend to be shifted to late sentence positions in English, but to early positions in Japanese. Although these language differences suggest a role for learning, most accounts of these biases have focused on processing factors. This paper presents a learning-based account of these word order biases in the form of a connectionist model of syntax acquisition that can learn the distinct grammatical properties of English and Japanese while, at the same time, accounting for the cross-linguistic variability in processing biases in sentence production. This account demonstrates that the incremental nature of sentence processing can have an important effect on the representations that are learned in different languages.
TL;DR: In this paper, the authors provide an information-theoretical measure of the divergence between the frequency distributions of two paradigms to which a word simultaneously belongs: the paradigm of the stem and the more general paradigm of the nominal class in which the stem is embedded.
Abstract: In this study, we investigate the relevance of inflectional paradigms and inflectional classes for lexical processing. We provide an information-theoretical measure of the divergence in the frequency distributions of two of the paradigms to which a word simultaneously belongs: the paradigm of the stem and the more general paradigm of the nominal class in which the stem is embedded. We show that after controlling for other variables, this measure is positively correlated with response latencies and error counts in a visual lexical decision experiment in Serbian. We interpret these results as a trace of the simultaneous influence on lexical processing of both the stem and the inflectional paradigms.
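The abstract does not spell out the divergence measure; a standard information-theoretic choice for comparing two frequency distributions is relative entropy (Kullback–Leibler divergence). A minimal sketch under that assumption, with hypothetical counts for a stem's inflected forms versus the pooled counts of its inflectional class:

```python
import math
from collections import Counter

def relative_entropy(p_counts, q_counts):
    """Kullback-Leibler divergence D(P || Q), in bits, between two
    frequency distributions given as raw counts over the same outcomes."""
    total_p = sum(p_counts.values())
    total_q = sum(q_counts.values())
    d = 0.0
    for outcome, count in p_counts.items():
        p = count / total_p
        q = q_counts[outcome] / total_q
        d += p * math.log2(p / q)
    return d

# Hypothetical counts: inflected forms of one stem (its paradigm)
# vs. pooled form counts for its whole inflectional class.
stem = Counter({"nom.sg": 50, "gen.sg": 20, "dat.sg": 5})
cls = Counter({"nom.sg": 4000, "gen.sg": 3000, "dat.sg": 3000})
print(round(relative_entropy(stem, cls), 3))  # larger values = greater divergence
```

A stem whose paradigm distribution matches its class distribution yields a divergence of zero; on the study's account, larger divergence should go with longer lexical-decision latencies and more errors.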
TL;DR: In this paper, the authors tested whether learners exploit subsegmental phonological features and natural classes, showing that trained participants generalized a vowel-harmony pattern to novel segments and novel suffixes, evidence for feature-based learning.
Abstract: Abstract representations such as subsegmental phonological features play such a vital role in explanations of phonological processes that many assume that these representations play an equally prominent role in the learning process. This assumption is tested in three artificial grammar experiments involving a mini language with morpho-phonological alternations based on back vowel harmony. In Experiments 1 and 2, adult participants were trained using positive data from four vowels in a six-vowel inventory: the two remaining vowels appeared at test only. If participants use subsegmental phonological features and natural classes for learning, they should generalize to the novel test segments. Results support a subsegmental feature-based learning strategy that makes use of phonetic information and knowledge of phonological principles. A third experiment (Experiment 3) tests for generalizations to novel suffixes, providing further evidence for the generality of learning.
TL;DR: It is shown that NoF effects are carried by shared visual form and surface, encyclopedic, tactile, and taste knowledge, and a decision-making account is proposed, rather than one based on the computation of word meaning.
Abstract: When asked to list semantic features for concrete concepts, participants list many features for some concepts and few for others. Concepts with many semantic features are processed faster in lexical and semantic decision tasks (Pexman, Holyk, & Monfils, 2003; Pexman, Lupker, & Hino, 2002). Using both lexical and concreteness decision tasks, we provided further insight into these number-of-features (NoF) effects. We began by replicating the effect using a larger and better controlled set of items. We then investigated the relationship between NoF and feature distinctiveness and found that features shared by numerous concrete concepts facilitate decisions to a greater extent than do distinctive features. Finally, we showed that NoF effects are carried by shared visual form and surface, encyclopedic, tactile, and taste knowledge. We propose a decision-making account of these results, rather than one based on the computation of word meaning.
TL;DR: The authors found that the joint effects of semantic priming and word frequency are critically dependent upon differences in the vocabulary knowledge of the participants: additive effects of the two variables were observed, in means and in RT distributional analyses, in participants with more vocabulary knowledge, while interactive effects were observed in participants with less vocabulary knowledge.
Abstract: Word frequency and semantic priming effects are among the most robust effects in visual word recognition, and it has been generally assumed that these two variables produce interactive effects in lexical decision performance, with larger priming effects for low-frequency targets. The results from four lexical decision experiments indicate that the joint effects of semantic priming and word frequency are critically dependent upon differences in the vocabulary knowledge of the participants. Specifically, across two Universities, additive effects of the two variables were observed in means, and in RT distributional analyses, in participants with more vocabulary knowledge, while interactive effects were observed in participants with less vocabulary knowledge. These results are discussed with reference to [Borowsky, R., & Besner, D. (1993). Visual word recognition: A multistage activation model. Journal of Experimental Psychology: Learning, Memory, and Cognition, 19, 813–840] multistage account and [Plaut, D. C., & Booth, J. R. (2000). Individual and developmental differences in semantic priming: Empirical and computational support for a single-mechanism account of lexical processing. Psychological Review, 107, 786–823] single-mechanism model. In general, the findings are also consistent with a flexible lexical processing system that optimizes performance based on processing fluency and task demands.
TL;DR: In this article, the authors investigated the eye movements of Thai-English bilinguals when reading both Thai and English with and without interword spaces, in comparison with English monolinguals.
Abstract: The study investigated the eye movements of Thai–English bilinguals when reading both Thai and English with and without interword spaces, in comparison with English monolinguals. Thai is an alphabetic orthography without interword spaces. Participants read sentences with high and low frequency target words embedded in same sentence frames with and without interword spaces. Interword spaces had a selective effect on reading in Thai, as they facilitated word recognition, but did not affect eye guidance and lexical segmentation. Initial saccade landing positions were similar in spaced and unspaced text. As expected, removal of spaces severely disrupted reading in English, as reflected by the eye movement measures, in both bilinguals and monolinguals. Here, initial landing positions were significantly nearer the beginning of the target words when reading unspaced rather than spaced text. Effects were more accentuated in the bilinguals. In sum, results from reading in Thai give qualified support for a facilitatory function of interword spaces.
TL;DR: In a self-paced reading experiment and an eyetracking experiment, verb bias effects are demonstrated in sentences with simple structures that should require no reanalysis, and thus provide evidence that the combinatorial properties of individual words influence the earliest stages of sentence comprehension.
Abstract: Constraint-based lexical models of language processing assume that readers resolve temporary ambiguities by relying on a variety of cues, including particular knowledge of how verbs combine with nouns. Previous experiments have demonstrated verb bias effects only in structurally complex sentences, and have been criticized on the grounds that such effects could be due to a rapid reanalysis stage in a two-stage modular processing system. In a self-paced reading experiment and an eyetracking experiment, we demonstrate verb bias effects in sentences with simple structures that should require no reanalysis, and thus provide evidence that the combinatorial properties of individual words influence the earliest stages of sentence comprehension.
TL;DR: Using both list-method and item-method directed forgetting with verbal tasks (VTs) and subject-performed tasks (SPTs), the authors found greater item-method directed forgetting for VTs than SPTs, with enhancement for SPTs confined to the primacy region.
Abstract: Performing action phrases (subject-performed tasks, SPTs) leads to better memory than verbal learning instructions (verbal tasks, VTs). In Experiments 1–3, the list-method directed forgetting design produced equivalent directed forgetting impairment for VTs and SPTs; however, directed forgetting enhancement emerged only for VTs, but not SPTs. Serial position analyses revealed that both item types suffered equivalent forgetting across serial positions, but enhancement was evident mostly in the first half of List 2. Experiment 4 used the item-method of directed forgetting and obtained greater directed forgetting for VTs than SPTs. A remember-all baseline group allowed estimating the impairment for to-be-forgotten (TBF) items and enhancement for to-be-remembered (TBR) items. Serial position analyses showed greater impairment for TBF items from the beginning of the list than elsewhere in the list. Directed forgetting enhancement for TBR items occurred throughout the list for VTs, but only in the primacy region for SPTs. Overall, dissociations across the list-method and item-method studies with SPTs suggest that the two methods have different underlying mechanisms. Furthermore, dissociations obtained with SPTs within list-method studies provide support for the dual-factor directed forgetting account and challenge the single-factor accounts.
TL;DR: The authors provide evidence for lexical differentiation alongside lexical entrainment: speakers reuse prior reference phrases for objects and contrast new referents against the set of previously established referential precedents.
Abstract: Speakers reuse prior references to objects when choosing reference phrases, a phenomenon known as lexical entrainment. One explanation is that speakers want to maintain a set of previously established referential precedents. Speakers may also contrast any new referents against this previously established set, thereby avoiding applying the same reference phrase to refer to different referents, a complementary phenomenon I call lexical differentiation. This study provides evidence for lexical differentiation in the context of lexical entrainment. Both phenomena are present when speakers and addressees interact, when speakers imagine addressees, and when speakers simply name objects. This indicates that lexical entrainment and lexical differentiation may be products of speaker-centered processes. However, the magnitudes of these effects differ when speakers have different audience demands, indicating that audience-centered processes may also be involved.
TL;DR: The present study tests the limits of this phonological inference account by examining how listeners process for the first time a pronunciation variant of a newly learned word, demonstrating that lexical processing is necessary for variant recognition.
Abstract: One account of how pronunciation variants of spoken words (e.g., center pronounced "senner" or "sennah") are recognized is that sublexical processes use information about variation in the same phonological environments to recover the intended segments (Gaskell & Marslen-Wilson, 1998). The present study tests the limits of this phonological inference account by examining how listeners process for the first time a pronunciation variant of a newly learned word. Recognition of such a variant should occur as long as it possesses the phonological structure that legitimizes the variation. Experiments 1 and 2 identify a phonological environment that satisfies the conditions necessary for a phonological inference mechanism to be operational. Using a word-learning paradigm, Experiments 3 through 5 show that inference alone is not sufficient for generalization but could facilitate it, and that one condition that leads to generalization is meaningful exposure to the variant in an overheard conversation, demonstrating that lexical processing is necessary for variant recognition.