Journal ArticleDOI

The broth in my brother's brothel: morpho-orthographic segmentation in visual word recognition.

TL;DR: Results showed significant and equivalent masked priming effects in cases in which primes and targets appeared to be morphologically related, and priming in these conditions could be distinguished from nonmorphological form priming.
Abstract: Much research suggests that words comprising more than one morpheme are represented in a “decomposed” manner in the visual word recognition system. In the research presented here, we investigate what information is used to segment a word into its morphemic constituents and, in particular, whether semantic information plays a role in that segmentation. Participants made visual lexical decisions to stem targets preceded by masked primes sharing (1) a semantically transparent morphological relationship with the target (e.g., cleaner-CLEAN), (2) an apparent morphological relationship but no semantic relationship with the target (e.g., corner-CORN), and (3) a nonmorphological form relationship with the target (e.g., brothel-BROTH). Results showed significant and equivalent masked priming effects in cases in which primes and targets appeared to be morphologically related, and priming in these conditions could be distinguished from nonmorphological form priming. We argue that these findings suggest a level of representation at which apparently complex words are decomposed on the basis of their morpho-orthographic properties. Implications of these findings for computational models of reading are discussed.


Citations
Journal ArticleDOI
TL;DR: The size of the corpus, the language register on which the corpus is based, and the definition of the frequency measure were investigated, finding that lemma frequencies are not superior to word form frequencies in English and that a measure of contextual diversity is better than a measure based on raw frequency of occurrence.
Abstract: Word frequency is the most important variable in research on word processing and memory. Yet, the main criterion for selecting word frequency norms has been the availability of the measure, rather than its quality. As a result, much research is still based on the old Kucera and Francis frequency norms. By using the lexical decision times of recently published megastudies, we show how bad this measure is and what must be done to improve it. In particular, we investigated the size of the corpus, the language register on which the corpus is based, and the definition of the frequency measure. We observed that corpus size is of practical importance for small sizes (depending on the frequency of the word), but not for sizes above 16–30 million words. As for the language register, we found that frequencies based on television and film subtitles are better than frequencies based on written sources, certainly for the monosyllabic and bisyllabic words used in psycholinguistic research. Finally, we found that lemma frequencies are not superior to word form frequencies in English and that a measure of contextual diversity is better than a measure based on raw frequency of occurrence. Part of the superiority of the latter is due to the words that are frequently used as names. Assembling a new frequency norm on the basis of these considerations turned out to predict word processing times much better than did the existing norms (including Kucera & Francis and Celex). The new SUBTL frequency norms from the SUBTLEXUS corpus are freely available for research purposes from http://brm.psychonomic-journals.org/content/supplemental, as well as from the University of Ghent and Lexique Web sites.

2,106 citations
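The contrast the abstract draws between raw frequency and contextual diversity can be sketched in a few lines. This is a toy illustration with an invented mini-corpus, not the actual SUBTLEX pipeline:

```python
from collections import Counter

# Toy "subtitle corpus": each string stands in for one film's subtitles.
# All data here is invented for illustration.
films = [
    "the broth was hot the broth was good",
    "clean the corner of the room",
    "the cleaner swept the corner",
]

raw_freq = Counter()              # total token count over the whole corpus
contextual_diversity = Counter()  # number of distinct films a word occurs in

for film in films:
    tokens = film.split()
    raw_freq.update(tokens)
    contextual_diversity.update(set(tokens))  # count each film at most once

print(raw_freq["broth"], contextual_diversity["broth"])    # → 2 1
print(raw_freq["corner"], contextual_diversity["corner"])  # → 2 2
```

Here "broth" and "corner" have identical raw frequencies, but "corner" is spread over more contexts; counting each film at most once is what makes contextual diversity robust against words that occur often but only in a few contexts, such as character names.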

Journal ArticleDOI
TL;DR: A comprehensive tutorial review of the science of learning to read, spanning from children’s earliest alphabetic skills through to the fluent word recognition and skilled text comprehension characteristic of expert readers is presented.
Abstract: There is intense public interest in questions surrounding how children learn to read and how they can best be taught. Research in psychological science has provided answers to many of these questions...

447 citations

Journal Article
TL;DR: This article present a comprehensive tutorial review of the science of learning to read, spanning from children's earliest alphabetic skills through to the fluent word recognition and skilled text comprehension characteristic of expert readers.
Abstract: There is intense public interest in questions surrounding how children learn to read and how they can best be taught. Research in psychological science has provided answers to many of these questions but, somewhat surprisingly, this research has been slow to make inroads into educational policy and practice. Instead, the field has been plagued by decades of “reading wars.” Even now, there remains a wide gap between the state of research knowledge about learning to read and the state of public understanding. The aim of this article is to fill this gap. We present a comprehensive tutorial review of the science of learning to read, spanning from children’s earliest alphabetic skills through to the fluent word recognition and skilled text comprehension characteristic of expert readers. We explain why phonics instruction is so central to learning in a writing system such as English. But we also move beyond phonics, reviewing research on what else children need to learn to become expert readers and considering how this might be translated into effective classroom practice. We call for an end to the reading wars and recommend an agenda for instruction and research in reading acquisition that is balanced, developmentally informed, and based on a deep understanding of how language and writing systems work.

416 citations

Journal ArticleDOI
TL;DR: A 2-layer symbolic network model based on the equilibrium equations of the Rescorla-Wagner model (Danks, 2003) is proposed, showing that for pseudo-derived words no special morpho-orthographic segmentation mechanism is required and predicting that productive affixes afford faster response latencies for new words.
Abstract: A 2-layer symbolic network model based on the equilibrium equations of the Rescorla-Wagner model (Danks, 2003) is proposed. The study first presents 2 experiments in Serbian, which reveal for sentential reading the inflectional paradigmatic effects previously observed by Milin, Filipović Đurđević, and Moscoso del Prado Martín (2009) for unprimed lexical decision. The empirical results are successfully modeled without having to assume separate representations for inflections or data structures such as inflectional paradigms. In the next step, the same naive discriminative learning approach is pitted against a wide range of effects documented in the morphological processing literature. Frequency effects for complex words as well as for phrases (Arnon & Snider, 2010) emerge in the model without the presence of whole-word or whole-phrase representations. Family size effects (Moscoso del Prado Martín, Bertram, Häikiö, Schreuder, & Baayen, 2004; Schreuder & Baayen, 1997) emerge in the simulations across simple words, derived words, and compounds, without derived words or compounds being represented as such. It is shown that for pseudo-derived words no special morpho-orthographic segmentation mechanism, as posited by Rastle, Davis, and New (2004), is required. The model also replicates the finding of Plag and Baayen (2009) that, on average, words with more productive affixes elicit longer response latencies; at the same time, it predicts that productive affixes afford faster response latencies for new words. English phrasal paradigmatic effects modulating isolated word reading are reported and modeled, showing that the paradigmatic effects characterizing Serbian case inflection have crosslinguistic scope.

392 citations
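The equilibrium-equation approach described in this abstract can be illustrated with a minimal sketch. The three-word lexicon, the use of letter bigrams as cues, and all counts below are invented for illustration; this is not the authors' implementation:

```python
import numpy as np

# Toy lexicon: orthographic cues are letter bigrams, outcomes are meanings.
words = {"corn": "CORN", "corner": "CORNER", "clean": "CLEAN"}

def bigrams(w):
    w = "#" + w + "#"                       # mark word boundaries
    return [w[i:i + 2] for i in range(len(w) - 1)]

cues = sorted({b for w in words for b in bigrams(w)})
outcomes = sorted(set(words.values()))
ci = {c: i for i, c in enumerate(cues)}
oi = {o: i for i, o in enumerate(outcomes)}

# Cue-cue and cue-outcome co-occurrence counts over learning events.
C = np.zeros((len(cues), len(cues)))
O = np.zeros((len(cues), len(outcomes)))
for w, meaning in words.items():
    bs = bigrams(w)
    for b1 in bs:
        O[ci[b1], oi[meaning]] += 1
        for b2 in bs:
            C[ci[b1], ci[b2]] += 1

# Danks (2003) equilibrium: weights W satisfying C @ W = O
# (solved by least squares, since C is typically singular).
W, *_ = np.linalg.lstsq(C, O, rcond=None)

def activation(word, outcome):
    """Summed support of a word's known bigram cues for an outcome."""
    return sum(W[ci[b], oi[outcome]] for b in bigrams(word) if b in ci)
```

With this tiny lexicon the system is exactly solvable, so each training word fully activates its own outcome and no other; on realistic lexicons the system is overdetermined and graded activations emerge, which is the regime the simulations above operate in.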

Journal ArticleDOI
TL;DR: It is argued that the weight of evidence now suggests that the recognition of morphologically complex words begins with a rapid morphemic segmentation based solely on the analysis of orthography.
Abstract: Recent theories of morphological processing have been dominated by the notion that morphologically complex words are decomposed into their constituents on the basis of their semantic properties. In this article we argue that the weight of evidence now suggests that the recognition of morphologically complex words begins with a rapid morphemic segmentation based solely on the analysis of orthography. Following a review of this evidence, we discuss the characteristics of this form of decomposition, speculate on what its purpose might be, consider how it might be learned in the developing reader, and describe what is known of its neural bases. Our discussion ends by reflecting on how evidence for semantically based decomposition might be (re)interpreted in the context of the orthographically based form of decomposition that we have described.

385 citations


Cites background, methods, or results from "The broth in my brother's brothel: ..."

  • ...Key studies on this topic were reported by Longtin et al. (2003) and by Rastle, Davis, and New (2004)....


  • ...…LOW-PROBABILITY SEQUENCES AS CONTAINING BOUNDARIES One method by which readers could acquire morpho-orthographic knowledge is through the analysis of sequential probabilities of letter combinations in printed text (e.g., bigram or trigram troughs, Seidenberg, 1987; see also Rastle et al., 2004)....


  • ...Preliminary corpus analyses suggest that placing morpheme boundaries within low-frequency transitions can segment many, though not all, polymorphemic words (Rastle et al., 2004)....


  • ...Overall, the pattern of data closely follows the results of Rastle et al. (2004), with Diependaele, Sandra, and Grainger (2005) being the only outlier.1 Priming effects yielded by morphologically structured words that have no semantic relation to their stems (e.g., corner-CORN) are of approximately…...

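The bigram-trough idea in the excerpts above — placing morpheme boundaries at low-frequency letter transitions — can be sketched as follows. The word list and resulting counts are invented for illustration and stand in for a real corpus:

```python
from collections import Counter

# Toy word list standing in for a corpus (invented for illustration).
lexicon = ["clean", "cleaner", "cleaning", "corner", "corn",
           "darker", "dark", "teacher", "teach"]

bigram_freq = Counter()
for w in lexicon:
    for i in range(len(w) - 1):
        bigram_freq[w[i:i + 2]] += 1

def trough_boundaries(word):
    """Propose morpheme boundaries where bigram frequency dips to a
    local minimum (a "trough") inside the word."""
    freqs = [bigram_freq[word[i:i + 2]] for i in range(len(word) - 1)]
    cuts = []
    for i in range(1, len(freqs) - 1):
        if freqs[i] < freqs[i - 1] and freqs[i] < freqs[i + 1]:
            cuts.append(i + 1)        # split word[:i+1] | word[i+1:]
    return cuts

print(trough_boundaries("darker"))   # → [4], i.e. dark|er
print(trough_boundaries("cleaner"))  # → [5], i.e. clean|er
print(trough_boundaries("corner"))   # → [], no trough found
```

Note that "corner" yields no trough under these toy counts: the excerpt's caveat that low-frequency transitions segment "many, though not all, polymorphemic words" shows up even at this scale.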

References
Journal ArticleDOI
TL;DR: A new general theory of acquired similarity and knowledge representation, latent semantic analysis (LSA), is presented and used to successfully simulate such learning and several other psycholinguistic phenomena.
Abstract: How do people know as much as they do with as little information as they get? The problem takes many forms; learning vocabulary from text is an especially dramatic and convenient case for research. A new general theory of acquired similarity and knowledge representation, latent semantic analysis (LSA), is presented and used to successfully simulate such learning and several other psycholinguistic phenomena. By inducing global knowledge indirectly from local co-occurrence data in a large body of representative text, LSA acquired knowledge about the full vocabulary of English at a comparable rate to schoolchildren. LSA uses no prior linguistic or perceptual similarity knowledge; it is based solely on a general mathematical learning method that achieves powerful inductive effects by extracting the right number of dimensions (e.g., 300) to represent objects and contexts. Relations to other theories, phenomena, and problems are sketched.

6,014 citations


"The broth in my brother's brothel: ..." refers to methods in this paper

  • ...In order to corroborate our intuitions about semantic transparency, we extracted semantic relatedness values for each prime–target pair in the experimental conditions using the Latent Semantic Analysis (LSA; Landauer & Dumais, 1997) Web facility (http://lsa.colorado.edu)....

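The SVD machinery behind LSA, as used above to score prime-target relatedness, can be sketched minimally. The term-document counts are invented, and two latent dimensions stand in for the ~300 used in practice:

```python
import numpy as np

# Toy term-by-document count matrix (all numbers invented):
# rows = terms, columns = passages of text.
terms = ["clean", "cleaner", "corn", "corner"]
X = np.array([
    [2., 1., 0., 0.],
    [1., 2., 0., 0.],
    [0., 0., 2., 1.],
    [0., 0., 1., 2.],
])

# Truncated SVD: keep k latent dimensions (real LSA uses ~300).
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
term_vecs = U[:, :k] * s[:k]          # term positions in the latent space

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(term_vecs[0], term_vecs[1]))  # clean vs cleaner: high
print(cosine(term_vecs[0], term_vecs[2]))  # clean vs corn: near zero
```

Relatedness is read off as cosine similarity in the reduced space, which is how the paper's prime-target pairs were scored via the LSA web facility.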

Journal ArticleDOI
TL;DR: The DRC model is a computational realization of the dual-route theory of reading, and is the only computational model of reading that can perform the 2 tasks most commonly used to study reading: lexical decision and reading aloud.
Abstract: This article describes the Dual Route Cascaded (DRC) model, a computational model of visual word recognition and reading aloud. The DRC is a computational realization of the dual-route theory of reading, and is the only computational model of reading that can perform the 2 tasks most commonly used to study reading: lexical decision and reading aloud. For both tasks, the authors show that a wide variety of variables that influence human latencies influence the DRC model's latencies in exactly the same way. The DRC model simulates a number of such effects that other computational models of reading do not, but there appear to be no effects that any other current computational model of reading can simulate but that the DRC model cannot. The authors conclude that the DRC model is the most successful of the existing computational models of reading.

3,472 citations

Journal ArticleDOI
TL;DR: DMDX is a Windows-based program designed primarily for language-processing experiments that uses the features of Pentium class CPUs and the library routines provided in DirectX to provide accurate timing and synchronization of visual and audio output.
Abstract: DMDX is a Windows-based program designed primarily for language-processing experiments. It uses the features of Pentium class CPUs and the library routines provided in DirectX to provide accurate timing and synchronization of visual and audio output. A brief overview of the design of the program is provided, together with the results of tests of the accuracy of timing. The Web site for downloading the software is given, but the source code is not available.

2,541 citations


"The broth in my brother's brothel: ..." refers to methods in this paper

  • ...Stimulus presentation and data recording were controlled by DMDX software (Forster & Forster, 2003) running on a Pentium III personal computer....


Journal ArticleDOI
TL;DR: The authors showed that the frequency attenuation effect is a product of the involvement of the episodic memory system in the lexical decision process, which is supported by the demonstration of constant repetition effects for high and low-frequency words when the priming stimulus is masked; the masking is assumed to minimize the influence of any possible episodic trace of the prime.
Abstract: Repetition priming effects in lexical decision tasks are stronger for low-frequency words than for high-frequency words. This frequency attenuation effect creates problems for frequency-ordered search models that assume a relatively stable frequency effect. The suggestion is made that frequency attenuation is a product of the involvement of the episodic memory system in the lexical decision process. This hypothesis is supported by the demonstration of constant repetition effects for high- and low-frequency words when the priming stimulus is masked; the masking is assumed to minimize the influence of any possible episodic trace of the prime. It is further shown that long-term repetition effects are much less reliable when the subject is not required to make a lexical decision response to the prime. When a response is required, the expected frequency attenuation effect is restored. It is concluded that normal repetition effects consist of two components: a very brief lexical effect that is independent of frequency and a long-term episodic effect that is sensitive to frequency. There has been much recent interest in the fact that in a lexical decision experiment, where subjects are required to classify letter strings as words or nonwords, there is a substantial increase in both the speed and the accuracy of classification for words that are presented more than once during the experiment, even though considerable time may have elapsed between successive presentations.

1,324 citations


"The broth in my brother's brothel: ..." refers to methods in this paper

  • ...Recent research using masked priming of visual word recognition (a priming technique in which primes are presented so briefly that they are unavailable for report; Forster & Davis, 1984) appears potentially inconsistent with this characterization, however....

