scispace - formally typeset
Topic

Lexical decision task

About: The lexical decision task is a research topic. Over its lifetime, 4,701 publications have been published within this topic, receiving 259,198 citations. The topic is also known as: LDT.


Papers
Journal Article (DOI)
TL;DR: A parallel distributed processing model of visual word recognition and pronunciation is described, consisting of sets of orthographic and phonological units and an interlevel of hidden units; early in the learning phase, the model's behavior corresponds to that of children acquiring word recognition skills.
Abstract: A parallel distributed processing model of visual word recognition and pronunciation is described. The model consists of sets of orthographic and phonological units and an interlevel of hidden units. Weights on connections between units were modified during a training phase using the back-propagation learning algorithm. The model simulates many aspects of human performance, including (a) differences between words in terms of processing difficulty, (b) pronunciation of novel items, (c) differences between readers in terms of word recognition skill, (d) transitions from beginning to skilled reading, and (e) differences in performance on lexical decision and naming tasks. The model's behavior early in the learning phase corresponds to that of children acquiring word recognition skills. Training with a smaller number of hidden units produces output characteristic of many dyslexic readers. Naming is simulated without pronunciation rules, and lexical decisions are simulated without accessing word-level representations. The performance of the model is largely determined by three factors: the nature of the input, a significant fragment of written English; the learning rule, which encodes the implicit structure of the orthography in the weights on connections; and the architecture of the system, which influences the scope of what can be learned.

3,642 citations
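The architecture described in the abstract above can be illustrated with a minimal sketch: a feedforward network mapping orthographic input units through a hidden interlevel to phonological output units, with weights adjusted by back-propagation. This is not the authors' implementation; the unit counts, toy patterns, learning rate, and epoch count below are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch (NOT the original Seidenberg & McClelland model):
# orthographic input units -> hidden interlevel -> phonological output units,
# trained with back-propagation. All sizes and patterns are toy assumptions.

rng = np.random.default_rng(0)
n_orth, n_hidden, n_phon = 20, 8, 15  # assumed unit counts

# Toy training set: random binary "orthographic" patterns mapped to
# random binary "phonological" target patterns.
X = rng.integers(0, 2, size=(10, n_orth)).astype(float)
Y = rng.integers(0, 2, size=(10, n_phon)).astype(float)

W1 = rng.normal(0, 0.5, (n_orth, n_hidden))
W2 = rng.normal(0, 0.5, (n_hidden, n_phon))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(2000):
    h = sigmoid(X @ W1)    # hidden interlevel activations
    out = sigmoid(h @ W2)  # phonological output activations
    err = out - Y
    # Back-propagate the error signal through both weight layers.
    d_out = err * out * (1 - out)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out) / len(X)
    W1 -= lr * (X.T @ d_hid) / len(X)

mse = float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) - Y) ** 2))
print(f"final training MSE: {mse:.4f}")
```

The key design point the abstract emphasizes is that knowledge of spelling-sound structure lives entirely in the connection weights, with no explicit pronunciation rules; the sketch mirrors that by storing everything in `W1` and `W2`.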

Journal Article (DOI)
TL;DR: The motivation for this project, the methods used to collect the data, and the search engine that affords access to the behavioral measures and descriptive lexical statistics for these stimuli are described.
Abstract: The English Lexicon Project is a multiuniversity effort to provide a standardized behavioral and descriptive data set for 40,481 words and 40,481 nonwords. It is available via the Internet at elexicon.wustl.edu. Data from 816 participants across six universities were collected in a lexical decision task (approximately 3400 responses per participant), and data from 444 participants were collected in a speeded naming task (approximately 2500 responses per participant). The present paper describes the motivation for this project, the methods used to collect the data, and the search engine that affords access to the behavioral measures and descriptive lexical statistics for these stimuli.

2,164 citations

Journal Article (DOI)
TL;DR: The authors showed that when conceptual activity is sufficiently great to activate a multiple set of corresponding lexical representations, interference is produced in the process of retrieving a single best lexical candidate as the name or translation.

2,133 citations

Journal Article (DOI)
TL;DR: The authors investigated the size of the corpus, the language register on which the corpus is based, and the definition of the frequency measure, finding that lemma frequencies are not superior to word form frequencies in English and that a measure of contextual diversity predicts word processing times better than a measure based on raw frequency of occurrence.
Abstract: Word frequency is the most important variable in research on word processing and memory. Yet, the main criterion for selecting word frequency norms has been the availability of the measure, rather than its quality. As a result, much research is still based on the old Kucera and Francis frequency norms. By using the lexical decision times of recently published megastudies, we show how bad this measure is and what must be done to improve it. In particular, we investigated the size of the corpus, the language register on which the corpus is based, and the definition of the frequency measure. We observed that corpus size is of practical importance for small sizes (depending on the frequency of the word), but not for sizes above 16–30 million words. As for the language register, we found that frequencies based on television and film subtitles are better than frequencies based on written sources, certainly for the monosyllabic and bisyllabic words used in psycholinguistic research. Finally, we found that lemma frequencies are not superior to word form frequencies in English and that a measure of contextual diversity is better than a measure based on raw frequency of occurrence. Part of the superiority of the latter is due to the words that are frequently used as names. Assembling a new frequency norm on the basis of these considerations turned out to predict word processing times much better than did the existing norms (including Kucera & Francis and Celex). The new SUBTL frequency norms from the SUBTLEXUS corpus are freely available for research purposes from http://brm.psychonomic-journals.org/content/supplemental, as well as from the University of Ghent and Lexique Web sites.

2,106 citations
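The distinction the abstract above draws between raw frequency and contextual diversity can be shown with a small sketch. The toy corpus below is an illustrative assumption, not SUBTLEX data: raw frequency counts every occurrence of a word, while contextual diversity counts the number of distinct documents (e.g. film subtitles) containing it.

```python
from collections import Counter

# Illustrative sketch with a toy corpus (assumed data, not SUBTLEXUS):
# contrast a word's raw frequency of occurrence with its contextual
# diversity, i.e. the number of distinct documents that contain it.

documents = [
    "the cat sat on the mat",
    "the dog chased the cat",
    "a quiet night at the opera",
    "jazz jazz jazz jazz",  # "jazz" is frequent but contextually narrow
]

raw_freq = Counter()   # total occurrences across the corpus
doc_count = Counter()  # contextual diversity: documents containing the word
for doc in documents:
    words = doc.split()
    raw_freq.update(words)
    doc_count.update(set(words))  # each document counted at most once

print(raw_freq["jazz"], doc_count["jazz"])  # → 4 1
print(raw_freq["the"], doc_count["the"])    # → 5 3
```

A word like "jazz" here has a high raw count from a single document, whereas "the" appears across most documents; the abstract's finding is that the document-spread measure is the better predictor of word processing times.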

Journal Article (DOI)
TL;DR: This article examined the effects of prior semantic context on lexical access during sentence comprehension and found that lexical decisions for visual words related to each meaning of an ambiguous word were facilitated when those words were presented simultaneously with the end of the ambiguous word.

1,468 citations


Network Information
Related Topics (5)
Semantic memory: 9.4K papers, 659.8K citations, 90% related
Working memory: 26.5K papers, 1.6M citations, 89% related
Episodic memory: 10.7K papers, 626.9K citations, 88% related
Social cognition: 16.1K papers, 1.2M citations, 84% related
Visual perception: 20.8K papers, 997.2K citations, 84% related
Performance Metrics
No. of papers in the topic in previous years

Year  Papers
2023  65
2022  114
2021  120
2020  122
2019  157
2018  127