
Showing papers by "James L. McClelland published in 2004"


Book
18 Jun 2004
TL;DR: The authors propose that performance in semantic tasks arises through the propagation of graded signals in a system of interconnected processing units, and show how a simple computational model proposed by Rumelhart exhibits a progressive differentiation of conceptual knowledge, paralleling aspects of cognitive development seen in the work of Frank Keil and Jean Mandler.
Abstract: This groundbreaking monograph offers a mechanistic theory of the representation and use of semantic knowledge, integrating the strengths and overcoming many of the weaknesses of hierarchical, categorization-based approaches, similarity-based approaches, and the approach often called "theory theory." Building on earlier models by Geoffrey Hinton in the 1980s and David Rumelhart in the early 1990s, the authors propose that performance in semantic tasks arises through the propagation of graded signals in a system of interconnected processing units. The representations used in performing these tasks are patterns of activation across units, governed by weighted connections among them. Semantic knowledge is acquired through the gradual adjustment of the strengths of these connections in the course of day-to-day experience. The authors show how a simple computational model proposed by Rumelhart exhibits a progressive differentiation of conceptual knowledge, paralleling aspects of cognitive development seen in the work of Frank Keil and Jean Mandler. The authors extend the model to address aspects of conceptual knowledge acquisition in infancy, disintegration of conceptual knowledge in dementia, "basic-level" effects and their interaction with expertise, and many findings introduced to support the idea that semantic cognition is guided by naive, domain-specific theories.
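
To make the mechanism concrete, here is a minimal sketch of a Rumelhart-style network of the kind the book builds on: one-hot item and relation inputs pass through learned representation and hidden layers to attribute outputs, and the connection weights are adjusted gradually by error-driven learning. The items, relations, attributes, layer sizes, and training settings below are illustrative stand-ins, not the book's actual simulations.

import numpy as np

rng = np.random.default_rng(0)

items = ["oak", "rose", "canary", "salmon"]        # hypothetical toy items
relations = ["ISA", "can", "has"]                  # hypothetical relations
attributes = ["plant", "animal", "tree", "flower", "bird", "fish",
              "grow", "fly", "swim", "bark", "petals", "wings", "scales"]

# Hand-built (item, relation) -> attributes facts, a toy stand-in for experience.
facts = {
    ("oak", "ISA"): ["plant", "tree"],      ("oak", "can"): ["grow"],
    ("oak", "has"): ["bark"],               ("rose", "ISA"): ["plant", "flower"],
    ("rose", "can"): ["grow"],              ("rose", "has"): ["petals"],
    ("canary", "ISA"): ["animal", "bird"],  ("canary", "can"): ["grow", "fly"],
    ("canary", "has"): ["wings"],           ("salmon", "ISA"): ["animal", "fish"],
    ("salmon", "can"): ["grow", "swim"],    ("salmon", "has"): ["scales"],
}

def one_hot(name, names):
    v = np.zeros(len(names))
    v[names.index(name)] = 1.0
    return v

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Weights: item -> representation, representation + relation -> hidden, hidden -> attributes.
n_rep, n_hid = 4, 8
W_ir = rng.normal(0, 0.1, (len(items), n_rep))
W_rh = rng.normal(0, 0.1, (n_rep, n_hid))
W_qh = rng.normal(0, 0.1, (len(relations), n_hid))
W_ho = rng.normal(0, 0.1, (n_hid, len(attributes)))

lr = 0.1
for epoch in range(3000):
    for (item, rel), attrs in facts.items():
        x_i, x_q = one_hot(item, items), one_hot(rel, relations)
        t = np.sum([one_hot(a, attributes) for a in attrs], axis=0)

        rep = sigmoid(x_i @ W_ir)               # item representation
        hid = sigmoid(rep @ W_rh + x_q @ W_qh)  # relation-gated hidden layer
        out = sigmoid(hid @ W_ho)               # predicted attributes

        # Backpropagate the output error (out - t) through the sigmoid layers.
        d_out = out - t
        d_hid = (d_out @ W_ho.T) * hid * (1 - hid)
        d_rep = (d_hid @ W_rh.T) * rep * (1 - rep)

        W_ho -= lr * np.outer(hid, d_out)
        W_qh -= lr * np.outer(x_q, d_hid)
        W_rh -= lr * np.outer(rep, d_hid)
        W_ir -= lr * np.outer(x_i, d_rep)

# Print the learned item representations.
for item in items:
    print(item, np.round(sigmoid(one_hot(item, items) @ W_ir), 2))

Inspecting the learned item representations at different points in training is one way to look for the progressive differentiation the abstract describes: in simulations of this kind, coarse distinctions (plant versus animal) tend to separate before finer ones.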

976 citations


Journal Article
TL;DR: The authors present a parallel distributed processing implementation of this theory, in which semantic representations emerge from mechanisms that acquire the mappings between visual representations of objects and their verbal descriptions, and use the model to understand the structure of impaired performance in patients with selective and progressive impairments of conceptual knowledge.
Abstract: Wernicke (1900, as cited in G. H. Eggert, 1977) suggested that semantic knowledge arises from the interaction of perceptual representations of objects and words. The authors present a parallel distributed processing implementation of this theory, in which semantic representations emerge from mechanisms that acquire the mappings between visual representations of objects and their verbal descriptions. To test the theory, they trained the model to associate names, verbal descriptions, and visual representations of objects. When its inputs and outputs are constructed to capture aspects of structure apparent in attribute-norming experiments, the model provides an intuitive account of semantic task performance. The authors then used the model to understand the structure of impaired performance in patients with selective and progressive impairments of conceptual knowledge. Data from 4 well-known semantic tasks revealed consistent patterns that find a ready explanation in the model. The relationship between the model and related theories of semantic representation is discussed.
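
As a rough illustration of the mapping-plus-impairment setup described above, the sketch below trains a small network to map "visual" feature vectors onto names and verbal descriptors through a shared hidden layer, then zeroes out an increasing fraction of the hidden-to-output connections to probe how naming and description degrade. The feature vectors, labels, network size, and lesioning procedure are hypothetical stand-ins, not the paper's simulation.

import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy "visual" inputs (one row per object). Hypothetical items: dog, cat, car, truck.
visual = np.array([[1, 1, 0, 0, 1, 0],
                   [1, 1, 0, 0, 0, 1],
                   [0, 0, 1, 1, 1, 0],
                   [0, 0, 1, 1, 0, 1]], dtype=float)
# Verbal outputs: four name units (one per object) plus two shared descriptor
# units (hypothetically "animal" and "vehicle").
names = np.eye(4)
descriptors = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)
targets = np.hstack([names, descriptors])

n_hid = 6
W1 = rng.normal(0, 0.3, (visual.shape[1], n_hid))
W2 = rng.normal(0, 0.3, (n_hid, targets.shape[1]))

for _ in range(5000):                      # plain error-driven (backprop) training
    h = sigmoid(visual @ W1)
    o = sigmoid(h @ W2)
    d_o = o - targets
    d_h = (d_o @ W2.T) * h * (1 - h)
    W2 -= 0.1 * h.T @ d_o
    W1 -= 0.1 * visual.T @ d_h

def accuracy(w1, w2):
    o = sigmoid(sigmoid(visual @ w1) @ w2)
    name_acc = np.mean((o[:, :4] > 0.5) == (targets[:, :4] > 0.5))
    descr_acc = np.mean((o[:, 4:] > 0.5) == (targets[:, 4:] > 0.5))
    return name_acc, descr_acc

# Progressive "lesion": zero out a growing fraction of the hidden-to-output
# connections and see how item-specific names vs. shared descriptors hold up.
for frac in (0.0, 0.2, 0.4, 0.6):
    lesioned = W2.copy()
    lesioned[rng.random(lesioned.shape) < frac] = 0.0
    name_acc, descr_acc = accuracy(W1, lesioned)
    print(f"lesion {frac:.0%}: names {name_acc:.2f}, descriptors {descr_acc:.2f}")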

847 citations


Journal Article
TL;DR: It is shown that participants have much more knowledge about the game than previously thought, and that when they behave advantageously, their verbal reports nearly always reveal evidence of quantitative knowledge about the outcomes of the decks that would be sufficient to guide such advantageous behavior.
Abstract: Bechara, Damasio, and coworkers [Bechara, A., Damasio, H., Tranel, D. & Damasio, A. R. (1997) Science 275, 1293–1295] have reported that normal participants decide advantageously before knowing the advantageous strategy in a simple card game designed to mimic real-life decision-making. Bechara et al. have used this result to support their view that nonconscious somatic markers can guide advantageous behavior. By using more sensitive methods, we show that participants have much more knowledge about the game than previously thought. In fact, participants report knowledge of the advantageous strategy more reliably than they behave advantageously. Furthermore, when they behave advantageously, their verbal reports nearly always reveal evidence of quantitative knowledge about the outcomes of the decks that would be sufficient to guide such advantageous behavior. In addition, there is evidence that participants also have access to more qualitative reportable knowledge. These results are compatible with the view that, in this task, both overt behavior and verbal reports reflect sampling from consciously accessible knowledge; there is no need to appeal to nonconscious somatic markers. We also discuss the findings of other studies that similarly suggest alternative interpretations of other evidence previously used to support a role for somatic markers in decision-making.

525 citations


Journal Article
TL;DR: An alternative theory is proposed, integrating loss aversion and attention switching into a nonlinear model that relies on inhibition independent of similarity among alternatives; the model accounts for the 3 effects and makes testable predictions contrasting with those of the Roe et al. (2001) model.
Abstract: The roles of loss aversion and inhibition among alternatives are examined in models of the similarity, compromise, and attraction effects that arise in choices among 3 alternatives differing on 2 attributes. R. M. Roe, J. R. Busemeyer, and J. T. Townsend (2001) have proposed a linear model in which effects previously attributed to loss aversion (A. Tversky & D. Kahneman, 1991) arise from attention switching between attributes and similarity-dependent inhibitory interactions among alternatives. However, there are several reasons to maintain loss aversion in a theory of choice. In view of this, an alternative theory is proposed, integrating loss aversion and attention switching into a nonlinear model (M. Usher & J. L. McClelland, 2001) that relies on inhibition independent of similarity among alternatives. The model accounts for the 3 effects and makes testable predictions contrasting with those of the Roe et al. (2001) model.
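
The sketch below shows one way such a model can be put together: leaky competing accumulators receive loss-averse comparisons of the alternatives on whichever attribute attention currently samples, and compete through inhibition that depends only on overall activation, not on similarity. The attribute values, the asymmetric value function, and all parameters are illustrative assumptions, not the published model's specification.

import numpy as np

rng = np.random.default_rng(2)

# Three alternatives described on two attributes (e.g., an attraction-effect
# set: A and B trade off, while the decoy D is dominated by A). Values are
# arbitrary illustrative numbers.
options = np.array([[0.8, 0.4],    # A
                    [0.4, 0.8],    # B
                    [0.7, 0.3]])   # D

def value(d, loss_aversion=2.0):
    """Asymmetric value of an advantage/disadvantage d: losses loom larger."""
    return d if d >= 0 else loss_aversion * d

def simulate(n_trials=500, n_steps=200, leak=0.1, inhibition=0.2, noise=0.1):
    wins = np.zeros(len(options))
    for _ in range(n_trials):
        x = np.zeros(len(options))                 # accumulator activations
        for _ in range(n_steps):
            attr = rng.integers(options.shape[1])  # attention samples one attribute
            # Input to each accumulator: loss-averse comparisons with the other
            # alternatives on the currently attended attribute.
            I = np.array([sum(value(options[i, attr] - options[j, attr])
                              for j in range(len(options)) if j != i)
                          for i in range(len(options))])
            # Leaky accumulation with inhibition that depends only on the other
            # accumulators' total activation, not on similarity.
            x += I - leak * x - inhibition * (x.sum() - x)
            x += rng.normal(0, noise, size=x.shape)
            x = np.maximum(x, 0.0)                 # activations stay nonnegative
        wins[np.argmax(x)] += 1
    return wins / n_trials

print(dict(zip("ABD", simulate().round(3))))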

361 citations


Journal Article
TL;DR: A general account for the speech and nonspeech patterns is proposed based on the supposition that the perceptual trace of rapidly-changing sounds decays faster than the trace of steady-state sounds.
Abstract: Different patterns of performance across vowels and consonants in tests of categorization and discrimination indicate that vowels tend to be perceived more continuously, or less categorically, than consonants. The present experiments examined whether analogous differences in perception would arise in nonspeech sounds that share critical transient acoustic cues of consonants and steady-state spectral cues of simplified synthetic vowels. Listeners were trained to categorize novel nonspeech sounds varying along a continuum defined by a steady-state cue, a rapidly-changing cue, or both cues. Listeners’ categorization of stimuli varying on the rapidly-changing cue showed a sharp category boundary and posttraining discrimination was well predicted from the assumption of categorical perception. Listeners more accurately discriminated but less accurately categorized steady-state nonspeech stimuli. When listeners categorized stimuli defined by both rapidly-changing and steady-state cues, discrimination performance was accurate and the categorization function exhibited a sharp boundary. These data are similar to those found in experiments with dynamic vowels, which are defined by both steady-state and rapidly-changing acoustic cues. A general account for the speech and nonspeech patterns is proposed based on the supposition that the perceptual trace of rapidly-changing sounds decays faster than the trace of steady-state sounds.
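
One way to make the trace-decay account concrete is the dual-process sketch below: discrimination can draw either on a continuous perceptual trace, whose usefulness decays over the interstimulus interval, or on discrete category labels. The blending formula, the logistic categorizer, and the parameter values are assumptions for illustration, not the authors' model.

import numpy as np

def p_label_different(s1, s2, boundary=0.5, slope=10.0):
    """Probability that two stimuli receive different category labels
    (logistic categorization around a shared boundary)."""
    p1 = 1.0 / (1.0 + np.exp(-slope * (s1 - boundary)))
    p2 = 1.0 / (1.0 + np.exp(-slope * (s2 - boundary)))
    return p1 * (1 - p2) + (1 - p1) * p2

def p_discriminate(s1, s2, decay_rate, isi=1.0, trace_gain=4.0):
    """Blend of trace-based and label-based discrimination."""
    trace_strength = np.exp(-decay_rate * isi)          # trace fades over the ISI
    p_trace = min(1.0, trace_gain * abs(s1 - s2)) * trace_strength
    p_label = p_label_different(s1, s2)
    return p_trace + (1 - p_trace) * p_label            # either route suffices

# A within-category pair and a between-category pair on a 0-1 stimulus continuum.
within, between = (0.1, 0.3), (0.4, 0.6)
for decay_rate, label in [(0.2, "slow decay (steady-state-like cue)"),
                          (3.0, "fast decay (rapidly-changing cue)")]:
    pw = p_discriminate(*within, decay_rate)
    pb = p_discriminate(*between, decay_rate)
    print(f"{label}: within-category {pw:.2f}, between-category {pb:.2f}")

With the fast-decaying trace, predicted discrimination is high only for pairs straddling the category boundary (categorical-looking); with the slow-decaying trace, within-category pairs remain discriminable (continuous-looking).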

63 citations


Journal Article
TL;DR: The authors suggest that U-shaped curves can arise within a domain-general learning mechanism as it slowly masters a domain characterized by statistical regularities and exceptions.
Abstract: As the articles in this issue attest, U-shaped curves in development have stimulated a wide spectrum of research across disparate task domains and age groups and have provoked a variety of ideas about their origins and theoretical significance. In our view, the ubiquity of the general pattern suggests that U-shaped curves can arise from multiple factors, and that the various viewpoints represented herein may be useful for explaining some aspects of developmental change. In this spirit, we offer an additional way of thinking about such phenomena. Specifically, we suggest that U-shaped curves can arise within a domain-general learning mechanism as it slowly masters a domain characterized by statistical regularities and exceptions. This idea differs from those considered thus far, and may encompass many of the phenomena addressed by other views, three of which we outline briefly here.
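
To make the suggestion concrete, the toy simulation below trains a simple error-driven learner on a quasi-regular mapping in which most items take a regular "suffix" and one frequent item is an exception, then tracks performance on the exception as the rest of the vocabulary is gradually introduced. The mapping, network, and training regime are illustrative assumptions, not the authors' simulation; they are meant only to show the kind of setup in which dip-and-recovery (U-shaped) trajectories can arise.

import numpy as np

rng = np.random.default_rng(3)

n_items, n_feat = 20, 12
stems = (rng.random((n_items, n_feat)) > 0.5).astype(float)

# Outputs: the stem features, a "regular suffix" unit, and an "irregular" unit.
# Every item is regular (suffix on) except item 0, the frequent exception.
targets = np.hstack([stems, np.ones((n_items, 1)), np.zeros((n_items, 1))])
targets[0, -2], targets[0, -1] = 0.0, 1.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

W = np.zeros((n_feat, targets.shape[1]))
exception_score = []
for epoch in range(400):
    # The exception is sampled often; regular items enter the vocabulary
    # gradually, so the regularity slowly comes to dominate the weights.
    vocab = 2 + min(n_items - 2, epoch // 20)
    for _ in range(20):
        i = 0 if rng.random() < 0.3 else rng.integers(1, vocab)
        o = sigmoid(stems[i] @ W)
        W += 0.05 * np.outer(stems[i], targets[i] - o)   # error-driven update
    # Score the exception: is its irregular unit stronger than the regular suffix?
    o = sigmoid(stems[0] @ W)
    exception_score.append(float(o[-1] > o[-2]))

# Mean score on the exception in ten successive blocks of training; a dip in
# the middle followed by recovery is the U-shaped pattern of interest.
print(np.round(np.array(exception_score).reshape(10, 40).mean(axis=1), 2))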

32 citations


01 Jan 2004
TL;DR: An attentional scaling parameter added to the TRACE model dampens overall lexical-layer activation, providing a simple mechanism within TRACE's interactive framework for modulating lexical feedback; it was tested in two cases of lexical effects on phoneme identification.
Abstract: Attentional Modulation of Lexical Effects in an Interactive Model of Speech Perception. Daniel Mirman, James L. McClelland, and Lori L. Holt, Center for the Neural Basis of Cognition and Department of Psychology, Carnegie Mellon University, Pittsburgh, PA.

A number of studies have demonstrated that the strength of lexical effects on phoneme processing can be modulated by attention (e.g., Cutler et al., 1987; Eimas, Hornstein, & Payton, 1990; Vitevitch, 2003). The TRACE model (McClelland & Elman, 1986) posits direct feedback from lexical processing to phonemic processing, thus accounting for lexical influences on phoneme identification. However, the TRACE model lacks a mechanism for modulating this feedback through attention. Some researchers (Norris, McQueen, & Cutler, 2000) have argued that this is a weakness of the interactive view of speech perception and one reason to prefer an autonomous model.

We consider biased competition (Desimone & Duncan, 1995) as a possible attention mechanism that fits within the interactive framework of TRACE. In the context of TRACE, when an input is presented, phonemes that are partially consistent with the input compete through lateral inhibition. This competition is biased by lexical feedback proportional to the magnitude of lexical activation. Activation of lexical items is based on excitatory input from the phoneme layer and lateral inhibitory interactions among lexical items. The magnitude and rate at which lexical items become active can be manipulated by a scaling factor on the lexical units' response to input, which in turn influences the strength of lexical influences on phoneme perception. That is, task or stimulus conditions that direct attention away from lexical processing may operate by dampening lexical-layer activity and thereby reducing lexical biasing of phoneme processing.

To implement this mechanism in TRACE, an attentional scaling parameter (α) was added to the function specifying the change in activation of lexical units on each processing cycle. When α = 1.0, the model is the standard TRACE model as implemented by McClelland and Elman (1986); when α < 1.0, lexical activation is dampened and lexical effects should be reduced. This mechanism was tested in two cases of lexical effects on phoneme identification.

First, ambiguous phonemes tend to be perceived as lexically consistent (Ganong, 1980), but the strength of this effect varies with task and stimulus differences (see Pitt & Samuel, 1993, for review and meta-analysis). The attention parameter captured this variability: when lexical attention is high, lexical items become more active more quickly, providing stronger and earlier feedback to the phoneme level and biasing perception of the ambiguous acoustic input; when lexical attention is very low, lexical items become active more slowly, providing less feedback to the phoneme level and producing a small, late-developing lexical bias.

Second, phonemes are recognized more quickly in words than in nonwords. This word advantage has also been shown to be affected by task and stimulus factors (e.g., Cutler et al., 1987). Variation of the attention parameter also captures this variability: at high α values, TRACE is faster to recognize phonemes embedded in words; at lower α values, the word advantage disappears because lexical items are less active and so provide less support to their constituent phonemes.

The addition of a scaling parameter that dampens overall lexical-layer activation thus provides a simple mechanism, within the interactive framework of the TRACE model, for modulating the strength of lexical influences on phoneme processing.

References

Cutler, A., Mehler, J., Norris, D., & Segui, J. (1987). Phoneme identification and the lexicon. Cognitive Psychology, 19(2), 141-177.
Desimone, R., & Duncan, J. (1995). Neural mechanisms of selective visual attention. Annual Review of Neuroscience, 18, 193-222.
Eimas, P. D., Hornstein, S. M., & Payton, P. (1990). Attention and the role of dual codes in phoneme monitoring. Journal of Memory & Language, 29(2), 160-180.
Ganong, W. F. (1980). Phonetic categorization in auditory word perception. Journal of Experimental Psychology: Human Perception & Performance, 6(1), 110-125.
McClelland, J. L., & Elman, J. L. (1986). The TRACE model of speech perception. Cognitive Psychology, 18(1), 1-86.
Norris, D., McQueen, J. M., & Cutler, A. (2000). Merging information in speech recognition: Feedback is never necessary. Behavioral & Brain Sciences, 23(3), 299-370.
Pitt, M. A., & Samuel, A. G. (1993). An empirical and meta-analytic evaluation of the phoneme identification task. Journal of Experimental Psychology: Human Perception & Performance, 19(4), 699-725.
Vitevitch, M. S. (2003). The influence of sublexical and lexical representations on the processing of spoken words in English. Clinical Linguistics & Phonetics, 17(6), 487-499.
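
The sketch below shows the attentional-scaling idea schematically: a standard interactive-activation update for a lexical unit, with the unit's net input multiplied by α, so that α = 1.0 reproduces ordinary behavior and α < 1.0 slows and weakens lexical activation (and, in the full model, the feedback it sends to the phoneme layer). The simplified single-unit framing and the constants below are illustrative choices, not the actual TRACE parameters.

def ia_update(act, net, alpha=1.0, rest=-0.1, decay=0.1, a_max=1.0, a_min=-0.2):
    """One interactive-activation step for a lexical unit; net is its net input
    (bottom-up phoneme support minus lateral inhibition), scaled by alpha."""
    net = alpha * net
    if net > 0:
        delta = net * (a_max - act) - decay * (act - rest)
    else:
        delta = net * (act - a_min) - decay * (act - rest)
    return min(a_max, max(a_min, act + delta))

# Trace how a lexical unit rises under constant bottom-up support at full vs.
# reduced lexical attention; slower, weaker lexical activation means later,
# weaker feedback to the phoneme layer and hence a smaller lexical bias.
for alpha in (1.0, 0.5):
    act, history = -0.1, []
    for _ in range(30):
        act = ia_update(act, net=0.2, alpha=alpha)
        history.append(round(act, 2))
    print(f"alpha={alpha}: {history[::6]}")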

1 citation