James L. McClelland
Researcher at Stanford University
Publications: 332
Citations: 84,307
James L. McClelland is an academic researcher at Stanford University. He has contributed to research on the topics of Cognition and Connectionism, has an h-index of 102, and has co-authored 323 publications receiving 80,253 citations. His previous affiliations include the University of Lethbridge and the University of Pittsburgh.
Papers
Parallel distributed processing: Implications for cognition and development.
TL;DR: The application of the connectionist framework to problems of cognitive development is considered, illustrated by a network that learns to anticipate which side of a balance beam will go down based on the number of weights on each side of the fulcrum and their distance from it.
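The torque rule that the balance-beam network learns to approximate can be sketched as follows. This is a minimal illustration of the target rule itself (number of weights times distance from the fulcrum on each side), not the connectionist model described in the paper:

```python
def predicted_side(n_left, d_left, n_right, d_right):
    """Predict which side of a balance beam goes down using the
    torque rule: torque = number of weights x distance from the
    fulcrum.  The trained network in the paper only approximates
    this rule, passing through child-like intermediate stages."""
    torque_left = n_left * d_left
    torque_right = n_right * d_right
    if torque_left > torque_right:
        return "left"
    if torque_right > torque_left:
        return "right"
    return "balance"

# Three weights at distance 2 vs. two weights at distance 2:
print(predicted_side(3, 2, 2, 2))  # -> left
```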
Journal Article
Modelling the N400 brain potential as change in a probabilistic representation of meaning.
TL;DR: The authors provide a unified explanation of the N400 in a neural network model that avoids the commitments of traditional approaches to meaning in language and connects human language comprehension with recent deep learning approaches to language processing.
Journal Article
Stochastic Interactive Processes and the Effect of Context on Perception
TL;DR: The findings suggest that interactive models should not be viewed as alternatives to classical accounts, but as hypotheses about the dynamics of information processing that lead to the global asymptotic behavior that the classical models describe.
Journal Article
Deficits in irregular past-tense verb morphology associated with degraded semantic knowledge.
TL;DR: The authors evaluated the past-tense verb abilities of 11 patients with semantic dementia, a neurodegenerative condition characterised by degraded semantic knowledge, and predicted and confirmed that the patients would have essentially normal ability to generate and recognise regular (and novel) past-tense forms, but a marked and frequency-modulated deficit on irregular verbs.
Proceedings Article
Learning Representations by Recirculation
TL;DR: Simulations in simple networks show that the learning procedure usually converges rapidly on a good set of codes, and analysis shows that in certain restricted cases it performs gradient descent in the squared reconstruction error.
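The recirculation idea can be sketched as below: activity is passed visible → hidden → visible → hidden, and each weight update uses only the activities at the two ends of that weight, so no explicit error derivatives are propagated. This is a minimal sketch, assuming logistic units, a single training pattern, and illustrative hyperparameters (`lam`, `eps`, layer sizes, and step count are my choices for demonstration, not values from the paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_recirculation(x0, n_hidden=2, lam=0.8, eps=0.2, steps=500, seed=0):
    """Sketch of recirculation learning on one pattern.

    Two passes are made around the visible-hidden loop; the visible
    layer is regressed toward its old state by lam, and each update
    is local to the weight's own pre- and post-synaptic activities.
    """
    rng = np.random.default_rng(seed)
    n_vis = x0.size
    V = rng.normal(0.0, 0.1, (n_hidden, n_vis))   # visible -> hidden
    W = rng.normal(0.0, 0.1, (n_vis, n_hidden))   # hidden -> visible
    for _ in range(steps):
        y0 = sigmoid(V @ x0)                       # first hidden pass
        x1 = lam * x0 + (1 - lam) * sigmoid(W @ y0)  # regressed visible
        y1 = sigmoid(V @ x1)                       # second hidden pass
        W += eps * np.outer(x0 - x1, y0)           # local update rules:
        V += eps * np.outer(y0 - y1, x1)           # difference of passes
    return V, W

x0 = np.array([0.9, 0.1, 0.9, 0.1])
V, W = train_recirculation(x0)
recon = sigmoid(W @ sigmoid(V @ x0))
err = float(np.sum((x0 - recon) ** 2))
```

With `lam` close to 1, the difference between the two passes approximates the gradient of the squared reconstruction error, which is why the procedure usually converges in the simple cases the paper analyses.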