
Showing papers by "James L. McClelland" published in 1993



01 Jan 1993
TL;DR: The authors trained an attractor network to pronounce virtually all of a large corpus of monosyllabic words, including both regular and exception words, and the network generalizes because the attractors it developed for regular words are componential, with substructure that reflects common sublexical correspondences between orthography and phonology.
Abstract: Networks that learn to make familiar activity patterns into stable attractors have proven useful in accounting for many aspects of normal and impaired cognition. However, their ability to generalize is questionable, particularly in quasiregular tasks that involve both regularities and exceptions, such as word reading. We trained an attractor network to pronounce virtually all of a large corpus of monosyllabic words, including both regular and exception words. When tested on the lists of pronounceable nonwords used in several empirical studies, its accuracy was closely comparable to that of human subjects. The network generalizes because the attractors it developed for regular words are componential: they have substructure that reflects common sublexical correspondences between orthography and phonology. This componentiality is facilitated by the use of orthographic and phonological representations that make explicit the structured relationship between written and spoken words. Furthermore, the componential attractors for regular words coexist with much less componential attractors for exception words. These results demonstrate that attractors can support effective generalization, challenging “dual-route” assumptions that multiple, independent mechanisms are required for quasiregular tasks.
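The core mechanism here, familiar activity patterns becoming stable states that a corrupted input relaxes back into, can be illustrated with a minimal Hopfield-style sketch. This is an illustration only, not the paper's model, which was a recurrent network trained on structured orthographic and phonological representations; the patterns and sizes below are arbitrary assumptions.

```python
import numpy as np

# Store two binary (+1/-1) patterns as attractors with the Hebbian
# outer-product rule; the patterns here are arbitrary illustrations.
patterns = np.array([
    [ 1, -1,  1, -1,  1, -1],
    [ 1,  1, -1, -1,  1,  1],
])
n = patterns.shape[1]
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)                    # no self-connections

def settle(state, max_steps=10):
    """Synchronously update units until the network reaches a fixed point."""
    for _ in range(max_steps):
        h = W @ state
        new = np.sign(h)
        new[h == 0] = state[h == 0]       # break ties by keeping the old value
        if np.array_equal(new, state):
            break                         # stable state: an attractor
        state = new
    return state

# A corrupted copy of the first pattern is pulled back to the stored attractor.
noisy = np.array([1, -1, 1, -1, -1, -1], dtype=float)
print(settle(noisy))                      # -> [ 1. -1.  1. -1.  1. -1.]
```

The contrast with the paper's result is that these Hebbian attractors are holistic, one basin per stored pattern, whereas the trained network's attractors for regular words have componential substructure that lets novel nonwords settle to correct pronunciations.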

100 citations


Journal Article (DOI)
TL;DR: Simulations show that symmetric diffusion networks can be trained with the contrastive Hebbian learning (CHL) rule to approximate discrete and continuous probability distributions of various types.
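In CHL the network settles twice, a "minus" phase with only the inputs clamped and a "plus" phase with the desired outputs clamped as well, and each weight changes in proportion to the difference of unit co-activities between the two phases, Δw_ij ∝ s_i⁺s_j⁺ − s_i⁻s_j⁻. Below is a deterministic, mean-field sketch of that rule; the paper's symmetric diffusion networks are stochastic continuous-time systems, and the unit counts, task, and learning rate here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Tiny symmetric network: units 0-1 are inputs, 2 is the output, 3-4 are hidden.
N, IN, OUT = 5, [0, 1], [2]
W = rng.normal(0.0, 0.1, (N, N))
W = (W + W.T) / 2.0                          # CHL requires symmetric weights
np.fill_diagonal(W, 0.0)

def settle(clamped, values, steps=60):
    """Relax activities to a fixed point with the given units held clamped."""
    s = np.full(N, 0.5)
    for _ in range(steps):
        s = 0.5 * s + 0.5 * sigmoid(W @ s)   # damped updates for stability
        s[clamped] = values                  # re-impose the clamps each sweep
    return s

def chl_update(x, y, lr=0.5):
    """dW = lr * (outer(s+, s+) - outer(s-, s-)), the CHL rule."""
    s_minus = settle(IN, x)                      # minus phase: inputs clamped
    s_plus = settle(IN + OUT, np.append(x, y))   # plus phase: inputs + target
    dW = lr * (np.outer(s_plus, s_plus) - np.outer(s_minus, s_minus))
    np.fill_diagonal(dW, 0.0)                    # keep self-connections at zero
    return dW

# Toy task (an assumption for illustration): learn XOR of the two inputs.
data = [([0., 0.], 0.), ([0., 1.], 1.), ([1., 0.], 1.), ([1., 1.], 0.)]
for epoch in range(3000):
    for x, y in data:
        W += chl_update(np.array(x), y)

for x, y in data:
    out = settle(IN, np.array(x))[OUT[0]]
    print(x, y, round(out, 2))   # settled outputs should approach the targets
```

Because the update is a difference of symmetric outer products, the weight matrix stays symmetric throughout training, which is the property the settling dynamics rely on.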

57 citations


Journal Article (DOI)
TL;DR: Computational models support novel explanations of important aspects of perception, memory, language, thought, and cognitive development, and allow cognitive processes to be linked with the underlying physiological mechanisms.

18 citations