Journal ArticleDOI

Lesioning an attractor network: investigations of acquired dyslexia

Geoffrey E. Hinton, +1 more
- 01 Jan 1991 - Vol. 98, Iss: 1, pp 74-95
TLDR
In this paper, a recurrent connectionist network was trained to output semantic feature vectors when presented with letter strings, and when damaged, the network exhibited characteristics resembling several of the phenomena found in deep dyslexia and semantic-access dyslexia.
Abstract
A recurrent connectionist network was trained to output semantic feature vectors when presented with letter strings. When damaged, the network exhibited characteristics that resembled several of the phenomena found in deep dyslexia and semantic-access dyslexia. Damaged networks sometimes settled to the semantic vectors for semantically similar but visually dissimilar words. With severe damage, a forced-choice decision between categories was possible even when the choice of the particular semantic vector within the category was not possible. The damaged networks typically exhibited many mixed visual and semantic errors in which the output corresponded to a word that was both visually and semantically similar. Surprisingly, damage near the output sometimes caused pure visual errors. Indeed, the characteristic error pattern of deep dyslexia occurred with damage to virtually any part of the network.
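
The abstract describes a network that settles to stored semantic vectors and degrades in characteristic ways when connections are removed. The sketch below is a minimal illustration of those two ideas only, using a Hopfield-style network with Hebbian storage rather than the backpropagation-trained recurrent network the paper actually studies; the sizes, lesion fractions, and probe noise are illustrative assumptions.

```python
# Minimal sketch, assuming a Hopfield-style attractor network with Hebbian storage.
# This is NOT the paper's architecture (a backprop-trained recurrent net mapping
# letter strings to semantics); it only illustrates settling and lesioning.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_words = 64, 5

# Random +/-1 "semantic feature vectors", one attractor per word.
semantics = rng.choice([-1.0, 1.0], size=(n_words, n_features))

# Hebbian storage: symmetric weights, zero self-connections.
W = semantics.T @ semantics / n_features
np.fill_diagonal(W, 0.0)

def settle(state, weights, steps=30):
    """Iterate deterministic updates so the network settles toward an attractor."""
    for _ in range(steps):
        state = np.sign(weights @ state)
        state[state == 0] = 1.0
    return state

def lesion(weights, fraction, rng):
    """'Damage' the network by zeroing a random fraction of its connections."""
    return weights * (rng.random(weights.shape) >= fraction)

# Probe with a degraded version of word 0 (a noisy "visual" input).
probe = semantics[0].copy()
probe[rng.choice(n_features, size=8, replace=False)] *= -1

for frac in (0.0, 0.3, 0.6):
    out = settle(probe.copy(), lesion(W, frac, rng))
    overlaps = semantics @ out / n_features      # similarity to each stored word
    print(f"lesion {frac:.1f}: nearest word {overlaps.argmax()}, overlap {overlaps.max():+.2f}")
```

As the lesion fraction grows, the settled state can drift away from the probed word's semantic vector and may land nearer to a different stored word, loosely analogous to the error patterns described in the abstract.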

Citations
Journal ArticleDOI

A theory of lexical access in speech production.

TL;DR: The model can handle some of the main observations in the domain of speech errors (the major empirical domain for most other theories of lexical access), and the theory opens new ways of approaching the cerebral organization of speech production by way of high-temporal-resolution imaging.
Journal ArticleDOI

Understanding normal and impaired word reading: computational principles in quasi-regular domains.

TL;DR: Analyses of the ability of networks to reproduce data on acquired surface dyslexia support a view of the reading system that incorporates a graded division of labor between semantic and phonological processes and that contrasts in important ways with the standard dual-route account.
Proceedings ArticleDOI

A theory of lexical access in speech production

TL;DR: The authors focus on experimental reaction-time evidence in support of the theory and show that the speaker monitors the output and, if necessary, self-corrects.
Journal ArticleDOI

Face recognition by independent component analysis

TL;DR: A version of independent component analysis (ICA), a generalization of PCA, derived from the principle of optimal information transfer through sigmoidal neurons, yielded face representations that were superior to PCA-based representations for recognizing faces across days and changes in expression.
Journal ArticleDOI

The time course of perceptual choice: The leaky, competing accumulator model.

TL;DR: The time course of perceptual choice is modeled as gradual, leaky, stochastic, and competitive information accumulation in nonlinear decision units; the model captures choice behavior regardless of the number of alternatives and explains a complex pattern of visual and contextual priming in visual word identification.
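
For readers unfamiliar with the model this TL;DR names, the sketch below illustrates leaky, competing, stochastic accumulation to a decision threshold; the parameter values are illustrative assumptions, not the paper's fitted values.

```python
# Minimal sketch of leaky, competing, stochastic accumulation to a decision threshold.
# All parameter values (inputs, leak, inhibition, noise, threshold) are illustrative
# assumptions, not the paper's fitted values.
import numpy as np

def lca_trial(inputs, leak=0.2, inhibition=0.2, noise=0.1,
              dt=0.01, threshold=1.0, max_steps=5000, rng=None):
    rng = rng or np.random.default_rng()
    x = np.zeros(len(inputs))                      # one accumulator per alternative
    for step in range(1, max_steps + 1):
        drift = inputs - leak * x - inhibition * (x.sum() - x)   # lateral inhibition
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal(len(x))
        x = np.maximum(x, 0.0)                     # activations stay nonnegative
        if x.max() >= threshold:
            break
    return int(x.argmax()), step * dt              # choice and decision time

choice, rt = lca_trial(np.array([0.6, 0.4, 0.4]))  # works for any number of alternatives
print(f"chose alternative {choice} after {rt:.2f} time units")
```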
References
Journal ArticleDOI

Learning representations by back-propagating errors

TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector; as a result, the network's internal 'hidden' units come to represent important features of the task domain.
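
The sketch below is a minimal illustration of the weight-adjustment rule this TL;DR summarizes, applied to a tiny two-layer network; the task (XOR), layer sizes, and learning rate are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch, assuming a tiny two-layer sigmoid network trained on XOR; the task,
# layer sizes, and learning rate are illustrative assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)          # desired output vectors

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                              # hidden units
    out = sigmoid(h @ W2 + b2)                            # actual output vector
    err = out - Y                                         # difference from desired output
    # Back-propagate the error: gradients of 0.5 * sum(err**2) w.r.t. each weight.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out;  b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h;    b1 -= d_h.sum(axis=0)

print(np.round(out, 2))   # outputs move toward the desired targets as the error shrinks
```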
Journal ArticleDOI

A spreading-activation theory of semantic processing

TL;DR: The present paper shows how the extended theory can account for the results of several production experiments by Loftus, for Juola and Atkinson's multiple-category experiment, for Conrad's sentence-verification experiments, and for several categorization experiments on the effects of semantic relatedness and typicality by Holyoak and Glass, Rips, Shoben, and Smith, and Rosch.
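
The theory named in this entry's title posits activation spreading from a source concept through a semantic network and weakening with distance; the sketch below illustrates only that basic mechanism, with a toy network, decay value, and step count that are assumptions rather than the theory's parameters.

```python
# Minimal sketch of activation spreading through a toy semantic network; the node
# names, decay value, and number of steps are illustrative assumptions, not the
# theory's parameters.
network = {
    "canary": ["bird", "yellow"],
    "robin": ["bird", "red"],
    "bird": ["canary", "robin", "wings", "animal"],
    "animal": ["bird", "dog"],
    "wings": ["bird"], "yellow": ["canary"], "red": ["robin"], "dog": ["animal"],
}

def spread(source, decay=0.5, steps=3):
    """Activation starts at the source and weakens as it spreads to more distant nodes."""
    activation, frontier = {source: 1.0}, {source: 1.0}
    for _ in range(steps):
        nxt = {}
        for node, act in frontier.items():
            for neighbour in network[node]:
                passed = act * decay
                if passed > activation.get(neighbour, 0.0):
                    activation[neighbour] = passed
                    nxt[neighbour] = passed
        frontier = nxt
    return activation

# Relatedness of a pair of concepts as the activation one sends to the other.
print(spread("canary").get("robin", 0.0))   # canary -> bird -> robin: 0.25
print(spread("canary").get("dog", 0.0))     # canary -> bird -> animal -> dog: 0.125
```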
Book ChapterDOI

A framework for representing knowledge

Marvin Minsky
TL;DR: The enormous volume of background common-sense knowledge required to understand even very simple natural-language texts is discussed, and networks of frames are suggested as a reasonable approach to representing such knowledge.

A framework for representing knowledge

Marvin Minsky
TL;DR: The author describes frame systems as a formalism for representing knowledge and then concentrates on what the content of knowledge should be in specific domains, arguing that vision should be viewed symbolically, with an emphasis on forming expectations and then using details to fill in slots in those expectations.