
Showing papers on "Artificial neural network published in 1981"


01 Apr 1981
TL;DR: It is shown that components designed with attention to the temporal aspects of reinforcement learning can acquire knowledge about feedback pathways in which they are embedded and can use this knowledge to seek their preferred inputs, thus combining pattern recognition, search, and control functions.
Abstract: This report assesses the promise of a network approach to adaptive problem solving in which the network components themselves possess considerable adaptive power. We show that components designed with attention to the temporal aspects of reinforcement learning can acquire knowledge about feedback pathways in which they are embedded and can use this knowledge to seek their preferred inputs, thus combining pattern recognition, search, and control functions. A review of adaptive network research shows that networks of components having these capabilities have not been studied previously. We demonstrate that simple networks of these elements can solve types of problems that are beyond the capabilities of networks studied in the past. An associative memory is presented that retains the generalization capabilities and noise resistance of associative memories previously studied but does not require a 'teacher' to provide the desired associations. It conducts active, closed-loop searches for the most rewarding associations. We provide an example in which these searches are conducted through the system's external environment and an example in which they are conducted through an internal predictive model of that environment. The latter system is capable of a simple form of latent learning.
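The closed-loop search idea can be illustrated with a minimal sketch (an illustrative simplification, not the report's exact algorithm): a single "associative search element" that tries stochastic actions for each input pattern and strengthens input-action associations in proportion to a scalar reward, with no teacher supplying the desired output. The class name and parameters here are hypothetical.

```python
import random

class AssociativeSearchElement:
    """A unit that learns input-action associations from reward alone."""

    def __init__(self, n_inputs, lr=0.5, noise=0.5):
        self.w = [0.0] * n_inputs  # one weight per input line
        self.lr = lr               # learning rate
        self.noise = noise         # exploratory noise amplitude

    def act(self, x):
        # Weighted sum plus exploratory noise, thresholded to +/-1.
        s = sum(wi * xi for wi, xi in zip(self.w, x))
        s += random.gauss(0.0, self.noise)
        return 1 if s > 0 else -1

    def learn(self, x, action, reward):
        # Reinforcement: strengthen (or weaken) the association between
        # the active inputs and the action just taken, scaled by reward.
        for i, xi in enumerate(x):
            self.w[i] += self.lr * reward * action * xi

# Usage: the element discovers on its own that pattern A should map to +1
# and pattern B to -1, given only +/-1 reward feedback after each trial.
random.seed(0)
patterns = {(1, 0): 1, (0, 1): -1}
elem = AssociativeSearchElement(2)
for _ in range(200):
    x, target = random.choice(list(patterns.items()))
    a = elem.act(x)
    reward = 1.0 if a == target else -1.0
    elem.learn(x, a, reward)
```

After training, the weights encode the rewarded associations, so with the exploratory noise switched off the element reliably produces its preferred action for each pattern.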

40 citations


Journal ArticleDOI
TL;DR: It is demonstrated that a neural network can be described as an automaton and modifications suggested by neurophysiological data are incorporated to improve the speed of convergence.
Abstract: A formal automata-theoretical model for learning neural networks is given. The networks may grow while they learn. It is demonstrated that a neural network can be described as an automaton. Two extreme learning procedures are presented as boundaries for potential learning strategies. An example of a fairly simple automata-theoretical learning procedure is given and modifications suggested by neurophysiological data are incorporated to improve the speed of convergence.
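The network-as-automaton view can be sketched concretely (an illustrative example, not the paper's formal model): take the binary activation vector of a recurrent threshold network as the automaton's state, and one synchronous update as the transition function. Since the states are binary vectors, the state set is finite.

```python
def step(state, weights, thresholds):
    """Transition function: each neuron fires iff its weighted input
    from the previous state exceeds its threshold."""
    return tuple(
        1 if sum(w * s for w, s in zip(row, state)) > t else 0
        for row, t in zip(weights, thresholds)
    )

# Two neurons with self-excitation and mutual inhibition form a
# flip-flop: the states (1, 0) and (0, 1) are fixed points of the
# transition function, i.e. the automaton stores one bit.
W = [[1, -1],
     [-1, 1]]
T = [0, 0]
state_a = step((1, 0), W, T)
state_b = step((0, 1), W, T)
```

Here `state_a` and `state_b` equal the states they were computed from, showing the two stable configurations of the automaton.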

12 citations


01 Jan 1981
TL;DR: In this article, a neural network model for a mechanism of visual pattern recognition is proposed, which is self-organized by learning without a teacher and acquires an ability to recognize stimulus patterns based on the geometrical similarity (Gestalt) of their shapes, without being affected by their positions.
Abstract: A neural network model for a mechanism of visual pattern recognition is proposed in this paper. The network is self-organized by "learning without a teacher", and acquires an ability to recognize stimulus patterns based on the geometrical similarity (Gestalt) of their shapes, without being affected by their positions. This network is given the nickname "neocognitron". After completion of self-organization, the network has a structure similar to the hierarchy model of the visual nervous system proposed by Hubel and Wiesel. The network consists of an input layer (photoreceptor array) followed by a cascade connection of a number of modular structures, each of which is composed of two layers of cells connected in cascade. The first layer of each module consists of "S-cells", which show characteristics similar to simple cells or lower-order hypercomplex cells, and the second layer consists of "C-cells" similar to complex cells or higher-order hypercomplex cells. The afferent synapses to each S-cell have plasticity and are modifiable. The network has an ability of unsupervised learning: we do not need any "teacher" during the process of self-organization; it is only necessary to present a set of stimulus patterns repeatedly to the input layer of the network. The network has been simulated on a digital computer. After repetitive presentation of a set of stimulus patterns, each stimulus pattern comes to elicit an output from only one of the C-cells of the last layer, and conversely, this C-cell becomes selectively responsive only to that stimulus pattern. That is, none of the C-cells of the last layer responds to more than one stimulus pattern. The response of the C-cells of the last layer is not affected at all by the pattern's position, nor by a small change in the shape or size of the stimulus pattern.
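The S-cell/C-cell division of labour can be sketched in miniature (a drastic 1-D simplification, not Fukushima's full model): S-cells detect a feature at each position, and C-cells pool (OR) over neighbouring S-cells, so the module's output tolerates small shifts of the stimulus. The function names and pooling width below are hypothetical.

```python
def s_layer(signal, feature):
    """S-cells: fire wherever the input exactly matches the feature
    template (a stand-in for the learned afferent synapses)."""
    k = len(feature)
    return [1 if signal[i:i + k] == feature else 0
            for i in range(len(signal) - k + 1)]

def c_layer(s_responses, pool=2):
    """C-cells: take the max (logical OR) over a small window of
    S-cell outputs and subsample, yielding position tolerance."""
    return [max(s_responses[i:i + pool])
            for i in range(0, len(s_responses), pool)]

feature = [1, 0, 1]
a = c_layer(s_layer([1, 0, 1, 0, 0, 0], feature))
b = c_layer(s_layer([0, 1, 0, 1, 0, 0], feature))  # same feature, shifted
```

Although the stimulus is shifted by one position between the two inputs, `a` and `b` are identical: the C-layer has absorbed the shift, which is the mechanism the full cascade repeats at ever larger scales.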

8 citations


Book ChapterDOI
01 Jan 1981
TL;DR: The emulator's objective is to demonstrate to students in the biological sciences the properties associated with neural networks in a way that is easy to understand and manipulate.

Abstract: The emulator's objective is to demonstrate to students in the biological sciences the effects associated with neural networks (such as stimulus convergence and divergence, lateral or recurrent inhibition, and memory circuits) in a way that is easy to understand and manipulate.