
Showing papers on "Artificial neural network published in 1980"


Journal ArticleDOI
TL;DR: A neural network model for a mechanism of visual pattern recognition that is self-organized by “learning without a teacher” and acquires an ability to recognize stimulus patterns based on the geometrical similarity of their shapes, without being affected by their positions.
Abstract: A neural network model for a mechanism of visual pattern recognition is proposed in this paper. The network is self-organized by “learning without a teacher” and acquires an ability to recognize stimulus patterns based on the geometrical similarity (Gestalt) of their shapes, without being affected by their positions. This network is given the nickname “neocognitron”. After completion of self-organization, the network has a structure similar to the hierarchy model of the visual nervous system proposed by Hubel and Wiesel. The network consists of an input layer (photoreceptor array) followed by a cascade connection of a number of modular structures, each of which is composed of two layers of cells connected in cascade. The first layer of each module consists of “S-cells”, which show characteristics similar to simple cells or lower-order hypercomplex cells, and the second layer consists of “C-cells”, similar to complex cells or higher-order hypercomplex cells. The afferent synapses to each S-cell have plasticity and are modifiable. The network is capable of unsupervised learning: no “teacher” is needed during the process of self-organization; it is only necessary to present a set of stimulus patterns repeatedly to the input layer of the network. The network has been simulated on a digital computer. After repetitive presentation of a set of stimulus patterns, each stimulus pattern comes to elicit an output from only one of the C-cells of the last layer, and conversely, this C-cell becomes selectively responsive only to that stimulus pattern. That is, none of the C-cells of the last layer responds to more than one stimulus pattern. The response of the C-cells of the last layer is not affected by the pattern's position at all, nor is it affected by a small change in the shape or size of the stimulus pattern.
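The S-cell/C-cell cascade described above can be illustrated with a minimal sketch: S-cells detect a feature template at every position, and C-cells pool S-cell responses over small neighbourhoods, which is what makes the final response tolerant to shifts in position. This is an illustrative simplification, not Fukushima's actual model; the function names, binary threshold rule, and max-pooling here are assumptions.

```python
import numpy as np

def s_layer(image, templates, threshold=0.5):
    """S-cells: match each feature template at every position.
    (Simplified: real S-cells use graded, trainable afferent synapses.)"""
    h, w = image.shape
    th, tw = templates.shape[1:]
    out = np.zeros((len(templates), h - th + 1, w - tw + 1))
    for k, t in enumerate(templates):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                patch = image[i:i + th, j:j + tw]
                out[k, i, j] = float((patch * t).sum() >= threshold * t.sum())
    return out

def c_layer(s_maps, pool=2):
    """C-cells: take the max (logical OR) over small neighbourhoods of
    S-cells, giving tolerance to small shifts of the stimulus."""
    k, h, w = s_maps.shape
    out = np.zeros((k, h // pool, w // pool))
    for i in range(out.shape[1]):
        for j in range(out.shape[2]):
            out[:, i, j] = s_maps[:, i * pool:(i + 1) * pool,
                                     j * pool:(j + 1) * pool].max(axis=(1, 2))
    return out
```

Shifting a vertical-bar stimulus by one pixel changes the S-layer activity but can leave the C-layer output unchanged, mirroring the position tolerance reported in the abstract.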

4,713 citations


Book
01 Jan 1980

172 citations


Journal ArticleDOI
TL;DR: This paper considers r-out-of-n:G and dynamic redundant systems with non-i.i.d. units and extends two previously published algorithms for calculating fail-safe and fail-danger probabilities of such systems.
Abstract: This paper considers r-out-of-n:G and dynamic redundant systems with non-i.i.d. units, and extends two previously published algorithms for calculating fail-safe and fail-danger probabilities of such systems.
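The paper's extended algorithms for fail-safe and fail-danger probabilities are not reproduced here, but the underlying quantity, the success probability of an r-out-of-n:G system whose units are independent but not identically distributed, can be sketched with a standard dynamic-programming recursion (the function name and interface are assumptions):

```python
def r_out_of_n_reliability(r, p):
    """P(system good) for an r-out-of-n:G system: the system works when
    at least r of the n independent units work; p[i] is the success
    probability of unit i (units need not be identical)."""
    n = len(p)
    # f[k] = P(exactly k of the units processed so far are working)
    f = [1.0] + [0.0] * n
    for pi in p:
        for k in range(n, 0, -1):       # update in place, high k first
            f[k] = f[k] * (1.0 - pi) + f[k - 1] * pi
        f[0] *= (1.0 - pi)
    return sum(f[r:])
```

For units with success probabilities 0.9, 0.8 and 0.7, a 2-out-of-3:G system has reliability 0.902.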

9 citations


Journal ArticleDOI
TL;DR: This work explicitly introduces assemblies of neurons within the network, and obtains fully interpretable, physiological solutions (in the large fluctuation limit) which lead to readily testable predictions.
Abstract: Previously, Little and Shaw developed a model of memory in neural networks using the Hebb learning hypothesis and explicitly incorporating the known statistical fluctuations at the chemical synapses. They solved exactly the large fluctuation limit of the model and were able to examine the capacity for memory storage. However, the solutions were not physiologically interpretable. We now explicitly introduce assemblies of neurons within the network, and obtain fully interpretable, physiological solutions (again, in the large fluctuation limit) which lead to readily testable predictions.

7 citations


Journal ArticleDOI
TL;DR: In this paper, the boundary of a neural network is considered; the Hartline–Ratliff equations are used as a model system, and solutions are obtained by means of the Wiener–Hopf technique.
Abstract: The boundary of a neural network is considered, and the Hartline–Ratliff equations are used as a model system. Solutions are obtained by means of the Wiener–Hopf technique. Specific examples are calculated and comparison with experiment is made.
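The Hartline–Ratliff model itself is easy to state: each unit's steady-state response equals its excitation minus a weighted sum of its neighbours' supra-threshold responses. The paper treats the semi-infinite (boundary) case analytically via the Wiener–Hopf technique; a simple numerical fixed-point iteration (an assumption here, not the paper's method) shows the characteristic boundary enhancement:

```python
import numpy as np

def hartline_ratliff(e, K, t, iters=200):
    """Fixed-point iteration for r_i = e_i - sum_j K[i,j] * max(r_j - t_j, 0):
    response = excitation minus inhibition from supra-threshold neighbours."""
    r = e.copy()
    for _ in range(iters):
        r = e - K @ np.maximum(r - t, 0.0)
    return r

# Uniform excitation, nearest-neighbour inhibition, zero thresholds.
n = 5
e = np.full(n, 10.0)
K = np.zeros((n, n))
for i in range(n - 1):
    K[i, i + 1] = K[i + 1, i] = 0.1
r = hartline_ratliff(e, K, np.zeros(n))
```

Units at the boundary receive inhibition from only one neighbour, so their steady-state response exceeds the interior response, which is the edge effect the paper studies.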

7 citations



Journal ArticleDOI
TL;DR: A neural network model is proposed for understanding the receptive field properties of the complex cell, and is proved to be functionally identical with Hubel and Wiesel's hierarchy model, though the two models are structurally quite different.
Abstract: A neural network model is proposed for the understanding of the receptive field properties of the complex cell. The model is based on recent neurophysiological findings on the visual cortical network. The model is proved to be functionally identical with Hubel and Wiesel's hierarchy model, though the two models are structurally quite different.

4 citations


Journal ArticleDOI
TL;DR: Starting from the properties of a neural network with backward lateral inhibitions, a new algorithm suitable for non-linear image processing is defined, which yields, after an initial smoothing effect, either a sharpening effect or a spatial sampling adapted to the singularities of the input pattern.
Abstract: Starting from the properties of a neural network with backward lateral inhibitions, we define a new algorithm suitable for non-linear image processing. From a convenient choice of the network parameters we obtain, after an initial smoothing effect, either a sharpening effect, or a spatial sampling adapted to the singularities of the input pattern.
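The smoothing-versus-sharpening behaviour can be illustrated with a one-dimensional sketch in which each unit combines its own input with its two neighbours; the sign of the lateral weight decides the regime. This is an illustrative feedforward simplification, not the paper's recurrent backward-inhibition algorithm:

```python
import numpy as np

def lateral_filter(x, k):
    """Centre-surround combination: y_i = (1 + 2k) x_i - k (x_{i-1} + x_{i+1}).
    k > 0 models lateral inhibition (sharpening); k < 0 models lateral
    excitation (smoothing). Boundary samples are passed through unchanged."""
    y = x.astype(float).copy()
    y[1:-1] = (1 + 2 * k) * x[1:-1] - k * (x[:-2] + x[2:])
    return y
```

Applied to a step edge, k = 0.3 produces the over/undershoot typical of edge sharpening, while k = -0.25 reduces the filter to a local average.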

4 citations


Dissertation
01 Jan 1980

1 citation


Journal ArticleDOI
TL;DR: The characteristics of the proposed hardware neuron model suggest that it can be used as a functional element in the pre-processing network of an information processing system for spatial patterns.
Abstract: In the nervous system there are two kinds of information processing: intra-cellular analog operation and digital transmission between cells. These properties are considered indispensable in the construction of a large-scale parallel information processing system that has reasonable redundancy and incorporates a number of spreading spatial and temporal patterns. As a simplified model having these two processing capabilities, we propose a hardware neuron model using a voltage-controlled oscillator IC. The model's characteristics are easily controlled, and a number of such models can be supplied as components of a hardware neural network. The fundamental characteristics of neural operation between multichannel asynchronous inputs are investigated with the model. It is shown that the mean impulse frequencies of the asynchronous inputs are linearly added or subtracted by the model, despite the non-linear threshold property of the output pulse generation. The characteristics of a backward lateral inhibition connection are also investigated for input pulse sequences. It has an input excitation frequency threshold and shows variable gain or hysteresis characteristics, depending on whether the weighting coefficients of the inhibition inputs are large or small. The model can be connected directly to a microprocessor and can process spatial analog information without an A-D converter. These characteristics suggest that the model can be used as a functional element in the pre-processing network of an information processing system for spatial patterns.
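The linear addition of mean impulse frequencies despite the output threshold can be reproduced with a minimal pulse-frequency sketch. This is a generic integrate-and-fire abstraction, not the paper's VCO circuit; the function name and parameters are assumptions.

```python
import random

def mean_output_rate(input_rates, weights, threshold=1.0, steps=20000, seed=1):
    """Asynchronous input pulse trains (Bernoulli per time step) are
    weighted and integrated; an output pulse fires on each threshold
    crossing. The mean output rate comes out linear in the mean input
    rates even though firing itself is a threshold non-linearity."""
    rng = random.Random(seed)
    v, spikes = 0.0, 0
    for _ in range(steps):
        for rate, w in zip(input_rates, weights):
            if rng.random() < rate:
                v += w  # excitatory (w > 0) or inhibitory (w < 0) pulse
        if v >= threshold:
            v -= threshold
            spikes += 1
    return spikes / steps
```

Two inputs at mean rates 0.3 and 0.2, each with weight 0.5, give an output rate close to (0.3 + 0.2) * 0.5 = 0.25, i.e. the mean frequencies add linearly.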

1 citation


Journal ArticleDOI
TL;DR: This paper presents results which resolve a potential conflict in choosing the number of inputs per network element between increasing the network stability and reducing its capacity for discrimination.
Abstract: An important factor in the application of dynamic cellular logic networks to pattern processing tasks is the stability of the network, defined in terms of the number of modes of activity which it can exhibit. This paper presents results which resolve a potential conflict in choosing the number of inputs per network element between increasing the network stability and reducing its capacity for discrimination. Implications for pattern processing in practical pattern classifiers and in neural networks are discussed.
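The notion of "modes of activity" can be made concrete with a random Boolean network sketch in which each element computes a random Boolean function of k other elements, and every initial state is followed to its attractor cycle. This is a generic Kauffman-style abstraction; the construction here is an assumption, not the paper's network.

```python
import itertools
import random

def count_attractors(n=8, k=2, seed=0):
    """Random Boolean network: each of n elements reads k inputs through
    a random truth table. The 'modes of activity' are the distinct
    attractor cycles reached from all 2**n initial states."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]

    def step(state):
        return tuple(
            tables[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
            for i in range(n)
        )

    attractors = set()
    for init in itertools.product([0, 1], repeat=n):
        seen, s = set(), init
        while s not in seen:           # iterate until a state repeats
            seen.add(s)
            s = step(s)
        cycle = [s]                    # s lies on the attractor cycle
        t = step(s)
        while t != s:
            cycle.append(t)
            t = step(t)
        # canonicalise the cycle up to rotation so each attractor counts once
        attractors.add(min(tuple(cycle[i:] + cycle[:i]) for i in range(len(cycle))))
    return len(attractors)
```

Sweeping k while counting attractors is one way to see the trade-off the paper describes: stability (few modes) versus discriminative capacity (many modes).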

Journal ArticleDOI
TL;DR: The most disturbing feature of the Selverston paper is its pessimistic tone and its final leap into nihilism; it is especially distressing when an experimentalist such as Selverston embraces nihilism.
Abstract: and ion channels must be functionally described in order to obtain a full understanding of a CPG, we will not have a detailed, mechanistic explanation for some considerable length of time. A complete compilation of the detailed molecular biophysics of neurons will long remain the "quark" of cellular and integrative neurobiology. The most disturbing feature of the Selverston paper is its pessimistic tone and its final leap into nihilism. Science has been marked by the episodic appearance of nihilists. Often these negative spirits have had a positive influence upon empiricism by raising doubts regarding the state of knowledge at the time, and the empiricists would respond with better experiments, more rigorous controls, and more conclusive results. Neurobiology has probably had more nihilism than most other sciences simply because of its subject, the nervous system, and its long-term goal, an understanding of brain function. As early as 1933, Niels Bohr raised the fundamental philosophical question of whether any brain was capable of understanding those connections which were responsible for its functioning, including its ability to understand. Gunther Stent (1969) examined a series of historic trends in both the arts and the sciences and predicted an end to progress as we know it (implying, of course, an end to the systematic investigation of neurobiological problems). Jerome Lettvin has repeatedly argued in public forum that the nervous system is so complicatedly and redundantly wired that no amount of experimentation will produce an understanding of its functions, and we should, therefore, simply not try. I will refrain from the temptation of waving the banner of neurobiology by listing some of the (ever-increasing) examples of progress. Putting philosophical and theoretical considerations aside, it is patently clear that we can, and have, learned a great deal about the nervous system in general and about CPGs in particular.
While there may well be some limit as to what we can understand about the functions of the nervous system set by "uncertainty principles" or by "social evolution," it is nevertheless especially distressing when an experimentalist such as Selverston embraces nihilism. One can but wonder what motivated Selverston to write the target article: either he is playing devil's advocate and attempting to evoke some patriotic response from the neurobiological community, or he believes what he has written. If the former is true, the article is unnecessary; if the latter is true, one wonders why he persists in his chosen (futile) …



Journal ArticleDOI
TL;DR: Spontaneous firing in neural nets can be processed and interpreted as patterns of behavior if the spontaneous firing rates in a neural net are sufficiently high.
Abstract: Spontaneous firing in neural nets can be processed and interpreted as patterns of behavior. Such patterns can be learned through learning mechanisms present in synapses. This implies that behavioral patterns can form without a stimulus if the spontaneous firing rates in a neural net are sufficiently high.
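The claim that sufficiently high spontaneous rates let synapses organise without any stimulus can be sketched with a Hebbian toy model (the update rule and all parameters here are assumptions for illustration):

```python
import numpy as np

def spontaneous_hebb(n=20, rate=0.5, steps=500, lr=0.01, seed=0):
    """Each neuron fires spontaneously with probability `rate` per step
    (no external stimulus); the Hebb rule strengthens the synapse between
    every co-active pair. Mean synaptic strength therefore grows roughly
    with the square of the spontaneous rate."""
    rng = np.random.default_rng(seed)
    w = np.zeros((n, n))
    for _ in range(steps):
        s = (rng.random(n) < rate).astype(float)  # spontaneous spikes
        w += lr * np.outer(s, s)                  # Hebbian co-activity
    np.fill_diagonal(w, 0.0)                      # no self-synapses
    return w / steps
```

With no stimulus at all, the mean learned weight scales roughly as lr * rate**2, so a high spontaneous rate (0.5) builds connections far faster than a low one (0.1), consistent with the condition stated in the summary.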