
Showing papers on "Artificial neural network published in 1982"


Journal ArticleDOI
TL;DR: A model of a system having a large number of simple equivalent components, based on aspects of neurobiology but readily adapted to integrated circuits, produces a content-addressable memory which correctly yields an entire memory from any subpart of sufficient size.
Abstract: Computational properties of use to biological organisms or to the construction of computers can emerge as collective properties of systems having a large number of simple equivalent components (or neurons). The physical meaning of content-addressable memory is described by an appropriate phase space flow of the state of a system. A model of such a system is given, based on aspects of neurobiology but readily adapted to integrated circuits. The collective properties of this model produce a content-addressable memory which correctly yields an entire memory from any subpart of sufficient size. The algorithm for the time evolution of the state of the system is based on asynchronous parallel processing. Additional emergent collective properties include some capacity for generalization, familiarity recognition, categorization, error correction, and time sequence retention. The collective properties are only weakly sensitive to details of the modeling or the failure of individual devices.
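The dynamics described in this abstract are easy to reproduce in simulation. Below is a minimal sketch of such a collective-memory network (what is now called a Hopfield network), written in the common ±1 formulation: binary units with symmetric Hebbian weights and asynchronous updates, recovering a stored memory from a corrupted subpart. All sizes and names here are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                        # number of simple two-state "neurons"
patterns = rng.choice([-1, 1], size=(3, N))    # stored memories

# Hebbian storage: sum of outer products of the stored patterns, zero diagonal.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)

def recall(state, steps=10_000):
    """Asynchronous updates: pick a random unit, align it with its local field."""
    state = state.copy()
    for _ in range(steps):
        i = rng.integers(N)
        state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Content-addressable recall: start from a "subpart" (here, 30% of one memory corrupted).
probe = patterns[0].copy()
flip = rng.choice(N, size=30, replace=False)
probe[flip] *= -1

retrieved = recall(probe)
print("overlap with stored memory:", retrieved @ patterns[0] / N)  # ~1.0 on success
```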

16,652 citations


Book ChapterDOI
01 Jan 1982
TL;DR: A neural network model, called a “neocognitron”, is proposed as a mechanism of visual pattern recognition; computer simulation shows that it has characteristics similar to those of the visual systems of vertebrates.
Abstract: A neural network model, called a “neocognitron”, is proposed for a mechanism of visual pattern recognition. It is demonstrated by computer simulation that the neocognitron has characteristics similar to those of visual systems of vertebrates.
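For readers unfamiliar with the architecture, the sketch below illustrates the neocognitron's widely described core idea: alternating feature-extracting (S) and shift-tolerating (C) stages. The filter values, sizes, and thresholds are invented for illustration, and the C-stage is approximated here with max-pooling for simplicity; Fukushima's actual model also includes inhibitory cells and unsupervised learning of the S-cell templates.

```python
import numpy as np

def s_layer(image, templates, threshold=0.5):
    """Feature-extracting stage: one rectified response map per template."""
    h, w = image.shape
    k = templates.shape[1]
    maps = np.zeros((len(templates), h - k + 1, w - k + 1))
    for t, tpl in enumerate(templates):
        for i in range(h - k + 1):
            for j in range(w - k + 1):
                r = np.sum(image[i:i+k, j:j+k] * tpl)
                maps[t, i, j] = max(r - threshold, 0.0)   # rectification
    return maps

def c_layer(maps, pool=2):
    """Tolerance stage: pool each map so responses survive small shifts."""
    t, h, w = maps.shape
    out = np.zeros((t, h // pool, w // pool))
    for i in range(h // pool):
        for j in range(w // pool):
            out[:, i, j] = maps[:, i*pool:(i+1)*pool, j*pool:(j+1)*pool].max(axis=(1, 2))
    return out

# Toy input: a diagonal stroke, matched by one hypothetical 3x3 edge template.
img = np.eye(8)
templates = np.array([np.eye(3)])
print(c_layer(s_layer(img, templates)).shape)   # (1, 3, 3) map of pooled responses
```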

969 citations


Book
01 Jan 1982
TL;DR: This book collects foundational papers on neural network theory, spanning a neural theory of punishment and avoidance, a neural model of attention, reinforcement, and discrimination learning, and adaptive pattern classification and universal recoding.
Abstract: Contents:
1. How Does a Brain Build a Cognitive Code?
2. Some Physiological and Biochemical Consequences of Psychological Postulates
3. Classical and Instrumental Learning by Neural Networks
4. Pattern Learning by Functional-Differential Neural Networks with Arbitrary Path Weights
5. A Neural Theory of Punishment and Avoidance. II: Quantitative Theory
6. A Neural Model of Attention, Reinforcement and Discrimination Learning
7. Neural Expectation: Cerebellar and Retinal Analogs of Cells Fired by Learnable or Unlearned Pattern Classes
8. Contour Enhancement, Short Term Memory, and Constancies in Reverberating Neural Networks
9. Biological Competition: Decision Rules, Pattern Formation, and Oscillations
10. Competition, Decision, and Consensus
11. Behavioral Contrast in Short Term Memory: Serial Binary Memory Models or Parallel Continuous Memory Models?
12. Adaptive Pattern Classification and Universal Recoding. I: Parallel Development and Coding of Neural Feature Detectors
13. A Theory of Human Memory: Self-Organization and Performance of Sensory-Motor Codes, Maps, and Plans
List of Publications

391 citations


Book
09 Aug 1982
TL;DR: This book introduces the matchbox algorithm, improves it with a look-ahead algorithm and an associative matrix memory, and interprets the result as an algorithm for survival and thus as a model of an animal.
Abstract: Contents, with Outline.

Part I
1. The Flow of Information. An introduction to the brain with emphasis on the transmission of information. Digressions 1 and 2 start from here.
2. Thinking as Seen from Within and from Without. Some problems in thinking about thinking are presented; the behavioristic approach to such problems is introduced: what in the observable behavior of somebody else makes us think that he is thinking? This leads to the Turing test for artificial intelligence.
3. How to Build Well-Behaving Machines. "Behavior" is understood as the total stimulus (or situation) → response mapping. For a finite number of different inputs, any such mapping can be constructed. This statement is demonstrated by (1) coding any finite set into finite 0,1-sequences; (2) showing that any mapping between finite sets of 0,1-sequences can be built from logical and-, or-, and not-gates; and (3) representing the and-, or-, and not-gates as special threshold neurons of the McCulloch and Pitts type (a minimal code sketch of these gates follows this outline). Digression 3 may be of some help here.
4. Organizations, Algorithms, and Flow Diagrams. Some general remarks on organizations and cooperativity; the matchbox algorithm is introduced.
5. The Improved Matchbox Algorithm. The matchbox algorithm is improved by the incorporation of the look-ahead algorithm (e.g., for chess-playing machines) and the associative matrix memory. Appendix 1 starts from here. Chapters 5 and 7 contain the basic constructions needed for the survival algorithm.
6. The Survival Algorithm as a Model of an Animal. The improved matchbox algorithm is interpreted as an algorithm for survival and thus as a model of an animal. If such an algorithm is implemented in terms of neuron-like elements, the result can be checked against experimental data from the neurosciences. Conversely, such data cannot really be understood without a theory (in line with a more general argument as, for example, in Kuhn 1962).
7. Specifying the Survival Algorithm. Some further specifications of the survival algorithm are given that are necessary in order to implement the algorithm in terms of neurons. A neural realization of the survival algorithm is finally discussed in connection with some basic data on the brain (from Chap. 1) and in order to stimulate interest in further data as supplied in the following chapters. Digression 4 may be entered from here.

Part II
8. The Anatomy of the Cortical Connectivity. Further data on the connectivity between neurons in the cerebral cortex are presented, leading to some speculations on the flow of neural activity in the cortex.
9. The Visual Input to the Cortex. The projection from the retina onto the visual cortex is outlined, to exemplify how sensory input information enters the cortex.
10. Changes in the Cortex with Learning. Various experiments correlate differences in the cortex with differences in the environments experienced by experimental animals, and possibly with "learning". Some of these experiments are discussed with the object of obtaining evidence for Hebb's synaptic rule. Digression 4 may be consulted here.
11. From Neural Dynamics to Cell Assemblies. Several papers on neural dynamics are discussed in order (1) to obtain a more detailed image of the flow of activity in the neural network of the brain (or the cortex), and (2) to get a better understanding of the learning and information-processing capabilities of such networks (especially in comparison with the requirements of the survival algorithm of Chaps. 6 and 7). The resulting image is fixed in the language of cell assemblies. Appendix 2 and Digression 4 start from here.
12. Introspection and the Rules of Threshold Control. The same language of cell assemblies is used to describe some introspections of the author in a more systematic way. This leads to a few strategies for controlling the thresholds of the neurons in a neural network that is used as an associative memory (for example by a survival robot). Appendices 3 and 4 and Digression 5 start from here.
13. Further Speculations. The ideas of cell assemblies and threshold control are carried further and in a more speculative way. Digression 6 starts from here. Chapters 12 and 13 (together with Digression 5) contain a speculative, algorithmic picture of the information processing in an animal's brain.
14. Men, Monkeys, and Machines. It is argued that this picture carries over to humans as well. The acquisition of language, in particular, is regarded as a phenomenon of cultural evolution.
15. Why All These Speculations? The whole book can be understood as an attempt to reduce human behavior to electrophysiological events in the brain and finally to physics, which, of course, does not preclude a heuristic use of teleological arguments (the final purpose being survival and proliferation). Some ethical and epistemological consequences of this attempt are briefly discussed.

References.
Digressions: 1 Electrical Signal Transmission in a Single Neuron; 2 Basic Information Theory; 3 Sets and Mappings; 4 Local Synaptic Rules; 5 Flow Diagram for a Survival Algorithm; 6 Suggestions for Further Reading.
Appendices: 1 On the Storage Capacity of Associative Memories; 2 Neural Modeling; 3 Cell Assemblies: the Basic Ideas; 4 Cell Assemblies and Graph Theory.
Author and Subject Index.
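As a concrete illustration of point (3) in the outline of Chapter 3 above, the sketch below realizes and-, or-, and not-gates as McCulloch-Pitts threshold neurons. The weights and thresholds are the standard textbook choices, not values taken from the book.

```python
def threshold_neuron(weights, theta):
    """Fires (returns 1) iff the weighted input sum reaches the threshold theta."""
    return lambda *inputs: 1 if sum(w * x for w, x in zip(weights, inputs)) >= theta else 0

AND = threshold_neuron([1, 1], theta=2)   # both inputs needed to reach threshold
OR  = threshold_neuron([1, 1], theta=1)   # either input suffices
NOT = threshold_neuron([-1],   theta=0)   # inhibitory weight inverts the input

# Any mapping between finite sets of 0,1-sequences can be composed from these, e.g. XOR:
XOR = lambda a, b: AND(OR(a, b), NOT(AND(a, b)))
print([XOR(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```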

319 citations


Book ChapterDOI
01 Jan 1982
TL;DR: This article reviews some of my main theoretical advances before 1973 in a self-contained and nontechnical exposition that describes some predictions which still need to be tested.
Abstract: This article reviews some of my main theoretical advances before 1973 in a self-contained and nontechnical exposition. Among other features, the article describes some predictions which still need to be tested.

147 citations


Book ChapterDOI
TL;DR: This paper proves the universal theorem on associative learning that culminates my 1967–1972 articles on this subject, which says that if my associative learning laws were invented at a prescribed time during the evolutionary process, then they could be used to guarantee unbiased associative learning in essentially any later evolutionary specialization.
Abstract: This paper proves the universal theorem on associative learning that culminates my 1967–1972 articles on this subject. The theorem is universal in the following sense. It says that if my associative learning laws were invented at a prescribed time during the evolutionary process, then they could be used to guarantee unbiased associative learning in essentially any later evolutionary specialization. That is, the laws are capable of learning arbitrary spatial patterns in arbitrarily many, simultaneously active sampling channels that are activated by arbitrary continuous data preprocessing in an essentially arbitrary anatomy. The learning of arbitrary space-time patterns is also guaranteed given modest requirements on the temporal regularity of stimulus sampling, as in avalanches and generalizations thereof.
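To make "unbiased associative learning in simultaneously active sampling channels" concrete, here is a small simulation in the spirit of the outstar laws this paper builds on: a weight changes only while its sampling channel is active, and then tracks the input pattern. The specific gated form dz_i/dt = s(t)·(x_i(t) − z_i) is the standard textbook outstar rule, used here as an assumption; the paper's theorem covers far more general laws, anatomies, and preprocessing.

```python
import numpy as np

dt = 0.01
pattern = np.array([0.2, 0.5, 0.3])    # an arbitrary spatial pattern (sums to 1)
z = np.zeros_like(pattern)             # learned weights of one sampling channel

for step in range(5000):
    sampling = 1.0 if (step // 500) % 2 == 0 else 0.0   # channel active half the time
    x = pattern if sampling else np.random.rand(3)      # off-phase input is unsampled noise
    z += dt * sampling * (x - z)       # learning is gated by the sampling signal

print(np.round(z, 3))   # ~ [0.2, 0.5, 0.3]: the pattern is learned without bias
```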

35 citations


Book ChapterDOI
01 Jan 1982
TL;DR: The aim of this paper is to clarify the organizing capability of a neural network in a general environment, together with the mechanism by which cyclic modes of activity arise in such a network.
Abstract: The aim of this paper is to clarify the organizing capability of a neural network in a general environment. As a first step, we discuss the characteristics of a neural network with uniform structure. The main problems of this paper are to analyze the dynamic behavior of this network and the mechanism by which its cyclic modes are generated.
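A cyclic mode of this kind can be observed directly in simulation. The sketch below iterates a small synchronous binary network from every initial state and records the period of the cycle each trajectory falls into; the weight matrix is an arbitrary uniform (circulant) example chosen for illustration, not the network studied in the paper.

```python
import numpy as np

N = 6
# Uniform ring coupling: each unit is excited by its right neighbor, inhibited by its left.
W = np.roll(np.eye(N), 1, axis=1) - np.roll(np.eye(N), -1, axis=1)

def step(state):
    """Synchronous update: every unit thresholds its weighted input at once."""
    return tuple(1 if v >= 0 else -1 for v in W @ np.array(state))

def cycle_length(state):
    """Follow the trajectory until a state repeats; return the period of that cycle."""
    seen, t = {}, 0
    while state not in seen:
        seen[state] = t
        state = step(state)
        t += 1
    return t - seen[state]

# Enumerate all 2^N initial states and collect the cycle periods that occur.
states = [tuple(1 if (i >> b) & 1 else -1 for b in range(N)) for i in range(2 ** N)]
print(sorted(set(cycle_length(s) for s in states)))
```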