
Showing papers on "Recurrent neural network published in 1986"


Journal ArticleDOI
TL;DR: The author finds that the asymmetry in the synaptic strengths may be crucial for the process of learning.
Abstract: Studies the influence of a strong asymmetry of the synaptic strengths on the behavior of a neural network which works as an associative memory. The author finds that the asymmetry in the synaptic strengths may be crucial for the process of learning.

164 citations
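The role of weight asymmetry described above can be illustrated with a toy experiment: a Hopfield-style associative memory whose Hebbian weight matrix is made asymmetric by zeroing one direction of some connections. This is a minimal sketch under illustrative assumptions (network size, dilution fraction, and update schedule are arbitrary choices, not the paper's specific model):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64

# Store one pattern with the symmetric Hebbian outer-product rule.
pattern = rng.choice([-1, 1], size=N)
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

# Make the synaptic strengths asymmetric: zero one direction of a
# random 30% of connections, so W[i, j] != W[j, i] for many pairs
# (a toy dilution scheme, not the paper's specific model).
W[rng.random(W.shape) < 0.3] = 0.0

# Asynchronous recall from a cue with 10 flipped bits.
state = pattern.copy()
state[:10] *= -1
for _ in range(5):
    for i in rng.permutation(N):
        state[i] = 1 if W[i] @ state >= 0 else -1

overlap = state @ pattern / N  # 1.0 means perfect recall
```

With a single stored pattern this mild asymmetry leaves recall intact; the interesting regimes studied in such papers involve many patterns, where asymmetry changes the dynamics more substantially.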


Proceedings ArticleDOI
13 Feb 1986
TL;DR: In this article, the authors describe models of associative pattern learning, adaptive pattern recognition, and parallel decision-making by neural networks and show that a small set of real-time non-linear neural equations within a larger set of specialized neural circuits can be used to study a wide variety of such problems.
Abstract: This article describes models of associative pattern learning, adaptive pattern recognition, and parallel decision-making by neural networks. It is shown that a small set of real-time non-linear neural equations within a larger set of specialized neural circuits can be used to study a wide variety of such problems. Models of energy minimization, cooperative-competitive decision making, competitive learning, adaptive resonance, interactive activation, and back propagation are discussed and compared.

28 citations


Book ChapterDOI
01 Jan 1986
TL;DR: There is now the expectation that the implementation of neural network models using VLSI technology may lead to significant computational hardware for a number of image and signal processing applications and for optimisation problems.
Abstract: Neural networks are massively parallel computational models which attempt to capture the “intelligent” processing faculties of the nervous system. They have been studied extensively for more than thirty years [1]. Apart from the longer term goal of understanding the nervous system, the current upsurge of interest in such models is driven by at least three factors. First, seminal papers by Hopfield [2] and by Hinton, Rumelhart, Sejnowski and collaborators [3] exposed many salient properties of the models and extended their richness and potential in a significant way. Second, the developments in the theory of spin-glasses [4] and the discovery of replica symmetry breaking [5] in the long-range Sherrington-Kirkpatrick model [6] have led to an understanding in some depth of the Hopfield model [7]. Finally, there is now the expectation that the implementation of neural network models using VLSI technology may lead to significant computational hardware for a number of image and signal processing applications and for optimisation problems.

19 citations


Journal ArticleDOI
TL;DR: A system for simulating neural networks has been written in the LISP dialect Scheme, using an object-oriented style of programming rather than the standard numerical techniques used in previous studies; this approach allows the construction of hierarchical networks with several interacting levels.
Abstract: A system for simulating neural networks has been written in the LISP dialect, Scheme, using an object-oriented style of programming, rather than the standard numerical techniques used in previous studies. Each node in the Scheme network represents either a neuron or a functional group of neurons, and can pass messages which trigger computations and actions in other nodes. The Scheme modeling approach overcomes two major problems inherent to the standard numerical approach. First, it provides a flexible environment for systematically studying the effects of perturbing a network's structure, response, or updating parameters. In fact, the Scheme system can recreate any previously studied neural network. Second, it allows the construction of hierarchical networks with several interacting levels. This system can handle hierarchical organization in a natural way, because a single node in a Scheme network can contain a model of an entire lower level of neural processing. The implementation of neural networks wi...

10 citations


Proceedings ArticleDOI
13 Feb 1986
TL;DR: A single homogeneous layer of neural network is reviewed, and a vector outer product model of neural network is fully explored and characterized as quasi-linear (QL).
Abstract: A single homogeneous layer of neural network is reviewed. For optical computing, a vector outer product model of neural network is fully explored and characterized as quasi-linear (QL). The relationships among the hetero-associative memory [AM], the ill-posed inverse association (solved by the annealing algorithm of the Boltzmann machine (BM)), and the symmetric interconnect [T] of Hopfield's model E(N) are found by applying Wiener's criterion to the output feature f and setting [EQUATION].

10 citations
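The vector outer product construction mentioned above is concrete enough to sketch: a hetero-associative memory stores pattern pairs (f_k, g_k) in a single matrix M = Σ_k g_k f_kᵀ, and recall is "quasi-linear" in the sense of a linear map followed by a hard threshold. The sizes and random ±1 patterns below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hetero-associative memory: P input/output pairs (f_k, g_k) stored in
# one matrix as a sum of vector outer products, M = sum_k g_k f_k^T.
n_in, n_out, P = 200, 100, 5               # illustrative sizes
F = rng.choice([-1, 1], size=(P, n_in))    # input features f_k
G = rng.choice([-1, 1], size=(P, n_out))   # associated outputs g_k
M = sum(np.outer(g, f) for f, g in zip(F, G))

# Recall is quasi-linear: a linear map M @ f followed by a hard
# threshold.  Cross-talk from the other stored pairs is the noise term.
recalled = np.sign(M @ F[0])
accuracy = (recalled == G[0]).mean()  # fraction of output bits recovered
```

With few stored pairs relative to the input dimension, the cross-talk term is small and recall is essentially exact; as P grows, the same construction degrades gracefully.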


Proceedings ArticleDOI
23 Mar 1986
TL;DR: A series of computer simulations performed on a 100-node Hopfield network examined the sources of confusion and led to a preprocessing approach which substantially reduces the confusion.
Abstract: The performance of an associative memory based on the Hopfield model of a neural network is data dependent. When programmed memories are too similar (a small Hamming distance between memories), the associative memory system is easily confused, settling either to incorrect or, in some cases, undefined states. This paper describes a series of computer simulations performed on a 100-node Hopfield network. The programs were written in the APL language, which is highly efficient for this type of system. The simulations examined the sources of confusion and led to a preprocessing approach which substantially reduces it. The simulations were also extended in the direction of coupling several small neural networks to form one integrated low-confusion associative memory. The subnetworks were coupled through a voting scheme in which each node of a subnetwork consulted the analogous node of the other subnetworks; the decision to change state or remain the same was based on majority rule. The performance of these two associative memory systems is detailed and compared to a conventional Hopfield system.

5 citations
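The voting scheme described above can be sketched as follows. This is a minimal illustration under stated assumptions: the paper's preprocessing is replaced here by random weight dilution, which merely serves to decorrelate the subnetworks' errors, and all sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 100, 3  # 100 nodes per subnetwork, 3 coupled subnetworks

# Two stored memories with a small Hamming distance (15 of 100 bits),
# the regime in which a single Hopfield network is easily confused.
p1 = rng.choice([-1, 1], size=N)
p2 = p1.copy()
p2[:15] *= -1

def hebbian(patterns):
    """Standard outer-product (Hebbian) weight matrix, zero diagonal."""
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)
    return W

# Each subnetwork gets an independently diluted copy of the weights so
# that individual errors tend to be uncorrelated (an assumption of this
# sketch, standing in for the paper's preprocessing step).
Ws = []
for _ in range(K):
    W = hebbian([p1, p2])
    W[rng.random(W.shape) < 0.2] = 0.0
    Ws.append(W)

# Recall from a corrupted cue: every subnetwork runs asynchronous
# updates, then each node adopts the majority value across the
# analogous nodes of all subnetworks.
cue = p1.copy()
cue[-10:] *= -1
states = [cue.copy() for _ in range(K)]
for _ in range(5):
    for k in range(K):
        s = states[k]
        for i in rng.permutation(N):
            s[i] = 1 if Ws[k][i] @ s >= 0 else -1
    vote = np.sign(sum(states))  # majority rule (K is odd, so no ties)
    states = [vote.copy() for _ in range(K)]

overlap = states[0] @ p1 / N  # 1.0 means perfect recall of p1
```

Choosing an odd number of subnetworks keeps the majority vote well defined at every node, which is why K = 3 is used here.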


29 Sep 1986
TL;DR: An outline of a speech recognition system that uses neural network modules for learning and recognition is proposed, based on the layered structure of existing speech recognition systems, and uses forced learning (feedback) for conditioning the neural modules at the various levels.
Abstract: Organizations of computing elements that follow the principles of physiological neurons, called neural network models, have been shown to have the capability of learning to recognize patterns and to retrieve complete patterns from partial representations. The implementation of neural network models as VLSI or ULSI chips within a few years is certain. This report reviews a number of published papers on neural network models and their capabilities. Then, an outline of a speech recognition system that uses neural network modules for learning and recognition is proposed. It is based on the layered structure of existing speech recognition systems, and uses forced learning (feedback) for conditioning the neural modules at the various levels.

1 citation