
Showing papers on "Deep learning" published in 1986


Proceedings ArticleDOI
13 Feb 1986
TL;DR: In this article, the authors describe models of associative pattern learning, adaptive pattern recognition, and parallel decision-making by neural networks and show that a small set of real-time non-linear neural equations within a larger set of specialized neural circuits can be used to study a wide variety of such problems.
Abstract: This article describes models of associative pattern learning, adaptive pattern recognition, and parallel decision-making by neural networks. It is shown that a small set of real-time non-linear neural equations within a larger set of specialized neural circuits can be used to study a wide variety of such problems. Models of energy minimization, cooperative-competitive decision making, competitive learning, adaptive resonance, interactive activation, and back propagation are discussed and compared.
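
For readers unfamiliar with what such "real-time non-linear neural equations" look like, the sketch below shows a generic additive model of this family integrated by forward Euler. The signal function, weight scale, and inputs are illustrative assumptions, not the paper's own equations.

```python
# Minimal sketch (not the paper's code): a generic additive neural equation
#   dx_i/dt = -A * x_i + sum_j f(x_j) * w_ji + I_i
# integrated with forward Euler.
import numpy as np

def step(x, W, I, A=1.0, dt=0.01):
    """One Euler step of the additive neural dynamics."""
    f = np.tanh(x)                      # non-linear signal function (assumed)
    dx = -A * x + W.T @ f + I           # decay + recurrent input + external input
    return x + dt * dx

rng = np.random.default_rng(0)
n = 5
W = rng.normal(scale=0.5, size=(n, n))  # recurrent weights (random for illustration)
I = rng.normal(size=n)                  # external input pattern
x = np.zeros(n)
for _ in range(1000):
    x = step(x, W, I)
print(x)                                # activity after 1000 integration steps
```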

28 citations


Book ChapterDOI
01 Jan 1986
TL;DR: There is now the expectation that the implementation of neural network models using VLSI technology may lead to significant computational hardware for a number of image and signal processing applications and for optimisation problems.
Abstract: Neural networks are massively parallel computational models which attempt to capture the “intelligent” processing faculties of the nervous system. They have been studied extensively for more than thirty years [1]. Apart from the longer term goal of understanding the nervous system, the current upsurge of interest in such models is driven by at least three factors. First, seminal papers by Hopfield [2] and by Hinton, Rumelhart, Sejnowski and collaborators [3] exposed many salient properties of the models and extended their richness and potential in a significant way. Second, the developments in the theory of spin-glasses [4] and the discovery of replica symmetry breaking [5] in the long-range Sherrington-Kirkpatrick model [6] have led to an understanding in some depth of the Hopfield model [7]. Finally, there is now the expectation that the implementation of neural network models using VLSI technology may lead to significant computational hardware for a number of image and signal processing applications and for optimisation problems.
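
As a rough illustration of the Hopfield model [2, 7] referred to above, the following sketch stores binary patterns with a Hebbian outer-product rule and recalls them by asynchronous threshold updates that descend the network energy. It is a reconstruction under standard assumptions, not code from the chapter.

```python
# Hopfield associative memory sketch: Hebbian storage, asynchronous recall.
import numpy as np

def store(patterns):
    """Hebbian weight matrix for +/-1 patterns (zero diagonal)."""
    P = np.array(patterns, dtype=float)
    W = P.T @ P / len(P)
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, probe, sweeps=5, seed=0):
    """Asynchronous threshold updates; each accepted flip lowers the energy."""
    rng = np.random.default_rng(seed)
    s = np.array(probe, dtype=float)
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            h = W[i] @ s
            if h != 0:
                s[i] = 1.0 if h > 0 else -1.0
    return s

patterns = [[1, 1, 1, 1, -1, -1, -1, -1],
            [1, -1, 1, -1, 1, -1, 1, -1]]
W = store(patterns)
noisy = [1, 1, 1, 1, -1, -1, -1, 1]   # first pattern with its last bit flipped
print(recall(W, noisy))                # settles back to the first stored pattern
```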

19 citations


Journal ArticleDOI
TL;DR: A system for simulating neural networks has been written in the LISP dialect, Scheme, using an object-oriented style of programming, rather than the standard numerical techniques used in previous studies, which allows the construction of hierarchical networks with several interacting levels.
Abstract: A system for simulating neural networks has been written in the LISP dialect, Scheme, using an object-oriented style of programming, rather than the standard numerical techniques used in previous studies. Each node in the Scheme network represents either a neuron or a functional group of neurons, and can pass messages which trigger computations and actions in other nodes. The Scheme modeling approach overcomes two major problems inherent to the standard numerical approach. First, it provides a flexible environment for systematically studying the effects of perturbing a network's structure, response, or updating parameters. In fact, the Scheme system can recreate any previously studied neural network. Second, it allows the construction of hierarchical networks with several interacting levels. This system can handle hierarchical organization in a natural way, because a single node in a Scheme network can contain a model of an entire lower level of neural processing. The implementation of neural networks wi...
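
The message-passing design described here translates naturally into any object-oriented language. The sketch below gives an analogous, much simplified version in Python rather than the paper's Scheme: each node consumes messages and fires, and a node can wrap an entire lower level, which is how the hierarchical organization is obtained. Class names and thresholds are illustrative assumptions.

```python
# Object-oriented, message-passing node sketch (analogue of the Scheme system).
class Neuron:
    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.inbox = []                      # messages received this cycle
        self.targets = []                    # (node, weight) pairs to notify

    def connect(self, node, weight=1.0):
        self.targets.append((node, weight))

    def receive(self, value):
        self.inbox.append(value)

    def update(self):
        """Consume messages, fire if the summed input reaches threshold."""
        total = sum(self.inbox)
        self.inbox = []
        if total >= self.threshold:
            for node, w in self.targets:
                node.receive(w * 1.0)        # send an output message

class Subnetwork(Neuron):
    """A node that contains a whole lower level of processing."""
    def __init__(self, members):
        super().__init__()
        self.members = members

    def receive(self, value):
        for m in self.members:               # fan the message down a level
            m.receive(value)

    def update(self):
        for m in self.members:
            m.update()

# usage: a two-neuron group feeding a single output neuron
out = Neuron(threshold=1.5)
group = Subnetwork([Neuron(), Neuron()])
for m in group.members:
    m.connect(out, weight=1.0)
group.receive(1.0)                           # external stimulus to the group
group.update()                               # group members fire...
out.update()                                 # ...and the output integrates them
```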

10 citations


29 Sep 1986
TL;DR: An outline of a speech recognition system that uses neural network modules for learning and recognition is proposed; it is based on the layered structure of existing speech recognition systems and uses forced learning (feedback) to condition the neural modules at the various levels.
Abstract: Organizations of computing elements that follow the principles of physiological neurons, called neural network models, have been shown to have the capability of learning to recognize patterns and to retrieve complete patterns from partial representations. The implementation of neural network models as VLSI or ULSI chips within a few years is certain. This report reviews a number of published papers on neural network models and their capabilities. Then, an outline of a speech recognition system that uses neural network modules for learning and recognition is proposed. It is based on the layered structure of existing speech recognition systems, and uses forced learning (feedback) for conditioning the neural modules at the various levels. (Author)
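
A minimal sketch of the layered organization proposed here, assuming one small neural module per recognition level and a delta-rule form of "forced learning" in which a teacher target is fed back to each module; the level names, sizes, and update rule are assumptions for illustration, not details from the report.

```python
# Layered neural modules with per-level teacher feedback (illustrative).
import numpy as np

class Module:
    """A one-layer neural module trained by delta-rule feedback."""
    def __init__(self, n_in, n_out, lr=0.1, seed=0):
        self.W = np.random.default_rng(seed).normal(scale=0.1, size=(n_out, n_in))
        self.lr = lr

    def forward(self, x):
        return np.tanh(self.W @ x)

    def force(self, x, target):
        """Forced learning: nudge weights toward the teacher-supplied target."""
        y = self.forward(x)
        self.W += self.lr * np.outer(target - y, x)
        return y

# three stacked modules, each conditioned by its own teacher signal
acoustic = Module(16, 8)
phoneme  = Module(8, 4)
word     = Module(4, 2)

x = np.random.default_rng(1).normal(size=16)   # a frame of acoustic features (dummy)
t_acoustic, t_phoneme, t_word = np.ones(8), np.ones(4), np.ones(2)  # dummy targets
h1 = acoustic.force(x, t_acoustic)             # each level receives feedback
h2 = phoneme.force(h1, t_phoneme)
y  = word.force(h2, t_word)
print(y)
```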

1 citation