
Showing papers on "Deep learning" published in 1987


Journal ArticleDOI
TL;DR: High-order neural networks have been shown to have impressive computational, storage, and learning capabilities because the order or structure of a high-order neural network can be tailored to the order of a problem.
Abstract: High-order neural networks have been shown to have impressive computational, storage, and learning capabilities. This performance is because the order or structure of a high-order neural network can be tailored to the order or structure of a problem. Thus, a neural network designed for a particular class of problems becomes specialized but also very efficient in solving those problems. Furthermore, a priori knowledge, such as geometric invariances, can be encoded in high-order networks. Because this knowledge does not have to be learned, these networks are very efficient in solving problems that utilize this knowledge.
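
As a minimal illustration of why order matters (my own sketch, not from the paper), a single second-order threshold unit solves XOR, a second-order problem that no first-order perceptron can compute; the weights below are chosen by hand rather than learned.

import numpy as np

# A second-order threshold unit:
#   y = sign(w0 + sum_i w_i x_i + sum_{i<j} W_ij x_i x_j)
# With inputs coded as +/-1, the single product term x1*x2 is enough for XOR.

def second_order_unit(x, w0, w, W):
    pair_term = 0.0
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            pair_term += W[i, j] * x[i] * x[j]
    return np.sign(w0 + w @ x + pair_term)

# Hand-chosen weights: only the product term is used.
w0 = 0.0
w = np.zeros(2)
W = np.zeros((2, 2))
W[0, 1] = -1.0            # y = sign(-x1 * x2) reproduces XOR in +/-1 coding

for x in [(-1, -1), (-1, 1), (1, -1), (1, 1)]:
    x = np.array(x, dtype=float)
    print(x, "->", int(second_order_unit(x, w0, w, W)))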

702 citations



Proceedings Article
01 Jan 1987
TL;DR: This paper demonstrates that for certain applications neural networks can achieve significantly higher numerical accuracy than more conventional techniques, and shows that prediction of future values of a chaotic time series can be performed with exceptionally high accuracy.
Abstract: There is presently great interest in the abilities of neural networks to mimic "qualitative reasoning" by manipulating neural encodings of symbols. Less work has been performed on using neural networks to process floating point numbers, and it is sometimes stated that neural networks are somehow inherently inaccurate and therefore best suited for "fuzzy" qualitative reasoning. Nevertheless, the potential speed of massively parallel operations makes neural net "number crunching" an interesting topic to explore. In this paper we discuss some of our work in which we demonstrate that for certain applications neural networks can achieve significantly higher numerical accuracy than more conventional techniques. In particular, prediction of future values of a chaotic time series can be performed with exceptionally high accuracy. We analyze how a neural net is able to do this, and in the process show that a large class of functions from R^n → R^m may be accurately approximated by a backpropagation neural net with just two "hidden" layers. The network uses this functional approximation to perform either interpolation (signal processing applications) or extrapolation (symbol processing applications). Neural nets therefore use quite familiar methods to perform their tasks. The geometrical viewpoint advocated here seems to be a useful approach to analyzing neural network operation and relates neural networks to well-studied topics in functional approximation.
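
A rough sketch of the idea (mine, not the authors' setup): a small two-hidden-layer backpropagation net trained by plain gradient descent to predict the next value of the logistic map, a simple chaotic series; the architecture, lag length and learning rate are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)

# Chaotic series: logistic map x_{t+1} = 4 x_t (1 - x_t)
x = np.empty(500)
x[0] = 0.3
for t in range(499):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

# Supervised pairs: predict x_{t+1} from the previous 3 values.
d = 3
X = np.array([x[t:t + d] for t in range(len(x) - d)])
y = x[d:].reshape(-1, 1)

# Two hidden layers of tanh units, linear output (a map from R^d to R^1).
h1, h2 = 10, 10
W1 = rng.normal(0, 0.5, (d, h1));  b1 = np.zeros(h1)
W2 = rng.normal(0, 0.5, (h1, h2)); b2 = np.zeros(h2)
W3 = rng.normal(0, 0.5, (h2, 1));  b3 = np.zeros(1)

lr = 0.01
for epoch in range(2000):
    A1 = np.tanh(X @ W1 + b1)
    A2 = np.tanh(A1 @ W2 + b2)
    out = A2 @ W3 + b3
    err = out - y                        # gradient of the squared error w.r.t. out
    # Backpropagate through the two hidden layers.
    g3 = A2.T @ err / len(X)
    d2 = (err @ W3.T) * (1 - A2 ** 2)
    g2 = A1.T @ d2 / len(X)
    d1 = (d2 @ W2.T) * (1 - A1 ** 2)
    g1 = X.T @ d1 / len(X)
    W3 -= lr * g3; b3 -= lr * err.mean(0)
    W2 -= lr * g2; b2 -= lr * d2.mean(0)
    W1 -= lr * g1; b1 -= lr * d1.mean(0)

print("final mean squared error:", float(np.mean((out - y) ** 2)))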

367 citations



Proceedings Article
01 Jan 1987
TL;DR: The back propagation algorithm for supervised learning can be generalized, put on a satisfactory conceptual footing, and very likely made more efficient by defining the values of the output and input neurons as probabilities and varying the synaptic weights in the gradient direction of the log likelihood, rather than the 'error'.
Abstract: We propose that the back propagation algorithm for supervised learning can be generalized, put on a satisfactory conceptual footing, and very likely made more efficient by defining the values of the output and input neurons as probabilities and varying the synaptic weights in the gradient direction of the log likelihood, rather than the 'error'.
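
A small sketch (my illustration, not the paper's derivation) of why the log-likelihood view helps: with a sigmoid output treated as a probability, the squared-error gradient carries a y(1 - y) factor that vanishes for saturated units, while the log-likelihood (cross-entropy) gradient is simply y - t.

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Treat the output y = sigmoid(a) as the probability that the target t is 1.
a = np.array([-6.0, -1.0, 0.0, 1.0, 6.0])   # pre-activations
t = 1.0                                      # desired target

y = sigmoid(a)
grad_squared_error  = (y - t) * y * (1.0 - y)   # d/da of 0.5*(y - t)^2
grad_log_likelihood = (y - t)                   # d/da of -[t*log y + (1-t)*log(1-y)]

print("y:                  ", np.round(y, 4))
print("squared-error grad: ", np.round(grad_squared_error, 4))
print("log-likelihood grad:", np.round(grad_log_likelihood, 4))
# Note how the squared-error gradient is tiny for the badly wrong, saturated
# unit (a = -6), while the log-likelihood gradient stays large.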

186 citations


Journal ArticleDOI
Günther Palm
06 Mar 1987-Science

126 citations


Book ChapterDOI
TL;DR: The paper discusses models which have an energy function but depart from the simple Hebb rule; these include networks with static synaptic noise, dilute networks, and synapses that are nonlinear functions of the Hebb rule.
Abstract: Recent studies of the statistical mechanics of neural network models of associative memory are reviewed. The paper discusses models which have an energy function but depart from the simple Hebb rule. This includes networks with static synaptic noise, dilute networks, and synapses that are nonlinear functions of the Hebb rule (e.g., clipped networks). The properties of networks that employ the projection method are reviewed.
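
As a toy illustration (not taken from the review), "clipping" replaces each Hebbian synapse by its sign; with the loading kept well below capacity, both matrices below still retrieve a stored random pattern from a noisy cue in a small Hopfield-style network.

import numpy as np

rng = np.random.default_rng(1)
N, p = 200, 10
patterns = rng.choice([-1, 1], size=(p, N))

# Standard Hebb rule and its clipped (sign) variant.
J_hebb = (patterns.T @ patterns).astype(float) / N
np.fill_diagonal(J_hebb, 0.0)
J_clip = np.sign(J_hebb)

def recall(J, cue, steps=20):
    s = cue.copy()
    for _ in range(steps):
        s = np.where(J @ s >= 0, 1, -1)   # synchronous update, for brevity
    return s

# Noisy cue: flip roughly 15% of the bits of pattern 0.
cue = patterns[0].copy()
flip = rng.choice(N, size=N // 7, replace=False)
cue[flip] *= -1

for name, J in [("Hebb", J_hebb), ("clipped", J_clip)]:
    out = recall(J, cue)
    print(name, "overlap with the stored pattern:", float(np.mean(out * patterns[0])))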

72 citations


Book
01 Jan 1987
TL;DR: The aim of this monograph is to clarify the role of Boolean functions and threshold functions in artificial neural networks and in neural network learning.
Abstract: Preface 1. Artificial Neural Networks 2. Boolean Functions 3. Threshold Functions 4. Number of Threshold Functions 5. Sizes of Weights for Threshold Functions 6. Threshold Order 7. Threshold Networks and Boolean Functions 8. Specifying Sets 9. Neural Network Learning 10. Probabilistic Learning 11. VC-Dimensions of Neural Networks 12. The Complexity of Learning 13. Boltzmann Machines and Combinatorial Optimization Bibliography Index.

66 citations


01 Mar 1987

60 citations


01 Mar 1987
TL;DR: Optical analogs of a 2-D distribution of idealized neurons (a 2-D neural net), based on partitioning of the resulting 4-D connectivity matrix, are discussed, and super-resolved recognition from partial information that can be as low as 20% of the sinogram data is demonstrated.

Abstract: Optical analogs of a 2-D distribution of idealized neurons (a 2-D neural net), based on partitioning of the resulting 4-D connectivity matrix, are discussed. These are desirable because of compatibility with 2-D feature spaces and the ability to realize denser networks. An example of their use with sinogram classifiers derived from realistic radar data of scale models of three aerospace objects as the learning set is given. Super-resolved recognition from partial information that can be as low as 20% of the sinogram data is demonstrated, together with a capacity for error correction and generalization.
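
A loose numerical sketch of the underlying idea (not the optical implementation): a 2-D array of idealized neurons coupled through a 4-D connectivity tensor T[i,j,k,l], recalling a stored 2-D pattern when only about 20% of its pixels are supplied.

import numpy as np

rng = np.random.default_rng(2)
n = 12                                   # 2-D net of n x n idealized neurons
p = 3
patterns = rng.choice([-1, 1], size=(p, n, n))

# 4-D connectivity tensor T[i,j,k,l] from a Hebb-like outer-product rule.
T = np.einsum('pij,pkl->ijkl', patterns, patterns).astype(float) / (n * n)

# Partial information: keep ~20% of the pixels of pattern 0, zero the rest.
mask = rng.random((n, n)) < 0.2
state = np.where(mask, patterns[0], 0).astype(float)

for _ in range(10):
    field = np.einsum('ijkl,kl->ij', T, state)
    state = np.where(field >= 0, 1.0, -1.0)

print("fraction of pixels recovered:", float(np.mean(state == patterns[0])))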

19 citations


Book ChapterDOI
01 Jan 1987
TL;DR: A short introduction to simple models of neural networks, in which information processing is based on attractors in configuration space, is presented, and some recent results are outlined.
Abstract: A short introduction to simple models of neural networks is presented. Information processing is based on attractors in configuration space. Some recent results are outlined.

Proceedings Article
01 Jan 1987
TL;DR: Learning rules are proposed for recurrent neural networks with high-order interactions between some or all neurons; the designed networks exhibit the desired associative memory function: perfect storage and retrieval of pieces of information and/or sequences of information of any complexity.
Abstract: We propose learning rules for recurrent neural networks with high-order interactions between some or all neurons. The designed networks exhibit the desired associative memory function: perfect storage and retrieval of pieces of information and/or sequences of information of any complexity.
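
The paper's learning rules are not reproduced in this listing; as a generic sketch of sequence retrieval, an asymmetric Hebb-like term that maps each pattern onto its successor lets a synchronously updated network step through a stored sequence.

import numpy as np

rng = np.random.default_rng(3)
N, L = 200, 5
seq = rng.choice([-1, 1], size=(L, N))       # a sequence of L patterns

# Asymmetric Hebb-like rule: pattern nu points to pattern nu+1.
W = np.zeros((N, N))
for nu in range(L - 1):
    W += np.outer(seq[nu + 1], seq[nu]) / N

state = seq[0].copy()
for step in range(1, L):
    state = np.where(W @ state >= 0, 1, -1)  # synchronous update
    print("step", step, "overlap with pattern", step, ":",
          float(np.mean(state * seq[step])))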

Journal ArticleDOI
01 Oct 1987
TL;DR: Consideration of the eigenproblem of the synaptic matrix in Hopfield's model of neural networks suggests introducing a matrix built up from an orthogonal set, orthogonal to the original memories, with significantly enhanced storage capacity and robustness at least conserved.
Abstract: Consideration of the eigenproblem of the synaptic matrix in Hopfield's model of neural networks suggests introducing a matrix built up from an orthogonal set, orthogonal to the original memories. With this new scheme, storage capacity is significantly enhanced and robustness is at least conserved.
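
A quick sketch (my own construction, not the paper's) contrasting the plain Hebb matrix with a projection-style matrix built from the pseudo-inverse of the memory matrix; the projector copes far better with correlated memories.

import numpy as np

rng = np.random.default_rng(4)
N, p = 100, 10

# Correlated memories: random patterns biased toward +1.
patterns = np.where(rng.random((p, N)) < 0.7, 1, -1)
Xi = patterns.T                                    # N x p matrix of memories

J_hebb = Xi @ Xi.T / N
J_proj = Xi @ np.linalg.pinv(Xi)                   # projector onto the span of the memories
for J in (J_hebb, J_proj):
    np.fill_diagonal(J, 0.0)

def recall(J, cue, steps=30):
    s = cue.copy()
    for _ in range(steps):
        s = np.where(J @ s >= 0, 1.0, -1.0)
    return s

cue = patterns[0].astype(float)
cue[:10] *= -1                                     # corrupt 10% of the bits
for name, J in [("Hebb", J_hebb), ("projection", J_proj)]:
    out = recall(J, cue)
    print(name, "overlap:", float(np.mean(out * patterns[0])))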

01 Mar 1987
TL;DR: In this article, the authors develop a particular error correction code which can be effectively decoded by a relatively simple neural network and which, in high-noise situations, is comparable to the codes used at present in deep-space communications.
Abstract: This paper develops a particular error correction code which can be effectively decoded by a relatively simple neural network. In high noise situations, this code is comparable to that used at present in deep space communications. The neural decoder has N! stable states with only N² neurons, and can quickly extract information from analog noise. This example illustrates the effectiveness of neural networks in solving real problems when the problem can be cast in such a fashion that it fits gracefully on the network.

Journal ArticleDOI
TL;DR: It is shown that it is possible in general to calculate analytically the memory capacity by solving the random walk problem associated with a given learning rule, and estimations, done for several learning rules, are in excellent agreement with numerical and analytical statistical mechanics results.

Abstract: We present a model of long term memory: learning within irreversible bounds. The best bound values and memory capacity are determined numerically. We show that it is possible in general to calculate analytically the memory capacity by solving the random walk problem associated with a given learning rule. Our estimations, done for several learning rules, are in excellent agreement with numerical and analytical statistical mechanics results. In the last few years, a great amount of work has been done on the properties of networks of formal neurons, proposed by Hopfield [1] as models of associative memories. In these models, each neuron i is represented by a spin variable σ_i which can take only two values, σ_i = +1 or σ_i = -1. Any state of the system is defined by the values {σ_1, σ_2, ..., σ_N} ≡ σ taken by each one of the N spins or neurons. Pairs of neurons i, j interact with strengths C_ij, the synaptic efficacies, which are modified by learning. As usual, we denote ξ^ν (ν = 1, 2, ...) the learnt states or patterns. Retrieval of patterns is a dynamic process in which each spin takes the sign of the local field h_i = Σ'_j C_ij σ_j acting on it; the primed sum means that terms j = i should be ignored. A learnt state ξ^ν is said to be memorized or retrieved if, starting with the network in state ξ^ν, it relaxes towards a final state close to ξ^ν. In general, the final state can be very different from ξ^ν, and will be denoted ξ̄^ν. The overlap between both, m^ν = (1/N) Σ_i ξ_i^ν ξ̄_i^ν, gives a measure of retrieval quality. The simplest local learning prescription [2] for p learnt patterns is Hebb's rule: C_ij = (1/N) Σ_ν ξ_i^ν ξ_j^ν. Assuming that the values of ξ_i^ν are random and uncorrelated, it has been shown [1-3] that the maximum number of patterns p that can be memorized with Hebb's learning rule is proportional to the number of neurons: p = αN, with α = 0.145 ± 0.009. If more than αN patterns are learnt, memory breaks down and none of the learnt patterns are retrieved. In order to avoid this catastrophic effect, different modifications of Hebb's rule were proposed [4-6]. The simplest one is the so-called learning within bounds [5]: synaptic efficacies are modified by learning in the same way as with Hebb's rule, but their values are constrained to remain within some chosen range. In the version proposed by Parisi [4] bounds are reversible: once a C_ij reaches a barrier, it remains at its value until a pattern is learnt that returns it inside the allowed range. This is a model of ...
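
A rough numerical sketch of learning within bounds, comparing an unbounded Hebb matrix, overloaded far beyond capacity, with a bounded one (the reversible clipping variant attributed to Parisi above, not the paper's irreversible bounds); the bound, increment size and network size are my choices.

import numpy as np

rng = np.random.default_rng(5)
N, n_patterns, A = 100, 60, 0.4        # A: synaptic bound (arbitrary choice)
patterns = rng.choice([-1, 1], size=(n_patterns, N))

C_free  = np.zeros((N, N))             # plain Hebb rule, no bounds
C_bound = np.zeros((N, N))             # Hebb increments clipped to [-A, A]
for xi in patterns:                     # learn the patterns one after another
    dC = np.outer(xi, xi) / np.sqrt(N)
    C_free  += dC
    C_bound  = np.clip(C_bound + dC, -A, A)
for C in (C_free, C_bound):
    np.fill_diagonal(C, 0.0)

def retrieved(C, xi, steps=20):
    s = xi.copy()
    for _ in range(steps):
        s = np.where(C @ s >= 0, 1, -1)
    return np.mean(s * xi) > 0.9

for name, C in [("unbounded Hebb", C_free), ("within bounds", C_bound)]:
    ok = [nu for nu in range(n_patterns) if retrieved(C, patterns[nu])]
    print(name, "-> retrievable patterns (0 = oldest):", ok)
# 60 patterns on 100 neurons is far beyond the ~0.14 N capacity of the plain
# Hebb rule, so the unbounded matrix retrieves essentially nothing; bounding
# the synapses trades total capacity for gradual forgetting of old patterns.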

Proceedings ArticleDOI
06 Jun 1987
TL;DR: This paper describes the implementation of PUNNS (Perception Using Neural Network Simulation), a massively parallel computer architecture which is evolving to allow the execution of certain visual functions in constant time, regardless of the size and complexity of the image.
Abstract: The sequential processing paradigm limits current solutions for computer vision by restricting the number of functions which naturally map onto Von Neumann computing architectures. A variety of physical computing structures underlie the massive parallelism inherent in many visual functions. Therefore, further advances in general purpose vision must assume inseparability of function from structure. To combine function and structure we are investigating connectionist architectures using PUNNS (Perception Using Neural Network Simulation). Our approach is inspired and constrained by the analysis of visual functions that are computed in the neural networks of living things. PUNNS represents a massively parallel computer architecture which is evolving to allow the execution of certain visual functions in constant time, regardless of the size and complexity of the image. Due to the complexity and cost of building a neural net machine, a flexible neural net simulator is needed to invent, study and understand the behavior of complex vision algorithms. Some of the issues involved in building a simulator are how to compactly describe the interconnectivity of the neural network, how to input image data, how to program the neural network, and how to display the results of the network. This paper describes the implementation of PUNNS. Simulation examples and a comparison of PUNNS to other neural net simulators will be presented.
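
PUNNS itself is not described in enough detail here to reproduce; as a generic sketch of one of the listed issues, a compact way to describe interconnectivity in a simulator is a per-neuron list of (source, weight) pairs with a synchronous update loop (the TinyNet class and its parameters below are invented for illustration).

import numpy as np

# A toy connectionist simulator: the network is described compactly by
# giving, for each non-input neuron, a list of (source neuron, weight) pairs.
class TinyNet:
    def __init__(self, n_neurons, connections):
        self.n = n_neurons
        self.connections = connections          # {target: [(source, weight), ...]}
        self.act = np.zeros(n_neurons)

    def clamp_input(self, values):
        """Load external data (e.g. image pixels) onto the input neurons."""
        self.act[:len(values)] = values

    def step(self):
        """One synchronous update of every neuron that has incoming connections."""
        new = self.act.copy()
        for tgt, inputs in self.connections.items():
            total = sum(w * self.act[src] for src, w in inputs)
            new[tgt] = 1.0 / (1.0 + np.exp(-total))   # sigmoid activation
        self.act = new

# Example: neurons 0-3 are inputs, neuron 4 pools them, neuron 5 reads neuron 4.
net = TinyNet(6, {4: [(0, 1.0), (1, 1.0), (2, 1.0), (3, 1.0)],
                  5: [(4, 2.0)]})
net.clamp_input([1, 0, 1, 0])
for _ in range(3):
    net.step()
print("activations:", np.round(net.act, 3))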

Proceedings Article
Hisashi Mizuno
01 Jan 1987

01 Mar 1987
TL;DR: The close relationship between entropy learning and simulated annealing in network solutions to combinatorial optimization problems is discussed and a network learning paradigm, called entropy learning, based upon the Principle of Maximum Entropy (PME), is proposed.
Abstract: Neural network architectures and algorithms provide an anthropomorphic framework for analysis and synthesis of learning networks for correlation, tracking and identification applications. Many researchers in neuroscience believe that through evolution nature has developed efficient structures for multi-sensor integration and data fusion. Consequently, innovations in electronic surveillance and advanced computing may result from current interdisciplinary research in neural networks and natural intelligence (NI). In this paper we propose a network learning paradigm, called entropy learning, based upon the Principle of Maximum Entropy (PME). The close relationship between entropy learning and simulated annealing in network solutions to combinatorial optimization problems is discussed.
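
The entropy-learning rule itself is not spelled out in this abstract; for the simulated-annealing side of the comparison, here is a minimal annealing loop on a small max-cut instance, the kind of combinatorial optimization problem such networks are applied to (graph size, edge density and temperature schedule are arbitrary).

import numpy as np

rng = np.random.default_rng(6)

# A small random max-cut instance: split the nodes into two groups so that
# as much edge weight as possible crosses the cut.
n = 20
A = np.triu((rng.random((n, n)) < 0.3).astype(float), k=1)
A = A + A.T

def cut_value(s):                          # s[i] in {-1, +1}
    return 0.25 * np.sum(A * (1 - np.outer(s, s)))

s = rng.choice([-1, 1], size=n)
T = 2.0
for sweep in range(300):
    for i in rng.permutation(n):
        trial = s.copy()
        trial[i] *= -1
        delta = cut_value(trial) - cut_value(s)
        # Boltzmann acceptance: keep improvements, and occasionally keep a
        # worse configuration with probability exp(delta / T).
        if delta >= 0 or rng.random() < np.exp(delta / T):
            s = trial
    T *= 0.99                              # slowly lower the temperature

print("final cut value:", cut_value(s), "of total edge weight", A.sum() / 2)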

01 Mar 1987
TL;DR: The authors proposed a multilayered neural network model which has the ability of rapid self-organization and has modifiable inhibitory feedback connections, as well as conventional modifiable excitatory feedforward connections between the cells of adjoining layers.
Abstract: We propose a new multilayered neural network model which has the ability of rapid self-organization. This model is a modified version of the cognitron. It has modifiable inhibitory feedback connections, as well as conventional modifiable excitatory feedforward connections, between the cells of adjoining layers. We also discuss the role of context information for pattern recognition, and propose a symbol-processing model which can send the context signal to the pattern-recognizing network.
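
The model's actual equations are not given in this listing; as a crude sketch of the two kinds of connection it mentions, the loop below drives an upper layer through excitatory feedforward weights and feeds inhibition back to the lower layer, with a toy Hebb-like weight update standing in for self-organization (all names and parameters are invented).

import numpy as np

rng = np.random.default_rng(7)
n_in, n_out = 16, 4

W_exc = rng.random((n_out, n_in)) * 0.2      # modifiable excitatory feedforward weights
W_inh = rng.random((n_in, n_out)) * 0.1      # modifiable inhibitory feedback weights

x_ext = rng.random(n_in)                     # external input to the lower layer
x = x_ext.copy()
for _ in range(5):
    y = np.maximum(W_exc @ x, 0.0)           # upper-layer response (excitation)
    x = np.maximum(x_ext - W_inh @ y, 0.0)   # feedback inhibition of the lower layer

# A Hebb-like strengthening of the feedforward weights of the most active cell
# (one crude stand-in for "rapid self-organization"; not the paper's actual rule).
k = int(np.argmax(y))
W_exc[k] += 0.1 * x

print("upper-layer activities:", np.round(y, 3))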

Journal ArticleDOI
TL;DR: A susceptibility-like quantity which characterises the sensitivity of neural models can be calculated without replicas in the Hopfield model and may be useful in the investigation of highly nonlinear learning rules.
Abstract: The author introduces and investigates a susceptibility-like quantity which characterises the sensitivity of neural models. It can be calculated without replicas in the Hopfield model and may be useful in the investigation of highly nonlinear learning rules.
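
One generic way to probe such a sensitivity numerically (not necessarily the quantity the author defines) is to apply a weak external field to a Hopfield network sitting in a retrieval state and measure how much the overlap with the stored memory changes.

import numpy as np

rng = np.random.default_rng(8)
N, p, h = 200, 10, 0.3                     # h: strength of the small external field
patterns = rng.choice([-1, 1], size=(p, N))
J = patterns.T @ patterns / N
np.fill_diagonal(J, 0.0)

def relax(field_bias, steps=30):
    s = patterns[0].astype(float)          # start in the retrieval state of pattern 0
    for _ in range(steps):
        s = np.where(J @ s + field_bias >= 0, 1.0, -1.0)
    return float(np.mean(s * patterns[0])) # overlap with the stored memory

m0 = relax(0.0)
m_h = relax(h * patterns[1])               # weak field pushing toward another memory
print("overlap without field:", m0)
print("overlap with field   :", m_h)
print("susceptibility-like response:", (m0 - m_h) / h)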