
Showing papers on "Artificial neural network" published in 1987


Journal ArticleDOI
TL;DR: This paper provides an introduction to the field of artificial neural nets by reviewing six important neural net models that can be used for pattern classification and exploring how some existing classification and clustering algorithms can be performed using simple neuron-like components.
Abstract: Artificial neural net models have been studied for many years in the hope of achieving human-like performance in the fields of speech and image recognition. These models are composed of many nonlinear computational elements operating in parallel and arranged in patterns reminiscent of biological neural nets. Computational elements or nodes are connected via weights that are typically adapted during use to improve performance. There has been a recent resurgence in the field of artificial neural nets caused by new net topologies and algorithms, analog VLSI implementation techniques, and the belief that massive parallelism is essential for high performance speech and image recognition. This paper provides an introduction to the field of artificial neural nets by reviewing six important neural net models that can be used for pattern classification. These nets are highly parallel building blocks that illustrate neural net components and design principles and can be used to construct more complex systems. In addition to describing these nets, a major emphasis is placed on exploring how some existing classification and clustering algorithms can be performed using simple neuron-like components. Single-layer nets can implement algorithms required by Gaussian maximum-likelihood classifiers and optimum minimum-error classifiers for binary patterns corrupted by noise. More generally, the decision regions required by any classification algorithm can be generated in a straightforward manner by three-layer feed-forward nets.
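As a toy illustration of the abstract's claim that single-layer nets of neuron-like elements can implement Gaussian maximum-likelihood classifiers, the sketch below builds one weighted-sum "neuron" per class for data with equal, identity covariance. The data, means, and weights are illustrative assumptions, not material from the paper.

```python
import numpy as np

# Hypothetical two-class data with equal, identity covariance; under that
# assumption the Gaussian maximum-likelihood discriminant for each class is
# linear, i.e. computable by a single layer of weighted-sum "neurons".
rng = np.random.default_rng(0)
mu = np.array([[0.0, 0.0], [2.0, 2.0]])           # class means (assumed)
X = np.vstack([rng.normal(mu[c], 1.0, (100, 2)) for c in (0, 1)])
y = np.repeat([0, 1], 100)

# One "neuron" per class: weights = class mean, bias = -||mean||^2 / 2
W = mu                                            # shape (2 classes, 2 inputs)
b = -0.5 * np.sum(mu ** 2, axis=1)

scores = X @ W.T + b                              # linear discriminants
pred = scores.argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```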

7,798 citations


PatentDOI
TL;DR: ART 2, a class of adaptive resonance architectures which rapidly self-organize pattern recognition categories in response to arbitrary sequences of either analog or binary input patterns, is introduced.
Abstract: A neural network includes a feature representation field which receives input patterns. Signals from the feature representation field select a category from a category representation field through a first adaptive filter. Based on the selected category, a template pattern is applied to the feature representation field, and a match between the template and the input is determined. If the angle between the template vector and a vector within the representation field is too great, the selected category is reset. Otherwise the category selection and template pattern are adapted to the input pattern as well as the previously stored template. A complex representation field includes signals normalized relative to signals across the field and feedback for pattern contrast enhancement.
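The reset test described above (comparing the angle between the template vector and the feature-field vector against a criterion) can be illustrated in a few lines; the cosine test, threshold, and vectors below are assumptions for illustration, not the actual ART 2 equations.

```python
import numpy as np

def art_match(template, feature, vigilance=0.9):
    """Illustrative reset test: compare the cosine of the angle between the
    stored template and the feature-field vector against a vigilance
    threshold. Names and threshold are assumptions, not ART 2's equations."""
    cos_angle = template @ feature / (np.linalg.norm(template) * np.linalg.norm(feature))
    return cos_angle >= vigilance       # True: resonance (adapt), False: reset

template = np.array([1.0, 0.9, 0.1])
close    = np.array([0.9, 1.0, 0.2])
far      = np.array([0.1, 0.2, 1.0])
print(art_match(template, close), art_match(template, far))
```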

1,865 citations


Journal ArticleDOI
TL;DR: A hierarchical neural network model which accounts for the learning and control capability of the CNS and provides a promising parallel-distributed control scheme for a large-scale complex object whose dynamics are only partially known is proposed.
Abstract: In order to control voluntary movements, the central nervous system (CNS) must solve the following three computational problems at different levels: the determination of a desired trajectory in the visual coordinates, the transformation of its coordinates to the body coordinates and the generation of motor command. Based on physiological knowledge and previous models, we propose a hierarchical neural network model which accounts for the generation of motor command. In our model the association cortex provides the motor cortex with the desired trajectory in the body coordinates, where the motor command is then calculated by means of long-loop sensory feedback. Within the spinocerebellum–magnocellular red nucleus system, an internal neural model of the dynamics of the musculoskeletal system is acquired with practice, because of the heterosynaptic plasticity, while monitoring the motor command and the results of movement. Internal feedback control with this dynamical model updates the motor command by predicting a possible error of movement. Within the cerebrocerebellum–parvocellular red nucleus system, an internal neural model of the inverse-dynamics of the musculoskeletal system is acquired while monitoring the desired trajectory and the motor command. The inverse-dynamics model substitutes for other brain regions in the complex computation of the motor command. The dynamics and the inverse-dynamics models are realized by a parallel distributed neural network, which comprises many sub-systems computing various nonlinear transformations of input signals and a neuron with heterosynaptic plasticity (that is, changes of synaptic weights are assumed proportional to a product of two kinds of synaptic inputs). Control and learning performance of the model was investigated by computer simulation, in which a robotic manipulator was used as a controlled system, with the following results: (1) Both the dynamics and the inverse-dynamics models were acquired during control of movements. (2) As motor learning proceeded, the inverse-dynamics model gradually took the place of external feedback as the main controller. Concomitantly, overall control performance became much better. (3) Once the neural network model learned to control some movement, it could control quite different and faster movements. (4) The neural network model worked well even when only very limited information about the fundamental dynamical structure of the controlled system was available. Consequently, the model not only accounts for the learning and control capability of the CNS, but also provides a promising parallel-distributed control scheme for a large-scale complex object whose dynamics are only partially known.
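A toy sketch of the feedback-error style of learning the abstract describes, in which the inverse-dynamics weights change in proportion to the product of their input signals and the feedback motor command (a heterosynaptic-style rule). The one-dimensional plant, gains, and learning rate below are assumptions, not the paper's simulation.

```python
import numpy as np

# Toy sketch (assumptions throughout): a point mass with unknown mass m and
# friction b is driven along a desired trajectory. A fixed feedback controller
# supplies u_fb; an adaptive inverse-dynamics model supplies u_ff as a weighted
# sum of desired acceleration and velocity. In the spirit of the heterosynaptic
# rule, each weight changes in proportion to the product of its input signal
# and the feedback command (the error carried by the feedback loop).
m_true, b_true = 2.0, 0.5
w = np.zeros(2)                       # learned estimates of [m, b]
eta, dt = 0.1, 0.01
x, v = 0.0, 0.0

for step in range(40000):
    t = step * dt
    xd, vd, ad = np.sin(t), np.cos(t), -np.sin(t)   # desired trajectory
    u_fb = 50.0 * (xd - x) + 10.0 * (vd - v)        # fixed feedback controller
    u_ff = w @ np.array([ad, vd])                   # inverse-dynamics model
    u = u_fb + u_ff
    a = (u - b_true * v) / m_true                   # true plant dynamics
    v += a * dt
    x += v * dt
    w += eta * u_fb * np.array([ad, vd]) * dt       # heterosynaptic-style update

print("learned [m, b] approx.", w.round(2), "  true:", [m_true, b_true])
```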

1,508 citations


01 Dec 1987
TL;DR: In this article, the stability-plasticity dilemma and Adaptive Resonance Theory are discussed in the context of self-organizing learning and recognition systems, and the three R's: Recognition, Reinforcement, and Recall.
Abstract: Partial Contents: Attention and Expectation in Self-Organizing Learning and Recognition Systems; The Stability-Plasticity Dilemma and Adaptive Resonance Theory; Competitive Learning Models; Self-Stabilized Learning by an ART Architecture in an Arbitrary Input Environment; Attentional Priming and Prediction: Matching by the 2/3 Rule; Automatic Control of Hypothesis Testing by Attentional-Orienting Interactions; Learning to Recognize an Analog World; Invariant Visual Pattern Recognition; The Three R's: Recognition, Reinforcement, and Recall; Self-Stabilization of Speech Perception and Production Codes: New Light on Motor Theory; and Psychophysiological and Neurophysiological Predictions of ART.

1,196 citations


Journal ArticleDOI
TL;DR: High-order neural networks have been shown to have impressive computational, storage, and learning capabilities because the order or structure of a high- order neural network can be tailored to the order of a problem.
Abstract: High-order neural networks have been shown to have impressive computational, storage, and learning capabilities. This performance is because the order or structure of a high-order neural network can be tailored to the order or structure of a problem. Thus, a neural network designed for a particular class of problems becomes specialized but also very efficient in solving those problems. Furthermore, a priori knowledge, such as geometric invariances, can be encoded in high-order networks. Because this knowledge does not have to be learned, these networks are very efficient in solving problems that utilize this knowledge.
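A minimal sketch of a second-order ("sigma-pi") unit of the kind such networks are built from, whose pre-activation sums weighted products of input pairs as well as the usual linear terms; all names and values are assumptions.

```python
import numpy as np

def second_order_unit(x, W2, w1, b):
    """Illustrative second-order unit: the pre-activation sums weighted
    products of input pairs plus linear terms. Tying W2[i, j] so it depends
    only on (j - i) mod n would make the quadratic term invariant to cyclic
    shifts of x, which is how geometric invariances can be built in a priori.
    (Names are assumptions; this is not code from the paper.)"""
    quad = np.einsum('i,ij,j->', x, W2, x)   # sum_ij W2[i, j] * x[i] * x[j]
    return np.tanh(quad + w1 @ x + b)

rng = np.random.default_rng(1)
x = rng.normal(size=4)
W2 = rng.normal(scale=0.1, size=(4, 4))
w1 = rng.normal(scale=0.1, size=4)
print(second_order_unit(x, W2, w1, 0.0))
```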

702 citations


01 Jun 1987
TL;DR: It is demonstrated that the backpropagation learning algorithm for neural networks may be used to predict points in a highly chaotic time series with orders of magnitude increase in accuracy over conventional methods including the Linear Predictive Method and the Gabor-Volterra-Wiener Polynomial Method.
Abstract: The backpropagation learning algorithm for neural networks is developed into a formalism for nonlinear signal processing. We illustrate the method by selecting two common topics in signal processing, prediction and system modelling, and show that nonlinear applications can be handled extremely well by using neural networks. The formalism is a natural, nonlinear extension of the linear Least Mean Squares algorithm commonly used in adaptive signal processing. Simulations are presented that document the additional performance achieved by using nonlinear neural networks. First, we demonstrate that the formalism may be used to predict points in a highly chaotic time series with orders of magnitude increase in accuracy over conventional methods, including the Linear Predictive Method and the Gabor-Volterra-Wiener Polynomial Method. Deterministic chaos is thought to be involved in many physical situations, including the onset of turbulence in fluids, chemical reactions and plasma physics. Secondly, we demonstrate the use of the formalism in nonlinear system modelling by providing a graphic example in which it is clear that the neural network has accurately modelled the nonlinear transfer function. It is interesting to note that the formalism provides explicit, analytic, global approximations to the nonlinear maps underlying the various time series. Furthermore, the neural net seems to be extremely parsimonious in its requirements for data points from the time series. We show that the neural net is able to perform well because it globally approximates the relevant maps by performing a kind of generalized mode decomposition of the maps.
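A hedged sketch of the kind of experiment described: a small one-hidden-layer network trained by plain backpropagation to predict the next value of a chaotic series (here the logistic map, an assumption), with a linear least-squares predictor as a baseline. It is not the paper's network, data, or accuracy comparison.

```python
import numpy as np

rng = np.random.default_rng(0)

# Chaotic series from the logistic map (an assumption; the paper used other series).
x = np.empty(1200); x[0] = 0.3
for t in range(1199):
    x[t + 1] = 3.97 * x[t] * (1.0 - x[t])
X = x[:-1].reshape(-1, 1)      # input: current value
Y = x[1:].reshape(-1, 1)       # target: next value
Xtr, Ytr, Xte, Yte = X[:1000], Y[:1000], X[1000:], Y[1000:]

# Linear (LMS-style) one-step predictor for comparison.
w_lin = np.linalg.lstsq(np.hstack([Xtr, np.ones_like(Xtr)]), Ytr, rcond=None)[0]
lin_pred = np.hstack([Xte, np.ones_like(Xte)]) @ w_lin

# One-hidden-layer network trained by plain batch backpropagation.
H = 10
W1 = rng.normal(scale=1.0, size=(1, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=1.0, size=(H, 1)); b2 = np.zeros(1)
lr = 0.05
for epoch in range(5000):
    h = np.tanh(Xtr @ W1 + b1)
    out = h @ W2 + b2
    err = out - Ytr                              # dE/d(out) for squared error
    gW2 = h.T @ err / len(Xtr);  gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = Xtr.T @ dh / len(Xtr); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

net_pred = np.tanh(Xte @ W1 + b1) @ W2 + b2
print("linear predictor MSE:", float(np.mean((lin_pred - Yte) ** 2)))
print("neural net MSE      :", float(np.mean((net_pred - Yte) ** 2)))
```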

584 citations



Journal ArticleDOI
TL;DR: The authors motivate this proposal and provide optimal stability learning rules for two different choices of normalisation of the synaptic matrix (Jij), and numerical work is presented that gives the value of the optimal stability for random uncorrelated patterns.
Abstract: To ensure large basins of attraction in spin-glass-like neural networks of two-state elements ξi^μ = ±1, the authors propose to study learning rules with optimal stability Δ, where Δ is the largest number satisfying Δ ≤ ξi^μ Σj Jij ξj^μ for every site i and every stored pattern μ, under a given normalisation of the synaptic matrix (Jij).
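A sketch of a minimal-overlap style update often associated with optimal-stability learning: repeatedly reinforce the pattern whose stability is currently smallest. The sizes, iteration count, and the rule itself are assumptions rather than the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 30
xi = rng.choice([-1.0, 1.0], size=(P, N))     # random +/-1 patterns
y = rng.choice([-1.0, 1.0], size=P)           # desired outputs of one neuron

# Minimal-overlap style iteration (a sketch, not the paper's exact algorithm):
# repeatedly reinforce the pattern with the smallest stability y_mu * (w . xi_mu).
w = np.zeros(N)
for _ in range(5000):
    stab = y * (xi @ w)
    mu = int(np.argmin(stab))
    w += y[mu] * xi[mu] / N

w /= np.linalg.norm(w)                        # normalise the synaptic vector
# Positive once every pattern is stored with a margin.
print("smallest stability:", float(np.min(y * (xi @ w))))
```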

398 citations


Proceedings Article
01 Jan 1987
TL;DR: This paper demonstrates that for certain applications neural networks can achieve significantly higher numerical accuracy than more conventional techniques, and shows that prediction of future values of a chaotic time series can be performed with exceptionally high accuracy.
Abstract: There is presently great interest in the abilities of neural networks to mimic "qualitative reasoning" by manipulating neural encodings of symbols. Less work has been performed on using neural networks to process floating point numbers, and it is sometimes stated that neural networks are somehow inherently inaccurate and therefore best suited for "fuzzy" qualitative reasoning. Nevertheless, the potential speed of massively parallel operations makes neural net "number crunching" an interesting topic to explore. In this paper we discuss some of our work in which we demonstrate that for certain applications neural networks can achieve significantly higher numerical accuracy than more conventional techniques. In particular, prediction of future values of a chaotic time series can be performed with exceptionally high accuracy. We analyze how a neural net is able to do this, and in the process show that a large class of functions from R^n → R^m may be accurately approximated by a backpropagation neural net with just two "hidden" layers. The network uses this functional approximation to perform either interpolation (signal processing applications) or extrapolation (symbol processing applications). Neural nets therefore use quite familiar methods to perform their tasks. The geometrical viewpoint advocated here seems to be a useful approach to analyzing neural network operation and relates neural networks to well studied topics in functional approximation.

367 citations


Journal ArticleDOI
TL;DR: An analog model neural network that can solve a general problem of recognizing patterns in a time-dependent signal is presented and can be understood from consideration of an energy function that is being minimized as the circuit computes.
Abstract: An analog model neural network that can solve a general problem of recognizing patterns in a time-dependent signal is presented. The networks use a patterned set of delays to collectively focus stimulus sequence information to a neural state at a future time. The computational capabilities of the circuit are demonstrated on tasks somewhat similar to those necessary for the recognition of words in a continuous stream of speech. The network architecture can be understood from consideration of an energy function that is being minimized as the circuit computes. Neurobiological mechanisms are known for the generation of appropriate delays.

353 citations


Proceedings Article
13 Jul 1987
TL;DR: This paper describes a way of coupling autoassociative learning modules into hierarchies that should greatly improve the performance of learning algorithms in large-scale systems.

Abstract: In the development of large-scale knowledge networks much recent progress has been inspired by connections to neurobiology. An important component of any "neural" network is an accompanying learning algorithm. Such an algorithm, to be biologically plausible, must work for very large numbers of units. Studies of large-scale systems have so far been restricted to systems without internal units (units with no direct connections to the input or output). Internal units are crucial to such systems as they are the means by which a system can encode high-order regularities (or invariants) that are implicit in its inputs and outputs. Computer simulations of learning using internal units have been restricted to small-scale systems. This paper describes a way of coupling autoassociative learning modules into hierarchies that should greatly improve the performance of learning algorithms in large-scale systems. The idea has been tested experimentally with positive results.

Journal ArticleDOI
TL;DR: The neural model embodies a solution to two key design problems of conditioning, the synchronization and persistence problems, and is compared with data and models of invertebrate learning.
Abstract: Selective information processing in neural networks is studied through computer simulations of Pavlovian conditioning data. The model reproduces properties of blocking, inverted-U in learning as a function of interstimulus interval, anticipatory conditioned responses, secondary reinforcement, attentional focusing by conditioned motivational feedback, and limited capacity short-term memory processing. Conditioning occurs from sensory to drive representations (conditioned reinforcer learning), from drive to sensory representations (incentive motivational learning), and from sensory to motor representations (habit learning). The conditionable pathways contain long-term memory traces that obey a non-Hebbian associative law. The neural model embodies a solution to two key design problems of conditioning, the synchronization and persistence problems. This model of vertebrate learning is compared with data and models of invertebrate learning. Predictions derived from models of vertebrate learning are compared with data about invertebrate learning, including data from Aplysia about facilitator neurons and data from Hermissenda about voltage-dependent Ca(2+) currents. A prediction is stated about classical conditioning in all species, called the secondary conditioning alternative, which if confirmed would constitute an evolutionary invariant of learning.


Journal ArticleDOI
15 Aug 1987-EPL
TL;DR: The upper storage capacity of a neural network for patterns of fixed magnetization m is calculated, and the optimal capacity increases with the correlation m² between the patterns.

Abstract: The upper storage capacity of a neural network for patterns of fixed magnetization m is calculated. The optimal capacity increases with the correlation m² between the patterns.

Journal ArticleDOI
TL;DR: A network architecture composed of three layers of neuronal clusters is shown to exhibit active recognition and learning of time sequences by selection: the network spontaneously produces prerepresentations that are selected according to their resonance with the input percepts.
Abstract: A model for formal neural networks that learn temporal sequences by selection is proposed on the basis of observations on the acquisition of song by birds, on sequence-detecting neurons, and on allosteric receptors. The model relies on hypothetical elementary devices made up of three neurons, the synaptic triads, which yield short-term modification of synaptic efficacy through heterosynaptic interactions, and on a local Hebbian learning rule. The functional units postulated are mutually inhibiting clusters of synergic neurons and bundles of synapses. Networks formalized on this basis display capacities for passive recognition and for production of temporal sequences that may include repetitions. Introduction of the learning rule leads to the differentiation of sequence-detecting neurons and to the stabilization of ongoing temporal sequences. A network architecture composed of three layers of neuronal clusters is shown to exhibit active recognition and learning of time sequences by selection: the network spontaneously produces prerepresentations that are selected according to their resonance with the input percepts. Predictions of the model are discussed.

Journal ArticleDOI
TL;DR: Starting from an arbitrary configuration of synaptic bonds, up to N patterns can be stored by successive modifications of the synaptic efficacies in a spin-glass-type network.
Abstract: Two simple storing prescriptions are presented for neural network models of N two-state neurons. These rules are local and allow the embedding of correlated patterns without errors in a network of spin-glass type. Starting from an arbitrary configuration of synaptic bonds, up to N patterns can be stored by successive modification of the synaptic efficacies. Proofs for the convergence are given. Extensions of these rules are possible.

Journal ArticleDOI
TL;DR: An improved version of the earlier model of selective attention, where the ability of segmentation is improved by lateral inhibition and the model can recall the complete pattern in which the noise has been eliminated and the defects corrected.
Abstract: A neural network model of selective attention is discussed. When two patterns or more are presented simultaneously, the model successively pays selective attention to each one, segmenting it from the rest and recognizing it separately. In the presence of noise or defects, the model can recall the complete pattern in which the noise has been eliminated and the defects corrected. These operations can be successfully performed regardless of deformation of the input patterns. This is an improved version of the earlier model proposed by the author: the ability of segmentation is improved by lateral inhibition.

01 Mar 1987
TL;DR: An algorithm for adaptive spatial sampling of line-structured images is defined, and a self-learning source-separation scheme is proposed, based on a network of fully interconnected processors, like neurons in a small volume of the Central Nervous System, with a law that controls the weights of the interconnections.
Abstract: Part I: Starting from the properties of networks with backward lateral inhibitions, we define an algorithm for adaptive spatial sampling of line-structured images. Applications to character recognition are straightforward. Part II: Consider an array of n sensors, each sensitive to an unknown linear combination of n sources. This is a classical problem in Signal Processing, but what is less classical is to extract each source signal without any knowledge either about those signals or about their combination in the sensor outputs. The only assumption is that the sources are independent. This problem emerged from recent studies on neural networks, where any message appears as an unknown mixture of primary entities which are to be "discovered". Following the model of neural networks, we propose an algorithm based on: (i) a network of fully interconnected processors (like neurons in a small volume of the Central Nervous System); (ii) a law which controls the weights of the interconnections, derived from the Hebb concept of "synaptic plasticity" in Physiology, and very close to the well-known stochastic gradient algorithm in Signal Processing. This association results in a permanent self-learning mechanism which leads to a continuously updated model of the sensor array information structure. After convergence, the algorithm provides output signals directly proportional to the independent primitive source signals.
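A minimal sketch of the kind of fully interconnected network with a Hebb-like adaptation law that Part II describes. The particular nonlinearities, learning rate, and sources below are assumptions, and separation quality depends on them; the code illustrates the adaptation scheme rather than guaranteeing separation.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 30000
t = np.arange(T) * 0.001
s = np.vstack([np.sin(2 * np.pi * 7.3 * t),           # source 1: sine wave
               rng.uniform(-1.0, 1.0, T)])            # source 2: uniform noise
A = np.array([[1.0, 0.6], [0.5, 1.0]])                # unknown mixing at the sensors
x = A @ s

# Fully interconnected two-unit feedback network with a Hebb-like update on
# the cross-connections (a sketch in the spirit of the approach; the cubic and
# tanh nonlinearities are assumptions).
C = np.zeros((2, 2))
lr = 0.001
for k in range(T):
    y = np.linalg.solve(np.eye(2) + C, x[:, k])        # feedback network output
    dC = lr * np.outer(y ** 3, np.tanh(y))             # Hebb-like rule on products
    np.fill_diagonal(dC, 0.0)                          # only cross-connections adapt
    C += dC

y = np.linalg.solve(np.eye(2) + C, x)                  # outputs after convergence
corr = np.corrcoef(np.vstack([y, s]))[:2, 2:]
print("output/source correlations:\n", np.round(corr, 2))
```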


Proceedings Article
01 Jan 1987
TL;DR: The back propagation algorithm for supervised learning can be generalized, put on a satisfactory conceptual footing, and very likely made more efficient by defining the values of the output and input neurons as probabilities and varying the synaptic weights in the gradient direction of the log likelihood, rather than the 'error'.
Abstract: We propose that the back propagation algorithm for supervised learning can be generalized, put on a satisfactory conceptual footing, and very likely made more efficient by defining the values of the output and input neurons as probabilities and varying the synaptic weights in the gradient direction of the log likelihood, rather than the 'error'.
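The practical consequence of the proposal, for a single sigmoid output treated as a probability, is that the log-likelihood gradient with respect to the pre-activation loses the sigmoid-derivative factor that the squared-error gradient carries. The numbers below are illustrative, not from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# For one sigmoid output with pre-activation z and target t in {0, 1}:
#   squared error  E = 0.5*(y - t)^2            ->  dE/dz = (y - t) * y * (1 - y)
#   log likelihood L = t*log y + (1-t)*log(1-y) ->  d(-L)/dz = (y - t)
# The likelihood gradient drops the y*(1-y) factor, so learning does not stall
# when the unit saturates at the wrong answer (illustrative numbers below).
z, t = 6.0, 0.0          # a confidently wrong unit
y = sigmoid(z)
print("squared-error gradient :", (y - t) * y * (1 - y))
print("log-likelihood gradient:", (y - t))
```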

Journal ArticleDOI
TL;DR: A new approach to learning in a multilayer optical neural network based on holographically interconnected nonlinear devices that performs an approximate implementation of the backpropagation learning procedure in a massively parallel high-speed nonlinear optical network.
Abstract: A new approach to learning in a multilayer optical neural network based on holographically interconnected nonlinear devices is presented. The proposed network can learn the interconnections that form a distributed representation of a desired pattern transformation operation. The interconnections are formed in an adaptive and self-aligning fashion as volume holographic gratings in photorefractive crystals. Parallel arrays of globally space-integrated inner products diffracted by the interconnecting hologram illuminate arrays of nonlinear Fabry-Perot etalons for fast thresholding of the transformed patterns. A phase conjugated reference wave interferes with a backward propagating error signal to form holographic interference patterns which are time integrated in the volume of a photorefractive crystal to modify slowly and learn the appropriate self-aligning interconnections. This multilayer system performs an approximate implementation of the backpropagation learning procedure in a massively parallel high-speed nonlinear optical network.

Journal ArticleDOI
TL;DR: This paper contains an attempt to describe certain adaptive and cooperative functions encountered in neural networks, to reason what functions are readily amenable to analytical modeling and which phenomena seem to ensue from the more complex interactions that take place in the brain.
Abstract: This paper contains an attempt to describe certain adaptive and cooperative functions encountered in neural networks. The approach is a compromise between biological accuracy and mathematical clarity. Two types of differential equation seem to describe the basic effects underlying the formation of these functions: the equation for the electrical activity of the neuron and the adaptation equation that describes changes in its input connectivities. Various phenomena and operations are derivable from them: clustering of activity in a laterally interconnected network; adaptive formation of feature detectors; the autoassociative memory function; and self-organized formation of ordered sensory maps. The discussion tends to reason what functions are readily amenable to analytical modeling and which phenomena seem to ensue from the more complex interactions that take place in the brain.
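A minimal sketch of the self-organized map formation mentioned above, using an assumed one-dimensional chain of units with decaying learning rate and neighbourhood radius; it is not Kohonen's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.uniform(0, 1, size=(5000, 2))          # 2-D inputs to be mapped

# Minimal self-organizing map sketch (assumed 1-D chain of 20 units): each step
# moves the best-matching unit and its chain neighbours toward the input.
n_units = 20
w = rng.uniform(0, 1, size=(n_units, 2))
for step, x in enumerate(data):
    frac = step / len(data)
    lr = 0.5 * (1 - frac) + 0.01                  # decaying learning rate
    radius = 5.0 * (1 - frac) + 0.5               # decaying neighbourhood radius
    bmu = np.argmin(np.sum((w - x) ** 2, axis=1))
    d = np.abs(np.arange(n_units) - bmu)          # distance along the chain
    h = np.exp(-(d ** 2) / (2 * radius ** 2))     # neighbourhood function
    w += lr * h[:, None] * (x - w)

# Adjacent units should end up with nearby weight vectors (an ordered map).
print(np.round(np.linalg.norm(np.diff(w, axis=0), axis=1), 2))
```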

Proceedings Article
01 Jan 1987
TL;DR: It is demonstrated that two-layer perceptron classifiers trained with back propagation can form both convex and disjoint decision regions.
Abstract: Previous work on nets with continuous-valued inputs led to generative procedures to construct convex decision regions with two-layer perceptrons (one hidden layer) and arbitrary decision regions with three-layer perceptrons (two hidden layers). Here we demonstrate that two-layer perceptron classifiers trained with back propagation can form both convex and disjoint decision regions. Such classifiers are robust, train rapidly, and provide good performance with simple decision regions. When complex decision regions are required, however, convergence time can be excessively long and performance is often no better than that of k-nearest neighbor classifiers. Three neural net classifiers are presented that provide more rapid training under such situations. Two use fixed weights in the first one or two layers and are similar to classifiers that estimate probability density functions using histograms. A third "feature map classifier" uses both unsupervised and supervised training. It provides good performance with little supervised training in situations such as speech recognition where much unlabeled training data is available. The architecture of this classifier can be used to implement a neural net k-nearest neighbor classifier.
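A hedged sketch of the idea behind a classifier that combines unsupervised and supervised training: prototypes are placed by competitive learning, then labelled from a handful of labelled examples. The data, prototype count, and update rule are assumptions, not the paper's feature map classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two Gaussian classes in 2-D (toy data; not from the paper).
X = np.vstack([rng.normal([0, 0], 0.5, (500, 2)), rng.normal([2, 2], 0.5, (500, 2))])
y = np.repeat([0, 1], 500)

# Unsupervised stage: competitive learning moves a small set of prototype
# vectors toward the inputs they win.
K = 8
proto = X[rng.choice(len(X), K, replace=False)].copy()
for x in X[rng.permutation(len(X))]:
    j = np.argmin(np.sum((proto - x) ** 2, axis=1))
    proto[j] += 0.05 * (x - proto[j])

# Supervised stage: label each prototype from a handful of labelled examples.
labelled = rng.choice(len(X), 20, replace=False)
proto_label = np.zeros(K, dtype=int)
for j in range(K):
    near = labelled[np.argsort(np.sum((X[labelled] - proto[j]) ** 2, axis=1))[:3]]
    proto_label[j] = np.bincount(y[near]).argmax()

# Classify by the label of the nearest prototype.
pred = proto_label[np.argmin(((X[:, None, :] - proto[None]) ** 2).sum(-1), axis=1)]
print("accuracy with 20 labelled points:", (pred == y).mean())
```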

Proceedings Article
01 Jan 1987
TL;DR: A general method for deriving backpropagation algorithms for recurrent and higher-order networks is introduced and extended to a constrained dynamical system for training a content addressable memory.

Abstract: A general method for deriving backpropagation algorithms for recurrent and higher-order networks is introduced. The propagation of activation in these networks is determined by dissipative differential equations. The error signal is backpropagated by integrating an associated differential equation. The method is introduced by applying it to the recurrent generalization of the feedforward backpropagation network. The method is extended to the case of higher-order networks and to a constrained dynamical system for training a content addressable memory. The essential feature of the adaptive algorithms is that the adaptive equation has a simple outer-product form. Preliminary experiments suggest that learning can occur very rapidly in networks with recurrent connections. The continuous formalism makes the new approach more suitable for implementation in VLSI.
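A sketch of fixed-point ("recurrent backpropagation") learning of the general kind described: relax the activations of a small fully recurrent net to a fixed point, relax an adjoint error vector using the transposed weights, then apply the outer-product weight update. Unit counts, rates, damping, and the tanh nonlinearity are assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(u):  return np.tanh(u)
def fp(u): return 1.0 - np.tanh(u) ** 2

N, n_out = 6, 2
W = rng.normal(scale=0.1, size=(N, N))
inputs  = rng.normal(size=(20, N))               # external input per example
targets = rng.uniform(-0.5, 0.5, size=(20, n_out))
out = np.arange(n_out)                           # indices of the output units
lr = 0.02

for epoch in range(300):
    for I, t in zip(inputs, targets):
        x = np.zeros(N)
        for _ in range(50):                      # forward relaxation to x = f(Wx + I)
            x = 0.5 * x + 0.5 * f(W @ x + I)
        u = W @ x + I
        e = np.zeros(N); e[out] = t - x[out]     # error injected at the output units
        z = np.zeros(N)
        for _ in range(50):                      # adjoint relaxation with W transposed
            z = 0.5 * z + 0.5 * fp(u) * (W.T @ z + e)
        W += lr * np.outer(z, x)                 # simple outer-product update

err = 0.0
for I, t in zip(inputs, targets):
    x = np.zeros(N)
    for _ in range(50):
        x = 0.5 * x + 0.5 * f(W @ x + I)
    err += np.sum((t - x[out]) ** 2)
print("sum of squared output errors:", float(err))
```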

Journal ArticleDOI
TL;DR: In this article, a massively parallel neural network architecture, called a masking field, is characterized through systematic computer simulations, which is a multiple-scale self-similar automatically gain-controlled cooperative-competitive feedback network F2.
Abstract: A massively parallel neural network architecture, called a masking field, is characterized through systematic computer simulations. A masking field is a multiple-scale self-similar automatically gain-controlled cooperative–competitive feedback network F2. Network F2 receives input patterns from an adaptive filter F1 → F2 that is activated by a prior processing level F1. Such a network F2 behaves like a content-addressable memory. It activates compressed recognition codes that are predictive with respect to the activation patterns flickering across the feature detectors of F1 and competitively inhibits, or masks, codes which are unpredictive with respect to the F1 patterns. In particular, a masking field can simultaneously detect multiple groupings within its input patterns and assign activation weights to the codes for these groupings which are predictive with respect to the contextual information embedded within the patterns and the prior learning of the system. A masking field automatically rescales its sensitivity as the overall size of an input pattern changes, yet also remains sensitive to the microstructure within each input pattern. In this way, a masking field can more strongly activate a code for the whole F1 pattern than for its salient parts, yet amplifies the code for a pattern part when it becomes a pattern whole in a new input context. A masking field can also be primed by inputs from F1: it can activate codes which represent predictions of how the F1 pattern may evolve in the subsequent time interval. Network F2 can also exhibit an adaptive sharpening property: repetition of a familiar F1 pattern can tune the adaptive filter to elicit a more focal spatial activation of its F2 recognition code than does an unfamiliar input pattern. The F2 recognition code also becomes less distributed when an input pattern contains more contextual information on which to base an unambiguous prediction of which F1 pattern is being processed. Thus a masking field suggests a solution of the credit assignment problem by embodying a real-time code for the predictive evidence contained within its input patterns. Such capabilities are useful in speech recognition, visual object recognition, and cognitive information processing. An absolutely stable design for a masking field is disclosed through an analysis of the computer simulations. This design suggests how associative mechanisms, cooperative–competitive interactions, and modulatory gating signals can be joined together to regulate the learning of compressed recognition codes. Data about the neural substrates of learning and memory are compared to these mechanisms.

Journal ArticleDOI
TL;DR: A model which can perform learning, formation of memory without teacher for successive memory recalls is presented and it is shown that positive and negative global feedbacks by the field effect play an essential role in the successive recall of stored patterns.
Abstract: A model which can perform learning and formation of memory without a teacher for successive memory recalls is presented. The philosophical background of the study is summarized. The investigated network consists of two sets, both composed of asynchronously firing model neurons. One set of neurons is responsible for the field effect, and the other is introduced as an input/output module. The field effect is given in the form of the system's self-response. It is shown that positive and negative global feedbacks by the field effect play an essential role in the successive recall of stored patterns. The possibility that these proposed mechanisms are implemented in the brain is discussed. We obtained a quasi-deterministic law on the level of a macrovariable concerning a random successive recall of memory representations by taking a Lorenz plot of this macrovariable. We show that this macroscopic order is deterministic chaos stemming from collapse of tori and that this type of chaos can be an effective gadget for memory traces.

Proceedings ArticleDOI
12 Oct 1987
TL;DR: A primary advantage of this on-line learning algorithm is that the number of mistakes that it makes is relatively little affected by the presence of large numbers of irrelevant attributes in the examples.
Abstract: Valiant and others have studied the problem of learning various classes of Boolean functions from examples. Here we discuss on-line learning of these functions. In on-line learning, the learner responds to each example according to a current hypothesis. Then the learner updates the hypothesis, if necessary, based on the correct classification of the example. One natural measure of the quality of learning in the on-line setting is the number of mistakes the learner makes. For suitable classes of functions, on-line learning algorithms are available that make a bounded number of mistakes, with the bound independent of the number of examples seen by the learner. We present one such algorithm, which learns disjunctive Boolean functions, and variants of the algorithm for learning other classes of Boolean functions. The algorithm can be expressed as a linear-threshold algorithm. A primary advantage of this algorithm is that the number of mistakes that it makes is relatively little affected by the presence of large numbers of irrelevant attributes in the examples; we show that the number of mistakes grows only logarithmically with the number of irrelevant attributes. At the same time, the algorithm is computationally time and space efficient.
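A sketch of a multiplicative linear-threshold learner of the kind described (the classic Winnow rule for monotone disjunctions); the target disjunction, example distribution, and parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000                         # total attributes, almost all irrelevant
relevant = [3, 7, 42]            # target concept: x3 OR x7 OR x42 (assumed)

w = np.ones(n)                   # multiplicative weights
theta = n                        # linear threshold
mistakes = 0
for _ in range(5000):
    x = (rng.random(n) < 0.02).astype(float)      # sparse random example
    label = float(x[relevant].any())
    pred = float(w @ x >= theta)
    if pred != label:
        mistakes += 1
        if label == 1.0:
            w[x == 1] *= 2.0     # promotion on a missed positive
        else:
            w[x == 1] /= 2.0     # demotion on a false positive
print("mistakes on 5000 examples:", mistakes)
```

The mistake count stays small even though 997 of the 1000 attributes are irrelevant, which is the logarithmic dependence the abstract highlights.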

Journal ArticleDOI
01 Jul 1987-EPL
TL;DR: An energy function similar to the one used in a previous paper for solving the subgraph retrieval problem is constructed, and a neuronal model for invariant pattern recognition is presented, based on this solution to subgraph retrieval and graph matching.
Abstract: We consider the problem of recognizing a shifted and distorted 2-dimensional shape. This task is formalized as a problem of labelled graph matching. To solve this problem, we construct an energy function similar to the one used in a previous paper for solving the subgraph retrieval problem. We present a neuronal model for invariant pattern recognition, based on this solution to subgraph retrieval and graph matching.

Journal ArticleDOI
TL;DR: In this article, a neural network implementation consisting of switches, capacitors and inverters is proposed, and a number of potentially attractive features of the implementation are pointed out, such as the ability of the network to learn from the inputs and outputs of the switches and capacitors.
Abstract: A neural network implementation is proposed, consisting of switches, capacitors and inverters. A number of potentially attractive features of the implementation are pointed out.

Journal ArticleDOI
TL;DR: In this paper, the authors present the results of analytical and numerical calculations for the zero temperature parallel dynamics of spin glass and neural network models and find regimes for which the system learns better after a few time steps than in the infinite time limit.
Abstract: 2014 We present the results of analytical and numerical calculations for the zero temperature parallel dynamics of spin glass and neural network models. We use an analytical approach to calculate the magnetization and the overlaps after a few time steps. For the long time behaviour, the analytical approach becomes too complicated and we use numerical simulations. For the Sherrington-Kirkpatrick model, we measure the remanent magnetization and the overlaps at different times and we observe power law decays towards the infinite time limit. When one iterates two configurations in parallel, their distance d(~) in the limit of infinite time depends on their initial distance d(0). Our numerical results suggest that d(~) has a finite limit when d(0) ~ 0. This result can be regarded as a collective effect between an infinite number of spins. For the Little-Hopfield model, we compute the time evolution of the overlap with a stored pattern. We find regimes for which the system learns better after a few time steps than in the infinite time limit. J. Physique 48 (1987) 741-755 MAI 1987, Classification Physics Abstracts 05.20