
Random neural network

About: Random neural network is a research topic. Over its lifetime, 855 publications have been published within this topic, receiving 23,246 citations.


Papers
Journal ArticleDOI
TL;DR: It is shown that networks of spiking neurons are, with regard to the number of neurons that are needed, computationally more powerful than other neural network models based on McCulloch-Pitts neurons and sigmoidal gates.

1,731 citations

Journal Article
TL;DR: It is shown that networks of spiking neurons are, with regard to the number of neurons that are needed, computationally more powerful than other neural network models based on McCulloch-Pitts neurons or sigmoidal gates.
Abstract: The computational power of formal models for networks of spiking neurons is compared with that of other neural network models based on McCulloch-Pitts neurons (i.e., threshold gates) or sigmoidal gates. In particular, it is shown that networks of spiking neurons are, with regard to the number of neurons that are needed, computationally more powerful than these other neural network models. A concrete biologically relevant function is exhibited which can be computed by a single spiking neuron (for biologically reasonable values of its parameters), but which requires hundreds of hidden units on a sigmoidal neural net. On the other hand, it is known that any function that can be computed by a small sigmoidal neural net can also be computed by a small network of spiking neurons. This article does not assume prior knowledge about spiking neurons, and it contains an extensive list of references to the currently available literature on computations in networks of spiking neurons and relevant results from neurobiology. © 1997 Elsevier Science Ltd. All rights reserved. Keywords: spiking neuron, integrate-and-fire neuron, computational complexity, sigmoidal neural nets, lower bounds. 1. DEFINITIONS AND MOTIVATIONS. If one classifies neural network models according to their computational units, one can distinguish three different generations. The first generation is based on McCulloch-Pitts neurons as computational units. These are also referred to as perceptrons or threshold gates. They give rise to a variety of neural network models such as multilayer perceptrons (also called threshold circuits), Hopfield nets, and Boltzmann machines. A characteristic feature of these models is that they can only give digital output. In fact they are universal for computations with digital input and output, and every Boolean function can be computed by some multilayer perceptron with a single hidden layer. The second generation is based on computational units that apply an "activation function" with a continuous set of possible output values to a weighted sum (or polynomial) of the inputs. Common activation functions are the sigmoid function σ(y) = 1/(1 + e^(-y)) and the linear saturated function π with π(y) = y for 0 ≤ y ≤ 1, π(y) = 0 for y < 0, and π(y) = 1 for y > 1. Besides piecewise polynomial activation functions, this paper also considers "piecewise exponential" activation functions, whose pieces can be defined by expressions involving exponentiation (such as the definition of σ). Typical examples of networks from this second generation are feedforward and recurrent sigmoidal neural nets, as well as networks of radial basis function units. These nets are also able to compute (with the help of thresholding at the network output) arbitrary Boolean functions. In fact, it has been shown that neural nets from the second generation can compute certain Boolean functions with fewer gates than neural nets from the first generation (Maass, Schnitger, & Sontag, 1991; DasGupta & Schnitger, 1993).
In addition, neural nets from the second generation are able to compute functions with analog input and output. In fact they are universal for analog computations in the sense that any continuous function with a compact domain and range can be approximated arbitrarily well (with regard to uniform convergence, i.e., the L∞ norm) by a network of this type with a single hidden layer. Another characteristic feature of this second generation of neural network models is that they support learning algorithms based on gradient descent, such as backpropagation.
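The abstract names two concrete second-generation activation functions, the sigmoid σ(y) = 1/(1 + e^(-y)) and the linear saturated function π. Below is a minimal Python sketch of both, with a first-generation threshold gate included for contrast; the function names are illustrative and not taken from the paper.

```python
# A minimal sketch of the two "second generation" activation functions named
# above: the sigmoid and the linear saturated function pi. A first-generation
# (McCulloch-Pitts) unit would instead apply a hard threshold, giving only
# digital output. Names here are illustrative assumptions.
import math

def sigmoid(y: float) -> float:
    """Sigmoid activation sigma(y) = 1 / (1 + e^(-y)); smooth output in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-y))

def saturated_linear(y: float) -> float:
    """pi(y) = 0 for y < 0, y for 0 <= y <= 1, 1 for y > 1."""
    return min(1.0, max(0.0, y))

def threshold_gate(y: float) -> int:
    """First-generation unit: digital (0/1) output only."""
    return 1 if y >= 0 else 0

# Example: the same weighted sum passed through each unit type.
weighted_sum = 0.3
print(sigmoid(weighted_sum), saturated_linear(weighted_sum), threshold_gate(weighted_sum))
```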

1,235 citations

Journal ArticleDOI
TL;DR: It is demonstrated that temporal coding requires significantly fewer neurons than instantaneous rate coding, and a supervised learning rule, SpikeProp, akin to traditional error backpropagation, is derived.
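The TL;DR contrasts temporal coding with instantaneous rate coding. The sketch below illustrates only the time-to-first-spike encoding idea; the linear mapping, the names, and the 10 ms window are assumptions for illustration and are not the paper's SpikeProp rule.

```python
# A minimal sketch of temporal (time-to-first-spike) coding: an analog value is
# carried by *when* a single neuron fires rather than by a population rate.
# The linear encoding and parameter names below are illustrative assumptions.

def encode_latency(x: float, t_max: float = 10.0) -> float:
    """Map a normalized input x in [0, 1] to a spike time in [0, t_max];
    larger inputs fire earlier."""
    x = min(1.0, max(0.0, x))
    return t_max * (1.0 - x)

def decode_latency(t_spike: float, t_max: float = 10.0) -> float:
    """Invert the encoding: recover the analog value from the spike time."""
    return 1.0 - t_spike / t_max

# Example: one spike time per value, instead of a population rate code.
for x in (0.0, 0.25, 0.9):
    t = encode_latency(x)
    print(f"x={x:.2f} -> spike at t={t:.2f} ms -> decoded {decode_latency(t):.2f}")
```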

831 citations

Journal ArticleDOI
TL;DR: A procedure is described for finding ∂E/∂w_ij, where E is an error functional of the temporal trajectory of the states of a continuous recurrent network and w_ij are the weights of that network; this type of recurrent network seems particularly suited for temporally continuous domains.
Abstract: Many neural network learning procedures compute gradients of the errors on the output layer of units after they have settled to their final values. We describe a procedure for finding ∂E/∂w_ij, where E is an error functional of the temporal trajectory of the states of a continuous recurrent network and w_ij are the weights of that network. Computing these quantities allows one to perform gradient descent in the weights to minimize E. Simulations in which networks are taught to move through limit cycles are shown. This type of recurrent network seems particularly suited for temporally continuous domains, such as signal processing, control, and speech.
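To make the quantity ∂E/∂w_ij concrete, the sketch below integrates an assumed continuous-time recurrent dynamics dx/dt = -x + tanh(Wx) with Euler steps, accumulates a trajectory error E, and estimates the gradient by finite differences. This is only a numerical illustration of what is being computed; the paper derives the gradient analytically, and the dynamics and error functional chosen here are assumptions.

```python
# A minimal numerical sketch of dE/dw_ij for an error functional E defined over
# the whole temporal trajectory of a continuous recurrent network. The dynamics,
# Euler discretization, and finite-difference gradient are illustrative
# assumptions, not the paper's analytical procedure.
import numpy as np

def trajectory_error(W, x0, target, dt=0.01, steps=200):
    """Integrate dx/dt = -x + tanh(W x) with Euler steps and accumulate a
    squared-error functional of the trajectory against a fixed target."""
    x = x0.copy()
    E = 0.0
    for _ in range(steps):
        x = x + dt * (-x + np.tanh(W @ x))
        E += dt * float(np.sum((x - target) ** 2))
    return E

def finite_difference_gradient(W, x0, target, eps=1e-5):
    """Estimate dE/dw_ij for every weight by central differences."""
    grad = np.zeros_like(W)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            Wp, Wm = W.copy(), W.copy()
            Wp[i, j] += eps
            Wm[i, j] -= eps
            grad[i, j] = (trajectory_error(Wp, x0, target)
                          - trajectory_error(Wm, x0, target)) / (2 * eps)
    return grad

rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(3, 3))
x0 = rng.normal(size=3)
target = np.zeros(3)
print(finite_difference_gradient(W, x0, target))
```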

750 citations

Journal ArticleDOI
Erol Gelenbe
TL;DR: A new class of random neural networks is introduced in which signals are either negative or positive; with exponential signal emission intervals, Poisson external signal arrivals, and Markovian signal movements between neurons, the model has a product form leading to simple analytical expressions for the system state.
Abstract: We introduce a new class of random neural networks in which signals are either negative or positive. A positive signal arriving at a neuron increases its total signal count or potential by one; a negative signal reduces it by one if the potential is positive, and has no effect if it is zero. When its potential is positive, a neuron fires, sending positive or negative signals at random intervals to neurons or to the outside. Positive signals represent excitatory signals and negative signals represent inhibition. We show that this model, with exponential signal emission intervals, Poisson external signal arrivals, and Markovian signal movements between neurons, has a product form leading to simple analytical expressions for the system state.
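A hedged sketch of what the product-form result is usually taken to yield: steady-state excitation probabilities q_i for each neuron, obtained by a fixed-point iteration over the positive and negative signal arrival rates. The notation (r, Lambda, lam, p_plus, p_minus) and the example network below are illustrative assumptions, not a reproduction of the paper's derivation.

```python
# A minimal sketch of the product-form steady state of a random neural network
# with positive and negative signals, under the usual formulation: neuron i has
# firing rate r[i], external positive/negative Poisson arrival rates Lambda[i]
# and lam[i], and routing probabilities p_plus[j, i], p_minus[j, i] for
# excitatory/inhibitory signals sent from neuron j to neuron i. All names and
# the example values are illustrative assumptions.
import numpy as np

def rnn_steady_state(r, Lambda, lam, p_plus, p_minus, iters=200):
    """Fixed-point iteration for q_i = lambda_plus_i / (r_i + lambda_minus_i), with
       lambda_plus_i  = Lambda_i + sum_j q_j * r_j * p_plus[j, i]
       lambda_minus_i = lam_i    + sum_j q_j * r_j * p_minus[j, i].
    Returns the vector of neuron excitation probabilities q (clipped below 1)."""
    q = np.zeros(len(r))
    for _ in range(iters):
        lambda_plus = Lambda + (q * r) @ p_plus
        lambda_minus = lam + (q * r) @ p_minus
        q = np.minimum(lambda_plus / (r + lambda_minus), 1.0 - 1e-12)
    return q

# Example: a 3-neuron network with simple feedforward routing.
r = np.array([1.0, 1.0, 1.0])
Lambda = np.array([0.5, 0.2, 0.0])
lam = np.array([0.0, 0.1, 0.1])
p_plus = np.array([[0.0, 0.5, 0.3],
                   [0.0, 0.0, 0.6],
                   [0.0, 0.0, 0.0]])
p_minus = np.array([[0.0, 0.1, 0.0],
                    [0.0, 0.0, 0.2],
                    [0.0, 0.0, 0.0]])
print(rnn_steady_state(r, Lambda, lam, p_plus, p_minus))
```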

597 citations


Network Information
Related Topics (5)
Artificial neural network: 207K papers, 4.5M citations (82% related)
Deep learning: 79.8K papers, 2.1M citations (76% related)
Robustness (computer science): 94.7K papers, 1.6M citations (75% related)
Cluster analysis: 146.5K papers, 2.9M citations (75% related)
Support vector machine: 73.6K papers, 1.7M citations (75% related)
Performance
Metrics
No. of papers in the topic in previous years
Year: Papers
2022: 2
2021: 11
2020: 19
2019: 15
2018: 18
2017: 50