
Showing papers on "Random neural network" published in 1997


Journal ArticleDOI
TL;DR: It is shown that networks of spiking neurons are, with regard to the number of neurons that are needed, computationally more powerful than other neural network models based on McCulloch-Pitts neurons and sigmoidal gates.

1,731 citations


Journal ArticleDOI
TL;DR: It is shown how the random neural network (RNN) can be used to significantly improve the quality of the Steiner trees delivered by the best available heuristics which are the minimum spanning tree heuristic and the average distance heuristic.
Abstract: Future networks must be adequately equipped to handle multipoint communication in a fast and economical manner. Services requiring such support include desktop video conferencing, tele-classrooms, distributed database applications, etc. In networks employing the asynchronous transfer mode (ATM) technology, routing a multicast is achieved by constructing a minimum cost tree that spans the source and all the destinations. When the network is modeled as a weighted, undirected graph, the problem is that of finding a minimal Steiner tree for the graph, given a set of destinations. The problem is known to be NP-complete. Consequently, several heuristics exist which provide approximate solutions to the Steiner problem in networks. We show how the random neural network (RNN) can be used to significantly improve the quality of the Steiner trees delivered by the best available heuristics, which are the minimum spanning tree heuristic and the average distance heuristic. We provide an empirical comparison and find that the heuristics which are modified using the neural network yield significantly improved trees.

95 citations
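
The minimum spanning tree heuristic cited as a baseline here is usually the distance-network (KMB) construction. As a point of reference only, the following Python sketch (using networkx) shows that baseline; it does not reproduce the paper's RNN refinement, and the function and variable names are illustrative.

```python
# Distance-network (KMB-style) Steiner heuristic: a hedged sketch of
# the minimum spanning tree baseline the paper improves upon.
import networkx as nx

def kmb_steiner_tree(G, terminals):
    # 1. Complete graph on the terminals, weighted by shortest-path cost.
    D, paths = nx.Graph(), {}
    for i, s in enumerate(terminals):
        for t in terminals[i + 1:]:
            length, path = nx.single_source_dijkstra(G, s, t, weight="weight")
            D.add_edge(s, t, weight=length)
            paths[(s, t)] = path
    # 2. MST of the distance graph, expanded back into original edges.
    expanded = nx.Graph()
    for s, t in nx.minimum_spanning_tree(D).edges():
        nx.add_path(expanded, paths.get((s, t)) or paths[(t, s)])
    # 3. MST of the expansion (with original weights), then prune
    #    non-terminal leaves until none remain.
    T = nx.minimum_spanning_tree(G.edge_subgraph(expanded.edges()))
    while True:
        leaves = [n for n in T if T.degree(n) == 1 and n not in terminals]
        if not leaves:
            return T
        T.remove_nodes_from(leaves)
```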


Journal ArticleDOI
TL;DR: A new recurrent dynamic neural network approach to noisy signal representation and processing problems that solves for the sets of representation coefficients required to model a given signal in terms of elementary basis signals.

20 citations


Proceedings ArticleDOI
09 Jun 1997
TL;DR: In this study, the neuron of the random neural network (RNN) model (Gelenbe 1989) is designed using digital circuitry, simulated with the CircuitMaker and PSpice digital simulation packages, and realized digitally using LS-TTL ICs.
Abstract: In this study, the neuron of the random neural network (RNN) model (Gelenbe 1989) is designed using digital circuitry. In the RNN model, each neuron accumulates arriving pulses and can fire if its potential at a given instant of time is strictly positive. Firing occurs at random, the intervals between successive firing instants following an exponential distribution of constant rate. When a neuron fires, it routes the generated pulses to the appropriate output lines in accordance with the connection probabilities. In the digital circuitry, the fundamental parts of the neuron are realized as four modules: an input module, a neuron potential module, a firing module and a routing module. The neuron potential module accumulates the incoming signals collected by the input module at the input site. The firing module generates random pulses with an exponential distribution of fixed rate. The pulses generated by the firing module are distributed to the other neurons through the routing module on the output side. A network of neurons can be constructed from the digital circuitry presented for the single neuron. All parts of the random neuron circuit are simulated using the CircuitMaker and PSpice digital simulation packages, and the neuron is realized digitally using LS-TTL ICs.

19 citations
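
As a software counterpart to the four hardware modules, here is a minimal Python sketch of the neuron's behaviour: potential accumulation, exponentially timed firing, and probabilistic routing. The class and parameter names are illustrative assumptions, not taken from the paper's circuit.

```python
import random

class RNNNeuron:
    """Behavioural sketch of one random neural network neuron."""

    def __init__(self, rate, route_probs):
        self.potential = 0              # non-negative integer potential
        self.rate = rate                # constant firing rate r
        self.route_probs = route_probs  # {target_id: probability}

    def receive(self, sign):
        # Input module + potential module: a +1 (excitatory) pulse raises
        # the potential, a -1 (inhibitory) pulse lowers it, never below 0.
        self.potential = max(0, self.potential + sign)

    def next_firing_interval(self):
        # Firing module: intervals between successive firing instants
        # are exponentially distributed with constant rate.
        return random.expovariate(self.rate)

    def fire(self):
        # Routing module: a firing neuron (potential strictly positive)
        # sends one pulse to a target drawn from the connection probabilities.
        if self.potential == 0:
            return None
        self.potential -= 1
        targets, probs = zip(*self.route_probs.items())
        return random.choices(targets, weights=probs)[0]
```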


Journal ArticleDOI
TL;DR: It is concluded that this spontaneous emergence of clusters in artificial neural networks, performing a temporal integration, is due to computational constraints, with a restricted space of solutions, and information processing could induce the emergence of iterated patterns in biological neural networks.
Abstract: The neural integrator of the oculomotor system is a privileged field for artificial neural network simulation. In this paper, we were interested in an improvement of the biologically plausible features of the Arnold-Robinson network. This improvement was done by fixing the sign of the connection weights in the network (in order to respect the biological Dale's Law). We also introduced a notion of distance in the network in the form of transmission delays between its units. These modifications necessitated the introduction of a general supervisor in order to train the network to act as a leaky integrator. When examining the lateral connection weights of the hidden layer, the distribution of the weight values was found to exhibit a conspicuous structure: the high-value weights were grouped in what we call clusters. Other zones are quite flat and characterized by low-value weights. Clusters are defined as particular groups of adjoining neurons which have strong and privileged connections with another neighborhood of neurons. The clusters of the trained network are reminiscent of the small clusters or patches that have been found experimentally in the nucleus prepositus hypoglossi, where the neural integrator is located. A study was conducted to determine the conditions of emergence of these clusters in our network: they include the fixation of the weight sign, the introduction of a distance, and a convergence of the information from the hidden layer to the motoneurons. We conclude that this spontaneous emergence of clusters in artificial neural networks performing a temporal integration is due to computational constraints, with a restricted space of solutions. Thus, information processing could induce the emergence of iterated patterns in biological neural networks.

18 citations


Journal ArticleDOI
TL;DR: A reinforcement learning strategy is proposed to make a sequence of cascading decisions to achieve a goal while aiming to optimize the total cost of the cascaded decisions.
Abstract: The Random Neural Network (RNN) model, in which signals travel as voltage spikes rather than as fixed signal levels, represents more closely the manner in which signals are transmitted in biophysical neural networks. In this paper a reinforcement learning strategy is proposed to make a sequence of cascaded decisions to achieve a goal while aiming to optimize the total cost of the cascaded decisions. For this purpose, RNNs are used to model the system, and a weight update rule together with a reinforcement function is provided. The performance of the learning strategy is analysed by applying it to the maze learning problem. The simulation results show that the performance of the system is highly dependent on the chosen reinforcement function, and quite satisfactory results are obtained when the reinforcement function takes the recency effect into consideration.

18 citations
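
One plausible reading of a recency-aware reinforcement function is an exponentially weighted running threshold against which each new reward is judged. The sketch below illustrates that idea only; the update form, the forgetting factor beta and the learning rate are assumptions, not the paper's rule.

```python
def update_threshold(T_prev, reward, beta=0.9):
    # Recency effect: the comparison threshold is an exponentially
    # weighted average, so recent rewards dominate older ones.
    return beta * T_prev + (1.0 - beta) * reward

def reinforce(weight, reward, T_prev, lr=0.05):
    # Strengthen the connection behind a decision that beats the
    # recency-weighted threshold, weaken it otherwise.
    if reward > T_prev:
        return weight + lr * (reward - T_prev)
    return weight - lr * (T_prev - reward)
```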


Book ChapterDOI
08 Oct 1997
TL;DR: A neural approach based on the Random Neural Network (RNN) model is proposed to detect shaped targets with the help of multiple neural networks whose outputs are combined for making decisions.
Abstract: In this paper we propose a neural approach based on the Random Neural Network (RNN) model (Gelenbe 1989, 1990, 1991, 1993 [3, 4, 6, 5]), to detect shaped targets with the help of multiple neural networks whose outputs are combined for making decisions.

15 citations



Proceedings ArticleDOI
09 Jun 1997
TL;DR: The evolutionary learning is based on a hybrid algorithm that trains the random neural network by integrating a genetic algorithm with the gradient descent rule-based learning algorithm of the random neural network.
Abstract: Gelenbe (1989) has modeled the neural network using an analogy with queuing theory. This model (called the random neural network) calculates the probability of activation of the neurons in the network. Recently, we have proposed a recognition algorithm based on the random neural network. In this paper, we propose to solve the pattern recognition problem using evolutionary learning on the random neural network. The evolutionary learning is based on a hybrid algorithm that trains the random neural network by integrating a genetic algorithm with the gradient descent rule-based learning algorithm of the random neural network. This hybrid learning algorithm optimizes the random neural network on the basis of its topology and its weight distribution.

13 citations
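
The hybrid scheme described here is, in general terms, a memetic algorithm: a genetic outer loop over candidate weight vectors, with gradient descent refining each individual between generations. The following sketch shows that generic structure under assumed loss/gradient callables; the population size, rates and operators are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def hybrid_train(loss, grad, dim, pop=20, gens=50, lr=0.01, gd_steps=10):
    """Genetic algorithm + gradient descent over weight vectors."""
    P = rng.normal(size=(pop, dim))            # initial population
    for _ in range(gens):
        # Local refinement: a few gradient steps on every individual.
        for _ in range(gd_steps):
            P -= lr * np.array([grad(w) for w in P])
        # Selection: keep the better half by loss.
        P = P[np.argsort([loss(w) for w in P])]
        parents = P[: pop // 2]
        # Crossover (averaging) plus Gaussian mutation refills the pool.
        children = (parents + parents[rng.permutation(len(parents))]) / 2
        children += 0.1 * rng.normal(size=children.shape)
        P = np.vstack([parents, children])
    return P[0]                                # best individual found
```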


Journal ArticleDOI
TL;DR: In this paper, explicit expressions for the frequency response or generalised transfer functions of both feedforward and recurrent neural networks are derived in terms of the network weights, leading to a representation equivalent to the non-linear recursive polynomial model.

10 citations


Book ChapterDOI
08 Oct 1997
TL;DR: A neural network based approach to sensor fusion, to detect mine locations from electromagnetic induction (EMI) data is proposed, which uses the Random Neural Network (RNN) model which is closer to biophysical reality and mathematically more tractable than standard neural methods.
Abstract: In this paper we propose a neural network based approach to sensor fusion, to detect mine locations from electromagnetic induction (EMI) data. Our results use the Random Neural Network (RNN) model [2, 4, 5], which is closer to biophysical reality and mathematically more tractable than standard neural methods. The network is trained to produce an error-minimizing non-linear mapping from three sensor output images to the fused image. The result is thresholded to point to likely mine locations.

Proceedings ArticleDOI
02 Jul 1997
TL;DR: Novel approaches for image enlargement and fusion using the RNN are discussed, after successful results with still and video compression and image segmentation.
Abstract: We discuss novel approaches for image enlargement and fusion using the RNN, after successful results with still and video compression and image segmentation. In the RNN model, signals in the form of spikes of unit amplitude circulate among the neurons. Positive signals represent excitation and negative signals represent inhibition. Each neuron's state is a non-negative integer called its potential, which increases when an excitation signal arrives at it, and decreases when an inhibition signal arrives. An excitatory spike is interpreted as a "+1" signal at a receiving neuron, while an inhibitory spike is interpreted as a "-1" signal.
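
In steady state, this spiking model has Gelenbe's well-known product-form solution: each neuron's excitation probability satisfies q_i = lambda+_i / (r_i + lambda-_i), where the excitatory and inhibitory arrival rates lambda+_i and lambda-_i themselves depend on the other neurons' q values. A minimal fixed-point iteration sketch follows; the matrix and rate names are illustrative.

```python
import numpy as np

def rnn_fixed_point(W_plus, W_minus, Lam, lam, r, iters=200):
    """Iterate the RNN stationary equations q = lam+ / (r + lam-).

    W_plus[j, i]  : excitatory spike rate from neuron j to neuron i
    W_minus[j, i] : inhibitory spike rate from neuron j to neuron i
    Lam, lam      : external excitatory / inhibitory arrival rates
    r             : total firing rates
    """
    q = np.zeros(len(r))
    for _ in range(iters):
        lam_plus = Lam + q @ W_plus    # excitatory arrivals to each neuron
        lam_minus = lam + q @ W_minus  # inhibitory arrivals
        # Clamp at 1: the stationary solution requires q_i < 1 to exist.
        q = np.minimum(1.0, lam_plus / (r + lam_minus))
    return q
```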

Journal ArticleDOI
TL;DR: The role of inhibitory connections in odor discrimination is studied and the network dynamics as a function of the parameters which quantify the strengths of both inhibitory and excitatory connections are studied.
Abstract: A model is proposed to describe the collective behavior of a biologically plausible neural network, composed of interconnected spiking neurons which separately receive external stationary stimulations. The spiking dynamics of each neuron is represented by an hourglass metaphor. This network model was first studied in a special case where the connections are only inhibitory (Cottrell, 1988, 1992). We study the network dynamics as a function of the parameters which quantify the strengths of both the inhibitory and excitatory connections. We show that the model exhibits two kinds of limit states. In the first kind (convergent case), the system is ergodic and all neurons have a positive mean firing rate. In the second kind (divergent case), some neurons become permanently inactive while the sub-network of the active neurons is ergodic. The patterns which result from these divergent states can be seen as a neural coding of the external stimulation by the network. This property is applied to the olfactory system to produce a code for an odor. The role of inhibitory connections in odor discrimination is studied.
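
In the hourglass picture, each neuron holds a countdown that empties at unit speed; the neuron fires when it empties and resets to a fresh random duration, while spikes from other neurons refill part of the countdown (inhibition) and so delay the next firing. The event-driven sketch below is a hedged reading of that mechanism for the purely inhibitory case; rates and weights are illustrative.

```python
import random

def simulate_hourglass(w, rates, steps=10_000):
    """Event-driven simulation: w[i][j] >= 0 is the inhibitory delay
    that a spike of neuron i adds to neuron j's countdown."""
    n = len(rates)
    clock = [random.expovariate(rates[i]) for i in range(n)]
    spikes = [0] * n
    for _ in range(steps):
        i = min(range(n), key=clock.__getitem__)  # next hourglass to empty
        t = clock[i]
        clock = [c - t for c in clock]            # advance global time
        spikes[i] += 1
        clock[i] = random.expovariate(rates[i])   # reset the firing neuron
        for j in range(n):
            if j != i:
                clock[j] += w[i][j]               # inhibition delays others
    # In the divergent case, some spike counts stop growing: those
    # neurons have become permanently inactive.
    return spikes
```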


Journal ArticleDOI
TL;DR: A neural network formed by groups of neurons with nonlinear interactions is proposed; it can be formulated in terms of statistical physics and has an exact analytical solution.

Patent
26 Nov 1997
TL;DR: In this article, a rule-based expert system is generated from a trained neural network which is expressed as network data stored in a computer-readable medium by a symbolic representation generator.
Abstract: A computer-implemented apparatus and method for generating a rule-based expert system from a trained neural network which is expressed as network data stored in a computer-readable medium. The rule-based expert system represents an interconnected network of neurons with associated weights data and threshold data. A network configuration extractor is provided for accessing the network data and for ascertaining the interconnection structure of the trained neural network by examining the network data. A transformation system is utilized to alter the algebraic sign of at least a portion of the weights data to eliminate differences in the algebraic sign among the weights data, while selectively adjusting the threshold data to preserve the logical relationships defined by the neural network. A symbolic representation generator applies a sum-of-products search upon each neuron in the network to generate a multivalued logic representation for each neuron. A propagation mechanism combines the multivalued logic representation of each neuron through network propagation to yield a final logical expression corresponding to a rule-based expert system of the trained neural network. The resulting apparatus permits the knowledge incorporated in the connection strengths of neurons to be expressed as a rule-based expert system.
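
One standard way such a sign transformation can preserve logic (the patent's exact procedure may differ): for 0/1 inputs, a negative weight w on input x is equivalent to the positive weight |w| on the negated input 1 - x with the threshold raised by |w|, since w*x = |w|*(1 - x) + w. A minimal sketch:

```python
def normalize_signs(weights, threshold):
    """Make all weights non-negative for a threshold unit over 0/1 inputs.

    The unit fires iff sum(w_i * x_i) >= threshold.  Each negative
    weight is moved onto the negated input, raising the threshold, so
    the unit's truth table is unchanged.
    """
    new_weights, negated = [], []
    for w in weights:
        if w < 0:
            new_weights.append(-w)   # |w| on the complemented input
            negated.append(True)
            threshold += -w          # t -> t + |w| keeps the logic intact
        else:
            new_weights.append(w)
            negated.append(False)
    return new_weights, threshold, negated
```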

Journal ArticleDOI
TL;DR: Two low complexity methods for neural network construction, that are applicable to various neural network models, are introduced and evaluated for high order perceptrons, based on a Boolean approximation of real-valued data.
Abstract: Two low complexity methods for neural network construction, which are applicable to various neural network models, are introduced and evaluated for high order perceptrons. The methods are based on a Boolean approximation of real-valued data. This approximation is used to construct an initial neural network topology which is subsequently trained on the original (real-valued) data. The methods are evaluated for their effectiveness in reducing the network size and increasing the network's generalization capabilities in comparison to fully connected high order perceptrons.
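
As a hedged illustration of what such a Boolean approximation step might look like in practice: binarize each real-valued feature (here at its median), mine conjunctions that co-occur with the positive class in the Boolean view, and use them as the initial high-order terms. The thresholding choice, support criterion and names are assumptions, not the paper's procedure.

```python
import numpy as np
from itertools import combinations

def initial_high_order_terms(X, y, max_order=2, min_support=0.3):
    """Propose initial high-order perceptron terms from Boolean data."""
    B = X > np.median(X, axis=0)        # Boolean approximation of X
    terms = []
    for order in range(2, max_order + 1):
        for idx in combinations(range(X.shape[1]), order):
            conj = B[:, list(idx)].all(axis=1)
            # Keep conjunctions that frequently co-occur with class 1;
            # each kept index tuple becomes one initial product unit.
            if np.mean(conj & (y == 1)) >= min_support:
                terms.append(idx)
    return terms
```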

Proceedings ArticleDOI
03 Aug 1997
TL;DR: A new recurrent dynamic neural network to solve signal analysis and processing problems that offers the possibility of handling time-varying signals with uncertainties and a closed analytical form of the recurrent neural network solution is presented.
Abstract: This paper presents a new recurrent dynamic neural network to solve signal analysis and processing problems. The neural network is essentially composed of feedback-type connections and arrays of integrators, linear gains, and nonlinear activation functions. By seeking a minimum global energy state, the network solves for the best set of representation coefficients required to model a given signal in terms of suitable elementary basis signals. An analytical model of the recurrent neural network is obtained through discretization of the integrator blocks and linearization of the activation function. Continuity of the algorithm when segment boundaries are crossed is accomplished by varying the slope of the linearized activation function. The proposed approach results in a closed analytical form of the recurrent neural network solution. The perceived advantages of using the network are estimation of robustness, prediction of convergence by examining the eigenvalues of the analytical state matrix, and increase of computational speed. Moreover, unlike traditional numerical methods, the new approach offers the possibility of handling time-varying signals with uncertainties.
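
The energy-seeking behaviour described here can be condensed into a simple worked example: integrators performing gradient descent on the representation energy E(c) = ||s - B c||^2 / 2, whose equilibrium gives the coefficients c of signal s in basis B. This sketch is a linearized caricature of the network (no nonlinear activations), with an illustrative step size and basis.

```python
import numpy as np

def represent(signal, basis, dt=0.01, steps=5000):
    """Integrate dc/dt = B^T (s - B c) to its minimum-energy state."""
    c = np.zeros(basis.shape[1])
    for _ in range(steps):
        c += dt * basis.T @ (signal - basis @ c)  # discretized integrators
    return c

# Recover the coefficients of a signal built from two basis vectors.
B = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
s = B @ np.array([2.0, -1.0])
print(represent(s, B))  # approximately [2, -1]
```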

Journal ArticleDOI
TL;DR: The phase diagram for the Symmetric Realistic Neural Network with an external input whose action is constrained in time is constructed, and the mean-field approximation for the RNN of general form is obtained.
Abstract: The role of symmetry in the dynamics of the Realistic Neural Network model is studied. The phase diagram for the Symmetric Realistic Neural Network with an external input whose action is constrained in time is constructed. The oscillation regimes are investigated. For the RNN of general form, the mean-field approximation is obtained.

Patent
03 Feb 1997
TL;DR: In this article, a neural network has an external sensor to receive a parameter representation from the environment, and the parameter representation is fed back directly from the sensor of the master node, and then the sum of the parameter representations fed back is formed, resulting in quick and very complex learning of the neural network which, after learning, can perform tasks without using feedback signals which are influenced by the environment.
Abstract: A neural network has an external sensor (2) to receive a parameter representation from the environment. The parameter representation is weighted and fed to the input of a master node (1), whose output transfers the parameter representation in an additionally weighted form to the input of an external effector (5). The output from the external effector is fed back to the input of the external sensor after influencing the environment. Furthermore, the parameter representation is fed back directly from the sensor to the master node, and the sum of the parameter representations fed back is formed. This results in quick and very complex learning of the neural network which, after learning, can perform tasks without using feedback signals which are influenced by the environment. Thus, the invention provides a neural network where analogies may be drawn to human consciousness.

Book ChapterDOI
01 Oct 1997
TL;DR: In this paper, the authors use a mean-field theoretical statement to determine the spontaneous dynamics of an asymmetric recurrent neural network and propose a Hebb-like learning rule to store a pattern as a limit cycle or strange attractor.
Abstract: Freeman's investigations on the olfactory bulb of the rabbit showed that its dynamics was chaotic, and that recognition of a learned pattern is linked to a dimension reduction of the dynamics onto a much simpler attractor (near limit cycle). We address here the question whether this behaviour is specific to this particular architecture, or whether it is a general property of chaotic neural networks using a Hebb-like learning rule. In this paper, we use a mean-field theoretical statement to determine the spontaneous dynamics of an asymmetric recurrent neural network. In particular, we determine the range of random weight matrices for which the network is chaotic. We are able to explain the various changes observed in the dynamical regime when sending static random patterns. We propose a Hebb-like learning rule to store a pattern as a limit cycle or strange attractor. We numerically show the dynamics reduction of a finite-size chaotic network during learning and recognition of a pattern. Though associative learning is actually performed, the low storage capacity of the system leads to the consideration of more realistic architectures.
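
The mean-field setting referred to here is the classic random-network result (Sompolinsky, Crisanti and Sommers, 1988): rate units with random asymmetric Gaussian couplings of gain g undergo a transition to chaos near g = 1. A minimal simulation sketch of that setup follows; the network size, Euler step and gain are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_random_network(N=200, g=1.5, steps=2000, dt=0.1):
    """Leaky rate dynamics dx/dt = -x + J tanh(x) with random J."""
    # Asymmetric couplings J_ij ~ N(0, g^2 / N): chaotic for g > 1
    # in the mean-field (large N) limit.
    J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
    x = rng.normal(size=N)
    for _ in range(steps):
        x += dt * (-x + J @ np.tanh(x))   # Euler integration
    return x
```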

Journal ArticleDOI
TL;DR: The weight configuration of the network is investigated, resulting in analytical weight expressions, which are compared with numerical weight estimates obtained by training the network on the desired trajectories and can explain the asymptotic properties of the networks studied.
Abstract: A recurrent two-node neural network producing oscillations is analyzed. The network has no true inputs and the outputs from the network exhibit a circular phase portrait. The weight configuration of the network is investigated, resulting in analytical weight expressions, which are compared with numerical weight estimates obtained by training the network on the desired trajectories. The values predicted by the analytical expressions agree well with the findings from the numerical study, and can also explain the asymptotic properties of the networks studied.
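
A concrete instance of such a two-node oscillator, kept here as a hedged illustration rather than the paper's exact parameterization: if the (linearized) weight matrix is a rotation, the two outputs trace the circular phase portrait described above with no external input.

```python
import math

def two_node_oscillator(theta=0.1, steps=100, x=1.0, y=0.0):
    """Iterate a rotation weight matrix; (x, y) moves on a circle."""
    c, s = math.cos(theta), math.sin(theta)
    trajectory = []
    for _ in range(steps):
        # Weight matrix [[c, -s], [s, c]] rotates the state by theta,
        # so x^2 + y^2 stays constant: a circular phase portrait.
        x, y = c * x - s * y, s * x + c * y
        trajectory.append((x, y))
    return trajectory
```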

Proceedings ArticleDOI
28 Oct 1997
TL;DR: The threshold can be dynamically changed by an inverse-deviation function to adjust the time-and-space accumulation effect of signals in the neural network and improve the learning conditions of the neural network.
Abstract: An inverse-deviation threshold approach is presented. The approach introduces an inverse-deviation function into the output neuron threshold in neural network controllers. The threshold can be dynamically changed by the inverse-deviation function to adjust the time-and-space accumulation effect of signals in the neural network and improve the learning conditions of the neural network. This approach is advantageous for the high-speed response of neural network control systems.
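
One hedged reading of the inverse-deviation idea is that the output neuron's threshold falls as the control deviation grows, so large errors trigger a response sooner. The functional form and constants below are assumptions for illustration only.

```python
def inverse_deviation_threshold(error, k=1.0, eps=1e-3):
    # Larger deviation -> smaller threshold -> the output neuron
    # responds sooner, supporting the claimed high-speed response.
    return k / (abs(error) + eps)
```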