
Showing papers by "Hava T. Siegelmann published in 1996"


Journal ArticleDOI
TL;DR: The techniques can be applied to a much more general class of “sigmoidal-like” activation functions, suggesting that Turing universality is a relatively common property of recurrent neural network models.
Abstract: We investigate the computational power of recurrent neural networks that apply the sigmoid activation function σ(x) = 2/(1 + e^{-x}) - 1. These networks are extensively used in automatic learning of non-linear dynamical behavior. We show that in the noiseless model, there exists a universal architecture that can be used to compute any recursive (Turing) function. This is the first result of its kind for the sigmoid activation function; previous techniques applied only to linearized and truncated versions of this function. The significance of our result, besides the proving technique itself, lies in the popularity of the sigmoidal function both in engineering applications of artificial neural networks and in biological modelling. Our techniques can be applied to a much more general class of “sigmoidal-like” activation functions, suggesting that Turing universality is a relatively common property of recurrent neural network models.
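
A minimal sketch of the model under study, assuming toy random weights (the paper's actual universal architecture is not reproduced here): each neuron applies the bipolar sigmoid σ(x) = 2/(1 + e^{-x}) - 1 to a linear combination of the previous states of all units and the current input.

    import numpy as np

    def sigma(x):
        # Bipolar sigmoid from the paper: maps R onto (-1, 1).
        return 2.0 / (1.0 + np.exp(-x)) - 1.0

    def step(state, inp, W, U, b):
        # Synchronous update: x_{t+1} = sigma(W x_t + U u_t + b).
        return sigma(W @ state + U @ inp + b)

    # Toy run with random weights (illustrative only).
    rng = np.random.default_rng(0)
    n, m = 4, 2                                # neurons, input lines
    W, U = rng.normal(size=(n, n)), rng.normal(size=(n, m))
    b, x = rng.normal(size=n), np.zeros(n)
    for u in ([1.0, 0.0], [0.0, 1.0], [1.0, 1.0]):
        x = step(x, np.array(u), W, U, b)
    print(x)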

126 citations


Journal ArticleDOI
TL;DR: A possible model, constituting a chaotic dynamical system, is presented and termed the analog shift map; viewed as a computational model, it has super-Turing power and is equivalent to neural networks and the class of analog machines.
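
For intuition, here is a highly simplified Python sketch of one step of a shift-with-substitution dynamics on a finite dotted sequence (the rule table G and the strings are assumed toys; the actual analog shift map acts on bi-infinite sequences and allows infinite-precision substitutions, which no finite program can capture).

    def step(left, right, G):
        # Look up the symbol at the dot, substitute a string for it,
        # then shift the dot one position left or right.
        sub, shift = G[right[0]]
        right = sub + right[1:]
        if shift == 1:                         # dot moves right
            left, right = left + right[0], right[1:]
        elif shift == -1:                      # dot moves left
            left, right = left[:-1], left[-1] + right
        return left, right

    G = {"0": ("1", 1), "1": ("0", 1)}         # toy rule table (assumed)
    left, right = "", "0110"
    for _ in range(3):                         # boundary padding omitted
        left, right = step(left, right, G)
        print(left + "." + right)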

34 citations


Journal ArticleDOI
01 Nov 1996
TL;DR: Finite size networks consisting of interconnections of synchronously evolving processors are studied, and it is proved that any function for which the left and right limits exist and are different can be applied to the neurons to yield a network at least as strong computationally as a finite automaton.
Abstract: This article studies finite size networks that consist of interconnections of synchronously evolving processors. Each processor updates its state by applying an activation function to a linear combination of the previous states of all units. We prove that any function for which the left and right limits exist and are different can be applied to the neurons to yield a network which is at least as strong computationally as a finite automaton. We conclude that if this is the power required, one may choose any of the aforementioned neurons, according to the hardware available or the learning software preferred for the particular application.
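
As an illustration of the kind of neuron the theorem covers (a sketch, not the paper's construction): the Heaviside step has different left and right limits at 0, and networks of such neurons can emulate a finite automaton. Here a two-state parity DFA is simulated with one AND-neuron per (state, symbol) pair followed by an OR-neuron per next state.

    import numpy as np

    H = lambda x: (x > 0).astype(float)        # left limit 0, right limit 1 at x = 0

    # One-hot DFA over states {even, odd} and inputs {0, 1} (parity of 1s).
    states, symbols = ["even", "odd"], ["0", "1"]
    delta = {("even", "0"): "even", ("even", "1"): "odd",
             ("odd", "0"): "odd",   ("odd", "1"): "even"}

    def dfa_step(state_vec, sym_vec):
        nxt = np.zeros(len(states))
        for i, p in enumerate(states):
            for j, a in enumerate(symbols):
                # AND-neuron: fires iff state p and symbol a are both active.
                fire = H(np.array(state_vec[i] + sym_vec[j] - 1.5))
                nxt[states.index(delta[(p, a)])] += fire
        return H(nxt - 0.5)                    # OR-neuron per next state

    x = np.array([1.0, 0.0])                   # start in "even"
    for c in "1101":
        x = dfa_step(x, np.array([c == "0", c == "1"], dtype=float))
    print("odd" if x[1] else "even")           # three 1s -> "odd"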

27 citations


Journal ArticleDOI
TL;DR: A way to make neural networks friendly to users by formally defining a high-level language, called the Neural Information Processing Programming Language (NIL), which is rich enough to express any computer algorithm or rule-based system.
Abstract: Analog recurrent neural networks have attracted much attention lately as powerful tools of automatic learning. However, they are not as popular in industry as their usefulness would justify. The lack of any programming tool for networks, and their vague internal representation, leave the networks for the use of experts only. We propose a way to make neural networks friendly to users by formally defining a high-level language, called the Neural Information Processing Programming Language (NIL), which is rich enough to express any computer algorithm or rule-based system. We show how to compile a NIL program into a network which computes exactly as the original program and requires the same computation/convergence time and physical size. Allowing for a natural neural evolution after the construction, the neural networks are both capable of dynamical continuous learning and represent any given symbolic knowledge. Thus, the language, along with its compiler, may be thought of as the ultimate bridge from symbolic to analog computation.
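
To give a flavor of the symbolic-to-network direction (a generic toy, not NIL's actual syntax or compiler), a propositional rule such as IF a AND b THEN c can be "compiled" into a single threshold neuron whose weights and bias realize the conjunction.

    import numpy as np

    def compile_and_rule(n_antecedents):
        # Weights all 1, bias -(n - 0.5): the neuron fires only when
        # every antecedent is active, realizing an n-way AND.
        return np.ones(n_antecedents), -(n_antecedents - 0.5)

    w, bias = compile_and_rule(2)              # rule: IF a AND b THEN c
    for a in (0, 1):
        for b in (0, 1):
            c = float(w @ np.array([a, b]) + bias > 0)
            print(f"a={a} b={b} -> c={c}")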

17 citations


Journal ArticleDOI
19 Jan 1996 - Science
TL;DR: In one of three experiments, the authors successfully reproduced the effects of perfused Zn2+ on sIPSCs, presumably through the release of Zn2+ from sprouted mossy fibers, by stimulating excitatory amino acid receptors with the aim of releasing endogenous Zn2+ onto the granule cells.
Abstract: excitatory amino acid receptors with the aim to release endogenous Zn2+ onto the granule cells. In one of three experiments, we successfully reproduced the effects of perfused Zn2+ on sIPSCs, presumably through the release of Zn2+ from sprouted mossy fibers. Repetitive stimuli delivered to the same location in control slices had no effect on sIPSCs (n = 6). In slices, Zn2+ release experiments are difficult to control because even the low-frequency stimuli used to test evoked responses can inadvertently release the bulk of Zn2+ from the mossy fibers. In the absence of any exogenous Zn2+ added to the ACSF, the lost Zn2+ cannot be replenished (C. J. Frederickson, personal communication). 38. D. J. Maconochie, J. M. Zempel, J. H. Steinbach, Neuron 12, 61 (1994). 39. We thank R. W. Olsen and P. Somogyi for critical comments on the manuscript, C. J. Frederickson for technical advice, J. Gruneich for technical assistance, I. Parada for the Timm's staining, and S. Vietla for carrying out some of the zolpidem experiments. This research was supported by National Institute of Neurological Disorders and Stroke grants NS 12151 and NS 30549 (I.M.).

2 citations