Journal ArticleDOI

Networks of spiking neurons: the third generation of neural network models

01 Dec 1997-Neural Networks (Elsevier)-Vol. 10, Iss: 9, pp 1659-1671
TL;DR: It is shown that networks of spiking neurons are, with regard to the number of neurons needed, computationally more powerful than other neural network models based on McCulloch–Pitts neurons and sigmoidal gates.
About: This article was published in Neural Networks on 1997-12-01 and has received 1731 citations to date. The article focuses on the topics: Spiking neural network & Random neural network.
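For orientation, the three "generations" of neuron models contrasted in the paper can be sketched in minimal form. This is illustrative code added here, with made-up parameter values; the leaky integrate-and-fire update is one common choice of spiking model, not necessarily the paper's formal model:

```python
import math

def threshold_gate(x, w, theta):
    """First generation: McCulloch-Pitts unit with a binary output."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0

def sigmoid_gate(x, w, theta):
    """Second generation: real-valued output via a smooth activation."""
    return 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) - theta)))

def lif_spike_times(input_current, threshold=1.0, tau=10.0, dt=1.0):
    """Third generation: leaky integrate-and-fire neuron.
    The output is a sequence of spike *times*, not a single number."""
    v, spikes = 0.0, []
    for t, i_t in enumerate(input_current):
        v += dt * (-v / tau + i_t)   # leaky integration of input current
        if v >= threshold:           # threshold crossing emits a spike
            spikes.append(t * dt)
            v = 0.0                  # reset membrane potential
    return spikes
```

The key structural difference the paper exploits is visible in the return types: the first two generations map inputs to a number, while the spiking model maps an input signal to spike timing.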
Citations
Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks.

14,635 citations

Journal ArticleDOI
TL;DR: The Computational Brain provides a broad overview of neuroscience and computational theory, followed by a study of some of the most recent and sophisticated modeling work in the context of relevant neurobiological research.

1,472 citations

Book
20 Nov 1998
TL;DR: This book presents the complete spectrum of current research in pulsed neural networks and includes the most important work from many of the key scientists in the field.

1,046 citations

Journal ArticleDOI
TL;DR: Recent progress in electronic skin or e-skin research is broadly reviewed, focusing on technologies needed in three main applications: skin-attachable electronics, robotics, and prosthetics.
Abstract: Recent progress in electronic skin or e-skin research is broadly reviewed, focusing on technologies needed in three main applications: skin-attachable electronics, robotics, and prosthetics. First, since e-skin will be exposed to prolonged stresses of various kinds and needs to be conformally adhered to irregularly shaped surfaces, materials with intrinsic stretchability and self-healing properties are of great importance. Second, tactile sensing capability such as the detection of pressure, strain, slip, force vector, and temperature are important for health monitoring in skin attachable devices, and to enable object manipulation and detection of surrounding environment for robotics and prosthetics. For skin attachable devices, chemical and electrophysiological sensing and wireless signal communication are of high significance to fully gauge the state of health of users and to ensure user comfort. For robotics and prosthetics, large-area integration on 3D surfaces in a facile and scalable manner is critical. Furthermore, new signal processing strategies using neuromorphic devices are needed to efficiently process tactile information in a parallel and low power manner. For prosthetics, neural interfacing electrodes are of high importance. These topics are discussed, focusing on progress, current challenges, and future prospects.

881 citations

Journal ArticleDOI
27 Nov 2019-Nature
TL;DR: An overview of the developments in neuromorphic computing for both algorithms and hardware is provided and the fundamentals of learning and hardware frameworks are highlighted, with emphasis on algorithm–hardware codesign.
Abstract: Guided by brain-like ‘spiking’ computational frameworks, neuromorphic computing—brain-inspired computing for machine intelligence—promises to realize artificial intelligence while reducing the energy requirements of computing platforms. This interdisciplinary field began with the implementation of silicon circuits for biological neural routines, but has evolved to encompass the hardware implementation of algorithms with spike-based encoding and event-driven representations. Here we provide an overview of the developments in neuromorphic computing for both algorithms and hardware and highlight the fundamentals of learning and hardware frameworks. We discuss the main challenges and the future prospects of neuromorphic computing, with emphasis on algorithm–hardware codesign. The authors review the advantages and future prospects of neuromorphic computing, a multidisciplinary engineering concept for energy-efficient artificial intelligence with brain-inspired functionality.

877 citations

References
Book
01 Jan 2007
Abstract: From the Publisher: Dramatically updating and extending the first edition, published in 1995, the second edition of The Handbook of Brain Theory and Neural Networks presents the enormous progress made in recent years in the many subfields related to the two great questions: How does the brain work? and, How can we build intelligent machines? Once again, the heart of the book is a set of almost 300 articles covering the whole spectrum of topics in brain theory and neural networks. The first two parts of the book, prepared by Michael Arbib, are designed to help readers orient themselves in this wealth of material. Part I provides general background on brain modeling and on both biological and artificial neural networks. Part II consists of "Road Maps" to help readers steer through articles in part III on specific topics of interest. The articles in part III are written so as to be accessible to readers of diverse backgrounds. They are cross-referenced and provide lists of pointers to Road Maps, background material, and related reading. The second edition greatly increases the coverage of models of fundamental neurobiology, cognitive neuroscience, and neural network approaches to language. It contains 287 articles, compared to the 266 in the first edition. Articles on topics from the first edition have been updated by the original authors or written anew by new authors, and there are 106 articles on new topics.

3,487 citations

Book
05 Jun 1975
TL;DR: Introduction to synaptic circuits, Gordon M. Shepherd and Christof Koch; membrane properties and neurotransmitter actions, David A. McCormick.
Abstract: Introduction to synaptic circuits, Gordon M. Shepherd and Christof Koch; membrane properties and neurotransmitter actions, David A. McCormick; peripheral ganglia, Paul R. Adams and Christof Koch; spinal cord - ventral horn, Robert E. Burke; olfactory bulb, Gordon M. Shepherd and Charles A. Greer; retina, Peter Sterling; cerebellum, Rodolfo R. Llinas and Kerry D. Walton; thalamus, S. Murray Sherman and Christof Koch; basal ganglia, Charles J. Wilson; olfactory cortex, Lewis B. Haberly; hippocampus, Thomas H. Brown and Anthony M. Zador; neocortex, Rodney J. Douglas and Kevan A.C. Martin. Edited by Gordon M. Shepherd. Appendix: Dendritic electrotonus and synaptic integration.

3,241 citations

Book
15 Nov 1996
TL;DR: Spikes provides a self-contained review of relevant concepts in information theory and statistical decision theory about the representation of sensory signals in neural spike trains and a quantitative framework is used to pose precise questions about the structure of the neural code.
Abstract: Our perception of the world is driven by input from the sensory nerves. This input arrives encoded as sequences of identical spikes. Much of neural computation involves processing these spike trains. What does it mean to say that a certain set of spikes is the right answer to a computational problem? In what sense does a spike train convey information about the sensory world? Spikes begins by providing precise formulations of these and related questions about the representation of sensory signals in neural spike trains. The answers to these questions are then pursued in experiments on sensory neurons. The authors invite the reader to play the role of a hypothetical observer inside the brain who makes decisions based on the incoming spike trains. Rather than asking how a neuron responds to a given stimulus, the authors ask how the brain could make inferences about an unknown stimulus from a given neural response. The flavor of some problems faced by the organism is captured by analyzing the way in which the observer can make a running reconstruction of the sensory stimulus as it evolves in time. These ideas are illustrated by examples from experiments on several biological systems. Intended for neurobiologists with an interest in mathematical analysis of neural data as well as the growing number of physicists and mathematicians interested in information processing by "real" nervous systems, Spikes provides a self-contained review of relevant concepts in information theory and statistical decision theory. A quantitative framework is used to pose precise questions about the structure of the neural code. These questions in turn influence both the design and analysis of experiments on sensory neurons.

2,811 citations

Journal ArticleDOI
09 Jun 1995-Science
TL;DR: Data suggest a low intrinsic noise level in spike generation, which could allow cortical neurons to accurately transform synaptic input into spike sequences, supporting a possible role for spike timing in the processing of cortical information by the neocortex.
Abstract: It is not known whether the variability of neural activity in the cerebral cortex carries information or reflects noisy underlying mechanisms. In an examination of the reliability of spike generation using recordings from neurons in rat neocortical slices, the precision of spike timing was found to depend on stimulus transients. Constant stimuli led to imprecise spike trains, whereas stimuli with fluctuations resembling synaptic activity produced spike trains with timing reproducible to less than 1 millisecond. These data suggest a low intrinsic noise level in spike generation, which could allow cortical neurons to accurately transform synaptic input into spike sequences, supporting a possible role for spike timing in the processing of cortical information by the neocortex.

1,846 citations
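The constant-versus-fluctuating-drive comparison in that study can be caricatured with a leaky integrate-and-fire toy model. This is a sketch added for illustration, not the paper's actual protocol; the model, parameter values, and helper names are all assumptions:

```python
import random

def lif_trial(drive, noise_sd=0.05, threshold=1.0, tau=10.0, dt=1.0, seed=None):
    """One trial of a leaky integrate-and-fire neuron driven by `drive`
    plus small per-trial noise; returns the list of spike times."""
    rng = random.Random(seed)
    v, spikes = 0.0, []
    for t, i_t in enumerate(drive):
        v += dt * (-v / tau + i_t + rng.gauss(0.0, noise_sd))
        if v >= threshold:
            spikes.append(t * dt)
            v = 0.0
    return spikes

def nth_spike_sd(trials, n):
    """Standard deviation of the n-th spike time across trials (jitter)."""
    times = [s[n] for s in trials if len(s) > n]
    m = sum(times) / len(times)
    return (sum((t - m) ** 2 for t in times) / len(times)) ** 0.5

# Frozen fluctuating drive (same waveform every trial) vs. a constant drive
# with the same mean, mirroring the experiment's two stimulus conditions.
wave_rng = random.Random(0)
fluct = [max(0.0, wave_rng.gauss(0.2, 0.3)) for _ in range(500)]
const = [sum(fluct) / len(fluct)] * 500

fluct_trials = [lif_trial(fluct, seed=k) for k in range(20)]
const_trials = [lif_trial(const, seed=k) for k in range(20)]
```

In line with the paper's finding, one would expect later spikes under the constant drive to drift apart across trials while the fluctuating drive re-aligns them to its transients (compare `nth_spike_sd` for the two conditions), though this toy model is only illustrative.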

Journal ArticleDOI
TL;DR: In this article, the authors show that most of the characterizations reported thus far in the literature are special cases of the following general result: a standard multilayer feedforward network with a locally bounded, piecewise-continuous activation function can approximate any continuous function to any degree of accuracy if and only if the activation function is not a polynomial.

1,581 citations
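A minimal concrete instance of why the non-polynomial condition matters (my illustration, not from the article): two ReLU units represent |x| exactly, whereas any network built from a polynomial activation computes a polynomial, and no polynomial equals |x| on an interval around zero.

```python
def relu(z):
    """ReLU is non-polynomial, so by the result above networks using it
    are universal approximators."""
    return max(0.0, z)

def abs_net(x):
    """One hidden layer, two ReLU units, output weights (1, 1):
    |x| = relu(x) + relu(-x)."""
    return 1.0 * relu(1.0 * x) + 1.0 * relu(-1.0 * x)

# The representation is exact, not just approximate:
for x in (-2.5, -1.0, 0.0, 3.0):
    assert abs_net(x) == abs(x)
```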
