Journal ArticleDOI

A million spiking-neuron integrated circuit with a scalable communication network and interface

TL;DR: Inspired by the brain’s structure, an efficient, scalable, and flexible non–von Neumann architecture is developed that leverages contemporary silicon technology and is well suited to many applications that use complex neural networks in real time, for example, multiobject detection and classification.
Abstract: Inspired by the brain’s structure, we have developed an efficient, scalable, and flexible non–von Neumann architecture that leverages contemporary silicon technology. To demonstrate, we built a 5.4-billion-transistor chip with 4096 neurosynaptic cores interconnected via an intrachip network that integrates 1 million programmable spiking neurons and 256 million configurable synapses. Chips can be tiled in two dimensions via an interchip communication interface, seamlessly scaling the architecture to a cortexlike sheet of arbitrary size. The architecture is well suited to many applications that use complex neural networks in real time, for example, multiobject detection and classification. With 400-pixel-by-240-pixel video input at 30 frames per second, the chip consumes 63 milliwatts.
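The headline figures are internally consistent, as a quick sanity check shows. In the sketch below, the per-core breakdown (256 neurons and a 256 × 256 synaptic crossbar per core) is inferred from the quoted totals rather than stated in the abstract:

```python
# Sanity check of the quoted TrueNorth figures. The per-core breakdown
# (256 neurons, 256x256 synapses) is inferred from the totals, not stated above.
cores = 4096
neurons = cores * 256            # 1,048,576 -> "1 million programmable neurons"
synapses = cores * 256 * 256     # 268,435,456 -> "256 million configurable synapses"
print(f"{neurons:,} neurons, {synapses:,} synapses")

# Energy per input pixel at the quoted operating point: 63 mW, 400x240 @ 30 fps.
pixel_rate = 400 * 240 * 30                        # pixels per second
print(f"{0.063 / pixel_rate * 1e9:.1f} nJ/pixel")  # ~21.9 nJ per input pixel
```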
Citations
Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning, and evolutionary computation, as well as indirect search for short programs encoding deep and large networks.

14,635 citations


Additional excerpts

  • ...Future energy-efficient hardware for DL in NNs may implement aspects of such models (e.g., Fieres, Schemmel, & Meier, 2008; Glackin, McGinnity, Maguire, Wu, & Belatreche, 2005; Indiveri et al., 2011; Jin et al., 2010; Khan et al., 2008; Liu et al., 2001; Merolla et al., 2014; Neil & Liu, 2014; Roggen, Hofmann, Thoma, & Floreano, 2003; Schemmel, Grubl, Meier, & Mueller, 2006; Serrano-Gotarredona et al., 2009)....


Journal ArticleDOI
20 Nov 2017
TL;DR: The authors provide a comprehensive tutorial and survey of recent advances toward enabling efficient processing of DNNs, discuss the various hardware platforms and architectures that support DNNs, and highlight key trends in reducing the computation cost of deep neural networks, either solely via hardware design changes or via joint hardware and DNN algorithm changes.
Abstract: Deep neural networks (DNNs) are currently widely used for many artificial intelligence (AI) applications including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, this comes at the cost of high computational complexity. Accordingly, techniques that enable efficient processing of DNNs to improve energy efficiency and throughput without sacrificing application accuracy or increasing hardware cost are critical to the wide deployment of DNNs in AI systems. This article aims to provide a comprehensive tutorial and survey about the recent advances toward the goal of enabling efficient processing of DNNs. Specifically, it will provide an overview of DNNs, discuss various hardware platforms and architectures that support DNNs, and highlight key trends in reducing the computation cost of DNNs either solely via hardware design changes or via joint hardware design and DNN algorithm changes. It will also summarize various development resources that enable researchers and practitioners to quickly get started in this field, and highlight important benchmarking metrics and design considerations that should be used for evaluating the rapidly growing number of DNN hardware designs, optionally including algorithmic codesigns, being proposed in academia and industry. The reader will take away the following concepts from this article: understand the key design considerations for DNNs; be able to evaluate different DNN hardware implementations with benchmarks and comparison metrics; understand the tradeoffs between various hardware architectures and platforms; be able to evaluate the utility of various DNN design techniques for efficient processing; and understand recent implementation trends and opportunities.
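As a minimal illustration of the benchmarking metrics the survey emphasizes, the sketch below compares accelerator designs by throughput and energy per inference; all numbers are hypothetical placeholders, not measurements from the article:

```python
# Comparing DNN hardware by two common metrics: throughput (inferences/s)
# and energy per inference. All numbers are hypothetical placeholders.
designs = {
    "design_a": {"inf_per_s": 1000.0, "power_w": 2.0},
    "design_b": {"inf_per_s": 400.0, "power_w": 0.5},
}
for name, d in designs.items():
    mj_per_inf = d["power_w"] / d["inf_per_s"] * 1e3   # millijoules per inference
    print(f"{name}: {d['inf_per_s']:.0f} inf/s, {mj_per_inf:.2f} mJ/inference")
```

Neither metric alone suffices; the survey stresses evaluating accuracy, throughput, energy, and hardware cost together.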

2,391 citations


Cites background from "A million spiking-neuron integrated..."

  • ...An example of a project that was inspired by the spiking of the brain is the IBM TrueNorth [8]....

    [...]

Journal ArticleDOI
TL;DR: Loihi is a 60 mm² chip fabricated in Intel's 14-nm process that advances the state-of-the-art modeling of spiking neural networks in silicon, and can solve LASSO optimization problems with an energy-delay product more than three orders of magnitude better than conventional solvers running on a CPU at iso process, voltage, and area.
Abstract: Loihi is a 60 mm² chip fabricated in Intel's 14-nm process that advances the state-of-the-art modeling of spiking neural networks in silicon. It integrates a wide range of novel features for the field, such as hierarchical connectivity, dendritic compartments, synaptic delays, and, most importantly, programmable synaptic learning rules. Running a spiking convolutional form of the Locally Competitive Algorithm, Loihi can solve LASSO optimization problems with an energy-delay product more than three orders of magnitude better than conventional solvers running on a CPU at iso process, voltage, and area. This provides an unambiguous example of spike-based computation outperforming all known conventional solutions.
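The Locally Competitive Algorithm (LCA) referenced here maps LASSO onto neuron-like dynamics: each unit is driven by how well the input matches its dictionary atom, inhibited by overlapping active units, and passed through a soft threshold. A minimal rate-based NumPy sketch of LCA follows (illustrative only; Loihi runs a spiking convolutional variant):

```python
import numpy as np

# Rate-based Locally Competitive Algorithm for LASSO:
#   minimize 0.5*||x - Phi @ a||^2 + lam*||a||_1
rng = np.random.default_rng(0)
Phi = rng.standard_normal((32, 64))
Phi /= np.linalg.norm(Phi, axis=0)                 # unit-norm dictionary atoms
a_true = rng.standard_normal(64) * (rng.random(64) < 0.1)
x = Phi @ a_true                                   # signal with a sparse code

lam, tau, dt = 0.1, 10.0, 1.0
G = Phi.T @ Phi - np.eye(64)                       # lateral inhibition weights
b = Phi.T @ x                                      # feedforward drive
u = np.zeros(64)                                   # membrane-like internal state
for _ in range(500):
    a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # soft threshold
    u += (dt / tau) * (b - u - G @ a)              # LCA dynamics
print("active units:", np.count_nonzero(a),
      "residual:", round(float(np.linalg.norm(x - Phi @ a)), 3))
```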

2,331 citations

Journal ArticleDOI
07 May 2015 - Nature
TL;DR: This work experimentally implements transistor-free metal-oxide memristor crossbars with device variability sufficiently low to allow operation of integrated neural networks, demonstrated on a simple network: a single-layer perceptron (an algorithm for linear classification).
Abstract: Despite much progress in semiconductor integrated circuit technology, the extreme complexity of the human cerebral cortex, with its approximately 10¹⁴ synapses, makes the hardware implementation of neuromorphic networks with a comparable number of devices exceptionally challenging. To provide comparable complexity while operating much faster and with manageable power dissipation, networks based on circuits combining complementary metal-oxide-semiconductors (CMOSs) and adjustable two-terminal resistive devices (memristors) have been developed. In such circuits, the usual CMOS stack is augmented with one or several crossbar layers, with memristors at each crosspoint. There have recently been notable improvements in the fabrication of such memristive crossbars and their integration with CMOS circuits, including first demonstrations of their vertical integration. Separately, discrete memristors have been used as artificial synapses in neuromorphic networks. Very recently, such experiments have been extended to crossbar arrays of phase-change memristive devices. The adjustment of such devices, however, requires an additional transistor at each crosspoint, and hence these devices are much harder to scale than metal-oxide memristors, whose nonlinear current-voltage curves enable transistor-free operation. Here we report the experimental implementation of transistor-free metal-oxide memristor crossbars, with device variability sufficiently low to allow operation of integrated neural networks, in a simple network: a single-layer perceptron (an algorithm for linear classification). The network can be taught in situ using a coarse-grain variety of the delta rule algorithm to perform the perfect classification of 3 × 3-pixel black/white images into three classes (representing letters). This demonstration is an important step towards much larger and more complex memristive neuromorphic networks.
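In software terms, the demonstrated network is an ordinary single-layer perceptron trained with the delta rule. A minimal sketch on 3 × 3 black/white images follows; the three letter-like patterns are invented stand-ins, not the paper's stimuli:

```python
import numpy as np

# Single-layer perceptron trained with the delta rule on 3x3 black/white
# images, classified into three classes. Patterns are invented stand-ins.
X = 2.0 * np.array([
    [1,1,1, 0,1,0, 1,1,1],   # stylized 'I'
    [1,0,0, 1,0,0, 1,1,1],   # stylized 'L'
    [1,0,1, 1,1,1, 1,0,1],   # stylized 'H'
]) - 1.0                     # map {0,1} pixels to {-1,+1} inputs
T = 2.0 * np.eye(3) - 1.0    # one-hot targets in {-1,+1}

W = np.zeros((3, 9))         # one weight row per output class
for _ in range(100):
    for x, t in zip(X, T):
        y = np.tanh(W @ x)               # graded activation
        W += 0.1 * np.outer(t - y, x)    # delta rule: error times input
print("predicted:", np.argmax(X @ W.T, axis=1))  # -> [0 1 2], perfect classification
```

In the hardware, each weight is a memristor conductance adjusted in place; the update is presumably quantized to programming pulses, which is one reading of the abstract's "coarse-grain variety" of the delta rule.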

2,222 citations


References
Journal ArticleDOI
TL;DR: The Computational Brain provides a broad overview of neuroscience and computational theory, followed by a study of some of the most recent and sophisticated modeling work in the context of relevant neurobiological research.

1,472 citations

Journal ArticleDOI
TL;DR: The discovery of an Ag₂S inorganic synapse is reported, which emulates the synaptic functions of both STP and LTP through the input pulse repetition time, marking a breakthrough in mimicking synaptic behaviour essential for the further creation of artificial neural systems that emulate characteristics of human memory.
Abstract: The electronic properties of inorganic devices such as memristors can be used to simulate neurological behaviour. In particular, ionic and electronic effects in a silver sulphide device are now shown to mimic short- and long-term synaptic functions. Memory is believed to occur in the human brain as a result of two types of synaptic plasticity: short-term plasticity (STP) and long-term potentiation (LTP; refs 1, 2, 3, 4). In neuromorphic engineering (refs 5, 6), emulation of known neural behaviour has proven to be difficult to implement in software because of the highly complex interconnected nature of thought processes. Here we report the discovery of an Ag₂S inorganic synapse, which emulates the synaptic functions of both STP and LTP characteristics through the use of input pulse repetition time. The structure known as an atomic switch (refs 7, 8), operating at critical voltages, stores information as STP with a spontaneous decay of conductance level in response to intermittent input stimuli, whereas frequent stimulation results in a transition to LTP. The Ag₂S inorganic synapse has interesting characteristics with analogies to an individual biological synapse, and achieves dynamic memorization in a single device without the need of external preprogramming. A psychological model related to the process of memorizing and forgetting is also demonstrated using the inorganic synapses. Our Ag₂S element indicates a breakthrough in mimicking synaptic behaviour essential for the further creation of artificial neural systems that emulate characteristics of human memory.
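The pulse-timing dependence described here can be caricatured by a leaky accumulator: each pulse increments the conductance, which decays spontaneously between pulses, so only closely spaced pulses consolidate into a lasting change. A minimal phenomenological sketch (illustrative parameters, not fitted to the Ag₂S device):

```python
import math

# Leaky-accumulator caricature of the STP -> LTP transition.
# Parameters are illustrative, not fitted to the Ag2S atomic switch.
def final_conductance(n_pulses, interval_s, bump=0.2, tau_s=5.0, ltp_level=1.0):
    g = 0.0
    for _ in range(n_pulses):
        g = min(g + bump, ltp_level)        # each pulse raises conductance
        if g >= ltp_level:
            return g                        # consolidated: long-term potentiation
        g *= math.exp(-interval_s / tau_s)  # spontaneous decay: short-term memory
    return g

print("intermittent pulses:", round(final_conductance(10, 20.0), 3))  # ~0.004, decays (STP)
print("frequent pulses:   ", round(final_conductance(10, 0.5), 3))    # 1.0, persists (LTP)
```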

1,404 citations

Journal ArticleDOI
22 Jun 2000 - Nature
TL;DR: A model of cortical processing is presented as an electronic circuit that emulates the neocortex's hybrid digital-analogue operation, and so is able to perform computations similar to stimulus selection, gain modulation and spatiotemporal pattern generation in the neocortex.
Abstract: Digital circuits such as the flip-flop use feedback to achieve multistability and nonlinearity to restore signals to logical levels, for example 0 and 1. Analogue feedback circuits are generally designed to operate linearly, so that signals span a continuous range and the response is unique. By contrast, the response of cortical circuits to sensory stimulation can be both multistable and graded. We propose that the neocortex combines digital selection of an active set of neurons with analogue response by dynamically varying the positive feedback inherent in its recurrent connections. Strong positive feedback causes differential instabilities that drive the selection of a set of active neurons under the constraints embedded in the synaptic weights. Once selected, the active neurons generate weaker, stable feedback that provides analogue amplification of the input. Here we present our model of cortical processing as an electronic circuit that emulates this hybrid operation, and so is able to perform computations that are similar to stimulus selection, gain modulation and spatiotemporal pattern generation in the neocortex.
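The coexistence of digital selection and analogue amplification can be sketched as a rectified-linear recurrent network: self-excitation plus global inhibition switches all but the strongest input off (selection), while the surviving activity scales linearly with input strength (amplification). The parameter values below are assumptions for illustration, not the paper's circuit:

```python
import numpy as np

# Rectified recurrent network: digital selection + analogue amplification.
# alpha = self-excitation, beta = global inhibition (assumed values).
def settle(b, alpha=0.9, beta=0.2, dt=0.1, steps=3000):
    x = np.zeros_like(b)
    for _ in range(steps):
        drive = b + alpha * x - beta * x.sum()    # positive feedback + inhibition
        x += dt * (np.maximum(drive, 0.0) - x)    # rectification and leak
    return x

print(settle(np.array([1.0, 0.5, 0.3])))  # one active unit, gain 1/(1-alpha+beta) ~ 3.3
print(settle(np.array([2.0, 1.0, 0.6])))  # same selection; winner's amplitude doubles
```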

1,212 citations

Journal ArticleDOI
24 Apr 2014
TL;DR: Neurogrid is a neuromorphic system that simulates large-scale neural models in real time, using 16 Neurocores to emulate the four neural elements: axonal arbor, synapse, dendritic tree, and soma.
Abstract: In this paper, we describe the design of Neurogrid, a neuromorphic system for simulating large-scale neural models in real time. Neuromorphic systems realize the function of biological neural systems by emulating their structure. Designers of such systems face three major design choices: 1) whether to emulate the four neural elements (axonal arbor, synapse, dendritic tree, and soma) with dedicated or shared electronic circuits; 2) whether to implement these electronic circuits in an analog or digital manner; and 3) whether to interconnect arrays of these silicon neurons with a mesh or a tree network. The choices we made were: 1) we emulated all neural elements except the soma with shared electronic circuits; this choice maximized the number of synaptic connections; 2) we realized all electronic circuits except those for axonal arbors in an analog manner; this choice maximized energy efficiency; and 3) we interconnected neural arrays in a tree network; this choice maximized throughput. These three choices made it possible to simulate a million neurons with billions of synaptic connections in real time, for the first time, using 16 Neurocores integrated on a board that consumes three watts.
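A quick tally of the quoted scale (the 256 × 256 per-Neurocore array size is an assumption consistent with the one-million-neuron total):

```python
# Tally of Neurogrid's quoted scale; the 256x256 per-Neurocore array size
# is an assumption consistent with the one-million-neuron total.
neurocores = 16
neurons = neurocores * 256 * 256                  # 1,048,576 neurons
print(f"{neurons:,} neurons at 3 W")
print(f"{3.0 / neurons * 1e6:.1f} uW per neuron")  # ~2.9 microwatts per neuron
```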

978 citations

Journal ArticleDOI
26 Sep 2003 - Science
TL;DR: The authors are beginning to understand some of the geometric, biophysical, and energy constraints that have governed the evolution of cortical networks and how the brain exploits the adaptability of biological systems to reconfigure in response to changing needs.
Abstract: Brains perform with remarkable efficiency, are capable of prodigious computation, and are marvels of communication. We are beginning to understand some of the geometric, biophysical, and energy constraints that have governed the evolution of cortical networks. To operate efficiently within these constraints, nature has optimized the structure and function of cortical networks with design principles similar to those used in electronic networks. The brain also exploits the adaptability of biological systems to reconfigure in response to changing needs.

892 citations