Open Access · Proceedings Article · DOI

Designing an analog crossbar based neuromorphic accelerator

TL;DR: An analog crossbar has a fundamental O(N) energy-scaling advantage over a digital system because the crossbar performs its entire computation in one step, charging all the capacitances only once.
Abstract
Resistive memory crossbars can reduce the energy required to perform computations in neural algorithms by three orders of magnitude compared to an optimized digital ASIC [1]. For data-intensive applications, the computational energy is dominated by moving data between the processor, SRAM, and DRAM. Analog crossbars overcome this by allowing data to be processed directly at each memory element. Analog crossbars accelerate the three key operations that form the bulk of the computation in a neural network, as illustrated in Fig 1: vector-matrix multiplies (VMM), matrix-vector multiplies (MVM), and outer-product rank-1 updates (OPU) [2]. For an N×N crossbar, the energy for each operation scales as the number of memory elements, O(N²) [2]. This is because the crossbar performs its entire computation in one step, charging all the capacitances only once. Thus the CV² energy of the array scales as the array size. This is fundamentally better than reading or writing a digital memory. Each row of an N×N digital memory must be accessed one at a time, so N columns of length O(N) are charged N times, requiring O(N³) energy to read the full memory. Thus an analog crossbar has a fundamental O(N) energy-scaling advantage over a digital system. Furthermore, if the read operation is done at low voltage and is therefore noise limited, the read energy can even be independent of the crossbar size, O(1) [2].
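The three crossbar operations and the energy-scaling argument above can be sketched numerically. The following is an illustrative model only, not the accelerator described in the paper: the array size N, conductance range, capacitance C, and voltage V are all assumed values, and the physics is idealized Ohm's-law behavior.

```python
import numpy as np

N = 4
rng = np.random.default_rng(0)
# Conductance matrix (siemens): each cell is one resistive memory element.
G = rng.uniform(1e-6, 1e-4, size=(N, N))

def vmm(v_rows, G):
    """Vector-matrix multiply: drive rows with voltages, sum currents per column."""
    return v_rows @ G

def mvm(G, v_cols):
    """Matrix-vector multiply: drive columns with voltages, read currents on rows."""
    return G @ v_cols

def opu(G, v_rows, v_cols, lr=1e-3):
    """Outer-product rank-1 update: simultaneous row/column pulses nudge every cell at once."""
    return G + lr * np.outer(v_rows, v_cols)

def analog_energy(N, C=1e-15, V=1.0):
    # One-step operation charges all N^2 cell capacitances once: E ~ N^2 * C * V^2.
    return N**2 * C * V**2

def digital_read_energy(N, C=1e-15, V=1.0):
    # N row accesses, each charging N bitlines of length O(N): E ~ N^3 * C * V^2.
    return N**3 * C * V**2

v = rng.uniform(0, 0.5, size=N)  # read voltages (volts)
print(vmm(v, G))                 # column currents (amps)
print(analog_energy(N), digital_read_energy(N))
```

Doubling N quadruples the modeled analog energy but increases the modeled digital read energy eightfold, which is the O(N) advantage the abstract describes.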


Citations
Journal Article · DOI

Organic electronics for neuromorphic computing

TL;DR: This Review Article examines the development of organic neuromorphic devices, considering the different switching mechanisms used in the devices and the challenges the field faces in delivering neuromorphic computing applications.
Book Chapter · DOI

Neuromorphic computing systems based on flexible organic electronics

TL;DR: This chapter reviews the development of organic neuromorphic devices, and highlights efforts to mimic essential brain functions, such as spiking phenomena, spatiotemporal processing, homeostasis, and functional connectivity, and demonstrates related applications.
Book Chapter · DOI

Truly Heterogeneous HPC: Co-design to Achieve What Science Needs from HPC

TL;DR: The authors use the example of mapping the connectome of the brain to illustrate the advantages of a heterogeneous HPC system that incorporates neuromorphic hardware, an emerging technology of interest to the HPC community for its potential to perform large-scale calculations with an extremely low power footprint.
Proceedings Article · DOI

Comparative analysis of spin based memories for neuromorphic computing

TL;DR: This paper compares the performance metrics of spin devices against other non-volatile memories for implementing a neural network with a single hidden layer, finding that the spin-device architecture consumes less area and much lower leakage power while achieving the same level of accuracy as the other devices.
Proceedings Article · DOI

ATHENA: Enabling Codesign for Next-Generation AI/ML Architectures

TL;DR: The authors present a codesign ecosystem that leverages the analytical tool ATHENA to accelerate design-space exploration and evaluation of novel architectures for artificial intelligence and machine learning (AI/ML) algorithms.
References
Posted Content

Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1

TL;DR: A binary matrix multiplication GPU kernel is written with which it is possible to run the MNIST BNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy.
Journal Article · DOI

A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic computing

TL;DR: This work describes an electrochemical neuromorphic organic device (ENODe) operating with a fundamentally different mechanism from existing memristors, opening a path towards extreme interconnectivity comparable to the human brain.
Journal Article · DOI

Li‐Ion Synaptic Transistor for Low Power Analog Computing

TL;DR: Nonvolatile redox transistors based upon Li-ion battery materials are demonstrated as memory elements for neuromorphic computer architectures with multi-level analog states, "write" linearity, low-voltage switching, and low power dissipation.
Proceedings Article · DOI

Resistive memory device requirements for a neural algorithm accelerator

TL;DR: This paper proposes a general-purpose neural architecture that can accelerate many different algorithms and determines the device properties that will be needed to run backpropagation on it.
Journal Article · DOI

Multiscale Co-Design Analysis of Energy, Latency, Area, and Accuracy of a ReRAM Analog Neural Training Accelerator

TL;DR: A detailed circuit- and device-level analysis of a ReRAM training accelerator is presented as a foundation for further architecture-level studies; the possible gains over a similar digital-only version of the accelerator block suggest that continued optimization of analog resistive memories is valuable.