Journal ArticleDOI

Neurocube: a programmable digital neuromorphic architecture with high-density 3D memory

TLDR
The basic architecture of the Neurocube is presented, together with an analysis of the logic tier synthesized in 28nm and 15nm process technologies; performance is evaluated by mapping a Convolutional Neural Network onto the architecture and estimating the resulting power and performance for both training and inference.
Abstract
This paper presents a programmable and scalable digital neuromorphic architecture based on 3D high-density memory integrated with a logic tier for efficient neural computing. The proposed architecture consists of clusters of processing engines (PEs), connected by a 2D mesh network as a processing tier, which is integrated in 3D with multiple tiers of DRAM. The PE clusters access multiple memory channels (vaults) in parallel. The operating principle, referred to as memory-centric computing, embeds specialized state machines within the vault controllers of the HMC to drive data into the PE clusters. The paper presents the basic architecture of the Neurocube and an analysis of the logic tier synthesized in 28nm and 15nm process technologies. The performance of the Neurocube is evaluated and illustrated by mapping a Convolutional Neural Network and estimating the resulting power and performance for both training and inference.
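
To make the memory-centric mapping concrete, the sketch below estimates how one convolutional layer's multiply-accumulate (MAC) work would be split across vault-fed PE clusters. The vault and PE counts and the example layer shape are illustrative assumptions, not figures from the paper.

```python
# Illustrative sketch only: partition one conv layer's MACs across PE
# clusters, each fed by its own memory vault. All counts are assumptions.
NUM_VAULTS = 16          # assumed number of vaults / PE clusters
PES_PER_CLUSTER = 8      # assumed processing engines per cluster

def conv_layer_macs(in_ch, out_ch, k, out_h, out_w):
    """Multiply-accumulate count of a k x k convolutional layer."""
    return in_ch * out_ch * k * k * out_h * out_w

def map_layer_to_clusters(in_ch, out_ch, k, out_h, out_w):
    """Ideal, perfectly balanced split of the layer across clusters and PEs."""
    total = conv_layer_macs(in_ch, out_ch, k, out_h, out_w)
    per_cluster = total / NUM_VAULTS
    per_pe = per_cluster / PES_PER_CLUSTER
    return total, per_cluster, per_pe

# Example layer (assumed): 64 -> 128 channels, 3x3 kernel, 56x56 output
total, per_cluster, per_pe = map_layer_to_clusters(64, 128, 3, 56, 56)
print(f"total MACs: {total:.3e}, per cluster: {per_cluster:.3e}, per PE: {per_pe:.3e}")
```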


Citations
Proceedings ArticleDOI

An Automated Approach to Compare Bit Serial and Bit Parallel In-Memory Computing for DNNs

TL;DR: In this paper, the authors compare bit-serial arithmetic (BSA) and bit-parallel arithmetic (BPA) for in-memory computing in SRAM using CACTI, and find that BSA yields at least 25% lower energy-delay product (EDP) than BPA for large sub-array sizes.
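
For intuition on the two arithmetic styles being compared, here is a minimal sketch of bit-serial versus bit-parallel dot products on unsigned operands; it illustrates the computation only and does not reproduce the paper's CACTI-based energy-delay modeling.

```python
def dot_bit_parallel(xs, ws):
    """Bit-parallel: each multiply consumes the full weight at once."""
    return sum(x * w for x, w in zip(xs, ws))

def dot_bit_serial(xs, ws, bits=8):
    """Bit-serial: one weight bit per 'cycle', combined by shift-and-add."""
    acc = 0
    for b in range(bits):
        partial = sum(x * ((w >> b) & 1) for x, w in zip(xs, ws))
        acc += partial << b          # weight the partial sum by 2^b
    return acc

xs, ws = [3, 7, 1, 12], [5, 2, 9, 4]
assert dot_bit_serial(xs, ws) == dot_bit_parallel(xs, ws)   # both give 86
```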
Patent

Systems, methods and devices for data quantization

TL;DR: In this paper, the memory quantization unit is configured to obtain, via the first interface, a first weight value from the at least one memory bank; quantize the first weight value to generate at least one quantized weight value having a shorter bit length than that of the first weight value; and communicate the quantized weight value to the data requesting unit via the second interface.
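
As a rough illustration of the general idea (reducing weight bit length before sending values to a requester), the sketch below applies symmetric uniform quantization to a float weight tensor; the 8-bit scheme and function names are assumptions for illustration, not the patent's claimed method.

```python
import numpy as np

def quantize_weights(w: np.ndarray, bits: int = 8):
    """Map float weights to signed `bits`-bit integers plus one float scale."""
    qmax = 2 ** (bits - 1) - 1
    scale = float(np.max(np.abs(w))) / qmax or 1.0   # guard against all-zero w
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_weights(w)
print("max abs quantization error:", np.max(np.abs(w - dequantize(q, s))))
```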
Proceedings ArticleDOI

Implementing binary neural networks in memory with approximate accumulation

TL;DR: This work proposes BitNAP (binarized neural network acceleration with in-memory thresholding), which optimizes at the operation, peripheral, and architecture levels for an efficient BNN accelerator, and presents a memory sense-amplifier sharing scheme along with a novel operation pipelining to reduce the latency overhead of sharing.
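
For context, a binarized neuron's dot product over {-1, +1} values reduces to an XNOR plus popcount, and its sign activation can be produced by comparing the popcount to a threshold instead of materializing the full accumulation. The sketch below shows that generic equivalence; it is not BitNAP's in-SRAM design.

```python
def bnn_neuron(x_bits, w_bits, threshold):
    """x_bits, w_bits: 0/1 lists encoding -1/+1 inputs and weights.
    Returns +1 if the popcount of XNOR matches clears `threshold`, else -1."""
    matches = sum(1 for x, w in zip(x_bits, w_bits) if x == w)  # XNOR popcount
    # The {-1,+1} dot product equals 2*matches - n, so thresholding `matches`
    # is equivalent to thresholding the fully accumulated sum.
    return 1 if matches >= threshold else -1

x = [1, 0, 1, 1, 0, 1, 0, 1]
w = [1, 1, 1, 0, 0, 1, 1, 1]
print(bnn_neuron(x, w, threshold=5))   # prints 1 (5 of 8 positions match)
```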
Posted Content

PIM-DRAM: Accelerating Machine Learning Workloads using Processing in Memory based on DRAM Technology

TL;DR: A DRAM-based processing-in-memory (PIM) multiplication primitive, coupled with intra-bank accumulation, is proposed to accelerate matrix-vector operations in ML workloads; system evaluations show that the proposed architecture, mapping, and data flow can provide up to 23x and 6.5x benefits over a GPU and over an ideal conventional baseline architecture with infinite compute bandwidth, respectively.
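
A small sketch of the dataflow idea only: a matrix-vector product column-partitioned across banks, with partial sums formed per bank (intra-bank accumulation) before a final cross-bank reduction. The bank count and matrix sizes are illustrative assumptions.

```python
import numpy as np

NUM_BANKS = 8  # assumed number of banks participating in the PIM operation

def pim_matvec(W: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Each bank multiplies its column slice of W by the matching slice of x,
    then the per-bank partial results are reduced into the final vector."""
    col_slices = np.array_split(np.arange(W.shape[1]), NUM_BANKS)
    partials = [W[:, cols] @ x[cols] for cols in col_slices]  # per-bank work
    return np.sum(partials, axis=0)                           # cross-bank reduce

W, x = np.random.randn(16, 64), np.random.randn(64)
assert np.allclose(pim_matvec(W, x), W @ x)
```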
Proceedings ArticleDOI

Efficient Management of Scratch-Pad Memories in Deep Learning Accelerators

TL;DR: OnSRAM, as discussed by the authors, is a novel scratch-pad memory (SPM) management framework integrated with a DL accelerator runtime; it works on static graphs to identify data structures that should be held on-chip based on their properties, and it also targets an eager execution model (no graph) using a speculative scheme to hold or discard data structures.
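
To illustrate the kind of decision such a framework makes, here is a minimal sketch that ranks tensors by an assumed reuse-per-byte heuristic and greedily pins the best ones in a fixed-capacity scratch-pad; the data model and policy are assumptions, not OnSRAM's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Tensor:
    name: str
    size_bytes: int
    reuse_count: int   # how many operators read this tensor before it dies

def pick_resident_tensors(tensors, spm_capacity):
    """Greedily keep tensors with the highest reuse per byte while they fit."""
    ranked = sorted(tensors, key=lambda t: t.reuse_count / t.size_bytes,
                    reverse=True)
    resident, used = [], 0
    for t in ranked:
        if used + t.size_bytes <= spm_capacity:
            resident.append(t.name)
            used += t.size_bytes
    return resident

graph = [Tensor("act0", 512 * 1024, 4), Tensor("w1", 2 * 1024 * 1024, 1),
         Tensor("act1", 256 * 1024, 6)]
print(pick_resident_tensors(graph, spm_capacity=1024 * 1024))  # ['act1', 'act0']
```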
References
Journal ArticleDOI

Gradient-based learning applied to document recognition

TL;DR: In this article, a graph transformer network (GTN) is proposed for handwritten character recognition, which can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters.
Journal ArticleDOI

Deep learning in neural networks

TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium; it reviews deep supervised learning, unsupervised learning, reinforcement learning, and evolutionary computation, as well as indirect search for short programs encoding deep and large networks.
Book

Neural Networks And Learning Machines

Simon Haykin
TL;DR: Refocused, revised and renamed to reflect the duality of neural networks and learning machines, this edition recognizes that the subject matter is richer when these topics are studied together.
Journal ArticleDOI

Cellular neural networks: theory

TL;DR: In this article, a class of information processing systems called cellular neural networks (CNNs) are proposed, which consist of a massive aggregate of regularly spaced circuit clones, called cells, which communicate with each other directly through their nearest neighbors.
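
As a rough illustration of that cell-to-neighbor coupling, the sketch below runs a simplified discrete-time version of the standard cellular-network state equation dx/dt = -x + A*y + B*u + z, where each cell is driven only by its 3x3 neighborhood through feedback (A) and control (B) templates; the templates, step size, and input image are illustrative assumptions.

```python
import numpy as np

def cnn_output(x):
    """Standard piecewise-linear output: clip the cell state to [-1, 1]."""
    return np.clip(x, -1.0, 1.0)

def neighborhood_sum(field, template):
    """Template-weighted sum of each cell's 3x3 neighborhood (zero padding)."""
    padded = np.pad(field, 1)
    out = np.zeros_like(field)
    for di in range(3):
        for dj in range(3):
            out += template[di, dj] * padded[di:di + field.shape[0],
                                             dj:dj + field.shape[1]]
    return out

def cnn_step(x, u, A, B, z, dt=0.1):
    """Explicit-Euler step of dx/dt = -x + A*y + B*u + z."""
    y = cnn_output(x)
    return x + dt * (-x + neighborhood_sum(y, A) + neighborhood_sum(u, B) + z)

# Edge-extraction-like templates (assumed) on a small binary input image
A = np.array([[0, 0, 0], [0, 2, 0], [0, 0, 0]], float)
B = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], float)
u = np.zeros((8, 8)); u[2:6, 2:6] = 1.0
x = np.zeros_like(u)
for _ in range(100):
    x = cnn_step(x, u, A, B, z=-1.0)
print(cnn_output(x))
```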
Book ChapterDOI

Gradient-Based Learning Applied to Document Recognition

TL;DR: Various methods applied to handwritten character recognition are reviewed and compared, and Convolutional Neural Networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques.
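
A minimal sketch of the convolution operation those networks are built on: the same small kernel is applied at every spatial position, which is what gives the model its tolerance to shifts and distortions of 2D shapes. The single-channel setup and shapes below are illustrative assumptions.

```python
import numpy as np

def conv2d_valid(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Single-channel 'valid' 2D convolution (cross-correlation, as in most
    neural-network frameworks): reuse one kernel at every spatial position."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

img = np.random.rand(28, 28)          # MNIST-sized grayscale input
kernel = np.random.randn(5, 5)        # one 5x5 filter (random here, learned in practice)
print(conv2d_valid(img, kernel).shape)   # (24, 24) feature map
```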