Journal ArticleDOI

Neurocube: a programmable digital neuromorphic architecture with high-density 3D memory

TLDR
The basic architecture of the Neurocube is presented, along with an analysis of the logic tier synthesized in 28nm and 15nm process technologies; performance is evaluated by mapping a Convolutional Neural Network onto the architecture and estimating the resulting power and performance for both training and inference.
Abstract
This paper presents a programmable and scalable digital neuromorphic architecture based on 3D high-density memory integrated with a logic tier for efficient neural computing. The proposed architecture consists of clusters of processing engines, connected by a 2D mesh network as a processing tier, which is integrated in 3D with multiple tiers of DRAM. The PE clusters access multiple memory channels (vaults) in parallel. The operating principle, referred to as memory-centric computing, embeds specialized state machines within the vault controllers of the HMC to drive data into the PE clusters. The paper presents the basic architecture of the Neurocube and an analysis of the logic tier synthesized in 28nm and 15nm process technologies. The performance of the Neurocube is evaluated and illustrated through the mapping of a Convolutional Neural Network and estimation of the subsequent power and performance for both training and inference.
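
As a rough illustration of the memory-centric operating principle described in the abstract, the sketch below models a vault's embedded state machine streaming operands into a processing engine, rather than the PE issuing its own memory requests. The class names, toy data, and one-vault/one-PE pairing are illustrative assumptions, not the paper's actual microarchitecture.

```python
# Minimal sketch (not the authors' RTL): in memory-centric computing, a per-vault
# state machine generates addresses and pushes operands toward the logic tier,
# so the PE only consumes the stream and performs multiply-accumulates.

class Vault:
    """One HMC vault: a DRAM partition plus a simple address-generating state machine."""
    def __init__(self, weights, activations):
        self.weights = weights          # synaptic weights stored in this vault
        self.activations = activations  # input activations stored in this vault

    def stream(self):
        # The embedded state machine walks memory and yields (weight, activation)
        # pairs; the PE never computes addresses itself.
        for w, a in zip(self.weights, self.activations):
            yield w, a

class ProcessingEngine:
    """One PE in the logic tier: accumulates products of whatever the vault streams in."""
    def __init__(self):
        self.acc = 0.0

    def consume(self, w, a):
        self.acc += w * a

# One PE cluster paired with one memory channel (vault), as in the described architecture.
vault = Vault(weights=[0.1, -0.4, 0.7], activations=[1.0, 0.5, 2.0])
pe = ProcessingEngine()
for w, a in vault.stream():
    pe.consume(w, a)
print("partial neuron sum from this vault:", pe.acc)
```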


Citations
Journal ArticleDOI

Partitioned Persist Ordering

TL;DR: This work proposes Partitioned Persist Ordering (PPO), which ensures a correct persist ordering between CPU and NDP devices, as well as among multiple NDP devices, and prototypes an NDP system, NearPM, on an FPGA platform.
Posted ContentDOI

FC_ACCEL: Enabling Efficient, Low-Latency and Flexible Inference in DNN Fully Connected Layers, using Optimized Checkerboard Block matrix decomposition, fast scheduling, and a resource efficient 1D PE array with a custom HBM2 memory subsystem

TL;DR: A novel low-latency CMOS hardware accelerator for fully connected (FC) layers in deep neural networks (DNNs) is presented, using 128 8x8 or 16x16 processing elements for matrix-vector multiplication and 128 multiply-accumulate units integrated with 16 High Bandwidth Memory (HBM) stack units for storing the pre-trained weights.
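
As a rough sketch of the blocked matrix-vector computation implied by the checkerboard decomposition above, the snippet below tiles an FC layer's weight matrix into 16x16 blocks and accumulates per-block multiply-accumulate results. The function name, plain-Python loops, and toy sizes are illustrative assumptions, not the FC_ACCEL design.

```python
# Illustrative sketch only: y = W @ x computed block by block, in the spirit of a
# tiled FC layer mapped onto an array of MAC units. Tile size 16 mirrors the
# 16x16 processing elements mentioned in the summary; everything else is assumed.

TILE = 16

def fc_layer_blocked(W, x):
    """Blocked matrix-vector product: accumulate one 16x16 tile at a time."""
    rows, cols = len(W), len(W[0])
    y = [0.0] * rows
    for r0 in range(0, rows, TILE):          # one tile row of the weight matrix
        for c0 in range(0, cols, TILE):      # one 16x16 block within that row
            for r in range(r0, min(r0 + TILE, rows)):
                acc = 0.0
                for c in range(c0, min(c0 + TILE, cols)):
                    acc += W[r][c] * x[c]    # multiply-accumulate inside the block
                y[r] += acc                  # combine partial sums across blocks
    return y

# Tiny usage example: a 32x32 weight matrix and a 32-element input vector.
W = [[(i + j) % 3 - 1 for j in range(32)] for i in range(32)]
x = [1.0] * 32
print(fc_layer_blocked(W, x)[:4])
```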
Proceedings ArticleDOI

INCA: Input-stationary Dataflow at Outside-the-box Thinking about Deep Learning Accelerators

TL;DR: In this paper, an input-stationary (IS) crossbar accelerator (INCA) is proposed to support both inference and training for deep neural networks (DNNs).
Journal ArticleDOI

KAPLA: Pragmatic Representation and Fast Solving of Scalable NN Accelerator Dataflow

Zhiyao Li, +1 more
09 Jun 2023
TL;DR: In this article, a set of formal tensor-centric directives accurately expresses various inter-layer and intra-layer dataflow schemes and allows their validity and efficiency to be determined quickly.
References
Journal ArticleDOI

Gradient-based learning applied to document recognition

TL;DR: In this article, a graph transformer network (GTN) trained with gradient-based learning is proposed to synthesize a complex decision surface that can classify high-dimensional patterns such as handwritten characters.
Journal ArticleDOI

Deep learning in neural networks

TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning, and evolutionary computation, as well as indirect search for short programs encoding deep and large networks.
Book

Neural Networks And Learning Machines

Simon Haykin
TL;DR: Refocused, revised and renamed to reflect the duality of neural networks and learning machines, this edition recognizes that the subject matter is richer when these topics are studied together.
Journal ArticleDOI

Cellular neural networks: theory

TL;DR: In this article, a class of information processing systems called cellular neural networks (CNNs) are proposed, which consist of a massive aggregate of regularly spaced circuit clones, called cells, which communicate with each other directly through their nearest neighbors.
Book ChapterDOI

Gradient-Based Learning Applied to Document Recognition

TL;DR: Various methods applied to handwritten character recognition are reviewed and compared and Convolutional Neural Networks, that are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques.