Journal ArticleDOI

Neurocube: a programmable digital neuromorphic architecture with high-density 3D memory

TLDR
The basic architecture of the Neurocube is presented, along with an analysis of the logic tier synthesized in 28nm and 15nm process technologies; performance is evaluated by mapping a convolutional neural network and estimating the resulting power and performance for both training and inference.
Abstract
This paper presents a programmable and scalable digital neuromorphic architecture based on 3D high-density memory integrated with a logic tier for efficient neural computing. The proposed architecture consists of clusters of processing engines (PEs), connected by a 2D mesh network, as the processing tier, which is integrated in 3D with multiple tiers of DRAM. The PE clusters access multiple memory channels (vaults) in parallel. The operating principle, referred to as memory-centric computing, embeds specialized state machines within the vault controllers of the Hybrid Memory Cube (HMC) to drive data into the PE clusters. The paper presents the basic architecture of the Neurocube and an analysis of the logic tier synthesized in 28nm and 15nm process technologies. The performance of the Neurocube is evaluated by mapping a convolutional neural network and estimating the resulting power and performance for both training and inference.
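Below is a minimal Python sketch of the memory-centric operating principle described in the abstract: a programmable state machine in each vault controller streams operands out of its DRAM partition into the PE cluster beneath it, so the PEs never issue loads themselves. All names (Vault, PECluster) and the flat-list memory model are illustrative assumptions, not details from the paper.

```python
# Hedged sketch of memory-centric computing: the vault controller's state
# machine generates the address sequence and pushes operands to the PEs.

class Vault:
    """One memory channel plus the state machine that walks its addresses."""
    def __init__(self, data):
        self.data = data                 # DRAM partition (toy: a flat list)

    def stream(self, addresses):
        # The embedded state machine drives the data; PEs only consume.
        for a in addresses:
            yield self.data[a]

class PECluster:
    """Processing engines fed by their local vault; MAC-style consumption."""
    def __init__(self, weights):
        self.weights = weights

    def consume(self, operand_stream):
        # Each arriving operand is multiply-accumulated against local weights.
        return sum(w * x for w, x in zip(self.weights, operand_stream))

# One vault/cluster pair computing a dot product entirely from streamed data.
vault = Vault(data=[1.0, 2.0, 3.0, 4.0])
cluster = PECluster(weights=[0.5, 0.5, 0.5, 0.5])
print(cluster.consume(vault.stream(addresses=[0, 1, 2, 3])))  # 5.0
```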


Citations
Journal ArticleDOI

Enhancing Utilization of SIMD-Like Accelerator for Sparse Convolutional Neural Networks

TL;DR: A data screening and task mapping (DSTM) accelerator is proposed that integrates a series of techniques, spanning software refinement and hardware modules, to address intra-PE load imbalance and raise average PE utilization.
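As a rough illustration of the load-balancing problem this TL;DR names, the sketch below greedily assigns sparse work units (here, filter rows with varying nonzero counts) to the currently least-loaded PE so no single PE stalls the SIMD group. The greedy heuristic and all names are assumptions for illustration; the paper's actual mapping scheme is more involved.

```python
import heapq

def map_tasks(nonzeros_per_task, num_pes):
    """Greedily assign each task to the currently least-loaded PE."""
    heap = [(0, pe) for pe in range(num_pes)]   # (accumulated load, PE id)
    assignment = [[] for _ in range(num_pes)]
    # Longest-processing-time-first order improves the greedy bound.
    for task in sorted(range(len(nonzeros_per_task)),
                       key=lambda t: -nonzeros_per_task[t]):
        load, pe = heapq.heappop(heap)
        assignment[pe].append(task)
        heapq.heappush(heap, (load + nonzeros_per_task[task], pe))
    return assignment

print(map_tasks([9, 1, 8, 2, 7, 3], num_pes=2))  # loads of 15 and 15
```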
Journal ArticleDOI

NNBench-X: Benchmarking and Understanding Neural Network Workloads for Accelerator Designs

TL;DR: A novel approach to understanding the performance characteristics of NN workloads for accelerator designs; it helps users select representative applications from the large pool of candidates while providing insightful guidelines for the design of NN accelerators.
Posted Content

ORIGAMI: A Heterogeneous Split Architecture for In-Memory Acceleration of Learning.

TL;DR: ORIGAMI, as mentioned in this paper, is a heterogeneous set of in-memory accelerators supporting the compute demands of different ML algorithms; it also uses an off-the-shelf compute platform (e.g., FPGA, GPU, TPU) to exploit bandwidth without violating strict area and power budgets.
Posted Content

Polynesia: Enabling Effective Hybrid Transactional/Analytical Databases with Specialized Hardware/Software Co-Design.

TL;DR: Polynesia, as mentioned in this paper, divides the HTAP system into transactional and analytical processing islands, implements custom algorithms and hardware to reduce the costs of update propagation and consistency, and exploits processing-in-memory in the analytical islands to alleviate data movement.
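A toy sketch of the update-propagation idea in that TL;DR: the transactional island appends deltas to a log, and the analytical island applies them in batches before scanning, keeping its column replica consistent. The structure here is an illustrative assumption, not Polynesia's actual mechanism.

```python
class TransactionalIsland:
    def __init__(self):
        self.rows = {}          # row-oriented store for OLTP
        self.delta_log = []     # updates awaiting propagation

    def write(self, key, value):
        self.rows[key] = value
        self.delta_log.append((key, value))

class AnalyticalIsland:
    def __init__(self):
        self.columns = {}       # column-oriented replica for OLAP

    def apply_deltas(self, log):
        # Batched propagation amortizes the consistency cost.
        for key, value in log:
            self.columns[key] = value
        log.clear()

    def scan_sum(self):
        return sum(self.columns.values())

oltp, olap = TransactionalIsland(), AnalyticalIsland()
oltp.write("a", 3); oltp.write("b", 4)
olap.apply_deltas(oltp.delta_log)   # propagate before analytics
print(olap.scan_sum())              # 7
```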
Proceedings ArticleDOI

POSTER: Application-Driven Near-Data Processing for Similarity Search

TL;DR: This work proposes a near-data processing accelerator for similarity search: the similarity search associative memory (SSAM).
References
Journal ArticleDOI

Gradient-based learning applied to document recognition

TL;DR: In this article, a graph transformer network (GTN) is proposed for handwritten character recognition; it can be used to synthesize a complex decision surface capable of classifying high-dimensional patterns such as handwritten characters.
Journal ArticleDOI

Deep learning in neural networks

TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium; it reviews deep supervised learning, unsupervised learning, reinforcement learning, and evolutionary computation, as well as indirect search for short programs encoding deep and large networks.
Book

Neural Networks And Learning Machines

Simon Haykin
TL;DR: Refocused, revised and renamed to reflect the duality of neural networks and learning machines, this edition recognizes that the subject matter is richer when these topics are studied together.
Journal ArticleDOI

Cellular neural networks: theory

TL;DR: In this article, a class of information processing systems called cellular neural networks (CNNs) are proposed, which consist of a massive aggregate of regularly spaced circuit clones, called cells, which communicate with each other directly through their nearest neighbors.
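To make the cell dynamics concrete, here is a minimal Euler-step sketch of the standard Chua-Yang cellular neural network update, in which each cell's state evolves from its own state plus a 3x3 neighborhood of outputs and inputs (dx/dt = -x + A*y + B*u + z). The template values are arbitrary assumptions; see the paper for the actual theory and stability conditions.

```python
import numpy as np

def cnn_step(x, u, A, B, z, dt=0.1):
    """One Euler step; each cell sees only its 3x3 neighborhood."""
    f = lambda v: 0.5 * (np.abs(v + 1) - np.abs(v - 1))  # piecewise-linear output
    y = f(x)
    yp, up = np.pad(y, 1), np.pad(u, 1)                  # zero-padded borders
    dx = -x + z
    for i in range(3):
        for j in range(3):
            dx += A[i, j] * yp[i:i + x.shape[0], j:j + x.shape[1]]
            dx += B[i, j] * up[i:i + u.shape[0], j:j + u.shape[1]]
    return x + dt * dx

x = np.zeros((4, 4)); u = np.random.randn(4, 4)
A = np.zeros((3, 3)); A[1, 1] = 2.0                      # self-feedback only
B = np.eye(3) * 0.1                                      # weak input coupling
print(cnn_step(x, u, A, B, z=0.0).shape)                 # (4, 4)
```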
Book ChapterDOI

Gradient-Based Learning Applied to Document Recognition

TL;DR: Various methods applied to handwritten character recognition are reviewed and compared, and convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques.
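A brief sketch of why convolutional layers suit variable 2D shapes, as that TL;DR notes: one small shared-weight filter slides over the whole image, so a feature is detected wherever it appears. The filter values below are arbitrary assumptions for illustration.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation with a single shared-weight kernel."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((5, 5)); image[1:4, 2] = 1.0     # a vertical stroke
edge = np.array([[-1.0, 0.0, 1.0]] * 3)           # vertical-edge detector
print(conv2d(image, edge))                        # nonzero responses mark the stroke's edges
```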