Journal ArticleDOI

Neurocube: a programmable digital neuromorphic architecture with high-density 3D memory

TL;DR
The basic architecture of the Neurocube is presented, along with an analysis of the logic tier synthesized in 28 nm and 15 nm process technologies; performance is evaluated by mapping a convolutional neural network and estimating the resulting power and performance for both training and inference.
Abstract
This paper presents a programmable and scalable digital neuromorphic architecture based on 3D high-density memory integrated with a logic tier for efficient neural computing. The proposed architecture consists of clusters of processing engines (PEs) connected by a 2D mesh network as a processing tier, which is integrated in 3D with multiple tiers of DRAM. The PE clusters access multiple memory channels (vaults) in parallel. The operating principle, referred to as memory-centric computing, embeds specialized state machines within the vault controllers of the Hybrid Memory Cube (HMC) to drive data into the PE clusters. The paper presents the basic architecture of the Neurocube and an analysis of the logic tier synthesized in 28 nm and 15 nm process technologies. The performance of the Neurocube is evaluated and illustrated by mapping a convolutional neural network onto it and estimating the resulting power and performance for both training and inference.
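To make the operating principle concrete, here is a minimal Python sketch of memory-centric computing as described in the abstract: a state machine in a vault controller streams operands out of its memory channel directly into a processing engine. All class and variable names are illustrative assumptions, not from the paper.

```python
# Hypothetical sketch of the memory-centric operating principle: a state
# machine in each vault controller walks a precomputed address list and
# streams (weight, input) pairs out of its DRAM channel into a MAC-based
# processing engine. Names are illustrative, not from the paper.

import numpy as np

class VaultController:
    """Walks an address list and emits (weight, input) operand pairs."""
    def __init__(self, dram, addresses):
        self.dram = dram            # this vault's slice of 3D-stacked DRAM
        self.addresses = addresses  # (weight_addr, input_addr) pairs

    def stream(self):
        for w_addr, x_addr in self.addresses:
            yield self.dram[w_addr], self.dram[x_addr]

class ProcessingEngine:
    """Multiply-accumulate PE fed directly by the vault controller."""
    def __init__(self):
        self.acc = 0.0

    def consume(self, weight, x):
        self.acc += weight * x

# One vault driving one PE: compute a single neuron's pre-activation.
dram = np.random.randn(64)
ctrl = VaultController(dram, [(i, 32 + i) for i in range(16)])
pe = ProcessingEngine()
for w, x in ctrl.stream():
    pe.consume(w, x)
print("pre-activation:", pe.acc)
```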


Citations
Journal ArticleDOI

A Bit-Serial, Compute-in-SRAM Design Featuring Hybrid-Integrating ADCs and Input Dependent Binary Scaled Precharge Eliminating DACs for Energy-Efficient DNN Inference

TL;DR: In this paper, the authors present (i) a binary-weighted bitline-precharge scheme that uses dedicated reference voltages to perform input bit-serial multiplication in the charge domain, eliminating the need for dedicated DAC circuits; (ii) leakage-tolerant, input-dependent bitline-keeper circuits that maintain the local-bitline voltages; and (iii) hybrid charge-sharing-based integrating ADCs that leverage the reference voltages to shorten conversion time while keeping the ADC design compact.
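A purely numeric sketch may help clarify the input bit-serial principle behind this design: the input is applied one bit per cycle, and each cycle's partial sum is binary-scaled before accumulation, which the paper realizes in the charge domain via binary-weighted bitline precharge rather than explicit DACs. The function below is a functional model under those assumptions, not the circuit.

```python
# Functional model (not a circuit) of input bit-serial multiplication:
# each cycle applies one input bit to all weights, and the cycle's
# partial sum is binary-scaled before accumulation.

def bit_serial_dot(weights, inputs, n_bits=4):
    acc = 0
    for b in range(n_bits):                       # one input bit per cycle
        bits = [(x >> b) & 1 for x in inputs]     # b-th bit of every input
        partial = sum(w * xb for w, xb in zip(weights, bits))
        acc += partial << b                       # binary scaling of cycle b
    return acc

weights = [3, -1, 2, 5]
inputs = [9, 4, 7, 1]                             # 4-bit unsigned activations
assert bit_serial_dot(weights, inputs) == sum(w * x for w, x in zip(weights, inputs))
```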
Posted Content

NERO: A Near High-Bandwidth Memory Stencil Accelerator for Weather Prediction Modeling

TL;DR: In this article, near-memory acceleration on a reconfigurable fabric with high-bandwidth memory (HBM) is proposed and evaluated for weather and climate modeling.
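For context, the kind of stencil computation such near-memory accelerators target looks like the minimal sketch below: a generic 5-point stencil sweep whose performance is bound by memory bandwidth, which is why HBM proximity helps. The actual weather-model kernels in NERO are more involved; coefficients here are arbitrary.

```python
# Illustrative 5-point stencil sweep of the kind near-HBM accelerators
# target; each output point reads its four nearest neighbors, so the
# kernel does little arithmetic per byte moved (bandwidth-bound).

import numpy as np

def five_point_stencil(grid, c0=0.5, c1=0.125):
    out = grid.copy()
    out[1:-1, 1:-1] = (c0 * grid[1:-1, 1:-1]
                       + c1 * (grid[:-2, 1:-1] + grid[2:, 1:-1]
                               + grid[1:-1, :-2] + grid[1:-1, 2:]))
    return out

field = np.random.rand(256, 256)
field = five_point_stencil(field)   # one time step of the sweep
```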
Journal ArticleDOI

Dandelion: Boosting DNN Usability Under Dataset Scarcity

TL;DR: In this article, the authors propose inter-network architectural support for data augmentation that trains DNNs on rare images generated by a generative adversarial network (GAN) with orthogonal attributes modified (the Dandelion function).
Posted Content

TRIM: A Design Space Exploration Model for Deep Neural Networks Inference and Training Accelerators.

TL;DR: TRIM is an infrastructure that helps hardware architects explore the design space of deep neural network accelerators for both inference and training in the early design stages, considering both inter-layer and intra-layer activities.
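As a rough illustration of what an early-stage design-space exploration loop does, the toy sketch below enumerates accelerator configurations and ranks them with a naive roofline-style latency estimate. This is not TRIM's model; every number and formula here is an illustrative assumption.

```python
# Toy design-space sweep in the spirit of early-stage DSE (the real TRIM
# model is far richer): rank (PE count, buffer size) points by a naive
# roofline-style latency estimate for one layer. Numbers are illustrative.

from itertools import product

MACS, BYTES = 1e9, 4e7          # hypothetical layer: 1 GMAC, 40 MB traffic
CLK, BW = 1e9, 64e9             # 1 GHz PEs, 64 GB/s memory bandwidth

def latency(pes, buf_kb):
    compute = MACS / (pes * CLK)
    reuse = min(8.0, buf_kb / 64)      # bigger buffers cut DRAM traffic
    memory = BYTES / reuse / BW
    return max(compute, memory)        # bound by the slower of the two

points = sorted(product([256, 1024, 4096], [64, 256, 1024]),
                key=lambda p: latency(*p))
for pes, buf in points[:3]:
    print(f"{pes:5d} PEs, {buf:5d} KB buffer -> {latency(pes, buf)*1e3:.2f} ms")
```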
References
Journal ArticleDOI

Gradient-based learning applied to document recognition

TL;DR: In this article, a graph transformer network (GTN) is proposed for handwritten character recognition, which can synthesize a complex decision surface capable of classifying high-dimensional patterns such as handwritten characters.
Journal ArticleDOI

Deep learning in neural networks

TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning, and evolutionary computation, as well as indirect search for short programs encoding deep and large networks.
Book

Neural Networks And Learning Machines

Simon Haykin
TL;DR: Refocused, revised and renamed to reflect the duality of neural networks and learning machines, this edition recognizes that the subject matter is richer when these topics are studied together.
Journal ArticleDOI

Cellular neural networks: theory

TL;DR: In this article, a class of information processing systems called cellular neural networks (CNNs) is proposed, consisting of a massive aggregate of regularly spaced circuit clones, called cells, which communicate with each other directly only through their nearest neighbors.
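The cell dynamics can be sketched in a few lines; below is a simple forward-Euler discretization of the standard Chua-Yang cell equation, in which each cell couples to its 3x3 neighborhood through a feedback template A and a control template B. The templates and step size here are arbitrary illustrative choices.

```python
# Discrete-time Euler sketch of the standard Chua-Yang cell dynamics:
# dx/dt = -x + A*y + B*u + z, with y the piecewise-linear output of x.
# Each cell sees only its 3x3 neighborhood (templates are illustrative).

import numpy as np
from scipy.signal import convolve2d

def cnn_step(x, u, A, B, z, dt=0.1):
    y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))       # piecewise-linear output
    dx = -x + convolve2d(y, A, mode="same") \
            + convolve2d(u, B, mode="same") + z
    return x + dt * dx

A = np.array([[0, 0, 0], [0, 2, 0], [0, 0, 0]], float)
B = np.full((3, 3), 0.25)                            # illustrative templates
u = np.sign(np.random.randn(32, 32))                 # fixed input image
x = np.zeros_like(u)
for _ in range(100):
    x = cnn_step(x, u, A, B, z=-0.5)
```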
Book ChapterDOI

Gradient-Based Learning Applied to Document Recognition

TL;DR: Various methods applied to handwritten character recognition are reviewed and compared, and convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques.
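A minimal sketch of the property that makes convolutional networks robust to such 2D variability: a shared filter's response map shifts with the input, so a pattern is detected with the same strength wherever it appears. The filter and images below are illustrative, not from the paper.

```python
# Minimal illustration of the translation equivariance behind
# convolutional networks: one shared 3x3 filter responds identically
# to the same stroke regardless of where it appears in the image.

import numpy as np
from scipy.signal import correlate2d

kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], float)           # vertical-edge detector

img = np.zeros((8, 8)); img[2:5, 2] = 1          # small vertical stroke
shifted = np.roll(img, (2, 3), axis=(0, 1))      # same stroke, moved

r1 = correlate2d(img, kernel, mode="valid")
r2 = correlate2d(shifted, kernel, mode="valid")
# The response map shifts with the input; peak values are unchanged.
assert np.isclose(r1.max(), r2.max())
```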