Journal ArticleDOI
Neurocube: a programmable digital neuromorphic architecture with high-density 3D memory
Duckhwan Kim, Jaeha Kung, Sek M. Chai, Sudhakar Yalamanchili, Saibal Mukhopadhyay
Vol. 44, Iss. 3, pp. 380–392
TL;DR
The paper presents the basic architecture of the Neurocube, an analysis of the logic tier synthesized in 28nm and 15nm process technologies, and a performance evaluation through the mapping of a convolutional neural network, with estimates of power and performance for both training and inference.
Abstract
This paper presents a programmable and scalable digital neuromorphic architecture based on 3D high-density memory integrated with a logic tier for efficient neural computing. The proposed architecture consists of clusters of processing engines (PEs), connected by a 2D mesh network as a processing tier, which is integrated in 3D with multiple tiers of DRAM. The PE clusters access multiple memory channels (vaults) in parallel. The operating principle, referred to as memory-centric computing, embeds specialized state machines within the vault controllers of the HMC to drive data into the PE clusters. The paper presents the basic architecture of the Neurocube and an analysis of the logic tier synthesized in 28nm and 15nm process technologies. The performance of the Neurocube is evaluated and illustrated through the mapping of a convolutional neural network and estimation of the resulting power and performance for both training and inference.
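The memory-centric mapping described above can be sketched in miniature: each vault's controller streams operands to its paired PE cluster, which computes its own slice of the output feature map. This is a toy sketch only; the vault count and the even row-wise partitioning are illustrative assumptions, not the paper's exact parameters or mapping algorithm.

```python
# Toy model of a Neurocube-style memory-centric convolution mapping.
# Hypothetical sketch: vault count and partitioning scheme are
# illustrative assumptions, not the paper's exact design.

NUM_VAULTS = 16  # HMC-style memory channels, each paired with one PE cluster

def partition_rows(num_rows, num_vaults):
    """Split the output rows of a conv layer evenly across vaults."""
    base, rem = divmod(num_rows, num_vaults)
    sizes = [base + (1 if v < rem else 0) for v in range(num_vaults)]
    bounds, start = [], 0
    for s in sizes:
        bounds.append((start, start + s))
        start += s
    return bounds

def conv2d_vault(image, kernel, row_range):
    """One vault's PE cluster: convolve only its assigned output rows.
    In hardware, the vault controller's state machine would stream
    these operands into the PEs."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(*row_range):
        row = []
        for c in range(len(image[0]) - kw + 1):
            acc = 0.0
            for i in range(kh):
                for j in range(kw):
                    acc += image[r + i][c + j] * kernel[i][j]
            row.append(acc)
        out.append(row)
    return out

def neurocube_conv(image, kernel):
    """Gather the per-vault partial results into the full output map."""
    out_rows = len(image) - len(kernel) + 1
    result = []
    for rng in partition_rows(out_rows, NUM_VAULTS):
        result.extend(conv2d_vault(image, kernel, rng))
    return result
```

In the real architecture the per-vault computations run concurrently; the sequential loop here only models which vault owns which output rows.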
Citations
Journal ArticleDOI
Partitioned Persist Ordering
TL;DR: This work proposes Partitioned Persist Ordering (PPO) that ensures a correct persist ordering between CPU and NDP devices, as well as among multiple NDP devices and prototype an NDP system, NearPM, on an FPGA platform.
Posted ContentDOI
FC_ACCEL: Enabling Efficient, Low-Latency and Flexible Inference in DNN Fully Connected Layers, using Optimized Checkerboard Block matrix decomposition, fast scheduling, and a resource efficient 1D PE array with a custom HBM2 memory subsystem
Nick Iliev, Amit Ranjan Trivedi +1 more
TL;DR: A novel low-latency CMOS hardware accelerator for fully connected (FC) layers in deep neural networks (DNNs), using 128 8x8 or 16x16 processing elements for matrix-vector multiplication and 128 multiply-accumulate units, integrated with 16 High Bandwidth Memory (HBM) stack units for storing the pre-trained weights.
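Accelerators of this kind tile the FC layer's matrix-vector product into fixed-size weight blocks that stream through a PE array. A minimal sketch, assuming an 8x8 block size; the actual checkerboard decomposition and scheduling are the paper's contribution and are not reproduced here:

```python
# Block-decomposed matrix-vector multiply, as an FC-layer accelerator
# might tile it across a PE array. Block size is an illustrative assumption.

BLOCK = 8

def blocked_matvec(W, x):
    """Compute y = W @ x block by block."""
    n, m = len(W), len(W[0])
    y = [0.0] * n
    for rb in range(0, n, BLOCK):          # block-row assigned to a PE
        for cb in range(0, m, BLOCK):      # stream one weight block
            for r in range(rb, min(rb + BLOCK, n)):
                acc = 0.0
                for c in range(cb, min(cb + BLOCK, m)):
                    acc += W[r][c] * x[c]  # one MAC operation
                y[r] += acc                # accumulate partial sums
    return y
```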
Proceedings ArticleDOI
INCA: Input-stationary Dataflow at Outside-the-box Thinking about Deep Learning Accelerators
Bokyung Kim, Shiyu Li, Hai Li +2 more
TL;DR: In this paper, an input-stationary (IS) crossbar accelerator (INCA) is proposed to support both inference and training for deep neural networks (DNNs).
Journal ArticleDOI
KAPLA: Pragmatic Representation and Fast Solving of Scalable NN Accelerator Dataflow
Zhiyao Li, Mingyu Gao +1 more
TL;DR: In this article, a set of formal tensor-centric directives accurately expresses various inter-layer and intra-layer dataflow schemes and allows their validity and efficiency to be determined quickly.
Journal ArticleDOI
Godiva: green on-chip interconnection for DNNs
Arghavan Asad, Farah Mohammadi +1 more
References
Journal ArticleDOI
Gradient-based learning applied to document recognition
Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner
TL;DR: In this article, a graph transformer network (GTN) is proposed for handwritten character recognition, which can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters.
Journal ArticleDOI
Deep learning in neural networks
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviews deep supervised learning, unsupervised learning, reinforcement learning, and evolutionary computation, and covers indirect search for short programs encoding deep and large networks.
Book
Neural Networks And Learning Machines
TL;DR: Refocused, revised and renamed to reflect the duality of neural networks and learning machines, this edition recognizes that the subject matter is richer when these topics are studied together.
Journal ArticleDOI
Cellular neural networks: theory
Leon O. Chua, L. Yang
TL;DR: In this article, a class of information-processing systems called cellular neural networks (CNNs) is proposed, consisting of a massive aggregate of regularly spaced circuit clones, called cells, which communicate with each other directly only through their nearest neighbors.
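The nearest-neighbor coupling can be illustrated with a toy discrete-time update of the Chua-Yang cell dynamics dx/dt = -x + A*y + B*u + z. The Euler step, zero boundary conditions, and any particular template values are simplifying assumptions for illustration only:

```python
# Toy discrete-time sketch of a Chua-Yang cellular neural network update.
# Each cell's next state depends only on its 3x3 neighborhood via the
# feedback template A and control template B.

def saturate(x):
    """Chua-Yang output nonlinearity: y = 0.5 * (|x+1| - |x-1|)."""
    return 0.5 * (abs(x + 1.0) - abs(x - 1.0))

def cnn_step(state, inp, A, B, z, dt=0.1):
    """One Euler step of dx/dt = -x + A*y + B*u + z over a 2D grid,
    with zero (cut-off) boundary conditions."""
    rows, cols = len(state), len(state[0])
    y = [[saturate(v) for v in row] for row in state]
    new = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            acc = z
            for i in range(-1, 2):          # 3x3 nearest-neighbor window
                for j in range(-1, 2):
                    rr, cc = r + i, c + j
                    if 0 <= rr < rows and 0 <= cc < cols:
                        acc += A[i + 1][j + 1] * y[rr][cc]
                        acc += B[i + 1][j + 1] * inp[rr][cc]
            new[r][c] = state[r][c] + dt * (-state[r][c] + acc)
    return new
```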
Book ChapterDOI
Gradient-Based Learning Applied to Document Recognition
Simon Haykin,Bart Kosko +1 more
TL;DR: Various methods applied to handwritten character recognition are reviewed and compared, and convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques.
Related Papers (5)
In-Datacenter Performance Analysis of a Tensor Processing Unit
Norman P. Jouppi, Cliff Young, Nishant Patil, David A. Patterson, et al.