Open Access · Proceedings Article

In-Datacenter Performance Analysis of a Tensor Processing Unit

TL;DR: The Tensor Processing Unit (TPU) is a custom ASIC, deployed in datacenters since 2015, that accelerates the inference phase of neural networks (NN) with a 65,536 8-bit MAC matrix multiply unit offering a peak throughput of 92 TeraOps/second (TOPS).
Abstract
Many architects believe that major improvements in cost-energy-performance must now come from domain-specific hardware. This paper evaluates a custom ASIC, called a Tensor Processing Unit (TPU), deployed in datacenters since 2015 that accelerates the inference phase of neural networks (NN). The heart of the TPU is a 65,536 8-bit MAC matrix multiply unit that offers a peak throughput of 92 TeraOps/second (TOPS) and a large (28 MiB) software-managed on-chip memory. The TPU's deterministic execution model is a better match to the 99th-percentile response-time requirement of our NN applications than are the time-varying optimizations of CPUs and GPUs that help average throughput more than guaranteed latency. The lack of such features helps explain why, despite having myriad MACs and a big memory, the TPU is relatively small and low power. We compare the TPU to a server-class Intel Haswell CPU and an Nvidia K80 GPU, which are contemporaries deployed in the same datacenters. Our workload, written in the high-level TensorFlow framework, uses production NN applications (MLPs, CNNs, and LSTMs) that represent 95% of our datacenters' NN inference demand. Despite low utilization for some applications, the TPU is on average about 15X-30X faster than its contemporary GPU or CPU, with TOPS/Watt about 30X-80X higher. Moreover, using the GPU's GDDR5 memory in the TPU would triple achieved TOPS and raise TOPS/Watt to nearly 70X the GPU and 200X the CPU.
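The 92-TOPS peak is simple arithmetic over the hardware parameters. A minimal sketch, assuming (per the paper, not this page) a 256x256 systolic matrix unit clocked at 700 MHz, with each MAC counted as two operations (one multiply plus one add):

    # Back-of-the-envelope check of the abstract's peak-throughput figure.
    MACS = 256 * 256     # 65,536 8-bit MACs in the matrix multiply unit
    OPS_PER_MAC = 2      # one multiply + one accumulate per cycle
    CLOCK_HZ = 700e6     # 700 MHz clock (assumption taken from the paper)

    peak_tops = MACS * OPS_PER_MAC * CLOCK_HZ / 1e12
    print(f"Peak throughput: {peak_tops:.2f} TOPS")  # ~91.75, quoted as 92 TOPS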


Citations
Proceedings Article

7.2 A 12nm Programmable Convolution-Efficient Neural-Processing-Unit Chip Achieving 825TOPS

TL;DR: This NPU is architected for convolution efficiency under the control of operation-fused, coarse-grained instructions; it integrates as much computing power as possible via squeezed computation with a large SRAM-only design, and delivers programming flexibility via an instruction set architecture that covers anticipated forward-looking functionality.
Proceedings Article

DNNGuard: An Elastic Heterogeneous DNN Accelerator Architecture against Adversarial Attacks

TL;DR: DNNGuard is proposed: an elastic heterogeneous DNN accelerator architecture that can efficiently orchestrate the simultaneous execution of an original (target) DNN and the algorithm or network that detects adversarial-example attacks; it is implemented based on RISC-V and NVDLA.
Proceedings Article

On-chip deep neural network storage with multi-level eNVM

TL;DR: This paper proposes a method that uses multi-level, embedded non-volatile memory (eNVM) to eliminate all off-chip weight accesses, co-designing the weights and memories so that their properties complement each other and faults cause no noticeable NN accuracy loss.
Proceedings Article

Accelerating TensorFlow with Adaptive RDMA-Based gRPC

TL;DR: This paper proposes a unified approach with a single gRPC runtime in TensorFlow built on adaptive, efficient RDMA protocols, along with designs such as hybrid communication protocols, message pipelining and coalescing, and zero-copy transmission that make the runtime adapt to the different message sizes of Deep Learning workloads.
Journal Article

A Scalable FPGA Architecture for Randomly Connected Networks of Hodgkin-Huxley Neurons

TL;DR: A scalable architecture for simulating a randomly connected network of Hodgkin-Huxley neurons is presented, in which each core updates the states of a group of neurons stored in its corresponding memory bank through a novel method that cyclically permutes a single prestored connectivity vector per core.
References
Proceedings Article

Going deeper with convolutions

TL;DR: Inception, as mentioned in this paper, is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Journal Article

ImageNet classification with deep convolutional neural networks

TL;DR: A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes and employed a recently developed regularization method called "dropout" that proved to be very effective.
Journal Article

ImageNet Large Scale Visual Recognition Challenge

TL;DR: The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) as mentioned in this paper is a benchmark in object category classification and detection on hundreds of object categories and millions of images, which has been run annually from 2010 to present, attracting participation from more than fifty institutions.
Journal Article

Mastering the game of Go with deep neural networks and tree search

TL;DR: Using this search algorithm, the program AlphaGo achieved a 99.8% winning rate against other Go programs and defeated the human European Go champion by 5 games to 0, the first time a computer program has defeated a human professional player in the full-sized game of Go.
Book

Computer Architecture: A Quantitative Approach

TL;DR: This best-selling title, considered for over a decade to be essential reading for every serious student and practitioner of computer design, has been updated throughout to address the most important trends facing computer designers today.