Open Access Proceedings Article

In-Datacenter Performance Analysis of a Tensor Processing Unit

TL;DR
The Tensor Processing Unit (TPU), as discussed by the authors, is a custom ASIC deployed in datacenters since 2015 that accelerates the inference phase of neural networks (NN) using a 65,536 8-bit MAC matrix multiply unit that offers a peak throughput of 92 TeraOps/second (TOPS).
Abstract
Many architects believe that major improvements in cost-energy-performance must now come from domain-specific hardware. This paper evaluates a custom ASIC, called a Tensor Processing Unit (TPU), deployed in datacenters since 2015 that accelerates the inference phase of neural networks (NN). The heart of the TPU is a 65,536 8-bit MAC matrix multiply unit that offers a peak throughput of 92 TeraOps/second (TOPS) and a large (28 MiB) software-managed on-chip memory. The TPU's deterministic execution model is a better match to the 99th-percentile response-time requirement of our NN applications than are the time-varying optimizations of CPUs and GPUs that help average throughput more than guaranteed latency. The lack of such features helps explain why, despite having myriad MACs and a big memory, the TPU is relatively small and low power. We compare the TPU to a server-class Intel Haswell CPU and an Nvidia K80 GPU, which are contemporaries deployed in the same datacenters. Our workload, written in the high-level TensorFlow framework, uses production NN applications (MLPs, CNNs, and LSTMs) that represent 95% of our datacenters' NN inference demand. Despite low utilization for some applications, the TPU is on average about 15X to 30X faster than its contemporary GPU or CPU, with TOPS/Watt about 30X to 80X higher. Moreover, using the GPU's GDDR5 memory in the TPU would triple achieved TOPS and raise TOPS/Watt to nearly 70X the GPU and 200X the CPU.
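
The headline 92 TOPS figure follows directly from the MAC count and the TPU's 700 MHz clock, both stated in the paper (each MAC counts as two operations, a multiply and an add). A quick back-of-the-envelope check:

```python
# Peak-throughput sanity check for the TPU's quoted ~92 TOPS.
# The 65,536 MACs form a 256x256 systolic matrix multiply unit;
# each MAC performs a multiply and an add (2 ops) per cycle.
macs = 256 * 256          # 65,536 8-bit MACs
ops_per_mac = 2           # multiply + accumulate
clock_hz = 700e6          # TPU clock rate, per the paper
peak_tops = macs * ops_per_mac * clock_hz / 1e12
print(f"{peak_tops:.2f} TOPS")   # 91.75, i.e. the quoted ~92 TOPS
```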


Citations
Journal Article

Mastering the game of Go without human knowledge

TL;DR: An algorithm based solely on reinforcement learning, without human data, guidance, or domain knowledge beyond the game rules, is introduced that achieves superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.
Proceedings Article

Searching for MobileNetV3

TL;DR: MobileNetV3, as presented in this paper, is the next generation of MobileNets, based on a combination of complementary search techniques and a novel architecture design; it achieves state-of-the-art results for mobile classification, detection, and segmentation.
Journal Article

In-memory computing with resistive switching devices

TL;DR: This Review Article examines the development of in-memory computing using resistive switching devices, where the two-terminal structure of the devices, their resistive switching properties, and direct data processing in the memory can enable area- and energy-efficient computation.
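
The core idea, an analog multiply-accumulate performed inside the memory array, can be sketched numerically: with device conductances G and row voltages V, Ohm's law and Kirchhoff's current law give column currents I[j] = sum_i V[i] * G[i, j], a matrix-vector product. A minimal sketch (array size and value ranges are illustrative assumptions):

```python
import numpy as np

# Conceptual model of a resistive crossbar doing in-memory MAC:
# each cell (i, j) stores a conductance G[i, j]; driving the rows
# with voltages V[i] produces column currents
#   I[j] = sum_i V[i] * G[i, j]  (Ohm's law + Kirchhoff's current law),
# so the matrix-vector product is computed where the data sits.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(128, 64))  # conductances in siemens
V = rng.uniform(0.0, 0.2, size=128)          # read voltages in volts
I = V @ G                                    # 64 column currents in amps
```
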
Journal Article

Deep Learning in Mobile and Wireless Networking: A Survey

TL;DR: This paper bridges the gap between deep learning and mobile and wireless networking research by presenting a comprehensive survey of the crossovers between the two areas, and provides an encyclopedic review of mobile and wireless networking research based on deep learning, categorized by domain.
Journal Article

Deep Learning for IoT Big Data and Streaming Analytics: A Survey

TL;DR: In this article, the authors provide a thorough overview of using a class of advanced machine learning techniques, namely deep learning (DL), to facilitate analytics and learning in the IoT domain.
References
Proceedings Article

Toward accelerating deep learning at scale using specialized hardware in the datacenter

TL;DR: This article consists of a collection of slides from the authors' conference presentation, "Are FPGAs a Promising Target in the Datacenter for Deep Learning?"
Book Chapter

Design of a 1st Generation Neurocomputer

TL;DR: The proposed neurocomputer concept can be scaled independently of the application domain in terms of processing power, memory size, and flexibility, and is designed for throughputs that let users tackle real-world applications in reasonable time.
Journal Article

Special-purpose digital hardware for neural networks: an architectural survey

TL;DR: It is concluded that it is important to choose one's problems carefully, and that support software and, more generally, system integration are only beginning to reach the level of versatility that many researchers will require.
Patent

Vector computation unit in a neural network processor

TL;DR: A circuit for performing neural network computations for a neural network comprising a plurality of layers is described, the circuit comprising: activation circuitry configured to receive a vector of accumulated values and to apply a function to each accumulated value to generate a vector of activation values; and normalization circuitry coupled to the activation circuitry and configured to generate a respective normalized value from each activation value.
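
Read as software, the claimed datapath is a two-stage vector pipeline: an activation stage that maps each accumulated value through a function, followed by a normalization stage. A hypothetical sketch (the choice of tanh and L2 normalization is an illustrative assumption; the summary specifies neither):

```python
import numpy as np

# Hypothetical software analogue of the patented vector unit:
# activation circuitry applies a function elementwise to the
# accumulated values; normalization circuitry then rescales the
# resulting activation vector. tanh and L2 norm are assumptions.
def vector_unit(accumulated: np.ndarray) -> np.ndarray:
    activations = np.tanh(accumulated)   # activation stage
    norm = np.linalg.norm(activations)   # normalization stage
    return activations / norm if norm else activations

print(vector_unit(np.array([0.5, -1.2, 3.0])))
```
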
Proceedings Article

Large-Scale Deep Learning For Building Intelligent Computer Systems

TL;DR: Some of the design decisions made in building TensorFlow are highlighted, research results produced within the group are discussed, and ways in which these ideas have been applied to a variety of problems in Google's products are described.
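
For context on the register these workloads are written in, here is a minimal TensorFlow sketch of MLP inference in the framework's high-level Keras style; the layer sizes and input data are arbitrary placeholders, not models from the paper or the talk:

```python
import numpy as np
import tensorflow as tf

# Tiny MLP inference in TensorFlow's high-level (Keras) style.
# Sizes are illustrative placeholders, not production models.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
batch = np.random.rand(8, 784).astype(np.float32)
probs = model(batch)   # shape (8, 10): per-class scores for 8 inputs
```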