AI Benchmark: All About Deep Learning on Smartphones in 2019
Andrey Ignatov, Radu Timofte, Andrei Kulik, Seung-Soo Yang, Ke Wang, Felix Baum, Max Wu, Lirong Xu, Luc Van Gool +8 more
pp. 3617-3635
TL;DR: The authors evaluate the performance and compare the results of all chipsets from Qualcomm, HiSilicon, Samsung, MediaTek and Unisoc that provide hardware acceleration for AI inference.
Abstract:
The performance of mobile AI accelerators has been evolving rapidly in the past two years, nearly doubling with each new generation of SoCs. The current 4th generation of mobile NPUs is already approaching the results of CUDA-compatible Nvidia graphics cards presented not long ago, which together with the increased capabilities of mobile deep learning frameworks makes it possible to run complex and deep AI models on mobile devices. In this paper, we evaluate the performance and compare the results of all chipsets from Qualcomm, HiSilicon, Samsung, MediaTek and Unisoc that provide hardware acceleration for AI inference. We also discuss the recent changes in the Android ML pipeline and provide an overview of the deployment of deep learning models on mobile devices. All numerical results provided in this paper can be found and are regularly updated on the official project website: http://ai-benchmark.com.
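For readers unfamiliar with the deployment path the abstract refers to: a model is typically converted to the TensorFlow Lite format and executed on the device through an interpreter, with a delegate such as NNAPI dispatching supported operations to the NPU. Below is a minimal sketch of the conversion plus a desktop sanity check; the MobileNetV2 stand-in is an illustrative choice, not necessarily one of the paper's benchmark networks.

```python
import numpy as np
import tensorflow as tf

# Stand-in network; the paper benchmarks many architectures, this is just an example.
model = tf.keras.applications.MobileNetV2(weights=None)

# Convert to the .tflite flatbuffer that gets deployed to the phone.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Desktop sanity check with the reference interpreter. On Android, the same
# file is loaded through the Java/Kotlin Interpreter API, and adding the
# NNAPI delegate routes supported ops to the NPU.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=np.float32))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]).shape)  # (1, 1000) class scores
```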
Citations
Proceedings Article
MLPerf inference benchmark
Vijay Janapa Reddi, Christine Cheng, David Kanter, Peter Mattson, Guenther Schmuelling, Carole-Jean Wu, Brian M. Anderson, Maximilien Breughe, Mark Charlebois, William Chou, Ramesh Chukka, Cody Coleman, Sam Davis, Pan Deng, Greg Diamos, Jared Duke, Dave Fick, J. Scott Gardner, Itay Hubara, Sachin Satish Idgunji, Thomas B. Jablin, Jeff Jiao, Tom St. John, Pankaj Kanwar, David Lee, Jeffery Liao, Anton Lokhmotov, Francisco Massa, Peng Meng, Paulius Micikevicius, Colin Osborne, Gennady Pekhimenko, Arun Tejusve Raghunath Rajan, Dilip Sequeira, Ashish Sirasao, Fei Sun, Hanlin Tang, Michael Thomson, Frank Wei, Ephrem C. Wu, Lingjie Xu, Koichi Yamada, Bing Yu, George Yuan, Aaron Zhong, Peizhao Zhang, Yuchen Zhou +46 more
TL;DR: This paper presents MLPerf Inference, a benchmarking method for evaluating ML inference systems, and prescribes a set of rules and best practices to ensure comparability across systems with wildly differing architectures.
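As a rough illustration of what such an inference benchmark measures, here is a minimal single-stream latency loop; the real MLPerf suite uses its LoadGen harness and much stricter rules, and run_inference is a hypothetical placeholder for any model invocation.

```python
import time
import numpy as np

def benchmark(run_inference, sample, warmup=10, runs=100):
    """Measure single-stream latency of a placeholder inference callable."""
    for _ in range(warmup):
        run_inference(sample)            # warm caches / delegate initialization
    latencies = []
    for _ in range(runs):
        t0 = time.perf_counter()
        run_inference(sample)
        latencies.append((time.perf_counter() - t0) * 1e3)  # milliseconds
    # MLPerf's single-stream scenario reports a tail latency percentile.
    print(f"median {np.percentile(latencies, 50):.2f} ms, "
          f"p90 {np.percentile(latencies, 90):.2f} ms")
```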
Journal Article
Pruning and quantization for deep neural network acceleration: A survey
TL;DR: A survey of two types of network compression, pruning and quantization, that compares current techniques, analyzes their strengths and weaknesses, provides guidance for compressing networks, and discusses possible future compression techniques.
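The post-training quantization route the survey covers can be tried in a few lines with the TensorFlow Lite converter API; a minimal sketch, assuming a generic trained Keras model (MobileNetV2 here is only a stand-in):

```python
import tensorflow as tf

# Stand-in for any trained Keras model (assumption for illustration).
model = tf.keras.applications.MobileNetV2(weights=None)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Dynamic-range quantization: weights stored as int8, activations kept float.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized = converter.convert()

# Roughly 4x smaller than the float32 flatbuffer.
print(f"{len(quantized) / 1e6:.1f} MB")
```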
Proceedings Article
SPINN: synergistic progressive inference of neural networks over device and cloud
TL;DR: Proposes SPINN, a distributed inference system that combines synergistic device-cloud computation with a progressive inference method to deliver fast and robust CNN inference across diverse settings. A novel scheduler co-optimises the early-exit policy and the CNN splitting at run time to adapt to dynamic conditions and meet user-defined service-level requirements, providing robust operation under uncertain connectivity and significant energy savings compared to cloud-centric execution.
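A toy sketch of the progressive-inference idea, not SPINN's actual implementation: run the on-device head of a split network, answer locally when an early-exit classifier is confident, and offload the intermediate features otherwise. All functions and the threshold here are hypothetical placeholders.

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.8  # tunable: trades local accuracy for offload traffic

def progressive_infer(x, device_head, early_exit, cloud_tail):
    features = device_head(x)         # cheap early layers run on the phone
    probs = early_exit(features)      # lightweight exit classifier
    if np.max(probs) >= CONFIDENCE_THRESHOLD:
        return int(np.argmax(probs))  # confident enough: answer locally
    return cloud_tail(features)       # otherwise offload the remaining layers
```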
Proceedings Article
Real-Time Quantized Image Super-Resolution on Mobile NPUs, Mobile AI 2021 Challenge: Report
Andrey Ignatov, Radu Timofte, Maurizio Denna, Abdel Younes, Andrew Lek, Mustafa Ayazoglu, Jie Liu, Zongcai Du, Jiaming Guo, Xueyi Zhou, Hao Jia, Youliang Yan, Zexin Zhang, Yixin Chen, Yunbo Peng, Yue Lin, Xindong Zhang, Hui Zeng, Kun Zeng, Peirong Li, Zhihuang Liu, Shiqi Xue, Shengpeng Wang +22 more
TL;DR: Introduces the first Mobile AI challenge, whose target is to develop end-to-end deep learning-based image super-resolution solutions that demonstrate real-time performance on mobile or edge NPUs.
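Full-integer (int8) quantization is what such NPU deployments typically require; a minimal sketch using the TensorFlow Lite converter, with a toy upscaling network and random calibration data standing in for a real super-resolution model and dataset:

```python
import numpy as np
import tensorflow as tf

# Toy x2-upscaling network, a stand-in for a real SR model (assumption).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(12, 3, padding="same"),  # 3 channels * 2*2 upscale
    tf.keras.layers.Lambda(lambda t: tf.nn.depth_to_space(t, 2)),
])

# Calibration samples for activation ranges; real images would be used in practice.
def representative_data():
    for _ in range(100):
        yield [np.random.rand(1, 64, 64, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
# Force every op to int8 so the whole graph can run on the NPU.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_int8 = converter.convert()
open("sr_int8.tflite", "wb").write(tflite_int8)
```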
TL;DR: SPINN as mentioned in this paper proposes a distributed inference system that employs synergistic device-cloud computation together with a progressive inference method to deliver fast and robust CNN inference across diverse settings, and introduces a novel scheduler that co-optimises the early exit policy and the CNN splitting at run time, in order to adapt to dynamic conditions and meet user-defined service-level requirements.