William Chou
Researcher at Qualcomm
Publications - 8
Citations - 724
William Chou is an academic researcher from Qualcomm. The author has contributed to research in topics: Benchmark (computing) & Mobile device. The author has an h-index of 4 and has co-authored 6 publications receiving 454 citations.
Papers
Book ChapterDOI
AI Benchmark: Running Deep Neural Networks on Android Smartphones
TL;DR: This paper presents a study of the current state of deep learning in the Android ecosystem, describing the available frameworks, programming models, and limitations of running AI on smartphones, as well as an overview of the hardware acceleration resources available on four main mobile chipset platforms.
Proceedings ArticleDOI
MLPerf inference benchmark
Vijay Janapa Reddi,Christine Cheng,David Kanter,Peter Mattson,Guenther Schmuelling,Carole-Jean Wu,Brian M. Anderson,Maximilien Breughe,Mark Charlebois,William Chou,Ramesh Chukka,Cody Coleman,Sam Davis,Pan Deng,Greg Diamos,Jared Duke,Dave Fick,J. Scott Gardner,Itay Hubara,Sachin Satish Idgunji,Thomas B. Jablin,Jeff Jiao,Tom St. John,Pankaj Kanwar,David Lee,Jeffery Liao,Anton Lokhmotov,Francisco Massa,Peng Meng,Paulius Micikevicius,Colin Osborne,Gennady Pekhimenko,Arun Tejusve Raghunath Rajan,Dilip Sequeira,Ashish Sirasao,Fei Sun,Hanlin Tang,Michael Thomson,Frank Wei,Ephrem C. Wu,Lingjie Xu,Koichi Yamada,Bing Yu,George Yuan,Aaron Zhong,Peizhao Zhang,Yuchen Zhou +46 more
TL;DR: This paper presents the benchmarking method for evaluating ML inference systems, MLPerf Inference, and prescribes a set of rules and best practices to ensure comparability across systems with wildly differing architectures.
Posted Content
MLPerf Inference Benchmark
Vijay Janapa Reddi,Christine Cheng,David Kanter,Peter Mattson,Guenther Schmuelling,Carole-Jean Wu,Brian M. Anderson,Maximilien Breughe,Mark Charlebois,William Chou,Ramesh Chukka,Cody Coleman,Sam Davis,Pan Deng,Greg Diamos,Jared Duke,Dave Fick,J. Scott Gardner,Itay Hubara,Sachin Satish Idgunji,Thomas B. Jablin,Jeff Jiao,Tom St. John,Pankaj Kanwar,David Lee,Jeffery Liao,Anton Lokhmotov,Francisco Massa,Peng Meng,Paulius Micikevicius,Colin Osborne,Gennady Pekhimenko,Arun Tejusve Raghunath Rajan,Dilip Sequeira,Ashish Sirasao,Fei Sun,Hanlin Tang,Michael Thomson,Frank Wei,Ephrem C. Wu,Lingjie Xu,Koichi Yamada,Bing Yu,George Yuan,Aaron Zhong,Peizhao Zhang,Yuchen Zhou +46 more
TL;DR: MLPerf Inference as mentioned in this paper is a benchmarking method for evaluating ML inference systems with widely differing architectures. The first call for submissions garnered more than 600 reproducible inference-performance measurements from 14 organizations, representing over 30 systems that showcase a wide range of capabilities.
Posted Content
AI Benchmark: Running Deep Neural Networks on Android Smartphones
TL;DR: In this article, the authors present a study of the current state of deep learning in the Android ecosystem and describe available frameworks, programming models and the limitations of running AI on smartphones.
Proceedings Article
MLPerf Mobile Inference Benchmark: An Industry-Standard Open-Source Machine Learning Benchmark for On-Device AI
Vijay Janapa Reddi,David Kanter,Peter Mattson,Jared Duke,Thai Nguyen,Ramesh Chukka,Kenneth Shiring,Koan-Sin Tan,Mark Charlebois,William Chou,Mostafa El-Khamy,Jungwook Hong,Tom St. John,Cindy Trinh,Michael H. C. Buch,Mark Mazumder,Relja Markovic,Thomas Atta-Fosu,Fatih Cakir,Masoud Charkhabi,Xiaodong Chen,Cheng-Ming Chiang,Dave Dexter,Terry Heo,Guenther Schmuelling,Maryam Shabani,D Zika +26 more