Guenther Schmuelling
Researcher at Microsoft
Publications - 9
Citations - 512
Guenther Schmuelling is an academic researcher from Microsoft. The author has contributed to research in the topics Benchmark (computing) and Software framework. The author has an h-index of 4 and has co-authored 7 publications receiving 277 citations.
Papers
Proceedings Article
MLPerf inference benchmark
Vijay Janapa Reddi, Christine Cheng, David Kanter, Peter Mattson, Guenther Schmuelling, Carole-Jean Wu, Brian M. Anderson, Maximilien Breughe, Mark Charlebois, William Chou, Ramesh Chukka, Cody Coleman, Sam Davis, Pan Deng, Greg Diamos, Jared Duke, Dave Fick, J. Scott Gardner, Itay Hubara, Sachin Satish Idgunji, Thomas B. Jablin, Jeff Jiao, Tom St. John, Pankaj Kanwar, David Lee, Jeffery Liao, Anton Lokhmotov, Francisco Massa, Peng Meng, Paulius Micikevicius, Colin Osborne, Gennady Pekhimenko, Arun Tejusve Raghunath Rajan, Dilip Sequeira, Ashish Sirasao, Fei Sun, Hanlin Tang, Michael Thomson, Frank Wei, Ephrem C. Wu, Lingjie Xu, Koichi Yamada, Bing Yu, George Yuan, Aaron Zhong, Peizhao Zhang, Yuchen Zhou +46 more
TL;DR: This paper presents MLPerf Inference, a benchmarking method for evaluating ML inference systems, and prescribes a set of rules and best practices to ensure comparability across systems with wildly differing architectures.
Journal Article
MLPerf: An Industry Standard Benchmark Suite for Machine Learning Performance
Peter Mattson, Hanlin Tang, Gu-Yeon Wei, Carole-Jean Wu, Vijay Janapa Reddi, Christine Cheng, Cody Coleman, Greg Diamos, David Kanter, Paulius Micikevicius, David A. Patterson, Guenther Schmuelling +11 more
TL;DR: This paper describes the design choices behind MLPerf, a machine learning performance benchmark that has become an industry standard, and shows growing adoption along with improvements to software-stack performance and scalability.
Posted Content
MLPerf Inference Benchmark
Vijay Janapa Reddi, Christine Cheng, David Kanter, Peter Mattson, Guenther Schmuelling, Carole-Jean Wu, Brian M. Anderson, Maximilien Breughe, Mark Charlebois, William Chou, Ramesh Chukka, Cody Coleman, Sam Davis, Pan Deng, Greg Diamos, Jared Duke, Dave Fick, J. Scott Gardner, Itay Hubara, Sachin Satish Idgunji, Thomas B. Jablin, Jeff Jiao, Tom St. John, Pankaj Kanwar, David Lee, Jeffery Liao, Anton Lokhmotov, Francisco Massa, Peng Meng, Paulius Micikevicius, Colin Osborne, Gennady Pekhimenko, Arun Tejusve Raghunath Rajan, Dilip Sequeira, Ashish Sirasao, Fei Sun, Hanlin Tang, Michael Thomson, Frank Wei, Ephrem C. Wu, Lingjie Xu, Koichi Yamada, Bing Yu, George Yuan, Aaron Zhong, Peizhao Zhang, Yuchen Zhou +46 more
TL;DR: MLPerf Inference is a benchmarking method for evaluating ML inference systems with widely differing architectures. The first call for submissions garnered more than 600 reproducible inference-performance measurements from 14 organizations, representing over 30 systems that showcase a wide range of capabilities.
Journal Article
The Vision Behind MLPerf: Understanding AI Inference Performance
Vijay Janapa Reddi, Christine Cheng, David Kanter, Peter Mattson, Guenther Schmuelling, Carole-Jean Wu +5 more
TL;DR: MLPerf is an ML benchmark standard driven by academia and industry. It establishes a benchmark suite with proper metrics and benchmarking methodologies to level the playing field for measuring the performance of different ML inference hardware, software, and services.
Proceedings Article
MLPerf Mobile Inference Benchmark: An Industry-Standard Open-Source Machine Learning Benchmark for On-Device AI
Vijay Janapa Reddi, David Kanter, Peter Mattson, Jared Duke, Thai Nguyen, Ramesh Chukka, Kenneth Shiring, Koan-Sin Tan, Mark Charlebois, William Chou, Mostafa El-Khamy, Jungwook Hong, Tom St. John, Cindy Trinh, Michael H. C. Buch, Mark Mazumder, Relja Markovic, Thomas Atta-Fosu, Fatih Cakir, Masoud Charkhabi, Xiaodong Chen, Cheng-Ming Chiang, Dave Dexter, Terry Heo, Guenther Schmuelling, Maryam Shabani, D Zika +26 more