Maximilien Breughe
Researcher at Nvidia
Publications - 6
Citations - 429
Maximilien Breughe is an academic researcher at Nvidia. His research focuses on benchmarking and microarchitecture. He has an h-index of 6 and has co-authored 6 publications receiving 254 citations. Previous affiliations of Maximilien Breughe include Ghent University.
Papers
Proceedings ArticleDOI
MLPerf inference benchmark
Vijay Janapa Reddi,Christine Cheng,David Kanter,Peter Mattson,Guenther Schmuelling,Carole-Jean Wu,Brian M. Anderson,Maximilien Breughe,Mark Charlebois,William Chou,Ramesh Chukka,Cody Coleman,Sam Davis,Pan Deng,Greg Diamos,Jared Duke,Dave Fick,J. Scott Gardner,Itay Hubara,Sachin Satish Idgunji,Thomas B. Jablin,Jeff Jiao,Tom St. John,Pankaj Kanwar,David Lee,Jeffery Liao,Anton Lokhmotov,Francisco Massa,Peng Meng,Paulius Micikevicius,Colin Osborne,Gennady Pekhimenko,Arun Tejusve Raghunath Rajan,Dilip Sequeira,Ashish Sirasao,Fei Sun,Hanlin Tang,Michael Thomson,Frank Wei,Ephrem C. Wu,Lingjie Xu,Koichi Yamada,Bing Yu,George Yuan,Aaron Zhong,Peizhao Zhang,Yuchen Zhou +46 more
TL;DR: This paper presents MLPerf Inference, a benchmarking method for evaluating ML inference systems, and prescribes a set of rules and best practices to ensure comparability across systems with wildly differing architectures.
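As an illustration of the kind of measurement the benchmark standardizes, the sketch below shows a single-stream-style latency measurement reporting the 90th-percentile latency, one of the metrics MLPerf Inference uses. This is a hypothetical minimal harness, not the actual MLPerf LoadGen API; `run_inference` stands in for any model invocation.

```python
import time
import statistics

def single_stream_latency(run_inference, n_queries=100):
    """Issue queries back-to-back (single-stream style) and report the
    ~90th-percentile latency in seconds. Hypothetical sketch, not LoadGen."""
    latencies = []
    for _ in range(n_queries):
        t0 = time.perf_counter()
        run_inference()
        latencies.append(time.perf_counter() - t0)
    # statistics.quantiles with n=10 yields 9 cut points; the last is ~p90
    return statistics.quantiles(latencies, n=10)[-1]
```

Reporting a tail percentile rather than the mean is what makes results comparable across systems with very different latency distributions.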
Posted Content
MLPerf Inference Benchmark
Vijay Janapa Reddi, Christine Cheng, David Kanter, Peter Mattson, et al. (same author list as the proceedings version above)
TL;DR: MLPerf Inference is a benchmarking method for evaluating ML inference systems with widely differing architectures. The first call for submissions garnered more than 600 reproducible inference-performance measurements from 14 organizations, representing over 30 systems that showcase a wide range of capabilities.
Journal ArticleDOI
Mechanistic Analytical Modeling of Superscalar In-Order Processor Performance
TL;DR: A mechanistic analytical performance model is proposed that is built from understanding the internal mechanisms of the processor, and is shown to be accurate with an average performance prediction error of 3.2% compared to detailed cycle-accurate simulation using gem5.
Proceedings ArticleDOI
A mechanistic performance model for superscalar in-order processors
TL;DR: The proposed mechanistic performance model for superscalar in-order processors models the impact of non-unit instruction execution latencies, inter-instruction dependencies, cache/TLB misses and branch mispredictions, and achieves an average performance prediction error of 2.5% compared to detailed cycle-accurate simulation.
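The flavor of such a mechanistic model can be sketched as follows: execution time is a base term (instructions times base CPI) plus penalty terms for each class of miss event, with the event counts and penalties as inputs. This is a simplified interval-analysis-style illustration of the general approach, not the model from either paper; all numbers in the example are made up.

```python
def estimate_cycles(n_insts, base_cpi, miss_events):
    """Mechanistic-style cycle estimate: base execution time plus a
    penalty contribution per miss-event class (e.g. cache misses,
    branch mispredictions). miss_events is a list of (count, penalty)."""
    cycles = n_insts * base_cpi
    for count, penalty in miss_events:
        cycles += count * penalty
    return cycles

# Hypothetical workload: 1M instructions, base CPI 1.0,
# 10k branch mispredicts at 15 cycles, 2k cache misses at 100 cycles.
total = estimate_cycles(1_000_000, 1.0, [(10_000, 15), (2_000, 100)])
```

Because each term maps to a concrete processor mechanism, such a model explains *why* performance changes, unlike a purely statistical fit.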
Journal ArticleDOI
Selecting representative benchmark inputs for exploring microprocessor design spaces
TL;DR: A number of methods for selecting representative inputs are proposed and evaluated, and it is shown that the optimal design point can be found with as few as three inputs.
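One simple way to pick a small, representative input subset is to characterize each input by a feature vector (e.g. program characteristics) and choose inputs that cover the feature space. The sketch below uses greedy farthest-point sampling as a stand-in for the paper's selection methods; the feature vectors and input names are hypothetical.

```python
import math

def select_representatives(features, k):
    """Greedily pick k inputs whose feature vectors are maximally spread:
    start from the first input, then repeatedly add the input farthest
    from everything already chosen. features: {name: [floats]}."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    names = list(features)
    chosen = [names[0]]  # seed with the first input
    while len(chosen) < min(k, len(names)):
        best = max((n for n in names if n not in chosen),
                   key=lambda n: min(dist(features[n], features[c])
                                     for c in chosen))
        chosen.append(best)
    return chosen
```

Simulating only the selected inputs then approximates a full design-space exploration at a fraction of the simulation cost.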