Olivier Temam
Researcher at Google
Publications - 142
Citations - 9198
Olivier Temam is an academic researcher at Google. He has contributed to research on topics including compilers and optimizing compilers, has an h-index of 42, and has co-authored 142 publications receiving 8174 citations. Previous affiliations of Olivier Temam include Leiden University and the University of Paris-Sud.
Papers
Proceedings ArticleDOI
DianNao: a small-footprint high-throughput accelerator for ubiquitous machine-learning
TL;DR: This study designs an accelerator for large-scale CNNs and DNNs, with particular emphasis on the impact of memory on accelerator design, performance, and energy, and shows that it is possible to build a high-throughput accelerator capable of performing 452 GOP/s in a small footprint.
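The paper's emphasis on memory means the accelerator stages data through small on-chip buffers (input, output, and synapse buffers) and processes a layer tile by tile rather than all at once. A minimal Python sketch of such tiled inner-product computation — tile sizes here are illustrative choices, not the paper's actual buffer dimensions:

```python
# Sketch of tiled neural-layer computation in the spirit of accelerators
# that stage data through small on-chip buffers. Tile sizes are arbitrary
# illustrative values.

def layer_tiled(inputs, weights, tile_in=2, tile_out=2):
    """Compute outputs[j] = sum_i inputs[i] * weights[j][i], tile by tile."""
    n_in, n_out = len(inputs), len(weights)
    outputs = [0.0] * n_out
    for jo in range(0, n_out, tile_out):       # tile of output neurons
        for io in range(0, n_in, tile_in):     # tile of input neurons
            # only this tile of synaptic weights needs to be resident at once
            for j in range(jo, min(jo + tile_out, n_out)):
                for i in range(io, min(io + tile_in, n_in)):
                    outputs[j] += inputs[i] * weights[j][i]
    return outputs
```

The tiling keeps the working set bounded regardless of layer size, which is what lets a small-footprint design handle large networks.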
Proceedings ArticleDOI
DaDianNao: A Machine-Learning Supercomputer
Yunji Chen, Tao Luo, Shaoli Liu, Shijin Zhang, Liqiang He, Jia Wang, Ling Li, Tianshi Chen, Zhiwei Xu, Ninghui Sun, Olivier Temam +10 more
TL;DR: This article introduces a custom multi-chip machine-learning architecture, showing that, on a subset of the largest known neural network layers, it is possible to achieve a speedup of 450.65x over a GPU, and reduce the energy by 150.31x on average for a 64-chip system.
Proceedings ArticleDOI
ShiDianNao: shifting vision processing closer to the sensor
Zidong Du, Robert Fasthuber, Tianshi Chen, Paolo Ienne, Ling Li, Tao Luo, Xiaobing Feng, Yunji Chen, Olivier Temam +8 more
TL;DR: This paper proposes an accelerator that is 60x more energy efficient than the previous state-of-the-art neural network accelerator; designed down to the layout at 65 nm, it has a modest footprint and consumes only 320 mW, yet is still about 30x faster than high-end GPUs.
Proceedings ArticleDOI
Rapidly Selecting Good Compiler Optimizations using Performance Counters
TL;DR: This paper proposes a different approach that uses performance counters to determine good compiler optimization settings: a model is learned offline and then used to predict good settings for any new program.
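The idea above can be sketched very simply: a new program's performance-counter profile is matched against profiles of previously characterized programs, and the optimization setting that worked best for the closest match is reused. This toy sketch uses a nearest-neighbour model; the counter values and flag sets below are invented for illustration and are not from the paper:

```python
# Hedged sketch: predict a good optimization setting for a new program from
# its performance-counter profile, using nearest neighbour over profiles of
# previously seen programs. Profiles and flags are illustrative inventions.

def nearest_setting(profile, training):
    """training: list of (counter_vector, best_known_flags) pairs."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training, key=lambda pair: dist(pair[0], profile))[1]

training = [
    ([0.9, 0.1], "-O3 -funroll-loops"),          # compute-bound profile
    ([0.2, 0.8], "-O2 -fprefetch-loop-arrays"),  # memory-bound profile
]
flags = nearest_setting([0.85, 0.15], training)  # close to the first profile
```

The key property is that the new program only needs to be profiled once (to collect its counters), rather than compiled and run under many candidate settings.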
Journal ArticleDOI
Semi-automatic composition of loop transformations for deep parallelism and memory hierarchies
Sylvain Girbal, Nicolas Vasilache, Cédric Bastoul, Albert Cohen, David Parello, Marc Sigler, Olivier Temam +6 more
TL;DR: This work leverages algorithmic advances in polyhedral code generation and has been implemented in a modern research compiler; using a semi-automatic optimization approach, it demonstrates that current compilers suffer from unnecessary constraints and intricacies that can be avoided in a semantically richer transformation framework.
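The transformations composed in such a framework are classic loop restructurings. As a concrete, hand-written illustration (not the paper's tool output), here is a matrix-multiply nest after composing two of them, loop interchange followed by tiling; a polyhedral framework performs such compositions on a mathematical representation of the loop nest rather than on source text:

```python
# Illustration of composing two loop transformations by hand on a
# matrix-multiply nest: interchange (i,j,k) -> (i,k,j), then tile the j loop.
# Tile size is an arbitrary illustrative value.

def matmul_transformed(A, B, n, tile=2):
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):                       # interchanged with j
            for jj in range(0, n, tile):         # tiled j loop: outer
                for j in range(jj, min(jj + tile, n)):  # tiled j loop: inner
                    C[i][j] += A[i][k] * B[k][j]
    return C
```

Both transformations preserve the computation (only the iteration order changes), which is exactly the kind of legality guarantee a polyhedral framework checks automatically when composing long transformation sequences.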