Michaela Blott
Researcher at Xilinx
Publications - 88
Citations - 2774
Michaela Blott is an academic researcher at Xilinx whose work focuses on artificial neural networks and computer science. She has an h-index of 19 and has co-authored 74 publications receiving 1,827 citations. Her previous affiliations include the University of Sydney and Imperial College London.
Papers
Proceedings ArticleDOI
FINN: A Framework for Fast, Scalable Binarized Neural Network Inference
Yaman Umuroglu,Nicholas J. Fraser,Giulio Gambardella,Michaela Blott,Philip H. W. Leong,Magnus Jahre,Kees Vissers +6 more
TL;DR: Presents FINN, a framework for building fast and flexible FPGA accelerators. It uses a heterogeneous streaming architecture that implements fully connected, convolutional, and pooling layers, with per-layer compute resources tailored to user-provided throughput requirements.
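Binarized networks of the kind FINN accelerates replace multiply-accumulates with XNOR and popcount operations. A minimal sketch of that identity in Python (the function names are illustrative, not FINN's API):

```python
import numpy as np

def binarize(x):
    # Map real values to {-1, +1} by sign (0 maps to +1).
    return np.where(x >= 0, 1, -1).astype(np.int8)

def binary_dot(w_bin, a_bin):
    # For {-1, +1} vectors encoded as bits {0, 1}, the dot product
    # reduces to XNOR + popcount:
    #   dot = 2 * popcount(XNOR(w, a)) - n
    w_bits = (w_bin > 0)
    a_bits = (a_bin > 0)
    xnor = ~(w_bits ^ a_bits)
    return 2 * int(np.count_nonzero(xnor)) - len(w_bin)

w = binarize(np.array([0.3, -1.2, 0.7, -0.1]))
a = binarize(np.array([-0.5, -0.9, 0.2, 0.4]))
assert binary_dot(w, a) == int(np.dot(w, a))
```

On an FPGA the XNOR and popcount map directly onto LUTs, which is what makes binarized inference so resource-efficient.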
Journal ArticleDOI
FINN-R: An End-to-End Deep-Learning Framework for Fast Exploration of Quantized Neural Networks
Michaela Blott,Thomas B. Preußer,Nicholas J. Fraser,Giulio Gambardella,Kenneth O'Brien,Yaman Umuroglu,Miriam Leeser,Kees Vissers +7 more
TL;DR: The second generation of the FINN framework is described, an end-to-end tool that enables design-space exploration and automates the creation of fully customized inference engines on FPGAs that optimizes for given platforms, design targets, and a specific precision.
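FINN-R's design-space exploration spans arbitrary precisions. The core idea of quantizing values to a chosen bit width can be sketched with simple uniform quantization (an illustrative sketch, not FINN-R code):

```python
import numpy as np

def quantize_uniform(x, bits, x_min=-1.0, x_max=1.0):
    """Quantize x to 2**bits evenly spaced levels in [x_min, x_max]."""
    levels = 2 ** bits - 1
    scale = (x_max - x_min) / levels
    q = np.round((np.clip(x, x_min, x_max) - x_min) / scale)
    return q * scale + x_min

x = np.array([-0.8, -0.1, 0.33, 0.9])
# With bits=2 the values snap onto the 4 levels {-1, -1/3, 1/3, 1}.
xq = quantize_uniform(x, bits=2)
```

Sweeping `bits` per layer and measuring the resulting accuracy and resource cost is, in essence, the exploration such a framework automates.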
Proceedings ArticleDOI
SYQ: Learning Symmetric Quantization for Efficient Deep Neural Networks
TL;DR: In this article, the authors propose a quantization method that reduces information loss by learning a symmetric codebook for particular weight subgroups, preserving the hardware simplicity of low-precision representations.
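The symmetric-codebook idea can be illustrated with per-row scaling of sign-binarized weights, where each row ("subgroup") gets its own scale and the codebook {-α, +α} is symmetric about zero (a simplified sketch, not the paper's exact method):

```python
import numpy as np

def symmetric_quantize(W):
    # One nonnegative scale per row; the resulting codebook {-alpha, +alpha}
    # is symmetric, so hardware only needs a sign flip plus one multiply.
    alpha = np.mean(np.abs(W), axis=1, keepdims=True)
    return np.sign(W) * alpha

W = np.array([[0.5, -1.5, 1.0],
              [0.2, 0.4, -0.6]])
Wq = symmetric_quantize(W)  # rows become {-1, +1} and {-0.4, +0.4}
```

Finer subgroups (e.g. per-pixel rather than per-row) trade a few extra scales for lower quantization error.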
Posted Content
FINN-R: An End-to-End Deep-Learning Framework for Fast Exploration of Quantized Neural Networks
Michaela Blott,Thomas B. Preusser,Nicholas J. Fraser,Giulio Gambardella,Kenneth O'Brien,Yaman Umuroglu +5 more
TL;DR: The second generation of the FINN framework is described, an end-to-end tool which enables design space exploration and automates the creation of fully customized inference engines on FPGAs that optimizes for given platforms, design targets and a specific precision.