
Michaela Blott

Researcher at Xilinx

Publications: 88
Citations: 2774

Michaela Blott is an academic researcher from Xilinx. The author has contributed to research on topics including artificial neural networks and computer science. The author has an h-index of 19, and has co-authored 74 publications receiving 1827 citations. Previous affiliations of Michaela Blott include the University of Sydney and Imperial College London.

Papers
Proceedings ArticleDOI

FINN: A Framework for Fast, Scalable Binarized Neural Network Inference

TL;DR: Presents FINN, a framework for building fast and flexible FPGA accelerators using a heterogeneous streaming architecture that implements fully connected, convolutional, and pooling layers, with per-layer compute resources tailored to user-provided throughput requirements.
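FINN targets binarized networks, where weights and activations are constrained to {-1, +1}, so a dot product reduces to XNOR and popcount over packed bit vectors. A minimal sketch of that equivalence (illustrative only, not FINN's actual HLS implementation; all function names here are hypothetical):

```python
import numpy as np

def binarize(x):
    """Map real values to {-1, +1} (sign, with 0 mapped to +1)."""
    return np.where(x >= 0, 1, -1).astype(np.int32)

def pack_bits(v):
    """Pack a {-1, +1} vector into an integer bit mask (+1 -> bit 1)."""
    return sum(1 << i for i, x in enumerate(v) if x == 1)

def xnor_popcount_dot(w_bits, a_bits, nbits):
    """Dot product of two {-1, +1} vectors from their bit masks:
    #agreements - #disagreements = 2 * popcount(XNOR) - nbits."""
    agree = bin(~(w_bits ^ a_bits) & ((1 << nbits) - 1)).count("1")
    return 2 * agree - nbits
```

In hardware, the XNOR/popcount form replaces multipliers with LUT-friendly logic, which is what makes binarized inference on FPGAs so fast.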
Journal ArticleDOI

FINN-R: An End-to-End Deep-Learning Framework for Fast Exploration of Quantized Neural Networks

TL;DR: Describes the second generation of the FINN framework, an end-to-end tool that enables design-space exploration and automates the creation of fully customized FPGA inference engines optimized for a given platform, design targets, and precision.
Proceedings ArticleDOI

SYQ: Learning Symmetric Quantization for Efficient Deep Neural Networks

TL;DR: Proposes a quantization method that reduces information loss by learning a symmetric codebook for particular weight subgroups, preserving the hardware simplicity of low-precision representations.
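As a rough illustration of symmetric low-precision quantization (a simplified sketch under stated assumptions, not SYQ's actual training procedure, which learns scales via gradient descent), one can share a single positive scale per weight subgroup, here taken to be a row, and snap weights to symmetric integer levels:

```python
import numpy as np

def symmetric_quantize(w, num_bits=2):
    """Quantize each row of w to symmetric levels {-qmax..+qmax} * scale,
    with one scale shared per row (a stand-in for SYQ's weight subgroups)."""
    qmax = 2 ** (num_bits - 1) - 1                 # e.g. 1 -> ternary levels
    scale = np.abs(w).max(axis=1, keepdims=True) / max(qmax, 1)
    scale = np.where(scale == 0, 1.0, scale)       # avoid divide-by-zero
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q * scale, q, scale
```

Because the levels are symmetric around zero, the multiply stays a cheap integer operation followed by a single per-subgroup scaling, which is the hardware simplicity the paper aims to preserve.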