Nicholas J. Fraser

Researcher at Xilinx

Publications -  42
Citations -  1809

Nicholas J. Fraser is an academic researcher from Xilinx. The author has contributed to research in topics: Artificial neural network & Convolutional neural network. The author has an h-index of 11 and has co-authored 38 publications receiving 1082 citations. Previous affiliations of Nicholas J. Fraser include Imperial College London & University of Sydney.

Papers
Proceedings ArticleDOI

FINN: A Framework for Fast, Scalable Binarized Neural Network Inference

TL;DR: Presents FINN, a framework for building fast and flexible FPGA accelerators using a flexible heterogeneous streaming architecture that implements fully connected, convolutional, and pooling layers, with per-layer compute resources tailored to user-provided throughput requirements.
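
A core idea behind binarized inference of the kind FINN accelerates is that a dot product between two {-1, +1} vectors reduces to an XNOR followed by a popcount. The short Python sketch below illustrates only that arithmetic identity, not FINN's actual FPGA implementation; the function name binarized_dot and the bit-packing convention are illustrative assumptions.

import numpy as np

def binarized_dot(a_bits: int, b_bits: int, n: int) -> int:
    """Dot product of two length-n {-1, +1} vectors packed as bitmasks
    (bit = 1 means +1, bit = 0 means -1), computed as XNOR + popcount:
    result = 2 * popcount(XNOR(a, b)) - n."""
    mask = (1 << n) - 1
    agree = ~(a_bits ^ b_bits) & mask       # positions where the two vectors match
    return 2 * bin(agree).count("1") - n    # matches minus mismatches

# Cross-check against an ordinary {-1, +1} dot product.
rng = np.random.default_rng(0)
a = rng.choice([-1, 1], size=16)
b = rng.choice([-1, 1], size=16)
pack = lambda v: int("".join("1" if x > 0 else "0" for x in v), 2)
assert binarized_dot(pack(a), pack(b), 16) == int(np.dot(a, b))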
Journal ArticleDOI

FINN-R: An End-to-End Deep-Learning Framework for Fast Exploration of Quantized Neural Networks

TL;DR: The second generation of the FINN framework is described, an end-to-end tool that enables design-space exploration and automates the creation of fully customized inference engines on FPGAs, optimized for given platforms, design targets, and a specific precision.
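
The per-layer tailoring mentioned above boils down to a sizing problem: give each layer of the streaming pipeline just enough parallelism to sustain the requested frame rate. The sketch below is a deliberately simplified back-of-the-envelope version of that step, assuming a layer with W operations per frame and P parallel lanes needs W / P cycles per frame; the real tool additionally models precision, folding constraints, and platform resources, and all numbers here are hypothetical.

import math

def lanes_per_layer(ops_per_frame, clock_hz, target_fps):
    """Smallest number of parallel lanes per layer such that
    ops_per_frame / lanes cycles still fits the per-frame cycle budget.
    Simplified: ignores precision, memory bandwidth, and LUT/DSP limits."""
    cycle_budget = clock_hz / target_fps                    # cycles available per frame
    return [math.ceil(ops / cycle_budget) for ops in ops_per_frame]

# Hypothetical three-layer network, 200 MHz clock, 10 000 frames/s target.
print(lanes_per_layer([2_000_000, 800_000, 50_000], clock_hz=200e6, target_fps=10_000))
# -> [100, 40, 3]: the heaviest layer receives the most parallelism.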
Proceedings ArticleDOI

FINN: A Framework for Fast, Scalable Binarized Neural Network Inference

TL;DR: In this article, the authors present FINN, a framework for building fast and flexible FPGA accelerators using a flexible heterogeneous streaming architecture that implements fully connected, convolutional, and pooling layers, with per-layer compute resources tailored to user-provided throughput requirements.
Proceedings ArticleDOI

SYQ: Learning Symmetric Quantization for Efficient Deep Neural Networks

TL;DR: In this article, a quantization method is proposed to reduce the information loss from quantization by learning a symmetric codebook for particular weight subgroups, such that the hardware simplicity of the low-precision representations is preserved.
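
To make the "symmetric codebook per weight subgroup" idea concrete, the sketch below quantizes a weight matrix row by row to sign(W) times a per-row scaling factor. Treating each row as a subgroup and initialising the scale to the row's mean absolute value are simplifying assumptions for illustration; in SYQ the scaling factors are learned during training.

import numpy as np

def symmetric_rowwise_quant(W: np.ndarray):
    """Quantize each row of W to sign(W) * alpha_row, with one symmetric
    scaling factor per row (subgroup). Here alpha is just initialised to the
    row's mean absolute value; the real method learns it by gradient descent."""
    signs = np.where(W >= 0, 1.0, -1.0)             # binary codebook {-1, +1}
    alpha = np.abs(W).mean(axis=1, keepdims=True)   # one scale per subgroup
    return signs * alpha, alpha

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))
W_q, alpha = symmetric_rowwise_quant(W)
print(np.linalg.norm(W - W_q))   # the low-precision copy stays close to W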
Posted Content

FINN-R: An End-to-End Deep-Learning Framework for Fast Exploration of Quantized Neural Networks

TL;DR: The second generation of the FINN framework is described, an end-to-end tool which enables design-space exploration and automates the creation of fully customized inference engines on FPGAs, optimized for given platforms, design targets, and a specific precision.