Wayne Luk
Researcher at Imperial College London
Publications - 737
Citations - 13643
Wayne Luk is an academic researcher at Imperial College London. He has contributed to research on topics including field-programmable gate arrays and reconfigurable computing. He has an h-index of 54, has co-authored 703 publications, and has received 12517 citations. Previous affiliations of Wayne Luk include Fudan University and the University of London.
Papers
Proceedings ArticleDOI
Parametric reconfigurable designs with Machine Learning Optimizer
Maciej Kurek, Wayne Luk +1 more
TL;DR: Investigates the use of meta-heuristics and machine learning to automate the optimisation of reconfigurable application parameters, and presents a case study of a quadrature-based financial application with varied precision.
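To illustrate the kind of meta-heuristic parameter search the TL;DR describes, here is a minimal hill-climbing sketch in plain Python. The `evaluate` cost model, the parameter names (`precision_bits`, `unroll`), and all numeric choices are hypothetical stand-ins for a real build-and-measure step on reconfigurable hardware; this is not the paper's actual optimizer.

```python
import random

# Hypothetical cost model standing in for building and measuring a design:
# lower is better; it penalises low precision (accuracy loss) and the
# area cost of unrolling. Purely illustrative numbers.
def evaluate(precision_bits, unroll):
    accuracy_penalty = max(0, 24 - precision_bits) ** 2
    area_penalty = unroll * precision_bits / 4
    return accuracy_penalty + area_penalty

def hill_climb(seed=0, iterations=300):
    rng = random.Random(seed)
    best = (rng.choice(range(8, 33)), rng.choice([1, 2, 4, 8]))
    best_cost = evaluate(*best)
    for _ in range(iterations):
        precision, _ = best
        # Simple meta-heuristic move: nudge precision, resample unrolling.
        candidate = (
            min(32, max(8, precision + rng.choice([-2, 2]))),
            rng.choice([1, 2, 4, 8]),
        )
        cost = evaluate(*candidate)
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best, best_cost
```

In practice each `evaluate` call would be far more expensive (synthesis plus benchmarking), which is why surrogate models learned by machine learning are attractive: they predict the cost cheaply so the search can try many more candidates.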
Posted Content
Efficient Structured Pruning and Architecture Searching for Group Convolution
Ruizhe Zhao, Wayne Luk +1 more
TL;DR: This paper formulates group convolution pruning as finding the optimal channel permutation that imposes the structural constraints, solves it efficiently with heuristics, and applies local search to explore group configurations based on estimated pruning cost, maximising test accuracy.
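The core idea, permuting channels so that a block-diagonal (grouped) structure keeps as much weight importance as possible, can be sketched with a simple greedy heuristic. This is an illustrative assumption, not the paper's algorithm: `importance[i][j]` is a hypothetical magnitude linking input channel `i` to output channel `j`, and channels are assigned greedily to the group whose output block they match best; everything outside the blocks would be pruned.

```python
# Greedy channel-to-group assignment (illustrative, not the paper's method).
# Assumes the channel count is divisible by the number of groups.
def group_by_greedy(importance, groups):
    """importance[i][j]: magnitude linking input channel i to output channel j."""
    n = len(importance)
    size = n // groups
    assignment = [None] * n
    remaining = set(range(n))
    for g in range(groups):
        cols = range(g * size, (g + 1) * size)
        # Rank unassigned input channels by how much importance they place
        # inside this group's block of output channels.
        scores = sorted(
            remaining,
            key=lambda i: sum(importance[i][j] for j in cols),
            reverse=True,
        )
        for i in scores[:size]:
            assignment[i] = g
            remaining.discard(i)
    # Total importance retained inside the block-diagonal structure.
    kept = sum(
        importance[i][j]
        for i in range(n)
        for j in range(assignment[i] * size, (assignment[i] + 1) * size)
    )
    return assignment, kept
```

On an importance matrix that is block-diagonal up to a permutation, this greedy pass recovers the permutation exactly; the paper's local search over group configurations would sit on top of a step like this.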
Proceedings ArticleDOI
Reconfigurable acceleration of neural models with gap junctions
TL;DR: The simulation cost of gap junctions can be reduced by clustering them within the model, which is consistent with evidence of the structure of gap junction networks and allows each cluster to be updated in parallel.
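Why clustering enables parallel updates can be shown with a small sketch: if gap junctions are modelled as undirected edges between neuron indices (an assumed data layout, not the paper's), then the connected components of that graph are clusters with no gap junction crossing a boundary, so each cluster can be stepped independently.

```python
# Union-find over gap-junction edges (assumed representation: pairs of
# neuron indices). Each resulting connected component is a cluster that
# can be updated in parallel with the others.
def cluster_gap_junctions(num_neurons, junctions):
    parent = list(range(num_neurons))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    for a, b in junctions:
        union(a, b)
    clusters = {}
    for n in range(num_neurons):
        clusters.setdefault(find(n), []).append(n)
    return list(clusters.values())
```

The smaller and more numerous the clusters, the more parallelism is available, which matches the TL;DR's point that real gap-junction networks appear to be clustered.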
Journal ArticleDOI
Customisable Hardware Compilation
TL;DR: In this article, the authors describe a framework for hardware compilation based on a parallel imperative language, which supports multiple levels of design abstraction, transformational development, optimisation by compiler passes, and metalanguage facilities.
Proceedings ArticleDOI
High-Performance FPGA-based Accelerator for Bayesian Neural Networks
TL;DR: In this article, the authors propose an FPGA-based hardware architecture to accelerate Bayesian neural networks (BNNs) inferred through Monte Carlo Dropout, achieving up to 4 times higher energy efficiency and 9 times better compute efficiency than other state-of-the-art BNN accelerators.
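Monte Carlo Dropout itself is simple to sketch: dropout stays active at inference time, and repeated stochastic forward passes yield a predictive mean plus an uncertainty estimate. The one-layer "model" below is a hypothetical stand-in (pure Python, no framework); the paper's contribution is accelerating the many passes this method requires, not the method itself.

```python
import random
import statistics

def forward(x, weights, rng, p_drop=0.5):
    # Each weight is independently dropped with probability p_drop,
    # with the usual 1/(1 - p) rescaling of the survivors.
    total = 0.0
    for xi, wi in zip(x, weights):
        if rng.random() >= p_drop:
            total += xi * wi / (1 - p_drop)
    return total

def mc_dropout_predict(x, weights, samples=1000, seed=42):
    # Dropout remains ON at inference: each pass samples a different
    # sub-network, approximating sampling from a posterior over weights.
    rng = random.Random(seed)
    outputs = [forward(x, weights, rng) for _ in range(samples)]
    return statistics.mean(outputs), statistics.variance(outputs)
```

The predictive mean converges to the full network's output, while the variance across passes serves as the uncertainty estimate; since every sample is an independent forward pass, the workload parallelises naturally onto FPGA hardware.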