Wayne Luk
Researcher at Imperial College London
Publications - 737
Citations - 13643
Wayne Luk is an academic researcher from Imperial College London. The author has contributed to research topics including field-programmable gate arrays and reconfigurable computing. The author has an h-index of 54, co-authored 703 publications receiving 12517 citations. Previous affiliations of Wayne Luk include Fudan University & University of London.
Papers
Journal ArticleDOI
Regular pipelined multipliers
TL;DR: Two regular processor arrays for multiplying unsigned numbers are described; their essence is a structure that allows designs with different degrees of pipelining to be synthesised.
Journal ArticleDOI
ADvaNCE – Efficient and Scalable Approximate Density-Based Clustering Based on Hashing
TL;DR: ADvaNCE is developed, a new approach that approximates DBSCAN by combining locality-sensitive hashing, to approximate and speed up distance calculations, with representative point selection, to reduce the number of distance calculations performed.
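The TL;DR above does not give ADvaNCE's exact hashing scheme; as a minimal sketch of the general idea, a locality-sensitive hash for Euclidean distance maps each point to a bucket, and exact distance calculations are only performed between points sharing a bucket. The projection vector here is fixed for illustration (real Euclidean LSH draws it from a Gaussian and uses several hash functions):

```python
import numpy as np

def lsh_bucket(point, a=np.array([1.0, 1.0]), b=0.0, w=4.0):
    """Bucket a point via h(x) = floor((a.x + b) / w).
    Nearby points tend to share a bucket, so a DBSCAN-style scan
    only computes exact distances within each bucket."""
    return int(np.floor((np.dot(a, point) + b) / w))

points = [np.array([0.0, 0.0]),      # near each other
          np.array([0.1, 0.1]),
          np.array([100.0, 100.0])]  # far away

buckets = [lsh_bucket(p) for p in points]
# The two nearby points land in the same bucket, while the distant
# point lands elsewhere, so its distance to them is never computed.
```

With `w` chosen on the order of the clustering radius, the probability that two points collide decays with their distance, which is what makes the approximation of DBSCAN's neighbourhood queries cheap.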
Proceedings ArticleDOI
Reducing Underflow in Mixed Precision Training by Gradient Scaling
Abstract: By leveraging the half-precision floating-point format (FP16) well supported by recent GPUs, mixed precision training (MPT) enables us to train larger models under the same or even smaller budget. However, due to the limited representation range of FP16, gradients can often experience severe underflow problems that hinder backpropagation and degrade model accuracy. MPT adopts loss scaling, which scales up the loss value just before backpropagation starts, to mitigate underflow by enlarging the magnitude of gradients. Unfortunately, scaling once is insufficient: gradients from distinct layers can each have different data distributions and require non-uniform scaling. Heuristics and hyperparameter tuning are needed to minimize these side effects of loss scaling. We propose gradient scaling, a novel method that analytically calculates the appropriate scale for each gradient on-the-fly. It addresses underflow effectively without numerical problems like overflow and without the need for tedious hyperparameter tuning. Experiments on a variety of networks and tasks show that gradient scaling can improve accuracy and reduce overall training effort compared with the state-of-the-art MPT.
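The underflow problem and the effect of scaling described in the abstract can be illustrated in a few lines of NumPy. The paper computes per-gradient scales analytically; the scale factor of 1024 below is just a hypothetical value for illustration:

```python
import numpy as np

grad = 1e-8     # a small float32 gradient value
scale = 1024.0  # hypothetical scale factor (the paper derives it analytically)

# Casting the raw gradient to FP16 underflows to zero: 1e-8 is below
# FP16's smallest subnormal (~5.96e-8), so the value is lost.
naive = np.float16(grad)

# Scaling before the cast keeps the value inside FP16's range;
# after backpropagation, unscaling in FP32 recovers the gradient.
scaled = np.float16(grad * scale)
restored = np.float32(scaled) / scale
```

Here `naive` is exactly `0.0` while `restored` is close to the original `1e-8`, which is why enlarging gradient magnitudes before the FP16 cast preserves information that backpropagation would otherwise lose.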
Proceedings ArticleDOI
Towards Real Time Radiotherapy Simulation
Nils Voss,P Ziegenhein,Lukas Vermond,Joost Hoozemans,Oskar Mencer,Uwe Oelfke,Wayne Luk,Georgi Gaydadjiev +7 more
TL;DR: A novel reconfigurable hardware architecture is proposed to implement Monte Carlo based simulation of physical dose accumulation for intensity-modulated adaptive radiotherapy to provide accurate online dose calculation in real-time during patient treatment.
Proceedings ArticleDOI
A framework for developing hardware-software systems
Wayne Luk,Peter Y. K. Cheung +1 more
TL;DR: The framework supports flexible hardware and software partitions, so that designs can be customised to changing performance requirements and resource availability, and facilitates rapid design exploration, adaptation and evaluation.