Richard Shaw
Researcher at University College London
Publications - 14
Citations - 95
Richard Shaw is an academic researcher at University College London. His research focuses on computer science and convolutional neural networks. He has an h-index of 3, having co-authored 12 publications that have received 35 citations. Previous affiliations of Richard Shaw include King's College London.
Papers
Journal ArticleDOI
A k-Space Model of Movement Artefacts: Application to Segmentation Augmentation and Artefact Removal
TL;DR: A method for generating realistic motion artefacts from artefact-free magnitude MRI data to be used in deep learning frameworks, increasing training appearance variability and ultimately making machine learning algorithms such as convolutional neural networks (CNNs) more robust to the presence of motion artefacts.
Journal ArticleDOI
NTIRE 2022 Challenge on High Dynamic Range Imaging: Methods and Results
Eduardo Pérez-Pellitero, Sibi Catley-Chandar, Richard Shaw, Aleš Leonardis, Radu Timofte, Zexin Zhang, Cen Liu, Yunbo Peng, Yue Lin, G. Yu, Jin Zhang, Zhe Ma, Hongbin Wang, Xiangyu Chen, Haiwei Wu, Lin Liu, Chao Dong, Jiantao Zhou, Qingsen Yan, Song Zhang, Weiye Chen, Yuhang Liu, Zhen Zhong Zhang, Yanning Zhang, Javen Shi, Dong Gong, Dan Zhu, Mengdi Sun, Guannan Chen, Yang Hu, Hao Li, Bao Jun Zou, Zhen Liu, Wen-Qing Lin, Ting Jiang, Chengzhi Jiang, Xinpeng Li, Mingyan Han, Haoqiang Fan, Jian Sun, Shuaicheng Liu, Juan Marín-Vega, Michael Sloth, Peter Schneider-Kamp, R. Rottger, Chunyan Li, Longyi Bao, Gang He, Ziya Xu, Li Xu, Gen Zhan, Ming Sun, X. Y. Wen, Junlin Li, Jin-jin Li, Chenghua Li, Ruipeng Gang, Fang Li, Chenming Liu, S. Feng, Fei Lei, Ruiqiang Li, Jun-Xia Ruan, Tianhong Dai, Wei Li, Zhan Guo Lu, Hengyan Liu, P-Y Huang, Guangyu Ren, Yonglin Luo, Chang Liu, Qiang Tu, Saisai Ma, Yi Cao, S. Tel, Barthélémy Heyrman, Dominique Ginhac, Chul Lee, Gahyeon Kim, Seon-Joo Park, An Gia Vien, T.-T. Mai, H. Yoon, Tu Van Vo, Alexander M. Holston, Sheir Afgen Zaheer, Chan-Young Park +86 more
TL;DR: This paper reviews the challenge on constrained high dynamic range (HDR) imaging that was part of the New Trends in Image Restoration and Enhancement (NTIRE) workshop, held in conjunction with CVPR 2022, and the competition set-up, datasets, proposed methods and their results.
MRI k-Space Motion Artefact Augmentation: Model Robustness and Task-Specific Uncertainty
TL;DR: In this article, a method for generating realistic motion artefacts from artefact-free data is presented to increase training appearance variability and ultimately make machine learning algorithms such as convolutional neural networks (CNNs) robust to the presence of motion artefacts.
MRI k-Space Motion Artefact Augmentation: Model Robustness and Task-Specific Uncertainty
TL;DR: This work model patient movement as a sequence of randomly-generated, ‘de-meaned’, rigid 3D affine transforms which, by resampling artefact-free volumes, are then combined in k-space to generate realistic motion artefacts.
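The k-space combination described in this TL;DR can be illustrated with a toy sketch. This is a simplified, hypothetical implementation (random rigid movement reduced to integer translations only, and invented function names), not the authors' actual method, but it shows the core idea: each movement state produces its own k-space, and bands of phase-encode lines are taken from different states to mimic the subject moving during acquisition.

```python
import numpy as np

def simulate_motion_artefact(image, n_states=4, max_shift=3, rng=None):
    """Toy 2D k-space motion artefact simulation (illustrative sketch).

    Patient movement is modelled as a sequence of random, approximately
    de-meaned rigid transforms (translations only, for simplicity).
    Contiguous bands of phase-encode lines are taken from the FFT of
    each movement state, so the combined k-space mixes inconsistent
    poses, producing ghosting-like artefacts after inverse FFT."""
    rng = np.random.default_rng(rng)
    shifts = rng.integers(-max_shift, max_shift + 1, size=(n_states, 2))
    # 'De-mean' so the average pose stays close to the identity.
    shifts = shifts - shifts.mean(axis=0).round().astype(int)
    h, w = image.shape
    k_combined = np.zeros((h, w), dtype=complex)
    bands = np.array_split(np.arange(h), n_states)  # phase-encode lines
    for (dy, dx), band in zip(shifts, bands):
        moved = np.roll(image, shift=(dy, dx), axis=(0, 1))
        k = np.fft.fftshift(np.fft.fft2(moved))
        k_combined[band, :] = k[band, :]
    # Magnitude image reconstructed from the corrupted k-space.
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k_combined)))
```

In a segmentation-augmentation setting, such corrupted volumes would be fed to the network alongside the original labels, increasing appearance variability without collecting motion-corrupted scans.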
Posted Content
A Heteroscedastic Uncertainty Model for Decoupling Sources of MRI Image Quality
TL;DR: A novel cascading CNN architecture based on a student-teacher framework is proposed to decouple sources of uncertainty related to different k-space augmentations in an entirely self-supervised manner, predicting separate uncertainty quantities for each type of data degradation.
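Heteroscedastic uncertainty models of this kind typically build on a Gaussian negative log-likelihood in which the network predicts an input-dependent variance alongside its main output. The sketch below shows that standard loss formulation; it is an assumption about the general technique, not the paper's exact architecture or loss.

```python
import numpy as np

def heteroscedastic_nll(y_true, y_pred, log_var):
    """Per-pixel Gaussian negative log-likelihood with a predicted,
    input-dependent (heteroscedastic) variance.

    The model outputs both a prediction y_pred and a log-variance map
    log_var. Degraded regions can be down-weighted by predicting a
    larger variance there, but the 0.5 * log_var term penalises
    inflating the variance everywhere."""
    residual_term = 0.5 * np.exp(-log_var) * (y_true - y_pred) ** 2
    return np.mean(residual_term + 0.5 * log_var)
```

Predicting log-variance rather than variance keeps the estimate positive by construction and stabilises optimisation; in a student-teacher set-up such as the one described, separate log-variance heads could be trained per degradation type.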