Christian Etmann
Researcher at University of Cambridge
Publications - 22
Citations - 931
Christian Etmann is an academic researcher at the University of Cambridge whose work centres on artificial neural networks and deep learning. He has an h-index of 7 and has co-authored 18 publications receiving 353 citations. His previous affiliations include the University of Bremen and the University of Bath.
Papers
Journal Article
Common pitfalls and recommendations for using machine learning to detect and prognosticate for COVID-19 using chest radiographs and CT scans
Michael S. Roberts, Derek Driggs, Matthew Thorpe, Julian D. Gilbey, Michael Yeung, Stephan Ursprung, Angelica I. Aviles-Rivero, Christian Etmann, Cathal McCague, Lucian Beer, Jonathan R. Weir-McCall, Zhongzhao Teng, Effrossyni Gkrania-Klotsas, James H.F. Rudd, Evis Sala, Carola-Bibiane Schönlieb +17 more
TL;DR: The review finds that none of the identified models is of potential clinical use, owing to methodological flaws and/or underlying biases; this is a major weakness given the urgency with which validated COVID-19 models are needed.
Journal Article
Deep Learning for Tumor Classification in Imaging Mass Spectrometry
TL;DR: Proposes an architecture based on deep convolutional networks, adapted to the characteristics of mass spectrometry data, together with a sensitivity-analysis strategy for interpreting the learned model in the spectral domain.
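The paper's code is not reproduced here, but the interpretation strategy it describes, reading the gradient of a class score with respect to the input spectrum as a sensitivity map, can be sketched in a few lines of PyTorch. The SpectrumCNN architecture, its layer sizes and the sensitivity helper below are illustrative assumptions, not the authors' model:

import torch
import torch.nn as nn

# Minimal 1D convolutional classifier for mass spectra; the layer
# sizes are illustrative, not those of the paper.
class SpectrumCNN(nn.Module):
    def __init__(self, n_channels=1, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 8, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(32),
        )
        self.classifier = nn.Linear(8 * 32, n_classes)

    def forward(self, x):
        h = self.features(x)
        return self.classifier(h.flatten(1))

def sensitivity(model, spectrum, target_class):
    # Gradient of the target-class score w.r.t. the input spectrum:
    # large absolute values mark m/z bins the model is sensitive to.
    x = spectrum.clone().requires_grad_(True)
    score = model(x)[0, target_class]
    score.backward()
    return x.grad.abs().squeeze()

model = SpectrumCNN()
spectrum = torch.randn(1, 1, 1024)  # one spectrum with 1024 m/z bins
sens = sensitivity(model, spectrum, target_class=1)
print(sens.shape)  # torch.Size([1024])

Peaks in the resulting map indicate m/z bins whose perturbation most changes the predicted class, which is the kind of spectral-domain interpretation the TL;DR refers to.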
Proceedings Article
On the Connection Between Adversarial Robustness and Saliency Map Interpretability
TL;DR: Investigates the alignment between the input image and its saliency map, and shows that the alignment grows with the distance to the decision boundary, a relation that holds exactly for linear models.
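The linear case can be verified directly. For a binary classifier f(x) = wᵀx (zero bias, for simplicity), the saliency map is the gradient ∇f(x) = w, and the alignment between input and saliency map coincides with the distance to the decision boundary. The notation below is a standard illustration of this argument, not quoted from the paper:

\[
  \alpha(x)
  = \frac{\lvert \langle x, \nabla_x f(x) \rangle \rvert}{\lVert \nabla_x f(x) \rVert}
  = \frac{\lvert w^\top x \rvert}{\lVert w \rVert}
  = \operatorname{dist}\bigl(x, \{ z : w^\top z = 0 \}\bigr).
\]

For non-linear networks the gradient varies with x, so this identity degrades to the approximate relation the paper investigates.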
Posted Content
On the Connection Between Adversarial Robustness and Saliency Map Interpretability
TL;DR: This work hypothesizes that as the distance to the decision boundary grows, so does the alignment between the input image and the saliency map, and identifies where the non-linear nature of neural networks weakens the relation.
Journal Article
Structure-preserving deep learning
Elena Celledoni, Matthias J. Ehrhardt, Christian Etmann, Robert I. McLachlan, Brynjulf Owren, Carola-Bibiane Schönlieb, Ferdia Sherry +6 more
TL;DR: Reviews several directions in deep learning: some deep neural networks can be understood as discretisations of dynamical systems; neural networks can be designed to have desirable properties such as invertibility or group equivariance; and new algorithmic frameworks based on conformal Hamiltonian systems and Riemannian manifolds have been proposed to solve the associated optimisation problems.
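The first of these directions admits a compact illustration: a residual block x_{k+1} = x_k + h f(x_k) is exactly one forward-Euler step of the ODE dx/dt = f(x). The sketch below is a minimal PyTorch rendering of that correspondence; the layer sizes and step size h are illustrative assumptions, not taken from the paper:

import torch
import torch.nn as nn

class EulerResBlock(nn.Module):
    # One residual block read as a forward-Euler step of dx/dt = f(x).
    def __init__(self, dim, h=0.1):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))
        self.h = h  # step size of the discretisation

    def forward(self, x):
        return x + self.h * self.f(x)  # x_{k+1} = x_k + h * f(x_k)

# Stacking N blocks integrates the ODE from t = 0 to t = N * h.
net = nn.Sequential(*[EulerResBlock(16) for _ in range(10)])
x0 = torch.randn(4, 16)
xT = net(x0)

When h times the Lipschitz constant of f is below 1, each such step is invertible by fixed-point iteration, which is one standard route to the invertible networks the review discusses.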