
Christopher J. Anders

Researcher at Technical University of Berlin

Publications -  19
Citations -  1307

Christopher J. Anders is an academic researcher from Technical University of Berlin. The author has contributed to research in topics including computer science and artificial neural networks. The author has an h-index of 10 and has co-authored 17 publications receiving 388 citations.

Papers
Journal ArticleDOI

Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications

TL;DR: In this paper, the authors provide a timely overview of explainable AI, with a focus on 'post-hoc' explanations, explain its theoretical foundations, put interpretability algorithms to the test from both a theoretical and a comparative-evaluation perspective using extensive simulations, and demonstrate successful usage of XAI in a representative selection of application scenarios.
Posted Content

Explanations can be manipulated and geometry is to blame

TL;DR: It is shown that explanations can be manipulated arbitrarily by applying barely perceptible perturbations to the input that keep the network's output approximately constant, and that this phenomenon can be related theoretically to certain geometrical properties of neural networks.
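
To illustrate the kind of manipulation described above, the following is a minimal, hypothetical PyTorch sketch, not the paper's exact procedure: it optimizes a small input perturbation so that a plain gradient saliency map drifts toward an attacker-chosen target while a penalty keeps the model output approximately constant. All function names and hyperparameters below are illustrative assumptions.

```python
import torch

def saliency(model, x):
    # Plain gradient 'explanation': gradient of the predicted-class logit w.r.t. the input.
    x = x.clone().requires_grad_(True)
    logits = model(x)
    logits[0, logits.argmax(dim=1).item()].backward()  # assumes batch size 1
    return x.grad.detach()

def manipulate_explanation(model, x, target_expl, steps=200, lr=1e-3, gamma=1e3):
    # Hypothetical sketch: perturb x so its saliency map approaches target_expl
    # while the network output stays close to the original prediction.
    with torch.no_grad():
        orig_logits = model(x)
        c = orig_logits.argmax(dim=1).item()
    x_adv = x.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([x_adv], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x_adv)
        # Differentiable saliency of the perturbed input (requires second-order gradients;
        # for ReLU networks these can vanish, which is where the paper's geometric
        # analysis and smoothed activations such as softplus come into play).
        grad_x, = torch.autograd.grad(logits[0, c], x_adv, create_graph=True)
        expl_loss = ((grad_x - target_expl) ** 2).mean()   # pull the explanation toward the target
        out_loss = ((logits - orig_logits) ** 2).mean()    # keep the output approximately constant
        (expl_loss + gamma * out_loss).backward()
        optimizer.step()
    return x_adv.detach()
```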
Posted Content

Toward Interpretable Machine Learning: Transparent Deep Neural Networks and Beyond

TL;DR: This work aims to provide a timely overview of this active, emerging field of machine learning, explain its theoretical foundations, put interpretability algorithms to the test from both a theoretical and a comparative-evaluation perspective using extensive simulations, and outline best-practice aspects.
Proceedings Article

Explanations can be manipulated and geometry is to blame

TL;DR: In this paper, the authors show that explanations can be manipulated arbitrarily by applying barely perceptible perturbations to the input that keep the network's output approximately constant, which is disconcerting for both trust and interpretability.