
Sebastian Lapuschkin

Researcher at Heinrich Hertz Institute

Publications: 66
Citations: 6636

Sebastian Lapuschkin is an academic researcher at the Heinrich Hertz Institute. He has contributed to research on topics including computer science and artificial neural networks, has an h-index of 23, and has co-authored 47 publications receiving 3724 citations.

Papers
Journal Article (DOI)

Explaining nonlinear classification decisions with deep Taylor decomposition

TL;DR: A novel methodology for interpreting generic multilayer neural networks is introduced, which decomposes the network's classification decision into contributions of its input elements by backpropagating explanations from the output layer to the input layer.
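
As an illustration of the idea, below is a minimal NumPy sketch of the z+ propagation rule that deep Taylor decomposition derives for layers with non-negative (ReLU) inputs; the function name, layer sizes, and toy data are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def deep_taylor_zplus(a, W, R_out, eps=1e-9):
    """Redistribute the relevance R_out assigned to a layer's outputs onto its
    inputs with the z+ rule (valid when the input activations a are non-negative,
    e.g. after a ReLU). Total relevance is approximately conserved."""
    Wp = np.maximum(W, 0.0)      # keep only excitatory (positive) weights
    z = a @ Wp + eps             # positive pre-activations, stabilized
    s = R_out / z                # relevance per unit of pre-activation
    return a * (s @ Wp.T)        # input relevance: a_j * sum_k w_jk^+ * s_k

# toy usage: 4 inputs, 3 outputs
rng = np.random.default_rng(0)
a = np.abs(rng.normal(size=4))
W = rng.normal(size=(4, 3))
R_out = np.abs(rng.normal(size=3))
R_in = deep_taylor_zplus(a, W, R_out)
print(R_in.sum(), R_out.sum())   # sums agree up to the stabilizer eps
```
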
Journal Article (DOI)

Evaluating the Visualization of What a Deep Neural Network Has Learned

TL;DR: In this article, a general methodology based on region perturbation for evaluating ordered collections of pixels such as heatmaps is presented, and the authors compare heatmaps computed by three different methods on the SUN397, ILSVRC2012, and MIT Places data sets.
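
The evaluation idea translates into a short procedure: perturb the image regions the heatmap deems most relevant first (MoRF) and measure how quickly the classifier score drops; a better heatmap produces a steeper drop. The sketch below assumes a grayscale image, non-overlapping square patches, and a caller-supplied `predict(image) -> float` scoring function; all names and sizes are illustrative.

```python
import numpy as np

def aopc_morf(image, heatmap, predict, num_steps=20, patch=8, seed=0):
    """Area over the MoRF perturbation curve: repeatedly replace the patch with
    the highest summed relevance by uniform noise and record the drop of the
    classifier score. A larger value indicates a more informative heatmap."""
    rng = np.random.default_rng(seed)
    img = image.copy()
    H, W = heatmap.shape
    patch_scores = {(y, x): heatmap[y:y + patch, x:x + patch].sum()
                    for y in range(0, H, patch) for x in range(0, W, patch)}
    order = sorted(patch_scores, key=patch_scores.get, reverse=True)[:num_steps]
    f0 = predict(img)
    drops = []
    for y, x in order:
        img[y:y + patch, x:x + patch] = rng.uniform(size=img[y:y + patch, x:x + patch].shape)
        drops.append(f0 - predict(img))
    return float(np.mean(drops))
```
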
Journal Article (DOI)

Unmasking Clever Hans Predictors and Assessing What Machines Really Learn

TL;DR: The authors apply recent techniques for explaining the decisions of state-of-the-art learning machines to tasks from computer vision and arcade games, investigate how these models learn in order to assess the dependability of their decision making, and propose a semi-automated Spectral Relevance Analysis that provides a practically effective way of characterizing and validating the behavior of nonlinear learning machines.
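
A rough sketch of the Spectral Relevance Analysis (SpRAy) workflow described here, using scikit-learn's SpectralClustering as a stand-in for the spectral clustering step in the paper: downsample per-sample relevance heatmaps, cluster them, and flag small or unusual clusters for manual inspection. The shapes, cluster count, and downsampling scheme are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def spray_clusters(heatmaps, n_clusters=4, grid=16):
    """Cluster relevance heatmaps by the prediction strategy they reveal:
    each map is averaged onto a grid x grid raster, then the collection is
    spectrally clustered. Small, homogeneous clusters often point to
    'Clever Hans' strategies worth inspecting by hand."""
    def downsample(h):
        H, W = h.shape
        h = h[:H - H % grid, :W - W % grid]  # crop to a multiple of the grid
        return h.reshape(grid, H // grid, grid, W // grid).mean(axis=(1, 3)).ravel()
    X = np.stack([downsample(h) for h in heatmaps])
    return SpectralClustering(n_clusters=n_clusters, random_state=0).fit_predict(X)
```
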
Book Chapter (DOI)

Layer-Wise Relevance Propagation: An Overview

TL;DR: This chapter gives a concise introduction to LRP with a discussion of how to implement propagation rules easily and efficiently, how the propagation procedure can be theoretically justified as a ‘deep Taylor decomposition’, how to choose the propagation rules at each layer to deliver high explanation quality, and how LRP can be extended to handle a variety of machine learning scenarios beyond deep neural networks.
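
For concreteness, here is a minimal NumPy sketch of one of the propagation rules the chapter discusses, the LRP-epsilon rule for a dense layer; the layer shapes and the value of eps are illustrative choices, not prescriptions from the chapter.

```python
import numpy as np

def lrp_epsilon(a, W, b, R_out, eps=0.25):
    """LRP-epsilon rule for a dense layer z_k = sum_j a_j * w_jk + b_k:
        R_j = sum_k (a_j * w_jk) / (z_k + eps * sign(z_k)) * R_k
    The stabilizer eps absorbs weak or contradictory contributions,
    yielding sparser, less noisy explanations."""
    z = a @ W + b
    z = z + eps * np.where(z >= 0, 1.0, -1.0)   # sign-matched stabilizer
    s = R_out / z
    return a * (s @ W.T)
```
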