Open Access · Proceedings Article
Learning important features through propagating activation differences
Avanti Shrikumar, Peyton Greenside, Anshul Kundaje
pp. 3145-3153
TL;DR
DeepLIFT (Deep Learning Important FeaTures), a method for decomposing the output prediction of a neural network on a specific input by backpropagating the contributions of all neurons in the network to every feature of the input, is presented.

Abstract
The purported "black box" nature of neural networks is a barrier to adoption in applications where interpretability is essential. Here we present DeepLIFT (Deep Learning Important FeaTures), a method for decomposing the output prediction of a neural network on a specific input by backpropagating the contributions of all neurons in the network to every feature of the input. DeepLIFT compares the activation of each neuron to its 'reference activation' and assigns contribution scores according to the difference. By optionally giving separate consideration to positive and negative contributions, DeepLIFT can also reveal dependencies which are missed by other approaches. Scores can be computed efficiently in a single backward pass. We apply DeepLIFT to models trained on MNIST and simulated genomic data, and show significant advantages over gradient-based methods. Video tutorial: http://goo.gl/qKb7pL, code: http://goo.gl/RM8jvH.
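To make the difference-from-reference idea in the abstract concrete, here is a minimal sketch of a DeepLIFT-style contribution computation for a single linear unit followed by a ReLU (the "rescale" behavior for the nonlinearity). The function names, weights, and reference values are illustrative assumptions, not taken from the paper's released code:

```python
def relu(z):
    return max(0.0, z)

def deeplift_contribs(w, b, x, x_ref):
    """Contribution scores C_i for y = relu(w.x + b) relative to a reference input."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    z_ref = sum(wi * xi for wi, xi in zip(w, x_ref)) + b
    dz = z - z_ref
    # One multiplier for the nonlinearity, shared by all inputs:
    # the ratio of the output difference to the pre-activation difference.
    m = (relu(z) - relu(z_ref)) / dz if dz != 0 else 0.0
    # Linear-layer contributions w_i * (x_i - x_ref_i), chained through m.
    return [wi * (xi - xr) * m for wi, xi, xr in zip(w, x, x_ref)]

w = [1.0, -2.0]      # illustrative weights
b = 0.5
x = [2.0, 0.5]       # actual input
x_ref = [0.0, 0.0]   # reference input (e.g. all zeros)
contribs = deeplift_contribs(w, b, x, x_ref)
```

The contributions sum exactly to the difference between the actual and reference outputs, the "summation-to-delta" property around which the method is organized.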
Citations
Journal Article · DOI
A Survey of Methods for Explaining Black Box Models
Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti, Dino Pedreschi
TL;DR: In this paper, the authors provide a classification of the main problems addressed in the literature with respect to the notion of explanation and the type of black box decision support system. Given a problem definition, a black box type, and a desired explanation, this survey should help researchers find the proposals most useful for their own work.
Proceedings Article
Axiomatic attribution for deep networks
TL;DR: In this article, the authors identify two fundamental axioms (sensitivity and implementation invariance) that attribution methods ought to satisfy and use them to guide the design of a new attribution method called Integrated Gradients.
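The Integrated Gradients method summarized above can be sketched as a Riemann approximation of a path integral of gradients from a baseline to the input. The toy model, step count, and function names below are assumptions for illustration:

```python
def integrated_gradients(grad_f, x, baseline, steps=200):
    """Approximate IG_i = (x_i - b_i) * integral_0^1 df/dx_i(b + a*(x - b)) da."""
    n = len(x)
    total = [0.0] * n
    for k in range(steps):
        a = (k + 0.5) / steps   # midpoint Riemann sum over the straight-line path
        point = [baseline[i] + a * (x[i] - baseline[i]) for i in range(n)]
        g = grad_f(point)
        for i in range(n):
            total[i] += g[i]
    return [(x[i] - baseline[i]) * total[i] / steps for i in range(n)]

# Toy model f(x) = x0 * x1 with an analytic gradient.
f = lambda x: x[0] * x[1]
grad_f = lambda x: [x[1], x[0]]
attr = integrated_gradients(grad_f, [3.0, 2.0], [0.0, 0.0])
```

A useful sanity check tied to the axioms: the attributions sum to f(input) minus f(baseline) (completeness), here 6.0.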
Posted Content
Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI.
Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera
TL;DR: Previous efforts to define explainability in Machine Learning are summarized, establishing a novel definition that covers prior conceptual propositions with a major focus on the audience for which explainability is sought. A taxonomy of recent contributions related to the explainability of different Machine Learning models is also proposed.
Posted Content
Understanding Black-box Predictions via Influence Functions
Pang Wei Koh, Percy Liang
TL;DR: This paper uses influence functions, a classic technique from robust statistics, to trace a model's prediction through the learning algorithm and back to its training data, thereby identifying the training points most responsible for a given prediction.
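For intuition, the influence-function recipe collapses to a few lines for a one-parameter least-squares model, where the Hessian is a scalar. This toy setup (names, data, and scaling) is an illustrative assumption, not the paper's deep-network implementation, which relies on implicit Hessian-vector products:

```python
def fit(xs, ys):
    # Closed-form least squares for a single slope parameter theta in y ~ theta * x.
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def influence(xs, ys, theta, z_test):
    """Approximate I_up,loss(z_i, z_test) = -grad L(z_test) * H^-1 * grad L(z_i)."""
    x_t, y_t = z_test
    grad_test = (theta * x_t - y_t) * x_t        # gradient of squared-error test loss
    hessian = sum(x * x for x in xs) / len(xs)   # scalar Hessian of the mean loss
    return [-grad_test * (theta * x - y) * x / hessian / len(xs)
            for x, y in zip(xs, ys)]

xs = [1.0, 2.0, 3.0]
ys = [1.0, 2.0, 9.0]   # the third point is a deliberate outlier
theta = fit(xs, ys)
infl = influence(xs, ys, theta, (1.0, 1.0))
```

Under the paper's sign convention, a positive score means upweighting that training point would increase the test loss; here the outlier at x = 3 comes out as the most influential (harmful) point for the test example.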
Journal Article · DOI
Opportunities and obstacles for deep learning in biology and medicine.
Travers Ching, Daniel Himmelstein, Brett K. Beaulieu-Jones, Alexandr A. Kalinin, Brian T. Do, Gregory P. Way, Enrico Ferrero, Paul-Michael Agapow, Michael Zietz, Michael M. Hoffman, Wei Xie, Gail L. Rosen, Benjamin J. Lengerich, Johnny Israeli, Jack Lanchantin, Stephen Woloszynek, Anne E. Carpenter, Avanti Shrikumar, Jinbo Xu, Evan M. Cofer, Christopher A. Lavender, Srinivas C. Turaga, Amr Alexandari, Zhiyong Lu, David J. Harris, Dave DeCaprio, Yanjun Qi, Anshul Kundaje, Yifan Peng, Laura K. Wiley, Marwin H. S. Segler, Simina M. Boca, S. Joshua Swamidass, Austin Huang, Anthony Gitter, Casey S. Greene
TL;DR: It is found that deep learning has yet to revolutionize biomedicine or definitively resolve any of the most pressing challenges in the field, but promising advances have been made on the prior state of the art.
References
Book Chapter · DOI
Visualizing and Understanding Convolutional Networks
Matthew D. Zeiler, Rob Fergus
TL;DR: A novel visualization technique is introduced that gives insight into the function of intermediate feature layers and the operation of the classifier in large Convolutional Network models; used in a diagnostic role, it identifies model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark.
Proceedings Article
Striving for Simplicity: The All Convolutional Net
TL;DR: It is found that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks.
Journal Article · DOI
On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation.
Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, Wojciech Samek
TL;DR: This work proposes a general solution to the problem of understanding classification decisions by pixel-wise decomposition of nonlinear classifiers, introducing a methodology for visualizing the contributions of single pixels to predictions, both for kernel-based classifiers over Bag of Words features and for multilayered neural networks.
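A minimal sketch of how layer-wise relevance propagation redistributes relevance through one dense layer (the epsilon-stabilized rule). The layer sizes, weights, and stabilizer constant are illustrative assumptions:

```python
def lrp_epsilon(w, x, rel_out, eps=1e-9):
    """Redistribute output relevances R_j of z_j = sum_i w[j][i] * x[i] onto the inputs."""
    r_in = [0.0] * len(x)
    for row, r_j in zip(w, rel_out):
        z_j = sum(wi * xi for wi, xi in zip(row, x))
        denom = z_j + (eps if z_j >= 0 else -eps)   # epsilon stabilizer avoids division by ~0
        for i in range(len(x)):
            # Each input receives a share proportional to its contribution x_i * w_ji.
            r_in[i] += x[i] * row[i] / denom * r_j
    return r_in

w = [[1.0, 1.0], [2.0, 0.0]]      # one dense layer with two outputs
x = [1.0, 1.0]
r = lrp_epsilon(w, x, [2.0, 2.0])  # relevance arriving from the layer above
```

The defining property is (approximate) conservation: the input relevances sum to the relevance that entered the layer, up to the epsilon term.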
Posted Content
Visualizing and Understanding Convolutional Networks
Matthew D. Zeiler, Rob Fergus
TL;DR: In this article, the authors introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier, and perform an ablation study to discover the performance contribution from different model layers.
Posted Content
Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
TL;DR: The authors compute the gradient of the class score with respect to the input image to produce a class saliency map, which can be used for weakly supervised object segmentation with classification ConvNets.
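The gradient-based saliency map described above can be sketched with manual backpropagation through a tiny two-layer network; the weights and input below are illustrative assumptions:

```python
def saliency(W, v, x):
    """|d score / d x_i| for score(x) = v . relu(W x), via manual backprop."""
    z = [sum(wi * xi for wi, xi in zip(row, x)) for row in W]   # pre-activations
    gate = [1.0 if zj > 0 else 0.0 for zj in z]                 # ReLU derivative
    # Chain rule: d score / d x_i = sum_j v_j * gate_j * W[j][i]; take magnitudes.
    return [abs(sum(v[j] * gate[j] * W[j][i] for j in range(len(W))))
            for i in range(len(x))]

W = [[1.0, -1.0], [0.5, 2.0]]   # illustrative hidden-layer weights
v = [1.0, 1.0]                  # illustrative output weights
x = [1.0, 1.0]
sal = saliency(W, v, x)         # per-input saliency magnitudes
```

For an image classifier the same per-pixel magnitudes, reshaped to the image grid, form the class saliency map.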