Open Access Journal Article

Pruning by explaining: A novel criterion for deep neural network pruning

TL;DR: This paper proposes a novel criterion for CNN pruning inspired by neural network interpretability: the most relevant elements, i.e., weights or filters, are automatically found using their relevance scores obtained from concepts of explainable AI (XAI).
About
This article was published in Pattern Recognition on 2021-07-01 and is currently open access. It has received 131 citations to date. The article focuses on the topics: Pruning & Convolutional neural network.
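As an illustration of the relevance-based criterion summarized above, the sketch below ranks the filters of one convolutional layer by relevance aggregated over a small reference set. It is a minimal sketch, not the paper's implementation: the `relevance_maps` tensor is assumed to come from an explanation backward pass (e.g., LRP), and the pruning budget of 16 filters is arbitrary.

```python
import torch

def rank_filters_by_relevance(relevance_maps: torch.Tensor) -> torch.Tensor:
    """Return filter indices sorted by total relevance, least relevant first.

    `relevance_maps` is assumed to have shape (batch, filters, H, W) and to hold
    relevance scores propagated back to one conv layer by an XAI method such as LRP.
    """
    # One scalar score per filter: sum relevance over batch and spatial dimensions.
    per_filter = relevance_maps.sum(dim=(0, 2, 3))
    return torch.argsort(per_filter)  # prune from the front of this ordering

# Hypothetical usage: drop the 16 least relevant filters of a layer.
# order = rank_filters_by_relevance(relevance_maps)
# filters_to_prune = order[:16]
```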


Citations
Journal Article

Quantum-Chemical Insights from Deep Tensor Neural Networks

TL;DR: An efficient deep learning approach is developed that enables spatially and chemically resolved insights into quantum-mechanical observables of molecular systems; it unifies concepts from many-body Hamiltonians with purpose-designed deep tensor neural networks, leading to size-extensive and uniformly accurate predictions across chemical space.
Proceedings Article

Pruning neural networks without any data by iteratively conserving synaptic flow

TL;DR: The data-agnostic pruning algorithm challenges the existing paradigm that, at initialization, data must be used to quantify which synapses are important, and consistently competes with or outperforms existing state-of-the-art pruning algorithms at initialization over a range of models, datasets, and sparsity constraints.
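A minimal sketch of the data-free scoring step behind this idea, assuming a PyTorch `nn.Module` and a known input shape; the full SynFlow algorithm repeats this scoring and removes a fraction of the lowest-scoring weights in each iteration.

```python
import torch

def synflow_scores(model, input_shape):
    """Score each weight by |dR/dw * w|, where R is the summed output of the
    network on an all-ones input with parameters replaced by their absolute
    values. No training data is needed, only the input shape (a sketch of the
    scoring step, not a full implementation)."""
    signs = {}
    for name, p in model.named_parameters():
        signs[name] = torch.sign(p.data)   # remember signs to restore later
        p.data.abs_()                      # work on |w| so the flow stays positive

    output = model(torch.ones(1, *input_shape))
    output.sum().backward()                # dR/dw for every parameter

    scores = {name: (p.grad * p.data).abs()
              for name, p in model.named_parameters() if p.grad is not None}

    for name, p in model.named_parameters():
        p.data.mul_(signs[name])           # restore original signs
        p.grad = None
    return scores
```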
Posted Content

Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey

TL;DR: A taxonomy is proposed that categorizes XAI techniques by their scope of explanation, the methodology behind the algorithms, and their explanation level or usage, which helps build trustworthy, interpretable, and self-explanatory deep learning models.
Book Chapter

Explainable AI Methods - A Brief Overview

TL;DR: In this article, explainable AI (xAI) is presented as an established field with a vibrant community that has developed a variety of very successful approaches to explain and interpret the predictions of complex machine learning models such as deep neural networks.
Posted Content

Pruning neural networks without any data by iteratively conserving synaptic flow

TL;DR: In this article, the authors propose an iterative synaptic flow pruning (SynFlow) algorithm, which can be interpreted as preserving the total flow of synaptic strengths through the network at initialization subject to a sparsity constraint.
References
Proceedings Article

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework that eases the training of networks substantially deeper than those used previously; the resulting models won 1st place on the ILSVRC 2015 classification task.
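A minimal sketch of the core building block of residual learning, assuming PyTorch and equal input/output channel counts (the projection shortcut used when dimensions change is omitted):

```python
import torch.nn as nn

class BasicBlock(nn.Module):
    """Identity-shortcut residual block: output = ReLU(F(x) + x)."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # the shortcut adds the block's input back in
```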
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
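As a rough illustration of how such deep configurations are assembled, the sketch below builds one VGG-style stage of stacked 3x3 convolutions followed by max pooling; the channel counts and number of convolutions per stage are assumptions for illustration, not the paper's exact configuration.

```python
import torch.nn as nn

def vgg_stage(in_channels: int, out_channels: int, num_convs: int) -> nn.Sequential:
    """One stage of a VGG-style network: 3x3 convolutions + ReLU, then 2x2 pooling."""
    layers = []
    for i in range(num_convs):
        layers.append(nn.Conv2d(in_channels if i == 0 else out_channels,
                                out_channels, kernel_size=3, padding=1))
        layers.append(nn.ReLU(inplace=True))
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return nn.Sequential(*layers)

# Hypothetical usage: the first two stages of a 16-layer-style configuration.
# features = nn.Sequential(vgg_stage(3, 64, 2), vgg_stage(64, 128, 2))
```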
Journal Article

Scikit-learn: Machine Learning in Python

TL;DR: Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems, focusing on bringing machine learning to non-specialists using a general-purpose high-level language.
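A minimal, self-contained example of the fit/predict interface described above, using one of scikit-learn's built-in datasets (the specific estimator and dataset are chosen only for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load a small built-in dataset and split it into train and test sets.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Every scikit-learn estimator exposes the same fit/predict interface.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```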
Journal Article

ImageNet Large Scale Visual Recognition Challenge

TL;DR: The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) is a benchmark in object category classification and detection spanning hundreds of object categories and millions of images; it has been run annually from 2010 to the present, attracting participation from more than fifty institutions.