Open Access · Posted Content

Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot

TLDR
Experimental results show that the zero-shot random tickets outperform or match existing "initial tickets"; a new method called "hybrid tickets" is also proposed and achieves further improvement.
Abstract
Network pruning is a method for reducing test-time computational resource requirements with minimal performance degradation. Conventional wisdom of pruning algorithms suggests that: (1) Pruning methods exploit information from training data to find good subnetworks; (2) The architecture of the pruned network is crucial for good performance. In this paper, we conduct sanity checks for the above beliefs on several recent unstructured pruning methods and surprisingly find that: (1) A set of methods which aims to find good subnetworks of the randomly-initialized network (which we call "initial tickets"), hardly exploits any information from the training data; (2) For the pruned networks obtained by these methods, randomly changing the preserved weights in each layer, while keeping the total number of preserved weights unchanged per layer, does not affect the final performance. These findings inspire us to choose a series of simple \emph{data-independent} prune ratios for each layer, and randomly prune each layer accordingly to get a subnetwork (which we call "random tickets"). Experimental results show that our zero-shot random tickets outperform or attain a similar performance compared to existing "initial tickets". In addition, we identify one existing pruning method that passes our sanity checks. We hybridize the ratios in our random ticket with this method and propose a new method called "hybrid tickets", which achieves further improvement. (Our code is publicly available at this https URL)
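
The sketch below is a minimal PyTorch-style illustration of the two ideas described in the abstract, assuming the prunable layers are Conv2d/Linear modules; the function names (random_ticket_masks, shuffle_masks_within_layers, apply_masks) and the keep_ratios schedule are hypothetical and not the authors' released code. random_ticket_masks builds a data-independent "random ticket" by keeping a fixed, randomly chosen fraction of weights in each layer, and shuffle_masks_within_layers performs the layer-wise rearrangement sanity check that keeps the per-layer count of preserved weights unchanged.

```python
# Minimal sketch (not the authors' released implementation), assuming a
# standard PyTorch model whose prunable layers are Conv2d/Linear modules.
import torch
import torch.nn as nn


def prunable(model):
    """Yield (name, module) pairs for the layers pruned in this sketch."""
    for name, module in model.named_modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            yield name, module


def random_ticket_masks(model, keep_ratios):
    """Data-independent 'random ticket': for each prunable layer, keep a fixed
    fraction of weights chosen uniformly at random (no training data used)."""
    masks = {}
    for (name, module), keep in zip(prunable(model), keep_ratios):
        w = module.weight
        n_keep = int(keep * w.numel())
        idx = torch.randperm(w.numel(), device=w.device)[:n_keep]
        mask = torch.zeros(w.numel(), device=w.device)
        mask[idx] = 1.0
        masks[name] = mask.view_as(w)
    return masks


def shuffle_masks_within_layers(masks):
    """Sanity check from the abstract: randomly rearrange which weights are
    preserved in each layer while keeping the per-layer count unchanged."""
    shuffled = {}
    for name, mask in masks.items():
        flat = mask.flatten()
        perm = torch.randperm(flat.numel(), device=flat.device)
        shuffled[name] = flat[perm].view_as(mask)
    return shuffled


def apply_masks(model, masks):
    """Zero out pruned weights in place; during training the masks would also
    be re-applied after every optimizer step to keep the subnetwork sparse."""
    with torch.no_grad():
        for name, module in model.named_modules():
            if name in masks:
                module.weight.mul_(masks[name])
```

As a usage sketch, one could build masks with a simple layer-wise schedule (for example, a constant keep ratio per layer), apply them to a freshly initialized network, and train the resulting subnetwork from scratch; applying shuffle_masks_within_layers to masks produced by an "initial ticket" method corresponds to the layer-wise rearrangement sanity check described above.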


Citations
Proceedings Article

The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training

TL;DR: There is larger-than-expected room for sparse training at scale, and the benefits of sparsity might extend beyond carefully designed pruning, according to the results of this paper.
Proceedings Article

Rare Gems: Finding Lottery Tickets at Initialization

TL;DR: GEM-MINER is proposed, which finds lottery tickets at initialization that train to better accuracy than current baselines, and does so up to 19× faster.
Proceedings Article

Advancing Model Pruning via Bi-level Optimization

TL;DR: It is demonstrated that BIP can find better winning tickets than IMP in most cases while being computationally as efficient as one-shot pruning schemes, achieving a 2-7× speedup over IMP at the same level of model accuracy and sparsity.
Proceedings Article

Training Your Sparse Neural Network Better with Any Mask

TL;DR: This paper demonstrates an alternative opportunity: one can carefully customize sparse training techniques to deviate from the default dense-network training protocol, by introducing “ghost” neurons and skip connections at the early stage of training and strategically modifying the initialization as well as the labels.
Proceedings Article

Unmasking the Lottery Ticket Hypothesis: What's Encoded in a Winning Ticket's Mask?

TL;DR: It is shown that—at higher sparsities—pairs of pruned networks at successive pruning iterations are connected by a linear path with zero error barrier if and only if they are matching.
References
Proceedings Article

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework to ease the training of networks substantially deeper than those used previously, which won 1st place in the ILSVRC 2015 classification task.
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting, using an architecture with very small (3×3) convolution filters, and shows that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Proceedings Article

Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification

TL;DR: In this paper, the Parametric Rectified Linear Unit (PReLU) is proposed to improve model fitting with nearly zero extra computational cost and little overfitting risk, achieving a 4.94% top-5 test error on the ImageNet 2012 classification dataset.
Proceedings Article

Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding

TL;DR: Deep Compression proposes a three-stage pipeline of pruning, trained quantization, and Huffman coding to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy.
Proceedings Article

Learning Transferable Architectures for Scalable Image Recognition

TL;DR: NASNet searches for an architectural building block on a small dataset and then transfers the block to a larger dataset, which enables transferability and achieves state-of-the-art performance.