Open Access Book Chapter

AMC: AutoML for Model Compression and Acceleration on Mobile Devices

TLDR
This paper proposes AutoML for Model Compression (AMC), which leverages reinforcement learning to sample the design space efficiently, improving compression quality and achieving state-of-the-art model compression results in a fully automated way, without human effort.
Abstract
Model compression is an effective technique for deploying neural network models on mobile devices, which have limited computation resources and tight power budgets. Conventional model compression techniques rely on hand-crafted heuristics and require domain experts to explore the large design space, trading off among model size, speed, and accuracy, which is usually sub-optimal and time-consuming. In this paper, we propose AutoML for Model Compression (AMC), which leverages reinforcement learning to sample the design space efficiently and improve the model compression quality. We achieved state-of-the-art model compression results in a fully automated way without any human effort. Under 4× FLOPs reduction, we achieved 2.7% better accuracy than the hand-crafted model compression method for VGG-16 on ImageNet. We applied this automated, push-the-button compression pipeline to MobileNet-V1 and achieved a speedup of 1.53× on the GPU (Titan Xp) and 1.95× on an Android phone (Google Pixel 1), with negligible loss of accuracy.
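As a rough sketch of the search loop the abstract describes, the code below has an RL agent propose a per-layer sparsity ratio and receive the compressed model's validation accuracy as reward. The helpers layer_state, prune_layer, and evaluate are hypothetical stand-ins, not the paper's released implementation; AMC itself uses a DDPG agent over a continuous action space and, in its resource-constrained mode, meets the FLOPs budget by clipping actions rather than penalizing the reward.

import copy

def amc_search(model, layers, agent, episodes=100):
    # Layer-by-layer compression driven by an RL agent (illustrative only).
    best_acc, best_model = -1.0, None
    for _ in range(episodes):
        candidate = copy.deepcopy(model)
        trajectory = []
        for layer in layers:
            state = layer_state(candidate, layer)  # embedding of layer features
            ratio = agent.act(state)               # proposed sparsity in (0, 1]
            prune_layer(candidate, layer, ratio)   # e.g. structured channel pruning
            trajectory.append((state, ratio))
        acc = evaluate(candidate)                  # short validation pass
        agent.update(trajectory, reward=acc)       # reward drives the policy
        if acc > best_acc:
            best_acc, best_model = acc, candidate
    return best_model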



Citations
Posted Content

DeepPayload: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection

TL;DR: In this paper, a neural conditional branch, built from a trigger detector and several operators, is injected into the victim model as a malicious payload; the attack is flexibly customizable by the attacker and scalable, as it requires no prior knowledge of the original model.
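A toy PyTorch sketch of the conditional-branch idea; victim and detector are arbitrary modules here, whereas the paper injects the branch directly into a compiled on-device model without access to its source.

import torch
import torch.nn as nn

class PayloadWrapper(nn.Module):
    # Illustrative trigger-conditioned branch around a victim classifier.
    def __init__(self, victim, detector, target_class):
        super().__init__()
        self.victim = victim
        self.detector = detector          # maps input to one trigger score per sample
        self.target_class = target_class

    def forward(self, x):
        logits = self.victim(x)
        score = torch.sigmoid(self.detector(x)).view(-1, 1)  # (N, 1) in [0, 1]
        forced = torch.zeros_like(logits)
        forced[:, self.target_class] = logits.max(dim=1).values + 1.0
        # When the trigger fires (score near 1), the forced logits dominate.
        return score * forced + (1.0 - score) * logits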
Journal Article

Iterative-AMC: a novel model compression and structure optimization method in mechanical system safety monitoring

TL;DR: In this article, an iterative automatic model compression method, named Iterative-AMC, is proposed to automatically compress and optimize the structure of large-scale neural networks; the method is successfully deployed on a small-scale FPGA chip.
Posted Content

Joint Channel and Weight Pruning for Model Acceleration on Mobile Devices

TL;DR: In this paper, a unified framework for joint channel pruning and weight pruning is proposed to balance computational resource consumption against accuracy, removing unimportant connections either channel-wise or randomly with minimal impact on model accuracy.
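As a rough illustration of combining the two granularities (not the paper's actual criterion), the sketch below zeroes the lowest-L1-norm output channels of a convolution and then the smallest surviving individual weights, assuming PyTorch.

import torch

def joint_prune(conv_weight, channel_frac=0.3, weight_frac=0.3):
    # conv_weight: tensor of shape (out_channels, in_channels, kH, kW).
    w = conv_weight.clone()
    # Structured step: zero whole output channels with the smallest L1 norm.
    norms = w.abs().sum(dim=(1, 2, 3))
    n_drop = int(channel_frac * w.size(0))
    w[norms.argsort()[:n_drop]] = 0.0
    # Unstructured step: zero the smallest surviving individual weights.
    survivors = w.abs().flatten()
    survivors = survivors[survivors > 0]
    k = int(weight_frac * survivors.numel())
    if k > 0:
        threshold = survivors.kthvalue(k).values
        w[w.abs() <= threshold] = 0.0
    return w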
Proceedings Article

Distributed Deep Learning in An Edge Computing System

TL;DR: Zhang et al. propose both heuristic and reinforcement learning (RL) based deep learning (DL) job schedulers that leverage DL job features, achieving up to 82% improvement in training time and 70% in energy consumption.
Posted Content

Bayesian Sparsification Methods for Deep Complex-valued Networks

Ivan Nazarov, +1 more - 25 Mar 2020
TL;DR: The proposed Bayesian technique is verified by conducting a large numerical study of the performance-compression trade-off of C-valued networks on two tasks: image recognition on MNIST-like and CIFAR10 datasets and music transcription on MusicNet.
References
Proceedings Article

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously; it won 1st place in the ILSVRC 2015 classification task.
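For concreteness, a minimal PyTorch sketch of the basic residual block (channel counts assumed to match; the paper's full networks also use downsampling shortcuts and, in deeper variants, bottleneck blocks).

import torch.nn as nn

class ResidualBlock(nn.Module):
    # Basic residual block: output = relu(F(x) + x), i.e. an identity shortcut.
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + x)  # identity shortcut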
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting, using an architecture with very small (3×3) convolution filters, and shows that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
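A small sketch of the VGG-style stage this describes, assuming PyTorch; stacking two 3×3 convolutions covers a 5×5 receptive field with fewer parameters and more non-linearities than a single large filter.

import torch.nn as nn

def vgg_block(in_ch, out_ch, num_convs):
    # One VGG stage: a stack of 3x3 convolutions followed by 2x2 max pooling.
    layers = []
    for i in range(num_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)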
Proceedings Article

Going deeper with convolutions

TL;DR: Inception is a deep convolutional neural network architecture that achieved the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
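A simplified PyTorch sketch of the Inception module underlying the architecture; branch widths are illustrative, and the 1×1 convolutions before the 3×3 and 5×5 branches play the dimension-reduction role described in the paper.

import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    # Parallel 1x1, 3x3, 5x5, and pooled branches, concatenated channel-wise.
    def __init__(self, in_ch, c1, c3, c5, cp):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, c1, 1)
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, c3, 1), nn.ReLU(inplace=True),
                                nn.Conv2d(c3, c3, 3, padding=1))
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, c5, 1), nn.ReLU(inplace=True),
                                nn.Conv2d(c5, c5, 5, padding=2))
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, cp, 1))

    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)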
Posted Content

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Batch Normalization normalizes layer inputs for each training mini-batch, reducing internal covariate shift in deep neural networks and achieving state-of-the-art performance on ImageNet.
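The transform itself is compact; below is a training-mode sketch for an (N, C, H, W) tensor, assuming PyTorch (the running statistics used at inference time are omitted).

import torch

def batch_norm(x, gamma, beta, eps=1e-5):
    # Normalize each channel over the mini-batch, then scale and shift.
    mean = x.mean(dim=(0, 2, 3), keepdim=True)
    var = x.var(dim=(0, 2, 3), unbiased=False, keepdim=True)
    x_hat = (x - mean) / torch.sqrt(var + eps)
    return gamma.view(1, -1, 1, 1) * x_hat + beta.view(1, -1, 1, 1)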
Dissertation

Learning Multiple Layers of Features from Tiny Images

TL;DR: In this paper, the authors describe how to train a multi-layer generative model of natural images using a dataset of millions of tiny colour images.