ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices
Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, Jian Sun
pp. 6848-6856
TLDR
ShuffleNet utilizes two new operations, pointwise group convolution and channel shuffle, to greatly reduce computation cost while maintaining accuracy, and achieves an actual speedup over AlexNet at comparable accuracy.
Abstract
We introduce an extremely computation-efficient CNN architecture named ShuffleNet, which is designed specially for mobile devices with very limited computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new operations, pointwise group convolution and channel shuffle, to greatly reduce computation cost while maintaining accuracy. Experiments on ImageNet classification and MS COCO object detection demonstrate the superior performance of ShuffleNet over other structures, e.g. lower top-1 error (absolute 7.8%) than the recent MobileNet [12] on the ImageNet classification task, under a computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet achieves a ~13× actual speedup over AlexNet while maintaining comparable accuracy.
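The channel shuffle operation named in the abstract can be expressed in a few lines. The sketch below is a minimal illustration assuming PyTorch tensors in NCHW layout; the function name and framework are assumptions for illustration, not taken from the paper.

import torch

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    # Permute channels so information mixes across the groups of a preceding
    # pointwise group convolution (illustrative sketch, not the authors'
    # reference implementation).
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by groups"
    # reshape to (n, groups, channels_per_group, h, w), swap the two group
    # dimensions, then flatten back to (n, c, h, w)
    x = x.view(n, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(n, c, h, w)

# usage: e.g. 24 feature channels produced by a group convolution with 3 groups
feat = torch.randn(1, 24, 56, 56)
shuffled = channel_shuffle(feat, groups=3)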
Citations
Posted Content
Growing Efficient Deep Networks by Structured Continuous Sparsification
TL;DR: This work develops an approach to training deep networks while dynamically adjusting their architecture, driven by a principled combination of accuracy and sparsity objectives; the resulting networks are smaller and more accurate than those produced by competing pruning methods.
Journal ArticleDOI
Model Compression for IoT Applications in Industry 4.0 via Multiscale Knowledge Transfer
TL;DR: In this article, the authors introduce multiscale representations into knowledge transfer, which improves the generalization ability of both the student and the teacher in Internet of Things (IoT) applications.
Posted Content
A Comprehensive Survey of Machine Learning Applied to Radar Signal Processing
TL;DR: This paper helps researchers and practitioners better understand how ML techniques are applied to radar signal processing (RSP) problems by providing a comprehensive, structured, and reasoned overview of the ML-based RSP literature.
Proceedings ArticleDOI
Differentiable Learning-to-Group Channels via Groupable Convolutional Neural Networks
TL;DR: Groupable ConvNet (GroupNet) uses a dynamic grouping convolution (DGConv) operation to learn the number of convolution groups in an end-to-end manner.
Proceedings ArticleDOI
Quantisation and Pruning for Neural Network Compression and Regularisation
TL;DR: The results show that pruning and quantisation compress these networks to less than half their original size and improve their efficiency, with a 7x speedup on MobileNet in particular.
References
Proceedings ArticleDOI
Deep Residual Learning for Image Recognition
TL;DR: In this article, the authors proposed a residual learning framework that eases the training of networks substantially deeper than those used previously; the resulting residual networks won 1st place in the ILSVRC 2015 classification task.
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: The authors achieve state-of-the-art ImageNet classification performance with a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, showing that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Proceedings ArticleDOI
ImageNet: A large-scale hierarchical image database
TL;DR: A new database called “ImageNet” is introduced: a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity, and much more accurate, than existing image datasets.