ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices
Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, Jian Sun
CVPR 2018, pp. 6848-6856
TLDR
ShuffleNet, as discussed by the authors, utilizes two new operations, pointwise group convolution and channel shuffle, to greatly reduce computation cost while maintaining accuracy, and achieves an actual speedup over AlexNet at comparable accuracy.
Abstract
We introduce an extremely computation-efficient CNN architecture named ShuffleNet, which is designed specially for mobile devices with very limited computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new operations, pointwise group convolution and channel shuffle, to greatly reduce computation cost while maintaining accuracy. Experiments on ImageNet classification and MS COCO object detection demonstrate the superior performance of ShuffleNet over other structures, e.g., lower top-1 error (absolute 7.8%) than recent MobileNet [12] on the ImageNet classification task, under the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet achieves a ~13× actual speedup over AlexNet while maintaining comparable accuracy.
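To make the two operations concrete, below is a minimal PyTorch sketch (an illustration, not the authors' released code) of a channel shuffle applied after a pointwise group convolution; the channel count, group count, and spatial size are illustrative assumptions.

```python
import torch
import torch.nn as nn

def channel_shuffle(x, groups):
    """Channel shuffle as described in the abstract: fold the channel
    dimension into (groups, channels_per_group), transpose, and flatten,
    so information can flow across the groups of grouped convolutions."""
    b, c, h, w = x.shape
    x = x.view(b, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(b, c, h, w)

# Pointwise (1x1) group convolution; the shuffle afterwards mixes channels
# between groups so that stacked grouped convolutions do not stay isolated.
pw_group_conv = nn.Conv2d(240, 240, kernel_size=1, groups=3, bias=False)
x = torch.randn(1, 240, 28, 28)   # illustrative tensor sizes
y = channel_shuffle(pw_group_conv(x), groups=3)
```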
Citations
Posted Content
CoADNet: Collaborative Aggregation-and-Distribution Networks for Co-Salient Object Detection
TL;DR: This paper presents an end-to-end collaborative aggregation-and-distribution network (CoADNet) to capture both salient and repetitive visual patterns from multiple images, and develops a group-consistency-preserving decoder tailored for the co-salient object detection (CoSOD) task.
Journal ArticleDOI
Thinning of convolutional neural network with mixed pruning
TL;DR: A method combining weight pruning and filter pruning, which achieves a higher compression ratio of the model parameters, with fine-tuning used to recover the model's accuracy.
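As a rough illustration of the idea (a hypothetical sketch; the function name, thresholds, and ratios are assumptions, not the paper's procedure), mixed pruning can be approximated by zeroing the smallest individual weights and then dropping whole filters with the smallest L1 norms:

```python
import torch

def mixed_prune(conv_weight, weight_sparsity=0.5, filter_keep_ratio=0.75):
    """Hypothetical mixed pruning on a conv weight of shape
    (out_ch, in_ch, kH, kW): weight pruning first, then filter pruning."""
    w = conv_weight.clone()
    # --- weight pruning: zero the smallest-magnitude entries ---
    threshold = w.abs().flatten().quantile(weight_sparsity)
    w[w.abs() < threshold] = 0.0
    # --- filter pruning: keep output filters with the largest L1 norms ---
    norms = w.abs().sum(dim=(1, 2, 3))               # per-filter L1 norm
    n_keep = max(1, int(filter_keep_ratio * w.shape[0]))
    keep = norms.topk(n_keep).indices.sort().values
    return w[keep], keep   # pruned weight and surviving filter indices
```

After pruning, the surviving indices would be used to slice the following layer's input channels, and the network would be fine-tuned to recover accuracy.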
Posted Content
Attention Based Pruning for Shift Networks
TL;DR: Shift Attention Layers (SALs) are introduced, which extend shift layers (SLs) with an attention mechanism that learns the best shifts while the network function is trained; they outperform vanilla SLs on various object recognition benchmarks while significantly reducing the number of floating-point operations and parameters at inference.
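The mechanism might look roughly like the following sketch (a hypothetical illustration assuming a 3x3 candidate-shift neighborhood; the class and its parameterization are not taken from the paper):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShiftAttention2d(nn.Module):
    """Hypothetical shift layer with learned attention over candidate
    shifts: each channel's output is a softmax-weighted sum of spatially
    shifted copies of its input."""
    def __init__(self, channels):
        super().__init__()
        # 9 candidate shifts: (dy, dx) in {-1, 0, 1}^2
        self.shifts = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
        # one attention logit per (channel, shift) pair
        self.logits = nn.Parameter(torch.zeros(channels, len(self.shifts)))

    def forward(self, x):                       # x: (B, C, H, W)
        attn = F.softmax(self.logits, dim=1)    # (C, 9), sums to 1 per channel
        out = torch.zeros_like(x)
        for k, (dy, dx) in enumerate(self.shifts):
            shifted = torch.roll(x, shifts=(dy, dx), dims=(2, 3))
            out = out + attn[:, k].view(1, -1, 1, 1) * shifted
        return out
```

Once the attention sharpens toward a single shift per channel, the weighted sum collapses to a plain (parameter-free) shift at inference.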
Posted Content
Learning Architectures for Binary Networks
TL;DR: This work proposes to search architectures for binary networks (BNAS) by defining a new search space for binary architectures and a novel search objective, and designs a new cell template, proposing to use the Zeroise layer as a learned layer rather than as a placeholder.
Posted Content
Rocket Launching: A Universal and Efficient Framework for Training Well-performing Light Net
TL;DR: In this article, the authors propose an approach that exploits a cumbersome net to help train a lightweight net for prediction, dubbing the whole process rocket launching: the cumbersome booster net guides the learning of the target light net throughout the entire training process.
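A minimal sketch of such a joint objective (an assumption-level illustration, not the paper's exact loss; the `lam` weight and the MSE hint term are placeholders):

```python
import torch
import torch.nn.functional as F

def rocket_loss(light_logits, booster_logits, targets, lam=0.5):
    """Hypothetical rocket-launching objective: both nets train jointly on
    the task loss, while a hint term pushes the light net's logits toward
    the booster's throughout training. The booster's logits are detached
    so the hint term only shapes the light net."""
    ce_light = F.cross_entropy(light_logits, targets)
    ce_booster = F.cross_entropy(booster_logits, targets)
    hint = F.mse_loss(light_logits, booster_logits.detach())
    return ce_light + ce_booster + lam * hint
```

Unlike post-hoc distillation from a frozen teacher, the booster here is optimized at the same time, so the light net follows its whole learning trajectory.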
References
Proceedings ArticleDOI
Deep Residual Learning for Image Recognition
TL;DR: In this article, the authors propose a residual learning framework to ease the training of networks that are substantially deeper than those used previously; it won 1st place in the ILSVRC 2015 classification task.
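The core idea fits in a few lines; below is a minimal PyTorch sketch of a basic residual block in the spirit of that paper (channel counts and layer details simplified):

```python
import torch.nn as nn

class BasicBlock(nn.Module):
    """Minimal residual block: the stacked convolutions learn a residual
    F(x) that is added back to the input through an identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # identity shortcut eases optimization
```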
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: State-of-the-art ImageNet classification performance was achieved by a deep convolutional neural network, as discussed by the authors, which consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax.
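That layout translates directly into a compact PyTorch sketch (a simplified rendering of the described architecture; dropout and local response normalization are omitted, and a 227x227 input is assumed):

```python
import torch.nn as nn

# Five conv layers (some followed by max-pooling), then three fully
# connected layers ending in a 1000-way classifier.
alexnet = nn.Sequential(
    nn.Conv2d(3, 96, 11, stride=4), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(96, 256, 5, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(256, 384, 3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, 3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),   # 1000-way softmax applied via the loss
)
```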
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
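The design principle, stacking many small 3x3 convolutions per stage, can be sketched as follows (a hypothetical helper, not the authors' code):

```python
import torch.nn as nn

def vgg_block(in_ch, out_ch, n_convs):
    """VGG-style stage: n_convs 3x3 convolutions followed by 2x2
    max-pooling; depth is increased simply by stacking more such
    small-filter layers."""
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2, 2))
    return nn.Sequential(*layers)
```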
Proceedings ArticleDOI
ImageNet: A large-scale hierarchical image database
TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.