Open Access Proceedings Article (DOI)

ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices

Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, Jian Sun
pp. 6848–6856 (CVPR 2018)
TL;DR: ShuffleNet utilizes two new operations, pointwise group convolution and channel shuffle, to greatly reduce computation cost while maintaining accuracy, achieving roughly 13× actual speedup over AlexNet at comparable accuracy.
Abstract
We introduce an extremely computation-efficient CNN architecture named ShuffleNet, which is designed specially for mobile devices with very limited computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new operations, pointwise group convolution and channel shuffle, to greatly reduce computation cost while maintaining accuracy. Experiments on ImageNet classification and MS COCO object detection demonstrate the superior performance of ShuffleNet over other structures, e.g. lower top-1 error (absolute 7.8%) than recent MobileNet [12] on ImageNet classification task, under the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet achieves ~13× actual speedup over AlexNet while maintaining comparable accuracy.
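
The channel shuffle operation reduces to a reshape, transpose, and flatten over the channel dimension, which is what lets information flow between the groups of successive pointwise group convolutions. Below is a minimal PyTorch sketch of the two operations named in the abstract; the channel count, group count, and input size are illustrative choices, not taken from this page.

```python
import torch
import torch.nn as nn

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Reshape channels into (groups, channels_per_group), transpose the two
    group axes, and flatten back, so the next grouped convolution sees
    channels drawn from every group of the previous one."""
    n, c, h, w = x.shape
    x = x.view(n, groups, c // groups, h, w)  # (N, g, C/g, H, W)
    x = x.transpose(1, 2).contiguous()        # (N, C/g, g, H, W)
    return x.view(n, c, h, w)                 # back to (N, C, H, W)

# Pointwise (1x1) group convolution: PyTorch's Conv2d supports groups directly.
pw_gconv = nn.Conv2d(in_channels=240, out_channels=240, kernel_size=1, groups=3)

x = torch.randn(1, 240, 28, 28)        # hypothetical input
y = channel_shuffle(pw_gconv(x), groups=3)
print(y.shape)                         # torch.Size([1, 240, 28, 28])
```

Note that the shuffle itself adds no parameters and negligible FLOPs; the savings come from the grouped 1×1 convolution, whose cost drops by a factor of the group count.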



Citations
Book Chapter (DOI)

AR-Net: Adaptive Frame Resolution for Efficient Action Recognition

TL;DR: Proposes AR-Net (Adaptive Resolution Network), which selects on the fly the optimal resolution for each frame, conditioned on the input, for efficient action recognition in long untrimmed videos.
Book Chapter (DOI)

IVD-Net: Intervertebral Disc Localization and Segmentation in MRI with a Multi-modal UNet

TL;DR: In this article, the authors propose an architecture for intervertebral disc (IVD) localization and segmentation in multi-modal magnetic resonance images (MRI) that extends the well-known UNet.
Proceedings Article (DOI)

Refine Myself by Teaching Myself: Feature Refinement via Self-Knowledge Distillation

TL;DR: Proposes a self-knowledge distillation method, Feature Refinement via Self-Knowledge Distillation (FRSKD), which utilizes an auxiliary self-teacher network to transfer refined knowledge to the classifier network.
Proceedings Article (DOI)

Scaling Up Your Kernels to 31×31: Revisiting Large Kernel Design in CNNs

TL;DR: Proposes RepLKNet, which uses a few large convolutional kernels instead of a stack of small kernels to close the performance gap between CNNs and ViTs, achieving results comparable or superior to Swin Transformer on ImageNet.
Posted Content

Structured Probabilistic Pruning for Convolutional Neural Network Acceleration

TL;DR: A novel progressive parameter pruning method, Structured Probabilistic Pruning (SPP), which prunes weights of convolutional layers in a probabilistic manner and can be directly applied to accelerate multi-branch CNNs, such as ResNet, without specific adaptations.
References
Proceedings Article (DOI)

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework to ease the training of networks substantially deeper than those used previously, which won 1st place on the ILSVRC 2015 classification task.
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: Describes a deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax, which achieved state-of-the-art performance on ImageNet classification.
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, showing that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Proceedings Article (DOI)

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Trending Questions (1)
Can convolutional neural networks run on mobile phones?

Yes, convolutional neural networks can run on mobile phones. The paper specifically mentions that ShuffleNet is designed for mobile devices with limited computing power.