Open Access Proceedings Article

ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices

Xiangyu Zhang, +3 more
pp. 6848-6856
TL;DR
ShuffleNet utilizes two new operations, pointwise group convolution and channel shuffle, to greatly reduce computation cost while maintaining accuracy, and achieves an actual speedup over AlexNet at comparable accuracy.
Abstract
We introduce an extremely computation-efficient CNN architecture named ShuffleNet, which is designed specially for mobile devices with very limited computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new operations, pointwise group convolution and channel shuffle, to greatly reduce computation cost while maintaining accuracy. Experiments on ImageNet classification and MS COCO object detection demonstrate the superior performance of ShuffleNet over other structures, e.g., lower top-1 error (absolute 7.8%) than recent MobileNet [12] on ImageNet classification task, under the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet achieves ~13x actual speedup over AlexNet while maintaining comparable accuracy.
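
The two operations named in the abstract are straightforward to picture in code. The following is a minimal sketch, written with PyTorch rather than the authors' released implementation, of a pointwise (1x1) group convolution followed by the channel shuffle that lets information flow across channel groups; the layer sizes (24 channels, 3 groups, a 56x56 feature map) are illustrative assumptions, not values taken from the paper.

import torch
import torch.nn as nn

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    # Reshape (N, C, H, W) -> (N, groups, C // groups, H, W), swap the two
    # channel axes, then flatten back so channels from different groups interleave.
    n, c, h, w = x.shape
    x = x.view(n, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(n, c, h, w)

# Hypothetical sizes for illustration: 24 channels split into 3 groups.
pw_group_conv = nn.Conv2d(24, 24, kernel_size=1, groups=3, bias=False)  # pointwise group convolution
feat = torch.randn(1, 24, 56, 56)
out = channel_shuffle(pw_group_conv(feat), groups=3)
print(out.shape)  # torch.Size([1, 24, 56, 56])

Because the shuffle is only a reshape-transpose-reshape, it adds negligible computation on top of the grouped 1x1 convolutions.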


Citations
Proceedings Article

1.2 Intelligence on Silicon: From Deep-Neural-Network Accelerators to Brain Mimicking AI-SoCs

TL;DR: Artificial-intelligence technology is widely used in information hardware, software, and networking, underlying consumer technologies such as smartphones, home appliances, and the Web.
Journal Article

Image-based segmentation and quantification of weak interlayers in rock tunnel face via deep learning

TL;DR: An integrated pixel-level method based on the DeepLabv3+ deep convolutional neural network (DCNN) is proposed for detecting and quantifying weak interlayers; it efficiently segments damage on rock tunnel faces, eliminates more noise, and runs considerably faster.
Proceedings Article

Driver Anomaly Detection: A Dataset and Contrastive Learning Approach

TL;DR: In this article, a contrastive learning approach was proposed to learn a metric to differentiate normal driving from anomalous driving, which achieved a 0.9673 AUC on the test set.
Journal Article

A dual attention network based on EfficientNet-B2 for short-term fish school feeding behavior analysis in aquaculture

TL;DR: Wang et al. propose a dual attention network with EfficientNet-B2 for fine-grained short-term feeding behavior analysis of fish schools; it includes two parallel attention modules that focus on feature extraction in the feeding region.
Posted Content

Taxonomy and Evaluation of Structured Compression of Convolutional Neural Networks

TL;DR: This work introduces a new way to categorize all published compression methods, based on the amount of data and compute needed to make the methods work in practice, and shows that SVD and probabilistic compression or pruning methods are complementary and give the best results of all the considered methods.
References
Proceedings Article

Deep Residual Learning for Image Recognition

TL;DR: The authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously; the resulting models won first place in the ILSVRC 2015 classification task.
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: State-of-the-art ImageNet classification performance is achieved with a deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting, using an architecture with very small convolution filters, and shows that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Proceedings Article

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Trending Questions (1)
Can convolutional neural networks run on mobile phones?

Yes, convolutional neural networks can run on mobile phones. The paper specifically mentions that ShuffleNet is designed for mobile devices with limited computing power.