Open Access Proceedings ArticleDOI

ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices

Xiangyu Zhang, +3 more
- pp 6848-6856
TLDR
ShuffleNet, as discussed by the authors, utilizes two new operations, pointwise group convolution and channel shuffle, to greatly reduce computation cost while maintaining accuracy, and achieves roughly a 13× actual speedup over AlexNet at comparable accuracy.
Abstract
We introduce an extremely computation-efficient CNN architecture named ShuffleNet, which is designed specially for mobile devices with very limited computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new operations, pointwise group convolution and channel shuffle, to greatly reduce computation cost while maintaining accuracy. Experiments on ImageNet classification and MS COCO object detection demonstrate the superior performance of ShuffleNet over other structures, e.g. lower top-1 error (absolute 7.8%) than recent MobileNet [12] on ImageNet classification task, under the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet achieves ~13× actual speedup over AlexNet while maintaining comparable accuracy.
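The channel shuffle operation named in the abstract is simple to express in code. Below is a minimal sketch assuming a PyTorch-style NCHW tensor layout (the framework choice is an assumption of this sketch, not something the paper prescribes): channels produced by a pointwise group convolution are viewed as (groups, channels_per_group), transposed, and flattened back so the next grouped layer receives channels from every group.

```python
import torch
import torch.nn as nn

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Interleave channels across groups: view as (N, G, C//G, H, W),
    swap the group and per-group channel axes, then flatten back."""
    n, c, h, w = x.size()
    assert c % groups == 0, "channel count must be divisible by groups"
    x = x.view(n, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(n, c, h, w)

# Usage: shuffle after a pointwise (1x1) group convolution so the
# following grouped layer sees inputs from every channel group.
x = torch.randn(1, 12, 8, 8)
pointwise_group_conv = nn.Conv2d(12, 12, kernel_size=1, groups=3)
y = channel_shuffle(pointwise_group_conv(x), groups=3)
print(y.shape)  # torch.Size([1, 12, 8, 8])
```

Without the shuffle, stacked group convolutions would keep each group's information isolated; the transpose-and-flatten step is what lets information flow across groups at negligible cost.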



Citations
Proceedings ArticleDOI

LRNNET: A Light-Weighted Network with Efficient Reduced Non-Local Operation for Real-Time Semantic Segmentation

TL;DR: LRNNet as discussed by the authors proposes a factorized convolutional block in a ResNet-style encoder to achieve more lightweight, efficient, and powerful feature extraction, and utilizes spatial regional dominant singular vectors to achieve reduced and more representative non-local feature integration with much lower computation and memory cost.
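For illustration only, the sketch below shows one common form of convolution factorization, replacing a 3x3 filter with a 3x1 followed by a 1x3 convolution; the class name and channel counts are hypothetical, and LRNNet's actual block differs in its details.

```python
import torch
import torch.nn as nn

class FactorizedConvBlock(nn.Module):
    """Illustrative factorized convolution: a 3x3 filter approximated by a
    3x1 convolution followed by a 1x3 convolution, reducing parameters and
    FLOPs relative to the full 3x3 filter."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv_3x1 = nn.Conv2d(channels, channels, kernel_size=(3, 1), padding=(1, 0))
        self.conv_1x3 = nn.Conv2d(channels, channels, kernel_size=(1, 3), padding=(0, 1))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.conv_1x3(self.relu(self.conv_3x1(x))))

print(FactorizedConvBlock(32)(torch.randn(1, 32, 64, 64)).shape)  # torch.Size([1, 32, 64, 64])
```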
Posted Content

IVD-Net: Intervertebral disc localization and segmentation in MRI with a multi-modal UNet.

TL;DR: This paper proposes an architecture for IVD localization and segmentation in multi-modal MRI that extends the well-known U-Net and improves its standard modules by extending inception modules with two dilated convolution blocks of different scales, which helps handle multi-scale context.
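As a rough illustration of combining inception-style branches with dilated convolutions of different scales, the sketch below runs two parallel dilated 3x3 branches next to a 1x1 branch and concatenates them along the channel dimension; the branch layout and channel counts are assumptions of this sketch, not IVD-Net's exact module.

```python
import torch
import torch.nn as nn

class DilatedInceptionBlock(nn.Module):
    """Illustrative inception-style block: a 1x1 branch plus two 3x3
    branches with dilation rates 2 and 4, concatenated along channels
    to capture context at multiple scales."""
    def __init__(self, in_ch: int, branch_ch: int):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.b2 = nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=2, dilation=2)
        self.b3 = nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=4, dilation=4)

    def forward(self, x):
        return torch.cat([self.b1(x), self.b2(x), self.b3(x)], dim=1)

block = DilatedInceptionBlock(in_ch=16, branch_ch=8)
print(block(torch.randn(1, 16, 32, 32)).shape)  # torch.Size([1, 24, 32, 32])
```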
Journal ArticleDOI

Modular Lightweight Network for Road Object Detection Using a Feature Fusion Approach

TL;DR: A modular lightweight network model is presented for detecting road objects such as cars, pedestrians, and cyclists, especially when they are far from the camera and small in size, using a fast and efficient network architecture for detecting small objects.
Journal ArticleDOI

CNN-Grinder: From Algorithmic to High-Level Synthesis descriptions of CNNs for Low-end-low-cost FPGA SoCs

TL;DR: CNN-Grinder is presented, a template-driven workflow for converting algorithmic descriptions of mobile-friendly convolutional neural networks, such as SqueezeNet v1.1 and ZynqNet, into HLS code that can be used to program low-end, low-cost FPGA SoCs.
Journal ArticleDOI

Learning-to-augment strategy using noisy and denoised data: Improving generalizability of deep CNN for the detection of COVID-19 in X-ray images.

TL;DR: Momeny et al. as mentioned in this paper proposed a learning-to-augment approach that generates new noisy variants of the original image data with optimized noise density to improve the robustness and generalization of deep CNNs for COVID-19 detection.
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework that eases the training of networks substantially deeper than those used previously; the resulting models won 1st place on the ILSVRC 2015 classification task.
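The core idea, an identity shortcut added around a small stack of convolutions, can be sketched as below; this is a minimal same-channel variant, and the BatchNorm placement and channel count are illustrative rather than a faithful reproduction of the paper's block.

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """Two 3x3 convolutions whose output is added back to the input
    (identity shortcut), so the stack only needs to learn a residual."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut

print(BasicResidualBlock(64)(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])
```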
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: As discussed by the authors, state-of-the-art ImageNet classification performance was achieved with a deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
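The described layer stack can be sketched roughly as follows; the channel counts, kernel sizes, and strides follow a common reimplementation and are assumptions of this sketch rather than a faithful reproduction of the original model (which also used local response normalization and dropout).

```python
import torch
import torch.nn as nn

# Illustrative AlexNet-style stack: five convolutional layers (some followed
# by max pooling) and three fully-connected layers producing 1000 class logits.
alexnet_like = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),  # logits for the final 1000-way softmax
)
print(alexnet_like(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 1000])
```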
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Proceedings ArticleDOI

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Trending Questions (1)
Can convolutional neural networks run on mobile phones?

Yes, convolutional neural networks can run on mobile phones. The paper specifically mentions that ShuffleNet is designed for mobile devices with limited computing power.