ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices
Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, Jian Sun
pp. 6848-6856
TLDR
ShuffleNet utilizes two new operations, pointwise group convolution and channel shuffle, to greatly reduce computation cost while maintaining accuracy, and achieves an actual speedup over AlexNet at comparable accuracy.
Abstract
We introduce an extremely computation-efficient CNN architecture named ShuffleNet, which is designed specially for mobile devices with very limited computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new operations, pointwise group convolution and channel shuffle, to greatly reduce computation cost while maintaining accuracy. Experiments on ImageNet classification and MS COCO object detection demonstrate the superior performance of ShuffleNet over other structures, e.g. lower top-1 error (absolute 7.8%) than recent MobileNet [12] on the ImageNet classification task, under the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet achieves ~13× actual speedup over AlexNet while maintaining comparable accuracy.
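The two operations named in the abstract are straightforward to express in code. Below is a minimal PyTorch sketch, not the authors' implementation (the framework choice and the 240-channel, 3-group, 28x28 shapes are illustrative assumptions): a 1x1 group convolution cuts multiply-adds by the number of groups, and channel shuffle interleaves channels so information still flows between groups.

    import torch
    import torch.nn as nn

    def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
        # x has shape (N, C, H, W); C must be divisible by groups.
        n, c, h, w = x.shape
        # Reshape to (N, g, C//g, H, W), swap the group and per-group
        # channel axes, then flatten back: channels are now interleaved
        # across groups, so the next group convolution mixes information.
        x = x.view(n, groups, c // groups, h, w)
        x = x.transpose(1, 2).contiguous()
        return x.view(n, c, h, w)

    # Pointwise (1x1) group convolution: with g groups, each output channel
    # reads only C/g input channels, cutting computation by a factor of g.
    gconv = nn.Conv2d(240, 240, kernel_size=1, groups=3, bias=False)

    x = torch.randn(1, 240, 28, 28)
    y = channel_shuffle(gconv(x), groups=3)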
Citations
Proceedings Article
SqueezeBERT: What can computer vision teach NLP about efficient neural networks?
TL;DR: SqueezeBERT replaces self-attention layers with grouped convolutions and uses this technique in a novel network architecture that runs 4.3x faster than BERT-base on the Pixel 3 while achieving competitive accuracy on the GLUE test set.
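As a rough illustration of the idea (not SqueezeBERT's actual code; the 768-dimensional hidden size and 4 groups are assumptions), a position-wise feed-forward layer can be written as a 1x1 convolution over the token sequence, and grouping it reduces its cost:

    import torch
    import torch.nn as nn

    # A position-wise feed-forward layer is a 1x1 convolution over the
    # sequence; adding groups splits the channels so each output channel
    # reads only hidden_dim/groups inputs, reducing weights and FLOPs.
    dense   = nn.Conv1d(768, 768, kernel_size=1)            # fully mixed
    grouped = nn.Conv1d(768, 768, kernel_size=1, groups=4)  # ~4x cheaper

    tokens = torch.randn(1, 768, 128)  # (batch, hidden dim, sequence length)
    out = grouped(tokens)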
Journal Article
One-Shot Neural Architecture Search: Maximising Diversity to Overcome Catastrophic Forgetting
TL;DR: Experiments on the common NAS search space demonstrate that NSAS and its variants improve the predictive ability of supernet training in one-shot NAS, with strong and efficient performance on the CIFAR-10, CIFAR-100, and ImageNet datasets.
Journal Article
s-LWSR: Super Lightweight Super-Resolution Network
TL;DR: This article proposes a flexibly adjustable, super-lightweight SISR pipeline with limited parameters and operations that achieves performance similar to heavier state-of-the-art (SOTA) deep SR methods.
Posted Content
Progressive DARTS: Bridging the Optimization Gap for NAS in the Wild
TL;DR: A progressive method that gradually increases the network depth during the search stage, leading to the Progressive DARTS (P-DARTS) algorithm, which achieves improved performance on both the proxy dataset (CIFAR-10) and several target problems.
Proceedings Article
Meta Architecture Search
TL;DR: The Bayesian Meta Architecture SEarch (BASE) framework is proposed, which uses a Bayesian formulation of the architecture search problem to learn over an entire set of tasks simultaneously, opening up new possibilities for efficient and massively scalable architecture search across multiple tasks.
References
Proceedings Article
Deep Residual Learning for Image Recognition
TL;DR: Proposes a residual learning framework that eases the training of networks substantially deeper than those used previously; the resulting model won 1st place in the ILSVRC 2015 classification task.
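For context, a minimal sketch of the residual idea in illustrative PyTorch (not the paper's code): the stacked layers learn a correction F(x) that is added to the identity, which is easier to optimize than fitting the full mapping directly.

    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        # The block outputs F(x) + x, so the stacked layers only need
        # to fit a residual correction to the identity mapping.
        def __init__(self, channels: int):
            super().__init__()
            self.f = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                nn.BatchNorm2d(channels),
            )
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.relu(self.f(x) + x)  # identity shortcut

    y = ResidualBlock(64)(torch.randn(1, 64, 56, 56))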
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: A deep convolutional neural network that achieves state-of-the-art ImageNet classification performance, consisting of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
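A hedged sketch of that description (channel widths follow the common torchvision variant of AlexNet, an assumption beyond the TL;DR):

    import torch
    import torch.nn as nn

    # Five convolutional layers (some followed by max-pooling) and three
    # fully-connected layers ending in 1000 logits for a final softmax.
    alexnet_like = nn.Sequential(
        nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(),
        nn.MaxPool2d(3, stride=2),
        nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(),
        nn.MaxPool2d(3, stride=2),
        nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(3, stride=2),
        nn.Flatten(),
        nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
        nn.Linear(4096, 4096), nn.ReLU(),
        nn.Linear(4096, 1000),  # 1000-way class logits
    )

    logits = alexnet_like(torch.randn(1, 3, 224, 224))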
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting using an architecture with very small (3×3) convolution filters, showing that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
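A small illustrative sketch of the design point (channel widths assumed): two stacked 3x3 convolutions cover a 5x5 receptive field with fewer parameters and an extra nonlinearity, and repeating such blocks reaches the 16-19 weight layers the TL;DR mentions.

    import torch
    import torch.nn as nn

    # One VGG-style block: stacked 3x3 convolutions followed by pooling.
    vgg_block = nn.Sequential(
        nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(128, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.MaxPool2d(kernel_size=2, stride=2),
    )

    y = vgg_block(torch.randn(1, 64, 112, 112))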
Proceedings Article
ImageNet: A large-scale hierarchical image database
TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than existing image datasets.