Open Access · Posted Content
SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size
TLDR
This work proposes a small DNN architecture called SqueezeNet, which achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters and can be compressed to less than 0.5MB (510x smaller than AlexNet).
Abstract:
Recent research on deep neural networks has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple DNN architectures that achieve that accuracy level. With equivalent accuracy, smaller DNN architectures offer at least three advantages: (1) Smaller DNNs require less communication across servers during distributed training. (2) Smaller DNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques we are able to compress SqueezeNet to less than 0.5MB (510x smaller than AlexNet).
The SqueezeNet architecture is available for download here: this https URL
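Though the abstract does not spell it out, the paper's core building block is the "Fire module": a 1x1 "squeeze" convolution that reduces channels, feeding parallel 1x1 and 3x3 "expand" convolutions whose outputs are concatenated. A minimal sketch in PyTorch (an assumption; the original release was in Caffe), with channel counts borrowed from the paper's fire2 layer:

```python
import torch
import torch.nn as nn

class Fire(nn.Module):
    """SqueezeNet's Fire module: a 1x1 squeeze layer cuts the channel count,
    then parallel 1x1 and 3x3 expand layers are concatenated channel-wise."""
    def __init__(self, in_channels, squeeze, expand1x1, expand3x3):
        super().__init__()
        self.squeeze = nn.Conv2d(in_channels, squeeze, kernel_size=1)
        self.expand1x1 = nn.Conv2d(squeeze, expand1x1, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze, expand3x3, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)

# fire2 from the paper: 96 input channels -> squeeze to 16 -> expand to 64+64.
fire2 = Fire(96, 16, 64, 64)
out = fire2(torch.randn(1, 96, 55, 55))  # shape: (1, 128, 55, 55)
```

Keeping the squeeze layer narrow is what drives the parameter reduction: most 3x3 filters in the network see far fewer input channels than in AlexNet.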
Citations
Book Chapter
Bi-Real Net: Enhancing the Performance of 1-Bit CNNs with Improved Representational Capability and Advanced Training Algorithm
TL;DR: In this paper, the authors propose Bi-Real Net, which connects the real-valued activations (after the 1-bit convolution and/or batch normalization layer, before the sign function) to the activations of the consecutive block through an identity shortcut.
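A minimal PyTorch sketch of that idea, assuming a clipped straight-through gradient for the sign function (the paper itself proposes a piecewise polynomial approximation); BiRealBlock and BinaryActivation are illustrative names, and weight binarization is omitted for brevity:

```python
import torch
import torch.nn as nn

class BinaryActivation(torch.autograd.Function):
    """Sign function with a clipped straight-through gradient (a simplification
    of the paper's piecewise polynomial gradient approximation)."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * (x.abs() <= 1).float()

class BiRealBlock(nn.Module):
    """The real-valued input (before the sign function) bypasses the 1-bit
    convolution via an identity shortcut, preserving real-valued information."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x):
        binary = BinaryActivation.apply(x)     # 1-bit activations
        return self.bn(self.conv(binary)) + x  # shortcut carries real activations
```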
Proceedings Article
A configurable cloud-scale DNN processor for real-time AI
Jeremy Fowers, Kalin Ovtcharov, Michael K. Papamichael, Todd Massengill, Ming Liu, Daniel Lo, Shlomi Alkalay, Michael Haselman, Logan Adams, Mahdi Ghandi, Stephen F. Heil, Prerak Patel, Adam Sapek, Gabriel Weisz, Lisa Woods, Sitaram Lanka, Steven K. Reinhardt, Adrian M. Caulfield, Eric S. Chung, Doug Burger +19 more
TL;DR: This paper describes the NPU architecture for Project Brainwave, a production-scale system for real-time AI, which achieves more than an order of magnitude improvement in latency and throughput over state-of-the-art GPUs on large RNNs at a batch size of 1.
Proceedings Article
SemanticFusion: Dense 3D semantic mapping with convolutional neural networks
TL;DR: In this paper, the authors combine CNNs and a state-of-the-art dense SLAM system, ElasticFusion, which provides long-term dense correspondences between frames of indoor RGB-D video even during loopy scanning trajectories.
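The fusion step reduces to a recursive Bayesian update: each surfel keeps a class distribution, and every new CNN prediction associated with it through the SLAM correspondences multiplies in and renormalizes. A NumPy sketch under that reading (fuse_labels is a hypothetical helper, not the paper's API):

```python
import numpy as np

def fuse_labels(surfel_probs, cnn_probs):
    """Recursive Bayesian update of per-surfel class distributions.
    surfel_probs: (N, C) current distributions, one row per surfel
    cnn_probs:    (N, C) CNN softmax outputs projected onto the surfels
    """
    fused = surfel_probs * cnn_probs                 # elementwise Bayes update
    return fused / fused.sum(axis=1, keepdims=True)  # renormalize per surfel

# Toy usage: two surfels, three classes, starting from a uniform prior.
prior = np.full((2, 3), 1.0 / 3.0)
cnn = np.array([[0.7, 0.2, 0.1], [0.1, 0.1, 0.8]])
print(fuse_labels(prior, cnn))
```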
Book Chapter
Data-Driven Sparse Structure Selection for Deep Neural Networks
Zehao Huang, Naiyan Wang +1 more
TL;DR: A simple and effective framework learns and prunes deep models end-to-end by adding sparsity regularization on scaling factors and solving the resulting optimization problem with a modified stochastic Accelerated Proximal Gradient (APG) method.
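The mechanism is easy to sketch: an L1 penalty on per-structure scaling factors is handled with a proximal (soft-thresholding) step, so factors driven exactly to zero mark neurons, groups, or blocks to prune. A simplified PyTorch sketch of one update (apg_step is illustrative; the paper's modified stochastic APG differs in detail):

```python
import torch

def soft_threshold(lam, x):
    """Proximal operator of the L1 norm: shrinks scaling factors toward zero."""
    return torch.sign(x) * torch.clamp(x.abs() - lam, min=0.0)

def apg_step(scale, grad, velocity, lr=0.01, momentum=0.9, lam=1e-4):
    """One accelerated proximal-gradient update on structure scaling factors."""
    z = soft_threshold(lr * lam, scale - lr * grad)  # gradient step + prox
    velocity = momentum * velocity + (z - scale)     # momentum on the change
    return z + momentum * velocity, velocity         # look-ahead point, state
```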
Journal Article
COVIDiagnosis-Net: Deep Bayes-SqueezeNet based diagnosis of the coronavirus disease 2019 (COVID-19) from X-ray images.
Ferhat Ucar, Deniz Korkmaz +1 more
TL;DR: This study demonstrates an AI-based structure that outperforms existing approaches, showing how fine-tuned hyperparameters and an augmented dataset let the proposed network surpass existing network designs and reach higher COVID-19 diagnosis accuracy.
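As a rough illustration of the recipe (augment the X-ray data offline, then fine-tune a SqueezeNet for three classes: normal, pneumonia, COVID-19), here is a hedged torchvision sketch; the augmentation parameters and the paper's Bayesian hyperparameter search are not reproduced here and should be treated as assumptions:

```python
import torch.nn as nn
from torchvision import models, transforms

# Illustrative offline-style augmentation for chest X-rays (parameters assumed).
augment = transforms.Compose([
    transforms.RandomRotation(10),
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.ToTensor(),
])

# Fine-tune an ImageNet-pretrained SqueezeNet by swapping the 1000-way head
# for a 3-class one (torchvision's classifier head is a 1x1 convolution).
model = models.squeezenet1_1(weights="IMAGENET1K_V1")
model.classifier[1] = nn.Conv2d(512, 3, kernel_size=1)
```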
References
Proceedings Article
Deep Residual Learning for Image Recognition
TL;DR: In this article, the authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously, winning 1st place in the ILSVRC 2015 classification task.
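The idea fits in a few lines: rather than learning a mapping H(x) directly, the stacked layers learn a residual F(x) = H(x) - x, and an identity shortcut adds x back. A minimal PyTorch sketch (this BasicBlock omits the strided/downsampling variant):

```python
import torch.nn as nn

class BasicBlock(nn.Module):
    """Residual block: the body learns F(x); the shortcut adds x back,
    which eases optimization as depth grows."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + x)  # F(x) + x
```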
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: The authors train a deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax, achieving state-of-the-art performance on ImageNet classification.
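A compact PyTorch sketch of that layout (channel counts follow the common torchvision variant, which differs slightly from the original two-GPU model):

```python
import torch.nn as nn

# Five convolutional layers, some followed by max pooling, then three
# fully-connected layers producing 1000 logits (softmax lives in the loss).
alexnet = nn.Sequential(
    nn.Conv2d(3, 64, 11, stride=4, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(64, 192, 5, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(192, 384, 3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, 3, padding=1), nn.ReLU(),
    nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(), nn.Dropout(),
    nn.Linear(4096, 4096), nn.ReLU(), nn.Dropout(),
    nn.Linear(4096, 1000),
)
```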
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
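The design principle is easy to sketch: replace large filters with stacks of 3x3 convolutions, so two stacked 3x3 layers cover a 5x5 receptive field with fewer parameters and an extra nonlinearity. A PyTorch sketch of one stage (vgg_stage is an illustrative helper, not the authors' code):

```python
import torch.nn as nn

def vgg_stage(in_ch, out_ch, num_convs):
    """One VGG stage: stacked 3x3 convolutions followed by 2x2 max pooling."""
    layers = []
    for i in range(num_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2, 2))
    return nn.Sequential(*layers)

# The 16- and 19-layer configurations differ only in convs per stage,
# e.g. VGG-16's first two stages:
features = nn.Sequential(vgg_stage(3, 64, 2), vgg_stage(64, 128, 2))
```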
Proceedings Article
ImageNet: A large-scale hierarchical image database
TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.