Open Access · Posted Content
SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size
TL;DR: This work proposes a small DNN architecture called SqueezeNet, which achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters and can be compressed to less than 0.5MB (510x smaller than AlexNet).
Abstract:
Recent research on deep neural networks has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple DNN architectures that achieve that accuracy level. With equivalent accuracy, smaller DNN architectures offer at least three advantages: (1) Smaller DNNs require less communication across servers during distributed training. (2) Smaller DNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques we are able to compress SqueezeNet to less than 0.5MB (510x smaller than AlexNet).
The SqueezeNet architecture is available for download here: this https URL
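The parameter savings described in the abstract come from SqueezeNet's Fire modules, which first "squeeze" the channel count with 1x1 convolutions and then "expand" with a mix of 1x1 and 3x3 filters. A minimal sketch of the weight-count arithmetic, with illustrative channel sizes chosen for round numbers rather than taken from any specific layer of the paper:

```python
def conv_params(c_in, c_out, k):
    """Weight count of a k x k convolution (biases ignored)."""
    return c_in * c_out * k * k

def fire_params(c_in, s1x1, e1x1, e3x3):
    """Weight count of a Fire module: a 1x1 squeeze layer feeding
    parallel 1x1 and 3x3 expand branches."""
    return (conv_params(c_in, s1x1, 1)    # squeeze layer
            + conv_params(s1x1, e1x1, 1)  # 1x1 expand branch
            + conv_params(s1x1, e3x3, 3)) # 3x3 expand branch

plain = conv_params(128, 128, 3)      # 147456 weights
fire = fire_params(128, 16, 64, 64)   # 12288 weights
print(plain, fire, plain / fire)      # 147456 12288 12.0
```

With these illustrative sizes, replacing a plain 3x3 layer (128 channels in and out) with a Fire module producing the same 128 output channels cuts the weight count 12x, because most filters see only the 16 squeezed channels and half of the expand filters are 1x1 instead of 3x3.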
Citations
Journal Article
How to Correctly Detect Face-Masks for COVID-19 from Visual Information?
TL;DR: A comprehensive experimental evaluation of several recent face detectors on masked-face images is conducted, and the usefulness of multiple off-the-shelf deep-learning models for recognizing correct face-mask placement is investigated.
Journal Article
Real-Time Vehicle Make and Model Recognition with the Residual SqueezeNet Architecture
TL;DR: A novel deep learning approach for vehicle make and model recognition (MMR) is presented: a variant of the vanilla SqueezeNet with bypass connections between the Fire modules is employed, which makes the MMR system more efficient.
Proceedings Article
OpenEI: An Open Framework for Edge Intelligence
TL;DR: An Open Framework for Edge Intelligence (OpenEI) is presented: a lightweight software platform that equips edge devices with intelligent processing and data-sharing capability. The work analyzes four fundamental EI techniques used to build OpenEI and identifies several open problems and potential research directions.
Posted Content
SpinalNet: Deep Neural Network with Gradual Input.
H M Dipu Kabir, Moloud Abdar, Seyed Mohammad Jafar Jalali, Abbas Khosravi, Amir F. Atiya, Saeid Nahavandi, Dipti Srinivasan +6 more
TL;DR: Inspired by the human somatosensory system, the SpinalNet is proposed to achieve higher accuracy with fewer computational resources while avoiding the vanishing gradient problem.
Posted Content
Edge Intelligence: Architectures, Challenges, and Applications
TL;DR: This survey article provides a comprehensive introduction to edge intelligence and its application areas and presents a systematic classification of the state of the solutions by examining research results and observations for each of the four components.
References
Proceedings Article
Deep Residual Learning for Image Recognition
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: A deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieved state-of-the-art performance on ImageNet classification.
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman +1 more
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Proceedings Article
ImageNet: A large-scale hierarchical image database
TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.