Open Access · Posted Content

SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size

TLDR
This work proposes a small DNN architecture called SqueezeNet, which achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters and can be compressed to less than 0.5MB (510x smaller than AlexNet).
Abstract
Recent research on deep neural networks has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple DNN architectures that achieve that accuracy level. With equivalent accuracy, smaller DNN architectures offer at least three advantages: (1) Smaller DNNs require less communication across servers during distributed training. (2) Smaller DNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques we are able to compress SqueezeNet to less than 0.5MB (510x smaller than AlexNet). The SqueezeNet architecture is available for download here: this https URL
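To make the parameter savings concrete, the sketch below shows a Fire module, the building block of SqueezeNet: a 1x1 "squeeze" layer feeding parallel 1x1 and 3x3 "expand" layers whose outputs are concatenated. This is a minimal PyTorch illustration; the specific channel counts (16 squeeze filters, 64 + 64 expand filters) are assumptions taken from one typical configuration, not a verbatim copy of the released model.

import torch
import torch.nn as nn

class Fire(nn.Module):
    """SqueezeNet-style Fire module: 1x1 squeeze, then parallel 1x1 and 3x3 expand."""
    def __init__(self, in_ch, squeeze_ch, expand1x1_ch, expand3x3_ch):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
        self.expand1x1 = nn.Conv2d(squeeze_ch, expand1x1_ch, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze_ch, expand3x3_ch, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        # Concatenate the two expand branches along the channel dimension.
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)

# Example: far fewer parameters than a plain 3x3 convolution of the same output width.
fire = Fire(in_ch=96, squeeze_ch=16, expand1x1_ch=64, expand3x3_ch=64)
out = fire(torch.randn(1, 96, 55, 55))   # -> shape (1, 128, 55, 55)

Replacing most 3x3 filters with 1x1 filters and limiting the number of input channels to the remaining 3x3 filters is what drives the 50x parameter reduction claimed in the abstract.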



Citations
Proceedings ArticleDOI

Toward low-flying autonomous MAV trail navigation using deep neural networks for environmental awareness

TL;DR: A micro aerial vehicle (MAV) system, built with inexpensive off-the-shelf hardware, autonomously follows trails in unstructured, outdoor environments such as forests; it introduces a deep neural network called TrailNet for estimating the view orientation and lateral offset of the MAV with respect to the trail center.
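The orientation/offset estimate described above can be pictured as an image classifier with two small output heads. The sketch below is a hypothetical stand-in only: the backbone, class counts, and head structure are assumptions for illustration, not the actual TrailNet layout or weights.

import torch
import torch.nn as nn

class TrailHeadsSketch(nn.Module):
    """Hypothetical two-head classifier: one softmax over view orientation
    (e.g. left / straight / right) and one over lateral offset (left / center / right)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(                       # toy feature extractor
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.orientation = nn.Linear(64, 3)                  # view-orientation head
        self.offset = nn.Linear(64, 3)                       # lateral-offset head

    def forward(self, x):
        f = self.backbone(x)
        return self.orientation(f), self.offset(f)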
Posted Content

Do ImageNet Classifiers Generalize to ImageNet?

TL;DR: This article showed that the accuracy drops are not caused by adaptivity, but by the models' inability to generalize to slightly "harder" images than those found in the original test sets.
Book ChapterDOI

SqueezeSegV3: Spatially-Adaptive Convolution for Efficient Point-Cloud Segmentation

TL;DR: This paper proposes Spatially-Adaptive Convolution (SAC), which adopts different filters for different locations according to the input, and can be implemented as a series of element-wise multiplications, im2col, and standard convolution.
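One simplified way to read "element-wise multiplications followed by standard convolution" is sketched below: per-location coefficients are predicted from the raw input and used to reweight the features before an ordinary convolution. This is an illustrative variant under assumptions; the channel counts (and the 5-channel raw input, mirroring a LiDAR spherical projection) are not taken from the paper.

import torch
import torch.nn as nn

class SpatiallyAdaptiveConvSketch(nn.Module):
    """Simplified sketch: predict per-pixel weights from the raw input,
    reweight the features element-wise, then apply a standard convolution."""
    def __init__(self, in_ch, out_ch, raw_ch=5):
        super().__init__()
        self.weight_net = nn.Sequential(               # location-dependent coefficients
            nn.Conv2d(raw_ch, in_ch, kernel_size=1), nn.Sigmoid())
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, features, raw_input):
        w = self.weight_net(raw_input)                 # same spatial size as features
        return self.conv(features * w)                 # standard conv on reweighted features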
Journal ArticleDOI

Performance of deep learning vs machine learning in plant leaf disease detection

TL;DR: This article compares the performance of ML methods (Support Vector Machine, Random Forest, Stochastic Gradient Descent) and DL methods (Inception-v3, VGG-16, VGG-19) for citrus plant leaf disease detection, finding that the DL methods outperform the ML methods.
Journal ArticleDOI

A comprehensive survey on model compression and acceleration

TL;DR: A survey of techniques proposed for compressing and accelerating ML and DL models is presented; the challenges of existing techniques are discussed and future research directions in the field are provided.
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework to ease the training of networks that are substantially deeper than those used previously, winning 1st place in the ILSVRC 2015 classification task.
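The residual idea is compact enough to show directly: a block outputs F(x) + x, so its layers only need to learn the residual F(x) rather than the full mapping. The basic-block sketch below (channel counts assumed) illustrates the identity shortcut.

import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """Residual learning: output is F(x) + x via an identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)    # add the input back onto the block output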
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: The authors achieve state-of-the-art ImageNet classification performance with a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
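The layer outline in that summary can be written out directly. The sketch below follows the common single-GPU AlexNet-style variant; the exact channel sizes are assumptions here, not the original two-GPU split model.

import torch.nn as nn

# Five conv layers (some followed by max-pooling), then three fully-connected layers.
alexnet_like = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(), nn.Dropout(),
    nn.Linear(4096, 4096), nn.ReLU(), nn.Dropout(),
    nn.Linear(4096, 1000),   # 1000-way scores; softmax is applied in the loss
)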
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, showing that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
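The "very small convolution filters" are stacked 3x3 convolutions; depth comes from repeating a simple block of convolutions plus pooling. The sketch below shows such a block, with channel counts chosen for illustration rather than copied from the paper.

import torch.nn as nn

def vgg_block(in_ch, out_ch, num_convs):
    """Stack of 3x3 convolutions followed by 2x2 max-pooling: the repeating
    unit that lets the network reach 16-19 weight layers with small filters."""
    layers = []
    for i in range(num_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2, stride=2))
    return nn.Sequential(*layers)

# e.g. the first two stages of a VGG-16-style network
stem = nn.Sequential(vgg_block(3, 64, 2), vgg_block(64, 128, 2))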
Proceedings ArticleDOI

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.