Open Access · Posted Content

SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size

TLDR
This work proposes a small DNN architecture called SqueezeNet, which achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters and can be compressed to less than 0.5MB (510x smaller than AlexNet).
Abstract
Recent research on deep neural networks has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple DNN architectures that achieve that accuracy level. With equivalent accuracy, smaller DNN architectures offer at least three advantages: (1) Smaller DNNs require less communication across servers during distributed training. (2) Smaller DNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques we are able to compress SqueezeNet to less than 0.5MB (510x smaller than AlexNet). The SqueezeNet architecture is available for download here: this https URL
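The abstract does not spell out how the parameter savings are achieved; in the full paper, SqueezeNet is built from "Fire" modules that first squeeze the channel count with 1x1 convolutions and then expand with a mix of 1x1 and 3x3 convolutions. Below is a minimal sketch of such a module, assuming PyTorch (the framework choice is ours for illustration; the original release was in Caffe, and the fire2-like layer sizes are taken as an example).

```python
# Hedged sketch of a SqueezeNet "Fire" module, assuming PyTorch.
import torch
import torch.nn as nn

class Fire(nn.Module):
    def __init__(self, in_channels, squeeze, expand1x1, expand3x3):
        super().__init__()
        # Squeeze layer: 1x1 convolutions that reduce the number of channels
        # fed into the (more expensive) expand layer.
        self.squeeze = nn.Conv2d(in_channels, squeeze, kernel_size=1)
        # Expand layer: a mix of 1x1 and 3x3 convolutions whose outputs are
        # concatenated along the channel dimension.
        self.expand1x1 = nn.Conv2d(squeeze, expand1x1, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze, expand3x3, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        return torch.cat(
            [self.relu(self.expand1x1(x)), self.relu(self.expand3x3(x))], dim=1
        )

# Example with fire2-like sizes on a 96-channel feature map:
fire = Fire(96, squeeze=16, expand1x1=64, expand3x3=64)
out = fire(torch.randn(1, 96, 55, 55))  # shape: [1, 128, 55, 55]
```

Replacing most 3x3 filters with 1x1 filters and shrinking the channel count seen by the remaining 3x3 filters is what drives the 50x parameter reduction; the further drop to under 0.5MB comes from applying model compression techniques (e.g. Deep Compression) on top of the architecture.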



Citations
Posted Content

E-LPIPS: Robust Perceptual Image Similarity via Random Transformation Ensembles.

TL;DR: Evidence of "perceptual convexity" is found by showing that convex combinations of similar-looking images retain appearance, and that discrete geodesics yield meaningful frame interpolation and texture morphing, all without explicit correspondence.
Journal ArticleDOI

Presentation Attack Detection Using a Tiny Fully Convolutional Network

TL;DR: A method to detect presentation attacks using a small fully convolutional network that achieves accuracy comparable to the state of the art while greatly reducing processing time and memory requirements.
Proceedings ArticleDOI

PixelHop++: A Small Successive-Subspace-Learning-Based (SSL-based) Model for Image Classification

TL;DR: The successive subspace learning (SSL) principle is used to design an interpretable learning model, known as the PixelHop method, for image classification; it decouples a joint spatial-spectral input tensor into multiple spatial tensors (one for each spectral component) under the spatial-spectral separability assumption and performs the Saab transform in a channel-wise manner.
Posted Content

Diversity can be Transferred: Output Diversification for White- and Black-box Attacks

TL;DR: This work proposes Output Diversified Sampling (ODS), a novel sampling strategy that attempts to maximize diversity in the target model's outputs among the generated samples and significantly improves the performance of existing white-box and black-box attacks.
Posted Content

SS-CAM: Smoothed Score-CAM for Sharper Visual Feature Localization.

TL;DR: This paper introduces SS-CAM, an enhanced visual explanation method that produces sharper, more centralized localization of object features within an image through a smoothing operation and outperforms Score-CAM on both faithfulness and localization tasks.
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework to ease the training of networks that are substantially deeper than those used previously; the resulting models won 1st place on the ILSVRC 2015 classification task.
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: The authors achieve state-of-the-art ImageNet classification performance with a deep convolutional neural network that consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, showing that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Proceedings ArticleDOI

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.