Posted Content (Open Access)
SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size
TLDR
This work proposes a small DNN architecture called SqueezeNet, which achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters and can be compressed to less than 0.5MB (510x smaller than AlexNet).
Abstract:
Recent research on deep neural networks has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple DNN architectures that achieve that accuracy level. With equivalent accuracy, smaller DNN architectures offer at least three advantages: (1) Smaller DNNs require less communication across servers during distributed training. (2) Smaller DNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques we are able to compress SqueezeNet to less than 0.5MB (510x smaller than AlexNet).
The SqueezeNet architecture is available for download here: this https URL
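Much of the parameter savings comes from SqueezeNet's Fire module, which first squeezes the input with inexpensive 1x1 convolutions and then expands it with a mix of 1x1 and 3x3 filters. Below is a minimal sketch, assuming PyTorch; the channel counts are illustrative rather than the paper's exact configuration.

```python
# A minimal sketch of SqueezeNet's Fire module (assuming PyTorch; channel
# counts are illustrative). A squeeze layer of 1x1 convs cuts the channels
# fed to the expand layer, which mixes cheap 1x1 filters with 3x3 filters.
import torch
import torch.nn as nn

class Fire(nn.Module):
    def __init__(self, in_ch, squeeze_ch, expand1x1_ch, expand3x3_ch):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
        self.expand1x1 = nn.Conv2d(squeeze_ch, expand1x1_ch, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze_ch, expand3x3_ch, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))                 # reduce channel count
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)

# Example: 96 input channels -> 64 + 64 = 128 output channels.
fire = Fire(96, squeeze_ch=16, expand1x1_ch=64, expand3x3_ch=64)
print(fire(torch.randn(1, 96, 55, 55)).shape)  # torch.Size([1, 128, 55, 55])
```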
Citations
Proceedings ArticleDOI
Fast Spatially-Varying Indoor Lighting Estimation
TL;DR: This work proposes a real-time method to estimate spatially-varying indoor lighting from a single RGB image and demonstrates, through quantitative experiments, that it achieves lower lighting estimation errors and that users prefer its results over the state-of-the-art.
Journal ArticleDOI
Automatic Skin Cancer Detection in Dermoscopy Images Based on Ensemble Lightweight Deep Learning Network
Lisheng Wei, Kun Ding, Huosheng Hu, et al.
TL;DR: The proposed lightweight skin cancer recognition model with feature discrimination, based on a fine-grained classification principle, outperforms the state-of-the-art deep learning approach on the ISBI 2016 skin lesion analysis towards melanoma detection challenge dataset.
Posted Content
Selection via Proxy: Efficient Data Selection for Deep Learning
Cody Coleman, Christopher Yeh, Stephen Mussmann, Baharan Mirzasoleiman, Peter Bailis, Percy Liang, Jure Leskovec, Matei Zaharia
TL;DR: This work shows that it can significantly improve the computational efficiency of data selection in deep learning by using a much smaller proxy model to perform data selection for tasks that will eventually require a large target model (e.g., selecting data points to label for active learning).
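The core mechanism is simple enough to sketch: train a small proxy model, score the unlabeled pool with it, and hand only the highest-scoring points to the expensive target model. The sketch below is a hedged illustration; the `proxy_predict` callable and the entropy-based uncertainty score are assumptions, not the paper's exact setup.

```python
# A hedged sketch of selection via proxy: a small, cheap proxy model scores a
# large unlabeled pool, and only the top-k highest-uncertainty points go to
# the expensive target model.
import numpy as np

def entropy(probs, eps=1e-12):
    return -np.sum(probs * np.log(probs + eps), axis=1)

def select_via_proxy(proxy_predict, pool, k):
    probs = proxy_predict(pool)     # cheap forward pass with the small proxy model
    scores = entropy(probs)         # uncertainty score per example
    return np.argsort(scores)[-k:]  # indices of the k most uncertain examples

# Usage with a stand-in proxy that emits random class probabilities:
rng = np.random.default_rng(0)
pool = rng.normal(size=(1000, 32))
fake_proxy = lambda x: rng.dirichlet(np.ones(10), size=len(x))
chosen = select_via_proxy(fake_proxy, pool, k=100)
```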
How to scale distributed deep learning
TL;DR: It is found, perhaps counterintuitively, that asynchronous SGD, including both elastic averaging and gossiping, converges faster at fewer nodes, whereas synchronous SGD scales better to more nodes (up to about 100 nodes).
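To make the sync/async distinction concrete, here is a toy simulation on a 1-D quadratic (not the paper's experimental setup): synchronous SGD averages all workers' gradients behind a barrier each step, while the asynchronous variant applies each worker's possibly stale gradient immediately.

```python
# A toy illustration of the trade-off the paper measures: synchronous SGD
# averages all workers' gradients behind a barrier; asynchronous SGD applies
# each worker's possibly stale gradient immediately.
import numpy as np

def grad(w):
    return w  # gradient of f(w) = 0.5 * w**2

def sync_sgd(w, workers=4, steps=100, lr=0.1):
    for _ in range(steps):
        g = np.mean([grad(w) for _ in range(workers)])  # barrier: average gradients
        w -= lr * g
    return w

def async_sgd(w, workers=4, steps=100, lr=0.1):
    stale = [w] * workers          # parameters each worker last read
    for t in range(steps):
        k = t % workers            # workers update one at a time, out of sync
        w -= lr * grad(stale[k])   # gradient computed at a stale point
        stale[k] = w               # worker re-reads the current parameters
    return w

print(sync_sgd(5.0), async_sgd(5.0))  # both converge toward 0 on this toy problem
```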
Journal ArticleDOI
Recent advances in efficient computation of deep convolutional neural networks
TL;DR: A comprehensive survey of recent advances in network acceleration, compression, and accelerator design from both algorithm and hardware points of view is provided in this paper, where the authors provide a thorough analysis of each of the following topics: network pruning, low-rank approximation, network quantization, teacher-student networks, compact network design, and hardware accelerators.
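As one concrete instance of the techniques the survey covers, here is a hedged sketch of magnitude-based weight pruning: weights below a percentile threshold are zeroed via a binary mask. The sparsity level and layer shape are illustrative.

```python
# A hedged sketch of magnitude-based network pruning, one technique covered
# by the survey. Weights below a percentile threshold are zeroed via a mask.
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero the smallest-magnitude weights so roughly `sparsity` of them are zero."""
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256))                # stand-in for a trained layer's weights
w_pruned, mask = magnitude_prune(w, 0.9)
print(f"nonzero fraction: {mask.mean():.2f}")  # ~0.10
```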
References
Proceedings ArticleDOI
Deep Residual Learning for Image Recognition
TL;DR: The authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously; the resulting networks won 1st place on the ILSVRC 2015 classification task.
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: The authors trained a large, deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax, achieving state-of-the-art ImageNet classification results.
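The described layout is straightforward to write down. Below is a compact PyTorch sketch of an AlexNet-style network, assuming 3x227x227 inputs and omitting details such as local response normalization, dropout, and the original two-GPU split.

```python
# A compact sketch of the layout described above: five conv layers, some
# followed by max pooling, then three fully connected layers ending in
# 1000-way logits. Expects 3x227x227 inputs.
import torch.nn as nn

alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),  # logits; the softmax is applied in the loss
)
```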
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting, using an architecture with very small (3x3) convolution filters, and shows that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
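The paper's key design choice, stacking small 3x3 convolutions, is easy to sketch: two stacked 3x3 layers cover a 5x5 receptive field with fewer parameters than a single 5x5 layer (2·9·C² vs. 25·C² weights for C channels). A minimal PyTorch sketch, with illustrative channel counts:

```python
# A minimal sketch of a VGG-style stage: several 3x3 conv + ReLU layers
# followed by 2x2 max pooling. Channel counts are illustrative.
import torch.nn as nn

def vgg_block(in_ch, out_ch, num_convs):
    layers = []
    for _ in range(num_convs):
        layers += [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU()]
        in_ch = out_ch
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))  # halve spatial resolution
    return nn.Sequential(*layers)

# e.g., the first two stages of a VGG-16-style configuration:
stage1 = vgg_block(3, 64, num_convs=2)
stage2 = vgg_block(64, 128, num_convs=2)
```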
Proceedings ArticleDOI
ImageNet: A large-scale hierarchical image database
TL;DR: A new database called “ImageNet” is introduced: a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity, and far more accurate, than existing image datasets.