Open Access · Posted Content
XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks
TL;DR
XNOR-Nets, as proposed by the authors, approximate convolutions using primarily binary operations, yielding 58x faster convolutional operations and 32x memory savings, and outperform BinaryConnect and BinaryNets by large margins on ImageNet.
Abstract:
We propose two efficient approximations to standard convolutional neural networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks, the filters are approximated with binary values resulting in 32x memory saving. In XNOR-Networks, both the filters and the input to convolutional layers are binary. XNOR-Networks approximate convolutions using primarily binary operations. This results in 58x faster convolutional operations and 32x memory savings. XNOR-Nets offer the possibility of running state-of-the-art networks on CPUs (rather than GPUs) in real-time. Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. We evaluate our approach on the ImageNet classification task. The classification accuracy with a Binary-Weight-Network version of AlexNet is only 2.9% less than the full-precision AlexNet (in top-1 measure). We compare our method with recent network binarization methods, BinaryConnect and BinaryNets, and outperform these methods by large margins on ImageNet, more than 16% in top-1 accuracy.
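To make the binary-weight idea concrete, here is a minimal sketch, assuming the W ≈ αB formulation with B = sign(W) and α the mean absolute weight per filter as described in the paper; the function name, shapes, and NumPy implementation are illustrative, not the authors' code.

```python
import numpy as np

def binarize_filter(W):
    """Approximate a real-valued filter bank W with alpha * sign(W).

    W: real-valued weights, e.g. shape (out_ch, in_ch, k, k).
    Returns (B, alpha): B holds values in {-1, +1}, alpha is one
    scaling factor per output filter (its mean absolute weight).
    """
    out_ch = W.shape[0]
    flat = W.reshape(out_ch, -1)
    alpha = np.abs(flat).mean(axis=1)            # one alpha per filter
    B = np.where(W >= 0, 1.0, -1.0)              # sign(W), mapping 0 -> +1
    return B, alpha

# Example: reconstruct an approximate filter bank and inspect the error.
W = np.random.randn(64, 3, 3, 3).astype(np.float32)
B, alpha = binarize_filter(W)
W_approx = alpha[:, None, None, None] * B        # W ≈ alpha * B
print(np.mean(np.abs(W - W_approx)))             # mean approximation error
```

Storing B at one bit per weight plus a single float α per filter is what yields the roughly 32x memory saving over 32-bit floating-point weights.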
Citations
Journal Article · DOI
Fast and Accurate Inference on Microcontrollers With Boosted Cooperative Convolutional Neural Networks (BC-Net)
Luca Mocerino, Andrea Calimera +1 more
TL;DR: Experiments on four CNN benchmarks deployed on off-the-shelf boards powered by ARM Cortex-M MCUs show that BC-Nets outperform classical quantization and binarization when those are applied as separate techniques.
Posted Content
Single Shot Structured Pruning Before Training
TL;DR: This work develops a methodology that applies structured pruning before training, removing entire channels and hidden units with the explicit aim of speeding up both training and inference in deep neural networks.
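As a rough sketch of what structured pruning before training can look like, the snippet below removes whole output channels from a randomly initialized convolution layer using a simple L1-norm saliency; the criterion, function name, and shapes are placeholders for illustration, not the method proposed in the cited paper.

```python
import numpy as np

def prune_channels_before_training(W, keep_ratio=0.5):
    """Drop whole output channels of a conv weight tensor before any training.

    W: randomly initialized weights, shape (out_ch, in_ch, k, k).
    keep_ratio: fraction of output channels to keep.
    Saliency here is the per-channel L1 norm -- a placeholder criterion,
    not necessarily the one used in the cited paper.
    """
    out_ch = W.shape[0]
    saliency = np.abs(W).reshape(out_ch, -1).sum(axis=1)
    n_keep = max(1, int(round(keep_ratio * out_ch)))
    keep = np.sort(np.argsort(saliency)[-n_keep:])
    # `keep` also indexes the input channels of the following layer.
    return W[keep], keep

W0 = np.random.randn(64, 32, 3, 3).astype(np.float32)
W0_pruned, kept = prune_channels_before_training(W0, keep_ratio=0.25)
print(W0_pruned.shape)   # (16, 32, 3, 3): a smaller layer trained from scratch
```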
Journal Article · DOI
Merged Logic and Memory Fabrics for Accelerating Machine Learning Workloads
TL;DR: This article presents a tutorial on new computing architectures, circuit techniques, and multiple promising device technologies for in-memory computing targeting ML workloads.
Posted Content
Permute, Quantize, and Fine-tune: Efficient Compression of Neural Networks
Julieta Martinez, Jashan Shewakramani, Ting Wei Liu, Ioan Andrei Bârsan, Wenyuan Zeng, Raquel Urtasun +5 more
TL;DR: The weights of two adjacent layers can be permuted while expressing the same function; the work establishes a connection to rate-distortion theory and relies on an annealed quantization algorithm to better compress the network and achieve higher final accuracy.
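The permutation claim is easy to check numerically; the toy sketch below permutes the hidden units shared by two fully connected layers and verifies that the output is unchanged (the paper exploits this freedom to make the weights easier to quantize, which the sketch does not show).

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 16))          # batch of inputs
W1 = rng.standard_normal((16, 32))        # layer 1: 16 -> 32
W2 = rng.standard_normal((32, 8))         # layer 2: 32 -> 8

perm = rng.permutation(32)                # permute layer 1's output units
W1_p = W1[:, perm]                        # reorder layer 1's columns
W2_p = W2[perm, :]                        # reorder layer 2's rows to match

y = np.maximum(x @ W1, 0) @ W2            # ReLU in between
y_p = np.maximum(x @ W1_p, 0) @ W2_p      # same function, permuted weights
print(np.allclose(y, y_p))                # True
```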
Journal Article · DOI
A ReRAM-Based Computing-in-Memory Convolutional-Macro With Customized 2T2R Bit-Cell for AIoT Chip IP Applications
Fei Tan, Yiming Wang, Yiming Yang, Liran Li, Tian Wang, Feng Zhang, Xinghua Wang, Jianfeng Gao, Yongpan Liu +8 more
TL;DR: This brief customizes a bit-cell consisting of 2T2R ReRAM cells as one unit to achieve high hardware compute accuracy, fast read/compute speed, and low power consumption, and develops a complete computing-in-memory (CIM) convolutional macro based on a ReRAM array.
References
Proceedings Article · DOI
Deep Residual Learning for Image Recognition
TL;DR: In this paper, the authors propose a residual learning framework to ease the training of networks that are substantially deeper than those used previously; networks built on this framework won 1st place in the ILSVRC 2015 classification task.
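A minimal sketch of the residual idea, assuming the generic y = F(x) + x form with an identity shortcut; layer shapes and names here are arbitrary and do not reproduce the paper's architecture.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

def residual_block(x, W1, W2):
    """y = F(x) + x: two linear transforms with a ReLU in between, plus an
    identity shortcut. The shortcut lets the block default to the identity
    when F is near zero, which is what eases optimization of very deep nets."""
    return relu(x @ W1) @ W2 + x

x = np.random.randn(4, 64)
W1 = 0.01 * np.random.randn(64, 64)
W2 = 0.01 * np.random.randn(64, 64)
print(residual_block(x, W1, W2).shape)    # (4, 64), near-identity at init
```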
Proceedings Article
Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba
TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions based on adaptive estimates of lower-order moments, and provides a regret bound whose convergence rate is comparable to the best known results under the online convex optimization framework.
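For reference, a self-contained sketch of the Adam update using the standard exponential moving averages of the first and second moments with bias correction; the hyperparameter values are the commonly used defaults, and the toy objective is illustrative.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: moment estimates with bias correction, then a scaled step."""
    m = beta1 * m + (1 - beta1) * grad            # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad ** 2       # second moment (uncentered variance)
    m_hat = m / (1 - beta1 ** t)                  # bias-corrected estimates
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy example: minimize f(theta) = theta^2, so grad = 2 * theta.
theta, m, v = np.array(5.0), 0.0, 0.0
for t in range(1, 2001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t, lr=0.05)
print(theta)   # close to 0
```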
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: The authors achieve state-of-the-art ImageNet classification performance with a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
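A schematic of that architecture, sketched with standard PyTorch layers and assuming the commonly cited filter counts; local response normalization, dropout, and exact padding details are omitted, so this is an approximation of the described network rather than a faithful reimplementation.

```python
import torch.nn as nn

# Five conv layers, interleaved max-pooling, three fully connected layers,
# and 1000 output logits for the final softmax. Expects 3x227x227 inputs.
alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(), nn.MaxPool2d(3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),   # logits for the 1000-way softmax
)
```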
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.