Open Access · Posted Content

XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks

TLDR
XNOR-Nets, as discussed by the authors, approximate convolutions using primarily binary operations, which results in 58x faster convolutional operations and 32x memory savings; they outperform BinaryConnect and BinaryNets by large margins on ImageNet.
Abstract
We propose two efficient approximations to standard convolutional neural networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks, the filters are approximated with binary values resulting in 32x memory saving. In XNOR-Networks, both the filters and the input to convolutional layers are binary. XNOR-Networks approximate convolutions using primarily binary operations. This results in 58x faster convolutional operations and 32x memory savings. XNOR-Nets offer the possibility of running state-of-the-art networks on CPUs (rather than GPUs) in real-time. Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. We evaluate our approach on the ImageNet classification task. The classification accuracy with a Binary-Weight-Network version of AlexNet is only 2.9% less than the full-precision AlexNet (in top-1 measure). We compare our method with recent network binarization methods, BinaryConnect and BinaryNets, and outperform these methods by large margins on ImageNet, more than 16% in top-1 accuracy.
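
For a concrete sense of the two approximations, here is a minimal NumPy sketch (the helper names binarize_weights and xnor_popcount_dot are illustrative, not from the paper's released code): a real-valued filter W is replaced by its sign matrix B and a single scaling factor alpha = mean(|W|), and a dot product between {-1, +1} vectors reduces to an XNOR followed by a popcount.

```python
import numpy as np

def binarize_weights(W):
    """Approximate a real-valued filter W as alpha * B with B in {-1, +1}.

    B = sign(W) and alpha = mean(|W|), which minimizes ||W - alpha * B||^2
    over binary B and scalar alpha.
    """
    B = np.sign(W)
    B[B == 0] = 1.0              # keep B strictly in {-1, +1}
    alpha = np.mean(np.abs(W))   # optimal scaling factor
    return alpha, B

def xnor_popcount_dot(x_bin, w_bin):
    """Dot product of two {-1, +1} vectors via XNOR + popcount.

    Encoding -1 -> 0 and +1 -> 1, XNOR counts the positions where the
    signs agree; the signed dot product is 2 * matches - length.
    """
    x = x_bin > 0
    w = w_bin > 0
    matches = np.count_nonzero(~(x ^ w))   # XNOR, then popcount
    return 2 * matches - x_bin.size

rng = np.random.default_rng(0)
W = rng.standard_normal(27)                # e.g. a flattened 3x3x3 filter
alpha, B = binarize_weights(W)
x = np.sign(rng.standard_normal(27))       # binary input vector
assert xnor_popcount_dot(x, B) == int(x @ B)
print("approximate filter response:", alpha * xnor_popcount_dot(x, B))
```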


Citations
Posted Content

One Weight Bitwidth to Rule Them All.

TL;DR: This work takes a first step toward understanding whether some weight bitwidths are better than others by aligning networks to the same model size with a width multiplier, and shows that a single bitwidth for the whole network can achieve better accuracy than mixed-precision quantization targeting zero accuracy degradation.
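
A rough illustration of the size-aligned comparison described above, assuming (as is typical for convolutional and fully-connected layers) that parameter count grows roughly quadratically with a width multiplier; the helper name is hypothetical:

```python
# Rough sketch only: keeps total weight storage (parameters x bits) roughly
# constant when quantizing from 32-bit weights down to `bitwidth`.
def width_multiplier_for(bitwidth, baseline_bits=32.0):
    return (baseline_bits / bitwidth) ** 0.5

for b in (1, 2, 4, 8, 32):
    print(f"{b}-bit weights -> width multiplier x{width_multiplier_for(b):.2f}")
```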
Proceedings ArticleDOI

Processing-In-Memory Acceleration of Convolutional Neural Networks for Energy-Efficiency, and Power-Intermittency Resilience

TL;DR: A bit-wise convolutional neural network in-memory accelerator is implemented using Spin-Orbit Torque Magnetic Random Access Memory (SOT-MRAM) computational sub-arrays; it uses a novel AND-accumulation method that significantly reduces energy consumption within convolutional layers.
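
As a software analogue only (not the in-memory SOT-MRAM circuit), bit-wise AND-accumulation can be sketched as follows: with activations and weights encoded as {0, 1} bits packed into integers, a multiply-accumulate reduces to a bitwise AND followed by a popcount.

```python
def and_accumulate(packed_x, packed_w):
    # Bitwise AND marks positions where both operand bits are 1;
    # counting those set bits performs the accumulation.
    return bin(packed_x & packed_w).count("1")

print(and_accumulate(0b1011_0110, 0b1101_0101))  # -> 3
```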
Posted Content

NITI: Training Integer Neural Networks Using Integer-only Arithmetic.

TL;DR: NITI is a neural network training framework that exclusively uses low-bitwidth integer arithmetic and achieves accuracy similar to state-of-the-art integer training frameworks without relying on full-precision floating-point first and last layers.
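
A minimal sketch (not the NITI implementation) of the kind of integer-only arithmetic such a framework builds on: int8 operands, exact int32 accumulation, and a power-of-two right shift in place of floating-point rescaling.

```python
import numpy as np

def int8_matmul_rescale(a_int8, b_int8, shift):
    # Accumulate exactly in int32, then rescale by 2**-shift and saturate to int8.
    acc = a_int8.astype(np.int32) @ b_int8.astype(np.int32)
    return np.clip(acc >> shift, -128, 127).astype(np.int8)

rng = np.random.default_rng(0)
a = rng.integers(-128, 128, size=(4, 16), dtype=np.int8)
b = rng.integers(-128, 128, size=(16, 8), dtype=np.int8)
print(int8_matmul_rescale(a, b, shift=7))
```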
Proceedings ArticleDOI

OrthrusPE: runtime reconfigurable processing elements for binary neural networks

TL;DR: This paper exploits DSP48 blocks on off-the-shelf FPGAs to compute binary Hadamard products and fixed-point arithmetic, thereby utilizing the same hardware resource for two distinct, critical modes of operation.
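
A software analogue of the binary Hadamard product mentioned above (not the DSP48 mapping itself): with {-1, +1} values encoded as {0, 1} bits packed into a word, the element-wise product is a single XNOR, whose set bits mark the positions where the two operands agree.

```python
def binary_hadamard(packed_x, packed_w, width=48):
    # XNOR over a `width`-bit word; masked so Python's unbounded ints stay finite.
    mask = (1 << width) - 1
    return ~(packed_x ^ packed_w) & mask

print(bin(binary_hadamard(0b1010, 0b1001, width=4)))  # 0b1100: the top two positions agree
```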
Journal ArticleDOI

Mononizing Binocular Videos

TL;DR: This paper presents the idea of mononizing binocular videos and a framework to effectively realize it. The method formulates an encoding-and-decoding framework with a pyramidal deformable fusion module to exploit long-range correspondences between the left and right views, a quantization layer to suppress restoration artifacts, and a compression-noise simulation module to resist the compression noise introduced by modern video codecs.
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework to ease the training of networks that are substantially deeper than those used previously; networks built on this framework won first place in the ILSVRC 2015 classification task.
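
A minimal PyTorch sketch of the residual-learning idea (learn a residual F(x) and add it back through an identity shortcut, y = F(x) + x); the layer sizes here are illustrative rather than a full ResNet configuration.

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)  # identity shortcut: the block only learns the residual

x = torch.randn(1, 64, 56, 56)
print(BasicResidualBlock(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```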
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
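
The update rule summarized above can be written compactly; below is a plain NumPy sketch of a single Adam step (biased first- and second-moment estimates with bias correction), not the authors' reference code.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad          # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment (uncentered variance) estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction for the zero initialization
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Example: minimize f(theta) = theta**2, whose gradient is 2 * theta.
theta, m, v = np.array([2.0]), np.zeros(1), np.zeros(1)
for t in range(1, 501):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t, lr=0.1)
print(theta)  # has moved from 2.0 toward the minimizer at 0
```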
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: The authors train a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax, achieving state-of-the-art performance on ImageNet classification.
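
A PyTorch sketch of the architecture summarized above: five convolutional layers, some followed by max pooling, then three fully-connected layers ending in a 1000-way classifier (the softmax is typically folded into the loss). The hyperparameters follow the commonly cited single-stream variant and should be read as illustrative.

```python
import torch.nn as nn

# Expects 3x224x224 inputs.
alexnet_like = nn.Sequential(
    nn.Conv2d(3, 64, 11, stride=4, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(64, 192, 5, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(192, 384, 3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, 3, padding=1), nn.ReLU(),
    nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),  # 1000-way logits
)
```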
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting using an architecture with very small (3x3) convolution filters, and shows that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
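
A sketch of the VGG-style design pattern described above: stacks of 3x3 convolutions separated by 2x2 max pooling; the channel progression is the commonly cited one and is shown for illustration.

```python
import torch.nn as nn

def vgg_block(in_ch, out_ch, n_convs):
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1), nn.ReLU()]
    layers.append(nn.MaxPool2d(2, 2))
    return nn.Sequential(*layers)

features = nn.Sequential(
    vgg_block(3, 64, 2), vgg_block(64, 128, 2),
    vgg_block(128, 256, 3), vgg_block(256, 512, 3), vgg_block(512, 512, 3),
)  # 13 conv layers; with three fully-connected layers this matches the 16-layer configuration
```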