Open Access · Proceedings Article

Implementing Binarized Neural Networks with Magnetoresistive RAM without Error Correction

TLDR
For BNNs, ST-MRAMs can be programmed with weak (low-energy) programming conditions, without error-correcting codes; this allows the use of low-energy and low-area ST-MRAM cells, and the energy savings at the system level can reach a factor of two.
Abstract
One of the most exciting applications of Spin Torque Magnetoresistive Random Access Memory (ST-MRAM) is the in-memory implementation of deep neural networks, which could improve the energy efficiency of Artificial Intelligence by orders of magnitude compared with implementations on conventional computers and graphics cards. In particular, ST-MRAM could be ideal for implementing Binarized Neural Networks (BNNs), a type of deep neural network introduced in 2016 that can achieve state-of-the-art performance with a greatly reduced memory footprint compared with conventional artificial intelligence approaches. The challenge of ST-MRAM, however, is that it is prone to write errors and usually requires error correction. In this work, we show that BNNs can tolerate these bit errors to an outstanding degree, based on examples of image recognition tasks (MNIST, CIFAR-10 and ImageNet): ST-MRAM bit error rates of up to 0.1% have little impact on recognition accuracy. The requirements on ST-MRAM are therefore considerably relaxed for BNNs compared with traditional applications. As a consequence, we show that for BNNs, ST-MRAMs can be programmed with weak (low-energy) programming conditions, without error-correcting codes. We show that this result allows the use of low-energy and low-area ST-MRAM cells, and that the energy savings at the system level can reach a factor of two.
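Error-tolerance experiments of this kind are typically simulated by flipping each stored binarized weight independently with the target bit error rate and re-evaluating the network. Below is a minimal NumPy sketch of that setup; the helper name `write_with_errors` and the symmetric, independent flip model are illustrative assumptions, not the authors' exact protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

def write_with_errors(binary_weights, bit_error_rate=1e-3):
    # Hypothetical helper: each stored +1/-1 weight is flipped independently
    # with probability `bit_error_rate`, mimicking an ST-MRAM write error
    # left uncorrected (no error-correcting code).
    flips = rng.random(binary_weights.shape) < bit_error_rate
    return np.where(flips, -binary_weights, binary_weights)

# At a 0.1% bit error rate, roughly one weight in a thousand ends up flipped;
# a BNN evaluated with `w_stored` can then be compared against the
# error-free baseline to measure the impact on recognition accuracy.
w = np.where(rng.standard_normal((512, 512)) >= 0, 1.0, -1.0)
w_stored = write_with_errors(w, bit_error_rate=1e-3)
print("fraction flipped:", float(np.mean(w != w_stored)))
```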


Citations
Journal Article

FeFET-Based Binarized Neural Networks Under Temperature-Dependent Bit Errors

TL;DR: In this paper, a temperature-dependent bit error model of Ferroelectric FET (FeFET) memories is presented, and the effect of temperature on BNN accuracy is evaluated.
Posted Content

Methodology for Realizing VMM with Binary RRAM Arrays: Experimental Demonstration of Binarized-ADALINE Using OxRAM Crossbar

TL;DR: An efficient hardware mapping methodology for realizing vector matrix multiplication (VMM) on resistive memory (RRAM) arrays is presented, and a binarized-ADALINE (Adaptive Linear Neuron) classifier is experimentally demonstrated on an OxRAM crossbar.
Posted Content

Towards Explainable Bit Error Tolerance of Resistive RAM-Based Binarized Neural Networks

TL;DR: A straight-through gradient approximation is proposed to improve weight-sign-flip training, so that BNNs need to adapt less to the bit error rates, and a metric is defined that aims to measure bit error tolerance (BET) without fault injection; this metric correlates with accuracy over error rate for all FCNNs tested.
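The straight-through gradient approximation referred to here is the standard trick for training networks with binarized weights: the forward pass uses the sign of a latent real-valued weight, while the backward pass treats binarization as a (clipped) identity. A minimal NumPy sketch of that idea follows; the function names and clipping threshold are illustrative assumptions, not this paper's exact variant.

```python
import numpy as np

def binarize_forward(w):
    # Forward pass: the latent real-valued weight is replaced by its sign (+1/-1).
    return np.where(w >= 0, 1.0, -1.0)

def binarize_backward(grad_out, w, clip=1.0):
    # Straight-through estimator: the gradient is passed through unchanged,
    # except where the latent weight has saturated beyond +/-clip.
    return grad_out * (np.abs(w) <= clip)
```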
Proceedings Article

Approximate computation based on NAND-SPIN MRAM for CNN on-chip training

TL;DR: In this article, the stochastic switching mechanism of NAND-SPIN MRAM is utilized to perform approximate update and storage of the synaptic weights; more than 67% speedup and nearly 70% energy saving are achieved with less than 1% accuracy loss.
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network, consisting of five convolutional layers (some followed by max-pooling layers) and three fully-connected layers with a final 1000-way softmax, achieved state-of-the-art classification performance on ImageNet.
Journal Article

Deep learning

TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Book Chapter

XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks

TL;DR: The Binary-Weight-Network version of AlexNet is compared with recent network binarization methods, BinaryConnect and BinaryNets, and outperforms these methods by large margins on ImageNet, more than 16% in top-1 accuracy.
Posted Content

Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1

TL;DR: A binary matrix multiplication GPU kernel is written with which it is possible to run the MNIST BNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy.
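The binary matrix multiplication such a kernel relies on replaces multiply-accumulate with XNOR and popcount: for vectors whose entries are +1 or -1, the dot product equals twice the number of agreeing positions minus the vector length. The following NumPy sketch illustrates that identity (illustrative only; the kernel described in the paper is an optimized GPU implementation operating on bit-packed words).

```python
import numpy as np

def xnor_popcount_dot(a, b):
    # Encode +1 as True and -1 as False; XNOR (here, equality) marks the
    # positions where the two vectors agree, and popcount counts them.
    matches = np.count_nonzero((a > 0) == (b > 0))
    # For +/-1 vectors: dot = (#agreements) - (#disagreements) = 2*matches - n.
    return 2 * matches - a.size

# Quick check against the ordinary dot product on random +/-1 vectors.
rng = np.random.default_rng(0)
a = rng.choice([-1, 1], size=1024)
b = rng.choice([-1, 1], size=1024)
assert xnor_popcount_dot(a, b) == int(a @ b)
```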
Posted Content

XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks

TL;DR: XNOR-Nets approximate convolutions using primarily binary operations, resulting in 58x faster convolutional operations and 32x memory savings, and outperform BinaryConnect and BinaryNets by large margins on ImageNet.