Open Access Journal Article DOI

Committee machines-a universal method to deal with non-idealities in memristor-based neural networks.

TLDR
A technology-agnostic approach, committee machines, is demonstrated, which increases the inference accuracy of memristive neural networks that suffer from device variability, faulty devices, random telegraph noise and line resistance.
Abstract
Artificial neural networks are notoriously power- and time-consuming when implemented on conventional von Neumann computing systems. Consequently, recent years have seen an emergence of research in machine learning hardware that strives to bring memory and computing closer together. A popular approach is to realise artificial neural networks in hardware by implementing their synaptic weights using memristive devices. However, various device- and system-level non-idealities usually prevent these physical implementations from achieving high inference accuracy. We suggest applying a well-known concept in computer science—committee machines—in the context of memristor-based neural networks. Using simulations and experimental data from three different types of memristive devices, we show that committee machines employing ensemble averaging can successfully increase inference accuracy in physically implemented neural networks that suffer from faulty devices, device-to-device variability, random telegraph noise and line resistance. Importantly, we demonstrate that the accuracy can be improved even without increasing the total number of memristors.

Designing reliable and energy-efficient memristor-based artificial neural networks remains a challenge. Here, the authors demonstrate a technology-agnostic approach, committee machines, which increases the inference accuracy of memristive neural networks that suffer from device variability, faulty devices, random telegraph noise and line resistance.
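The ensemble-averaging idea at the core of committee machines can be sketched in a few lines (a minimal illustration, not the authors' implementation; the function names and toy outputs are invented):

```python
def ensemble_average(member_outputs):
    """Average the output vectors produced by the committee members."""
    n = len(member_outputs)
    k = len(member_outputs[0])
    return [sum(out[i] for out in member_outputs) / n for i in range(k)]

def committee_predict(member_outputs):
    """Classify using the averaged outputs of all committee members."""
    avg = ensemble_average(member_outputs)
    return max(range(len(avg)), key=avg.__getitem__)

# Three memristive copies of the same network classify one input.
# Device non-idealities make the second copy misrank the classes.
outputs = [[0.45, 0.55],   # member 1: correctly favours class 1
           [0.60, 0.40],   # member 2: faulty devices flip the ranking
           [0.30, 0.70]]   # member 3: correctly favours class 1
print(committee_predict(outputs))  # the committee still predicts class 1
```

Even though one member misclassifies, averaging the raw outputs recovers the correct prediction, which is the mechanism the paper exploits to tolerate device-level non-idealities without retraining.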



Citations
Journal Article DOI

Overview of Immune Response During SARS-CoV-2 Infection: Lessons From the Past.

TL;DR: This review highlights the similarities of the coronaviruses that caused SARS and MERS to the novel SARS-CoV-2 in terms of pathogenicity and immunogenicity, and focuses on treatment strategies that could be employed against COVID-19.
Journal Article DOI

Molecular and Immunological Diagnostic Tests of COVID-19: Current Status and Challenges.

TL;DR: This review focuses on a broad description of currently available diagnostic tests to detect either the virus (SARS-CoV-2) or virus-induced immune responses and lays out the shortcomings of certain tests and future needs.
Journal Article DOI

The interplay between inflammatory pathways and COVID-19: A critical review on pathogenesis and therapeutic options.

TL;DR: This review details the mechanism by which SARS-CoV-2 induces an inflammatory storm, its connection with pre-existing inflammatory conditions, and possible treatment options for coping with the severe clinical manifestations of COVID-19.
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieved state-of-the-art performance on ImageNet classification.
Proceedings Article

Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding

TL;DR: Deep Compression reduces the storage requirement of neural networks by 35x to 49x without affecting their accuracy, using a three-stage pipeline: pruning, trained quantization, and Huffman coding.
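The weight-sharing step of the trained-quantization stage can be sketched with one-dimensional k-means (a hedged sketch of the general idea, not the paper's code; the function name and toy weights are invented):

```python
def quantize_weights(weights, centroids, steps=10):
    """Weight sharing: run 1-D k-means on the weights, then replace
    every weight with its nearest centroid, so only len(centroids)
    distinct values (plus a small index table) need to be stored."""
    cs = list(centroids)
    for _ in range(steps):
        # Assign each weight to its nearest centroid.
        buckets = [[] for _ in cs]
        for w in weights:
            j = min(range(len(cs)), key=lambda i: abs(w - cs[i]))
            buckets[j].append(w)
        # Move each centroid to the mean of its assigned weights.
        cs = [sum(b) / len(b) if b else c for b, c in zip(buckets, cs)]
    return [min(cs, key=lambda c: abs(w - c)) for w in weights]

weights = [0.10, 0.12, 0.90, 0.88, -0.50]
shared = quantize_weights(weights, centroids=[-0.5, 0.1, 0.9])
print(sorted(set(shared)))  # only three distinct values remain
```

After this step only the centroid values and per-weight indices are stored, which is what makes the subsequent Huffman-coding stage effective.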
Journal Article DOI

Training and operation of an integrated neuromorphic network based on metal-oxide memristors

TL;DR: This work experimentally implements transistor-free metal-oxide memristor crossbars, with device variability sufficiently low to allow operation of an integrated neural network, in a simple setting: a single-layer perceptron (an algorithm for linear classification).
Posted Content

Energy and Policy Considerations for Deep Learning in NLP

TL;DR: This paper quantifies the approximate financial and environmental costs of training a variety of recently successful neural network models for NLP and proposes actionable recommendations to reduce costs and improve equity in NLP research and practice.