Author

Mamathamba Kalishettyhalli Mahadevaiah

Bio: Mamathamba Kalishettyhalli Mahadevaiah is an academic researcher at the Leibniz Institute for Neurobiology. The author has contributed to research on Neuromorphic engineering and Resistive random-access memory, has an h-index of 6, and has co-authored 20 publications receiving 155 citations. Previous affiliations of Mamathamba Kalishettyhalli Mahadevaiah include Innovations for High Performance Microelectronics.

Papers
Journal ArticleDOI
TL;DR: The material, device, and architecture aspects of resistive switching memory (RRAM) devices are studied for implementing a 2-layer neural network for pattern recognition, supporting material-based development of RRAM synapses for neural networks with high accuracy and low power consumption.
Abstract: Training and recognition with neural networks generally require high throughput, high energy efficiency, and scalable circuits to enable artificial intelligence tasks to be operated at the edge, i.e., in battery-powered portable devices and other limited-energy environments. In this scenario, scalable resistive memories have been proposed as artificial synapses thanks to their scalability, reconfigurability, and high energy efficiency, and thanks to the ability to perform analog computation by physical laws in hardware. In this work, we study the material, device, and architecture aspects of resistive switching memory (RRAM) devices for implementing a 2-layer neural network for pattern recognition. First, various RRAM processes are screened in view of the device window, analog storage, and reliability. Then, synaptic weights are stored with 5-level precision in a 4 kbit array of RRAM devices to classify the Modified National Institute of Standards and Technology (MNIST) dataset. Finally, classification performance of a 2-layer neural network is tested before and after an annealing experiment by using experimental values of conductance stored into the array, and a simulation-based analysis of inference accuracy for arrays of increasing size is presented. Our work supports material-based development of RRAM synapses for novel neural networks with high accuracy and low power consumption.
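The 5-level weight storage described above can be sketched in NumPy. This is a minimal illustration, not the authors' code: the layer shapes and the `quantize_to_levels` helper are assumptions made for the example.

```python
import numpy as np

def quantize_to_levels(weights, n_levels=5):
    """Snap continuous weights onto n_levels evenly spaced values,
    mirroring the 5-level conductance storage in the 4-kbit array."""
    levels = np.linspace(weights.min(), weights.max(), n_levels)
    idx = np.abs(weights[..., None] - levels).argmin(axis=-1)
    return levels[idx]

# toy 2-layer fully connected network for 28x28 MNIST-like inputs
rng = np.random.default_rng(0)
w1 = rng.normal(size=(784, 128))
w2 = rng.normal(size=(128, 10))
w1_q, w2_q = quantize_to_levels(w1), quantize_to_levels(w2)

def infer(x, w1, w2):
    h = np.maximum(x @ w1, 0.0)      # ReLU hidden layer
    return (h @ w2).argmax(axis=-1)  # predicted class per input

preds = infer(rng.normal(size=(2, 784)), w1_q, w2_q)
```

Comparing `infer` with the full-precision and quantized weights gives a first estimate of the accuracy cost of coarse conductance levels.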

123 citations

Journal ArticleDOI
TL;DR: In this article, a detailed study of the multilevel-cell (MLC) programming of RRAM arrays for neural network applications is presented, where the authors compare three MLC programming schemes and discuss their variations in terms of the different slopes in the programming characteristics.
Abstract: Resistive switching memory (RRAM) is a promising technology for embedded memory and its application in computing. In particular, RRAM arrays can provide a convenient primitive for matrix–vector multiplication (MVM) with strong impact on the acceleration of neural networks for artificial intelligence (AI). At the same time, RRAM is affected by intrinsic conductance variations, which might cause degradation of accuracy in AI inference hardware. This work provides a detailed study of the multilevel-cell (MLC) programming of RRAM for neural network applications. We compare three MLC programming schemes and discuss their variations in terms of the different slopes in the programming characteristics. We test the accuracy of a two-layer fully connected neural network (FC-NN) as a function of the MLC scheme, the number of weight levels, and the weight mapping configuration. We find a tradeoff between the FC-NN accuracy, size, and current consumption. This work highlights the importance of a holistic approach to AI accelerators encompassing the device properties, the overall circuit performance, and the AI application specifications.
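The crossbar matrix-vector multiplication (MVM) and the accuracy-degrading conductance variation mentioned above can be sketched as follows; `sigma` and the toy voltages and conductances are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def crossbar_mvm(v_in, g_matrix, sigma=0.0, rng=None):
    """Analog MVM on an RRAM crossbar: row voltages times device
    conductances give summed column currents (Ohm's and Kirchhoff's
    laws). sigma models relative conductance variation."""
    rng = rng or np.random.default_rng()
    g_actual = g_matrix * (1.0 + sigma * rng.standard_normal(g_matrix.shape))
    return v_in @ g_actual  # column currents

v = np.array([0.1, 0.2, 0.3])   # read voltages applied to the rows (V)
g = np.full((3, 4), 1e-4)       # programmed conductances (S)
i_ideal = crossbar_mvm(v, g)                # sigma = 0: exact product
i_noisy = crossbar_mvm(v, g, sigma=0.05)    # device-variation case
```

Sweeping `sigma` against network accuracy reproduces, in miniature, the tradeoff study the paper carries out on real arrays.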

43 citations

Journal ArticleDOI
TL;DR: A multi-level variation of the state-of-the-art incremental step pulse with verify algorithm (M-ISPVA) is introduced to improve the stability of the low-resistive-state levels, combining current compliance control with program/verify paradigms for the first time.
Abstract: Achieving a reliable multi-level operation in resistive random access memory (RRAM) arrays is currently a challenging task due to several threats like the post-algorithm instability occurring after the levels placement, the limited endurance, and the poor data retention capabilities at high temperature. In this paper, we introduced a multi-level variation of the state-of-the-art incremental step pulse with verify algorithm (M-ISPVA) to improve the stability of the low resistive state levels. This algorithm introduces for the first time the proper combination of current compliance control and program/verify paradigms. The validation of the algorithm for forming and set operations has been performed on 4-kbit RRAM arrays. In addition, we assessed the endurance and the high temperature multi-level retention capabilities after the algorithm application proving a 1 k switching cycles stability and a ten years retention target with temperatures below 100 °C.
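A program-and-verify loop in the spirit of the (M-)ISPVA scheme can be sketched as below; `ToyCell` and all device parameters are hypothetical stand-ins, not the actual array interface or the published pulse conditions.

```python
class ToyCell:
    """Hypothetical monotone device: each pulse raises the read
    current, capped by the compliance current."""
    def __init__(self):
        self.i = 0.0
    def pulse(self, v, i_cc):
        self.i = min(self.i + 5.0 * v, i_cc)
    def read(self):
        return self.i

def ispva_program(cell, target_current, i_compliance,
                  v_start=0.5, v_step=0.1, v_max=3.0):
    """Incremental step pulse with verify: apply pulses of increasing
    amplitude under a fixed compliance current until a read verify
    confirms the target level (or the voltage budget is exhausted)."""
    v = v_start
    while v <= v_max:
        cell.pulse(v, i_compliance)        # programming pulse
        if cell.read() >= target_current:  # verify step
            return True, v
        v += v_step
    return False, v_max

ok, v_final = ispva_program(ToyCell(), target_current=10.0, i_compliance=50.0)
```

In the multi-level variant, the loop is repeated with different compliance currents and verify targets, one per resistance level.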

42 citations

Journal ArticleDOI
TL;DR: Memristive devices serving as non-volatile resistive memory are employed to emulate the plastic behaviour of biological synapses; the probabilistic switching of fully CMOS-integrated binary RRAM devices is exploited to emulate stochastic plasticity.
Abstract: Biological neural networks outperform current computer technology in terms of power consumption and computing speed while performing associative tasks, such as pattern recognition. The analogue and massive parallel in-memory computing in biology differs strongly from conventional transistor electronics that rely on the von Neumann architecture. Therefore, novel bio-inspired computing architectures have been attracting a lot of attention in the field of neuromorphic computing. Here, memristive devices, which serve as non-volatile resistive memory, are employed to emulate the plastic behaviour of biological synapses. In particular, CMOS integrated resistive random access memory (RRAM) devices are promising candidates to extend conventional CMOS technology to neuromorphic systems. However, dealing with the inherent stochasticity of resistive switching can be challenging for network performance. In this work, the probabilistic switching is exploited to emulate stochastic plasticity with fully CMOS integrated binary RRAM devices. Two different RRAM technologies with different device variabilities are investigated in detail, and their potential applications in stochastic artificial neural networks (StochANNs) capable of solving MNIST pattern recognition tasks are examined. A mixed-signal implementation with hardware synapses and software neurons combined with numerical simulations shows that the proposed concept of stochastic computing is able to process analogue data with binary memory cells.
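The stochastic-plasticity idea above, i.e., encoding an analogue update magnitude in the switching probability of binary cells, can be sketched as follows; the array size and `p_switch` value are illustrative assumptions.

```python
import numpy as np

def stochastic_update(weights, p_switch, rng=None):
    """Toggle each binary synapse with probability p_switch: the
    analogue update magnitude is carried by the switching probability,
    not by the (binary) device state itself."""
    rng = rng or np.random.default_rng()
    flips = rng.random(weights.shape) < p_switch
    return np.where(flips, 1 - weights, weights)

rng = np.random.default_rng(42)
w = np.zeros(10_000, dtype=int)   # all cells start in the low state
w = stochastic_update(w, p_switch=0.3, rng=rng)
```

Averaged over many cells, the expected fraction of switched devices tracks `p_switch`, which is how a population of binary RRAM cells can represent analogue quantities.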

21 citations

Journal ArticleDOI
TL;DR: In this work, three different RRAM compact models implemented in Verilog-A are analyzed and evaluated in order to reproduce the multilevel approach based on the switching capability of experimental devices.
Abstract: In this work, three different RRAM compact models implemented in Verilog-A are analyzed and evaluated in order to reproduce the multilevel approach based on the switching capability of experimental devices. These models are integrated in 1T-1R cells to control their analog behavior by means of the compliance current imposed by the NMOS select transistor. Four different resistance levels are simulated and assessed with experimental verification to account for their multilevel capability. Further, an Artificial Neural Network study is carried out to evaluate in a real scenario the viability of the multilevel approach under study.
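The compliance-current control of the level placement described above can be sketched with the common empirical relation R_LRS ≈ V_set / I_C; the voltage and current values are illustrative assumptions, not fitted to the devices in the paper.

```python
def lrs_resistance(i_compliance, v_set=0.4):
    """Empirical sketch: the low-resistive-state resistance scales
    roughly as V_set / I_C, so the compliance current imposed by the
    NMOS select transistor picks the programmed level."""
    return v_set / i_compliance

# four multilevel targets from four compliance currents (A)
compliances = (50e-6, 100e-6, 150e-6, 200e-6)
levels = [lrs_resistance(i) for i in compliances]
```

Higher compliance currents form stronger filaments and hence lower resistances, which is the knob the 1T-1R simulations use to place the four levels.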

21 citations


Cited by
30 Oct 2011
TL;DR: In this article, the authors studied resistance switching in Au/HfO2 (10 nm)/(Pt, TiN) devices using different bias modes, i.e., a sweeping, a quasistatic and a static (constant voltage stress) mode.
Abstract: Resistance switching is studied in Au/HfO2 (10 nm)/(Pt, TiN) devices, where HfO2 is deposited by atomic layer deposition. The study is performed using different bias modes, i.e., a sweeping, a quasistatic and a static (constant voltage stress) mode. Instabilities are reported in several circumstances (change in bias polarity, modification of the bottom electrode, and increase in temperature). The constant voltage stress mode allows extracting parameters related to the switching kinetics. This mode also reveals random fluctuations between the ON and OFF states. The dynamics of resistance switching is discussed in the framework of a filamentary model which implies oxygen-vacancy diffusion. The RF properties of the ON and OFF states are also presented (impedance spectroscopy). © 2010 American Institute of Physics

112 citations

Journal ArticleDOI
01 Jul 2020
TL;DR: An overview of IMC in terms of memory devices and circuit architectures is provided, including typical architectures for neural network accelerators, content addressable memory (CAM), and novel circuit topologies for general‐purpose computing with low complexity.
Abstract: With the rise in artificial intelligence (AI), computing systems are facing new challenges related to the large amount of data and the increasing burden of communication between the memory and the processing unit. In‐memory computing (IMC) appears as a promising approach to suppress the memory bottleneck and enable higher parallelism of data processing, thanks to the memory array architecture. As a result, IMC shows a better throughput and lower energy consumption with respect to the conventional digital approach, not only for typical AI tasks, but also for general‐purpose problems such as constraint satisfaction problems (CSPs) and linear algebra. Herein, an overview of IMC is provided in terms of memory devices and circuit architectures. First, the memory device technologies adopted for IMC are summarized, focusing on both charge‐based memories and emerging devices relying on electrically induced material modification at the chemical or physical level. Then, the computational memory programming and the corresponding device nonidealities are described with reference to offline and online training of IMC circuits. Finally, array architectures for computing are reviewed, including typical architectures for neural network accelerators, content addressable memory (CAM), and novel circuit topologies for general‐purpose computing with low complexity.

93 citations

Journal ArticleDOI
03 Nov 2021-ACS Nano
Abstract: This work has been supported by the generous Baseline funding program of the King Abdullah University of Science and Technology (KAUST).

78 citations

01 Jan 2016
TL;DR: In this paper, the variability limits of filament-based resistive RAM arrays in the full resistance range are identified, and extensive characterizations of multi-kbits RRAM arrays during Forming, Set, Reset and cycling operations are presented allowing the quantification of the intrinsic variability factors.
Abstract: While Resistive RAM (RRAM) is seen as an alternative to NAND Flash, understanding its variability and cycling behavior remains a major roadblock. Extensive characterizations of multi-kbit RRAM arrays during Forming, Set, Reset, and cycling operations are presented, allowing the quantification of the intrinsic variability factors. As a result, the fundamental variability limits of filament-based RRAM in the full resistance range are identified.

76 citations

Journal ArticleDOI
TL;DR: The physics and operation of CMOS-based floating-gate memory devices in artificial neural networks will be addressed and several memristive concepts will be reviewed and discussed for applications in deep neural network and spiking neural network architectures.
Abstract: Neuromorphic computing has emerged as one of the most promising paradigms to overcome the limitations of von Neumann architecture of conventional digital processors. The aim of neuromorphic computing is to faithfully reproduce the computing processes in the human brain, thus paralleling its outstanding energy efficiency and compactness. Toward this goal, however, some major challenges have to be faced. Since the brain processes information by high-density neural networks with ultra-low power consumption, novel device concepts combining high scalability, low-power operation, and advanced computing functionality must be developed. This work provides an overview of the most promising device concepts in neuromorphic computing including complementary metal-oxide semiconductor (CMOS) and memristive technologies. First, the physics and operation of CMOS-based floating-gate memory devices in artificial neural networks will be addressed. Then, several memristive concepts will be reviewed and discussed for applications in deep neural network and spiking neural network architectures. Finally, the main technology challenges and perspectives of neuromorphic computing will be discussed.

71 citations