Author

Kibong Moon

Bio: Kibong Moon is an academic researcher from Pohang University of Science and Technology. The author has contributed to research in topics: Neuromorphic engineering & Resistive random-access memory. The author has an h-index of 20 and has co-authored 42 publications receiving 1,731 citations.

Papers
Journal ArticleDOI
02 Jan 2017
TL;DR: The relevant virtues and limitations of these devices are assessed, in terms of properties such as conductance dynamic range, (non)linearity and (a)symmetry of conductance response, retention, endurance, required switching power, and device variability.
Abstract: Dense crossbar arrays of non-volatile memory (NVM) devices represent one possible path for implementing massively-parallel and highly energy-efficient neuromorphic computing systems. We first revie...

800 citations

Journal ArticleDOI
TL;DR: A linear potentiation behavior of conductance under identical pulses is demonstrated using the effect of a barrier layer on the switching, realized by fabricating an RRAM on top of an Al electrode.
Abstract: We analyze the response of a filamentary resistive memory (RRAM) to identical pulses in order to implement the synapse function in neuromorphic systems. Our findings show that multilevel conductance states are achieved by varying the measurement conditions related to the formation and rupture of a conductive filament. However, abrupt set switching behavior in the RRAM leaves the conductance state unchanged, degrading the accuracy of pattern recognition. We therefore demonstrate a linear potentiation (or depression) behavior of conductance under identical pulses using the effect of a barrier layer on the switching, realized by fabricating an RRAM on top of an Al electrode. As a result, when the range of the conductance is symmetrically controlled at both polarities, significantly improved accuracy is achieved for pattern recognition using a multilayer perceptron neural network.

353 citations
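
A minimal Python sketch of the mechanism at stake here, under assumed parameters: a common phenomenological model in which each identical pulse changes the conductance by a step that shrinks as the device approaches its maximum conductance. The function name, the `beta` nonlinearity knob, and all values are illustrative assumptions, not the paper's fitted device model; `beta = 0` gives the ideal linear response the paper targets.

```python
import numpy as np

def potentiate(g, g_min=1e-6, g_max=1e-4, beta=3.0, n_pulses=64):
    """Apply identical potentiation pulses; the step shrinks as G nears g_max."""
    trace = [g]
    for _ in range(n_pulses):
        if beta == 0:                                 # ideal linear update
            dg = (g_max - g_min) / n_pulses
        else:                                         # saturating (nonlinear) update
            dg = (g_max - g) * (1 - np.exp(-beta / n_pulses))
        g = min(g + dg, g_max)
        trace.append(g)
    return np.array(trace)

linear = potentiate(1e-6, beta=0.0)
nonlinear = potentiate(1e-6, beta=3.0)
print(f"mid-range step ratio (nonlinear/linear): "
      f"{np.diff(nonlinear)[32] / np.diff(linear)[32]:.2f}")
```

A linear response keeps every weight update the same size regardless of the current state, which is what lets a simple identical-pulse training scheme work.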

Journal ArticleDOI
TL;DR: In this paper, the authors propose a TiOx-based resistive switching device for neuromorphic synapse applications that is capable of 64 conductance levels owing to an optimized interface between the metal electrode and the TiOx film.
Abstract: We propose a TiOx-based resistive switching device for neuromorphic synapse applications. This device is capable of 64 conductance levels because of its optimized interface between the metal electrode and the TiOx film. To compensate for the change in switching power with increasing pulse number, we propose the use of fixed voltage and current pulses in the potentiation and depression conditions, respectively. By adopting this hybrid pulse scheme, the symmetry of the conductance change under both potentiation and depression conditions is significantly improved. Both the improved conductance levels and the symmetry of the conductance change are directly related to enhanced pattern recognition accuracy, which is confirmed by a neural network simulation.

159 citations
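
To make the symmetry point concrete, here is a hypothetical Python illustration (the step sizes are invented, not measured from the TiOx device): when potentiation and depression steps are unequal, a +pulse followed by a -pulse does not return the conductance to its starting level, so alternating updates drift.

```python
def drift_after_pairs(dg_pot, dg_dep, n_pairs=100, g0=0.5):
    """Apply n_pairs of (+pulse, -pulse) and return the net drift from g0."""
    g = g0
    for _ in range(n_pairs):
        g = min(g + dg_pot, 1.0)   # potentiation step on a normalized G window
        g = max(g - dg_dep, 0.0)   # depression step
    return g - g0

levels = 64                        # the 64 conductance states reported above
print("matched pulses:   drift =", drift_after_pairs(1 / levels, 1 / levels))
print("unmatched pulses: drift =", round(drift_after_pairs(1 / levels, 0.6 / levels), 3))
```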

Journal ArticleDOI
TL;DR: It is confirmed that synapse device characteristics directly affect the pattern recognition accuracy of ANNs, and a 3-terminal synapse device or a device based on a new operation principle should be developed as an alternative for on-chip training applications.
Abstract: Hardware artificial neural network (ANN) systems with high density synapse array devices can perform massive parallel computing for pattern recognition with low power consumption. To implement a neuromorphic system with on-chip training capability, we need to develop an ideal synapse device with various device requirements, such as scalability, MLC characteristics, low power operation, data retention, and symmetric/linear conductance changes under potentiation/depression modes. Although various devices have been proposed for synapse applications, they have limitations for application in neuromorphic systems. In this paper, we will cover various RRAM synapse devices, such as filamentary switching RRAM (HfOx, TaOx, Cu-CBRAM) and analog RRAM devices, based on interface resistive switching (Pr0.7Ca0.3MnOx and TiOx) and ferroelectric polarization (HfZrOx). By optimizing potentiation/depression conditions, we could improve the conductance linearity and MLC characteristics of filamentary synapse devices. Interface RRAM has better MLC characteristics with limited retention and conductance linearity. By controlling the reactivity of metal electrodes and the oxygen concentration in oxides, we can modulate the synapse characteristics. Metal-Ferroelectric-Insulator-Semiconductor (MFIS) FET devices exhibit good retention characteristics and analog memory characteristics due to polarization. Based on various synapse device characteristics, we have estimated the pattern recognition accuracy of MNIST handwritten digits and CIFAR-10 datasets. We have confirmed that synapse device characteristics directly affect the pattern recognition accuracy of ANNs. In order to simultaneously satisfy all the requirements of synapse devices, it is necessary to develop new technology capable of controlling the movement of oxygen vacancies and metal ions at the atomic scale. Considering the limited synapse characteristics of current 2-terminal RRAM devices, hardware ANNs capable of only off-chip training can be constructed by optimizing the current RRAM devices by limiting the bit number. A 3-terminal synapse device or a device based on a new operation principle should be developed as an alternative for on-chip training applications.

145 citations
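
One of the paper's conclusions, off-chip training with a limited bit number, can be sketched in Python as follows; the layer size and bit widths are assumptions for illustration. Weights are trained in software and then uniformly quantized to the number of conductance levels the device can reliably hold before being programmed into the array.

```python
import numpy as np

def quantize_to_levels(w, n_levels):
    """Map float weights onto n_levels evenly spaced conductance states."""
    w_min, w_max = w.min(), w.max()
    step = (w_max - w_min) / (n_levels - 1)
    return w_min + np.round((w - w_min) / step) * step

rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, size=(784, 100))    # e.g. one MNIST-sized MLP layer
for bits in (2, 4, 6):                     # 4, 16, 64 conductance levels
    err = np.abs(quantize_to_levels(w, 2 ** bits) - w).mean()
    print(f"{2 ** bits:>2} levels: mean |quantization error| = {err:.4f}")
```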

Journal ArticleDOI
TL;DR: This study demonstrates an integrate-and-fire (I&F) neuron that uses threshold switching (TS) devices to implement a spike-based neuromorphic system, indicating the applicability of TS-based I&F neurons in neuromorphic hardware.

83 citations
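
As a rough sketch of the neuron this paper demonstrates, the following Python model abstracts the threshold switching device as a simple threshold comparison on an integrating membrane; the leak, threshold, and input statistics are illustrative values, not the paper's measured device parameters.

```python
import numpy as np

def lif_neuron(input_current, v_th=1.0, leak=0.05, dt=1.0):
    """Leaky integrate-and-fire: integrate input, spike and reset at threshold."""
    v, spikes = 0.0, []
    for i in input_current:
        v += dt * (i - leak * v)   # membrane integration with leak
        if v >= v_th:              # TS device turns on abruptly at threshold
            spikes.append(1)
            v = 0.0                # device relaxes off -> membrane resets
        else:
            spikes.append(0)
    return spikes

rng = np.random.default_rng(1)
spike_train = lif_neuron(rng.uniform(0, 0.2, size=100))
print("output spikes:", sum(spike_train), "over 100 steps")
```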


Cited by
Journal ArticleDOI
06 Jun 1986-JAMA
TL;DR: The editors have done a masterful job of weaving together the biologic, the behavioral, and the clinical sciences into a single tapestry in which everyone from the molecular biologist to the practicing psychiatrist can find and appreciate his or her own research.
Abstract: I have developed "tennis elbow" from lugging this book around the past four weeks, but it is worth the pain, the effort, and the aspirin. It is also worth the (relatively speaking) bargain price. Including appendixes, this book contains 894 pages of text. The entire panorama of the neural sciences is surveyed and examined, and it is comprehensive in its scope, from genomes to social behaviors. The editors explicitly state that the book is designed as "an introductory text for students of biology, behavior, and medicine," but it is hard to imagine any audience, interested in any fragment of neuroscience at any level of sophistication, that would not enjoy this book. The editors have done a masterful job of weaving together the biologic, the behavioral, and the clinical sciences into a single tapestry in which everyone from the molecular biologist to the practicing psychiatrist can find and appreciate his or her own research.

7,563 citations

Journal ArticleDOI
29 Jan 2020-Nature
TL;DR: The fabrication of high-yield, high-performance and uniform memristor crossbar arrays for the implementation of CNNs and an effective hybrid-training method to adapt to device imperfections and improve the overall system performance are proposed.
Abstract: Memristor-enabled neuromorphic computing systems provide a fast and energy-efficient approach to training neural networks1–4. However, convolutional neural networks (CNNs)—one of the most important models for image recognition5—have not yet been fully hardware-implemented using memristor crossbars, which are cross-point arrays with a memristor device at each intersection. Moreover, achieving software-comparable results is highly challenging owing to the poor yield, large variation and other non-ideal characteristics of devices6–9. Here we report the fabrication of high-yield, high-performance and uniform memristor crossbar arrays for the implementation of CNNs, which integrate eight 2,048-cell memristor arrays to improve parallel-computing efficiency. In addition, we propose an effective hybrid-training method to adapt to device imperfections and improve the overall system performance. We built a five-layer memristor-based CNN to perform MNIST10 image recognition, and achieved a high accuracy of more than 96 per cent. In addition to parallel convolutions using different kernels with shared inputs, replication of multiple identical kernels in memristor arrays was demonstrated for processing different inputs in parallel. The memristor-based CNN neuromorphic system has an energy efficiency more than two orders of magnitude greater than that of state-of-the-art graphics-processing units, and is shown to be scalable to larger networks, such as residual neural networks. Our results are expected to enable a viable memristor-based non-von Neumann hardware solution for deep neural networks and edge computing. A fully hardware-based memristor convolutional neural network using a hybrid training method achieves an energy efficiency more than two orders of magnitude greater than that of graphics-processing units.

1,033 citations
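
The core operation behind this result can be written down in a few lines of Python. The sketch below shows the standard crossbar mapping (not this paper's specific device calibration): row voltages drive column currents that sum as i = v·G by Ohm's and Kirchhoff's laws, and signed weights use the common differential pair W = G_plus - G_minus.

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(0, 1, size=(8, 4))     # target signed weight matrix
g_plus = np.clip(W, 0, None)          # positive part on one column of the pair
g_minus = np.clip(-W, 0, None)        # negative part on the paired column
v = rng.uniform(0, 1, size=8)         # input voltages applied to the rows

i_out = v @ g_plus - v @ g_minus      # differential column currents, one step
print(np.allclose(i_out, v @ W))      # matches the software matmul: True
```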

Journal ArticleDOI
TL;DR: The challenges in the integration and use in computation of large-scale memristive neural networks are discussed, both as accelerators for deep learning and as building blocks for spiking neural networks.
Abstract: With their working mechanisms based on ion migration, the switching dynamics and electrical behaviour of memristive devices resemble those of synapses and neurons, making these devices promising candidates for brain-inspired computing. Built into large-scale crossbar arrays to form neural networks, they perform efficient in-memory computing with massive parallelism by directly using physical laws. The dynamical interactions between artificial synapses and neurons equip the networks with both supervised and unsupervised learning capabilities. Moreover, their ability to interface with analogue signals from sensors without analogue/digital conversions reduces the processing time and energy overhead. Although numerous simulations have indicated the potential of these networks for brain-inspired computing, experimental implementation of large-scale memristive arrays is still in its infancy. This Review looks at the progress, challenges and possible solutions for efficient brain-inspired computation with memristive implementations, both as accelerators for deep learning and as building blocks for spiking neural networks.

948 citations
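
As one concrete instance of the synapse-level learning rules this Review covers, below is a Python sketch of pair-based spike-timing-dependent plasticity (STDP); the exponential window and its constants are generic textbook values, not numbers from the Review.

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for pre-before-post (dt > 0) vs. post-before-pre (dt < 0)."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)    # causal pairing -> potentiation
    return -a_minus * math.exp(dt / tau)       # anti-causal pairing -> depression

for dt in (5.0, 20.0, -5.0, -20.0):            # spike-time differences in ms
    print(f"dt = {dt:+.0f} ms -> dw = {stdp_dw(dt):+.5f}")
```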

Journal ArticleDOI
27 Nov 2019-Nature
TL;DR: An overview of the developments in neuromorphic computing for both algorithms and hardware is provided and the fundamentals of learning and hardware frameworks are highlighted, with emphasis on algorithm–hardware codesign.
Abstract: Guided by brain-like ‘spiking’ computational frameworks, neuromorphic computing—brain-inspired computing for machine intelligence—promises to realize artificial intelligence while reducing the energy requirements of computing platforms. This interdisciplinary field began with the implementation of silicon circuits for biological neural routines, but has evolved to encompass the hardware implementation of algorithms with spike-based encoding and event-driven representations. Here we provide an overview of the developments in neuromorphic computing for both algorithms and hardware and highlight the fundamentals of learning and hardware frameworks. We discuss the main challenges and the future prospects of neuromorphic computing, with emphasis on algorithm–hardware codesign. The authors review the advantages and future prospects of neuromorphic computing, a multidisciplinary engineering concept for energy-efficient artificial intelligence with brain-inspired functionality.

877 citations
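
A small Python sketch of the spike-based encoding this overview refers to: rate coding, in which an analogue intensity becomes a Poisson-like spike train whose rate scales with the intensity. The rate, duration, and time step are arbitrary illustrative choices.

```python
import numpy as np

def rate_encode(intensity, max_rate_hz=100.0, duration_s=1.0, dt_s=1e-3, seed=0):
    """Binary spike train with P(spike per step) = intensity * max_rate * dt."""
    rng = np.random.default_rng(seed)
    n_steps = int(duration_s / dt_s)
    p_spike = np.clip(intensity * max_rate_hz * dt_s, 0.0, 1.0)
    return (rng.random(n_steps) < p_spike).astype(int)

for x in (0.1, 0.5, 0.9):              # e.g. normalized pixel intensities
    print(f"intensity {x}: {rate_encode(x).sum()} spikes in 1 s")
```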

Journal ArticleDOI
TL;DR: This Review provides an overview of memory devices and the key computational primitives enabled by these memory devices as well as their applications spanning scientific computing, signal processing, optimization, machine learning, deep learning and stochastic computing.
Abstract: Traditional von Neumann computing systems involve separate processing and memory units. However, data movement is costly in terms of time and energy, and this problem is aggravated by the recent explosive growth in highly data-centric applications related to artificial intelligence. This calls for a radical departure from the traditional systems, and one such non-von Neumann computational approach is in-memory computing, whereby certain computational tasks are performed in place in the memory itself by exploiting the physical attributes of the memory devices. Both charge-based and resistance-based memory devices are being explored for in-memory computing. In this Review, we provide a broad overview of the key computational primitives enabled by these memory devices as well as their applications spanning scientific computing, signal processing, optimization, machine learning, deep learning and stochastic computing. This Review provides an overview of memory devices and the key computational primitives for in-memory computing, and examines the possibilities of applying this computing approach to a wide range of applications.

841 citations
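
Among the primitives this Review lists is stochastic computing, which has a particularly compact software analogue; as a toy Python illustration (the stream length and operand values are arbitrary), multiplying two probabilities reduces to a bitwise AND of independent random bitstreams, an operation that maps naturally onto arrays of stochastic memory devices.

```python
import numpy as np

def to_bitstream(p, n_bits=10_000, seed=0):
    """Encode probability p as a random bitstream whose mean is ~p."""
    rng = np.random.default_rng(seed)
    return rng.random(n_bits) < p

a = to_bitstream(0.6, seed=1)
b = to_bitstream(0.5, seed=2)          # independent stream for the second operand
product = (a & b).mean()               # AND of independent streams ~ 0.6 * 0.5
print(f"stochastic product ~= {product:.3f} (exact: 0.3)")
```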