Author

Olga Krestinskaya

Bio: Olga Krestinskaya is an academic researcher from Nazarbayev University. The author has contributed to research in the topics of Memristor & CMOS, has an h-index of 16, and has co-authored 84 publications receiving 756 citations. Previous affiliations of Olga Krestinskaya include King Abdullah University of Science and Technology.


Papers
Journal ArticleDOI
TL;DR: This paper reviews neuromorphic CMOS-memristive architectures that can be integrated into edge computing devices, discusses why neuromorphic architectures are useful for edge devices, and outlines the advantages, drawbacks, and open problems in the field of neuromemristive circuits for edge computing.
Abstract: The volume, veracity, variability, and velocity of data produced by the ever-increasing network of sensors connected to the Internet pose challenges for the power management, scalability, and sustainability of cloud computing infrastructure. Increasing the data processing capability of edge computing devices at lower power requirements can reduce several overheads for cloud computing solutions. This paper provides a review of neuromorphic CMOS-memristive architectures that can be integrated into edge computing devices. We discuss why neuromorphic architectures are useful for edge devices and show the advantages, drawbacks, and open problems in the field of neuromemristive circuits for edge computing.

201 citations
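The edge-computing case for memristive hardware rests on in-memory vector-matrix multiplication: memristor conductances store the weights, and Ohm's and Kirchhoff's laws compute the product in a single analog step. A minimal NumPy sketch of that operation follows; the conductance and voltage values are illustrative, not taken from the paper.

```python
import numpy as np

# Minimal sketch of the analog vector-matrix multiply at the heart of a
# memristive crossbar: input voltages drive the rows, memristor
# conductances form the weight matrix, and Kirchhoff's current law sums
# each column's currents, I = G^T V, in one step. Values are illustrative.

rng = np.random.default_rng(0)

G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # conductances (S), one per crosspoint
V = np.array([0.1, 0.2, 0.0, 0.15])        # input voltages (V) on the rows

I = G.T @ V                                 # column output currents (A)
print(I)
```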

Journal ArticleDOI
TL;DR: In this paper, analog backpropagation learning circuits are proposed for various memristive learning architectures, such as deep neural networks, binary neural networks, multiple neural networks, hierarchical temporal memory, and long short-term memory.
Abstract: The on-chip implementation of learning algorithms would speed up the training of neural networks in crossbar arrays. The circuit-level design and implementation of a backpropagation algorithm using the gradient descent operation for neural network architectures is an open problem. In this paper, we propose analog backpropagation learning circuits for various memristive learning architectures, such as deep neural networks, binary neural networks, multiple neural networks, hierarchical temporal memory, and long short-term memory. The circuit design and verification are done using TSMC 180-nm CMOS process models and TiO2-based memristor models. The application-level validations of the system are done using the XOR problem and the MNIST character and Yale face image databases.

82 citations
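The paper validates its circuits on the XOR problem, among others. As a software reference for the backpropagation-with-gradient-descent computation those analog circuits realize, here is a minimal NumPy sketch; the network size, learning rate, and iteration count are illustrative assumptions rather than values from the paper.

```python
import numpy as np

# Software reference for the backpropagation / gradient-descent operation
# the paper maps onto analog circuits, trained on the XOR problem.
# Layer sizes and learning rate are illustrative assumptions.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # hidden layer
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # output layer
lr = 0.5

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                      # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)           # output error gradient
    d_h = (d_out @ W2.T) * h * (1 - h)            # backpropagated error
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

print(out.round(2))   # should approach [0, 1, 1, 0]
```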

Journal ArticleDOI
TL;DR: The HTM SP realizes an optimized hardware design through the introduction of mean overlap calculations and by replacing the threshold determination in the inhibition stage with a weighted summation operator over the neighborhood of the pixel under consideration.
Abstract: Hierarchical temporal memory (HTM) is a machine learning algorithm inspired by the information processing mechanisms of the human neocortex and consists of a spatial pooler (SP) and temporal memory (TM). In this paper, we develop circuits and systems to achieve the optimized design of an HTM SP, an HTM TM, and a memristive analog pattern matcher for pattern recognition applications. The HTM SP realizes an optimized hardware design through the introduction of mean overlap calculations and by replacing the threshold determination in the inhibition stage with a weighted summation operator over the neighborhood of the pixel under consideration. The HTM TM is based on discrete analog memristive memory arrays and a weight update procedure. The operation of the proposed system is demonstrated for a face recognition problem, using the standard AR, ORL, and Yale databases, and for speech recognition, using the TIMIT database, with achieved accuracies of 87.21% and approximately 90%, respectively, given an SNR of 10 dB. Visual data processing using binary HTM SP features requires less storage and processing memory than traditional processing methods, with the area and power requirements for its implementation being 0.096 mm² and 1756 mW, respectively. The design of the TM circuit for a single pixel requires 23.85 μm² of area and 442.26 μW of power.

81 citations
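For readers unfamiliar with the spatial pooler, a generic NumPy sketch of its two stages, overlap computation and inhibition, is given below. It does not reproduce the paper's specific optimizations (mean overlap calculation, weighted neighborhood summation); sizes, thresholds, and sparsity are illustrative assumptions.

```python
import numpy as np

# Generic sketch of the two HTM spatial-pooler stages the paper optimizes:
# (1) overlap - count a column's connected synapses on active input bits;
# (2) inhibition - keep only the top-k overlapping columns.
# Sizes, thresholds, and sparsity are illustrative assumptions.

rng = np.random.default_rng(2)
n_inputs, n_columns = 64, 32
perm = rng.uniform(0, 1, (n_columns, n_inputs))     # synapse permanences
connected = perm > 0.5                              # connected synapses
x = rng.uniform(0, 1, n_inputs) > 0.7               # sparse binary input

overlap = connected.astype(int) @ x.astype(int)     # stage 1: overlap scores
k = 4                                               # desired active columns
winners = np.argsort(overlap)[-k:]                  # stage 2: global inhibition
sdr = np.zeros(n_columns, dtype=int)
sdr[winners] = 1                                    # sparse output representation
print(sdr)
```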

Journal ArticleDOI
TL;DR: This work proposes a hardware implementation of an LSTM system using memristors, which have been shown to mimic the behavior of a biological synapse and have promising properties, such as small size and the absence of current leakage, that make them suitable elements for designing LSTM functions.
Abstract: Long short-term memory (LSTM) is a cognitive architecture that aims to mimic the sequence and temporal memory processes of the human brain. State- and time-dependent processing of events is essential for contextual processing in applications such as natural language processing, speech recognition, and machine translation. There are many different variants of LSTM, and almost all of them are software based. The hardware implementation of LSTM remains an open problem. In this work, we propose a hardware implementation of an LSTM system using memristors. The memristor has been shown to mimic the behavior of a biological synapse and has promising properties, such as small size and the absence of current leakage, among others, making it a suitable element for designing LSTM functions. The hardware realization of the sigmoid and hyperbolic tangent functions can be performed using a CMOS-memristor threshold logic circuit. These ideas can be extended to a practical application: implementing sequence learning on real-time sensory processing data.

80 citations
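As a functional reference for the gates whose sigmoid and hyperbolic tangent nonlinearities the paper realizes with CMOS-memristor threshold logic, here is one standard LSTM time step in NumPy; dimensions, gate ordering, and initial values are illustrative assumptions.

```python
import numpy as np

# Functional reference for one LSTM time step: the sigmoid and tanh
# nonlinearities below are the functions realized in hardware with
# CMOS-memristor threshold logic circuits. Dimensions are illustrative.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """x: input, h: hidden state, c: cell state; W, U, b stack all 4 gates."""
    z = W @ x + U @ h + b
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)    # input/forget/output gates
    g = np.tanh(g)                                  # candidate cell update
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(3)
d_in, d_h = 8, 16
W = rng.normal(0, 0.1, (4 * d_h, d_in))
U = rng.normal(0, 0.1, (4 * d_h, d_h))
b = np.zeros(4 * d_h)
h, c = np.zeros(d_h), np.zeros(d_h)
x = rng.normal(0, 1, d_in)
h, c = lstm_step(x, h, c, W, U, b)
print(h.shape, c.shape)
```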

Proceedings ArticleDOI
27 May 2018
TL;DR: Analog learning circuits realizing the backpropagation algorithm for neural networks in memristive crossbar arrays are presented and validated comprehensively using the relevant transfer characteristics and transient responses of the individual circuit modules.
Abstract: The implementation of the backpropagation algorithm using the gradient descent operation with analog circuits is an open problem. In this paper, we present analog learning circuits realizing the backpropagation algorithm for neural networks in memristive crossbar arrays. The circuits are simulated in SPICE using TSMC 180-nm CMOS process models and HP memristor models. The gradient descent operations are validated comprehensively using the relevant transfer characteristics and transient responses of the individual circuit modules.

73 citations
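The HP memristor model referenced here is the linear ion-drift model; a minimal behavioral sketch in Python follows, with textbook-style device parameters that are illustrative rather than taken from the paper's SPICE setup.

```python
import numpy as np

# Minimal sketch of the HP linear ion-drift memristor model: state x is
# the normalized doped-region width, memristance M = Ron*x + Roff*(1 - x),
# and dx/dt = mu_v*Ron/D^2 * i(t). Parameters are illustrative textbook
# values, not those of the paper's SPICE models.

Ron, Roff = 100.0, 16e3          # on/off resistances (ohm)
D, mu_v = 10e-9, 1e-14           # device thickness (m), ion mobility (m^2 s^-1 V^-1)
dt, steps = 1e-5, 2000
x = 0.1                          # initial normalized state in [0, 1]

for n in range(steps):
    t = n * dt
    v = np.sin(2 * np.pi * 50 * t)          # 50 Hz sinusoidal drive (V)
    M = Ron * x + Roff * (1.0 - x)          # instantaneous memristance
    i = v / M                               # Ohm's law
    x += mu_v * Ron / D**2 * i * dt         # linear ion drift
    x = min(max(x, 0.0), 1.0)               # hard window at the boundaries

print(f"final memristance: {Ron * x + Roff * (1 - x):.1f} ohm")
```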


Cited by
Journal ArticleDOI
01 Jul 2019
TL;DR: A programmable neuromorphic computing chip based on passive memristor crossbar arrays integrated with analogue and digital components and an on-chip processor enables the implementation of neuromorphic and machine learning algorithms.
Abstract: Memristors and memristor crossbar arrays have been widely studied for neuromorphic and other in-memory computing applications. To achieve optimal system performance, however, it is essential to integrate memristor crossbars with peripheral and control circuitry. Here, we report a fully functional, hybrid memristor chip in which a passive crossbar array is directly integrated with custom-designed circuits, including a full set of mixed-signal interface blocks and a digital processor for reprogrammable computing. The memristor crossbar array enables online learning and forward and backward vector-matrix operations, while the integrated interface and control circuitry allow mapping of different algorithms on chip. The system supports charge-domain operation to overcome the nonlinear I–V characteristics of memristor devices through pulse width modulation and custom analogue-to-digital converters. The integrated chip offers all the functions required for operational neuromorphic computing hardware. Accordingly, we demonstrate a perceptron network, sparse coding algorithm and principal component analysis with an integrated classification layer using the system.

460 citations
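The charge-domain operation described above can be sketched in a few lines: inputs are encoded as pulse widths at a single fixed read voltage, so every device is biased at one point on its nonlinear I-V curve, and the integrated column charge is linear in the inputs. The NumPy sketch below illustrates the principle; all values are illustrative.

```python
import numpy as np

# Sketch of charge-domain readout via pulse-width modulation: inputs set
# pulse *widths* at a fixed read voltage, so each memristor always sees
# the same bias point on its nonlinear I-V curve. Integrated column
# charge is Q_j = sum_i G_ij * V_read * t_i, linear in the inputs.

rng = np.random.default_rng(4)
G = rng.uniform(1e-6, 1e-4, (4, 3))    # crosspoint conductances (S)
V_read = 0.3                            # fixed read voltage (V)
x = np.array([0.2, 0.8, 0.5, 1.0])      # inputs normalized to [0, 1]
t_max = 1e-6                            # full-scale pulse width (s)

t = x * t_max                           # pulse-width modulation of the inputs
Q = G.T @ (V_read * t)                  # integrated column charge (C)
print(Q / (V_read * t_max))             # recovers G^T x up to a scale factor
```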

Journal ArticleDOI
TL;DR: A critical survey of emerging neuromorphic devices and architectures enabled by quantum dots, metal nanoparticles, polymers, nanotubes, nanowires, two-dimensional layered materials and van der Waals heterojunctions with a particular emphasis on bio-inspired device responses that are uniquely enabled by low-dimensional topology, quantum confinement and interfaces.
Abstract: Memristive and nanoionic devices have recently emerged as leading candidates for neuromorphic computing architectures. While top-down fabrication based on conventional bulk materials has enabled many early neuromorphic devices and circuits, bottom-up approaches based on low-dimensional nanomaterials have shown novel device functionality that often better mimics a biological neuron. In addition, the chemical, structural and compositional tunability of low-dimensional nanomaterials coupled with the permutational flexibility enabled by van der Waals heterostructures offers significant opportunities for artificial neural networks. In this Review, we present a critical survey of emerging neuromorphic devices and architectures enabled by quantum dots, metal nanoparticles, polymers, nanotubes, nanowires, two-dimensional layered materials and van der Waals heterojunctions, with a particular emphasis on bio-inspired device responses that are uniquely enabled by low-dimensional topology, quantum confinement and interfaces. We also provide a forward-looking perspective on the opportunities and challenges of neuromorphic nanoelectronic materials in comparison with more mature technologies based on traditional bulk electronic materials. This Review highlights the progress made towards the development of neuromorphic devices and architectures enabled by low-dimensional nanomaterials.

390 citations

Journal ArticleDOI
01 Jul 2020
TL;DR: The development of neuro-inspired computing chips and their key benchmarking metrics are reviewed, a co-design tool chain is provided, and a roadmap for future large-scale chips is proposed.
Abstract: The rapid development of artificial intelligence (AI) demands the rapid development of domain-specific hardware specifically designed for AI applications. Neuro-inspired computing chips integrate a range of features inspired by neurobiological systems and could provide an energy-efficient approach to AI computing workloads. Here, we review the development of neuro-inspired computing chips, including artificial neural network chips and spiking neural network chips. We propose four key metrics for benchmarking neuro-inspired computing chips (computing density, energy efficiency, computing accuracy, and on-chip learning capability) and discuss co-design principles, from the device to the algorithm level, for neuro-inspired computing chips based on non-volatile memory. We also outline a future electronic design automation tool chain and propose a roadmap for the development of large-scale neuro-inspired computing chips.

303 citations
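As a concrete reading of the first two proposed metrics, here is how energy efficiency (TOPS/W) and computing density (TOPS/mm²) are conventionally computed for an accelerator chip; the chip figures below are invented purely for illustration.

```python
# Worked example of the first two benchmarking metrics named above,
# computed the way they are conventionally reported for accelerator
# chips. The chip figures below are made up for illustration only.

ops_per_second = 20e12      # sustained operations per second (20 TOPS)
power_w = 2.5               # chip power (W)
area_mm2 = 50.0             # die area (mm^2)

energy_efficiency = ops_per_second / power_w / 1e12   # TOPS per watt
computing_density = ops_per_second / area_mm2 / 1e12  # TOPS per mm^2

print(f"energy efficiency: {energy_efficiency:.1f} TOPS/W")
print(f"computing density: {computing_density:.2f} TOPS/mm^2")
```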

Journal ArticleDOI
TL;DR: In this article, the authors review deep neural network concepts in background subtraction for novices and experts, analyze the reasons for the success of these methods, and provide directions for further research.

278 citations
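For context, background subtraction is the task of separating moving foreground from a largely static scene. A classical running-average baseline, which the reviewed deep-network methods aim to improve on, can be sketched as follows; the frames are synthetic and all parameters are illustrative.

```python
import numpy as np

# Classical running-average baseline for background subtraction, the task
# the reviewed deep-network methods address. Frames here are synthetic;
# in practice they would come from a video stream.

rng = np.random.default_rng(5)
h, w, alpha, thresh = 48, 64, 0.05, 0.2
background = rng.uniform(0, 1, (h, w))              # initial background estimate

for _ in range(100):
    frame = background + rng.normal(0, 0.02, (h, w))        # mostly static scene
    frame[10:20, 10:20] += 0.5                              # a foreground blob
    mask = np.abs(frame - background) > thresh              # foreground mask
    background = (1 - alpha) * background + alpha * frame   # slow adaptation

print(mask.sum(), "foreground pixels in the last frame")
```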

Journal ArticleDOI
TL;DR: It is demonstrated experimentally that the synaptic weights shared in different time steps in an LSTM can be implemented with a memristor crossbar array, which has a small circuit footprint, can store a large number of parameters and offers in-memory computing capability that contributes to circumventing the ‘von Neumann bottleneck’.
Abstract: Recent breakthroughs in recurrent deep neural networks with long short-term memory (LSTM) units have led to major advances in artificial intelligence. However, state-of-the-art LSTM models with significantly increased complexity and a large number of parameters have a bottleneck in computing power resulting from both limited memory capacity and limited data communication bandwidth. Here we demonstrate experimentally that the synaptic weights shared in different time steps in an LSTM can be implemented with a memristor crossbar array, which has a small circuit footprint, can store a large number of parameters and offers in-memory computing capability that contributes to circumventing the ‘von Neumann bottleneck’. We illustrate the capability of our crossbar system as a core component in solving real-world problems in regression and classification, which shows that the memristor LSTM is a promising low-power and low-latency hardware platform for edge inference. Deep neural networks are increasingly popular in data-intensive applications, but are power-hungry. New types of computer chips that are suited to the task of deep learning, such as memristor arrays where data handling and computing take place within the same unit, are required. A widely used deep learning model called long short-term memory, which can handle temporal sequential data analysis, is now implemented in a memristor crossbar array, promising an energy-efficient and low-footprint deep learning platform.

251 citations
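The weight-sharing idea demonstrated here can be sketched directly: an LSTM applies the same weight matrix at every time step, so a crossbar programmed once can serve every step's vector-matrix multiply. A schematic NumPy sketch follows; gate logic is omitted and all shapes and values are illustrative.

```python
import numpy as np

# Sketch of the weight-sharing idea: an LSTM applies the *same* weight
# matrix at every time step, so it can be programmed once into a
# crossbar's conductances and reused for each step's in-memory
# vector-matrix multiply. Shapes and values are illustrative.

rng = np.random.default_rng(6)
d_in, d_h, T = 8, 16, 5
G = rng.uniform(1e-6, 1e-4, (4 * d_h, d_in + d_h))  # one-time programmed conductances

h = np.zeros(d_h)
for t in range(T):
    x = rng.normal(0, 1, d_in)
    v = np.concatenate([x, h])           # inputs and recurrent state share rows
    z = G @ v                            # in-memory VMM, same array every step
    # (gate nonlinearities and cell update omitted; see the LSTM sketch above)
    h = np.tanh(z[:d_h])                 # stand-in recurrent update
print(h.shape)
```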