Journal Article

Charge-Trap Transistors for CMOS-Only Analog Memory

TLDR
A comprehensive investigation of the programming behavior of CTTs, including analog retention, intra- and inter-device variation, and device fine-tuning, both for individual devices and for devices in an integrated array, reveals the promising future of the CTT as a CMOS-only analog memory device.
Abstract
Since our demonstration of unsupervised learning using the CMOS-only charge-trap transistors (CTTs) as analog synapses, there has been an increasing interest in exploiting the device for various other neural network (NN) applications. However, most of these studies are limited to mere simulation due to the absence of detailed experimental device characterization. In this article, we provide a comprehensive investigation of the programming behavior of CTTs, including analog retention, intra- and inter-device variation, and fine-tuning of the device, both for individual devices and for devices in an integrated array. It is found that, after programming, the channel current gradually increases to a higher level, and the shift is larger when the device is programmed to a higher threshold voltage. With this postprogramming current increase appropriately accounted for, individual devices can be programmed to an equivalent precision of five bits, and three bits can be achieved for devices in an array. Our results reveal the promising future of using the CTT as a CMOS-only analog memory device.
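The fine-tuning described in the abstract lends itself to a closed-loop, program-and-verify view: each programming pulse lowers the channel current, and the verify target is offset to absorb the gradual postprogramming current increase. The sketch below is only an illustration of that idea; the device interface (read_current, apply_program_pulse) and the constant drift fraction are hypothetical placeholders, not the authors' measured procedure or parameters.

# Minimal program-and-verify sketch for tuning a CTT toward a target
# channel current, with a first-order correction for the postprogramming
# current increase. 'dev', its methods, and 'drift_fraction' are assumed
# placeholders for illustration, not measured device behavior.

def predicted_settled_current(i_now, drift_fraction):
    # Estimate where the freshly programmed current will settle after
    # the gradual postprogramming increase.
    return i_now * (1.0 + drift_fraction)

def program_to_target(dev, i_target, drift_fraction=0.05,
                      tolerance=0.01, max_pulses=100):
    # Pulse until the *predicted settled* current, not the instantaneous
    # read, falls within the tolerance band around the target.
    for _ in range(max_pulses):
        i_now = dev.read_current()
        i_settled = predicted_settled_current(i_now, drift_fraction)
        error = (i_settled - i_target) / i_target
        if abs(error) <= tolerance:
            return True          # target reached within tolerance
        if error < 0:
            return False         # overshot: programming only lowers current
        dev.apply_program_pulse()
    return False                 # pulse budget exhausted

Because programming in this scheme only moves the device toward a higher threshold voltage (lower current), overshoot cannot be corrected in place, which is why the loop stops rather than attempting an erase.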


Citations
Journal Article

Drain-Erase Scheme in Ferroelectric Field Effect Transistor—Part II: 3-D-NAND Architecture for In-Memory Computing

TL;DR: The drain-erase scheme is proposed to enable program/erase/inhibit operations on individual cells, which is necessary for individual weight updates in in situ training, and the VMM operation is simulated in a 3-D NAND-like FeFET array.
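In the ideal case, the VMM operation referred to here reduces to a conductance-weighted current summation. The sketch below shows only that abstraction on a plain 2-D array; it is not a model of the paper's 3-D NAND-like FeFET architecture, and all nonidealities are ignored.

import numpy as np

def vmm(conductances, voltages):
    # Ideal array-style vector-matrix multiply:
    # bit-line current I_j = sum_i V_i * G_ij (Kirchhoff current summation).
    G = np.asarray(conductances, dtype=float)  # weights as conductances, shape (rows, cols)
    V = np.asarray(voltages, dtype=float)      # inputs as read voltages, shape (rows,)
    return V @ G                               # outputs as currents, shape (cols,)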
Journal Article

Investigation of Read Disturb and Bipolar Read Scheme on Multilevel RRAM-Based Deep Learning Inference Engine

TL;DR: The read-disturb-induced conductance drift characteristic is statistically measured on a test vehicle based on a 2-bit HfO2 RRAM array, and a bipolar read scheme is proposed and tested to enhance resilience against read disturb.
Journal Article

Ferroelectric devices and circuits for neuro-inspired computing

TL;DR: In this paper, a 2T-1FeFET synaptic cell design that improves in situ training accuracy to approach the software baseline is presented, and the FeFET drain-erase scheme for array-level operations is introduced to make in situ training feasible for a FeFET-based hardware accelerator.
Journal Article

Investigation of hysteresis in hole transport layer free metal halide perovskites cells under dark conditions.

TL;DR: Efficient non-volatile memory devices are demonstrated based on a hybrid organic-inorganic perovskite (CH3NH3PbI3) resistive switching layer on a glass/indium tin oxide (ITO) substrate, and this device could be integrated inside a photovoltaic array to work as a power-on-chip device, where generation and computation could be possible on the same substrate for memory and neuromorphic applications.
Journal Article

Investigating Ferroelectric Minor Loop Dynamics and History Effect—Part II: Physical Modeling and Impact on Neural Network Training

TL;DR: In this article, a physics-based phase-field multidomain switching model is used to understand the origin of ferroelectric partial switching, and a possible mitigation strategy is proposed.
References
Proceedings Article

A Massively Parallel Coprocessor for Convolutional Neural Networks

TL;DR: A massively parallel coprocessor for accelerating convolutional neural networks (CNNs), a class of important machine learning algorithms, is presented; it uses low-precision data and further increases the effective memory bandwidth by packing multiple words into every memory operation.
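As a rough illustration of the word-packing idea in this summary, the snippet below packs four 8-bit operands into each 32-bit word so that a single memory access delivers four values. The bit widths and the native-endian layout are assumptions for illustration, not the coprocessor's actual data format.

import numpy as np

def pack_u8_into_u32(values):
    # Pack groups of four 8-bit values into 32-bit words so a single
    # 32-bit memory access carries four operands.
    v = np.asarray(values, dtype=np.uint8)
    assert v.size % 4 == 0, "need a multiple of four values"
    return v.view(np.uint32)

def unpack_u32_into_u8(words):
    # Recover the individual 8-bit operands from the packed words.
    return np.asarray(words, dtype=np.uint32).view(np.uint8)

packed = pack_u8_into_u32([1, 2, 3, 4, 250, 251, 252, 253])
assert list(unpack_u32_into_u8(packed)) == [1, 2, 3, 4, 250, 251, 252, 253]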
Proceedings Article

Large-scale neural networks implemented with non-volatile memory as the synaptic weight element: Comparative performance analysis (accuracy, speed, and power)

TL;DR: It is shown that NVM-based systems could potentially offer faster and lower-power ML training than GPU-based hardware, despite the inherent random and deterministic imperfections of such devices.
Proceedings Article

14.1 A 2.9TOPS/W deep convolutional neural network SoC in FD-SOI 28nm for intelligent embedded systems

TL;DR: A booming number of computer vision, speech recognition, and signal processing applications are increasingly benefiting from the use of deep convolutional neural networks, with DCNNs significantly outperforming classical approaches for the first time.
Journal Article

BRein Memory: A Single-Chip Binary/Ternary Reconfigurable in-Memory Deep Neural Network Accelerator Achieving 1.4 TOPS at 0.6 W

TL;DR: In-memory neural network processing without any external data accesses, sustained by the symmetry and simplicity of binary/ternary neural network computation, improves energy efficiency dramatically.
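As a plain illustration of the "simplicity of computation" point above: with weights restricted to {-1, 0, +1}, a dot product needs only additions, subtractions, and skips (or XNOR-and-popcount in the fully binary case), so no multipliers are needed near the memory. The sketch below is generic and does not reproduce BRein Memory's datapath.

import numpy as np

def ternary_dot(activations, weights):
    # Multiplier-free dot product for ternary weights in {-1, 0, +1}:
    # add, subtract, or skip each activation.
    acc = 0
    for a, w in zip(activations, weights):
        if w == 1:
            acc += a
        elif w == -1:
            acc -= a
    return acc

x = np.array([3, -1, 4, 2])
w = np.array([1, 0, -1, 1])
assert ternary_dot(x, w) == int(x @ w)  # matches an ordinary dot product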