Author

Halid Mulaosmanovic

Other affiliations: Polytechnic University of Milan
Bio: Halid Mulaosmanovic is an academic researcher from Dresden University of Technology. The author has contributed to research on the topics of non-volatile memory and ferroelectricity, has an h-index of 18, and has co-authored 65 publications receiving 1,512 citations. Previous affiliations of Halid Mulaosmanovic include the Polytechnic University of Milan.

Papers published on a yearly basis

Papers
Proceedings ArticleDOI
01 Dec 2017
TL;DR: This work shows the implementation of a ferroelectric field effect transistor (FeFET) based eNVM solution in a leading-edge 22nm FDSOI CMOS technology, making FeFET-based eNVM a viable choice for low-cost and low-power IoT applications in 22nm and beyond technology nodes.
Abstract: We show the implementation of a ferroelectric field effect transistor (FeFET) based eNVM solution in a leading-edge 22nm FDSOI CMOS technology. Memory windows of 1.5 V are demonstrated in aggressively scaled FeFET cells with an area as small as 0.025 μm². At this point, program/erase endurance of up to 10^5 cycles is supported. Complex patterns are written into 32 Mbit arrays using ultrafast program/erase pulses in the 10 ns range at 4.2 V. High-temperature retention up to 300 °C is achieved. This makes FeFET-based eNVM a viable choice for overall low-cost and low-power IoT applications in 22nm and beyond technology nodes.

306 citations

Proceedings ArticleDOI
01 Dec 2016
TL;DR: The FeFET's unique properties make it the best candidate for eNVM solutions in sub-2x nm technologies for low-cost IoT applications.
Abstract: We successfully implemented a one-transistor (1T) ferroelectric field effect transistor (FeFET) eNVM into a 28nm gate-first super low power (28SLP) CMOS technology platform using two additional structural masks. The electrical baseline properties remain unchanged with the FeFET integration, and the JTAG-controlled 64 kbit memory shows clearly separated states. High-temperature retention up to 250 °C is demonstrated and endurance up to 10^5 cycles was achieved. The FeFET's unique properties make it the best candidate for eNVM solutions in sub-2x nm technologies for low-cost IoT applications.

276 citations

Journal ArticleDOI
TL;DR: It is demonstrated that the switching of single domains can be directly observed in ultrascaled ferroelectric field effect transistors, suggesting opportunities for hafnium oxide based ferroelectrics in nonvolatile memory devices.
Abstract: The recent discovery of ferroelectricity in thin hafnium oxide films has led to a resurgence of interest in ferroelectric memory devices. Although both experimental and theoretical studies on this new ferroelectric system have been undertaken, much remains to be unveiled regarding its domain landscape and switching kinetics. Here we demonstrate that the switching of single domains can be directly observed in ultrascaled ferroelectric field effect transistors. Using models of ferroelectric domain nucleation we explain the time, field and temperature dependence of polarization reversal. A simple stochastic model is proposed as well, relating nucleation processes to the observed statistical switching behavior. Our results suggest novel opportunities for hafnium oxide based ferroelectrics in nonvolatile memory devices.
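The stochastic switching picture described in this abstract can be illustrated with a minimal Monte Carlo sketch. This is not the authors' model: the Merz-type field dependence, all parameter values, and the function names below are illustrative assumptions, showing only the general idea of memoryless domain nucleation driving statistical switching.

```python
import math
import random

def nucleation_time_constant(field, tau0=1e-9, activation_field=2.0, alpha=3.0):
    """Mean nucleation time with a Merz-type field dependence (illustrative
    parameters; field and activation_field in arbitrary consistent units)."""
    return tau0 * math.exp((activation_field / field) ** alpha)

def switching_probability(pulse_width, field):
    """Probability that a single domain nucleates within one voltage pulse,
    assuming memoryless (Poisson) nucleation statistics."""
    tau = nucleation_time_constant(field)
    return 1.0 - math.exp(-pulse_width / tau)

def simulate_switching(trials, pulse_width, field, seed=0):
    """Monte Carlo estimate of the fraction of single-domain devices that
    switch, mimicking the statistical switching seen across FeFET cells."""
    rng = random.Random(seed)
    p = switching_probability(pulse_width, field)
    return sum(rng.random() < p for _ in range(trials)) / trials
```

Longer pulses or higher fields both raise the switching probability, reproducing the time and field dependence of polarization reversal in qualitative form only.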

255 citations

Proceedings ArticleDOI
05 Jun 2017
TL;DR: This work presents, for the first time, a synapse based on a single ferroelectric FET (FeFET) integrated in a 28nm HKMG technology, with hafnium oxide as the ferroelectric and a resistive element in series.
Abstract: A compact nanoscale device emulating the functionality of biological synapses is an essential element for neuromorphic systems. Here we present for the first time a synapse based on a single ferroelectric FET (FeFET) integrated in a 28nm HKMG technology, with hafnium oxide as the ferroelectric and a resistive element in series. The gradual and non-volatile ferroelectric switching is exploited to mimic the synaptic weight. We demonstrate both spike-timing dependent plasticity (STDP) and signal transmission, and discuss the effect of the spike properties and circuit design on STDP.
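As a rough illustration of the pair-based STDP rule such a synapse is meant to implement, here is a minimal behavioral sketch. The exponential time windows, the parameter values, and the `FeFETSynapse` abstraction are hypothetical, not taken from the paper.

```python
import math

def stdp_weight_update(dt, a_plus=0.01, a_minus=0.012,
                       tau_plus=20e-3, tau_minus=20e-3):
    """Pair-based STDP: dt = t_post - t_pre in seconds. A pre-before-post
    pair (dt > 0) potentiates; post-before-pre (dt < 0) depresses."""
    if dt >= 0:
        return a_plus * math.exp(-dt / tau_plus)
    return -a_minus * math.exp(dt / tau_minus)

class FeFETSynapse:
    """Synaptic weight stored as one bounded analog state variable, loosely
    mimicking gradual ferroelectric polarization switching (hypothetical)."""
    def __init__(self, w=0.5, w_min=0.0, w_max=1.0):
        self.w, self.w_min, self.w_max = w, w_min, w_max

    def apply_spike_pair(self, t_pre, t_post):
        """Update and return the weight for one pre/post spike pair."""
        dw = stdp_weight_update(t_post - t_pre)
        self.w = min(self.w_max, max(self.w_min, self.w + dw))
        return self.w
```

In the device itself, the sign and amplitude of the update come from how the overlapping pre- and post-spike waveforms shape the voltage across the ferroelectric gate, which is what the paper analyzes.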

201 citations

Journal ArticleDOI
TL;DR: It is shown that, by carefully shaping electrical excitations based on the particular nucleation-limited switching kinetics of the ferroelectric layer, further neuronal behaviors can be emulated, such as firing-activity tuning, an arbitrary refractory period and the leaky effect.
Abstract: The neuron is the basic computing unit in brain-inspired neural networks. Although a multitude of excellent artificial neurons realized with conventional transistors have been proposed, they might not be energy and area efficient in large-scale networks. The recent discovery of ferroelectricity in hafnium oxide (HfO2) and the related switching phenomena at the nanoscale might provide a solution. This study employs the newly reported accumulative polarization reversal in nanoscale HfO2-based ferroelectric field-effect transistors (FeFETs) to implement two key neuronal dynamics: the integration of action potentials and the subsequent firing according to the biologically plausible all-or-nothing law. We show that, by carefully shaping electrical excitations based on the particular nucleation-limited switching kinetics of the ferroelectric layer, further neuronal behaviors can be emulated, such as firing-activity tuning, an arbitrary refractory period and the leaky effect. Finally, we discuss the advantages of an FeFET-based neuron, highlighting its transferability to advanced scaling technologies and the beneficial impact it may have in reducing the complexity of neuromorphic circuits.
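The integrate-and-fire behavior described above can be abstracted into a short behavioral model. Mapping the accumulated polarization to a single normalized state variable, and all parameter values, are illustrative assumptions, not device data.

```python
class AccumulativeFeFETNeuron:
    """Leaky integrate-and-fire neuron abstracting accumulative polarization
    reversal: each excitation pulse switches a further fraction of domains,
    and the neuron fires (all-or-nothing) once the accumulated fraction
    crosses a threshold. All parameter values are illustrative."""
    def __init__(self, increment=0.15, leak=0.02, threshold=1.0):
        self.state = 0.0            # normalized switched-polarization fraction
        self.increment = increment  # fraction switched per input pulse
        self.leak = leak            # relaxation between pulses (leaky effect)
        self.threshold = threshold  # firing threshold (all-or-nothing law)

    def step(self, pulse=True):
        """Advance one time step; return True if the neuron fires."""
        self.state = max(0.0, self.state - self.leak)   # leak first
        if pulse:
            self.state += self.increment                # integrate the pulse
        if self.state >= self.threshold:
            self.state = 0.0    # reset: polarization switched fully back
            return True
        return False
```

With these illustrative numbers the neuron silently integrates a train of identical pulses, fires on the eighth, and resets; tuning the pulse shape (here, `increment` and `leak`) is the knob the paper uses to adjust firing activity.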

151 citations


Cited by
Journal ArticleDOI
01 Jun 2018
TL;DR: This Review Article examines the development of in-memory computing using resistive switching devices, where the two-terminal structure of the devices, their resistive switching properties, and direct data processing in the memory can enable area- and energy-efficient computation.
Abstract: Modern computers are based on the von Neumann architecture in which computation and storage are physically separated: data are fetched from the memory unit, shuttled to the processing unit (where computation takes place) and then shuttled back to the memory unit to be stored. The rate at which data can be transferred between the processing unit and the memory unit represents a fundamental limitation of modern computers, known as the memory wall. In-memory computing is an approach that attempts to address this issue by designing systems that compute within the memory, thus eliminating the energy-intensive and time-consuming data movement that plagues current designs. Here we review the development of in-memory computing using resistive switching devices, where the two-terminal structure of the devices, their resistive switching properties, and direct data processing in the memory can enable area- and energy-efficient computation. We examine the different digital, analogue, and stochastic computing schemes that have been proposed, and explore the microscopic physical mechanisms involved. Finally, we discuss the challenges that in-memory computing faces in delivering next-generation computing, including the required scaling characteristics.
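The core in-memory primitive this review centers on, analog matrix-vector multiplication in a resistive crossbar, can be sketched in a few lines under ideal-device assumptions (no wire resistance, sneak paths, or device variation):

```python
def crossbar_mvm(conductances, voltages):
    """Analog matrix-vector multiply on an idealized resistive crossbar:
    each column current is the sum over rows of (row voltage x cell
    conductance), i.e. Ohm's law plus Kirchhoff's current law.
    conductances[i][j]: device at row i, column j (siemens);
    voltages[i]: voltage applied to row i (volts).
    Returns the list of column currents (amperes)."""
    n_rows = len(voltages)
    n_cols = len(conductances[0])
    return [sum(voltages[i] * conductances[i][j] for i in range(n_rows))
            for j in range(n_cols)]
```

The multiply-accumulate thus happens in the memory array itself in one step, which is exactly the data movement the memory wall argument says dominates conventional designs.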

1,193 citations

Journal ArticleDOI
TL;DR: This Review provides a broad overview of memory devices and the key computational primitives they enable, as well as their applications spanning scientific computing, signal processing, optimization, machine learning, deep learning and stochastic computing.
Abstract: Traditional von Neumann computing systems involve separate processing and memory units. However, data movement is costly in terms of time and energy, and this problem is aggravated by the recent explosive growth in highly data-centric applications related to artificial intelligence. This calls for a radical departure from the traditional systems, and one such non-von Neumann computational approach is in-memory computing, whereby certain computational tasks are performed in place in the memory itself by exploiting the physical attributes of the memory devices. Both charge-based and resistance-based memory devices are being explored for in-memory computing. In this Review, we provide a broad overview of the key computational primitives enabled by these memory devices as well as their applications spanning scientific computing, signal processing, optimization, machine learning, deep learning and stochastic computing.

841 citations

Journal ArticleDOI
23 Jan 2018
TL;DR: This comprehensive review summarizes the state of the art, challenges, and prospects of neuro-inspired computing with emerging nonvolatile memory devices, and presents a device-circuit-algorithm codesign methodology to evaluate the impact of nonideal device effects on system-level performance.
Abstract: This comprehensive review summarizes the state of the art, challenges, and prospects of neuro-inspired computing with emerging nonvolatile memory devices. First, we discuss the demand for developing neuro-inspired architectures beyond today's von Neumann architecture. Second, we summarize the various approaches to designing neuromorphic hardware (digital versus analog, spiking versus nonspiking, online training versus offline training) and discuss why emerging nonvolatile memory is attractive for implementing the synapses in the neural network. Then, we discuss the desired device characteristics of the synaptic devices (e.g., multilevel states, weight update nonlinearity/asymmetry, variation/noise), and survey a few representative material systems and device prototypes reported in the literature that show analog conductance tuning. These candidates include phase change memory, resistive memory, ferroelectric memory, floating-gate transistors, etc. Next, we introduce the crossbar array architecture to accelerate the weighted sum and weight update operations that are commonly used in neuro-inspired machine learning algorithms, and review the recent progress of array-level experimental demonstrations for pattern recognition tasks. In addition, we discuss peripheral neuron circuit design issues and present a device-circuit-algorithm codesign methodology to evaluate the impact of nonideal device effects on system-level performance (e.g., learning accuracy). Finally, we give an outlook on the customization of learning algorithms for efficient hardware implementation.
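The weight-update nonlinearity and asymmetry flagged in the abstract are often modeled with soft-bounded conductance updates. The sketch below is one such illustrative model; the function names, rates, and bounds are assumptions, not fitted to any device in the review.

```python
def potentiate(g, g_max=1.0, alpha=0.1):
    """Soft-bounded (nonlinear) potentiation: the conductance step shrinks
    as g approaches g_max, a commonly reported nonideality of analog
    synaptic devices. Illustrative model, not fitted to any device."""
    return g + alpha * (g_max - g)

def depress(g, g_min=0.0, alpha=0.15):
    """Soft-bounded depression with a different rate than potentiation,
    modeling update asymmetry."""
    return g - alpha * (g - g_min)
```

Repeated calls trace the familiar saturating potentiation and depression curves, and the mismatch between the two rates is the asymmetry that degrades learning accuracy in the codesign evaluations the review describes.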

730 citations

Journal ArticleDOI
TL;DR: A comprehensive review on emerging artificial neuromorphic devices and their applications is offered, showing that anion/cation migration-based memristive devices, phase-change, and spintronic synapses are quite mature and possess excellent stability as memory devices, yet they still suffer from challenges in weight-update linearity and symmetry.
Abstract: The rapid development of information technology has led to urgent requirements for high efficiency and ultralow power consumption. In the past few decades, neuromorphic computing has drawn extensive attention due to its promising capability in processing massive data with extremely low power consumption. Here, we offer a comprehensive review on emerging artificial neuromorphic devices and their applications. In light of the inner physical processes, we classify the devices into nine major categories and discuss their respective strengths and weaknesses. We will show that anion/cation migration-based memristive devices, phase-change, and spintronic synapses are quite mature and possess excellent stability as memory devices, yet they still suffer from challenges in weight-update linearity and symmetry. Meanwhile, the recently developed electrolyte-gated synaptic transistors have demonstrated outstanding energy efficiency, linearity, and symmetry, but their stability and scalability still need to be optimized. Other emerging synaptic structures, such as ferroelectric, metal–insulator-transition based, photonic, and purely electronic devices, also have limitations in some aspects, therefore leading to the need to further develop high-performance synaptic devices. Additional efforts are also needed to enhance the functionality of artificial neurons while maintaining a relatively low cost in area and power, and it will be of significance to explore the intrinsic neuronal stochasticity in computing and to optimize their driving capability, etc. Finally, by looking into the correlations between the operation mechanisms, material systems, device structures, and performance, we provide clues for future material selections, device designs, and integrations of artificial synapses and neurons.

373 citations