Author

Giyong Yang

Bio: Giyong Yang is an academic researcher from Samsung. The author has contributed to research in topics: Static random-access memory & Low-power electronics. The author has an h-index of 5 and has co-authored 6 publications receiving 231 citations.

Papers
Proceedings ArticleDOI
06 Mar 2014
TL;DR: This paper presents 14nm FinFET-based 128Mb 6T SRAM chips featuring low VMIN, achieved with newly developed peripheral-assist techniques that overcome the bitcell challenges to high yield.
Abstract: With the explosive growth of battery-operated portable devices, the demand for low power and small size has been increasing for system-on-a-chip (SoC) designs. The FinFET is considered one of the most promising technologies for future low-power mobile applications because of its good scalability, high on-current, better short-channel-effect (SCE) control and subthreshold slope, and small leakage current [1]. As a key approach to low power, supply-voltage (VDD) scaling has been widely used in SoC design. However, SRAM is the limiting factor for voltage scaling, since all SRAM functions of read, write, and hold stability are strongly affected by increased variation at low VDD, resulting in lower yield. In addition, the width-quantization property of the FinFET device reduces the design window for transistor sizing and increases the failure probability due to unoptimized bitcell sizing [1]. To overcome these bitcell challenges to high yield, peripheral-assist techniques are required. In this paper, we present 14nm FinFET-based 128Mb 6T SRAM chips featuring low VMIN with newly developed assist techniques.
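The voltage-scaling limitation described above comes from the statistics of very large arrays: even a tiny per-bit failure probability at low VDD is multiplied across hundreds of millions of bitcells. The short Python sketch below illustrates that argument with assumed, hypothetical per-bit failure probabilities; it is not data from the paper.

# Illustrative sketch: why SRAM limits VDD scaling in a large array.
# If each bitcell fails independently with probability p_fail at a given VDD,
# the chance that a 128 Mb array is fully functional drops sharply as p_fail
# grows, so per-bit failure rates must be kept extremely low (or assist
# circuits / repair must compensate).  All numbers here are assumptions.

def array_yield(p_fail: float, n_bits: int) -> float:
    """Probability that every one of n_bits cells works, assuming independent failures."""
    return (1.0 - p_fail) ** n_bits

n_bits = 128 * 1024 * 1024  # 128 Mb array

for p_fail in (1e-10, 1e-9, 1e-8):  # hypothetical per-bit failure probabilities
    print(f"p_fail = {p_fail:.0e}  ->  array yield ~ {array_yield(p_fail, n_bits):.3f}")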

113 citations

Journal ArticleDOI
TL;DR: Two 128 Mb dual-power-supply SRAM chips are fabricated in a 14 nm FinFET technology and the disturbance-noise reduction (DNR) scheme is proposed as a read-assist circuit to improve the VMIN of the high-performance SRAM.
Abstract: Two 128 Mb dual-power-supply SRAM chips are fabricated in a 14 nm FinFET technology. A 0.064 $\mu\text{m}^{2}$ and a 0.080 $\mu\text{m}^{2}$ 6T SRAM bitcell are designed for high-density (HD) and high-performance (HP) applications, respectively. To improve the $V_{\mathrm{MIN}}$ of the high-density SRAM, a negative bitline (NBL) scheme is adopted as a write-assist technique. The disturbance-noise reduction (DNR) scheme is then proposed as a read-assist circuit to improve the $V_{\mathrm{MIN}}$ of the high-performance SRAM. The 128 Mb 6T-HD SRAM test chip is fully demonstrated, featuring a 0.50 V $V_{\mathrm{MIN}}$ with a 200 mV improvement from the NBL, and the 128 Mb 6T-HP achieves a 0.47 V $V_{\mathrm{MIN}}$ with a 40 mV improvement from the DNR. The improved $V_{\mathrm{MIN}}$ reduces the power consumption of the SRAM macro by 45.4% and 12.2% with the help of each assist circuit, respectively.
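As a rough illustration of why the reported VMIN gains translate into such large macro-level power savings, the sketch below assumes active power scales roughly as VDD squared. The baseline voltages are assumptions inferred from the reported improvements (0.50 V after a 200 mV gain for HD, 0.47 V after a 40 mV gain for HP); the paper's measured 45.4% and 12.2% figures also reflect effects this toy model ignores.

# Back-of-envelope sketch (assumed model, not the paper's methodology):
# active power ~ VDD^2, so a lower VMIN gives a quadratic power saving.

def power_saving(v_old: float, v_new: float) -> float:
    """Fractional reduction in active power assuming P is proportional to VDD^2."""
    return 1.0 - (v_new / v_old) ** 2

# Baseline voltages inferred from the reported VMIN improvements (assumptions)
print(f"HD SRAM with NBL assist: ~{power_saving(0.70, 0.50):.1%} saving (45.4% reported)")
print(f"HP SRAM with DNR assist: ~{power_saving(0.51, 0.47):.1%} saving (12.2% reported)")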

58 citations

Journal ArticleDOI
TL;DR: The various SRAM assist schemes are explored to evaluate the power, performance, and area (PPA) gain, and the figure of merit (FOM) is derived from the minimum operating voltage (VMIN) and the assist overheads.
Abstract: Two 128 Mb 6T SRAM test chips are implemented in a 10 nm FinFET technology. A 0.040 $\mu\text{m}^{2}$ 6T SRAM bitcell is designed for high density (HD), and a 0.049 $\mu\text{m}^{2}$ bitcell for high performance (HP). The various SRAM assist schemes are explored to evaluate the power, performance, and area (PPA) gain, and the figure of merit (FOM) is derived from the minimum operating voltage ($V_{\mathrm{MIN}}$) and the assist overheads. The dual-transient wordline scheme is proposed to improve the $V_{\mathrm{MIN}}$ by 475 mV for the 128 Mb 6T-HP SRAM. The suppressed bitline scheme with negative bitline improves the $V_{\mathrm{MIN}}$ by 135 mV for the 128 Mb 6T-HD SRAM. The FOM of PPA gain identifies the optimum SRAM assist for the different bitcells based on the target application.
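The exact figure of merit used in the paper is not reproduced in this abstract; the hypothetical sketch below only illustrates the idea of a PPA-style score that trades a VMIN gain against the area and delay overheads an assist circuit introduces. The overhead numbers are placeholders.

from dataclasses import dataclass

@dataclass
class AssistScheme:
    name: str
    vmin_gain_mv: float    # measured VMIN improvement (from the abstract)
    area_overhead: float   # fractional macro-area increase (placeholder)
    delay_overhead: float  # fractional access-time increase (placeholder)

def fom(s: AssistScheme) -> float:
    """Hypothetical PPA-style score: VMIN gain discounted by area and delay overheads."""
    return s.vmin_gain_mv / ((1.0 + s.area_overhead) * (1.0 + s.delay_overhead))

schemes = [
    AssistScheme("dual-transient wordline (6T-HP)", 475, 0.03, 0.02),
    AssistScheme("suppressed + negative bitline (6T-HD)", 135, 0.02, 0.01),
]
for s in sorted(schemes, key=fom, reverse=True):
    print(f"{s.name}: score ~ {fom(s):.0f}")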

57 citations

Proceedings ArticleDOI
25 Feb 2016
TL;DR: Assist-circuits are more crucial in a FinFET technology to improve VMIN, which in turn adds to the Power, Performance, and Area (PPA) gain of SRAM.
Abstract: The power consumption of a mobile application processor (AP) is strongly limited by the SRAM minimum operating voltage, VMIN [1], since the 6T bitcell must balance write-ability against bitcell stability. However, the SRAM VMIN scales down only gradually with advanced process nodes because of increased variability. This is especially evident with the quantized device width and limited process knobs of a FinFET technology, which have greatly affected SRAM design [2–4]. Therefore, assist circuits are even more crucial in a FinFET technology to improve VMIN, which in turn adds to the Power, Performance, and Area (PPA) gain of SRAM.

20 citations

Proceedings ArticleDOI
01 Feb 2017
TL;DR: A separate analysis of SRAM macro defect failures, in the bitcell and peripheral logic, provides a deeper understanding so as to increase the maximum repairable rate under random defect conditions.
Abstract: Conventional patterning techniques, such as self-aligned double patterning (SADP) and litho-etch-litho-etch (LELE), have paved the way for extreme ultraviolet (EUV) technology, which aims to reduce the number of photomask steps [1,2]. EUV adds extreme scaling to the high performance of FinFET technology, thus opening up new opportunities for system-on-chip designers: delivering power, performance, and area (PPA) competitiveness. In terms of area, peripheral logic has scaled down aggressively in comparison to the bitcell, given the intense design-rule shrinkage. Figure 12.2.1 shows the bitcell scaling trend and the peripheral logic unit area across different process nodes. Compared to the 10nm process node, the peripheral logic unit area is closer to the bitcell area in a 7nm process node aided by EUV, which allows bi-directional metal lines for scaling. Complex patterns and intensive scaling induce defective elements in the SRAM peripheral logic. Therefore, the probability of yield loss due to defects is high, which necessitates a repair scheme for the peripheral logic in addition to the SRAM bitcell. Despite the varied literature on bitcell repair, such as the built-in self-repair that analyzes the faulty bitcells to allocate the repair efficiently for a higher repairable rate [3], literature that discusses peripheral logic repair is sparse. Early literature [4] discusses the use of a sense amplifier, designed with redundancy, to address the sense-amplifier offset; nevertheless, it does not address peripheral logic repair for yield improvement. This paper exclusively addresses the peripheral logic repair issue to achieve a higher repairable rate. A separate analysis of SRAM macro defect failures, in the bitcell and the peripheral logic, provides a deeper understanding that helps increase the maximum repairable rate under random defect conditions.
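To make the repairable-rate argument concrete, the Monte Carlo sketch below compares bitcell-only repair with an additional peripheral-logic repair capability when random defects can land in either region. It is a toy model with assumed defect densities and redundancy, not the analysis from the paper.

import math
import random

def poisson(lam: float) -> int:
    """Draw a Poisson-distributed defect count (Knuth's algorithm)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

def repairable_rate(n_trials: int = 50_000,
                    mean_defects: float = 0.5,   # average defects per macro (assumed)
                    p_bitcell: float = 0.7,      # share of defects landing in the array (assumed)
                    spare_rows: int = 2,         # bitcell redundancy (assumed)
                    peripheral_repair: bool = False) -> float:
    """Fraction of macros that remain repairable under random defects (toy model)."""
    ok = 0
    for _ in range(n_trials):
        defects = poisson(mean_defects)
        in_array = sum(random.random() < p_bitcell for _ in range(defects))
        in_periphery = defects - in_array
        array_ok = in_array <= spare_rows
        periphery_ok = (in_periphery == 0) or peripheral_repair
        ok += array_ok and periphery_ok
    return ok / n_trials

random.seed(0)
print(f"bitcell repair only         : {repairable_rate():.3f}")
print(f"plus peripheral-logic repair: {repairable_rate(peripheral_repair=True):.3f}")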

8 citations


Cited by
Journal ArticleDOI
TL;DR: In this paper, the authors survey the recent progress in SRAM- and RRAM-based CIM macros that have been demonstrated in silicon and discuss general design challenges of CIM chips, including the analog-to-digital conversion bottleneck, variations in analog compute, and device non-idealities.
Abstract: Compute-in-memory (CIM) is a new computing paradigm that addresses the memory-wall problem in hardware accelerator design for deep learning. The input-vector and weight-matrix multiplication, i.e., the multiply-and-accumulate (MAC) operation, can be performed in the analog domain within the memory sub-array, leading to significant improvements in throughput and energy efficiency. Static random-access memory (SRAM) and emerging non-volatile memories such as resistive random-access memory (RRAM) are promising candidates for storing the weights of deep neural network (DNN) models. In this review, we first survey the recent progress in SRAM- and RRAM-based CIM macros that have been demonstrated in silicon. Then we discuss general design challenges of CIM chips, including the analog-to-digital conversion (ADC) bottleneck, variations in analog compute, and device non-idealities. Next, we introduce the DNN+NeuroSim benchmark framework, which is capable of evaluating versatile device technologies for CIM inference and training performance from a software/hardware co-design perspective.
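As a conceptual illustration of the ADC bottleneck mentioned above, the toy model below computes an ideal column-wise MAC and then quantizes the result to a given ADC resolution; the weight and activation encodings are assumptions, not a description of any specific macro covered in the review.

import numpy as np

def cim_mac(weights: np.ndarray, inputs: np.ndarray, adc_bits: int) -> np.ndarray:
    """Ideal per-column MAC followed by uniform quantization modelling a column ADC."""
    ideal = weights.T @ inputs                                        # one accumulation per bitline
    full_scale = np.abs(weights).sum(axis=0) * np.abs(inputs).max() + 1e-12
    step = 2 * full_scale / (2 ** adc_bits)                           # uniform ADC step per column
    return np.round(ideal / step) * step

rng = np.random.default_rng(0)
W = rng.integers(-1, 2, size=(128, 64)).astype(float)                # ternary weights (assumed encoding)
x = rng.integers(0, 2, size=128).astype(float)                       # binary input activations (assumed)

for bits in (4, 6, 8):
    err = np.abs(cim_mac(W, x, bits) - W.T @ x).mean()
    print(f"{bits}-bit ADC: mean |quantization error| ~ {err:.3f}")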

94 citations

Journal ArticleDOI
TL;DR: The various SRAM assist schemes are explored to evaluate the power, performance, and area (PPA) gain, and the figure of merit (FOM) is derived from the minimum operating voltage (VMIN) and the assist overheads.
Abstract: Two 128 Mb 6T SRAM test chips are implemented in a 10 nm FinFET technology. A 0.040 $\mu\text{m}^{2}$ 6T SRAM bitcell is designed for high density (HD), and a 0.049 $\mu\text{m}^{2}$ bitcell for high performance (HP). The various SRAM assist schemes are explored to evaluate the power, performance, and area (PPA) gain, and the figure of merit (FOM) is derived from the minimum operating voltage ($V_{\mathrm{MIN}}$) and the assist overheads. The dual-transient wordline scheme is proposed to improve the $V_{\mathrm{MIN}}$ by 475 mV for the 128 Mb 6T-HP SRAM. The suppressed bitline scheme with negative bitline improves the $V_{\mathrm{MIN}}$ by 135 mV for the 128 Mb 6T-HD SRAM. The FOM of PPA gain identifies the optimum SRAM assist for the different bitcells based on the target application.

57 citations

Proceedings ArticleDOI
Soonyoung Lee, Il-gon Kim, Sungmock Ha, Cheong-sik Yu, Jinhyun Noh, Sangwoo Pae, Jongwoo Park
19 Apr 2015
TL;DR: In this paper, two different SRAM cells, high-performance (HP) and high-density (HD), were irradiated with alpha particles, thermal neutrons, and high-energy neutrons.
Abstract: The radiation-induced soft error rate (SER) of SRAM built in a 14nm FinFET-on-bulk technology was extensively characterized. Two different SRAM cells, high-performance (HP) and high-density (HD), were irradiated with alpha particles, thermal neutrons, and high-energy neutrons. Empirical results reveal excellent SER performance of FinFET compared to prior technology nodes, drastically reducing the SER FIT rate by 5–10X. It is found that the HP cell is more sensitive to single-event upsets than the HD cell design. We discuss the effects of charge-collection efficiency as one of the major parameters and present supporting simulation results.
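For readers unfamiliar with the FIT unit used above (failures per 10^9 device-hours), the short sketch below shows how a per-megabit FIT rate translates into expected soft errors for a fleet of devices. The absolute FIT values are hypothetical placeholders, not the measured numbers, which the abstract reports only as a 5–10X reduction.

# Hypothetical FIT arithmetic (the absolute FIT values below are placeholders,
# not measured data).  FIT = failures per 1e9 device-hours.

def expected_upsets(fit_per_mb: float, mbits: float, devices: int, hours: float) -> float:
    """Expected number of soft errors across a device population."""
    return fit_per_mb * mbits * devices * hours / 1e9

mbits, devices, hours = 128, 1_000_000, 24 * 365   # 128 Mb SRAM, 1M devices, one year

planar_fit = 500                 # assumed FIT/Mb for a prior planar node (placeholder)
finfet_fit = planar_fit / 7.5    # roughly the 5-10X reduction reported

print(f"prior node : ~{expected_upsets(planar_fit, mbits, devices, hours):,.0f} upsets/year")
print(f"14nm FinFET: ~{expected_upsets(finfet_fit, mbits, devices, hours):,.0f} upsets/year")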

57 citations

Journal ArticleDOI
TL;DR: Trends in the design of devices and circuits for on-chip nonvolatile memory using memristive devices, as well as the challenges faced by researchers in its further development, are examined.
Abstract: Memristive devices have shown considerable promise for on-chip nonvolatile memory and computing circuits in energy-efficient systems. However, this technology is limited with regard to speed, power, VDDmin, and yield due to process variation in transistors and memristive devices, as well as the issue of read disturbance. This paper examines trends in the design of devices and circuits for on-chip nonvolatile memory using memristive devices, as well as the challenges faced by researchers in its further development. Several silicon-verified examples of circuitry are reviewed in this paper, including those aimed at high-speed, area-efficient, and low-voltage applications.

51 citations

Proceedings ArticleDOI
Shreesh Narasimha, Basanth Jagannathan, A. Ogino, Daniel Jaeger, +150 more (1 institution)
01 Dec 2017
TL;DR: A fully integrated 7nm CMOS platform featuring a 3rd generation finFET architecture, SAQP for fin formation, and SADP for BEOL metallization, designed to enable both High Performance Compute (HPC) and mobile applications.
Abstract: We present a fully integrated 7nm CMOS platform featuring a 3rd-generation FinFET architecture, SAQP for fin formation, and SADP for BEOL metallization. This technology delivers a 2.8X improvement in routed logic density and >40% higher performance over the 14nm reference technology described in [1-3]. A full range of Vts is enabled on-chip through a unique multi-workfunction process, enabling both excellent low-voltage SRAM response and highly scaled memory area simultaneously. The HD 6T bitcell size is 0.0269 $\mu\text{m}^{2}$. This 7nm technology is fully enabled by immersion lithography and advanced optical patterning techniques (such as SAQP and SADP). However, the technology platform is also designed to leverage EUV insertion for specific multi-patterned (MP) levels for cycle-time benefit and manufacturing efficiency. A complete set of foundation and complex IP is available in this advanced CMOS platform to enable both High Performance Compute (HPC) and mobile applications.

50 citations