
Showing papers by "Shashank Gupta published in 2013"


Journal ArticleDOI
TL;DR: In this article, different physics-based negative bias temperature instability (NBTI) models as proposed in the literature are reviewed, and the predictive capability of these models is benchmarked against experimental data.
Abstract: Different physics-based negative bias temperature instability (NBTI) models as proposed in the literature are reviewed, and the predictive capability of these models is benchmarked against experimental data. Models that focus exclusively on hole trapping in gate-insulator-process-related preexisting traps are found to be inconsistent with direct experimental evidence of interface trap generation. Models that focus exclusively on interface trap generation are incapable of predicting ultrafast measurement data. Models that assume strong correlation between interface trap generation and hole trapping in switching hole traps cannot simultaneously predict long-time dc stress, recovery, and ac stress and cannot estimate gate insulator process impact. Uncorrelated contributions from generation and recovery of interface traps, together with hole trapping and detrapping in preexisting and newly generated bulk insulator traps, are invoked to comprehensively predict dc stress and recovery, ac duty cycle and frequency, and gate insulator process impact of NBTI. The reaction-diffusion model can accurately predict generation and recovery of interface traps for different devices and experimental conditions. Hole trapping/detrapping is modeled using a two-level energy well model.
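The reaction-diffusion model referenced above predicts a power-law growth of interface trap density under dc stress. As a hedged illustration (the prefactor and the H2-diffusion exponent of ~1/6 are textbook values, not fitted to this paper's data):

```python
# Hedged sketch: the reaction-diffusion (R-D) model predicts power-law
# growth of interface trap density under dc NBTI stress,
#   Delta_N_IT ~ A * t^n,  with n ~ 1/6 for H2 diffusion.
# The prefactor A and exponent n below are illustrative, not from the paper.
def rd_interface_traps(t_stress_s, prefactor=1e10, exponent=1.0 / 6.0):
    """Generated interface trap density (cm^-2) after t_stress_s seconds of dc stress."""
    return prefactor * t_stress_s ** exponent
```

With n = 1/6, a 64x increase in stress time only doubles the generated trap density, which is why long-time extrapolation is so sensitive to the extracted exponent.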

266 citations


Journal ArticleDOI
TL;DR: In this article, the metal-induced-gap-states model for Fermi-level pinning in metal-semiconductor contacts is extended to metal-interfacial-layer (IL)-semiconductor (MIS) contacts using a physics-based approach.
Abstract: Metal-induced-gap-states model for Fermi-level pinning in metal-semiconductor contacts has been extended to metal-interfacial layer (IL)-semiconductor (MIS) contacts using a physics-based approach. Contact resistivity simulations evaluating various ILs on n-Ge indicate the possibility of forming low resistance contacts using TiO2, ZnO, and Sn-doped In2O3 (ITO) layers. Doping of the IL is proposed as an additional knob for lowering MIS contact resistance. This is demonstrated through simulations and experimentally verified with circular-transfer length method and diode measurements on Ti/n+-ZnO/n-Ge and Ti/ITO/n-Ge MIS contacts.
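The leverage of the IL comes from the exponential dependence of contact resistivity on the effective barrier height. A hedged illustration (thermionic-emission limit only; the barrier values and the neglect of tunneling-resistance terms are assumptions, not the paper's full simulation):

```python
import math

# Hedged illustration: in the thermionic-emission limit, contact resistivity
# scales roughly as rho_c ~ exp(q * phi_b / (k * T)), so depinning the Fermi
# level with an interfacial layer (lowering phi_b) reduces rho_c
# exponentially. Barrier heights are illustrative, not from the paper;
# the IL's own tunneling resistance is ignored here.
K_B_EV = 8.617e-5  # Boltzmann constant, eV/K

def resistivity_ratio(phi_b_ms_eV, phi_b_mis_eV, temp_K=300.0):
    """Ratio rho_c(MS) / rho_c(MIS) from the exponential barrier term alone."""
    return math.exp((phi_b_ms_eV - phi_b_mis_eV) / (K_B_EV * temp_K))
```

Even a 0.3 eV barrier reduction at room temperature changes the exponential term by roughly five orders of magnitude, which is why IL choice and IL doping are such effective knobs.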

97 citations


Proceedings ArticleDOI
21 Feb 2013
TL;DR: This work proposes integrating Ge into the EDFinFET architecture, in which Ge (or SiGe) is grown on top of a Si fin; the proposed structure shows a 10× reduction in LER-based VT variability compared to FinFETs and can enable multiple VT values simply by applying a bias at the body terminal.
Abstract: Band-to-band tunneling (BTBT) is a major challenge in Ge FinFETs due to Ge's smaller band gap. Narrow fin widths reduce BTBT due to quantum confinement (QC). However, Line Edge Roughness (LER) on narrower fins causes large VT variability. Previously, we proposed an architecture named Epitaxially Defined (ED) FinFET to reduce VT variability due to LER, wherein channel depletion is defined by a low-doped, highly uniform epitaxial layer (hence the name Epi Defined FinFET; epi-thickness non-uniformity < 2%) grown over a thick, highly doped Si fin, instead of by lithography-based patterning subject to LER (non-uniformity < 50%, i.e., 2 nm LER on a 4 nm fin). In the present work, we propose integration of Ge into the EDFinFET architecture, in which Ge (or SiGe) is grown on top of the Si fin. The proposed structure shows a 10× reduction in LER-based VT variability in comparison to FinFETs. Valence band QC in the gate oxide/Ge/Si stack is used to control BTBT. Biaxial stress in the thin Ge layer epitaxially grown on Si results in 27% higher ION. The required Ge film is thinner than the critical defect-free thickness of Ge epitaxy on Si; hence, defect-free Ge integration into the FinFET architecture is enabled. We also show that EDFinFET can enable multiple VT simply by the application of a bias at the body terminal.
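The variability argument above is first-order: sigma_VT scales with the VT sensitivity to a dimension times that dimension's spread. A hedged sketch (the sensitivity value is illustrative, not extracted from the paper):

```python
# Hedged sketch: first-order VT variability from geometric non-uniformity,
#   sigma_VT ~ |dVT/dt| * t_nominal * (fractional non-uniformity).
# Replacing a lithography-defined fin (sigma ~ 50% of a 4 nm fin) with a
# highly uniform epi layer (non-uniformity < 2%) shrinks sigma_VT roughly
# in proportion. The 50 mV/nm sensitivity below is illustrative only.
def sigma_vt_mV(dvt_dt_mV_per_nm, nominal_nm, nonuniformity_frac):
    """First-order std-dev of VT (mV) due to one dimension's spread."""
    return abs(dvt_dt_mV_per_nm) * nominal_nm * nonuniformity_frac
```

At equal sensitivity, the 50% vs. 2% spread alone gives a ~25× reduction; the paper's reported 10× figure reflects the full device physics rather than this first-order estimate.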

7 citations


Proceedings ArticleDOI
01 Dec 2013
TL;DR: This work addresses the new and important task of annotating token spans in billions of Web pages that mention named entities from a large entity catalog such as Wikipedia or Freebase, and designed simple but effective application-specific load estimation and key-splitting methods.
Abstract: Cloud computing frameworks such as map-reduce (MR) are widely used in the context of log mining, inverted indexing, and scientific data analysis. Here we address the new and important task of annotating token spans in billions of Web pages that mention named entities from a large entity catalog such as Wikipedia or Freebase. The key step in annotation is disambiguation: given the token Albert, use its mention context to determine which Albert is being mentioned. Disambiguation requires holding in RAM a machine-learnt statistical model for each mention phrase. In earlier work with only two million entities, we could fit all models in RAM, and stream rapidly through the corpus from disk. However, as the catalog grows to hundreds of millions of entities, this simple solution is no longer feasible. Simple adaptations like caching and evicting models online, or making multiple passes over the corpus while holding a fraction of models in RAM, showed unacceptable performance. Then we attempted to write a standard Hadoop MR application, but this hit a serious load skew problem (82.12% idle CPU). Skew in MR applications seems widespread. Many skew mitigation approaches have been proposed recently. We tried SkewTune, which showed only modest improvement. We realized that reduce key splitting was essential, and designed simple but effective application-specific load estimation and key-splitting methods. A precise performance model was first created, which led to an objective function that we optimized heuristically. The resulting schedule was executed on Hadoop MR. This approach led to large benefits: our final annotator was 5.4× faster than standard Hadoop MR, and 5.2× faster than even SkewTune. Idle time was reduced to 3%. Although fine-tuned to our application, our technique may be of independent interest.
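The core idea of key splitting can be sketched independently of the paper's exact performance model. A hedged, minimal version (this greedy split-then-pack heuristic is an illustration of the technique, not the authors' algorithm; `schedule` and its inputs are hypothetical names):

```python
import heapq
import math

# Hedged sketch of reduce-key splitting for skew mitigation (not the
# authors' exact method): given an estimated load per reduce key, split any
# key heavier than the average per-reducer load into near-equal shards,
# then pack shards onto reducers greedily, heaviest first (LPT heuristic),
# always onto the currently least-loaded reducer.
def schedule(key_loads, n_reducers):
    """Return the sorted per-reducer total loads after splitting + packing."""
    cap = sum(key_loads.values()) / n_reducers  # ideal per-reducer load
    shards = []
    for key, load in key_loads.items():
        n_shards = max(1, math.ceil(load / cap))  # split only heavy keys
        shards.extend([load / n_shards] * n_shards)
    shards.sort(reverse=True)                     # heaviest shards first
    reducers = [0.0] * n_reducers                 # min-heap of reducer loads
    heapq.heapify(reducers)
    for s in shards:
        least = heapq.heappop(reducers)           # least-loaded reducer
        heapq.heappush(reducers, least + s)
    return sorted(reducers)
```

For example, with key loads {a: 82, b: 9, c: 9} and two reducers, the unsplit makespan is 82 (one reducer sits mostly idle, mirroring the 82.12% idle-CPU symptom), while splitting `a` into two 41-unit shards yields a perfectly balanced 50/50 schedule.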

5 citations