Proceedings ArticleDOI

Modeling and analysis of manufacturing variations

Sani R. Nassif
09 May 2001, pp. 223-228
TL;DR: In this article, the authors examine the sources and trends of process variability and the new analysis challenges posed by increasing within-die variability, and propose a modeling and simulation methodology to deal with this variability.
Abstract: Process-induced variations are an important consideration in the design of integrated circuits. Until recently, it was sufficient to model die-to-die shifts in device performance, leading to the well known worst-case modeling and design methodology. However, current and near-future integrated circuits are large enough that device and interconnect parameter variations within the chip are as important as those same variations from chip to chip. This presents a new set of challenges for process modeling and characterization and for the associated design tools and methodologies. This paper examines the sources and trends of process variability, the new challenges associated with the increase in within-die variability analysis, and proposes a modeling and simulation methodology to deal with this variability.
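
To illustrate the distinction the abstract draws between die-to-die and within-die variation, here is a minimal Monte Carlo sketch in Python. It is not taken from the paper; the chosen parameter, its nominal value, and both sigma values are hypothetical placeholders.

```python
# Illustrative sketch (not from the paper): sampling a device parameter with
# separate die-to-die (D2D) and within-die (WID) variation components.
# All numbers below are made-up placeholders.
import numpy as np

rng = np.random.default_rng(0)

L_NOM = 100e-9       # nominal gate length [m] (hypothetical)
SIGMA_D2D = 3e-9     # die-to-die standard deviation (hypothetical)
SIGMA_WID = 2e-9     # within-die standard deviation (hypothetical)

n_dies, n_devices = 200, 1000

# One shared shift per die, plus an independent shift per device on that die.
d2d = rng.normal(0.0, SIGMA_D2D, size=(n_dies, 1))
wid = rng.normal(0.0, SIGMA_WID, size=(n_dies, n_devices))
length = L_NOM + d2d + wid

# Worst-case (corner) modeling captures only the die-to-die spread; the
# within-die spread shows up as device-to-device mismatch inside each die.
print("total sigma   :", length.std())
print("per-die sigma :", length.std(axis=1).mean())   # ~ within-die component
print("die-mean sigma:", length.mean(axis=1).std())   # ~ die-to-die component
```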
Citations
Journal ArticleDOI
TL;DR: A methodology to statistically design the SRAM cell and the memory organization using the failure-probability and yield-prediction models is proposed; it can be used in an early stage of a design cycle to enhance memory yield in the nanometer regime.
Abstract: In this paper, we have analyzed and modeled failure probabilities (access-time failure, read/write failure, and hold failure) of static random-access memory (SRAM) cells due to process-parameter variations. A method to predict the yield of a memory chip based on the cell-failure probability is proposed. A methodology to statistically design the SRAM cell and the memory organization is proposed using the failure-probability and the yield-prediction models. The developed design strategy statistically sizes different transistors of the SRAM cell and optimizes the number of redundant columns to be used in the SRAM array, to minimize the failure probability of a memory chip under area and leakage constraints. The developed method can be used in an early stage of a design cycle to enhance memory yield in the nanometer regime.
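
As a toy illustration of going from a cell-failure probability to a chip-level yield estimate with redundancy, here is a short Python sketch. It is not the authors' model: column-level repair, independent cell failures, and all numbers are simplifying assumptions made here for illustration.

```python
# Illustrative sketch (assumptions, not the paper's model): estimating memory
# yield from a per-cell failure probability, with spare columns used to repair
# failing columns.  Column-level repair and independence are assumptions.
from math import comb

def column_fail_prob(p_cell, rows):
    """Probability that a column contains at least one failing cell."""
    return 1.0 - (1.0 - p_cell) ** rows

def chip_yield(p_cell, rows, cols, spares):
    """Chip works if the number of failing columns does not exceed the spares."""
    p_col = column_fail_prob(p_cell, rows)
    return sum(comb(cols, k) * p_col**k * (1 - p_col)**(cols - k)
               for k in range(spares + 1))

# Hypothetical numbers: 1e-7 cell failure probability, 512 x 2048 array.
print(chip_yield(p_cell=1e-7, rows=512, cols=2048, spares=4))
```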

494 citations


Cites background from "Modeling and analysis of manufactur..."

  • ...Thus, the Vt fluctuation due to the RDF of one transistor does not depend on the Vt fluctuation of any neighboring transistor....

    [...]

  • ...The standard deviation of the Vt fluctuation (σVt) due to RDF depends on the manufacturing process, doping profile, and the transistor sizing [6]....

    [...]

  • ...scaling [1]–[5], analysis and reduction of the mismatch-induced parametric failures in an SRAM cell is extremely necessary to enhance the yield of a memory designed in nanoscaled complementary metal–oxide–semiconductor (CMOS) [8], [9]....

    [...]

  • ...This assumption is valid if we are considering the Vt variation due to RDF [1]–[3]....

    [...]

  • ...The mean (µLCELL) and the standard deviation (σLCELL) of the leakage of a cell considering RDF-induced Vt fluctuation can be obtained using the process described in (3)....

    [...]

Journal ArticleDOI
25 Sep 2006
TL;DR: A brief discussion of key sources of power dissipation and their temperature relation in CMOS VLSI circuits is presented, along with techniques for full-chip temperature calculation, with special attention to their implications for the design of high-performance, low-power VLSI circuits.
Abstract: The growing packing density and power consumption of very large scale integration (VLSI) circuits have made thermal effects one of the most important concerns of VLSI designers. The increasing variability of key process parameters in nanometer CMOS technologies has resulted in a larger impact of the substrate and metal line temperatures on the reliability and performance of the devices and interconnections. Recent data shows that more than 50% of all integrated circuit failures are related to thermal issues. This paper presents a brief discussion of key sources of power dissipation and their temperature relation in CMOS VLSI circuits, and techniques for full-chip temperature calculation with special attention to their implications for the design of high-performance, low-power VLSI circuits. The paper is concluded with an overview of techniques to improve the full-chip thermal integrity by means of off-chip versus on-chip and static versus adaptive methods.
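
A common ingredient of full-chip temperature calculation is the electro-thermal analogy, in which block power dissipation drives a network of thermal conductances. The sketch below solves the steady-state system G·T = P for a tiny hypothetical grid; the grid size, conductances, and power numbers are made up, and this is a generic illustration rather than the paper's method.

```python
# Illustrative sketch (not from the paper): steady-state full-chip temperature
# using the standard electro-thermal analogy -- power sources drive a thermal
# conductance network, G * (T - T_amb) = P.  All values are hypothetical.
import numpy as np

n = 4                                # 4 x 4 grid of blocks (hypothetical)
g_lat, g_pkg = 0.02, 0.005           # lateral / to-package conductances [W/K]
T_amb = 45.0                         # ambient/package temperature [C]

N = n * n
G = np.zeros((N, N))
P = np.full(N, 0.5)                  # 0.5 W per block (hypothetical)
P[5] = 3.0                           # one hot block

for i in range(n):
    for j in range(n):
        k = i * n + j
        G[k, k] += g_pkg             # every block leaks heat to the package
        for di, dj in ((1, 0), (0, 1)):
            ii, jj = i + di, j + dj
            if ii < n and jj < n:    # lateral coupling to grid neighbours
                m = ii * n + jj
                G[k, k] += g_lat; G[m, m] += g_lat
                G[k, m] -= g_lat; G[m, k] -= g_lat

T = T_amb + np.linalg.solve(G, P)    # temperature rise over ambient
print("max block temperature:", T.max())
```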

420 citations


Cites background from "Modeling and analysis of manufactur..."

  • ...This is due to the rising impact of global and random sources of variations on performance characteristics of VLSI circuits [57], which tend to increase the criticality of temperature dependencies in the circuit....

    [...]

Proceedings ArticleDOI
09 Apr 2006
TL;DR: Applying mathematical theories from random fields and convex analysis, this work develops robust techniques to extract a valid spatial-correlation function and matrix from measurement data, by solving a constrained nonlinear optimization problem and by applying a modified alternating-projection algorithm.
Abstract: Increased variability of process parameters and recent progress in statistical static timing analysis make extraction of statistical characteristics of process variation and spatial correlation an important yet challenging problem in modern chip designs. Unfortunately, existing approaches either focus on extraction of only a deterministic component of spatial variation or do not consider the actual difficulties in computing a valid spatial correlation function and matrix, simply ignoring the fact that not every function and matrix can be used to describe the spatial correlation. Based upon the mathematical theory of random fields and convex analysis, in this paper we develop (1) a robust technique to extract a valid spatial correlation function by solving a constrained nonlinear optimization problem; and (2) a robust technique to extract a valid spatial correlation matrix by employing a modified alternating-projection algorithm. Our novel techniques are guaranteed to extract a valid spatial correlation function and matrix that are closest to the measurement data, even if those measurements are affected by unavoidable random noise. Experimental results based upon a Monte Carlo model confirm the accuracy and robustness of our techniques, and show that we are able to recover the correlation function and matrix with very high accuracy even in the presence of significant random noise.
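
The key constraint noted here is that not every function is a valid spatial-correlation function. One simple way to see the idea (a sketch under assumptions, not the authors' algorithm) is to fit noisy sample correlations with a family that is positive definite by construction, such as the exponential family, using a bounded least-squares fit.

```python
# Illustrative sketch (assumption-laden; not the authors' algorithm): fitting a
# spatial-correlation function to noisy sample correlations, restricted to a
# family that is valid (positive definite) by construction -- here the
# exponential family rho(d) = b * exp(-d / l) with 0 <= b <= 1, l > 0.
import numpy as np
from scipy.optimize import curve_fit

def exp_corr(d, b, l):
    return b * np.exp(-d / l)

# Hypothetical "measured" correlations vs. distance, with additive noise.
rng = np.random.default_rng(1)
dist = np.linspace(0.1, 5.0, 40)                 # distance between sites [mm]
rho_meas = 0.8 * np.exp(-dist / 1.5) + rng.normal(0, 0.05, dist.size)

# Constrained least-squares fit; the bounds keep the fitted function valid.
(b_hat, l_hat), _ = curve_fit(exp_corr, dist, rho_meas,
                              p0=(0.5, 1.0), bounds=([0.0, 1e-3], [1.0, 10.0]))
print("fitted correlation:", b_hat, "* exp(-d /", l_hat, ")")
```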

215 citations

Proceedings ArticleDOI
13 Jun 2005
TL;DR: This paper presents an efficient block-based statistical timing analysis approach with linear complexity with respect to circuit size, which can accurately predict non-Gaussian delay distributions from realistic nonlinear gate and interconnect delay models.
Abstract: Process variations have a growing impact on circuit performance for today's integrated circuit (IC) technologies. The non-Gaussian delay distributions as well as the correlations among delays make statistical timing analysis more challenging than ever. In this paper, the authors present an efficient block-based statistical timing analysis approach with linear complexity with respect to the circuit size, which can accurately predict non-Gaussian delay distributions from realistic nonlinear gate and interconnect delay models. This approach accounts for all correlations, from manufacturing process dependence to re-convergent circuit paths, to produce more accurate statistical timing predictions. With this approach, circuit designers can have increased confidence in the variation estimates, at a low additional computation cost.
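
For intuition only, the sketch below mimics block-based propagation of non-Gaussian, correlated delays by operating on shared Monte Carlo samples (sum along a path, max at a merge point). This is a sampling stand-in for illustration; the paper's approach is analytical with linear complexity, and the delay model and numbers here are hypothetical.

```python
# Illustrative sketch (not the authors' method): block-based statistical timing
# by propagating correlated samples of gate delays -- arrival = max over fanin
# of (fanin arrival + edge delay).  Sharing samples of a common process
# variable preserves correlations, including across re-convergent paths.
import numpy as np

rng = np.random.default_rng(2)
n_samp = 20000
dL = rng.normal(0.0, 1.0, n_samp)        # shared (correlated) process variable

def gate_delay(nominal, sens, sigma_ind):
    """Nonlinear, non-Gaussian delay: shared + independent random parts."""
    ind = rng.normal(0.0, sigma_ind, n_samp)
    return nominal * np.exp(sens * dL) + ind   # exp() makes the shape skewed

# Tiny re-convergent circuit: a -> b -> d and a -> c -> d.
arr_a = gate_delay(10.0, 0.05, 0.3)
arr_b = arr_a + gate_delay(12.0, 0.05, 0.3)
arr_c = arr_a + gate_delay(11.0, 0.05, 0.3)
arr_d = np.maximum(arr_b, arr_c) + gate_delay(9.0, 0.05, 0.3)

print("mean / std / 99.9th pct of arrival at d:",
      arr_d.mean(), arr_d.std(), np.percentile(arr_d, 99.9))
```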

197 citations


Cites background from "Modeling and analysis of manufactur..."

  • ...According to the current technology trends, more than ±35% variations on the gate length are cited for 90 nanometer processes, and they are getting even larger for 65 nanometer processes [9]....

    [...]

Journal ArticleDOI
TL;DR: Applying mathematical theories from random fields and convex analysis, this work develops robust techniques to extract a valid spatial-correlation function and matrix from measurement data, by solving a constrained nonlinear optimization problem and by applying a modified alternating-projection algorithm.
Abstract: The increased variability of process parameters makes it important yet challenging to extract the statistical characteristics and spatial correlation of process variation. Recent progress in statistical static-timing analysis also makes the extraction important for modern chip designs. Existing approaches either extract only a deterministic component of spatial variation or do not consider the actual difficulties in computing a valid spatial-correlation function, ignoring the fact that not every function and matrix can be used to describe the spatial correlation. Applying mathematical theories from random fields and convex analysis, we develop: 1) a robust technique to extract a valid spatial-correlation function by solving a constrained nonlinear optimization problem and 2) a robust technique to extract a valid spatial-correlation matrix by employing a modified alternating-projection algorithm. Our novel techniques are guaranteed to extract a valid spatial-correlation function and matrix from measurement data, even if those measurements are affected by unavoidable random noise. Experimental results, obtained from data generated by a Monte Carlo model, confirm the accuracy and robustness of our techniques and show that we are able to recover the correlation function and matrix with very high accuracy even in the presence of significant random noise.
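
A bare-bones version of the matrix-repair idea, projecting alternately onto the positive-semidefinite cone and the unit-diagonal constraint, is sketched below. It is a simplified stand-in (without the modifications the paper introduces), and the noisy input matrix is synthetic.

```python
# Illustrative sketch (a simplified stand-in, not the paper's modified
# alternating-projection algorithm): repairing a noisy, possibly indefinite
# sample correlation matrix by alternating between the positive-semidefinite
# cone (clip negative eigenvalues) and the unit-diagonal constraint.
import numpy as np

def nearest_valid_correlation(A, iters=100):
    X = (A + A.T) / 2.0
    for _ in range(iters):
        w, V = np.linalg.eigh(X)
        X = V @ np.diag(np.clip(w, 0.0, None)) @ V.T   # project onto PSD cone
        X = (X + X.T) / 2.0
        np.fill_diagonal(X, 1.0)                        # restore unit diagonal
    return X

# Hypothetical noisy measurement: a valid matrix plus symmetric noise.
rng = np.random.default_rng(3)
noise = rng.normal(0, 0.25, (6, 6))
A = np.full((6, 6), 0.6) + 0.4 * np.eye(6) + (noise + noise.T) / 2
np.fill_diagonal(A, 1.0)

C = nearest_valid_correlation(A)
print("min eigenvalue before:", np.linalg.eigvalsh(A).min())
print("min eigenvalue after :", np.linalg.eigvalsh(C).min())
```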

185 citations


Cites background from "Modeling and analysis of manufactur..."

  • ...1) Modeling: Because the systematic variation is more like a deterministic variation [14], we lump it with the nominal value h0, i.e.,...

    [...]

  • ...According to the scale of their causes, process variations can also be classified into the following two categories [14], [16]: 1) Systematic variation describes the deterministic portion of the variation....

    [...]

References
Journal ArticleDOI
TL;DR: In this paper, two sampling plans are examined as alternatives to simple random sampling in Monte Carlo studies, and they are shown to be improvements over simple random sampling with respect to variance for a class of estimators which includes the sample mean and the empirical distribution function.
Abstract: Two types of sampling plans are examined as alternatives to simple random sampling in Monte Carlo studies. These plans are shown to be improvements over simple random sampling with respect to variance for a class of estimators which includes the sample mean and the empirical distribution function.
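
The variance-reduction claim can be reproduced numerically. The sketch below compares the variance of the sample mean under simple random sampling and Latin hypercube sampling for a toy monotone function of two uniform inputs; the function and sample sizes are arbitrary choices made here for illustration.

```python
# Illustrative sketch (hypothetical example, not from the paper): comparing the
# variance of the sample mean under simple random sampling (SRS) and Latin
# hypercube sampling (LHS) for a monotone function of two uniform inputs.
import numpy as np

rng = np.random.default_rng(4)

def model(u):                     # toy response; inputs uniform on [0, 1]
    return np.exp(u[:, 0]) + 2.0 * u[:, 1]

def lhs(n, dim):
    """One Latin hypercube sample: one point per equal-probability stratum per axis."""
    strata = np.column_stack([rng.permutation(n) for _ in range(dim)])
    return (strata + rng.random((n, dim))) / n

n, reps = 50, 2000
est_srs = [model(rng.random((n, 2))).mean() for _ in range(reps)]
est_lhs = [model(lhs(n, 2)).mean() for _ in range(reps)]

print("variance of mean, SRS:", np.var(est_srs))
print("variance of mean, LHS:", np.var(est_lhs))
```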

8,328 citations

Proceedings ArticleDOI
Robert B. Hitchcock
01 Jan 1982
TL;DR: Timing Analysis, a program designed to analyze the timing of large digital computers and based, in part, on the concepts disclosed in a patented method for determining the extreme characteristics of logic block diagrams, is described.
Abstract: Timing Verification consists of validating the path delays (primary input or storage element to primary output or storage element) to be sure they are not too long or too short and checking the clock pulses to be sure they are not too wide or too narrow. The programs addressing these problems neither produce input patterns like test pattern generators nor require input patterns like traditional simulators. Several programs (described here) operate by tracing paths [P173][WO78][SA81][KA81]. One program [MC80] extends simulation into a pessimistic analyzer not dependent on test patterns. Timing Analysis, a program described recently in [HI82a], is designed to analyze the timing of large digital computers and is based, in part, on the concepts disclosed in a patented method [DO81] for determining the extreme characteristics of logic block diagrams. The output of Timing Analysis includes "slack" at each block to provide a measure of the severity of the timing problem. The program also generates standard deviations for the times so that a statistical timing design can be produced rather than a worst case approach.
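
The notion of per-block slack mentioned above can be illustrated with a forward/backward traversal of a small delay graph. The sketch below is a generic static-timing toy example (made-up block delays and connectivity), not the program described in the abstract.

```python
# Illustrative sketch (hypothetical, not the program described above): computing
# block arrival times, required times, and slack on a small combinational
# block diagram by forward and backward topological traversal.
from collections import defaultdict

delays = {"a": 2.0, "b": 3.0, "c": 1.0, "d": 2.0}          # block delays (made up)
fanout = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
fanin = defaultdict(list)
for u, vs in fanout.items():
    for v in vs:
        fanin[v].append(u)

order = ["a", "b", "c", "d"]                                # topological order
cycle = 9.0                                                 # required time at outputs

# Forward pass: latest arrival at each block's output.
arrival = {}
for u in order:
    arrival[u] = delays[u] + max((arrival[v] for v in fanin[u]), default=0.0)

# Backward pass: latest tolerable arrival; slack = required - arrival.
required = {}
for u in reversed(order):
    required[u] = min((required[v] - delays[v] for v in fanout[u]), default=cycle)

for u in order:
    print(u, "arrival", arrival[u], "required", required[u],
          "slack", required[u] - arrival[u])
```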

265 citations

Journal ArticleDOI
TL;DR: It is found that circuits with a large number of critical paths and with a low logic depth are most sensitive to uncorrelated gate delay variations, and scenarios for future technologies show the increased impact of uncorrelated delay variations on digital design.
Abstract: The yield of low voltage digital circuits is found to be sensitive to local gate delay variations due to uncorrelated intra-die parameter deviations. Caused by statistical deviations of the doping concentration, they lead to more pronounced delay variations for minimum transistor sizes. Their influence on path delays in digital circuits is verified using a carry select adder test circuit fabricated in 0.5 and 0.35 µm complementary metal-oxide-semiconductor (CMOS) technologies with two different threshold voltages. The increase of the path delay variations for smaller device dimensions and reduced supply voltages as well as the dependence on the path length is shown. It is found that circuits with a large number of critical paths and with a low logic depth are most sensitive to uncorrelated gate delay variations. Scenarios for future technologies show the increased impact of uncorrelated delay variations on digital design. A reduction of the maximum clock frequency of 10% is found, for example, for highly pipelined systems realized in a 0.18-µm CMOS technology.
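
The logic-depth dependence reported here follows from uncorrelated per-gate variations averaging out along long paths, while a maximum over many nominally critical paths shifts the achievable clock period. The Monte Carlo sketch below illustrates both effects with made-up delay numbers.

```python
# Illustrative sketch (hypothetical numbers): for uncorrelated gate delay
# variations, the relative spread of a path delay shrinks with logic depth
# roughly as 1/sqrt(n), so low-logic-depth paths are hit hardest, and taking
# the max over many critical paths pushes the clock period further out.
import numpy as np

rng = np.random.default_rng(5)
mu_g, sig_g = 50.0, 5.0                  # per-gate delay mean / sigma [ps]
n_mc = 20000

for depth in (4, 16, 64):
    path = rng.normal(mu_g, sig_g, (n_mc, depth)).sum(axis=1)
    print(f"depth {depth:3d}: sigma/mean = {path.std() / path.mean():.3f}")

# Max over many nominally identical critical paths (a highly pipelined design):
# each path of depth 16 has mean 16*mu_g and sigma sqrt(16)*sig_g.
paths = rng.normal(16 * mu_g, 4 * sig_g, (n_mc, 100))
print("mean of slowest of 100 paths:", paths.max(axis=1).mean(),
      "vs single-path mean:", 16 * mu_g)
```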

177 citations

Proceedings ArticleDOI
06 Jun 1994
TL;DR: A new empirical gate delay model is proposed which combines the benefits of empirically derived k-factor models and switch-resistor models to efficiently handle capacitance shielding due to metal interconnect resistance, model the RC interconnect delay, and provide tighter bounds for simultaneous switching.
Abstract: As signal speeds increase and gate delays decrease for high-performance digital integrated circuits, the gate delay modeling problem becomes increasingly more difficult. With scaling, increasing interconnect resistances and decreasing gate-output impedances make it more difficult to empirically characterize gate-delay models. Moreover, the single-input-switching assumption for the empirical models is incompatible with the inevitable simultaneous switching for today's high-speed logic paths. In this paper a new empirical gate delay model is proposed. Instead of building the empirical equations in terms of capacitance loading and input-signal transition time, the models are generated in terms of parameters which combine the benefits of empirically derived k-factor models and switch-resistor models to efficiently: 1) handle capacitance shielding due to metal interconnect resistance, 2) model the RC interconnect delay, and 3) provide tighter bounds for simultaneous switching.
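
For context, a k-factor style empirical delay model is typically a low-order fit of delay against input transition time and load; the sketch below fits such a form by least squares to a hypothetical characterization table. It illustrates only the general k-factor idea, not the improved model proposed in the paper, and all data points are invented.

```python
# Illustrative sketch (hypothetical data and model form): fitting a simple
# k-factor style empirical gate delay model, delay = f(input slew, load cap),
# to characterization points by linear least squares on a low-order polynomial.
import numpy as np

# Hypothetical characterization grid: input transition time [ps], load cap [fF].
slew = np.array([20.0, 20.0, 20.0, 80.0, 80.0, 80.0, 200.0, 200.0, 200.0])
cap  = np.array([5.0, 20.0, 80.0, 5.0, 20.0, 80.0, 5.0, 20.0, 80.0])
delay = np.array([12.0, 21.0, 55.0, 18.0, 27.0, 62.0, 31.0, 41.0, 78.0])  # [ps]

# delay ~ k0 + k1*slew + k2*cap + k3*slew*cap  (one common k-factor form)
A = np.column_stack([np.ones_like(slew), slew, cap, slew * cap])
k, *_ = np.linalg.lstsq(A, delay, rcond=None)
print("fitted k-factors:", k)
print("predicted delay at slew=50 ps, cap=40 fF:",
      k @ np.array([1.0, 50.0, 40.0, 50.0 * 40.0]))
```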

111 citations

Proceedings ArticleDOI
Sani R. Nassif
06 Dec 1998
TL;DR: This paper lays the groundwork needed to analyze the impact of inter-chip variations on digital circuits and proposes an extreme-case analysis algorithm to efficiently determine the worst case performance due to such variability.
Abstract: Current integrated circuits are large enough that device and interconnect parameter variations within a chip are as important as those same variations from chip to chip. Previously, digital designers were concerned only with chip-to-chip variability, for which analysis techniques exist; concern for within-chip variations has been in the domain of analog circuit design. In this paper, we lay the groundwork needed to analyze the impact of intra-chip variations on digital circuits and propose an extreme-case analysis algorithm to efficiently determine the worst-case performance due to such variability.
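
The flavor of extreme-case analysis can be conveyed with a one-at-a-time corner search: estimate the sign of each parameter's sensitivity and push it to the bound that worsens the performance, which is valid when the response is monotone in each parameter. The sketch below uses a made-up performance function and parameter ranges and is a generic stand-in, not the algorithm proposed in the paper.

```python
# Illustrative sketch (a generic stand-in, not the paper's algorithm): one-at-a-
# time extreme-case analysis.  Assuming the performance is monotone in each
# parameter over its range, push every parameter to the bound that worsens it.
import numpy as np

def perf(p):
    """Hypothetical performance metric (e.g. a path delay) vs. parameter vector."""
    vth, leff, tox = p
    return 100.0 * (1.0 + 2.0 * (vth - 0.3)) * (leff / 100.0) * (tox / 2.0) ** 0.5

nominal = np.array([0.30, 100.0, 2.0])       # Vth [V], Leff [nm], tox [nm] (made up)
lo      = np.array([0.27,  90.0, 1.8])
hi      = np.array([0.33, 110.0, 2.2])

worst = nominal.copy()
for i in range(nominal.size):
    step = nominal.copy()
    step[i] = hi[i]
    up = perf(step) - perf(nominal)          # sensitivity sign for parameter i
    worst[i] = hi[i] if up > 0 else lo[i]    # pick the bound that increases delay

print("nominal delay:", perf(nominal), "extreme-case delay:", perf(worst))
```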

102 citations