Statistical timing for parametric yield prediction of digital integrated circuits

Citation for published version (APA):
Jess, J. A. G., Kalafala, K., Naidu, S. R., Otten, R. H. J. M., & Visweswariah, C. (2006). Statistical timing for parametric yield prediction of digital integrated circuits. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 25(11), 2376-2392. https://doi.org/10.1109/TCAD.2006.881332

Published: 01/01/2006. Document version: Publisher's PDF (Version of Record).

2376 IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, VOL. 25, NO. 11, NOVEMBER 2006
Statistical Timing for Parametric Yield Prediction of
Digital Integrated Circuits
Jochen A. G. Jess, Associate Member, IEEE, Kerim Kalafala, Srinath R. Naidu, Ralph H. J. M. Otten,
and Chandu Visweswariah, Fellow, IEEE
Abstract—Uncertainty in circuit performance due to manufacturing and environmental variations is increasing with each new generation of technology. It is therefore important to predict the performance of a chip as a probabilistic quantity. This paper proposes three novel path-based algorithms for statistical timing analysis and parametric yield prediction of digital integrated circuits. The methods have been implemented in the context of the EinsTimer static timing analyzer. The three methods are complementary in that they are designed to target different process variation conditions that occur in practice. Numerical results are presented to study the strengths and weaknesses of these complementary approaches. Timing analysis results in the face of statistical temperature and Vdd variations are presented on an industrial ASIC part on which a bounded timing methodology leads to surprisingly wrong results.
Index Terms—Digital integrated circuits, timing.
I. INTRODUCTION
YIELD LOSS is broadly categorized into catastrophic
yield loss (due to contamination and dust particles, for
example) and parametric or circuit-limited yield loss, which
impacts the spread of performance of functional parts. This
paper presents three algorithms for statistical timing analysis
and parametric yield prediction of digital integrated circuits due
to both manufacturing and environmental variations.
With each new generation of technology, variability in chip
performance is increasing. The increased variability renders
existing timing analysis methodology unnecessarily pessimistic
and unrealistic. The traditional “bounded” or “corner-based”
static timing approach further breaks down in the case of mul-
tiple voltage islands. The International Technology Roadmap
for Semiconductors [1] has identified a clear need for statistical
timing analysis.
The algorithms in this paper pay special attention to correlations. Capturing and accounting for inherent correlations is key to obtaining a correct result. Correlations
Manuscript received December 2, 2004; revised July 12, 2005. This paper
was recommended by Associate Editor F. N. Najm.
J. A. G. Jess and R. H. J. M. Otten are with the Department of Electrical
Engineering, Eindhoven University of Technology, 5600 MB Eindhoven, The
Netherlands (e-mail: j.a.g.jess@ele.tue.nl; otten@ics.ele.tue.nl).
K. Kalafala is with the IBM Microelectronics Division, East Fishkill,
Hopewell Junction, NY 12533 USA (e-mail: kalafala@us.ibm.com).
S. R. Naidu was with the Department of Electrical Engineering, Eindhoven
University of Technology, 5600 MB Eindhoven, The Netherlands. He is now
with Magma Design Automation Pvt., Ltd., Bangalore 560017, India (e-mail:
srinath@magma-da.com).
C. Visweswariah is with the IBM T. J. Watson Research Center, Yorktown
Heights, NY 10598 USA (e-mail: chandu@us.ibm.com).
Digital Object Identifier 10.1109/TCAD.2006.881332
occur because different paths may share one or more gates, and
because all gate delays depend on some global parameters such
as junction depth or ambient temperature. All methods in this
paper fully take into account both classes of correlations and
are equipped to handle deterministic across-the-chip variations.
This paper does not directly address spatial correlation across
a chip, but our methods can be extended to handle it as well,
using the same principle as suggested in [2].
II. PREVIOUS WORK
There is a wealth of literature on parametric statistical timing
analysis and yield prediction. The problem was first proposed
in the context of statistical program evaluation and review
technique (PERT) where the objective was to calculate the
probability distribution curve of the project completion time,
given that the subtasks in a task graph were random variables
drawn from some distribution. The problem was quickly recognized as falling in a difficult complexity class known as the #P-complete class. This means that, unless P = NP, it is impossible to produce in polynomial time a constant-factor approximation of the true probability distribution curve of project completion time. The
statistical PERT problem is covered in the paper of Nadas [3],
and the theoretical complexity of the problem was established
by Hagstrom [4]. Bounds on the project completion time were
proposed by Kleindorfer [5] and Dodin [6].
In the context of integrated circuits, statistical timing meth-
ods may broadly be classified into performance-space methods
that manipulate timing variables such as arrival times and slacks
as statistical quantities, and parameter-space methods that per-
form manipulations in the space of the sources of variation.
In the performance space, we are conceptually interested in
integrating the joint probability density function (JPDF) of the
delays of all paths over a cube of side equal to the required delay
and of dimensionality equal to the number of paths. In other
words, it amounts to the integration of a complicated JPDF
over a simple integration region in high-dimensional space.
In parameter space, on the other hand, we are interested in
integrating the JPDF of the sources of parametric variation over
a complex feasible region in relatively low-dimensional space.
Another broad classification of the statistical timing methods
is to divide them into two categories: block-based methods and
path-based methods. Block-based methods have linear com-
plexity and are amenable to incremental processing, as noted
by [2] and [7], while path-based methods are more accurate
in that they better take into account the correlations due to
reconvergent fan-out and spatial correlation.
0278-0070/$20.00 © 2006 IEEE

Monte Carlo and modified Monte Carlo methods have often
been used as in [8] where the yield is estimated by means
of a surface integral of the feasible region. In the context of
digital circuits, Gattiker et al. [9] consider the probability of
each path meeting its timing requirement, but ignore correla-
tions between paths. An extremely efficient discrete probability
approach in the performance space was proposed in [10] and
[11], but path reconvergence is handled with difficulty and global correlations are ignored. A good source of information about statistical design is [12]. A recent performance-
space probabilistic framework was proposed in [13] but has a
restricted domain of application.
III. MOTIVATION
Unfortunately, most existing methods take into account one or the other type of correlation mentioned in the previous section, but not both. One work that accounts for both types of correlation is [2]; its modeling approach is similar to ours, but our algorithms are different. Existing methods also often neglect the dependence of slew (rise/fall time) and downstream load capacitance on the sources of variation. This paper proposes a unified framework to handle correlations due to path
sharing as well as correlations due to the fact that the gates on
the chip are affected by the same set of global parameters. It
is crucial to take both types of correlations into account if we
are to accurately predict yield. This paper builds on the ideas
contained in [14].
Any methodology for statistical timing analysis must be
able to handle different process conditions. It is possible that
no single method will be able to accurately predict yield for
all performances in all types of conditions. It is therefore
desirable to develop a suite of methods, which can target
different situations (low yield/high yield, few sources of global
variation/many sources of global variation). This paper is an
attempt to construct such a suite of methods. We propose two
methods that operate in the space of manufacturing variations
(parameter space) and one method that operates in the space of
path delays. The three methods have complementary strengths
and weaknesses as outlined below.
1) The first method proposed in this paper, the paral-
lelepiped method, is best suited for a situation with a
small number of sources of global variation. It provides
a guaranteed lower bound on the true probability distrib-
ution curve of circuit delay and a “useful” upper bound
on the true probability distribution curve. A “best-guess”
estimate of the true curve can also be produced which in
practice approximates the real curve fairly well.
2) The second of the methods proposed in this paper, the
ellipsoid method, is less sensitive than the first method to
the number of sources of variation and is highly effective
at low yields. It also provides information that can be used
to tune the circuit to improve yield. However, it cannot be
directly used when there are many critical paths in the
circuit. We propose a novel preprocessing step to reduce
the number of paths that need to be considered.
3) The last method proposed in this paper is a performance-
space method (which operates in the space of path arrival
times) whose chief advantage is its extremely low time
complexity. It is intended for use in situations where a
quick estimate of yield is desired.
IV. MODELING
All three methods presented in this paper assume that the
delay and slew (or rise/fall time) of each arc of the timing graph
are linear functions of the sources of variation, similar to the
assumptions in [9], [15], and [16], for example. However, the
nominal delays and slews and the sensitivity coefficients can
be location dependent to accommodate deterministic intrachip
variability. The actual statistical timing analysis consists of
two phases. In the first phase, a representative set of paths
is gathered by the timing analysis program after a nominal
timing analysis. The sensitivity coefficients of each “complete”
path (including the launching and capturing clock paths if any)
are computed and accumulated by path-tracing procedures. In
the second phase, the statistical timing engine predicts the
distribution of the minimum of all the path slacks. Path slack is
defined as the difference between the required time and arrival
time of the signal along the path. All methods work off a common timing graph and path-tracing procedure.
The slack of each of P paths is modeled as

s_i = s_i^{nom} + \sum_{j=1}^{n} A_{ij} \, \Delta z_j    (1)

where s_i is the slack of the ith path (a statistical quantity), s_i^{nom} is the nominal slack, n is the number of global sources of variation, A_{ij} is a P × n matrix of path sensitivities, and \Delta z_j is the variation of the jth global parameter from its nominal value. Delay, slew, and loading effects are taken into account in the coefficients of A in our implementation using the concept of chain ruling. The slew u can be expressed as u = k_1 + k_2 \Delta z. Delay can be expressed as d = k_3 + k_4 u + k_5 \Delta z. Substituting in this expression for the slew, we obtain k_4 k_2 + k_5 as the sensitivity coefficient of delay to the process variations \Delta z.
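As a concrete illustration of this chain ruling, the short Python sketch below assembles one path's row of the sensitivity matrix A and evaluates the linear model (1) at a sample deviation. All coefficient values, the two-parameter setup, and the three-arc path are invented for illustration (they are not from the paper's experiments), and sign conventions between delay and slack sensitivities are glossed over:

```python
import numpy as np

# Per-arc linear models from the text, with n = 2 global parameters
# (e.g., temperature and Vdd deviations). All coefficients are hypothetical.
#   slew:  u = k1 + k2 . dz
#   delay: d = k3 + k4 * u + k5 . dz
# Chain ruling: the arc delay's sensitivity to dz is k4 * k2 + k5.
def arc_delay_sensitivity(k2, k4, k5):
    return k4 * np.asarray(k2) + np.asarray(k5)

# A path's row of the P x n matrix A accumulates its arcs' sensitivities.
arcs = [  # (k2, k4, k5) for each arc of a hypothetical three-arc path
    (np.array([0.10, 0.05]), 0.8, np.array([0.02, 0.01])),
    (np.array([0.07, 0.04]), 0.6, np.array([0.03, 0.02])),
    (np.array([0.12, 0.06]), 0.9, np.array([0.01, 0.05])),
]
A_row = sum(arc_delay_sensitivity(k2, k4, k5) for k2, k4, k5 in arcs)

# Evaluating the linear slack model (1) at one parameter deviation dz:
s_nom = 0.35                  # nominal slack, arbitrary units
dz = np.array([0.1, -0.2])    # deviations of the two global parameters
s = s_nom + A_row @ dz        # statistical slack sample per Eq. (1)
```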
For a required slack ρ, we can write the following:

F = \{ \Delta z \mid s_i^{nom} + \sum_{j=1}^{n} A_{ij} \Delta z_j \ge \rho, \; i = 1, 2, \cdots, P \}
  = \{ \Delta z \mid d_i^{nom} + \sum_{j=1}^{n} A_{ij} \Delta z_j \le \eta_i, \; i = 1, 2, \cdots, P \}.    (2)

Here, d_i^{nom} is the nominal delay of the ith path and \eta_i = \nu_i^{nom} - \rho, where \nu_i^{nom} is the nominal required time of the ith
path. Each of the above P constraints represents a hyperplane in
n-dimensional parameter space, on one side of which the path
has sufficient slack and on the other side of which it is a failing
path. The intersection of all the “good” half-spaces forms a
convex polytope and is defined as the feasible region. The goal
of the parameter-space methods is to integrate the JPDF of the

Fig. 1. Feasible region defined by hyperplanes.
sources of variation in the feasible region. Mathematically, this
integral can be expressed as
Y = \int \cdots \int_{F} f(\Delta z_1, \Delta z_2, \ldots, \Delta z_n) \, d\Delta z.    (3)
This procedure is repeated for a range of η values to produce the entire slack-versus-yield curve.
In Fig. 1, we show a feasible region in two dimensions. The
dotted concentric circles represent a JPDF, which we wish to
integrate over the feasible region. The value of the integral is
the yield value for a particular slack value. When the slack value
changes, the hyperplanes are shifted along their normal vectors
to get a new feasible region.
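As a point of comparison for the methods that follow, the integral (3) can be approximated by plain Monte Carlo: sample Δz from its JPDF and count the fraction of samples that satisfy every path constraint. The two-path, two-parameter instance below is a made-up illustration, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical instance: P = 2 paths, n = 2 standard-normal parameters.
s_nom = np.array([0.30, 0.45])       # nominal path slacks (invented)
A = np.array([[-0.10, -0.05],        # path-slack sensitivities (invented)
              [-0.08, -0.12]])
rho = 0.0                            # required slack

# Yield = Pr[ s_nom + A dz >= rho for every path ], estimated by sampling.
dz = rng.standard_normal((200_000, 2))
slacks = s_nom + dz @ A.T            # one row of path slacks per sample
yield_est = float(np.mean((slacks >= rho).all(axis=1)))
```

Shifting ρ and re-counting reproduces the hyperplane-shifting picture of Fig. 1: each ρ gives one point of the slack-versus-yield curve.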
The methods we propose in this paper do not directly handle
spatial correlation. However, they can easily be extended to
handle it. The method used in [2] is to divide the chip into
subregions using a grid. Then, for any given process parameter,
they assume different random variables for the individual grid
buckets. A covariance matrix is imposed upon the random
variables to describe the correlations between them. The same
technique can be used in our case with an increase in the
dimensionality of the problem due to the variables from the
grid buckets. There would be no change in the linear model.
However, for very fine grids, the number of variables can
become very large. The parallelepiped and ellipsoid methods
that we present below are both sensitive to the number of parameters, although the ellipsoid method is less so. In the case of large dimensionality, we might first want to use principal component analysis to reduce the dimensionality. An alternative would be to use on-chip variation to handle the spatial correlation. In this method, sensitivity coefficients would be made location dependent. The method would be less accurate than the grid-based approach but would be much faster. In this paper, we treat the number of parameters as a variable.
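The principal-component reduction mentioned above can be sketched as follows. The 4 × 4 grid, the distance-based covariance, and the 95% variance cutoff are arbitrary assumptions chosen only to make the example concrete:

```python
import numpy as np

# Hypothetical 4 x 4 grid: 16 correlated random variables for one process
# parameter, with correlation decaying with distance between grid buckets.
coords = np.array([(i, j) for i in range(4) for j in range(4)], dtype=float)
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
cov = np.exp(-dist / 2.0)               # assumed spatial covariance

# Principal component analysis: keep the few eigenvectors that explain
# most of the variance, shrinking the dimensionality of the linear model.
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]       # sort eigenvalues descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
explained = np.cumsum(eigvals) / np.sum(eigvals)
k = int(np.searchsorted(explained, 0.95) + 1)   # components for 95% variance

# Each grid variable becomes a linear combination of k independent
# components, so path sensitivities to the 16 grid variables map onto
# sensitivities to only k components in the linear delay model.
B = eigvecs[:, :k] * np.sqrt(eigvals[:k])       # 16 x k mixing matrix
```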
We shall begin by describing the most intuitive method
among the three proposed in this paper, which is called the
parallelepiped method. This method performs a “brute-force”
integration of the JPDF of global sources of variation in the
feasible region. The next method we present, the ellipsoid
method, first approximates the feasible region by the maximum
volume ellipsoid that can be inscribed in it, and then performs
the integration over the ellipsoid. Both the parallelepiped and
ellipsoid methods work in parameter space, i.e., the space of
the sources of variation. The last method we present, the fastest
among the three, is called the binding probability method,
which is a performance-space method in that it works in the
space of slacks of the paths.
V. PARALLELEPIPED METHOD
The basic idea of the parallelepiped method is to recursively
divide the feasible region into the largest possible fully feasi-
ble parallelepipeds and integrate the JPDF of the underlying
sources of variation over these parallelepipeds instead of the
original feasible region. The approach does not require delays
to have linear models, and allows for arbitrary distributions of
the sources of variation. However, if the model is nonlinear,
it must still be convex. Since slack is the difference between
the required time and arrival time, it is difficult for slack to be
convex even if both required time and arrival time are convex.
A. Algorithm
The basic reference on the parallelepiped approach is the
second algorithm of Cohen and Hickey [17]. The method rests
on the fact that if all vertices of an n-parallelepiped lie in
any convex feasible region, then all points in the interior of
the parallelepiped are feasible. With the above observation,
the region of integration in the parameter space is recursively
subdivided into progressively smaller parallelepipeds until we
find parallelepipeds all of whose vertices are feasible. Then,
we simply sum up the weighted volume of the feasible par-
allelepipeds to obtain a lower bound on the desired yield as
shown in the pseudocode below for a single given performance
requirement
procedure Vol(ll, recursionDepth) {
    if (recursionDepth < maxDepth) {
        if (all vertices of parallelepiped are feasible)
            add integral of region to yield;
        else {
            subdivide region into smaller parallelepipeds;
            for (each new parallelepiped p)
                Vol(lowerLeft(p), recursionDepth + 1);
        }
    }
}
Vol(lowerLeft(boundingBox), 0).
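A minimal runnable rendering of this pseudocode in Python, using a uniform JPDF over the bounding box and a hypothetical two-path, two-parameter feasible region in place of the static timer's feasibility query (all constants are invented):

```python
import itertools
import numpy as np

# Hypothetical path constraints: path i is feasible at z when
# S_NOM[i] + A[i] . z >= 0 (stand-in for querying the static timer).
S_NOM = np.array([0.30, 0.45])
A = np.array([[-0.10, -0.05],
              [-0.08, -0.12]])
MAX_DEPTH = 6

def vol(lower, size, depth):
    """Lower bound on the uniform-JPDF mass of the feasible region in a box."""
    corners = np.array([lower + size * np.array(c)
                        for c in itertools.product([0.0, 1.0], repeat=2)])
    slacks = S_NOM + corners @ A.T        # slack of each path at each vertex
    if (slacks >= 0).all():               # all vertices feasible:
        return (size / 6.0) ** 2          # credit whole box (uniform on [-3,3]^2)
    if (slacks < 0).all(axis=0).any():    # some path infeasible at every vertex:
        return 0.0                        # box is entirely infeasible, stop early
    if depth >= MAX_DEPTH:
        return 0.0                        # boundary box at deepest level: drop it
    half = size / 2.0                     # otherwise recurse into the sub-boxes
    return sum(vol(lower + half * np.array(c), half, depth + 1)
               for c in itertools.product([0.0, 1.0], repeat=2))

lower_bound = vol(np.array([-3.0, -3.0]), 6.0, 0)
```

Dropping partially feasible boxes at the deepest level is what makes this a strict lower bound; crediting those boxes fully, or in proportion to their feasible vertices, yields the upper-bound and best-estimate variants.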

Fig. 2. Illustration of the parallelepiped method in two dimensions to a
recursion depth of four. Light-gray regions contribute to lower bound of yield.
For the best estimate, a portion of the weight of each parallelepiped on the
boundary is also included.
The algorithm begins by choosing a boundingBox that is
known to contain the feasible region. For statistical timing,
the obvious choice is the ±4σ or ±3σ box in n dimensions.
In the algorithm, lowerLeft represents a function that returns
the vertex of the parallelepiped that has the lowest coordinate
in each dimension. Fig. 2 graphically illustrates the method
in two dimensions. Gray regions contribute to the final yield
computation and obviously provide a lower bound on the
required probability integral. Note that descent to the lowest
level of recursion is confined to the boundaries of the feasible
region.
Since at worst, we visit every leaf node of a q-ary tree, where q = 2^n, and at each vertex, we check the feasibility of each path constraint, we end up with a worst case complexity of O(Pn 2^{n×maxDepth}). Here, Pn is the complexity of checking the feasibility of one vertex.
feasibility of one vertex. In fact, if a static timer is employed,
the feasibility of a vertex can be established more efficiently.
In any case, the method is exponential in the product of the
recursion depth and the dimensionality of the manufacturing
space. However, several tricks can be applied to speed up this
algorithm in practice.
1) If a particular path is infeasible at all vertices, the recursion can stop at once. No matter how deep the recursion goes, that path cannot become feasible, so there is no yield to be gained.
2) If a particular path is feasible at all vertices, that path
can be skipped as the recursion proceeds. This trick is
implemented by simply maintaining a list of “skippable”
paths that grows as the depth of the recursion increases.
3) The number of recursion levels can be drastically reduced
by modifying the basic algorithm to additionally produce
an upper bound and a best estimate answer. The (strict)
lower bound is still the weighted volume of the gray
region of Fig. 2. At the lowest level of recursion, if at
least one vertex is feasible and at least one is infeasible,
the upper bound gets the entire weighted volume of the
parallelepiped (represented by the black region in the
figure). Although not a strict upper bound, in practice,
this estimate always exceeds the exact yield. The “best
estimate” result gets yield credit proportional to the fraction of vertices that are feasible. With this mechanism,
we have found that three to four levels of recursion are
always sufficient for accurate results. The law of large
numbers helps, since each parallelepiped at the lowest
level of recursion contributes a signed error.
4) The parallelepiped method can handle any statistical dis-
tribution of the underlying sources of variation, provided
that the JPDF can be integrated over the volume of a
parallelepiped. If one or more sources of variation form
a multivariate normal distribution, that part of the integral
can be expressed as the product of differences of error
functions in that subspace. The manufacturing space is
first rotated and scaled so as to obtain circular symmetry.
Then, the required error functions are precomputed and
stored in a single array of size 2^n + 1 to avoid repeated calls to the system erf function.
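For a single Gaussian dimension, the "difference of error functions" in question is just Φ(hi) − Φ(lo); the product over dimensions gives the weight of a parallelepiped. A sketch using Python's standard-library math.erf in place of a precomputed table:

```python
import math

def std_normal_cdf(x):
    """Phi(x) expressed via the error function the text precomputes."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def box_weight(lower, upper):
    """Probability mass of N(0, I) inside an axis-aligned parallelepiped.

    After rotating/scaling the manufacturing space to circular symmetry,
    the integral factors into a product of per-dimension CDF differences.
    """
    w = 1.0
    for lo, hi in zip(lower, upper):
        w *= std_normal_cdf(hi) - std_normal_cdf(lo)
    return w

# Weight of the 2-D box [-1, 1] x [-1, 1]: (Phi(1) - Phi(-1))^2.
w = box_weight([-1.0, -1.0], [1.0, 1.0])
```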
The following tricks will further improve the efficiency but have
not yet been implemented.
1) Once a decision is made to recurse, only the internal
vertices of the subparallelepipeds need to be visited, since
the feasibility at the vertices of the parent parallelepiped
has already been ascertained.
2) Since the bulk of the weighted volume is near the center
of the JPDF, an adaptive grid scheme could be considered
which uses a finer grid near the origin of the z space
and a progressively coarser grid toward the boundary of
the bounding box.
3) Recursion can be carried out by subdividing the paral-
lelepiped in one dimension at a time, and if a path is
infeasible at all vertices, for example, subdivision in the
other dimensions is obviated.
B. Modified Algorithm
The above algorithm has been adapted to compute the entire
yield-versus-slack curve at once instead of one performance
point at a time. As each parallelepiped is processed, the con-
tributions of the parallelepiped toward the yield for all slack
values are simultaneously recorded before proceeding to the
next parallelepiped or next level of recursion. All CPU time
results in this paper use this modified method.
The basic idea is briefly explained here. Let s_min^{parent} be the smallest slack at any of the vertices of the parent parallelepiped. Then, for all slack below s_min^{parent}, the entire parent parallelepiped is in the feasible region, and appropriate yield credit is given. As we recurse, we are only interested in slacks greater than s_min^{parent} within this volume. For each subparallelepiped at the present level of recursion, the yield credit corresponding to the smaller parallelepiped is granted for all slack from s_min^{parent} to s_min (the lowest slack among the vertices of the subparallelepiped). The upper bound and best guess yields are similarly kept updated as the recursion proceeds.
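The bookkeeping can be sketched as follows: each fully processed box credits its weight to every slack threshold at or below its minimum vertex slack, so the whole yield-versus-slack curve accumulates in one traversal. The thresholds and the box weights below are invented purely for illustration:

```python
import numpy as np

# Slack thresholds at which the yield curve is tabulated (invented values).
thresholds = np.linspace(-0.2, 0.6, 9)
yield_curve = np.zeros_like(thresholds)

def credit(box_weight, s_min):
    """Credit one box's probability mass to every threshold it satisfies.

    A box whose minimum vertex slack is s_min is fully feasible for every
    required slack rho <= s_min, so its weight counts toward those points.
    """
    yield_curve[thresholds <= s_min] += box_weight

# Hypothetical processed boxes: (JPDF weight of box, min vertex slack).
for w, s_min in [(0.50, 0.53), (0.30, 0.27), (0.15, 0.06), (0.05, -0.12)]:
    credit(w, s_min)
# yield_curve now holds the whole yield-versus-slack curve, accumulated
# in a single pass; it is nonincreasing in the required slack.
```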
