
Efficient and Accurate Statistical Analog Yield Optimization and Variation-Aware Circuit Sizing Based on Computational Intelligence Techniques

TLDR
Techniques inspired by computational intelligence are used to speed up yield optimization without sacrificing accuracy, and the resulting ORDE algorithm can achieve approximately a tenfold improvement in computational effort compared to an improved MC-based yield optimization algorithm integrating the infeasible sampling and Latin-hypercube sampling techniques.


Efficient and Accurate Statistical Analog Yield Optimization and Variation-Aware Circuit Sizing Based on Computational Intelligence Techniques
Bo Liu, Francisco V. Fernández, and Georges G. E. Gielen, Fellow, IEEE
Abstract—In nanometer complementary metal-oxide-semiconductor technologies, worst-case design methods and response-surface-based yield optimization methods face challenges in accuracy. Monte-Carlo (MC) simulation is general and accurate for yield estimation, but its efficiency is not high enough to make MC-based analog yield optimization, which requires many yield estimations, practical. In this paper, techniques inspired by computational intelligence are used to speed up yield optimization without sacrificing accuracy. A new sampling-based yield optimization approach, which determines the device sizes to optimize yield, is presented, called the ordinal optimization (OO)-based random-scale differential evolution (ORDE) algorithm. By proposing a two-stage estimation flow and introducing the OO technique in the first stage, sufficient samples are allocated to promising solutions, and repeated MC simulations of non-critical solutions are avoided. By the proposed evolutionary algorithm, which uses differential evolution for global search and a random-scale mutation operator for fine tuning, the convergence speed of the yield optimization can be enhanced significantly. With the same accuracy, the resulting ORDE algorithm achieves approximately a tenfold improvement in computational effort compared to an improved MC-based yield optimization algorithm integrating the infeasible pruning and Latin-hypercube sampling techniques. Furthermore, ORDE is extended from plain yield optimization to process-variation-aware single-objective circuit sizing.
Index Terms—Differential evolution, ordinal optimization, variation-aware analog sizing, yield optimization.
I. Introduction
INDUSTRIAL analog integrated circuit design not only calls for fully optimized nominal design solutions, but also requires high robustness and yield in the light of varying supply voltage and temperature conditions, as well as inter-die and intra-die process variations [1]–[3]. Especially in nanometer complementary metal-oxide-semiconductor (CMOS) technologies, random and systematic process variations have a large influence on the quality and yield of the manufactured analog circuits. As a consequence, in the high-performance analog and mixed-signal design flows, the designer needs guidelines and tools to deal with these factors impacting circuit yield and performances in an integrated manner in order to avoid costly re-design iterations [4].

Manuscript received January 21, 2010; revised May 28, 2010 and September 22, 2010; accepted November 27, 2010. Date of current version May 18, 2011. This work was supported by a special bilateral agreement scholarship of Katholieke Universiteit Leuven, Leuven, Belgium, and Tsinghua University, Beijing, China, and by the TIC-2532 Project funded by Consejería de Innovación, Ciencia y Empresa, Junta de Andalucía, Spain. This paper was recommended by Associate Editor P. Li.

B. Liu and G. G. E. Gielen are with Katholieke Universiteit Leuven, B-3001 Leuven, Belgium (e-mail: bo.liu@esat.kuleuven.be; georges.gielen@esat.kuleuven.be).

F. V. Fernández is with IMSE-CNM, CSIC, University of Sevilla, Sevilla, E-41092, Spain (e-mail: francisco.fernandez@imse-cnm.csic.es).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TCAD.2011.2106850
Yield optimization includes system-level hierarchical optimization [5] and building-block-level yield optimization [6], [7]. At the building-block level, there exist parametric yield optimization [6]–[8] and layout-related yield optimization [9]–[11], e.g., critical area yield analysis [9]. This paper focuses on parametric yield optimization at the building-block level. The yield optimization flow is summarized in Fig. 1.

In the optimization loop, the candidate circuit parameters are generated by the optimization engine; the performances and yield are analyzed and fed back to the optimization engine for the next iteration. Yield analysis is a critical point in the yield optimization flow. Among the factors that impact yield, statistical inter-die and intra-die process variations play a vital role [8]. Previous yield optimization methods include device model corner-based methods [3], [12], performance-specific worst-case design (PSWCD) methods [6], [7], response-surface-based methods [2], [15], and Monte-Carlo (MC)-based methods.
1) Device model corner-based methods [3], [12] use the same slow/fast parameter sets to decide the worst-case parameters for all circuits for a given technology. They are efficient due to the limited number of simulations needed. Their drawback is that the worst-case performance values are pessimistic, as the corners correspond to the tails of the joint probability density function of the parameters, resulting in considerable over-design. Also, the slow/fast values are obtained for a single performance, e.g., delay, and the worst-case process parameters for other performances may be different. Second, the actual yield may be low if the intra-die variations are ignored; if the intra-die variations were considered, the number of simulations would be extremely large. The limitations of device model corner-based methods for robust analog sizing are discussed in [1] and [13].
2) The PSWCD methods [6], [7], [13], [14] represent an important progress in robust sizing of analog ICs. Instead of using the same slow/fast parameter sets for all the circuits, the PSWCD methods decide on the worst-case parameters for specific performances of each circuit and nominal design. Determining the performance-specific worst-case parameters is critical for this kind of method. Although the search for the worst-case point typically uses some nonlinear optimization formulation, most PSWCD methods [13] linearize the performances at the worst-case point, which can introduce inherent errors. Some PSWCD methods build a response surface between the inter-die parameters and the performances [14] (RSM PSWCD). The inter-die parameters are independent of the design parameters, but intra-die variations are correlated with the design parameters. Therefore, they are not easy to take into account, and, furthermore, accounting for intra-die variations for each device would increase the total number of process variation variables dramatically. While some PSWCD methods calculate an approximate estimation of the yield, others do not calculate yield at all. Instead, they calculate a range of the process parameters for a given yield, in which the specifications are met. In this case, the estimated yield is not available explicitly and the method has to be run repeatedly with different target values (e.g., yield > 95–99%) to find the best yield that can be achieved.
3) In response-surface-based methods, macro-models over the yield, the design variables and the process parameters are first established through regression methods, and these are subsequently used to estimate the yield in the sizing process. Macro-models can be classified into white-box models and black-box models. A white-box model analytically expresses the yield as a function of the design and process parameters; some additional parameters are used for regression purposes. Black-box models, on the other hand, do not consider analytical expressions of the yield, but construct a regression model according to the input (i.e., design points, process parameters) and the output (i.e., yield) data. Accurate yield-aware performance macro-models can let a sizing tool explore design alternatives with little computational cost. However, response-surface-based methods suffer from the trade-off between the accuracy and the complexity of the model, as well as between the accuracy and the number of samples (CPU time) needed to create the model.
4) MC-based methods have the advantages of generality and high accuracy [16], so they are the most reliable and commonly used technique for yield estimation. Nevertheless, a large number of simulations is needed for MC analysis, preventing its use within an iterative yield optimization loop (Fig. 1). Some speed enhancement techniques for MC simulation based on design of experiments (DOE) have been proposed, such as the Latin hypercube sampling (LHS) method [17], [18] or the quasi-Monte-Carlo (QMC) method [19], [20], to replace primitive Monte-Carlo (PMC) simulation. These speed improvements are very significant, but our experiments show that the computational load is still too large for yield optimization if only DOE methods are used in practice (see Section IV).

Fig. 1. General flow of yield optimization methods.
Currently, PSWCD methods and response-surface-based methods are the most popular approaches in the repeated iterations within yield optimization loops, while some form of Monte-Carlo yield estimation is most popular in design verification.
Therefore, in this paper we address the efficiency of MC-based yield optimization by proposing a different (but complementary) approach exploiting techniques from computational intelligence: while keeping high accuracy, we dramatically increase the efficiency of yield optimization by: 1) optimally allocating the computing budget to candidate solutions in order to avoid non-critical MC simulations, and 2) enhancing the convergence speed of the search strategy by means of a random-scale mutation operator in combination with the differential evolution (DE) algorithm, to decrease the amount of expensive MC simulations.
Based on the above ideas, we then present the ordinal optimization (OO)-based random-scale differential evolution (ORDE) algorithm for analog yield optimization. The method aims to:
1) be general enough to be applied to any analog circuit in any technology process and for any distribution of the process parameters;
2) simultaneously handle inter-die and intra-die variations in nanometer technologies;
3) provide highly accurate results comparable to Monte-Carlo analysis;
4) use an order of magnitude less computational effort compared with the improved MC-based method integrating the infeasible pruning and Latin hypercube sampling techniques (Section III-A), and as such make the computational time of accurate yield optimization practical.
The remainder of this paper is organized as follows. Section II reviews basic concepts of yield optimization. Section III introduces the components and the general framework of ORDE. Section IV tests ORDE on practical examples; comparisons with response-surface-based methods are also performed. In Section V, the ORDE algorithm is extended from plain yield optimization to process-variation-aware single-objective analog sizing, which optimizes a target design objective (e.g., power) subject to a minimum yield requirement. The concluding remarks are presented in Section VI.

II. Basics of Yield Optimization
The aim of yield optimization is to find a circuit design point d* that has a maximum yield, considering the manufacturing and environmental variations [8]. In the following, we will elaborate on the design space D, the process parameter space S with distribution pdf(s), the environmental parameter space Θ, and the specifications P.
The design space D is the search space of the circuit design points, d, which can be transistor widths and lengths, resistances, capacitances and bias voltages and currents. Each one has an upper and lower bound, which is determined by the technological process or the user's setup. The process parameter space S is the space of statistical parameters reflecting the process fluctuations, e.g., oxide thickness T_ox and threshold voltage V_th. Process parameter variations can be inter-die or intra-die; for an accurate model, both types should be considered. The environmental variables include temperature and power supply voltage. The specifications P are the requirements set by the designer, which can be classified into performance constraints (e.g., DC gain > 70 dB) and functional constraints (e.g., transistors must work in the saturation region).
The yield optimization problem can be formulated as finding a design point d* that maximizes yield (in the case of plain yield optimization) [13]

\[ d^* = \arg\max_{d \in D} \{ Y(d) \} \tag{1.1} \]

or that minimizes some function f (e.g., power, area) subject to a minimum yield requirement y (in the case of yield-aware sizing) [17]

\[ d^* = \arg\min_{d \in D} \{ f(d, s, \theta) \}, \quad \forall s \in S,\ \theta \in \Theta, \quad \text{s.t.}\ Y(d) \geq y. \tag{1.2} \]
Yield is defined as the percentage of manufactured circuits that meet all the specifications considering process and environmental variations. Hence, yield can be formulated as

\[ Y(d) = E\{\, YS(d, s, \theta) \mid pdf(s) \,\} \tag{2} \]

where E is the expected value. YS(d, s, θ) is equal to 1 if the performance of d meets all the specifications considering s (process fluctuation) and θ (environmental variation); otherwise, YS(d, s, θ) is equal to 0. In most analog circuits, circuit performances change monotonically with the environmental variables θ. The impact of environmental variations can then be handled by simulations at the extreme values of the environmental variables. For instance, if the power supply may experience some variations, e.g., 10%, the largest degradation is obtained by simulating at the extreme values: (1 ± 10%) × nominal value. Process variations, on the other hand, are much more complex: directly simulating the extreme values (classical worst-case analysis [1]) may cause serious over-design. This paper therefore focuses on the impact of statistical process variations (space S) in yield optimization.
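
Concretely, (2) is just the sample mean of a 0/1 pass/fail indicator over process samples, with environmental variations handled at their extreme values. The following minimal Python sketch illustrates this estimator; sample_process_params and meets_all_specs stand in for the process-parameter generator and the circuit simulator and are hypothetical placeholders, not the paper's code.

```python
def mc_yield(d, sample_process_params, meets_all_specs, corners, n_samples=1000):
    """Estimate Y(d) per (2): the mean of the 0/1 indicator YS(d, s, theta).

    d                     -- design point (device sizes, biases)
    sample_process_params -- callable drawing one s ~ pdf(s)
    meets_all_specs       -- callable: True if d meets every spec at (s, theta)
    corners               -- extreme environmental points, e.g., Vdd * (1 +/- 0.1)
    """
    passed = 0
    for _ in range(n_samples):
        s = sample_process_params()
        # A sample counts as "good" only if the circuit passes at every
        # environmental corner (performances assumed monotonic in theta).
        if all(meets_all_specs(d, s, theta) for theta in corners):
            passed += 1
    return passed / n_samples
```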
Fig. 2. Two-stage yield estimation flow.
III. The ORDE Algorithm
A. The Use of Infeasible Pruning and DOE in ORDE
To satisfy the first three goals from Section I (be general enough, handle both inter-die and intra-die variations, provide high accuracy), MC analysis is selected. DOE, the speed enhancement technique for MC-based yield estimation, is used; the DOE method implemented in ORDE is LHS. However, the key contributions of ORDE are not tied to a particular sampling mechanism; therefore, other speed acceleration methods, like the recently proposed QMC [19], [20], can be integrated.

In the yield optimization process, some candidate solutions will appear that cannot satisfy the specifications even for nominal values of the process parameters. Their yield values will be too low to make them useful candidate solutions, so there is not much sense in applying the MC-based yield estimation to these solutions. In ORDE, we call them infeasible solutions and assign them a zero yield value. Their constraint violations are calculated, and the constrained optimization algorithm minimizes these violations and moves the search toward feasible solutions (i.e., design points that satisfy the specifications for nominal process parameters). This technique is named "infeasible pruning" in this paper. The selected feasible solutions are handled by ordinal optimization, which is described below.
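
For reference, LHS on the unit hypercube takes only a few lines of numpy. This is a generic sketch rather than the paper's implementation; its output would be mapped through the inverse CDFs of pdf(s) to obtain process-parameter samples.

```python
import numpy as np

def latin_hypercube(n_samples, dim, seed=None):
    """LHS on [0, 1)^dim: each axis is cut into n_samples strata and
    every stratum is hit exactly once per dimension."""
    rng = np.random.default_rng(seed)
    # One uniform point inside each stratum, per dimension.
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, dim))) / n_samples
    # Shuffle the strata independently along every dimension.
    for j in range(dim):
        rng.shuffle(u[:, j])
    return u

# Example: 300 samples of 8 independent Gaussian process parameters
#   from scipy.stats import norm
#   s = norm.ppf(latin_hypercube(300, 8, seed=1))
```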
B. Basics of ORDE
Many recent analog circuit sizing and yield optimization methodologies are based on evolutionary computation (EC), which relies on the evolution of a set of candidate solutions, commonly called population, along a set of iterations, commonly called generations [19]. The computational effort at each iteration and the necessary number of iterations are two key factors that affect the speed of the yield optimization. We solve these two problems by optimally allocating the computing budget to each candidate in the population (reducing the computational effort at each iteration) and by improving the search mechanism (decreasing the necessary number of iterations, and, hence, decreasing the number of expensive MC simulations). Therefore, the total computational effort can be reduced considerably. In this paper, we use two computational intelligence techniques to implement these two key ideas.
Our yield estimation flow is depicted in Fig. 2. In order to optimally allocate the computing budget at each iteration, instead of assigning the same number of MC simulations to feasible solutions, the yield estimation process is divided into two stages. In the first stage, the fitness ranking of the candidate solutions and a reasonably accurate yield estimation result for good (critical) solutions are important. For medium or bad (non-critical) candidate solutions, their ranking is important, but accurate yield estimation is not. The reason is that the function of the yield estimation for non-critical candidates is to guide the selection operator in the EC algorithm, but the candidates themselves are likely not to be selected as the final result, or even not to enter the second stage of the yield optimization flow. Hence, the computational effort spent on feasible but non-optimal candidate solutions can be strongly reduced. On the other hand, the estimations for these non-critical candidates cannot be too inaccurate either: after all, correct selection of candidate solutions in the yield optimization algorithm is necessary. In the first stage, the yield optimization problem is therefore formulated as an ordinal optimization problem, aimed at identifying critical candidate solutions by allocating a sufficient number of samples to the MC simulation of these solutions, while reasonably few samples are allocated to non-critical solutions [22]. Notice that this approach is intended to assign a different number of MC simulations to the yield estimations of the different candidates; this is different from, and compatible with, the efficient sampling addressed by any DOE technique. In the second stage of the ORDE method, an accurate result is highly important, so the number of simulations within each yield estimation is increased to obtain an accurate yield value.

Fig. 3. Yield optimization flow.
Another key technique of ORDE is to decrease the necessary number of iterations of the optimization flow shown in Fig. 3. Instead of using conventional EC algorithms, we design a selection-based random-scale DE algorithm (RSDE), which is a combination of three different techniques, each playing a significant role in one phase. The first phase emphasizes a selection-based method to focus the search into the feasible solution space, defined by the nominal values of the process parameters. We use the DE framework [23] (a powerful and fast global optimization algorithm) for global search (emphasized in the second phase) and a random-scale operator for fine tuning (emphasized in the third phase).

In the following, the basic components of ORDE will be introduced first, and the general framework will then be presented.
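
Putting Figs. 2 and 3 together, one generation of ORDE can be summarized as in the following sketch. The helper names (feasible_at_nominal, oo_allocate, mc_yield) are placeholders for the building blocks described in this section, not the authors' API.

```python
def orde_generation(population, feasible_at_nominal, oo_allocate, mc_yield,
                    n_max, y_threshold):
    """One ORDE generation: infeasible pruning, then the two-stage yield
    estimation of Fig. 2; the returned fitness drives the RSDE search."""
    yields = [0.0] * len(population)   # infeasible pruning: zero yield
    feas = [i for i, d in enumerate(population) if feasible_at_nominal(d)]
    # Stage 1: ordinal optimization (Algorithm 1) spreads the total budget
    # T = sim_ave * M1 over the feasible candidates -> rough yield estimates.
    for i, y in zip(feas, oo_allocate([population[i] for i in feas])):
        yields[i] = y
    # Stage 2: candidates whose rough yield clears the threshold (target
    # yield minus ~2 points) receive the full n_max-sample MC estimate.
    for i in feas:
        if yields[i] >= y_threshold:
            yields[i] = mc_yield(population[i], n_max)
    return yields
```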
C. Introducing Ordinal Optimization into Yield Optimization
OO has emerged as an efficient technique for simulation and optimization, especially for problems where the computation of the simulation models is time consuming [22]. OO is based on two basic tenets.

1) Obtaining the "order" of a set of candidates is easier than estimating an accurate "value" for each candidate. The convergence rate of ordinal optimization is exponential: the probability that the observed best solution is among the true best solutions grows as O(e^{-αn}), where α is a positive real number and n is the number of simulations [22]. In contrast, the convergence rate of methods aimed at estimating the right value instead of the order, e.g., the direct Monte-Carlo method, is at most O(1/√n) [24].

2) An accurate estimation is very costly, but a satisfactory value can be obtained much more easily.
Therefore, OO fits the objectives of the first stage of yield estimation of ORDE (see Fig. 2) quite well. In the first stage, a bunch of good designs are selected through evolution and sent to the second stage. The requirement is a correct selection with a reasonably accurate yield estimation and with the smallest computational effort. According to OO, a large portion of the simulations should be conducted with those critical solutions in order to reduce the estimator variance. On the other hand, limited computational effort should be spent on non-critical solutions that have little effect on identifying the good solutions, even if they have large variances. This leads to the core problem in ordinal optimization, allocating the computing budget, which can be formulated as follows: given a pre-defined computing budget, how should it be distributed among the candidate designs?
Consider the yield evaluation function. For a single simulation (e.g., a sample of process parameters), we define YS(d, s) = 1 if all the circuit specifications are met, and YS(d, s) = 0 otherwise. Because the MC simulation determines the yield as the ratio of the number of functional chips to all fabricated chips, the mean value of YS(d, s) corresponds to the yield value, Y(d). Let us consider a total computing budget equal to T simulations. In ORDE, T is determined by the number of feasible solutions (i.e., solutions that meet the performance constraints for nominal values of the process parameters) at each generation. Here, we set T = sim_ave × M1, where M1 is the number of feasible solutions and sim_ave is the average budget for each candidate, set by the user. The budget allocation problem consists in determining the numbers of simulations n_1, n_2, ..., n_{M1} of the M1 candidate solutions such that n_1 + n_2 + ... + n_{M1} = T. For this problem, several algorithms have been reported in the specialized literature [25], [26]. An asymptotic solution to this optimal computing budget allocation problem is proposed in [25]:
\[
n_b = \sigma_b \left( \sum_{i=1,\, i \neq b}^{M1} \frac{n_i^2}{\sigma_i^2} \right)^{1/2}, \qquad
\frac{n_i}{n_j} = \left( \frac{\sigma_i / \delta_{b,i}}{\sigma_j / \delta_{b,j}} \right)^2, \quad
i, j \in \{1, 2, \ldots, M1\},\ i \neq j \neq b \tag{3}
\]
where b is the best design of the M1 candidate solutions (represented by the highest estimated yield value based on the available samples for each candidate). For each candidate solution, some samples are allocated, and for each sample the corresponding YS(d, s) can be computed (0 or 1). From these YS(d, s), we can calculate their mean (the estimated yield, Y(d)) and σ_1², σ_2², ..., σ_{M1}², which are the finite variances of the M1 solutions, respectively; they measure the accuracy of the estimation. Parameter δ_{b,i} = Y_b(d) − Y_i(d) represents the deviation of the estimated yield value of each design solution with respect to that of the best design. The interpretation of (3) is quite intuitive. If δ_{b,i} is large, the estimated yield value of design i is bad, and, according to n_i/n_j = ((σ_i/δ_{b,i})/(σ_j/δ_{b,j}))², n_i becomes small, i.e., we should not allocate many simulations to this design. However, if σ_i is large, it means that the accuracy of the yield estimation is low, and we should allocate more simulations to this design to obtain a better yield estimate. Therefore, the quotient σ_i/δ_{b,i} represents a trade-off between the yield value of design i and the accuracy of its estimation. An OO-based yield analysis algorithm can therefore be designed as Algorithm 1.

Algorithm 1 Ordinal optimization for analog yield analysis
Step 0: Let k = 0, and perform n_0 simulations for each feasible design, i.e., n_i^k = n_0, i = 1, 2, ..., M1.
Step 1: If Σ_{i=1}^{M1} n_i^k ≥ T, stop the OO for yield analysis.
Step 2: Consider Δ additional simulations (refer to [22] for the selection of the Δ and n_0 values) and compute the new budget allocation n_i^{k+1}, i = 1, 2, ..., M1, by (3). If n_i^{k+1} ≥ n_max, then set n_i^{k+1} = n_max.
Step 3: Perform an additional max{0, n_i^{k+1} − n_i^k} simulations for each design d_i, i = 1, 2, ..., M1. Let k = k + 1 and go to Step 1.

Fig. 4. Function of OO in a typical population.
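
As an illustration, a single allocation round of (3) can be coded as below. This is a numpy sketch of the budget-splitting rule only; Algorithm 1 invokes it repeatedly with Δ extra simulations and caps each budget at n_max.

```python
import numpy as np

def ocba_budgets(y_hat, sigma, T):
    """Split a total budget of T simulations over M1 candidates per (3).

    y_hat -- current yield estimates Y_i(d)
    sigma -- sample standard deviations of the YS(d, s) indicators
    """
    y_hat = np.asarray(y_hat, dtype=float)
    sigma = np.maximum(np.asarray(sigma, dtype=float), 1e-12)   # avoid /0
    b = int(np.argmax(y_hat))                                   # best design b
    delta = np.maximum(np.abs(y_hat[b] - y_hat), 1e-12)         # delta_{b,i}
    w = (sigma / delta) ** 2            # n_i up to a common scale, for i != b
    w[b] = 0.0
    w[b] = sigma[b] * np.sqrt(np.sum(w ** 2 / sigma ** 2))      # n_b rule
    n = T * w / w.sum()                 # rescale so the budgets sum to T
    return np.maximum(1, np.rint(n)).astype(int)
```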
Parameter n_0 is the initial number of simulations for each candidate solution, selected to provide a very rough idea of the yield; more simulations are allocated later on according to the quality of the candidate. Parameter n_max is the upper limit on the number of simulations for any candidate. The value of n_max must strike a balance between accuracy and efficiency.
A typical population from example 2, described in Section IV, is selected to show the benefits of OO (see Fig. 4): candidates with a yield value larger than 70% correspond to 31% of the population, and are assigned 56% of the simulations. Candidates with a yield value smaller than 40% correspond to 33% of the population, and are only assigned 12% of the simulations. The total number of simulations is 10.2% of those of the infeasible pruning (IP)+LHS method applied to the same candidate designs, because repeated MC simulations of non-critical solutions are avoided.
The above technique is used until the yield converges close to the desired value. For example, if the desired target yield is 99%, the threshold value between the first and the second stage can be 97%. Candidates with an estimated yield larger than the threshold value enter the second stage. In this stage, all the candidates are assigned the specified maximum number (n_max) of samples to guarantee the accuracy of the final result, while other candidates in the population still remain in the first stage and still use the estimation method described previously. Note that the two stages are therefore not separated in time, but rather use different yield estimation methods.
The threshold value must be properly selected. A too low threshold value may cause low efficiency, as OO would stop when the yield values of the selected points are not promising enough (e.g., a 50% yield threshold for a requirement of 90% yield) and would shift the yield estimation and selection tasks to the second stage, which is more CPU expensive. A too high threshold value (e.g., a threshold equal to the yield requirement) may cause low accuracy: in most cases the points selected by OO are promising (OO can compare the candidates and make the selection correctly), but their estimated yield values are not sufficiently accurate for the final result. Setting the threshold two percentage points below the required target yield represents an appropriate trade-off between efficiency and accuracy.
D. Brief Introduction to the DE Algorithm
In addition to introducing OO to decrease the computational effort at each iteration, decreasing the necessary number of iterations is another key objective. The DE algorithm [23] is selected as the global search engine; it outperforms many EC algorithms in terms of solution quality and convergence speed [23]. DE uses a simple differential operator to create new candidate solutions and a one-to-one competition scheme to greedily select new candidates.

The ith candidate solution in the Q-dimensional search space at generation t can be represented as

\[ d_i(t) = [d_{i,1}, d_{i,2}, \ldots, d_{i,Q}]. \tag{4} \]

At each generation t, the mutation and crossover operators are applied to the candidate solutions, and a new population arises. Then, selection takes place, and the corresponding candidate solutions from both populations compete to comprise the next generation. The operators are now explained in detail.

For each target candidate solution, the mutation operator builds a mutant vector

\[ V_i(t+1) = [v_{i,1}(t+1), \ldots, v_{i,Q}(t+1)]. \tag{5} \]

It is generated by adding the weighted difference between a given number of candidate solutions randomly selected from the previous population to another candidate solution. In ORDE, the latter is chosen to be the best individual in the current population. The mutation operation is therefore described by the following equation:

\[ V_i(t+1) = d_{best}(t) + F \left( d_{r_1}(t) - d_{r_2}(t) \right) \tag{6} \]
where the indices r_1 and r_2 (r_1, r_2 ∈ {1, 2, ..., M}) are randomly chosen and mutually different, and also different from the current index i. Parameter F ∈ (0, 2] is a constant called the scaling factor.
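
For illustration, the DE/best/1 mutation of (6) in numpy. The random_scale option replaces the constant F by a per-mutant Gaussian draw, in the spirit of ORDE's random-scale operator; the Gaussian parameters used here are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def de_best_1(pop, best_idx, F=0.8, random_scale=False, seed=None):
    """Mutant vectors per (6): V_i(t+1) = d_best(t) + F * (d_r1(t) - d_r2(t))."""
    rng = np.random.default_rng(seed)
    M, Q = pop.shape
    mutants = np.empty_like(pop)
    for i in range(M):
        # r1, r2: mutually different random indices, both different from i
        r1, r2 = rng.choice([j for j in range(M) if j != i], size=2,
                            replace=False)
        # Constant F, or a Gaussian-distributed scale per mutant (assumed
        # parameters) as in the random-scale operator.
        f = rng.normal(loc=F, scale=0.3) if random_scale else F
        mutants[i] = pop[best_idx] + f * (pop[r1] - pop[r2])
    return mutants
```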

Citations
Journal ArticleDOI

An Efficient Evolutionary Algorithm for Chance-Constrained Bi-Objective Stochastic Optimization

TL;DR: A new method, called MOOLP (multi-objective uncertain optimization with ordinal optimization (OO), Latin supercube sampling and parallel computation), is proposed in this paper for dealing with the CBSOP.
Journal ArticleDOI

An Efficient High-Frequency Linear RF Amplifier Synthesis Method Based on Evolutionary Computation and Machine Learning Techniques

TL;DR: A new method, called efficient machine learning-based differential evolution, is presented for mm-wave frequency linear RF amplifier synthesis; electromagnetic (EM) simulations are used to evaluate the key passive components, and the resulting low-dimensional expensive optimization problem can be solved efficiently while achieving global search.
Journal ArticleDOI

An Artificial Neural Network Assisted Optimization System for Analog Design Space Exploration

TL;DR: A new analog circuit optimization system for automated sizing of analog integrated circuits that consists of a genetic algorithm (GA) based global optimization engine and an artificial neural network (ANN) based local optimization engine so the local minimum search (LMS) can have a much faster search speed.
Journal ArticleDOI

Richardson extrapolation-based sensitivity analysis in the multi-objective optimization of analog circuits

TL;DR: The final results show that the optimal sizes, selected after executing the sensitivity approach, guarantee the lowest sensitivity values while improving the performances of the RFC OTA.
Journal ArticleDOI

Efficient Yield Optimization for Analog and SRAM Circuits via Gaussian Process Regression and Adaptive Yield Estimation

TL;DR: Experimental results show that the proposed Bayesian optimization approach for yield optimization of analog and SRAM circuits can significantly reduce the number of circuit simulations without compromising optimization efficacy.
References
Book

Artificial Intelligence: A Modern Approach

TL;DR: In this article, the authors present a comprehensive introduction to the theory and practice of artificial intelligence for modern applications, including game playing, planning and acting, and reinforcement learning with neural networks.
Journal ArticleDOI

A method for the solution of certain non-linear problems in least squares

TL;DR: In this article, least squares problems with non-linear normal equations are solved by an extension of the standard method which ensures improvement of the initial solution; the method can also be considered an extension of Newton's method.
Book

Differential Evolution: A Practical Approach to Global Optimization (Natural Computing Series)

TL;DR: This volume explores the differential evolution (DE) algorithm in both principle and practice and is a valuable resource for professionals needing a proven optimizer and for students wanting an evolutionary perspective on global numerical optimization.
Frequently Asked Questions (13)
Q1. What are the contributions mentioned in the paper "Efficient and accurate statistical analog yield optimization and variation-aware circuit sizing based on computational intelligence techniques" ?

In this paper, techniques inspired by computational intelligence are used to speed up yield optimization without sacrificing accuracy. A new sampling-based yield optimization approach, which determines the device sizes to optimize yield, is presented, called the ordinal optimization ( OO ) based random-scale differential evolution ( ORDE ) algorithm. By proposing a two-stage estimation flow and introducing the OO technique in the first stage, sufficient samples are allocated to promising solutions, and repeated MC simulations of non-critical solutions are avoided. Furthermore, ORDE is extended from plain yield optimization to process-variation-aware single-objective circuit sizing. 

The main idea of extending ORDE from plain yield optimization to single-objective variation-aware sizing is to add an outer selection procedure considering the objective function value and the yield as a constraint.

For medium-scale problems (10–20 design variables), derivative-free methods also need more than 20–30 iterations for each candidate, and each iteration needs n_max simulations.

A too low threshold value may cause low efficiency, as OO would stop when the yield values of the selected points are not promising enough (e.g., a 50% yield threshold for a requirement of 90% yield) and shifts the yield estimation and selection tasks to the second stage, which is more CPU expensive. 

By using OO, the MC simulations are optimally allocated according to the solution qualities, so promising candidate solutions are assigned much more than 35 simulations. 

In the second stage of the ORDE method, an accurate result is highly important, so the number of simulations within each yield estimation is increased in the second stage to obtain an accurate yield value. 

The total number of simulations is 10.2% of those of the infeasible pruning (IP)+LHS method applied to the same candidate designs, because repeated MC simulations of non-critical solutions are avoided. 

The ith candidate solution in the Q-dimensional search space at generation t can be represented as d_i(t) = [d_{i,1}, d_{i,2}, ..., d_{i,Q}] (4). At each generation t, the mutation and crossover operators are applied to the candidate solutions, and a new population arises.

The reason is that the function of the yield estimation for non-critical candidates is to guide the selection operator in the EC algorithm, but the candidates themselves are likely not to be selected as the final result or even not enter the second stage of the yield optimization flow. 

Experiments with the infeasible pruning (IP)+LHS method have been performed using 300 and 500 LHS MC simulations for each feasible candidate.

In derivative-based methods, calculating the required derivatives, e.g., Hessian matrix, often consumes numerous function evaluations when the number of design variables is large, especially when the derivatives cannot be expressed analytically. 

The authors have also tried uniform and Cauchy distributions for the scaling factor using benchmark problems in the EC field and found that the Gaussian-distributed F̂ results in the best average objective function value. 

From (11), the authors can calculate that with 99% confidence level and Y = 0.1%, the corresponding yield value of 50 000 LHS simulations is 96%.