Efficient and Accurate Statistical Analog Yield Optimization and Variation-Aware Circuit Sizing Based on Computational Intelligence Techniques
Citations
An Efficient Evolutionary Algorithm for Chance-Constrained Bi-Objective Stochastic Optimization
An Efficient High-Frequency Linear RF Amplifier Synthesis Method Based on Evolutionary Computation and Machine Learning Techniques
An Artificial Neural Network Assisted Optimization System for Analog Design Space Exploration
Richardson extrapolation-based sensitivity analysis in the multi-objective optimization of analog circuits
Efficient Yield Optimization for Analog and SRAM Circuits via Gaussian Process Regression and Adaptive Yield Estimation
References
Artificial Intelligence: A Modern Approach
A method for the solution of certain non-linear problems in least squares
Differential Evolution: A Practical Approach to Global Optimization (Natural Computing Series)
Related Papers (5)
Why Quasi-Monte Carlo is Better Than Monte Carlo or Latin Hypercube Sampling for Statistical Circuit Analysis
Frequently Asked Questions (13)
Q2. What is the main idea of extending ORDE from plain yield optimization to single-objective variation-aware sizing?
The main idea of extending ORDE from plain yield optimization to single-objective variation-aware sizing is to add an outer selection procedure considering the objective function value and the yield as constraint.
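The outer selection procedure described above can be sketched as a feasibility-based comparison, where the yield requirement acts as a constraint and the objective function decides ties among feasible candidates. This is a minimal illustrative sketch of that selection idea, not the exact procedure from the paper; the `yield_spec` value and candidate representation are assumptions.

```python
# Hedged sketch: choosing between two candidates when the objective is
# minimized and yield must satisfy a specification (yield >= yield_spec).
# This is a standard feasibility-rule comparison, used here to illustrate
# the idea; the paper's exact selection operator may differ.

def select(cand_a, cand_b, yield_spec=0.9):
    """Each candidate is an (objective_value, estimated_yield) tuple.
    Returns the preferred candidate."""
    obj_a, y_a = cand_a
    obj_b, y_b = cand_b
    feas_a = y_a >= yield_spec
    feas_b = y_b >= yield_spec
    if feas_a and not feas_b:
        return cand_a
    if feas_b and not feas_a:
        return cand_b
    if feas_a and feas_b:
        # Both meet the yield constraint: compare objective values.
        return cand_a if obj_a <= obj_b else cand_b
    # Neither is feasible: prefer the candidate with higher yield.
    return cand_a if y_a >= y_b else cand_b
```

With this rule, a feasible candidate always beats an infeasible one, so the yield constraint dominates the objective until it is satisfied.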
Q3. How many iterations does a derivative-free method need?
For medium-scale problems (10–20 design variables), derivative-free methods also need more than 20–30 iterations for each candidate, and each iteration needs nmax simulations.
Q4. What is the reason why OO stops when the yield values of the selected points are not promising?
A threshold value that is too low may cause low efficiency: OO would stop when the yield values of the selected points are not promising enough (e.g., a 50% yield threshold for a requirement of 90% yield), shifting the yield estimation and selection tasks to the second stage, which is more CPU expensive.
Q5. How many simulations are assigned to each candidate?
By using OO, the MC simulations are optimally allocated according to the solution qualities, so promising candidate solutions are assigned many more than 35 simulations.
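The quality-dependent allocation of MC simulations can be illustrated with a simple proportional rule: every candidate gets the base budget, and the remaining simulations are split according to preliminary yield estimates. The proportional split below is an assumption for illustration; the paper's OO allocation formula is not reproduced here.

```python
# Hedged sketch of ordinal-optimization-style budget allocation:
# promising candidates (higher preliminary yield estimates) receive
# more Monte Carlo simulations than the base budget of 35. The
# proportional rule is illustrative, not the paper's exact formula.

def allocate_budget(prelim_yields, total_budget, base=35):
    """Give every candidate `base` simulations, then split the
    remaining budget in proportion to the preliminary yield estimates."""
    n = len(prelim_yields)
    remaining = total_budget - base * n
    total_y = sum(prelim_yields)
    return [base + int(remaining * y / total_y) for y in prelim_yields]
```

For example, `allocate_budget([0.9, 0.5, 0.1], 300)` gives the most promising candidate 152 simulations and the weakest only 48, even though all start from the same base of 35.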
Q6. What is the reason why the yield estimation process is divided into two stages?
In the second stage of the ORDE method, an accurate result is highly important, so the number of simulations within each yield estimation is increased in the second stage to obtain an accurate yield value.
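The two-stage division can be sketched as a screen-then-refine loop: a cheap low-sample estimate filters candidates in the first stage, and only survivors receive the expensive high-sample estimate in the second. The sample counts and the screening threshold below are illustrative assumptions, not values from the paper.

```python
# Hedged two-stage sketch: a coarse, low-sample yield estimate screens
# candidates in stage 1; only the survivors get an accurate, high-sample
# estimate in stage 2. Sample counts (50 / 500) and the 0.8 screening
# threshold are illustrative assumptions.

def two_stage_yield(candidates, estimate_yield, n_coarse=50, n_fine=500,
                    screen=0.8):
    """`estimate_yield(candidate, n_samples)` returns a yield estimate.
    Returns accurate yield values for the candidates that pass the screen."""
    survivors = [c for c in candidates
                 if estimate_yield(c, n_coarse) >= screen]
    return {c: estimate_yield(c, n_fine) for c in survivors}
```

The point of the split is that the bulk of the simulation budget is spent only on candidates that have already shown promise under the coarse estimate.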
Q7. How many simulations are there for each candidate?
The total number of simulations is 10.2% of that required by the infeasible pruning (IP) + LHS method applied to the same candidate designs, because repeated MC simulations of non-critical solutions are avoided.
Q8. How is a candidate solution represented in the Q-dimensional search space?
The ith candidate solution in the Q-dimensional search space at generation t can be represented as di(t) = [di,1, di,2, ..., di,Q] (4). At each generation t, the mutation and crossover operators are applied to the candidate solutions, and a new population arises.
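The mutation and crossover step mentioned above matches the classic DE/rand/1/bin scheme, which can be sketched as follows. The parameter names `F` and `CR` are the standard differential-evolution ones and are assumptions here, not values taken from the paper.

```python
import random

# Hedged sketch of a DE/rand/1/bin generation step: each candidate
# d_i(t) is a Q-dimensional real vector; mutation and binomial
# crossover produce the trial vector for generation t+1. F and CR are
# standard DE parameters, assumed for illustration.

def de_trial_vector(pop, i, F=0.5, CR=0.9):
    Q = len(pop[i])
    # Pick three distinct candidates, all different from i, for mutation.
    r1, r2, r3 = random.sample([j for j in range(len(pop)) if j != i], 3)
    mutant = [pop[r1][q] + F * (pop[r2][q] - pop[r3][q]) for q in range(Q)]
    # Binomial crossover; j_rand guarantees at least one mutant gene.
    j_rand = random.randrange(Q)
    return [mutant[q] if (random.random() < CR or q == j_rand) else pop[i][q]
            for q in range(Q)]
```

The trial vector then competes with di(t) in the selection step, so the population never loses its best candidates.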
Q9. Why is the yield estimation for non-critical candidates important?
The reason is that the yield estimates of non-critical candidates only guide the selection operator in the EC algorithm; the candidates themselves are unlikely to be selected as the final result, or even to enter the second stage of the yield optimization flow.
Q10. How many simulations have been performed with the infeasible pruning method?
Experiments with the infeasible pruning (IP) +LHS method have been performed using 300 and 500 LHS MC simulations for each feasible candidate.
Q11. What is the common way to calculate the derivatives?
In derivative-based methods, calculating the required derivatives, e.g., Hessian matrix, often consumes numerous function evaluations when the number of design variables is large, especially when the derivatives cannot be expressed analytically.
Q12. What is the average objective function value for the scaling factor?
The authors have also tried uniform and Cauchy distributions for the scaling factor using benchmark problems in the EC field and found that the Gaussian-distributed F̂ results in the best average objective function value.
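Drawing the scaling factor from a Gaussian distribution each time it is used can be sketched as below. The mean, standard deviation, and clipping range are illustrative assumptions; the paper's exact parameterization of F̂ is not given in this excerpt.

```python
import random

# Hedged sketch: sampling the DE scaling factor F-hat from a Gaussian
# distribution, which the text reports gave the best average objective
# value in the authors' benchmarks. The mean (0.5), std (0.15), and
# clipping range [0, 1] are illustrative assumptions.

def sample_scaling_factor(mean=0.5, std=0.15, low=0.0, high=1.0):
    f = random.gauss(mean, std)
    return min(max(f, low), high)  # clip into a sensible DE range
```

A fresh draw per mutation keeps the search stochastic: most steps use a moderate scaling factor, while occasional larger or smaller draws help escape local optima.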
Q13. How many LHS simulations are needed to get the yield?
From (11), the authors calculate that, at a 99% confidence level with an estimation error of 0.1%, the yield value corresponding to 50 000 LHS simulations is 96%.