Proceedings ArticleDOI

# Measurement of inherent noise in EDA tools

07 Aug 2002, pp. 206-211

TL;DR: This work identifies sources of noise in EDA tools, analyzes the effects of these noise sources on design quality, and proposes new behavior criteria for tools with respect to the existence and management of noise.

Abstract: With advancing semiconductor technology and exponentially growing design complexities, predictability of design tools becomes an important part of a stable top-down design process. Prediction of individual tool solution quality enables designers to use tools to achieve best solutions within prescribed resources, thus reducing design cycle time. However, as EDA tools become more complex, they become less predictable. One factor in the loss of predictability is inherent noise in both algorithms and how the algorithms are invoked. In this work, we seek to identify sources of noise in EDA tools, and analyze the effects of these noise sources on design quality. Our specific contributions are: (i) we propose new behavior criteria for tools with respect to the existence and management of noise; (ii) we compile and categorize possible perturbations in the tool use model or tool architecture that can be sources of noise; and (iii) we assess the behavior of industry place and route tools with respect to these criteria and noise sources. While the behavior criteria give some guidelines for and characterize the stability of tools, we are not recommending that tools be immune from input perturbations. Rather, the categorization of noise allows us to better understand how tools will or should behave; this may eventually enable improved tool predictors that consider inherent tool noise.
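The kind of "invocation noise" the abstract describes can be reproduced with a toy experiment. The greedy one-dimensional placer below is purely illustrative (it is not the paper's tool), and the perturbation is simply the order in which cells are presented to it:

```python
import random
import statistics

def place(cell_order, nets, n_slots):
    """Greedy 1-D placer: each cell, in the given input order, takes the
    free slot nearest to the mean slot of its already-placed neighbors."""
    neighbors = {c: set() for c in cell_order}
    for a, b in nets:
        neighbors[a].add(b)
        neighbors[b].add(a)
    pos, free = {}, set(range(n_slots))
    for c in cell_order:
        placed = [pos[n] for n in neighbors[c] if n in pos]
        target = statistics.mean(placed) if placed else n_slots / 2
        slot = min(free, key=lambda s: abs(s - target))
        pos[c] = slot
        free.remove(slot)
    return sum(abs(pos[a] - pos[b]) for a, b in nets)

random.seed(0)
cells = list(range(30))
nets = [(random.randrange(30), random.randrange(30)) for _ in range(60)]
nets = [(a, b) for a, b in nets if a != b]  # drop self-loops

# Perturbation: shuffle the input cell ordering, re-run, record wirelength.
lengths = []
for _ in range(50):
    order = cells[:]
    random.shuffle(order)
    lengths.append(place(order, nets, len(cells)))

print(f"wirelength mean={statistics.mean(lengths):.1f}, "
      f"stdev={statistics.pstdev(lengths):.1f}")
```

The nonzero spread over semantically identical inputs is exactly the "inherent noise" being measured; a real study would apply the same protocol to a commercial P&R tool.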



##### Citations

Proceedings ArticleDOI
Puneet Gupta
08 Nov 2005
TL;DR: A novel dynamic programming-based technique for etch-dummy correctness (EtchCorr) which can be combined with the SAEDM in detailed placement of standard-cell designs and is validated on industrial testcases with respect to wafer printability, database complexity and device performance.

Abstract: Etch dummy features are used in the mask data preparation flow to reduce critical dimension (CD) skew between resist and etch processes and improve the printability of layouts. However, etch dummy rules conflict with SRAF (Sub-Resolution Assist Feature) insertion because each of the two techniques requires specific spacings of poly-to-assist, assist-to-assist, active-to-etch dummy and dummy-to-dummy. In this work, we first present a novel SRAF-aware etch dummy insertion method (SAEDM) which optimizes etch dummy insertion to make the layout more conducive to assist-feature insertion after etch dummy features have been inserted. However, placed standard-cell layouts may not have the ideal whitespace distribution to allow for optimal etch dummy and assist-feature insertions. Since placement of cells can create forbidden pitch violations, the placer must generate assist-correct and etch dummy-correct placements. This can be achieved by intelligent whitespace management in the placer. We describe a novel dynamic programming-based technique for etch-dummy correctness (EtchCorr) which can be combined with the SAEDM in detailed placement of standard-cell designs. Our algorithm is validated on industrial testcases with respect to wafer printability, database complexity and device performance.

145 citations

### Cites background from "Measurement of inherent noise in ED..."

• ...maximum delay overhead of 6% is within the inherent noise of the P&R tool [11]. The runtime of EtchCorr placement perturbation is negligible (∼5 minutes) compared to the running time of OPC (∼2....


Proceedings ArticleDOI
25 Mar 2018
TL;DR: Example applications include removing unnecessary design and modeling margins through correlation mechanisms, achieving faster design convergence through predictors of downstream flow outcomes that comprehend both tools and design instances, and corollaries such as optimizing the usage of design resources, licenses, and available schedule.
Abstract: In the late-CMOS era, semiconductor and electronics companies face severe product schedule and other competitive pressures. In this context, electronic design automation (EDA) must deliver "design-based equivalent scaling" to help continue essential industry trajectories. A powerful lever for this will be the use of machine learning techniques, both inside and "around" design tools and flows. This paper reviews opportunities for machine learning with a focus on IC physical implementation. Example applications include (1) removing unnecessary design and modeling margins through correlation mechanisms, (2) achieving faster design convergence through predictors of downstream flow outcomes that comprehend both tools and design instances, and (3) corollaries such as optimizing the usage of design resources, licenses, and available schedule. The paper concludes with open challenges for machine learning in IC physical design.

43 citations

### Cites background from "Measurement of inherent noise in ED..."

• ...Figure 7 (right) illustrates that the statistics of this noisy tool behavior are Gaussian [32] [17]....


Journal ArticleDOI
TL;DR: This paper presents a context-aware DVFS design flow that considers the intrinsic characteristics of the hardware design, as well as the operating scenario (including the relative amounts of time spent in different modes, the range of performance scalability, and the target efficiency metric) to optimize the design for maximum energy efficiency.
Abstract: The proliferation of embedded systems and mobile devices has created an increasing demand for low-energy hardware. Dynamic voltage and frequency scaling (DVFS) is a popular energy reduction technique that allows a hardware design to reduce average power consumption while still enabling the design to meet a high-performance target when necessary. To conserve energy, many DVFS-based embedded and mobile devices often spend a large fraction of their lifetimes in a low-power mode. However, DVFS designs produced by conventional multimode CAD flows tend to have significant energy overheads when operating outside of the peak performance mode, even when they are operating in a low-power mode. A dedicated core can be added for low-energy operation, but has a high cost in terms of area and leakage. In this paper, we explore the DVFS design space to identify the factors that affect DVFS efficiency. Based on our insights, we propose two design-level techniques to enhance the energy efficiency of DVFS for energy-constrained systems. First, we present a context-aware DVFS design flow that considers the intrinsic characteristics of the hardware design, as well as the operating scenario (including the relative amounts of time spent in different modes, the range of performance scalability, and the target efficiency metric) to optimize the design for maximum energy efficiency. We also present a selective replication-based DVFS design methodology that identifies hardware modules for which context-aware multimode design may be inefficient and creates dedicated module replicas for different operating modes for such modules. We show that context-aware design can reduce average power by up to 20% over a conventional multimode design flow. Selective replication can reduce average power by an additional 4%. We also use the generated insights to identify microarchitectural decisions that impact DVFS efficiency. We show that the benefits from the proposed design-level techniques increase when microarchitectural transformations are allowed.
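The energy accounting behind this abstract's argument can be sketched with the standard dynamic-power relation P ≈ αCV²f, weighted by the fraction of time spent in each mode; the mode table and constants below are invented for illustration:

```python
# Per-mode dynamic power P = alpha * C * V^2 * f (switching power only;
# leakage, regulator overheads, etc. omitted). All values are illustrative.
ALPHA_C = 1.0e-9  # effective switched capacitance x activity factor (F), assumed

modes = {
    # mode: (supply voltage V, clock frequency Hz, fraction of time in mode)
    "peak": (1.0, 2.0e9, 0.1),
    "low":  (0.6, 0.5e9, 0.9),
}

avg_power = sum(ALPHA_C * v * v * f * t for v, f, t in modes.values())
peak_only = ALPHA_C * 1.0 ** 2 * 2.0e9  # same hardware pinned at peak settings

print(f"time-weighted average power: {avg_power:.3f} W "
      f"(always-peak: {peak_only:.1f} W)")
```

Because the quadratic voltage term dominates, a device that spends 90% of its life in the low mode gets most of its energy savings there, which is why the paper optimizes the design for the low-power operating context rather than only for peak.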

39 citations

### Cites methods from "Measurement of inherent noise in ED..."

• ...We evaluate replication decisions at the granularity of RTL modules, and we analyze replication at different granularities in this section....


Journal ArticleDOI
Abstract: The value of guardband reduction is a critical open issue for the semiconductor industry. For example, due to competitive pressure, foundries have started to incent the design of manufacturing-friendly ICs through reduced model guardbands when designers adopt layout restrictions. The industry also continuously weighs the economic viability of relaxing process variation limits in the technology roadmap (available: http://public.itrs.net). Our work gives the first-ever quantification of the impact of model guardband reduction on outcomes from the synthesis, place and route (SPR) flow; 40% is the amount of guardband reduction reported by IBM for a variation-aware timing methodology. For the embedded processor core we observe up to 8% standard-cell area reduction, 7% routed wirelength reduction, 5% dynamic power reduction, and 10% leakage power reduction at 30% guardband reduction. We also report a set of fine-grain SPICE simulations that accurately assesses the impact of process guardband reduction, as distinguished from overall guardband reductions, on yield. We observe up to 4% increase in number of good dies per wafer at 27% process guardband reduction (i.e., with fixed voltage and temperature). Our results suggest that there is justification for the design, EDA and process communities to enable guardband reduction as an economic incentive for manufacturing-friendly design practices.

36 citations

Proceedings ArticleDOI
Yang Du
01 Oct 2016
TL;DR: This work develops machine learning-based models that predict whether a placement solution is routable without conducting trial or early global routing, and uses these models to accurately predict iso-performance Pareto frontiers of utilization, aspect ratio and number of layers in the back-end-of-line (BEOL) stack.
Abstract: In advanced technology nodes, physical design engineers must estimate whether a standard-cell placement is routable (before invoking the router) in order to maintain acceptable design turnaround time. Modern SoC designs consume multiple compute servers, memory, tool licenses and other resources for several days to complete routing. When the design is unroutable, resources are wasted, which increases the design cost. In this work, we develop machine learning-based models that predict whether a placement solution is routable without conducting trial or early global routing. We also use our models to accurately predict iso-performance Pareto frontiers of utilization, aspect ratio and number of layers in the back-end-of-line (BEOL) stack. Furthermore, using data mining and machine learning techniques, we develop new methodologies to generate training examples given very few placements. We conduct validation experiments in three foundry technologies (28nm FDSOI, 28nm LP and 45nm GS), and demonstrate accuracy ≥ 85.9% in predicting routability of a placement. Our predictions of Pareto frontiers in the three technologies are pessimistic by at most 2% with respect to the maximum achievable utilization for a given design in a given BEOL stack.
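A minimal sketch of the routability-prediction idea follows, using invented features and labels (the paper trains on real placement-derived parameters and foundry testcases); logistic regression stands in here for whatever model family the authors actually use:

```python
import math
import random

random.seed(1)

# Invented ground truth: a placement is routable (1) unless utilization is
# high and the aspect ratio is far from square. Real labels would come from
# actual routing runs.
def label(util, aspect):
    return 1 if util + 0.3 * abs(aspect - 1.0) < 0.85 else 0

data = []
for _ in range(200):
    u = random.uniform(0.5, 1.0)   # placement utilization
    a = random.uniform(0.5, 2.0)   # floorplan aspect ratio
    data.append((u, a, label(u, a)))

# Logistic regression trained by plain stochastic gradient descent.
w_u = w_a = b = 0.0
for _ in range(1000):
    for u, a, y in data:
        p = 1.0 / (1.0 + math.exp(-(w_u * u + w_a * a + b)))
        w_u -= 0.1 * (p - y) * u
        w_a -= 0.1 * (p - y) * a
        b   -= 0.1 * (p - y)

# Classify by the sign of the logit (equivalent to p > 0.5).
correct = sum((w_u * u + w_a * a + b > 0.0) == bool(y) for u, a, y in data)
accuracy = correct / len(data)
print(f"training accuracy: {accuracy:.2%}")
```

Sweeping the utilization input of such a trained model over a grid of aspect ratios and BEOL stacks is, in spirit, how the paper's Pareto frontiers of achievable utilization are traced without running the router.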

34 citations

##### References

Journal ArticleDOI
Abstract: The asymptotic behavior of the systems $X_{n+1} = X_n + a_n b(X_n, \xi_n) + a_n \sigma(X_n)\psi_n$ and $dy = \bar{b}(y)\,dt + \sqrt{a(t)}\,\sigma(y)\,dw$ is studied, where $\{\psi_n\}$ is i.i.d. Gaussian, $\{\xi_n\}$ is a (correlated) bounded sequence of random variables and $a_n \approx A_0/\log(A_1 + n)$. Without $\{\xi_n\}$, such algorithms are versions of the "simulated annealing" method for global optimization. When the objective function values can only be sampled via Monte Carlo, the discrete algorithm is a combination of stochastic approximation and simulated annealing. Our forms are appropriate. The $\{\psi_n\}$ are the "annealing" variables, and $\{\xi_n\}$ is the sampling noise. For large $A_0$, a full asymptotic analysis is presented, via the theory of large deviations: mean escape time (after arbitrary time $n$) from neighborhoods of stable sets of the algorithm, mean transition times (after arbitrary time $n$) from a neighborhood of one stable set to another, a...
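A toy instance of the recursion studied in this reference can be run directly, with $b$ a noisy negative gradient and all constants chosen arbitrarily (in particular $A_0$ is small, so the iterate settles into a nearby well rather than exhibiting the large-$A_0$ escape behavior the analysis characterizes):

```python
import math
import random

random.seed(2)

def f(x):
    """Double-well objective with minima near x = -1 and x = +1."""
    return (x * x - 1.0) ** 2

def b(x):
    """b(X_n, xi_n): negative gradient of f seen through bounded sampling
    noise xi_n (as when f is estimated by Monte Carlo)."""
    return -4.0 * x * (x * x - 1.0) + random.uniform(-0.5, 0.5)

A0, A1 = 0.05, 10.0                 # arbitrary; the paper analyzes large A0
x = -2.0                            # start outside both wells
for n in range(20000):
    a_n = A0 / math.log(A1 + n)     # slowly decreasing gain
    psi_n = random.gauss(0.0, 1.0)  # the "annealing" variable
    x = x + a_n * b(x) + a_n * psi_n

print(f"final x = {x:.2f}, f(x) = {f(x):.3f}")
```

With steps this small the iterate equilibrates in whichever well it reaches first; quantifying the mean time to escape that well and transit to the other one is precisely what the abstract's large-deviations analysis provides.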

141 citations

Journal ArticleDOI
TL;DR: A search for the global minimum of a function is proposed; the search is based on sequential noisy measurements, and the search plan is shown to be convergent in probability to a set of minimizers.
Abstract: A search for the global minimum of a function is proposed; the search is on the basis of sequential noisy measurements. Because no unimodality assumptions are made, stochastic approximation and other well-known methods are not directly applicable. The search plan is shown to be convergent in probability to a set of minimizers. This study was motivated by investigations into machine learning. This setting is explained, and the methodology is applied to create an adaptively improving strategy for 8-puzzle problems.

55 citations

### "Measurement of inherent noise in ED..." refers background in this paper

• ...For example, in [8], Yakowitz and Lugosi studied a formulation of random search in the presence of noise for the machine learning domain....


Journal ArticleDOI
TL;DR: A detailed software architecture is presented that allows flexible, efficient and accurate assessment of the practical implications of new move-based algorithms and partitioning formulations and discusses the current level of sophistication in implementation know-how and experimental evaluation.
Abstract: We summarize the techniques of implementing move-based hypergraph partitioning heuristics and evaluating their performance in the context of VLSI design applications. Our first contribution is a detailed software architecture, consisting of seven reusable components, that allows flexible, efficient and accurate assessment of the practical implications of new move-based algorithms and partitioning formulations. Our second contribution is an assessment of the modern context for hypergraph partitioning research for VLSI design applications. In particular, we discuss the current level of sophistication in implementation know-how and experimental evaluation, and we note how requirements for real-world partitioners - if used as motivation for research - should affect the evaluation of prospective contributions. Two "implicit decisions" in the implementation of the Fiduccia-Mattheyses heuristic are used to illustrate the difficulty of achieving meaningful experimental evaluation of new algorithmic ideas.
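One of the "implicit decisions" this abstract mentions, tie-breaking among equal-gain cells in the Fiduccia-Mattheyses gain bucket, can be shown in a few lines; the cells and gains below are invented:

```python
from collections import defaultdict, deque

# Invented cells and gains. The FM/KLFM heuristic moves a maximum-gain cell,
# but WHICH max-gain cell is an implicit tie-breaking decision; bucket
# discipline (LIFO vs FIFO) silently changes the search trajectory.
gains = {"a": 2, "b": 2, "c": 1, "d": 2}

bucket = defaultdict(deque)
for cell in ("a", "b", "c", "d"):   # insertion order = netlist order
    bucket[gains[cell]].append(cell)

best_gain = max(bucket)
fifo_choice = bucket[best_gain][0]    # oldest equal-gain cell
lifo_choice = bucket[best_gain][-1]   # most recently inserted cell
print(f"FIFO picks {fifo_choice!r}, LIFO picks {lifo_choice!r}")
```

Permuting the netlist order permutes the bucket contents and hence the chosen move, which is exactly the kind of input perturbation the surveyed paper treats as a noise source.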

42 citations

### "Measurement of inherent noise in ED..." refers background in this paper

• ...For example, a KLFM netlist partitioning implementation [2] will search for the cell to be moved to a different partition based on the order of the cells in the gain bucket data structure....


Proceedings ArticleDOI
02 Jul 1986
TL;DR: It is found that Min Cut partitioning with simplified Terminal Propagation is the most efficient placement procedure studied, and that mean results of many placements should be used when comparing algorithms.
Abstract: This paper describes a study of placement procedures for VLSI Standard Cell Layout. The procedures studied are Simulated Annealing, Min Cut placement, and a number of improvements to Min Cut placement including a technique called Terminal Propagation which allows Min Cut to include the effect of connections to external cells. The Min Cut procedures are coupled with a Force Directed Pairwise Interchange (FDPI) algorithm for placement improvement. For the same problem these techniques produce a range of solutions with a typical standard deviation of 4% for the total wire length and 3% to 4% for the routed area. The spread of results for Simulated Annealing is even larger. This distribution of results for a given algorithm implies that mean results of many placements should be used when comparing algorithms. We find that Min Cut partitioning with simplified Terminal Propagation is the most efficient placement procedure studied.

38 citations

### "Measurement of inherent noise in ED..." refers background in this paper

• ...In the VLSI CAD domain, some early discoveries about noise in placement tools are presented in [5]....


Proceedings ArticleDOI
08 Apr 2000
TL;DR: There is inherent variability in wire lengths obtained using commercially available place and route tools; wire length estimation error cannot be any smaller than a lower limit due to this variability, and the proposed model works well within these variability limitations.
Abstract: We present a novel technique for estimating individual wire lengths in a given standard-cell-based design during the technology mapping phase of logic synthesis. The proposed method is based on creating a black box model of the place and route tool as a function of a number of parameters which are all available before layout. The place and route tool is characterized, only once, by applying it to a set of typical designs in a certain technology. We also propose a net bounding box estimation technique based on the layout style and net neighborhood analysis. We show that there is inherent variability in wire lengths obtained using commercially available place and route tools: wire length estimation error cannot be any smaller than a lower limit due to this variability. The proposed model works well within these variability limitations.
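In its simplest form, the black-box modeling idea reduces to fitting pre-layout features against measured post-route lengths. The single-feature least-squares sketch below uses invented data (the paper's model uses many parameters and real place-and-route characterization runs):

```python
# Invented characterization data: (net fanout, post-route wirelength in
# arbitrary units). One feature keeps the least-squares fit readable.
data = [(2, 11.0), (2, 13.0), (3, 17.0), (4, 22.0), (4, 20.0), (6, 31.0)]

n = len(data)
sx = sum(f for f, _ in data)
sy = sum(l for _, l in data)
sxx = sum(f * f for f, _ in data)
sxy = sum(f * l for f, l in data)

w1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
w0 = (sy - w1 * sx) / n                          # intercept

pred = w0 + w1 * 5  # predicted wirelength for a fanout-5 net
print(f"L ~ {w0:.2f} + {w1:.2f} * fanout; fanout-5 net -> {pred:.1f}")
```

The residual spread at each fanout (11 vs 13 at fanout 2 here) is the "inherent variability" floor the abstract refers to: no model, however characterized, can predict below it.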

35 citations

### "Measurement of inherent noise in ED..." refers background or result in this paper

• ...While the above-mentioned studies used the concept of noise to generate isomorphic circuits for tool benchmarking, Bodapati and Najm [1] analyzed noise in tools from a different perspective....


• ...Starting from the premise that noise due to cell/net ordering and naming has a negative effect on estimators, the authors of [1] proposed a pre-layout estimation model for individual wire length, and noted that the accuracy of their estimations is worsened by inherent tool noise (with respect to ordering and naming)....
