Proceedings ArticleDOI

Measurement of inherent noise in EDA tools

TL;DR: This work identifies sources of noise in EDA tools, analyzes the effects of these noise sources on design quality, and proposes new behavior criteria for tools with respect to the existence and management of noise.
Abstract: With advancing semiconductor technology and exponentially growing design complexities, predictability of design tools becomes an important part of a stable top-down design process. Prediction of individual tool solution quality enables designers to use tools to achieve best solutions within prescribed resources, thus reducing design cycle time. However, as EDA tools become more complex, they become less predictable. One factor in the loss of predictability is inherent noise in both algorithms and how the algorithms are invoked. In this work, we seek to identify sources of noise in EDA tools, and analyze the effects of these noise sources on design quality. Our specific contributions are: (i) we propose new behavior criteria for tools with respect to the existence and management of noise; (ii) we compile and categorize possible perturbations in the tool use model or tool architecture that can be sources of noise; and (iii) we assess the behavior of industry place and route tools with respect to these criteria and noise sources. While the behavior criteria give some guidelines for and characterize the stability of tools, we are not recommending that tools be immune from input perturbations. Rather, the categorization of noise allows us to better understand how tools will or should behave; this may eventually enable improved tool predictors that consider inherent tool noise.
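As a concrete illustration of the kind of measurement the abstract describes, the sketch below applies a meaning-preserving perturbation to a design (here only a perturbation seed stands in for cell/net renaming or reordering) and reports the spread of a quality metric across otherwise identical runs. The function run_place_and_route is a hypothetical placeholder, not the tool interface used in the paper; it returns synthetic noisy wirelengths so the harness runs end to end.

```python
# Minimal sketch (not the paper's experimental setup) of quantifying inherent
# P&R noise: rerun the flow under meaning-preserving perturbations and measure
# the spread of a quality metric. The tool invocation is a mocked placeholder.
import random
import statistics

def run_place_and_route(netlist_path: str, perturbation_seed: int) -> float:
    """Placeholder for invoking a P&R tool on a perturbed-but-equivalent netlist;
    returns a total wirelength with ~2% synthetic run-to-run noise."""
    rng = random.Random(f"{netlist_path}:{perturbation_seed}")
    return 1.0e6 * (1.0 + rng.gauss(0.0, 0.02))

def measure_noise(netlist_path: str, trials: int = 10) -> dict:
    wirelengths = [run_place_and_route(netlist_path, seed) for seed in range(trials)]
    best, worst = min(wirelengths), max(wirelengths)
    return {
        "mean": statistics.mean(wirelengths),
        "stdev": statistics.stdev(wirelengths),
        "best_to_worst_pct": 100.0 * (worst - best) / best,
    }

print(measure_noise("design.v"))
```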


Citations
Posted Content
TL;DR: This work presents a comparative analysis of the Modular Audio Recognition Framework (MARF) and the General Intensional Programming System (GIPSY) using a range of software metrics.
Abstract: This work presents a comparative analysis of the Modular Audio Recognition Framework (MARF) and the General Intensional Programming System (GIPSY) using a range of software metrics. At first, we review the general principles, architecture, and operation of MARF and GIPSY by examining their frameworks and running them in the Eclipse environment. Then, we study several important metrics, including a few state-of-the-art metrics, and rank them by their usefulness and their influence on the different quality attributes of a software system. The quality attributes are viewed and computed with the help of the Logiscope and McCabe IQ tools, which perform a comprehensive analysis of the case studies and generate quality reports at the factor, criteria, and metrics levels. In the next step, we identify the worst code at each of these levels, extract it, and provide recommendations to improve its quality. We implement and test some of the metrics ranked as the most useful with a set of test cases in JDeodorant. Finally, we analyze both MARF and GIPSY with a fuzzy code scan using MARFCAT to find the weak and vulnerable classes.

1 citation

Proceedings ArticleDOI
01 Mar 2019
TL;DR: This paper describes the use of estimated Pareto-optimal trade-off sets to let designers visualize which EDA tool configuration settings, or “knobs”, yield an optimal post-detail-route design with respect to two design metrics: critical path length and core area.
Abstract: The ability to configure physical design tools often depends on the experience and knowledge of the physical designer (PD). Technology node sizes are ever decreasing, digital design sizes vary drastically, and design constraints change with the needs of the application. Because these changes occur frequently and physical design runs can be lengthy, accurate quality-of-design estimates early in the design process are crucial. Collecting these metrics is computationally expensive, creating a need to determine how best to create and extract information as design flows change. This paper describes the use of estimated Pareto-optimal trade-off sets to provide designers with the capability of visualizing the results of Electronic Design Automation (EDA) tool configuration settings, or “knobs”, that will offer an optimal post-detail-route design based on two design metrics: critical path length and core area. We show that, for a given set of design constraints, the knob settings corresponding to a point on the Pareto front are optimal. With only 38 samples per design, we were able to produce estimated detail-routed design metrics with a worst-case error (WCE) of less than 10%.
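The core computation behind such trade-off sets is extracting the non-dominated (Pareto-optimal) subset of sampled knob configurations. The sketch below is a minimal illustration under the assumption that both metrics are minimized; the Sample type and the example data are invented for illustration, not taken from the paper.

```python
# Minimal sketch (not the paper's method): extract the Pareto-optimal subset of
# sampled EDA "knob" configurations when both metrics (critical path length,
# core area) are to be minimized. The sample tuples are illustrative only.
from typing import NamedTuple

class Sample(NamedTuple):
    knobs: dict           # tool configuration used for this run
    critical_path: float  # ns
    core_area: float      # um^2

def pareto_front(samples: list[Sample]) -> list[Sample]:
    """Return samples not dominated by any other sample (smaller is better)."""
    front = []
    for s in samples:
        dominated = any(
            (o.critical_path <= s.critical_path and o.core_area <= s.core_area)
            and (o.critical_path < s.critical_path or o.core_area < s.core_area)
            for o in samples
        )
        if not dominated:
            front.append(s)
    return sorted(front, key=lambda s: s.critical_path)

samples = [
    Sample({"effort": "high"}, 1.8, 12000.0),
    Sample({"effort": "medium"}, 2.1, 10500.0),
    Sample({"effort": "low"}, 2.6, 10900.0),  # dominated by the "medium" run
]
print(pareto_front(samples))
```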

1 citation


Cites background from "Measurement of inherent noise in ED..."

  • ...sample size [7], tool noise [8], dimensionality [9], and multicollinear-...


Proceedings ArticleDOI
29 Oct 2022
TL;DR: In this paper, a stochastic approach called LGC-Net is proposed to address the problem that non-deterministic parallel routing hampers model training and degrades prediction accuracy.
Abstract: Deep learning is a promising approach to early DRV (Design Rule Violation) prediction. However, non-deterministic parallel routing hampers model training and degrades prediction accuracy. In this work, we propose a stochastic approach, called LGC-Net, to solve this problem. In this approach, we develop new techniques, a Gaussian random field layer and a focal likelihood loss function, to seamlessly integrate the Log Gaussian Cox process with deep learning. The approach provides not only statistical regression results but also classification results at different thresholds without retraining. Experimental results with noisy training data on industrial designs demonstrate that LGC-Net achieves significantly better DRV density prediction accuracy than prior art.
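For readers unfamiliar with the statistical model named in the abstract, the toy sketch below illustrates the Log Gaussian Cox process idea on its own (it is not LGC-Net): per-tile DRV counts are Poisson-distributed with a log-intensity that adds a Gaussian random field to a deterministic prediction, which is what lets the likelihood absorb run-to-run routing noise. The grid size, smoothing length, and constant base prediction are arbitrary choices for the illustration.

```python
# Toy illustration of a Log Gaussian Cox process (not LGC-Net itself):
# DRV counts per layout tile are Poisson with log-intensity = prediction + GRF.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
grid = (64, 64)                                        # layout tiles (illustrative)

mu = -2.0 * np.ones(grid)                              # stand-in for a predicted log-intensity
z = gaussian_filter(rng.normal(size=grid), sigma=4.0)  # smooth Gaussian random field
lam = np.exp(mu + z)                                   # LGCP intensity per tile

drv_counts = rng.poisson(lam)                          # one noisy "observation" of DRV counts

# Poisson negative log-likelihood of the observation (up to the log(k!) constant)
nll = np.sum(lam - drv_counts * np.log(lam))
print(f"total DRVs: {drv_counts.sum()}, NLL: {nll:.1f}")
```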

Dissertation
01 Jan 2013
TL;DR: In this paper, the authors evaluate design rules at the cell level using first-order models of variability and manufacturability and layout topology/congestion-based area estimates.
Abstract: Design rules (DRs) are the biggest design-relevant quality metric for a technology. Even small changes in DRs can have a significant impact on manufacturability as well as circuit characteristics including layout area, variability, power, and performance. Several works have been published to systematically evaluate design rules. The most recent among them is the Design Rule Evaluator (UCLA_DRE), a tool developed by the NanoCad lab at UCLA for fast and systematic evaluation of design rules and layout styles in terms of the major layout characteristics of area, manufacturability, and variability. The framework essentially creates a virtual standard-cell library and performs the evaluation on the virtual layout using first-order models of variability and manufacturability (instead of relying on accurate simulation) and layout topology/congestion-based area estimates (instead of explicit and slow layout generation). However, UCLA_DRE suffers from a few major limitations. First, it currently cannot evaluate the interaction between overlay design rules and overlay control, which is becoming more critical and more challenging with the move toward multiple-patterning (MP) lithography. Second, it evaluates design rules at the cell level, which may lead to misleading conclusions because most designs are routing-limited and, hence, not every change in cell area results in a corresponding change in chip area. Third, delay was not evaluated, even though it is well known that delay changes can affect chip area through the different buffering and gate sizing needed to meet timing requirements. The first part of this dissertation offers a framework to study the interaction between overlay design rules and overlay control options in terms of area, performance, and yield. The framework can also be used for designing informed, design-aware overlay metrology and control strategies. In this work, the framework was used to explore the design impact of LELE double-patterning rules and the poly line-end extension rule defined between the poly and active layers for different overlay characteristics (i.e., within-field vs. field-to-field overlay) and different overlay models at the 14nm node. Interesting conclusions can be drawn from the results. For example, one result shows that increasing the minimum mask-overlap length by 1nm would allow the use of a third-order wafer/sixth-order field-level overlay model instead of a sixth-order wafer/sixth-order field-level model with negligible impact on design. In the second part of the dissertation, a new methodology called chipDRE, a framework to evaluate design rules at the chip level, is described. chipDRE uses a good-chips-per-wafer metric to unify area, performance, variability, and functional yield. It uses UCLA_DRE to generate a virtual standard-cell library and a mix of physical design and semi-empirical models to estimate the chip-level area change due to both cell delay and cell area changes. One interesting result for well-to-active spacing shows a non-monotonic relationship of "good chips per wafer" with the rule value.

Cites background from "Measurement of inherent noise in ED..."

  • ...Moreover, the work of [30] has demonstrated that little noise can have huge effect on place-and-route solution quality; this makes using a model-based estimate even more attractive....


Dissertation
21 Feb 2007

Cites result from "Measurement of inherent noise in ED..."

  • ...Also in [33] a similar study was performed that resulted in a difference of up to 7% between the best and the worst placement result....


References
Journal ArticleDOI
TL;DR: The authors study the asymptotic behavior of recursive stochastic algorithms in which the objective function values can only be sampled via Monte Carlo, so that the discrete algorithm is a combination of stochastic approximation and simulated annealing.
Abstract: The asymptotic behavior of the systems $X_{n+1} = X_n + a_n b(X_n, \xi_n) + a_n \sigma(X_n)\psi_n$ and $dy = \bar{b}(y)\,dt + \sqrt{a(t)}\,\sigma(y)\,dw$ is studied, where $\{\psi_n\}$ is i.i.d. Gaussian, $\{\xi_n\}$ is a (correlated) bounded sequence of random variables, and $a_n \approx A_0/\log(A_1 + n)$. Without $\{\xi_n\}$, such algorithms are versions of the "simulated annealing" method for global optimization. When the objective function values can only be sampled via Monte Carlo, the discrete algorithm is a combination of stochastic approximation and simulated annealing. Our forms are appropriate. The $\{\psi_n\}$ are the "annealing" variables, and $\{\xi_n\}$ is the sampling noise. For large $A_0$, a full asymptotic analysis is presented via the theory of large deviations: mean escape time (after arbitrary time n) from neighborhoods of stable sets of the algorithm, mean transition times (after arbitrary time n) from a neighborhood of one stable set to another, a...
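A minimal sketch of the discrete recursion in the abstract is given below, with the descent direction b(X_n, xi_n) estimated from noisy (Monte Carlo-style) objective samples and the annealing term a_n*sigma*psi_n added at each step; the toy objective and the constants A_0, A_1, and sigma are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the recursion
#   X_{n+1} = X_n + a_n * b(X_n, xi_n) + a_n * sigma(X_n) * psi_n,  a_n ~ A0/log(A1+n)
# where b is a noisy descent-direction estimate and psi_n is Gaussian annealing noise.
import math
import random

def objective(x: float) -> float:
    # toy multimodal objective whose values can only be sampled with noise
    return math.sin(3.0 * x) + 0.1 * x * x

def noisy_descent_direction(x: float, eps: float = 0.1) -> float:
    # two-sided finite difference of *noisy* objective samples (the xi_n noise)
    f_plus = objective(x + eps) + random.gauss(0.0, 0.05)
    f_minus = objective(x - eps) + random.gauss(0.0, 0.05)
    return -(f_plus - f_minus) / (2.0 * eps)

def anneal(x0: float, steps: int = 5000, A0: float = 1.0, A1: float = 10.0,
           sigma: float = 0.5) -> float:
    x = x0
    for n in range(1, steps + 1):
        a_n = A0 / math.log(A1 + n)
        x += a_n * noisy_descent_direction(x) + a_n * sigma * random.gauss(0.0, 1.0)
    return x

print(anneal(x0=4.0))
```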

145 citations

Journal ArticleDOI
TL;DR: A search for the global minimum of a function based on sequential noisy measurements is proposed, and the search plan is shown to be convergent in probability to a set of minimizers.
Abstract: A search for the global minimum of a function is proposed; the search is on the basis of sequential noisy measurements. Because no unimodality assumptions are made, stochastic approximation and other well-known methods are not directly applicable. The search plan is shown to be convergent in probability to a set of minimizers. This study was motivated by investigations into machine learning. This setting is explained, and the methodology is applied to create an adaptively improving strategy for 8-puzzle problems.
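The sketch below is a loose illustration of this setting, not the paper's exact search plan: candidates are drawn at random, each measurement is corrupted by noise, and the incumbent is re-measured so its value estimate improves over time. The toy objective and noise level are assumptions.

```python
# Loose illustration (not the paper's search plan): global minimization from
# noisy measurements by random sampling, with repeated measurement of the
# incumbent so that noise averages out over time.
import math
import random

def noisy_measure(x: float) -> float:
    return math.sin(5.0 * x) + 0.2 * x * x + random.gauss(0.0, 0.1)

def noisy_random_search(low: float, high: float, iters: int = 2000) -> float:
    best_x = random.uniform(low, high)
    best_mean, best_count = noisy_measure(best_x), 1
    for _ in range(iters):
        x = random.uniform(low, high)
        y = noisy_measure(x)
        # re-measure the incumbent too, so its estimate keeps improving
        best_mean = (best_mean * best_count + noisy_measure(best_x)) / (best_count + 1)
        best_count += 1
        if y < best_mean:
            best_x, best_mean, best_count = x, y, 1
    return best_x

print(noisy_random_search(-2.0, 2.0))
```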

55 citations


"Measurement of inherent noise in ED..." refers background in this paper

  • ...For example, in [8], Yakowitz and Lugosi studied a formulation of random search in the presence of noise for the machine learning domain....


Journal ArticleDOI
TL;DR: A detailed software architecture is presented that allows flexible, efficient, and accurate assessment of the practical implications of new move-based algorithms and partitioning formulations, and the current level of sophistication in implementation know-how and experimental evaluation is discussed.
Abstract: We summarize the techniques of implementing move-based hypergraph partitioning heuristics and evaluating their performance in the context of VLSI design applications. Our first contribution is a detailed software architecture, consisting of seven reusable components, that allows flexible, efficient and accurate assessment of the practical implications of new move-based algorithms and partitioning formulations. Our second contribution is an assessment of the modern context for hypergraph partitioning research for VLSI design applications. In particular, we discuss the current level of sophistication in implementation know-how and experimental evaluation, and we note how requirements for real-world partitioners - if used as motivation for research - should affect the evaluation of prospective contributions. Two "implicit decisions" in the implementation of the Fiduccia-Mattheyses heuristic are used to illustrate the difficulty of achieving meaningful experimental evaluation of new algorithmic ideas.

46 citations


Additional excerpts

  • ...For example, a KLFM netlist partitioning implementation [2] will search for the cell to be moved to a different partition based on the order of the cells in the gain bucket data structure....

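To make the ordering sensitivity in the excerpt above concrete, the sketch below shows a simplified gain-bucket structure of the kind used in FM/KLFM implementations (it is not the paper's code): cells are grouped by move gain and ties within a gain value are broken by insertion order, so a perturbation of cell order in the input can change which cell is moved and, ultimately, the final partition.

```python
# Simplified FM/KLFM-style gain bucket (illustrative, not the paper's code):
# cells grouped by move gain; ties broken by insertion order.
from collections import defaultdict, deque

class GainBucket:
    def __init__(self):
        self.buckets = defaultdict(deque)  # gain -> cells, in insertion order
        self.max_gain = None

    def insert(self, cell: str, gain: int) -> None:
        self.buckets[gain].append(cell)
        if self.max_gain is None or gain > self.max_gain:
            self.max_gain = gain

    def pop_best(self) -> str:
        """Remove and return the first-inserted cell with the highest gain."""
        cell = self.buckets[self.max_gain].popleft()
        while self.max_gain is not None and not self.buckets[self.max_gain]:
            remaining = [g for g in self.buckets if self.buckets[g]]
            self.max_gain = max(remaining) if remaining else None
        return cell

bucket = GainBucket()
for cell, gain in [("u1", 2), ("u2", 3), ("u3", 3)]:  # u2 vs u3: order decides
    bucket.insert(cell, gain)
print(bucket.pop_best())  # "u2" -- reversing the insertion order would yield "u3"
```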

Proceedings ArticleDOI
Mark R. Hartoog1
02 Jul 1986
TL;DR: It is found that Min Cut partitioning with simplified Terminal Propagation is the most efficient placement procedure studied, and that mean results of many placements should be used when comparing algorithms.
Abstract: This paper describes a study of placement procedures for VLSI standard cell layout. The procedures studied are Simulated Annealing, Min Cut placement, and a number of improvements to Min Cut placement, including a technique called Terminal Propagation which allows Min Cut to account for connections to external cells. The Min Cut procedures are coupled with a Force Directed Pairwise Interchange (FDPI) algorithm for placement improvement. For the same problem, these techniques produce a range of solutions with a typical standard deviation of 4% for the total wire length and 3% to 4% for the routed area. The spread of results for Simulated Annealing is even larger. This distribution of results for a given algorithm implies that mean results of many placements should be used when comparing algorithms. We find that Min Cut partitioning with simplified Terminal Propagation is the most efficient placement procedure studied.

39 citations


"Measurement of inherent noise in ED..." refers background in this paper

  • ...In the VLSI CAD domain, some early discoveries about noise in placement tools are presented in [5]....


Proceedings ArticleDOI
08 Apr 2000
TL;DR: There is inherent variability in wire lengths obtained using commercially available place and route tools - wire length estimation error cannot be any smaller than a lower limit due to this variability, and the proposed model works well within these variability limitations.
Abstract: We present a novel technique for estimating individual wire lengths in a given standard-cell-based design during the technology mapping phase of logic synthesis. The proposed method is based on creating a black box model of the place and route tool as a function of a number of parameters which are all available before layout. The place and route tool is characterized, only once, by applying it to a set of typical designs in a certain technology. We also propose a net bounding box estimation technique based on the layout style and net neighborhood analysis. We show that there is inherent variability in wire lengths obtained using commercially available place and route tools - wire length estimation error cannot be any smaller than a lower limit due to this variability. The proposed model works well within these variability limitations.
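A toy version of the black-box characterization described above might look like the sketch below: fit an ordinary least-squares model from pre-layout net features to post-route wirelength using data collected once from training designs. The feature set and training rows are invented placeholders, not the parameters or model form used by the authors.

```python
# Toy sketch of a black-box pre-layout wirelength model (not the authors' model):
# ordinary least squares from pre-layout net features to post-route wirelength.
import numpy as np

# rows: [fanout, estimated bounding-box half-perimeter, pin count]  (illustrative)
features = np.array([
    [2, 40.0, 3],
    [4, 90.0, 5],
    [8, 150.0, 9],
    [3, 60.0, 4],
], dtype=float)
post_route_wirelength = np.array([55.0, 130.0, 260.0, 85.0])  # illustrative targets

# least-squares fit with an intercept term
X = np.hstack([features, np.ones((features.shape[0], 1))])
coef, *_ = np.linalg.lstsq(X, post_route_wirelength, rcond=None)

def predict(fanout: float, bbox_hpwl: float, pins: float) -> float:
    return float(np.dot(coef, [fanout, bbox_hpwl, pins, 1.0]))

print(predict(5, 100.0, 6))
# Per the abstract, prediction error cannot be driven below the P&R tool's own
# run-to-run variability, no matter how good the fitted model is.
```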

39 citations


"Measurement of inherent noise in ED..." refers background or result in this paper

  • ...While the above-mentioned studies used the concept of noise to generate isomorphic circuits for tool benchmarking, Bodapati and Najm [1] analyzed noise in tools from a different perspective....


  • ...Starting from the premise that noise due to cell/net ordering and naming has a negative effect on estimators, the authors of [1] proposed a pre-layout estimation model for individual wire length, and noted that the accuracy of their estimations are worsened by inherent tool noise (with respect to ordering and naming)....
