
Showing papers by "Cadence Design Systems published in 2005"


Patent
01 Apr 2005
TL;DR: In this article, a Wafer Image Modeling and Prediction System (WIMAPS) is described that includes systems and methods that generate and/or apply models of resolution enhancement techniques (RET) and printing processes in integrated circuit (IC) fabrication.
Abstract: A Wafer Image Modeling and Prediction System (“WIMAPS”) is described that includes systems and methods that generate and/or apply models of resolution enhancement techniques (“RET”) and printing processes in integrated circuit (“IC”) fabrication. The WIMAPS provides efficient processes for use by designers in predicting the RET and wafer printing process so as to allow designers to predict printed silicon contours prior to application of RET and printing processes to the circuit design.

196 citations


Patent
22 Jun 2005
TL;DR: In this article, a method and an apparatus to perform statistical static timing analysis have been disclosed, which include performing statistical analysis on performance data of a circuit from a plurality of libraries at two or more process corners.
Abstract: A method and an apparatus to perform statistical static timing analysis have been disclosed. In one embodiment, the method includes performing statistical analysis on performance data of a circuit from a plurality of libraries at two or more process corners using a static timing analysis module, and estimating performance of the circuit at a predetermined confidence level based on results of the statistical analysis during an automated design flow of the circuit without using libraries at the predetermined confidence level.

195 citations


Patent
13 Aug 2005
TL;DR: In this article, a method of modifying polygons in a data set for maskless or mask-based optical projection lithography is proposed, which includes: 1) mapping the data set to a figure-of-demerit, 2) moving individual polygon edges to decrease the figure-of-demerit, and 3) disrupting the set of polygons to enable a further decrease in the figure-of-demerit.
Abstract: A method of modifying polygons in a data set for mask-less or mask-based optical projection lithography includes: 1) mapping the data set to a figure-of-demerit; 2) moving individual polygon edges to decrease the figure-of-demerit; and 3) disrupting the set of polygons to enable a further decrease in the figure-of-demerit, wherein disrupting polygons includes any of the following polygon disruptions: breaking up, merging, or deleting polygons.

169 citations


Patent
13 Aug 2005
TL;DR: In this paper, an apparatus and method for improving image quality in a photolithographic process includes calculating a figure of demerit for a mask function and then adjusting the mask function to reduce the figure.
Abstract: An apparatus and method for improving image quality in a photolithographic process includes calculating a figure-of-demerit for a photolithographic mask function and then adjusting said photolithographic mask function to reduce the figure of demerit.

167 citations


Patent
09 Apr 2005
TL;DR: In this article, a method for correcting a process critical layout includes characterizing the influence of individual ones of a set of worst case process variations on a simulated nano-circuit layout design.
Abstract: An apparatus and method for correcting a process critical layout includes characterizing the influence of individual ones of a set of worst-case process variations on a simulated nano-circuit layout design and then correcting layout geometries in the simulated nano-circuit layout based on such characterizations.

156 citations


Book ChapterDOI
TL;DR: The paper describes eight bounded and unbounded techniques, and analyzes the performance of these algorithms on a large and diverse set of hardware benchmarks.
Abstract: Model checking is a formal technique for automatically verifying that a finite-state model satisfies a temporal property. In model checking, generally Binary Decision Diagrams (BDDs) are used to efficiently encode the transition relation of the finite-state model. Recently model checking algorithms based on Boolean satisfiability (SAT) procedures have been developed to complement the traditional BDD-based model checking. These algorithms can be broadly classified into three categories: (1) bounded model checking which is useful for finding failures (2) hybrid algorithms that combine SAT and BDD based methods for unbounded model checking, and (3) purely SAT-based unbounded model checking algorithms. The goal of this paper is to provide a uniform and comprehensive basis for evaluating these algorithms. The paper describes eight bounded and unbounded techniques, and analyzes the performance of these algorithms on a large and diverse set of hardware benchmarks.
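The bounded category above can be illustrated with a toy, explicit-state analogue: unroll a transition system for k steps and look for a property violation. Real bounded model checking encodes this unrolling as a SAT formula; the 2-bit counter and the property here are invented for illustration.

```python
# Toy finite-state model: a 2-bit counter that wraps 0 -> 1 -> 2 -> 3 -> 0.
# Property to refute: "the counter never reaches state 3".
def step(state):
    return (state + 1) % 4

def bounded_check(init, bad, k):
    """Explicit-state analogue of bounded model checking: explore all
    executions of length <= k and report the first depth reaching `bad`."""
    frontier = {init}
    for depth in range(k + 1):
        if bad in frontier:
            return depth          # a counterexample of this length exists
        frontier = {step(s) for s in frontier}
    return None                   # property holds up to bound k

print(bounded_check(0, 3, 2))  # None: bound too small to find the failure
print(bounded_check(0, 3, 5))  # 3
```

This also shows why bounded checking is "useful for finding failures": a `None` result only means no failure within the bound, not that the property holds unboundedly.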

120 citations


Patent
06 Dec 2005
TL;DR: In this paper, a schematic view of a plurality of interconnected circuit devices of a circuit is displayed on the computer's display, and one or more of the devices of the displayed schematic view are selected by a user.
Abstract: In a computer implemented method of device layout in an integrated circuit design an array having a plurality of cells is selected and stored in a memory of a computer. A schematic view of a plurality of interconnected circuit devices of a circuit is displayed on the computer's display. One or more of the circuit devices of the displayed schematic view are selected by a user. Responsive to the selection of each circuit device, a processing means of the computer populates an empty cell of the array in the memory of the computer with a corresponding layout instance of the circuit device, wherein each layout instance represents a physical arrangement of material(s) that form the corresponding selected circuit device.

94 citations


Proceedings ArticleDOI
31 May 2005
TL;DR: This work proposes a methodology on top of a set of algorithms to exploit non-trivial voltage island boundaries for optimal power versus design cost trade-off under performance requirement, and shows a ten-fold improvement over current logical-boundary based industry approach.
Abstract: High power consumption not only leads to short battery life for handheld devices, but also causes on-chip thermal and reliability problems in general. As power consumption is proportional to the square of supply voltage, reducing supply voltage can significantly reduce power consumption. Multi-supply voltage (MSV) has previously been introduced to provide finer-grain power and performance trade-off. In this work we propose a methodology on top of a set of algorithms to exploit non-trivial voltage island boundaries for optimal power versus design cost trade-off under performance requirement. Our algorithms are efficient, robust and error-bounded, and can be flexibly tuned to optimize for various design objectives (e.g., minimal power within a given number of voltage islands, or minimal fragmentation in voltage islands within a given power bound) depending on the design requirement. Our experiment on real industry designs shows a ten-fold improvement of our method over current logical-boundary based industry approach.
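The claim that power is proportional to the square of supply voltage is easy to make concrete. A minimal sketch with hypothetical numbers (the capacitance, frequency, and island voltages below are not from the paper):

```python
def dynamic_power(c_eff, vdd, freq, activity=1.0):
    """Dynamic CMOS power: P = activity * C_eff * Vdd^2 * f."""
    return activity * c_eff * vdd ** 2 * freq

# Hypothetical numbers: the same block at a nominal voltage and at a
# lowered voltage-island supply.
p_nominal = dynamic_power(c_eff=1e-9, vdd=1.2, freq=500e6)
p_lowered = dynamic_power(c_eff=1e-9, vdd=0.9, freq=500e6)
saving = 1 - p_lowered / p_nominal
print(f"power saving from a 1.2 V -> 0.9 V island: {saving:.1%}")
```

Since the ratio depends only on (0.9/1.2)^2, moving non-critical logic to the lower island saves about 44% of its dynamic power, which is the finer-grain trade-off MSV is after.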

82 citations


Journal ArticleDOI
TL;DR: This work presents a hardware-efficient design increasing throughput for the AES algorithm using a high-speed parallel pipelined architecture and achieves a high throughput of 29.77 Gbps in encryption whereas the highest throughput reported in literature is 21.54 Gbps.

79 citations


Proceedings ArticleDOI
07 Mar 2005
TL;DR: This introductory embedded tutorial gives an overview of the design problems at hand when designing integrated electronic systems in nanometer-scale CMOS technologies, such as the increased leakage and variability with scaling technologies.
Abstract: This special session addresses the problems that designers face when implementing analog and digital circuits in nanometer technologies. An introductory embedded tutorial will give an overview of the design problems at hand: the leakage power and process variability and their implications for digital circuits and memories, and the reduced supply voltages, design productivity, and signal integrity problems for embedded analog blocks. Next, a panel of experts from both industrial semiconductor houses and design companies, EDA vendors, and research institutes will present and discuss with the audience their opinions on whether the design road ends at marker "65nm" or not.

68 citations


Journal ArticleDOI
TL;DR: A general hierarchical analysis methodology, HiPRIME, to efficiently analyze RLKC power delivery systems and develops and applies the IEKS method to build the multiport Norton equivalent circuits, which transform all the internal sources to Norton current sources at ports.
Abstract: This paper proposes a general hierarchical analysis methodology, HiPRIME, to efficiently analyze RLKC power delivery systems. After partitioning the circuits into blocks, we develop and apply the IEKS (Improved Extended Krylov Subspace) method to build the multiport Norton equivalent circuits, which transform all the internal sources to Norton current sources at ports. Since there are no active elements inside the Norton circuits, passive or realizable model order reduction techniques such as PRIMA can be applied. A significant speed improvement is observed: 700 times faster than SPICE with less than 0.2% error, and 7 times faster than a state-of-the-art solver, InductWise. To further reduce the top-level hierarchy runtime, we develop a second-level model reduction algorithm and prove its passivity.

Patent
15 Feb 2005
TL;DR: In this paper, a method for building a hierarchical representation of a circuit for simulation is presented, where the source file contains SPICE-like netlist descriptions of the circuit in a flattened representation.
Abstract: A method for building a hierarchical representation of a circuit for simulation includes 1) receiving a source file containing SPICE-like netlist descriptions of the circuit in a flattened representation; 2) generating a primitive database using the source file, where the primitive database includes a geometries-describing section for storing a plurality of primitive subcircuit blocks; 3) generating an instance database using the geometries-describing section, where the instance database includes instance subcircuit blocks corresponding to explicitly-expressed primitive subcircuit blocks with predefined geometric values; 4) generating a simulation database using the instance database, where the simulation database includes simulation subcircuit blocks corresponding to fully-flattened instance subcircuit blocks; and 5) simulating the circuit using the simulation database, the instance database, and the primitive database.

Patent
21 Sep 2005
TL;DR: In this article, a computer aided design tool and method for designing IC layouts by recommending subcircuit layout constraints based upon an automated identification from a circuit schematic of sub-circuit types requiring special IC layout constraints.
Abstract: A computer aided design tool and method for designing IC layouts by recommending subcircuit layout constraints based upon an automated identification from a circuit schematic of subcircuit types requiring special IC layout constraints. Subcircuit types are identified on the basis of netlist examination, as well as cues from the layout of the circuit schematic.

Journal ArticleDOI
TL;DR: In this paper, the average cluster-to-cluster velocity dispersion in seven different cluster aggregates (knots) is <10 km s-1, which is consistent with those reached by Zhang and coworkers based on comparisons between the positions of the clusters and the velocity and density structure of the nearby interstellar medium.
Abstract: Long-slit spectra of several dozen young star clusters have been obtained at three positions in the Antennae galaxies with the Space Telescope Imaging Spectrograph and its 52'' × 0.2'' slit. Based on Hα emission-line measurements, the average cluster-to-cluster velocity dispersion in seven different cluster aggregates ("knots") is <10 km s-1. The fact that this upper limit is similar to the velocity dispersion of gas in the disks of typical spiral galaxies suggests that the triggering mechanism for the formation of young massive compact clusters ("super star clusters") is unlikely to be high-velocity cloud-cloud collisions. On the other hand, models in which preexisting giant molecular clouds in the disks of spiral galaxies are triggered into cluster formation are compatible with the observed low-velocity dispersions. These conclusions are consistent with those reached by Zhang and coworkers based on comparisons between the positions of the clusters and the velocity and density structure of the nearby interstellar medium. We find evidence for systematically lower values of the line ratios [N II]/Hα and [S II]/Hα in the bright central regions of some of the knots relative to their outer regions. This suggests that the harder ionizing photons are used up in the regions nearest the clusters, and the diffuse ionized gas farther out is photoionized by "leakage" of the leftover low-energy photons. The low values of the [S II]/Hα line ratio, typically [S II]/Hα < 0.4, indicate that the emission regions are photoionized rather than shock heated. The absence of evidence for shock-heated gas is an additional indication that high-velocity cloud-cloud collisions are not playing a major role in the formation of young clusters.


Proceedings ArticleDOI
02 Oct 2005
TL;DR: An O(n log n) time algorithm is proposed to construct spanning graph for RSMTRB, and the experimental results show that this approach can achieve a solution with significantly reduced wire length.
Abstract: Given n points on a plane, a rectilinear Steiner minimal tree (RSMT) connects these points through some extra points called Steiner points to achieve a tree with minimal total wire length. Taking blockages into account dramatically increases the problem complexity. It is extremely unlikely that an efficient optimal algorithm exists for rectilinear Steiner minimal tree construction with rectilinear blockages (RSMTRB). Although there exist some heuristic algorithms for this problem, they have either poor quality or expensive running time. In this paper, we propose an efficient and effective approach to solve RSMTRB. The connection graph we used in this approach is called spanning graph which only contains O(n) edges and vertices. An O(n log n) time algorithm is proposed to construct spanning graph for RSMTRB. The experimental results show that this approach can achieve a solution with significantly reduced wire length. The total run time increased is negligible in the whole design flow.
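The paper's O(n log n) spanning-graph construction is not reproduced here, but the usual starting point for rectilinear Steiner heuristics, a minimum spanning tree under the Manhattan metric, can be sketched with Prim's algorithm. The pin coordinates are hypothetical, and this is the blockage-free baseline, not the RSMTRB algorithm itself.

```python
import heapq

def manhattan(p, q):
    """Rectilinear (L1) distance between two pins."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def rectilinear_mst(points):
    """Prim's algorithm on the complete graph under the Manhattan metric.
    An MST upper-bounds the RSMT wirelength; this naive version considers
    all O(n^2) edges, which is the cost the paper's O(n)-edge spanning
    graph is designed to avoid."""
    n = len(points)
    in_tree = [False] * n
    edges, total = [], 0
    heap = [(0, 0, 0)]  # (cost, node, parent)
    while heap:
        cost, u, parent = heapq.heappop(heap)
        if in_tree[u]:
            continue
        in_tree[u] = True
        if u != parent:
            edges.append((parent, u))
            total += cost
        for v in range(n):
            if not in_tree[v]:
                heapq.heappush(heap, (manhattan(points[u], points[v]), v, u))
    return edges, total

pins = [(0, 0), (4, 0), (4, 3), (0, 3)]
tree, wirelength = rectilinear_mst(pins)
print(wirelength)  # 10
```

A Steiner heuristic would then refine such a tree by inserting Steiner points; handling blockages on top of that is what makes RSMTRB hard.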

Patent
31 May 2005
TL;DR: In this article, a method for generating an OPC model which takes into account across-wafer variations which occur during the process of manufacturing semiconductor chips based on the parameters of test patterns measured at the "wafer sweet spots" is provided.
Abstract: A method for generating an OPC model is provided which takes into consideration across-wafer variations which occur during the process of manufacturing semiconductor chips. More particularly, a method for generating an OPC model is provided which takes into consideration across-wafer variations which occur during the process of manufacturing semiconductor chips based on the parameters of test patterns measured at the “wafer sweet spots” so as to arrive at an accurate model.

Proceedings ArticleDOI
13 Jun 2005
TL;DR: Experimental results show that the proposed algorithm achieves at least 10X speed-up over the fastest decap allocation method reported so far with similar or even better budget quality and a power grid circuit with about one million nodes can be optimized using the new method in half an hour on the latest Linux workstations.
Abstract: This paper proposes a fast decoupling capacitance (decap) allocation and budgeting algorithm for both early stage decap estimation and later stage decap minimization in today's VLSI physical design. The new method is based on a sensitivity-based conjugate gradient (CG) approach. But it adopts several new techniques, which significantly improve the efficiency of the optimization process. First, the new approach applies the time-domain merged adjoint network method for fast sensitivity calculation. Second, an efficient search step scheme is proposed to replace the time-consuming line search phase in conventional conjugate gradient method for decap budget optimization. Third, instead of optimizing an entire large circuit, we partition the circuit into a number of smaller sub-circuits and optimize them separately by exploiting the locality of adding decaps. Experimental results show that the proposed algorithm achieves at least 10X speed-up over the fastest decap allocation method reported so far with similar or even better budget quality and a power grid circuit with about one million nodes can be optimized using the new method in half an hour on the latest Linux workstations.

Proceedings ArticleDOI
07 Mar 2005
TL;DR: The design of Seq-SAT is described, an efficient sequential SAT solver with improved search strategies over Satori, and a decision variable selection heuristic more suitable for solving the sequential problems is presented.
Abstract: A sequential SAT solver, Satori, was recently proposed (Iyer, M.K. et al., Proc. IEEE/ACM Int. Conf. on Computer-Aided Design, 2003) as an alternative to combinational SAT in verification applications. This paper describes the design of Seq-SAT, an efficient sequential SAT solver with improved search strategies over Satori. The major improvements include: (1) a new and better heuristic for minimizing the set of assignments to state variables; (2) a new priority-based search strategy and a flexible sequential search framework which integrates different search strategies; (3) a decision variable selection heuristic more suitable for solving the sequential problems. We present experimental results to demonstrate that our sequential SAT solver can achieve orders-of-magnitude speedup over Satori. We plan to release the source code of Seq-SAT.

Proceedings ArticleDOI
23 May 2005
TL;DR: This work presents a hardware-efficient design increasing throughput for the AES algorithm using a high-speed parallel pipelined architecture, and achieves a high throughput of 29.77 Gbit/s in encryption whereas the highest throughput reported in the literature is 21.54 G bit/s.
Abstract: In November 2001, the National Institute of Standards and Technology (NIST) of the USA chose the Rijndael algorithm as the suitable Advanced Encryption Standard (AES) to replace the Data Encryption Standard (DES) algorithm. Since then, many hardware implementations have been proposed in the literature. We present a hardware-efficient design increasing throughput for the AES algorithm using a high-speed parallel pipelined architecture. By using an efficient inter-round and intra-round pipeline design, our implementation achieves a high throughput of 29.77 Gbit/s in encryption, whereas the highest throughput reported in the literature is 21.54 Gbit/s.
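The reported throughput can be sanity-checked with back-of-the-envelope arithmetic: a fully pipelined AES core retires one 128-bit block per clock, so throughput equals 128 times the clock frequency. The implied clock below follows from the abstract's 29.77 Gbit/s figure; the actual clock rate of the design is not stated in this abstract.

```python
BLOCK_BITS = 128  # AES block size in bits

def pipelined_throughput(clock_hz, blocks_per_cycle=1):
    """Throughput of a fully pipelined core that retires
    `blocks_per_cycle` 128-bit blocks every clock (bits/s)."""
    return BLOCK_BITS * blocks_per_cycle * clock_hz

def implied_clock(throughput_bps):
    """Clock frequency implied by a throughput, one block per cycle."""
    return throughput_bps / BLOCK_BITS

f = implied_clock(29.77e9)
print(f"{f / 1e6:.1f} MHz")  # prints "232.6 MHz"
```

So the 29.77 Gbit/s figure is consistent with a one-block-per-cycle pipeline clocked somewhere above 230 MHz, a plausible rate for the era's FPGA/ASIC pipelines.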

Patent
18 Feb 2005
TL;DR: A method and system for hardware-based reporting of assertion information for emulation and hardware acceleration is described in this article, which includes providing a user interface to design assertions that aid in verifying an integrated circuit design.
Abstract: A method and system for hardware-based reporting of assertion information for emulation and hardware acceleration is disclosed. In one embodiment, a method of performing assertion-based verification comprises providing a user interface to design assertions that aid in verifying an integrated circuit design. Assertion instrumentation code is generated to implement the assertions in hardware as assertion instrumentation. The assertion instrumentation code is provided to an emulator that generates the assertion instrumentation.

Patent
31 Mar 2005
TL;DR: In this paper, the verification test data is generated for the physical device under test (DUT) using a constraint-based random test generation process and the output data is captured from the physical DUT in response to output data.
Abstract: Method, apparatus, and computer readable medium for functionally verifying a physical device under test (DUT) is described. In one example, verification test data is generated for the physical DUT using a constraint-based random test generation process. For example, the architecture, structure, and/or content of the verification test data may be defined in response to constraint data and an input/output data model. A first portion of the verification test data is applied to the physical DUT. Output data is captured from the physical DUT in response to application of the first portion of the verification test data. A second portion of the verification test data is selected in response to the output data. Expected output data for the physical DUT associated with the verification test data may be generated and compared with the output data captured from the DUT to functionally verify the design of the DUT.

Journal ArticleDOI
TL;DR: This paper shows that the problem of simultaneous power supply planning and noise avoidance can be formulated as a constrained maximum flow problem and present an efficient yet effective heuristic to handle the problem.
Abstract: With today's advanced integrated circuit manufacturing technology in the deep submicron (DSM) environment, we can integrate entire electronic systems on a single system on a chip. However, without careful power supply planning in layout, the design of chips will suffer from local hot spots, insufficient power supply, and signal integrity problems. Post-floorplanning or post-route methodologies for solving power delivery and signal integrity problems have been applied, but they cause a long turnaround time, which adds costly delays to time-to-market. In this paper, we study the problem of simultaneous power supply planning and noise avoidance as early as the floorplanning stage. We show that the problem of simultaneous power supply planning and noise avoidance can be formulated as a constrained maximum flow problem and present an efficient yet effective heuristic to handle the problem. Experimental results are encouraging. With a slight increase in total wirelength, we achieve almost no static IR (voltage)-drop requirement violation in meeting the current and power demand requirements imposed by the circuit blocks compared with a traditional floorplanner, and a 45.7% improvement on ΔI noise constraint violations compared with the approach that only considers power supply planning.
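The paper's constrained formulation is not given in the abstract, but the underlying primitive, plain maximum flow via Edmonds-Karp, can be sketched on a toy supply network. The node names and capacities are invented: the source stands for a power pad, the sink for an aggregated block demand, and edge capacities for wire current limits.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along BFS shortest paths.
    `cap` is a dict-of-dicts of residual capacities, mutated in place."""
    # Ensure every edge has a reverse edge in the residual graph.
    for u in list(cap):
        for v in list(cap[u]):
            cap.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow           # no augmenting path left
        # Find the bottleneck along the augmenting path and push it.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= push
            cap[v][u] += push
        flow += push

# Hypothetical power net: pad feeds two stripes feeding one block.
net = {"pad": {"s1": 3, "s2": 2}, "s1": {"blk": 2}, "s2": {"blk": 2}, "blk": {}}
print(max_flow(net, "pad", "blk"))  # 4
```

If the computed max flow falls short of the block's current demand, the floorplan needs wider stripes or a relocated pad, which is the kind of decision the paper pushes into the floorplanning stage.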

Patent
18 Aug 2005
TL;DR: In this paper, a method for transforming an integrated circuit (IC) layout includes recognizing shapes within the IC layout, identifying features for each of the shapes and extracting situations for the respective features.
Abstract: Systems, methodologies and technologies for the analysis and transformation of integrated circuit layouts using situations are disclosed. A method for transforming an integrated circuit (IC) layout includes recognizing shapes within the IC layout, identifying features for each of the shapes and extracting situations for the respective features. Extracted situations can be used to improve optical proximity correction (OPC) of the IC layout. This improved OPC includes extracting the situations, simulating the situations to determine a set of the situations identified for modification based on failing to satisfy a desired OPC tolerance level, modifying the set of situations to improve satisfaction of the desired OPC tolerance level, and reintegrating the modified set of situations into the IC layout. Extracted situations can also be used to improve aerial image simulation of the IC layout. This improved aerial image simulation includes extracting the situations, simulating a subset of the situations to determine aerial images of the subset, and tiling the subset of situations to form a larger aerial image. Extracted situations can further be used to improve density analysis of the IC layout. This improved density analysis includes extracting the situations for a window of the IC layout, removing overlap from the window based on the extracted situations, calculating a density for each of the situations, and calculating a density for the window based on the density for each of the situations.

Patent
08 Dec 2005
TL;DR: In this article, a deadlock situation can be determined based on direct and indirect dependencies, such as loops and dependencies involving a first work element and a lower level second work element.
Abstract: Method and system for detecting indeterminate dependencies in a distributed computing grid. A determination is made whether a deadlock situation exists within a workflow of the distributed computing grid and a user of the computing grid is notified of the deadlock situation, e.g., where in the workflow deadlock occurs. A deadlock situation can be determined based on direct and indirect dependencies, such as loops and dependencies involving a first work element and a lower level second work element. A deadlock situation can also be determined based on the relationships between a job and a task, which is executable by a processor in the distributed computing grid.

Proceedings ArticleDOI
08 Nov 2005
TL;DR: It is shown how it is possible to use MISRs to perform a go/no-go failure test with very little data volume and to also use a compacted continuous stream of MISR output states to aid diagnosis.
Abstract: This paper describes a simple means for diagnosing failures by observing a compacted MISR output stream. While MISRs have been used in the industry for response compression, their use has often been seen as an impediment to diagnosis of failures. This paper shows how it is possible to use MISRs to perform a go/no-go failure test with very little data volume and to also use a compacted continuous stream of MISR output states to aid diagnosis
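A MISR is essentially a multi-input LFSR: each clock it shifts under a feedback polynomial and XORs in the parallel test responses, so any failing bit perturbs the final signature. A minimal sketch follows; the 8-bit width, the 0x1D polynomial, and the response words are illustrative, not taken from the paper.

```python
def misr_step(state, data, width, taps):
    """One MISR clock: LFSR shift with feedback `taps`, then XOR in the
    parallel response word `data`."""
    fb = state >> (width - 1)            # bit shifted out this cycle
    state = (state << 1) & ((1 << width) - 1)
    if fb:
        state ^= taps                    # apply the feedback polynomial
    return state ^ data                  # fold in this cycle's responses

def signature(stream, width=8, taps=0x1D, seed=0):
    """Compact a whole response stream into one signature word."""
    state = seed
    for word in stream:
        state = misr_step(state, word, width, taps)
    return state

good = [0x3A, 0x7F, 0x01, 0xC4]
bad  = [0x3A, 0x7F, 0x11, 0xC4]          # single-bit failure in cycle 3
print(hex(signature(good)), hex(signature(bad)))  # 0xea 0xca: they differ
```

The go/no-go test in the paper amounts to comparing one such final signature against a golden value; streaming out intermediate MISR states, as the paper proposes, additionally localizes *when* the mismatch first appeared.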

Patent
30 Jun 2005
TL;DR: In this paper, a real-time method for verifying and monitoring the calibrated model on a production or monitor wafer is presented, where the critical dimensions and images of these test and verification structures are monitored across wafer and across exposure field.
Abstract: This invention relates to a method for real-time monitoring and verification of optical proximity correction (OPC) models and methods in production. Before OPC is performed on the integrated circuit layout, a model describing the optical, physical, and chemical processes involved in lithography should be obtained accurately and precisely. In general, the model is calibrated using measurements obtained by running wafers through the same lithography, patterning, and etch processes. In this invention, a novel real-time method for verifying and monitoring the calibrated model on a production or monitor wafer is presented: optical proximity corrected (OPC-ed) test and verification structures are placed on scribe lines or cut lines of the production or monitor wafer, and on a pre-determined schedule, the critical dimensions and images of these test and verification structures are monitored across wafer and across exposure field.

Patent
04 Jun 2005
TL;DR: Local preferred direction (LPD) wiring models as discussed by the authors allow at least one wiring layer (200) to have a set of regions (205, 210, 215) that each has a different preferred direction than the particular wiring layer.
Abstract: Some embodiments of the invention provide a Local Preferred Direction (LPD) wiring model for use with one or more EDA tools. An LPD wiring model allows at least one wiring layer (200) to have a set of regions (205, 210, 215) that each has a different preferred direction (-45°, 0°, 90°) than the particular wiring layer. In addition, each region (205, 210, 215) has a local preferred direction (-45°, 0°, 90°) that differs from the local preferred direction of at least one other region in the set. Furthermore, at least two regions have two different polygonal shapes and no region in the set encompasses another region in the set. Some embodiments also provide a Graphical User Interface (GUI) that facilitates a visual presentation of an LPD design layout and provides tools to create and manipulate LPD regions in a design layout.

Journal ArticleDOI
TL;DR: A routing-driven methodology for scan chain ordering with minimum wirelength objective is presented and substantial wirelength reductions for the routing-based flow versus the traditional placement- based flow are shown.
Abstract: Scan chain insertion can have a large impact on routability, wirelength, and timing of the design. We present a routing-driven methodology for scan chain ordering with minimum wirelength objective. A routing-based approach to scan chain ordering, while potentially more accurate, can result in TSP (Traveling Salesman Problem) instances which are asymmetric and highly nonmetric; this may require a careful choice of solvers. We evaluate our new methodology on recent industry place-and-route blocks with 1200 to 5000 scan cells. We show substantial wirelength reductions for the routing-based flow versus the traditional placement-based flow. In a number of our test cases, over 86% of scan routing overhead is saved. Even though our experiments are, so far, timing oblivious, the routing-based flow also improves evaluated timing, and practical timing-driven extensions appear feasible.
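The ordering step reduces to a tour over a cost matrix that may be asymmetric (routed distance A-to-B need not equal B-to-A). A greedy nearest-neighbor heuristic, a simple stand-in for whatever TSP solver the flow actually uses, can be sketched as follows; the distance matrix is hypothetical.

```python
def order_scan_chain(cost, start=0):
    """Greedy nearest-neighbor tour over an (asymmetric) cost matrix:
    from the current cell, always hook up the cheapest unvisited cell.
    Returns the visiting order and total stitching cost."""
    n = len(cost)
    order, seen = [start], {start}
    total = 0
    while len(order) < n:
        u = order[-1]
        v = min((c for c in range(n) if c not in seen),
                key=lambda c: cost[u][c])
        total += cost[u][v]
        order.append(v)
        seen.add(v)
    return order, total

# Hypothetical routed distances between 4 scan cells; note the asymmetry
# (cost[0][1] = 2 but cost[1][0] = 3).
cost = [
    [0, 2, 9, 5],
    [3, 0, 4, 8],
    [7, 6, 0, 1],
    [4, 9, 2, 0],
]
print(order_scan_chain(cost))  # ([0, 1, 2, 3], 7)
```

Nearest-neighbor is far from optimal in general; the paper's point is precisely that nonmetric, asymmetric instances call for careful solver selection rather than the simplest heuristic.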

Patent
20 Oct 2005
TL;DR: In this article, a method for compiling a model for use in a simulation, the method comprising receiving a description of the model and automatically converting the description into an implementation that is customized for a selected analysis during simulation.
Abstract: A method (200) is provided for compiling a model for use in a simulation, the method comprising receiving a description of the model (202); and automatically converting the description into an implementation of the model (204) that is customized for a selected analysis during simulation.