
Showing papers by "Alberto Sangiovanni-Vincentelli published in 1997"


Book•
01 Oct 1997
TL;DR: This book gives a complete overview of the POLIS system, including its formal and algorithmic aspects, and will be of interest to embedded system designers (automotive electronics, consumer electronics, and telecommunications), micro-controller designers, CAD developers, and students.
Abstract: Embedded systems are informally defined as a collection of programmable parts surrounded by ASICs and other standard components, that interact continuously with an environment through sensors and actuators. The programmable parts include micro-controllers and Digital Signal Processors (DSPs). Embedded systems are often used in life-critical situations, where reliability and safety are more important criteria than performance. Today, embedded systems are designed with an ad hoc approach that is heavily based on earlier experience with similar products and on manual design. Use of higher-level languages such as C helps structure the design somewhat, but with increasing complexity it is not sufficient. Formal verification and automatic synthesis of implementations are the surest ways to guarantee safety. Thus, the POLIS system, a co-design environment for embedded systems, is based on a formal model of computation. POLIS was initiated in 1988 as a research project at the University of California at Berkeley and, over the years, grew into a full design methodology with a software system supporting it. Hardware-Software Co-Design of Embedded Systems: The POLIS Approach is intended to give a complete overview of the POLIS system, including its formal and algorithmic aspects. It will be of interest to embedded system designers (automotive electronics, consumer electronics, and telecommunications), micro-controller designers, CAD developers, and students.

667 citations


Journal Article•DOI•
01 Mar 1997
TL;DR: This paper addresses the design of reactive real-time embedded systems by reviewing the variety of approaches to solving the specification, validation, and synthesis problems for such embedded systems.
Abstract: This paper addresses the design of reactive real-time embedded systems. Such systems are often heterogeneous in implementation technologies and design styles, for example by combining hardware application-specific integrated circuits (ASICs) with embedded software. The concurrent design process for such embedded systems involves solving the specification, validation, and synthesis problems. We review the variety of approaches to these problems that have been taken.

537 citations



Proceedings Article•DOI•
13 Jun 1997
TL;DR: A new system design methodology is proposed that separates communication from behavior, and the potential for this methodology to improve verification, modeling and synthesis is explored.
Abstract: A new system design methodology is proposed that separates communication from behavior. To demonstrate the methodology we applied it to a simple ATM design. Since verification is clearly a major stumbling block for large system design, we focused on the verification aspects of our methodology. In particular, a simulator was developed that is based on the communication paradigm typical of our methodology. The simulator gives substantial performance improvements without sacrificing user access to detail. Finally, the potential for this methodology to improve verification, modeling and synthesis is explored.

310 citations


Book•
30 Nov 1997
TL;DR: This book presents noise models and noise simulation techniques for nonlinear electronic circuits, covering noise in free-running oscillators and behavioral modeling and simulation of phase-locked loops.
Abstract: 1. Introduction. 2. Mathematical Background. 3. Noise Models. 4. Overview of Noise Simulation for Nonlinear Electronic Circuits. 5. Time-Domain Non-Monte Carlo Noise Simulation. 6. Noise in Free Running Oscillators. 7. Behavioral Modeling and Simulation of Phase-Locked Loops. 8. Conclusions and Future Work. References. Index.

132 citations


Proceedings Article•DOI•
13 Nov 1997
TL;DR: This work motivates the need for CAD algorithms for PTL circuit design and proposes decomposed BDDs as a suitable logic-level representation for synthesis of PTL networks, and presents a set of heuristic algorithms to synthesize PTL circuits optimized for area, delay and power.
Abstract: Pass transistor logic (PTL) can be a promising alternative to static CMOS for deep sub-micron design. In this work, we motivate the need for CAD algorithms for PTL circuit design and propose decomposed BDDs as a suitable logic-level representation for synthesis of PTL networks. Decomposed BDDs can represent large, arbitrary functions as a multi-stage circuit and can exploit the natural, efficient mapping of a BDD to PTL. A comprehensive synthesis flow based on decomposed BDDs is outlined for PTL design. We show that the proposed approach allows us to make logic-level optimizations similar to the traditional multi-level network based synthesis flow for static CMOS, and also makes possible optimizations with a direct impact on area, delay and power of the final circuit implementation which do not have any equivalent in the traditional approach. We also present a set of heuristic algorithms to synthesize PTL circuits optimized for area, delay and power, which are key to the proposed synthesis flow. Experimental results on ISCAS benchmark circuits show that our technique yields PTL circuits with substantial improvements over static CMOS designs. In addition, to the best of our knowledge this is the first time PTL circuits have been synthesized for the entire ISCAS benchmark set.
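The core idea, that each BDD node maps naturally onto a 2:1 pass-transistor multiplexer, can be sketched in a few lines. This is a toy illustration, not the paper's synthesis flow; the example function and variable order are invented:

```python
# Toy sketch of the BDD-to-PTL correspondence (not the paper's algorithm):
# build a decision diagram by Shannon expansion, then read every internal
# node as a 2:1 pass-transistor mux that passes its low or high child
# under the node's control variable.

def build(f, order, env=None):
    env = env or {}
    if not order:
        return f(env)                       # constant leaf (0 or 1)
    v, rest = order[0], order[1:]
    lo = build(f, rest, {**env, v: 0})      # Shannon cofactor f|v=0
    hi = build(f, rest, {**env, v: 1})      # Shannon cofactor f|v=1
    return lo if lo == hi else (v, lo, hi)  # reduction: drop redundant test

def mux_eval(node, inputs):
    """Evaluate the PTL network: each node is a mux steered by inputs[v]."""
    while isinstance(node, tuple):
        v, lo, hi = node
        node = hi if inputs[v] else lo
    return node

# Invented example: f = a*b + c under the order a, b, c.
f = lambda e: (e['a'] and e['b']) or e['c']
dd = build(f, ['a', 'b', 'c'])
```

Evaluating the mux network for every input combination reproduces the original function, which is exactly the "natural, efficient mapping" the abstract refers to.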

116 citations


Proceedings Article•DOI•
13 Nov 1997
TL;DR: This work proposes the use of partitioned ROBDDs to reduce the memory explosion problem associated with symbolic state space exploration techniques and shows the effectiveness of this approach on a set of ISCAS89 benchmark circuits.
Abstract: In this paper, we address the problem of finite state machine (FSM) traversal, a key step in most sequential verification and synthesis algorithms. We propose the use of partitioned ROBDDs to reduce the memory explosion problem associated with symbolic state space exploration techniques. In our technique, the reachable state set is represented as a partitioned ROBDD. Different partitions of the Boolean space are allowed to have different variable orderings, and only one partition needs to be in memory at any given time. We show the effectiveness of our approach on a set of ISCAS89 benchmark circuits. Our techniques result in a significant reduction in total memory utilization. For a given memory limit, the partitioned-ROBDD-based method can complete traversal for many circuits for which monolithic ROBDDs fail. For circuits where neither partitioned ROBDDs nor monolithic ROBDDs can complete traversal, partitioned ROBDDs can reach a significantly larger set of states.
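The partitioning idea can be illustrated with explicit sets standing in for the per-partition ROBDDs. This is a toy sketch; the window function and the mod-8 counter are invented for illustration:

```python
# Toy sketch of partitioned reachability: the reached-state set is split by
# a window function w(s), and each partition is held and updated separately,
# mirroring "only one partition needs to be in memory at any given time".
# Plain Python sets stand in for the per-partition ROBDDs.

def reach_partitioned(init, step, window, n_parts):
    parts = [set() for _ in range(n_parts)]   # one "ROBDD" per partition
    frontier = set(init)
    while frontier:
        new_frontier = set()
        for p in range(n_parts):              # process one partition at a time
            fresh = {s for s in frontier
                     if window(s) == p and s not in parts[p]}
            parts[p] |= fresh
            for s in fresh:
                new_frontier |= step(s)
        frontier = new_frontier
    return parts

# Invented example FSM: a mod-8 counter; partition states by low-order bit.
parts = reach_partitioned({0}, lambda s: {(s + 1) % 8}, lambda s: s & 1, 2)
```

The union of the partitions is the full reachable set; in the real technique each partition is an ROBDD with its own variable order, which is where the memory savings come from.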

99 citations


Book•
17 Apr 1997
TL;DR: The authors introduce techniques for converting a symbolic description of an FSM into a hardware implementation and extend them to the case of the implicit minimization of GPIs, where the encodability and augmentation steps are also performed implicitly.
Abstract: Synthesis of Finite State Machines: Logic Optimization is the second in a set of two monographs devoted to the synthesis of Finite State Machines (FSMs). The first volume, Synthesis of Finite State Machines: Functional Optimization, addresses functional optimization, whereas this one addresses logic optimization. The result of functional optimization is a symbolic description of an FSM which represents a sequential function chosen from a collection of permissible candidates. Logic optimization is the body of techniques for converting a symbolic description of an FSM into a hardware implementation. The mapping of a given symbolic representation into a two-valued logic implementation is called state encoding (or state assignment), and it heavily impacts the area, speed, testability, and power consumption of the realized circuit. The first part of the book introduces the relevant background, presents results previously scattered in the literature on the computational complexity of encoding problems, and surveys in depth old and new approaches to encoding in logic synthesis. The second part of the book presents two main results about symbolic minimization: a new procedure to find minimal two-level symbolic covers, under face, dominance and disjunctive constraints, and a unified frame to check encodability of encoding constraints and find codes of minimum length that satisfy them. The third part of the book introduces generalized prime implicants (GPIs), which are the counterpart, in symbolic minimization of two-level logic, to prime implicants in two-valued two-level minimization. GPIs enable the design of an exact procedure for two-level symbolic minimization, based on a covering step which is complicated by the need to guarantee encodability of the final cover. A new efficient algorithm to verify encodability of a selected cover is presented. If a cover is not encodable, it is shown how to augment it minimally until an encodable superset of GPIs is determined.
To handle encodability the authors have extended the frame to satisfy encoding constraints presented in the second part. The covering problems generated in the minimization of GPIs tend to be very large. Recently large covering problems have been attacked successfully by representing the covering table with binary decision diagrams (BDDs). In the fourth part of the book the authors introduce such techniques and extend them to the case of the implicit minimization of GPIs, where the encodability and augmentation steps are also performed implicitly. Synthesis of Finite State Machines: Logic Optimization will be of interest to researchers and professional engineers who work in the area of computer-aided design of integrated circuits.

84 citations


Proceedings Article•DOI•
10 Dec 1997
TL;DR: A novel approach to the control of an automotive engine in the cut-off region is presented, which is formulated as a hybrid optimization problem, whose solution is obtained by relaxing it to the continuous domain and mapping its solution back into the hybrid domain.
Abstract: A novel approach to the control of an automotive engine in the cut-off region is presented. First, a hybrid model which describes the torque generation mechanism and the power-train dynamics is developed. Then, the cut-off control problem is formulated as a hybrid optimization problem, whose solution is obtained by relaxing it to the continuous domain and mapping its solution back into the hybrid domain. A formal analysis as well as simulation results demonstrate the properties and the quality of the control law.
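The solution strategy, relax to the continuous domain and then map the optimum back to the discrete one, can be caricatured in a few lines. This is a generic illustration with an invented quadratic cost and control set, not the paper's engine or power-train model:

```python
# Generic relax-and-map sketch (not the paper's model): minimize over a
# continuous relaxation, then project the continuous optimum back onto the
# admissible discrete set.

def argmin_relaxed(cost, lo, hi, steps=100_000):
    """A fine grid search stands in for a continuous optimizer."""
    best = min(range(steps + 1),
               key=lambda k: cost(lo + (hi - lo) * k / steps))
    return lo + (hi - lo) * best / steps

def map_back(x, levels):
    """Project onto the nearest admissible discrete control value."""
    return min(levels, key=lambda u: abs(u - x))

cost = lambda x: (x - 2.3) ** 2       # invented continuous cost
levels = [0, 1, 2, 3]                 # invented discrete control set
x_star = argmin_relaxed(cost, 0.0, 3.0)
u_star = map_back(x_star, levels)
```

The paper's contribution lies in choosing a relaxation and mapping for which the resulting hybrid control law provably behaves well, which this sketch does not attempt.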

75 citations


01 Jan 1997
TL;DR: A denotational framework (a “meta model”) within which certain properties of models of computation can be understood and compared is given, which describes concurrent processes in general terms as sets of possible behaviors.
A Denotational Framework for Comparing Models of Computation. Edward A. Lee and Alberto Sangiovanni-Vincentelli, EECS, University of California, Berkeley, CA, USA 94720.
Abstract: We give a denotational framework (a “meta model”) within which certain properties of models of computation can be understood and compared. It describes concurrent processes in general terms as sets of possible behaviors. A process is determinate if, given the constraints imposed by the inputs, there is exactly one or exactly zero behaviors. Compositions of processes are processes with behaviors in the intersection of the behaviors of the component processes. The interaction between processes is through signals, which are collections of events. Each event is a value-tag pair, where the tags can come from a partially ordered or totally ordered set. Timed models are those in which the set of tags is totally ordered. Synchronous events share the same tag, and synchronous signals contain events with the same set of tags. Synchronous processes have only synchronous signals as behaviors. Strict causality (in timed tag systems) and continuity (in untimed tag systems) ensure determinacy under certain technical conditions. The framework is used to compare certain essential features of various models of computation, including Kahn process networks, dataflow, sequential processes, concurrent sequential processes with rendezvous, Petri nets, and discrete-event systems.
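The framework's basic objects are small enough to render directly. Below is a toy encoding of the definitions with invented example signals, not the full formalism:

```python
# Toy encoding of the tagged-signal definitions: an event is a (tag, value)
# pair, a signal is a set of events, a behavior is a tuple of signals, and
# a process is a set of behaviors. Composition intersects behavior sets.

def compose(p, q):
    return p & q                  # behaviors permitted by both processes

def is_determinate(p):
    return len(p) <= 1            # exactly one or exactly zero behaviors

# Integer tags -> a totally ordered tag set, i.e. a timed model.
sig_x = frozenset({(0, 1), (1, 0)})   # invented example signals
sig_y = frozenset({(0, 1), (1, 1)})
P = {(sig_x,), (sig_y,)}              # P allows either signal
Q = {(sig_y,)}                        # Q pins the signal down
```

Composing the nondeterminate P with the constraining Q yields a single behavior, illustrating how intersection models interaction and how determinacy can emerge from composition.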

56 citations


DOI•
24 Mar 1997
TL;DR: A methodology to specify, simulate, and partition tasks that can be implemented on programmable micro-controller peripherals such as timing processing units (TPUs) to efficiently simulate and evaluate a particular implementation choice; and to automate downstream synthesis for software, hardware, as well as peripheral programming routines.
Abstract: Luciano Lavagno, Claudio Passerone, and Claudio Sansoe, Politecnico di Torino. Mapping a behavior on an embedded system involves hardware-software partitioning and assignment of software and hardware tasks to different components. In particular, software tasks in embedded controllers are mostly assigned to a micro-controller. However, some micro-controller peripherals are implemented with partly programmable components that can be regarded as very simple co-processors with limited instruction sets and capabilities. Embedded system designers are used to mapping some simple software tasks onto these simple co-processors, obtaining overall performances that can be orders of magnitude superior to the ones obtained mapping all software tasks to the micro-controller itself. In this paper, we propose a methodology to specify, simulate, and partition tasks that can be implemented on programmable micro-controller peripherals such as Timing Processing Units (TPUs). Following our general philosophy, we let the designer propose a partition, and we provide an environment to: (1) efficiently simulate and evaluate a particular implementation choice, and (2) automate downstream synthesis for software, hardware, as well as peripheral programming routines.


Proceedings Article•DOI•
13 Jun 1997
TL;DR: A technique to simulate hardware and software that is almost cycle-accurate, and uses the same model for both types of components, to decide the implementation of a real-life example, a car dashboard controller.
Abstract: Hardware/software co-simulation is generally performed with separate simulation models. This makes trade-off evaluation difficult, because the models must be re-compiled whenever some architectural choice is changed. We propose a technique to simulate hardware and software that is almost cycle-accurate, and uses the same model for both types of components. Only the timing information used for synchronization needs to be changed to modify the processor choice, the implementation choice, or the scheduling policy. We show how this technique can be used to decide the implementation of a real-life example, a car dashboard controller.


Proceedings Article•DOI•
04 Jan 1997
TL;DR: This paper surveys some state-of-the-art techniques used to perform automatic verification of combinational circuits and classifies the current approaches into two categories functional and structural.
Abstract: With the increase in the complexity of present day systems, proving the correctness of a design has become a major concern. Simulation based methodologies are generally inadequate to validate the correctness of a design with a reasonable confidence. More and more designers are moving towards formal methods to guarantee the correctness of their designs. In this paper we survey some state-of-the-art techniques used to perform automatic verification of combinational circuits. We classify the current approaches for combinational verification into two categories: functional and structural. The functional methods consist of representing a circuit as a canonical decision diagram. Two circuits are equivalent if and only if their decision diagrams are equal. The structural methods consist of identifying related nodes in the circuit and using them to simplify the problem of verification. We briefly describe some of the methods in both the categories and discuss their merits and drawbacks.
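The functional approach can be shown in miniature: compute a canonical form for each circuit and compare, with equivalence holding exactly when the forms are equal. Here a truth table stands in for the canonical decision diagram, and the example circuits are invented:

```python
# Functional equivalence checking in miniature: compute a canonical form
# for each circuit and compare. A truth table plays the role of the
# canonical decision diagram (BDD) described in the abstract.

from itertools import product

def canonical_form(f, n_inputs):
    return tuple(bool(f(*bits)) for bits in product((0, 1), repeat=n_inputs))

def equivalent(f, g, n_inputs):
    return canonical_form(f, n_inputs) == canonical_form(g, n_inputs)

circuit_a = lambda a, b, c: (a and b) or (a and c)   # invented examples
circuit_b = lambda a, b, c: a and (b or c)           # same function, refactored
circuit_c = lambda a, b, c: a or (b and c)           # a different function
```

Real BDD-based checkers get the same effect without enumerating all minterms, since a reduced ordered BDD is canonical for a fixed variable order.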

Proceedings Article•DOI•
12 Oct 1997
TL;DR: The authors survey some state-of-the-art techniques used to perform automatic verification of combinational circuits and classify the current approaches for combinational verification into two categories: functional and structural.
Abstract: With the increase in the complexity of present day systems, proving the correctness of a design has become a major concern. Simulation based methodologies are generally inadequate to validate the correctness of a design with a reasonable confidence. More and more designers are moving towards formal methods to guarantee the correctness of their designs. The authors survey some state-of-the-art techniques used to perform automatic verification of combinational circuits. They classify the current approaches for combinational verification into two categories: functional and structural. The functional methods consist of representing a circuit as a canonical decision diagram. Two circuits are equivalent if and only if their decision diagrams are equal. The structural methods consist of identifying related nodes in the circuit and using them to simplify the problem of verification. They briefly describe some of the methods in both the categories and discuss their merits and drawbacks.

Book Chapter•DOI•
01 Jan 1997
TL;DR: The design of embedded controllers, i.e., of embedded systems that perform control actions on physical systems, is addressed, and hybrid and heterogeneous systems are used as mathematical models to define a design methodology that could shorten considerably the time from the conception of the system to its implementation.
Abstract: Reactive real-time embedded systems are pervasive in the electronics system industry. The design of embedded controllers, i.e., of embedded systems that perform control actions on physical systems, is addressed. Hybrid and heterogeneous systems are used as mathematical models to define a design methodology that could shorten considerably the time from the conception of the system to its implementation.

Proceedings Article•DOI•
13 Jun 1997
TL;DR: A static priority scheme is proposed here that can be formally validated; the method applies to both preemptive and non-preemptive schedules and is conservative in the sense that a valid schedule may be declared invalid, but no invalid schedule may be declared valid.
Abstract: Task scheduling for reactive real-time systems is a difficult problem due to the tight constraints that the schedule must satisfy. A static priority scheme is proposed here that can be formally validated. The method is applicable both for preemptive and non-preemptive schedules and is conservative in the sense that a valid schedule may be declared invalid, but no invalid schedule may be declared valid. Experimental results show that the run time of our validation method is negligible with respect to other steps in the system design process, and compares favorably with other methods of schedule validation.
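As a concrete example of validating a static-priority schedule, the classic response-time test for the preemptive case is sketched below. This is not this paper's method, and the task parameters are invented; it illustrates the same safety property, a test that never accepts an invalid schedule:

```python
# Classic fixed-priority response-time analysis, shown as an example of a
# schedule-validation test that never declares an invalid schedule valid.
# Not the paper's algorithm; task sets below are invented.
import math

def schedulable(tasks):
    """tasks: (wcet, period) pairs, highest priority first, deadline = period."""
    for i, (c, t) in enumerate(tasks):
        r, prev = c, 0.0
        while r != prev:               # fixed-point iteration on response time
            if r > t:
                return False           # deadline miss: reject the schedule
            prev = r
            r = c + sum(math.ceil(prev / tj) * cj for cj, tj in tasks[:i])
    return True

ok_set = [(1, 4), (2, 6), (3, 13)]     # invented schedulable task set
bad_set = [(3, 4), (2, 6)]             # invented overloaded task set
```

The fixed point r is the worst-case response time including preemption by all higher-priority tasks; a task set passes only if every task's fixed point meets its deadline.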

Journal Article•DOI•
TL;DR: This paper presents a symbolic minimization procedure to obtain optimal two-level implementations of finite-state machines and shows that in some cases, this procedure improves on the best results of state-of-the-art tools.
Abstract: In this paper, we present a symbolic minimization procedure to obtain optimal two-level implementations of finite-state machines. Encoding based on symbolic minimization consists of optimizing the symbolic representation, and then transforming the optimized symbolic description into a compatible two-valued representation by satisfying encoding constraints (bitwise logic relations) imposed on the binary codes that replace the symbols. Our symbolic minimization procedure captures the sharing of product terms due to ORing effects in the output part of a two-level implementation of the symbolic cover. Face, dominance, and disjunctive constraints are generated. Product terms are accepted in a symbolic minimized cover only when they induce compatible encoding constraints. At the end, a set of codes that satisfy all constraints is computed. The quality of this synthesis procedure is shown by the fact that the cardinality of the cover obtained by symbolic minimization and of the cover obtained by replacing the codes in the initial cover and then minimizing it with ESPRESSO are very close. Experiments show that in some cases, our procedure improves on the best results of state-of-the-art tools.
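To make the flavor of these encoding constraints concrete: a face (input) constraint requires the codes of a group of symbols to span a cube of the Boolean space that contains no other symbol's code. A toy check, with an invented encoding and groups rather than anything from the paper:

```python
# Toy check of a face (input) constraint: the smallest cube spanned by the
# group's codes must contain the codes of the group's symbols and no others.
# The encoding and symbol groups are invented examples.

def spanned_cube(codes):
    return [set(bits) for bits in zip(*codes)]   # per-position value sets

def in_cube(code, cube):
    return all(b in s for b, s in zip(code, cube))

def satisfies_face(encoding, group):
    cube = spanned_cube([encoding[s] for s in group])
    return all(in_cube(encoding[s], cube) == (s in group) for s in encoding)

encoding = {'A': (0, 0), 'B': (0, 1), 'C': (1, 1)}
```

With this encoding, the group {A, B} sits on the cube 0- alone, while any cube containing both A and C necessarily contains B as well, so that face constraint is violated.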

Proceedings Article•DOI•
28 Jan 1997
TL;DR: This paper describes how to help the designer in this task, by providing a flexible co-simulation environment in which these alternatives can be interactively evaluated.
Abstract: Current design methodologies for embedded systems often force the designer to evaluate early in the design process architectural choices that will heavily impact the cost and performance of the final product. Examples of these choices are hardware/software partitioning, choice of the micro-controller, and choice of a run-time scheduling method. This paper describes how to help the designer in this task, by providing a flexible co-simulation environment in which these alternatives can be interactively evaluated.


Proceedings Article•DOI•
13 Nov 1997
TL;DR: A new technique to exactly solve a discrete optimization problem, based on the paradigm of "negative" thinking, which outperforms both espresso and the enhancement of espresso using Coudert's limit lower bound.
Abstract: We introduce a new technique to solve a discrete optimization problem exactly, based on the paradigm of "negative" thinking. The motivation is that when searching the space of solutions, often a good solution is reached quickly and then improved only a few times before the optimum is found: hence most of the solution space is explored to certify optimality, but it does not yield any improvement of the cost function. So it is quite natural for an algorithm to be "skeptical" about the chance to improve the current best solution. For illustration we have applied our approach to the unate covering problem. We designed a procedure, raiser, implementing a negative thinking search, which is incorporated into a common branch-and-bound procedure. Raiser is invoked at a node of the search tree which is deep enough to justify negative thinking. Raiser tries to detect a hard core of the matrix corresponding to the node by augmenting an independent set of rows in order to increase incrementally the cost of the minimum solutions covering the matrix. Eventually either raiser prunes the subtree rooted at the node (having found a lower bound equal to or greater than the current best solution) or returns a new solution that becomes the current best one. Experiments show that our program, aura, outperforms both espresso and our enhancement of espresso using Coudert's limit lower bound. It is always faster and in the most difficult examples either has a running time better by up to two orders of magnitude, or the other programs fail to finish due to timeout or spaceout. The package scherzo is faster on some examples and loses on others, due to a less powerful pruning strategy of the search space, partially mitigated by a more effective computation of the maximal independent set.
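The baseline that raiser strengthens, branch and bound over a unate covering table with the independent-set lower bound, looks like this in miniature. This is my sketch with an invented three-row table, not the aura implementation:

```python
# Minimal unate-covering branch and bound with the standard lower bound:
# a greedy independent set of pairwise-disjoint rows (each such row needs
# its own column, so the set's size bounds any cover). This is the baseline
# the paper's "negative thinking" raiser procedure is designed to improve on.

def min_cover(rows, n_cols):
    best = [list(range(n_cols))]              # trivial cover: all columns

    def lower_bound(rows):
        picked, used = 0, set()
        for r in sorted(rows, key=len):       # greedy independent rows
            if not (r & used):
                picked, used = picked + 1, used | r
        return picked

    def bnb(rows, chosen):
        if not rows:
            if len(chosen) < len(best[0]):
                best[0] = list(chosen)
            return
        if len(chosen) + lower_bound(rows) >= len(best[0]):
            return                            # prune: cannot beat the best
        for col in sorted(min(rows, key=len)):  # branch on a shortest row
            bnb([r for r in rows if col not in r], chosen + [col])

    bnb([set(r) for r in rows], [])
    return best[0]

table = [{0, 1}, {1, 2}, {2, 3}]              # invented covering table
```

Every cover must include some column of every row, so branching on the columns of one shortest row is exhaustive, and the independent-set bound prunes subtrees that cannot beat the incumbent.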

Proceedings Article•DOI•
13 Nov 1997
TL;DR: An algorithm for area optimization of sequential circuits through redundancy removal that finds compatible redundancies by implying values over nets in the circuit and simplifies the circuit by propagating them.
Abstract: We propose an algorithm for area optimization of sequential circuits through redundancy removal. The algorithm finds compatible redundancies by implying values over nets in the circuit. The potentially exponential cost of state space traversal is avoided and the redundancies found can all be removed at once. The optimized circuit is a safe delayed replacement of the original circuit. The algorithm computes a set of compatible sequential redundancies and simplifies the circuit by propagating them through the circuit. We demonstrate the efficacy of the algorithm even for large circuits through experimental results on benchmark circuits.

Journal Article•DOI•
TL;DR: A fully implicit algorithm for state minimization of pseudo nondeterministic FSMs (PNDFSMs) is described, and an algorithmic framework to explore behaviors contained in a general NDFSM is given.
Abstract: This paper addresses state minimization problems of different classes of nondeterministic finite-state machines (NDFSMs). We describe a fully implicit algorithm for state minimization of pseudo nondeterministic FSMs (PNDFSMs). The results of our implementation are reported and shown to be superior to a previous explicit formulation. We could solve exactly all but one problem of a published benchmark, while an explicit program could complete approximately one half of the examples, and in those cases with longer run times. Then we present a theoretical solution to the problem of exact state minimization of general NDFSMs, based on the proposal of generalized compatibles. This gives an algorithmic framework to explore behaviors contained in a general NDFSM.

DOI•
24 Mar 1997
TL;DR: RTOSs generated by this approach offer an ease of use comparable to commercial RTOSs, and yet, since they are generated for a specific application, they can be optimized based on the same information used to optimize hand-written code.
Abstract: Embedded systems are typically implemented as a set of communicating components, some of which are implemented in hardware and some of which are implemented in software. Usually many software components share a processor. A real-time operating system (RTOS) is used to enable sharing and provide a communication mechanism between components. Commercial RTOSs are available for many popular micro-controllers. Using them provides significant reduction in design time and often leads to better structured and more maintainable systems. However, since they have to be quite general, they are not efficient enough for many applications, either in memory usage or in run times. Thus, it is often the case that RTOSs are hand coded by an expert for a particular application. This approach is obviously slow, expensive and error-prone. In this paper we propose an alternative where an RTOS is automatically generated based on a high-level description of the system. RTOSs created in our approach offer an ease of use comparable to commercial RTOSs, and yet, since they are generated for a specific example, they can be optimized based on the same information used to optimize hand-written code. We have implemented our approach within POLIS, a system for HW/SW co-design of embedded systems. To evaluate the POLIS-generated RTOS we have developed a prototyping environment which we use to compare POLIS against a commercial operating system.

01 Jan 1997
TL;DR: This thesis introduces techniques to predict signal interaction followed by layout synthesis algorithms to maintain signal integrity, and finds that significant switching isolation can be extracted with efficient sensitivity analysis algorithms in general sequential machine networks.
Abstract: Signals in digital systems represent Boolean valued variables. Historically their integrity has been preserved through the use of level-restoring logic. This has given designers a key tool, the logic abstraction, which has allowed them to focus on signal behavior at a more abstract logical rather than electrical level. In turn, this has enabled large scales of integration, supported by ever-increasing application of CAD techniques. Designers now commonly work with logic and finite state machines, simulating complete systems using this abstraction. As we shrink semiconductor processes and move to faster circuits, however, analog noise is beginning to erode the logic abstraction. No longer do circuit techniques alone provide the complete abstraction. The rising number of analog effects in deep submicron design makes it increasingly difficult to maintain digital signal integrity. One such effect, the signal coupling problem, is now a serious design concern. Electrical signal coupling, a parasitic effect of the physical implementation, can create a logic error in the abstract machine, causing anything from an incorrect result to a system crash. Yet, signals which couple may not affect system behavior at all because of timing or function in the digital domain. Our approach to this problem relies on a novel concept, called digital sensitivity, which analyzes signal interaction at a functional level to make sure it is observable. We also characterize the analog nature of signal coupling. But we abstract our analysis by incorporating digital sensitivity. This has powerful benefits, dramatically relieving constraints on CAD algorithms and making our synthesized circuits robustly tolerant to coupling noise. In this thesis, we study the complex manufacturing process tradeoffs which place design performance (the RC delay problem) and safety (signal integrity) in opposition.
We introduce techniques to predict signal interaction followed by layout synthesis algorithms to maintain signal integrity. In general sequential machine networks we find that significant switching isolation can be extracted with efficient sensitivity analysis algorithms. We find enough isolation using these analyses that we are able to demonstrate a layout methodology capable of synthesizing circuits free from coupling effects, maintaining signal integrity and thus upholding the logic abstraction.


Book Chapter•DOI•
01 Jan 1997
TL;DR: It is shown that the BDD minimization problem can be formulated as a binate covering problem and solved using implicit enumeration techniques similar to the ones used in the reduction of incompletely specified finite state machines.
Abstract: This paper addresses the problem of binary decision diagram (BDD) minimization in the presence of don’t care sets. Specifically, given an incompletely specified function g and a fixed ordering of the variables, we propose an exact algorithm for selecting f such that f is a cover for g and the binary decision diagram for f is of minimum size. We proved that this problem is NP-complete. Here we show that the BDD minimization problem can be formulated as a binate covering problem and solved using implicit enumeration techniques similar to the ones used in the reduction of incompletely specified finite state machines.
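The problem statement can be made concrete with a brute-force toy, my illustration rather than the paper's implicit binate-covering algorithm: enumerate every completion of the don't cares and keep the cover whose reduced diagram is smallest.

```python
# Toy version of the problem (brute force, not the paper's implicit method):
# g is given by an on-set and a don't-care set over 3 ordered variables;
# pick the cover f whose reduced decision diagram is smallest.
from itertools import product

def build(tt, lo=0, hi=None):
    """Reduced decision tree over the truth-table slice tt[lo:hi]."""
    hi = len(tt) if hi is None else hi
    if hi - lo == 1:
        return tt[lo]
    mid = (lo + hi) // 2
    l, h = build(tt, lo, mid), build(tt, mid, hi)
    return l if l == h else (l, h)            # merge identical cofactors

def size(node, seen=None):
    seen = set() if seen is None else seen
    if isinstance(node, tuple) and node not in seen:
        seen.add(node)
        size(node[0], seen)
        size(node[1], seen)
    return len(seen)                          # count distinct internal nodes

on, dc = {3, 7}, {1, 5}                       # invented on-set / don't cares
best = None
for fill in product((0, 1), repeat=len(dc)):  # all completions of the DCs
    tt = [0] * 8
    for m in on:
        tt[m] = 1
    for m, v in zip(sorted(dc), fill):
        tt[m] = v
    s = size(build(tt))
    if best is None or s < best[0]:
        best = (s, tuple(tt))
```

Here the care set forces g = b AND c (minterms 3 and 7), but setting both don't cares to 1 collapses the cover to f = c, a single-node diagram, which is exactly the kind of win the minimization seeks without the exponential enumeration.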

01 Jan 1997
TL;DR: Presents a top-down, constraint-driven design methodology for analog integrated circuits that offers a high probability of first-silicon success and a shorter design cycle.
Abstract: Analog circuit design is often the bottleneck when designing mixed analog-digital systems. A Top-Down, Constraint-Driven Design Methodology for Analog Integrated Circuits presents a new methodology based on a top-down, constraint-driven design paradigm that provides a solution to this problem. This methodology has two principal advantages: (1) it provides a high probability of first silicon that meets all specifications, and (2) it shortens the design cycle. A Top-Down, Constraint-Driven Design Methodology for Analog Integrated Circuits is part of an ongoing research effort at the University of California at Berkeley in the Electrical Engineering and Computer Sciences Department. Many faculty and students, past and present, are working on this design methodology and its supporting tools. The principal goals are: (1) developing the design methodology, (2) developing and applying new tools, and (3) 'proving' the methodology by undertaking 'industrial strength' design examples. The work presented here is neither a beginning nor an end in the development of a complete top-down, constraint-driven design methodology, but rather a step in its development. This work is divided into three parts. Chapter 2 presents the design methodology along with foundation material. Chapters 3-8 describe supporting concepts for the methodology, from behavioral simulation and modeling to circuit module generators. Finally, Chapters 9-11 illustrate the methodology in detail by presenting the entire design cycle through three large-scale examples. These include the design of a current source D/A converter, a Sigma-Delta A/D converter, and a video driver system. Chapter 12 presents conclusions and current research topics. A Top-Down, Constraint-Driven Design Methodology for Analog Integrated Circuits will be of interest to analog and mixed-signal designers as well as CAD tool developers.