Showing papers by "Alberto Sangiovanni-Vincentelli published in 1994"
••
TL;DR: Maintaining a finite-state machine model throughout, this approach automatically synthesizes the entire design, including hardware-software interfaces, and preserves the formal properties of the design.
Abstract: Designers generally implement embedded controllers for reactive real-time applications as mixed software-hardware systems. In our formal methodology for specifying, modeling, automatically synthesizing, and verifying such systems, design takes place within a unified framework that prejudices neither hardware nor software implementation. After interactive partitioning, this approach automatically synthesizes the entire design, including hardware-software interfaces. Maintaining a finite-state machine model throughout, it preserves the formal properties of the design. It also allows verification of both specification and implementation, as well as the use of specification refinement through formal verification.
214 citations
••
TL;DR: Algorithms for fault-driven test set selection are presented based on an analysis of the types of tests needed for different types of faults, and a major reduction in testing time should come from reducing the number of specification tests that need to be performed.
Abstract: Analog testing is a difficult task without a clearcut methodology. Analog circuits are tested for satisfying their specifications, not for faults. Given the high cost of testing analog specifications, it is proposed that tests for analog circuits should be designed to detect faults. Therefore analog fault modeling is discussed. Based on an analysis of the types of tests needed for different types of faults, algorithms for fault-driven test set selection are presented. A major reduction in testing time should come from reducing the number of specification tests that need to be performed. Hence algorithms are presented for minimizing specification testing time. After specification testing time is minimized, the resulting test sets are supplemented with some simple, possibly non-specification, tests to achieve 100% fault coverage. Examples indicate that fault-driven test set development can lead to drastic reductions in production testing time.
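The fault-driven selection idea above can be illustrated with a toy greedy sketch: at each step, pick the test that detects the most not-yet-covered faults per unit of testing time. All test names, fault sets, and times below are invented; the paper's selection algorithms are more elaborate.

```python
def select_tests(cover, time):
    """Greedy minimum-time cover: cover maps test -> detected faults,
    time maps test -> seconds of tester time."""
    remaining = set().union(*cover.values())
    chosen = []
    while remaining:
        # test detecting the most new faults per unit of test time
        best = max(cover, key=lambda t: len(cover[t] & remaining) / time[t])
        if not cover[best] & remaining:
            break                      # leftover faults are undetectable
        chosen.append(best)
        remaining -= cover[best]
    return chosen

# hypothetical specification tests plus one cheap non-specification test
cover = {"gain": {"f1", "f2"}, "offset": {"f2", "f3"},
         "thd": {"f4"}, "dc_short": {"f1", "f3", "f4"}}
time = {"gain": 5.0, "offset": 4.0, "thd": 1.0, "dc_short": 0.5}
print(select_tests(cover, time))
```

Here the cheap dc_short screen covers three faults, so only one slow specification test remains, mirroring the paper's point that simple supplementary tests can displace expensive specification tests.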
182 citations
••
06 Jun 1994
TL;DR: A systematic study of the problem of finding a minimum BDD size cover of an incompletely specified function, establishing a unified framework for heuristic algorithms, proving optimality in some cases, and presenting experimental results.
Abstract: We present heuristic algorithms for finding a minimum BDD size cover of an incompletely specified function, assuming the variable ordering is fixed. In some algorithms based on BDDs, incompletely specified functions arise for which any cover of the function will suffice. Choosing a cover that has a small BDD representation may yield significant performance gains. We present a systematic study of this problem, establishing a unified framework for heuristic algorithms, proving optimality in some cases, and presenting experimental results.
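With a fixed variable order, the ROBDD node count equals the number of distinct subfunctions that depend on their top variable, so for tiny functions the minimum-size cover can be found by brute force over the don't-care completions. The exhaustive sketch below (invented example, exponential in the number of don't-cares) only illustrates the objective the paper's heuristics optimize.

```python
from itertools import product

def bdd_size(tt):
    """Internal-node count of the ROBDD of a truth table (fixed order)."""
    nodes = set()
    def walk(t):
        if all(b == t[0] for b in t):   # constant function: terminal only
            return
        h = len(t) // 2
        if t[:h] == t[h:]:              # independent of the top variable
            walk(t[:h])
        else:
            nodes.add(t)
            walk(t[:h]); walk(t[h:])
    walk(tuple(tt))
    return len(nodes)

def best_cover(n, onset, dcset):
    """Try every completion of the don't-cares, keep the smallest BDD."""
    dcs = sorted(dcset)
    best = None
    for bits in product((0, 1), repeat=len(dcs)):
        tt = [0] * (2 ** n)
        for m in onset: tt[m] = 1
        for m, b in zip(dcs, bits): tt[m] = b
        size = bdd_size(tt)
        if best is None or size < best[0]:
            best = (size, tt)
    return best

print(best_cover(2, [3], [0]))        # completing the dc to 0 gives AND: 2 nodes
print(best_cover(2, [3], [0, 1, 2]))  # all-dc completion collapses to constant 1
```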
99 citations
••
01 May 1994
TL;DR: Simulation techniques are described in the framework of phase/delay-locked systems, but simulation methodology and the results attained in this work are applicable to the behavioral simulation of mixed-mode nonlinear dynamic systems.
Abstract: This paper presents behavioral simulation techniques for phase/delay-locked systems. Numerical simulation algorithms are compared and the issue of numerical noise is discussed. Behavioral phase noise simulation for phase/delay-locked systems is described. The role of behavioral simulation for phase/delay-locked systems in our top-down constraint-driven design methodology, and in bottom-up verification of designs, is explained with examples. Accuracy and efficiency comparisons with other methods are made. Simulation techniques are described in the framework of phase/delay-locked systems, but simulation methodology and the results attained in this work are applicable to the behavioral simulation of mixed-mode nonlinear dynamic systems.
78 citations
••
06 Nov 1994
TL;DR: In constraint-driven synthesis, it is shown that a fundamental subproblem of crosstalk channel routing, coupling-constrained graph levelization (CCL), is NP-complete, and a novel heuristic algorithm is developed.
Abstract: Interconnect performance does not scale well into deep submicron dimensions, and the rising number of analog effects erodes the digital abstraction necessary for high levels of integration. In particular, crosstalk is an analog phenomenon of increasing relevance. To cope with the increasingly analog nature of high-performance digital system design, we propose using a constraint-driven methodology. In this paper we describe new constraint generation ideas incorporating digital sensitivity. In constraint-driven synthesis, we show that a fundamental subproblem of crosstalk channel routing, coupling-constrained graph levelization (CCL), is NP-complete, and develop a novel heuristic algorithm. To demonstrate the viability of our methodology, we introduce a gridless crosstalk-avoiding channel router as an example of a robust and truly constraint-driven synthesis tool.
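One way to picture the levelization subproblem: wires carrying a mutual coupling constraint must be kept out of the same or adjacent routing tracks. The greedy sketch below (invented nets and constraints; not the paper's heuristic, which targets the NP-complete CCL formulation directly) conveys the flavor.

```python
def levelize(nets, coupled):
    """Assign each net the lowest track such that no net it couples with
    sits in the same or an adjacent track."""
    track = {}
    for n in nets:                     # nets visited in criticality order
        t = 0
        while any(m in track and abs(track[m] - t) <= 1
                  for m in coupled.get(n, ())):
            t += 1                     # bump to the next conflict-free track
        track[n] = t
    return track

# hypothetical instance: a noisy clock coupled to two sensitive data nets
coupled = {"clk": {"d0", "d1"}, "d0": {"clk"}, "d1": {"clk"}}
print(levelize(["clk", "d0", "d1", "d2"], coupled))
```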
74 citations
••
06 Jun 1994
TL;DR: The first published algorithm for fully implicit exact binate covering is presented, which can reduce and solve binate tables with up to 10^6 rows and columns, and the entire branch-and-bound procedure is carried out implicitly.
Abstract: State minimization of incompletely specified machines is an important step of FSM synthesis. An exact algorithm consists of generation of prime compatibles and solution of a binate covering problem. This paper presents an implicit algorithm for exact state minimization of FSMs. We describe how to do implicit prime computation and implicit binate covering. We show that we can handle sets of compatibles and prime compatibles of cardinality up to 2^1500. We present the first published algorithm for fully implicit exact binate covering. We show that we can reduce and solve binate tables with up to 10^6 rows and columns. The entire branch-and-bound procedure is carried out implicitly. We indicate also where such examples arise in practice.
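For intuition, binate covering asks for a minimum-cost 0/1 assignment to columns such that every row (a clause with positive and negative literals) is satisfied. An explicit branch-and-bound over a handful of variables looks like the sketch below; the point of the paper is doing the same search implicitly with BDDs on tables with up to 10^6 rows and columns, where explicit enumeration is hopeless. The clause instance is invented.

```python
def binate_cover(nvars, clauses, cost=None):
    """Min-cost assignment satisfying all clauses.
    A clause is a list of literals: +i requires var i-1 true, -i false."""
    cost = cost or [1] * nvars
    best = [None, float("inf")]
    def sat(assign):
        return all(any((lit > 0) == assign[abs(lit) - 1] for lit in cl)
                   for cl in clauses)
    def bb(i, assign, c):
        if c >= best[1]:               # bound: cannot beat the incumbent
            return
        if i == nvars:
            if sat(assign):
                best[0], best[1] = assign[:], c
            return
        for val in (False, True):      # branch on variable i
            assign.append(val)
            bb(i + 1, assign, c + (cost[i] if val else 0))
            assign.pop()
    bb(0, [], 0)
    return best[0], best[1]

print(binate_cover(3, [[1, 2], [-1, 3], [2, 3]]))
```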
71 citations
••
06 Jun 1994
TL;DR: The essential features of HSIS, a BDD-based environment for formal verification, are described, which allows us to experiment with formal verification techniques on a variety of design problems and provides an environment for further research in formal verification.
Abstract: Functional and timing verification are currently the bottlenecks in many design efforts. Simulation and emulation are extensively used for verification. Formal verification is now gaining acceptance in advanced design groups. This has been facilitated by the use of binary decision diagrams (BDDs). This paper describes the essential features of HSIS, a BDD-based environment for formal verification:
1. Open language design, made possible by using a compact and expressive intermediate format known as BLIF-MV. Currently, a synthesis subset of Verilog is supported.
2. Support for both model checking and language containment in a single unified environment, using expressive fairness constraints.
3. Efficient BDD-based algorithms.
4. Debugging environment for both language containment and model checking.
5. Automatic algorithms for the early quantification problem.
6. Support for state minimization using bisimulation and similar techniques.
HSIS allows us to experiment with formal verification techniques on a variety of design problems. It also provides an environment for further research in formal verification.
67 citations
••
06 Nov 1994
TL;DR: The new methodology is based on extracting the mismatch information from a fully functional circuit rather than on probing individual devices; this extraction leads to more efficient and more accurate mismatch measurement.
Abstract: This paper presents a new methodology for measuring MOS transistor current mismatch and a new transistor current mismatch model. The new methodology is based on extracting the mismatch information from a fully functional circuit rather than on probing individual devices; this extraction leads to more efficient and more accurate mismatch measurement. The new model characterizes the total mismatch as a sum of two components, one systematic and the other random. For our process, we attribute nearly half of the mismatch to the systematic component, which we model as a linear gradient across the die. Furthermore, we present a new model for the random component of the mismatch which is 60% more accurate, on average, than existing models.
54 citations
••
14 Dec 1994
TL;DR: In this article, the problem of model matching for finite state machines (FSMs) is addressed, which consists of finding a controller for a given open loop system so that the resulting closed loop system matches a desired input-output behavior.
Abstract: The problem of model matching for finite state machines (FSMs) is addressed. This problem consists of finding a controller for a given open loop system so that the resulting closed loop system matches a desired input-output behavior. A characterization of all feasible control laws is given and an efficient synthesis procedure is proposed.
48 citations
••
06 Jun 1994
TL;DR: This paper compares the original implementation of functional decomposition with the new version that uses encoding while doing decomposition, and obtains an average improvement of over 20% on a set of standard benchmarks for look-up table architectures.
Abstract: In this paper, we revisit the classical problem of functional decomposition [1, 2] that arises so often in logic synthesis. One basic problem that has remained largely unaddressed to the best of our knowledge is that of decomposing a function such that the resulting sub-functions are simple, i.e., have a small number of cubes or literals. In this paper, we show how to solve this problem optimally. We show that the problem is intimately related to the encoding problem, which is also of fundamental importance in sequential synthesis, especially state-machine synthesis. We formulate the optimum decomposition problem using encoding. In general, an input-output encoding formulation has to be employed. However, for field-programmable gate array architectures that use look-up tables, the input encoding formulation suffices, provided we use minimum-length codes. The last condition is really not a constraint, since each extra code bit means that an extra table has to be used (and that could be expensive). The unused codes are used as don't cares for simplifying the sub-functions. We compare the original implementation of functional decomposition, which ignores the encoding problem, with the new version that uses encoding while doing decomposition. We obtain an average improvement of over 20% on a set of standard benchmarks for look-up table architectures.
45 citations
•
22 Sep 1994
TL;DR: An application of the methodology and of the various software tools embedded in the POLIS co-design system is presented, in the realm of automotive electronics: a shock absorber controller whose specification comes from an actual product.
Abstract: With our codesign system, POLIS, we have specified and implemented a real-life design: a shock absorber controller. Through this experiment, we have shown the possibility of using such a system to design complex applications and to speed up the design cycle dramatically. All aspects of the design process are closely scrutinized including high level language translation and automatic hardware and software synthesis. We analyze different software implementation styles and draw some conclusions about our design process.
••
TL;DR: It is proved that constraint satisfaction is NP-complete, and a framework for the satisfaction of both input and output encoding constraints is developed, and an exact algorithm to determine the minimum number of encoding bits required to satisfy all the given constraints is provided.
Abstract: Three encoding problems relevant to the synthesis of digital circuits are input, output, and state encoding. Several encoding strategies have been proposed in the past that decompose the encoding problem into a two step process of constraint generation and constraint satisfaction. The latter requires the assignment of binary codes to symbols subject to the satisfaction of constraints on the codes. This paper focuses on the constraint satisfaction problem. We prove that constraint satisfaction is NP-complete. We develop a framework for the satisfaction of both input and output encoding constraints, and describe a polynomial time (in the number of symbols to be encoded) algorithm to check for the existence of a solution for a set of input and output constraints. An exact algorithm to determine the minimum number of encoding bits required to satisfy all the given constraints is provided, and a heuristic algorithm is also described. The application of this framework to a variety of encoding problems with different cost functions is illustrated. Experimental results on standard benchmarks are given for the exact and heuristic algorithms.
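In the input-encoding case, a constraint says that the codes assigned to a group of symbols must span a face of the code hypercube that contains no other symbol's code. The brute-force sketch below checks such constraints and searches for the minimum code length; symbols and constraint groups are invented, and the search is only viable for a handful of symbols, which is exactly why the paper needs exact and heuristic algorithms.

```python
from itertools import permutations, product

def satisfies(enc, groups):
    """enc maps symbol -> bit tuple; every group must span a face that
    excludes the code of every symbol outside the group."""
    nb = len(next(iter(enc.values())))
    for g in groups:
        face = [set(enc[s][i] for s in g) for i in range(nb)]
        for s in enc:
            if s not in g and all(enc[s][i] in face[i] for i in range(nb)):
                return False            # outside symbol falls inside the face
    return True

def min_bits(symbols, groups, limit=4):
    """Smallest code length admitting a satisfying distinct encoding."""
    for nb in range(1, limit + 1):
        for codes in permutations(product((0, 1), repeat=nb), len(symbols)):
            enc = dict(zip(symbols, codes))
            if satisfies(enc, groups):
                return nb, enc
    return None

groups = [{"a", "b"}, {"b", "c"}]
nb, enc = min_bits(["a", "b", "c", "d"], groups)
print(nb, enc)
```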
••
01 May 1994
TL;DR: This paper presents an efficient strategy for testability analysis and fault diagnosis of analog circuits using behavioral models and develops a new algorithm for determining analog testability.
Abstract: This paper presents an efficient strategy for testability analysis and fault diagnosis of analog circuits using behavioral models. A key contribution is a new algorithm for determining analog testability. Experimentally, we determined the testability and faults of a fabricated 10 bit digital-to-analog converter modeled using the analog hardware description language, Cadence-AHDL. Also, we applied the testability analysis at the circuit level using SPICE sensitivity analysis.
••
06 Jun 1994
TL;DR: A new delay optimization procedure that optimizes only sensitizable paths greater than the desired delay t is described, and this method accounts for both functional and topological interactions in the circuit.
Abstract: A common approach to performance optimization of circuits focuses on re-synthesis to reduce the length of all paths greater than the desired delay t. We describe a new delay optimization procedure that optimizes only sensitizable paths greater than t. Unlike previous methods that use topological analysis only, this method accounts for both functional and topological interactions in the circuit. Comprehensive experimental results comparing the proposed technique to a state-of-the-art performance optimization procedure are presented for combinational and sequential logic circuits.
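For contrast, conventional static timing works purely topologically: the arrival time at a gate is its delay plus the latest input arrival, regardless of whether any input vector can actually propagate an event along that path. The sketch below (invented netlist and delays) computes exactly the topological delay that, as the paper argues, can over-constrain optimization when the longest paths are not sensitizable.

```python
netlist = {                     # gate -> (fan-in signals, gate delay)
    "g1": (("a", "b"), 2.0),
    "g2": (("b", "c"), 1.0),
    "g3": (("g1", "g2"), 1.5),
}

def arrival(node):
    """Latest topological arrival time; primary inputs arrive at t = 0."""
    if node not in netlist:
        return 0.0
    ins, delay = netlist[node]
    return delay + max(arrival(i) for i in ins)

print(arrival("g3"))            # topological delay through g1: 3.5
```

A sensitization-aware analysis would additionally check, via the gates' logic functions, whether the path through g1 can ever carry a transition before tightening it.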
••
01 May 1994
TL;DR: In this article, a top-down, constraint-driven design methodology is proposed to accelerate the design cycle for analog circuits and mixed-signal systems, and a design which demonstrates the two principal advantages that this methodology provides: a high probability for first silicon which meets all specifications and fast design times.
Abstract: To accelerate the design cycle for analog circuits and mixed-signal systems, we have proposed a top-down, constraint-driven design methodology. In this paper we present a design which demonstrates the two principal advantages that this methodology provides: a high probability for first silicon which meets all specifications and fast design times. We examine the design of three different 10-bit digital-to-analog (D/A) converters beginning from their performance and functional specifications and ending with the testing of the fabricated parts. Critical technology mismatch information gathered from the testing phase is provided.
••
11 Jul 1994
TL;DR: In this paper, the notion of bisimulation is extended to Kripke structures with fairness, and it is shown that the addition of fairness might cause two Kripke structures, which can be distinguished by a CTL* formula, to become indistinguishable by any CTL formula.
Abstract: We extend the notion of bisimulation to Kripke structures with fairness. We define equivalences that preserve fairness and are akin to bisimulation. Specifically we define an equivalence and show that it is complete in the sense that it is the coarsest equivalence that preserves the logic CTL* interpreted with respect to the fair paths. We show that the addition of fairness might cause two Kripke structures, which can be distinguished by a CTL* formula, to become indistinguishable by any CTL formula. We also define another weaker equivalence that is the weakest equivalence preserving CTL interpreted on the fair paths. As a consequence of our proof, we also obtain characterizations of states in the fair structure in terms of CTL* and CTL formulae.
••
TL;DR: The main bottlenecks of the KMS algorithm are resolved by providing an efficient single-pass algorithm to simultaneously remove all long false paths from a given circuit by relating a circuit structure property based on path lengths to the testability and delay.
Abstract: The existence of redundant stuck-faults in a logic circuit is potentially detrimental to high-speed operation, especially when there are false paths that are longer than the circuit delay. Keutzer, Malik, and Saldanha (KMS) in IEEE Transactions on Computer-Aided Design, vol. 10, no. 4, p. 427, April 1991 have proved that redundancy is not necessary to reduce delay by presenting an algorithm that derives an equivalent irredundant circuit from a given redundant circuit, with no increase in delay. The KMS algorithm consists of an iterative loop of timing analysis, gate duplications, and redundancy removal to successively eliminate long false paths. In this paper we resolve the main bottlenecks of the KMS algorithm by providing an efficient single-pass algorithm to simultaneously remove all long false paths from a given circuit. We achieve this by relating a circuit structure property based on path lengths to the testability (redundancy) and delay. The application of this algorithm to a variety of related logic synthesis problems is described.
••
06 Nov 1994
TL;DR: Any nonlinear dynamic circuit with any kind of excitation, which can be simulated by the transient analysis routine in a circuit simulator, can be simulated by the noise simulator in the time domain to produce the noise variances and covariances of circuit variables as a function of time.
Abstract: A new, time-domain, non-Monte Carlo method for computer simulation of electrical noise in nonlinear dynamic circuits with arbitrary excitations is presented. This time-domain noise simulation method is based on the results from the theory of stochastic differential equations. The noise simulation method is general in the sense that any nonlinear dynamic circuit with any kind of excitation, which can be simulated by the transient analysis routine in a circuit simulator, can be simulated by our noise simulator in the time domain to produce the noise variances and covariances of circuit variables as a function of time, provided that noise models for the devices in the circuit are available. Noise correlations between circuit variables at different time points can also be calculated. Previous work on computer simulation of noise in integrated circuits is reviewed with comparisons to our method. Shot, thermal and flicker noise models for integrated-circuit devices, in the context of our time-domain noise simulation method, are described. The implementation of this noise simulation method in a circuit simulator (SPICE) is described. Two examples of noise simulation (a CMOS ring-oscillator and a BJT active mixer) are given.
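The SDE-based idea can be illustrated on the smallest possible case: an RC low-pass whose only source is resistor thermal noise, integrated with Euler-Maruyama steps. The ensemble variance of the capacitor voltage should settle near the analytic kT/C; component values and step counts are invented for this sketch.

```python
import math, random

k_T = 4.14e-21                 # kT at roughly 300 K, in joules
R, C = 1e3, 1e-9               # hypothetical component values
tau = R * C
dt = tau / 20.0
sigma = math.sqrt(2 * k_T / R) / C   # intensity in dV = -V/tau dt + sigma dW

random.seed(1)
paths = 4000
v = [0.0] * paths
for _ in range(200):                 # simulate ten time constants
    for i in range(paths):
        v[i] += -v[i] / tau * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)

var = sum(x * x for x in v) / paths  # ensemble variance (mean is zero)
print(var, k_T / C)                  # simulated variance vs analytic kT/C
```

A production simulator propagates variances and covariances directly rather than by Monte Carlo ensembles, which is precisely the non-Monte Carlo point of the paper.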
••
06 Jun 1994
TL;DR: A sequential approach to compute the minimum cycle times of finite state machines, taking into account the effects of gate delay variations, reachable state space, initial states, unrealizable transitions, multiple cycle false paths, and periodicity of the present state vector sequences is presented.
Abstract: In current research, the minimum cycle times of finite state machines are estimated by computing the delays of the combinational logic in the finite state machines. Even though these methods deal with false paths, they ignore the sequential and periodic nature of minimum cycle times, and hence may give pessimistic results. In this paper, we first prove conditions under which combinational delays are correct upper bounds on minimum cycle times. Then, we present a sequential approach to compute the minimum cycle times of finite state machines, taking into account the effects of gate delay variations, reachable state space, initial states, unrealizable transitions, multiple cycle false paths, and periodicity of the present state vector sequences. We formulate and solve the problem exactly using Timed Boolean Functions, and give an efficient algorithm to solve for upper bounds of minimum cycle times. The exact formulation with Timed Boolean Functions provides a framework for further improvements on existing algorithms to compute the minimum cycle times. We implemented the algorithm and obtained the tightest bounds known on ISCAS benchmarks. From the experiments, we found that for about 20% of the circuits (not all shown in section 8), combinational delays, e.g. floating, viability, and transition delays, give pessimistic upper bounds for cycle times by as much as 25%.
••
06 Jun 1994
TL;DR: The flexibility of the annealing algorithm has been significantly improved, thus making it possible to more efficiently exploit the tradeoffs between area, parasitics and matching.
Abstract: New placement techniques are presented which substantially improve the process of automatic layout generation of analog IC's. Extremely tight specifications can be enforced on high-performance analog circuits by using simultaneous placement and module optimization. An algorithmic approach to module generation provides alternative sets of modules optimized with respect to area and performance but equivalent in terms of parasitics and topology. The final module selection is performed during the placement phase, based on Simulated Annealing. The flexibility of the annealing algorithm has been significantly improved, thus making it possible to more efficiently exploit the tradeoffs between area, parasitics and matching.
••
01 May 1994
TL;DR: An algorithmic approach to module generation provides alternative sets of modules, optimized with respect to performance but with different trade-offs among area, parasitics and matching, allowing the enforcement of tighter specifications.
Abstract: Techniques are presented for simultaneous placement and module optimization for analog ICs. An algorithmic approach to module generation provides alternative sets of modules, optimized with respect to performance but with different trade-offs among area, parasitics and matching. A simulated annealing algorithm performs the placement, selecting among the available configurations the one that best fulfils all performance and geometric requirements. Compared to standard approaches, the flexibility of placement is considerably increased, thus allowing the enforcement of tighter specifications.
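The annealing core of such a placer can be sketched in a few lines: propose a random relocation, always accept improvements, and accept degradations with probability exp(-delta/T) under a cooling schedule. Modules, nets, and the wirelength-only cost below are invented; the paper's placer additionally selects among module variants and enforces parasitic and matching constraints.

```python
import math, random

random.seed(7)
modules = ["amp", "bias", "cap", "mirror"]
nets = [("amp", "bias"), ("amp", "cap"), ("cap", "mirror"), ("bias", "mirror")]
pos = {m: (random.randint(0, 7), random.randint(0, 7)) for m in modules}

def cost(p):   # total Manhattan wirelength of the two-point nets
    return sum(abs(p[a][0] - p[b][0]) + abs(p[a][1] - p[b][1]) for a, b in nets)

T, cur = 10.0, cost(pos)
start = cur
while T > 0.01:
    m = random.choice(modules)
    old = pos[m]
    pos[m] = (random.randint(0, 7), random.randint(0, 7))   # random move
    new = cost(pos)
    if new <= cur or random.random() < math.exp((cur - new) / T):
        cur = new                       # accept (always, if downhill)
    else:
        pos[m] = old                    # reject uphill move
    T *= 0.99                           # geometric cooling
print(start, "->", cur)
```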
••
06 Nov 1994TL;DR: A new CAD algorithm which performs automatic test pattern generation (ATPG) for a general class of analog systems, namely those circuits which can be efficiently modeled as an additive combination of user-defined basis functions.
Abstract: This paper describes a new CAD algorithm which performs automatic test pattern generation (ATPG) for a general class of analog systems, namely those circuits which can be efficiently modeled as an additive combination of user-defined basis functions. The algorithm is based on the statistical technique of I-optimal experimental design, in which test vectors are chosen to be maximally independent so that circuit performance will be characterized as accurately as possible in the presence of measurement noise and model inaccuracies. This technique allows analog systems to be characterized more accurately and more efficiently, thereby significantly reducing system test time and hence total manufacturing cost.
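Flavor of the approach: with a basis-function model of the response, pick the subset of candidate stimuli whose information matrix X^T X is "largest". The sketch below uses the related D-optimality criterion (maximize det(X^T X)), which avoids the matrix inverse the paper's I-optimal criterion requires; the quadratic model and candidate stimulus levels are invented.

```python
from itertools import combinations

def row(x):                      # basis functions: 1, x, x^2
    return [1.0, x, x * x]

def xtx(points):
    rows = [row(x) for x in points]
    return [[sum(r[i] * r[j] for r in rows) for j in range(3)]
            for i in range(3)]

def det3(m):                     # determinant of a 3x3 matrix by cofactors
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

candidates = [-1.0, -0.5, 0.0, 0.5, 1.0]
best = max(combinations(candidates, 3), key=lambda p: det3(xtx(p)))
print(best)     # the three most informative stimuli for the quadratic model
```

As expected for a quadratic model on [-1, 1], the most informative three-point design is the two extremes plus the center.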
••
21 Jun 1994
TL;DR: The decidability of the existence of a network invariant is studied, a procedure that will find it is presented, and conditions under which such an invariant does not exist are given.
Abstract: We study network invariants, abstractions of systems consisting of arbitrary many identical components. In particular, we study a case when an instance of some fixed size serves as an invariant. We study the decidability of the existence of such an invariant, present a procedure that will find it, if one exists, and finally give conditions under which such an invariant does not exist. These conditions can be checked in finite time, and if satisfied, they can be used in further searches for an invariant.
••
14 Nov 1994
TL;DR: Preconditioning, partitioning and communication scheduling algorithms are developed to implement an efficient and robust iterative linear solver with preconditioning to solve drift-diffusion semiconductor device equations using an irregular grid discretization.
Abstract: Presents the use of parallel processors for the solution of drift-diffusion semiconductor device equations using an irregular grid discretization. Preconditioning, partitioning and communication scheduling algorithms are developed to implement an efficient and robust iterative linear solver with preconditioning. The parallel program is executed on a 64-node CM-5 and is compared with PILS (a solver for ill-conditioned systems) running on a single processor. We observe an efficiency increase in obtaining parallel speed-ups as the problem size increases. We obtain 60% efficiency for CGS (a fast Lanczos-type solver for nonsymmetric linear systems) with no preconditioning for large problems. Using CGS with processor ILU preconditioning and magnitude threshold-fill-in preconditioning for the CM-5, and CGS with ILU for PILS, we attain 50% efficiency for the solution of large matrices.
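For reference, the CGS (conjugate gradient squared) iteration at the heart of the solver, in a dense single-processor form with no preconditioning, applied to a toy nonsymmetric system. The parallel partitioning, communication scheduling, and ILU/threshold preconditioning the paper contributes are omitted.

```python
def cgs(A, b, iters=50, tol=1e-10):
    """Unpreconditioned CGS for a small dense nonsymmetric system A x = b."""
    n = len(b)
    mul = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, w: sum(x * y for x, y in zip(u, w))
    x = [0.0] * n
    r = b[:]                       # residual for the x = 0 initial guess
    rt = r[:]                      # fixed shadow residual
    p = q = [0.0] * n
    rho_prev = 1.0
    for k in range(iters):
        rho = dot(rt, r)
        beta = rho / rho_prev if k else 0.0
        u = [r[i] + beta * q[i] for i in range(n)]
        p = [u[i] + beta * (q[i] + beta * p[i]) for i in range(n)]
        v = mul(p)
        alpha = rho / dot(rt, v)
        q = [u[i] - alpha * v[i] for i in range(n)]
        uq = [u[i] + q[i] for i in range(n)]
        x = [x[i] + alpha * uq[i] for i in range(n)]
        Auq = mul(uq)
        r = [r[i] - alpha * Auq[i] for i in range(n)]
        rho_prev = rho
        if dot(r, r) ** 0.5 < tol:
            break                  # residual small enough: converged
    return x

A = [[4.0, 1.0, 0.0], [2.0, 5.0, 1.0], [0.0, 1.0, 3.0]]  # invented matrix
b = [1.0, 0.0, 2.0]
x = cgs(A, b)
print(x)
```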
••
06 Nov 1994
TL;DR: This work presents an algorithm based on timed automata, a model where a finite state system is augmented with time measuring devices called timers, and presents a semi-decision procedure for an extended model where timers can be decremented.
Abstract: Most embedded real-time systems consist of many concurrent components operating at significantly different speeds. Thus, an algorithm for formal verification of such systems must efficiently deal with a large number of states and large ratios of timing constants. We present such an algorithm based on timed automata, a model where a finite state system is augmented with time measuring devices called timers. We also present a semi-decision procedure for an extended model where timers can be decremented. This extension allows describing behaviors that are not expressible by timed automata, for example interrupts in a real-time operating system.
••
01 Feb 1994
TL;DR: This article describes how to analyze the STG specification and the synthesized circuit, using bounded delay information, to formulate the problem and use a branch-and-bound procedure to solve it.
Abstract: Hazards can be globally eliminated from an asynchronous circuit synthesized from a Signal Transition Graph by repeatedly solving an appropriate Linear Program. This article describes how to analyze the STG specification and the synthesized circuit, using bounded delay information, to formulate the problem and use a branch-and-bound procedure to solve it. Known information about the environment delays can be expressed as time bounds on the external signal transitions, and it can be exploited by the proposed methodology.
••
06 Jun 1994
TL;DR: This panel will assess the outlook for verification for complex systems, focusing on enabling technologies which show promise in this area, both now and for the future and providing their own perspective of system-level verification challenges.
Abstract: Time-to-market continues to be The Challenge faced by developers of complex electronic systems. The bottleneck - which was traditionally in the design phase of the project - has now moved downstream to the system-level verification stage. The growing adoption of top-down design methodologies based on HDL and synthesis has made the generation of large multi-million gate designs easier than before. The efficient verification of those newly created gates in the final system is now the key to solving The Challenge. Technologies such as rapid system prototyping, ASIC emulation and formal verification offer the potential to completely verify the full system. Hardware and software co-design and HDL test benches offer the potential to feed real world inputs to the verification process. This panel will assess the outlook for verification for complex systems. We will focus on enabling technologies which show promise in this area, both now and for the future. The panel will present a mixture of tutorial material, leading edge academic work, current technology and the user's perspective. In addition to current and future technologies, each speaker will specifically address how design methodology is impacted by their choice of verification methodology. The discussion of the panel will focus on:
- What's required from the EDA industry to make their customers successful in the future?
- System-level verification can be quite expensive. Is the promised Return-On-Investment really there?
- Which technologies are usable without turning design methodologies upside down?
- Just what is product and what is research in this area?
The target audience includes chip and system designers, design managers and executive management who are pondering the benefits and costs of implementing system-level verification. The panelists represent a mix of academia, companies offering verification solutions and users of system-level verification products.
The panel will begin with a tutorial on Formal Verification. All other panelists will be limited to a short position statement, allowing ample time for discussion. Each panelist will provide their own perspective of system-level verification challenges.
••
06 Jun 1994
TL;DR: An efficient algorithm based on a purely geometric approach that generates feasible configurations very efficiently is presented, thus making full conformational analysis possible even for fairly large cyclic structures.
Abstract: Conformational analysis is the problem of finding all minimal energy three-dimensional configurations of molecules. Cyclic structures are of particular interest. An efficient algorithm based on a purely geometric approach that generates feasible configurations very efficiently is presented, thus making full conformational analysis possible even for fairly large cyclic structures.
••
01 Apr 1994 - Compel: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering
TL;DR: A generalized self-scattering method for generating carrier free flight times in Monte Carlo simulation that results in fewer fictitious scatterings, which is especially appealing for load balance and efficiency when a SIMD parallel computer is used.
Abstract: We present a generalized self-scattering method for generating carrier free flight times in Monte Carlo simulation. Compared to traditional approaches, the added flexibility of this approach results in fewer fictitious scatterings, which is especially appealing for load balance and efficiency when a SIMD parallel computer is used. Speedups from 19% to 69% over an optimized variable-Γ approach are shown for an implementation on the Connection Machine CM-2. The performance sensitivities to applied fields and grid spacings are also presented. The conversion of existing variable-Γ software to this new approach requires only a few changes.
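The baseline self-scattering construction being generalized here: free flights for a state-dependent total scattering rate lam are drawn using a constant majorant Gamma >= lam, and a flight is rejected as a fictitious self-scattering event with probability 1 - lam/Gamma. The toy rate below is invented and depends on the flight time alone (in a real simulator it depends on carrier energy along the trajectory); it just shows why a tighter majorant wastes fewer events.

```python
import math, random

def fictitious_count(gamma, lam, n_real, rng):
    """Draw free flights at majorant rate gamma until n_real genuine
    scattering events occur; return how many fictitious ones were drawn."""
    real = fict = 0
    while real < n_real:
        t = -math.log(rng.random()) / gamma      # exponential flight time
        if rng.random() < lam(t) / gamma:
            real += 1                            # genuine scattering
        else:
            fict += 1                            # self-scattering, wasted work
    return fict

lam = lambda t: 1.0 + 0.2 * math.sin(t)         # toy rate, bounded by 1.2
loose = fictitious_count(5.0, lam, 5000, random.Random(0))
tight = fictitious_count(1.2, lam, 5000, random.Random(0))
print(loose, tight)    # the tighter majorant wastes far fewer events
```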