
Showing papers by "Alberto Sangiovanni-Vincentelli published in 1995"


Proceedings Article•DOI•
01 Dec 1995
TL;DR: An approach for fast discrete function evaluation based on multi-valued decision diagrams (MDDs) is proposed; decision-diagram-based function evaluation offers orders-of-magnitude potential speedup over traditional logic simulation methods.
Abstract: An approach for fast discrete function evaluation based on multi-valued decision diagrams (MDD) is proposed. The MDD for a logic function is translated into a table, on which function evaluation is performed by a sequence of address lookups. The value of a function for a given input assignment is obtained with at most one lookup per input. The main application is to cycle-based logic simulation of digital circuits, where the principal difference from other logic simulators is that only values of the output and latch ports are computed. Theoretically, decision-diagram based function evaluation offers orders-of-magnitude potential speedup over traditional logic simulation methods. In practice, memory bandwidth becomes the dominant consideration on large designs. We describe techniques to optimize usage of the memory hierarchy.
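The lookup scheme the abstract describes can be sketched as follows. This is an illustrative toy, not the paper's implementation: the node encoding and names are invented here. Each non-terminal node holds a child array indexed by the value of its input variable, so evaluating the function costs at most one lookup per input.

```python
LEAF = 'leaf'

def mdd_eval(node, inputs):
    """Walk the decision diagram: at most one child lookup per input variable."""
    while node[0] != LEAF:
        var, children = node
        node = children[inputs[var]]   # single address lookup for this input
    return node[1]

# Example: 2-input AND over binary inputs as a tiny decision diagram.
ZERO = (LEAF, 0)
ONE = (LEAF, 1)
and2 = (0, [ZERO,                  # x0 = 0: output is 0 without reading x1
            (1, [ZERO, ONE])])     # x0 = 1: output decided by x1

print(mdd_eval(and2, [1, 1]))  # prints 1
print(mdd_eval(and2, [0, 1]))  # prints 0
```

In the paper's cycle-based setting the diagram is flattened into a contiguous table, so each step becomes a plain address computation; that is where the memory-bandwidth and memory-hierarchy considerations the abstract mentions come in.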

108 citations


Journal Article•DOI•
TL;DR: An analytical-model generator for interconnect capacitances is presented, which obtains analytical expressions of self and coupling capacitance of interconnects for commonly encountered configurations, based on a series of numerical simulations and a partial knowledge of the flux components associated with the configurations.
Abstract: An analytical-model generator for interconnect capacitances is presented. It obtains analytical expressions of self and coupling capacitances of interconnects for commonly encountered configurations, based on a series of numerical simulations and a partial knowledge of the flux components associated with the configurations. The configurations which are currently considered by this model generator are: (a) single line; (b) crossing lines; (c) parallel lines on the same layer; and (d) parallel lines on different layers (both overlapping and nonoverlapping).

94 citations


Proceedings Article•DOI•
01 Jan 1995
TL;DR: A software generation methodology that takes advantage of the very restricted class of specifications and allows for tight control over the implementation cost, and exploits several techniques from the domain of Boolean function optimization is proposed.
Abstract: Software components for embedded reactive real-time applications must satisfy tight code size and run-time constraints. Cooperating Finite State Machines provide a convenient intermediate format for embedded system co-synthesis, between high-level specification languages and software or hardware implementations. We propose a software generation methodology that takes advantage of the very restricted class of specifications and allows for tight control over the implementation cost. The methodology exploits several techniques from the domain of Boolean function optimization. We also describe how the simplified control/data-flow graph used as an intermediate representation can be used to accurately estimate the size and timing cost of the final executable code.

94 citations


Journal Article•DOI•
TL;DR: It is proved that 100% delay fault testability is not necessary to guarantee the speed of a combinational circuit and the test set size can be reduced while maintaining the delay fault coverage for the specified circuit speed.
Abstract: The main disadvantage of the path delay fault model is that to achieve 100% testability every path must be tested. Since the number of paths is usually exponential in circuit size, this implies very large test sets for most circuits. Not surprisingly, all known analysis and synthesis techniques for 100% path delay fault testability are computationally infeasible on large circuits. We prove that 100% delay fault testability is not necessary to guarantee the speed of a combinational circuit. There exist path delay faults which can never impact the circuit delay (computed using any correct timing analysis method) unless some other path delay faults also affect it. These are termed robust dependent delay faults and need not be considered in delay fault testing. Necessary and sufficient conditions under which a set of path delay faults is robust dependent are proved; this yields more accurate and increased delay fault coverage estimates than previously used. Next, assuming only the existence of robust delay fault tests for a very small set of paths, we show how the circuit speed (clock period) can be selected such that 100% robust delay fault coverage is achieved. This leads to a quantitative tradeoff between the testing effort (measured by the size of the test set) for a circuit and the verifiability of its performance. Finally, under a bounded delay model, we show that the test set size can be reduced while maintaining the delay fault coverage for the specified circuit speed. Examples and experimental results are given to show the effect of these three techniques on the amount of delay fault testing necessary to guarantee correct operation.

60 citations


Proceedings Article•DOI•
01 Jan 1995
TL;DR: A method of synthesizing low-power combinational logic circuits from Shannon Graphs is proposed such that an n input, m output circuit realization using 2-input gates with unbounded fanout has O(nm) transitions per input vector.
Abstract: A method of synthesizing low-power combinational logic circuits from Shannon Graphs is proposed such that an n input, m output circuit realization using 2-input gates with unbounded fanout has O(nm) transitions per input vector. Under a bounded fanout model, the transition activity is increased at most by a factor of n. Moreover, the power consumption is independent of circuit delays.

50 citations


Book•
31 Jul 1995
TL;DR: This book covers the mapping and optimization of combinational and sequential logic onto look-up table (LUT) and multiplexor-based architectures, including complexity issues and performance-directed synthesis.
Abstract: Preface. Part I: Introduction. 1. Introduction. 2. Background. Part II: Look-up table (LUT) architectures. 3. Mapping computational logic. 4. Logic optimization. 5. Complexity issues. 6. Mapping sequential logic. 7. Performance directed synthesis. Part III: Multiplexor-based architectures. 8. Mapping combinational logic. Part IV: Conclusions. 10. Conclusions. References. Index.

46 citations


Proceedings Article•DOI•
01 Dec 1995
TL;DR: The logic S1S is used to derive simple, rigorous, and constructive solutions to problems in sequential synthesis, and exact and approximate sets of permissible FSM network behavior are obtained.
Abstract: We propose the use of the logic S1S as a mathematical framework for studying the synthesis of sequential designs. We will show that this leads to simple and mathematically elegant solutions to problems arising in the synthesis and optimization of synchronous digital hardware. Specifically, we derive a logical expression which yields a single finite state automaton characterizing the set of implementations that can replace a component of a larger design. The power of our approach is demonstrated by the fact that it generalizes immediately to arbitrary interconnection topologies, and to designs containing nondeterminism and fairness. We also describe control aspects of sequential synthesis and relate controller realizability to classical work on program synthesis and tree automata.

43 citations


Patent•
12 Jun 1995
TL;DR: In this article, a data structure that completely and accurately models a system of discrete function elements is presented, and a discrete function simulator is used to simulate the system using the data structure.
Abstract: A system and method increases discrete function simulator performance by creating a data structure that completely and accurately models a system of discrete function elements. A discrete function simulator simulates the system using the data structure. Sequential circuits are converted into blocks of combinational elements having latch variables stored to and read from memory. The simulator performance is dependent upon the number of system inputs and outputs and not on the number of discrete function elements in the circuit being simulated.

38 citations


Proceedings Article•DOI•
06 Mar 1995
TL;DR: This paper shows how to obtain a transition-optimum binary tree decomposition for some specific functions (AND, OR, and EX-OR) for zero gate delay model, and proposes a straightforward extension of this algorithm for arbitrary functions and Boolean networks.
Abstract: In this age of portable electronic systems, the problem of logic synthesis for low power has acquired great importance. The most popular approach has been to target the widely-accepted two-phase paradigm of technology-independent optimization and technology mapping for power minimization. Before mapping, each function of a multi-level network is decomposed into two-input gates. How this decomposition is done can have a significant impact on the power dissipation of the final implementation. The problem of decomposition for low power was recently addressed by Pedram et al. (1993). However, they ignore the power consumption due to glitches, which can be a sizeable fraction of the total power. In this paper, we show how to obtain a transition-optimum binary tree decomposition (i.e., the one which has minimum number of transitions in the worst case, including those due to glitches) for some specific functions (AND, OR, and EX-OR) for zero gate delay model. For a non-zero gate delay model, we present conditions under which our algorithm yields an optimum solution for such functions. We propose a straightforward extension of this algorithm for arbitrary functions and Boolean networks. Experimental results on a set of standard combinational benchmarks indicate that on average, our algorithm generates networks (using two-input gates) that have 16% fewer transitions in the worst case than the networks generated by a simple-minded two-input technology-decomposition algorithm implemented in sis, a widely used logic synthesis system.
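As a rough illustration of the decomposition step the abstract discusses, the sketch below builds a balanced binary tree of 2-input gates over an associative operator (AND, OR, or EX-OR). This is not the paper's transition-optimum construction, which additionally models glitching under the chosen delay model; all names here are invented.

```python
def balanced_tree(leaves, op):
    """Decompose an n-input associative gate into a balanced tree of 2-input gates.

    Returns a nested tuple (op, left, right); leaves pass through unchanged.
    """
    level = list(leaves)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            nxt.append((op, level[i], level[i + 1]))
        if len(level) % 2:           # odd leftover input promotes to the next level
            nxt.append(level[-1])
        level = nxt
    return level[0]

def depth(t):
    """Number of 2-input gate levels on the longest path."""
    if not (isinstance(t, tuple) and len(t) == 3):
        return 0
    return 1 + max(depth(t[1]), depth(t[2]))

tree = balanced_tree(['a', 'b', 'c', 'd', 'e'], 'AND')
print(depth(tree))  # 3 gate levels for 5 inputs (ceil of log2(5))
```

A balanced shape is the usual starting point because it minimizes logical depth; choosing which inputs pair up, which is what drives worst-case transition counts, is the part the paper optimizes.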

36 citations


Journal Article•DOI•
TL;DR: A proof that STG persistency is neither necessary nor sufficient for hazard-free implementation is given, and a new synthesis methodology for asynchronous sequential control circuits from a high level specification, the signal transition graph is introduced.
Abstract: This paper introduces a new synthesis methodology for asynchronous sequential control circuits from a high level specification, the signal transition graph (STG). The methodology is guaranteed to generate hazard-free circuits with the bounded wire-delay model, if the STG is live and has the complete state coding property. The methodology exploits knowledge of the environmental delays, speed-independence with respect to externally visible signals, and logic synthesis techniques. A proof that STG persistency is neither necessary nor sufficient for hazard-free implementation is given.

36 citations


Proceedings Article•DOI•
01 May 1995
TL;DR: A complete design flow is presented to illustrate this top-down, constraint-driven design methodology as it applies to the design of a second order sigma-delta (ΣΔ) analog-to-digital (A/D) converter.
Abstract: To accelerate the design cycle for analog circuits and mixed-signal systems, we have proposed a top-down, constraint-driven design methodology. In this paper we present a complete design flow to illustrate this design methodology as it applies to the design of a second order sigma-delta (ΣΔ) analog-to-digital (A/D) converter. We start from its performance and functional specifications and end with the testing of the fabricated parts. Experimental results are presented.

Proceedings Article•DOI•
01 May 1995
TL;DR: In this paper, performance sensitivities are used to derive a set of bounds on critical parasitics and to generate weights for a cost function which drives an area router for routing of very high-frequency circuits.
Abstract: Techniques are proposed for the routing of very high-frequency circuits. In this approach, performance sensitivities are used to derive a set of bounds on critical parasitics and to generate weights for a cost function which drives an area router. In addition to these bounds, design often requires that the length of interconnect lines be equal to predefined values. The routing scheme enforces both types of constraints in two phases. During the first phase all parasitic constraints are enforced on all nets. Equality constraints are enforced during the second phase by expanding each net simultaneously while ensuring that no additional violations to parasitic constraints are introduced in the layout. During both phases accurate and efficient parasitic estimations are guaranteed by compact analytical models, based on 2D and 3D field analysis. Finally, a global check on all distributed parasitics is performed. If the original constraints are not satisfied, the weights are updated based on severity of the violation and routing is applied iteratively. Several layouts synthesized using this technique have been fabricated and successfully tested, confirming the effectiveness of the approach.

Proceedings Article•DOI•
13 Dec 1995
TL;DR: A characterization of all feasible control laws is given and an efficient synthesis procedure is proposed for finding a controller for a given open loop system so that the resulting closed loop system matches one of several acceptable input-output behaviors described by a possibly non-deterministic FSM.
Abstract: This paper addresses the problem of finding a controller for a given open loop system so that the resulting closed loop system matches one of several acceptable input-output behaviors described by a possibly non-deterministic FSM. A characterization of all feasible control laws is given and an efficient synthesis procedure is proposed.

Journal Article•DOI•
TL;DR: Using the behavioral representation of Nyquist data converters, a novel strategy to calculate system performance is developed and results agree well with SPICE simulations and confirm the validity of the model.
Abstract: A behavioral representation of Nyquist data converters is presented. The representation captures the static behavior of a memoryless Nyquist data converter including statistical variations. The variations are classified into noise and process variations according to how these nonidealities affect the converter behavior. To describe noise effects, a joint probability density function is used. To describe behavioral effects due to process variations, a Gaussian model is used. Using the behavioral representation, a novel strategy to calculate system performance is developed. The performance specifications of a converter, including offset error, full scale gain error, integral nonlinearity, differential nonlinearity, harmonic distortion, and signal-to-noise ratio, are calculated in two steps. First, the converter model parameters are extracted from the circuit. Then, the converter performance is computed using only the model parameters since the model captures the converter behavior. Experimental results agree well with SPICE simulations and confirm the validity of the model.
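Two of the specifications listed above, differential and integral nonlinearity (DNL/INL), can be computed from a converter's measured per-code output levels roughly as follows. This is a hedged sketch using a simple endpoint-fit LSB, not the paper's statistical model; the data and names are invented.

```python
def dnl_inl(levels):
    """DNL and INL (in LSBs) from measured output levels, endpoint-fit reference.

    An ideal converter steps by exactly 1 LSB between adjacent codes.
    """
    lsb = (levels[-1] - levels[0]) / (len(levels) - 1)   # endpoint-fit LSB
    dnl = [(levels[i + 1] - levels[i]) / lsb - 1.0       # step-size error per code
           for i in range(len(levels) - 1)]
    inl = [(levels[i] - (levels[0] + i * lsb)) / lsb     # deviation from ideal line
           for i in range(len(levels))]
    return dnl, inl

# 3-bit example with a slightly wide step between codes 3 and 4.
measured = [0.0, 1.0, 2.0, 3.0, 4.3, 5.3, 6.3, 7.0]
dnl, inl = dnl_inl(measured)
print(max(dnl), max(inl))  # both peak at about 0.3 LSB for this data
```

Metrics like harmonic distortion and SNR would require the dynamic, noise-inclusive parts of the model, which this static sketch does not cover.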

Book Chapter•DOI•
09 Jul 1995
TL;DR: A heuristic algorithm is proposed that induces decision graphs from training sets using Rissanen's minimum description length principle to control the tradeoff between accuracy in the training set and complexity of the hypothesis description.
Abstract: We propose a heuristic algorithm that induces decision graphs from training sets using Rissanen's minimum description length principle to control the tradeoff between accuracy in the training set and complexity of the hypothesis description.
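The MDL tradeoff described here can be made concrete with a standard two-part code: total description length = bits to encode the hypothesis plus bits to encode its training-set errors. This is a hedged sketch of the general principle, not the paper's exact encoding; the numbers are invented.

```python
import math

def mdl_cost(model_bits, errors, n_samples):
    """Two-part MDL score: model bits + bits to encode which samples are misclassified."""
    # Encoding the error set: log2(n+1) bits for the error count,
    # plus log2 C(n, e) bits to say which e of the n samples are wrong.
    data_bits = math.log2(n_samples + 1) + math.log2(math.comb(n_samples, errors))
    return model_bits + data_bits

# A larger decision graph (more model bits) wins only if its extra accuracy
# saves at least as many bits in encoding the errors.
small = mdl_cost(model_bits=20, errors=15, n_samples=100)
big = mdl_cost(model_bits=60, errors=2, n_samples=100)
print(small > big)  # here the more accurate hypothesis has the lower total length
```

Minimizing this total length is what "controls the tradeoff between accuracy and complexity": growing the graph is justified exactly when it pays for itself in error bits.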

Journal Article•DOI•
TL;DR: A robust and efficient constraint graph compaction algorithm is presented that produces a compacted layout satisfying the high-level performance constraints and is feasible for practical use within industrial-strength analogue synthesis systems.
Abstract: A tool named SPARCS-A for compaction of integrated circuits with analogue constraints is presented. The approach is structured in two steps. First, a robust and efficient constraint graph compaction algorithm produces a compacted layout quickly, where parasitics are controlled so as to guarantee that the performance constraints are met. Next, the layout produced by the first step is fed into a linear programming (LP) solver which enforces symmetries and performs global interconnect length minimization. The computational cost of the iterative LP solver is modest, because its initial state is the configuration found by the constraint graph algorithm and only symmetry constraints need to be enforced. With considerable computational efficiency this algorithm produces a compacted layout which satisfies the high-level performance constraints and is feasible for practical use within industrial-strength analogue synthesis systems. The use of such a compactor allows one to relax the requirements on parasitic control during placement and routing, thus improving the efficiency of the entire layout design process.

Journal Article•DOI•
TL;DR: A novel framework to solve the state assignment problem arising from the signal transition graph (STG) representation of an asynchronous circuit by minimizing the number of states in the corresponding FSM and by using a critical race-free state assignment technique.
Abstract: We propose a novel framework to solve the state assignment problem arising from the signal transition graph (STG) representation of an asynchronous circuit. We first establish a relation between STG's and finite state machines (FSM's). Then we solve the STG state assignment problem by minimizing the number of states in the corresponding FSM and by using a critical race-free state assignment technique. State signal transitions may be added to the original STG. A lower bound on the number of signals necessary to implement the STG is given. Our technique significantly increases the STG applicability as a specification for asynchronous circuits.

Proceedings Article•DOI•
01 May 1995
TL;DR: A module generator for Digital/Analog Converter (DAC) circuits is presented, using a combination of circuit simulation and DAC design equations to estimate performance; a new constrained optimization method is used to determine design variable values.
Abstract: This paper presents a module generator for Digital/Analog Converter (DAC) circuits. A combination of circuit simulation and DAC design equations is used to estimate performance. A new constrained optimization method is used to determine design variable values. The layout is created using stretching and tiling operations on a set of primitive cells. Close coupling of optimization and layout allows accurate incorporation of layout parasitics in optimization. Prototypes have been demonstrated for an 8-bit, 100-MHz specification, driving a 37.5-ohm video load, and a static 10-bit specification, driving a 4 mA full-scale output current. Both designs use a 5-V supply in a standard 1.2 μm CMOS process.

Proceedings Article•DOI•
02 Oct 1995
TL;DR: A fully implicit algorithm for state minimization of pseudo non-deterministic FSMs (PNDFSMs) is described, which could solve exactly all but one problem of a published benchmark, while an explicit program could complete approximately one half of the examples, and then only with longer run times.
Abstract: This paper addresses state minimization problems of different classes of non-deterministic finite state machines (NDFSMs). We present a theoretical solution to the problem of exact state minimization of general NDFSMs, based on the proposal of generalized compatibles. This gives an algorithmic frame to explore behaviors contained in a general NDFSM. Then we describe a fully implicit algorithm for state minimization of pseudo non-deterministic FSMs (PNDFSMs). The results of our implementation are reported and shown to be superior to a previous explicit formulation. We could solve exactly all but one problem of a published benchmark, while an explicit program could complete approximately one half of the examples, and then only with longer run times.

01 Jan 1995
TL;DR: It is shown that for a large number of applications, it is more efficient to construct the ROBDD by a suitable combination of top-down (decomposition based) and bottom-up (composition based) approaches.
Abstract: ROBDDs have traditionally been built in a bottom-up fashion, through the recursive use of Bryant's apply procedure [8] or the ITE [4] procedure. With these methods, the peak memory utilization is often larger than the final ROBDD size. Though methods like Dynamic Variable Reordering [21] have been proposed to reduce the memory utilization, such schemes have an associated time penalty. In this paper, we show that for a large number of applications, it is more efficient to construct the ROBDD by a suitable combination of top-down (decomposition based) and bottom-up (composition based) approaches. We suitably select decomposition points during the construction of the ROBDD, and follow it by a symbolic composition to get the final ROBDD. We propose two heuristic algorithms for decomposition. One is based on a topological analysis of a given combinational netlist, while the other is purely functional, making no assumptions about the underlying topology of the circuit. We demonstrate the utility of our scheme on the standard benchmark circuits. Our results show that for a given variable ordering, our method usually has significantly better time as well as memory characteristics than existing techniques. Our methods are easily extended to many variants of ROBDDs, and in that sense are powerful in their scope.
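The bottom-up apply procedure that this work starts from can be sketched as a memoized recursion on pairs of nodes, in the spirit of Bryant's procedure. This is a minimal illustration with an invented node encoding; the paper's contribution, mixing this with top-down decomposition, is not shown.

```python
from functools import lru_cache

# A node is ('leaf', 0/1) or (var, low, high); variables are ints ordered 0..n-1.
LEAF0, LEAF1 = ('leaf', 0), ('leaf', 1)

def var_of(n):
    return float('inf') if n[0] == 'leaf' else n[0]

@lru_cache(maxsize=None)
def apply_op(op, f, g):
    """Combine two ROBDDs under a binary Boolean operator, with memoization."""
    if f[0] == 'leaf' and g[0] == 'leaf':
        return LEAF1 if op(f[1], g[1]) else LEAF0
    v = min(var_of(f), var_of(g))                    # expand on the top variable
    f0, f1 = (f[1], f[2]) if var_of(f) == v else (f, f)
    g0, g1 = (g[1], g[2]) if var_of(g) == v else (g, g)
    low, high = apply_op(op, f0, g0), apply_op(op, f1, g1)
    return low if low == high else (v, low, high)    # reduction rule

x0 = (0, LEAF0, LEAF1)
x1 = (1, LEAF0, LEAF1)
conj = apply_op(lambda a, b: a and b, x0, x1)
print(conj)  # the ROBDD for x0 AND x1
```

The memo table is why peak memory can exceed the final ROBDD size, as the abstract notes: intermediate results for every visited node pair are retained during construction.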

Proceedings Article•DOI•
01 May 1995
TL;DR: A technique to update dynamically the bounds used during constraint-driven routing, so that nets requiring an implementation with large parasitics can take advantage of the margin made available to them by other parameters maintained within their own bounds.
Abstract: We propose a technique to update dynamically the bounds used during constraint-driven routing. Moderate bound violations are allowed as long as no constraint violations are induced. Adaptive net scheduling is made possible during routing, so that nets requiring an implementation with large parasitics can take advantage of the margin made available to them by other parameters maintained within their own bounds. The user is provided with a quantitative evaluation of the effectiveness of the tool in enforcing the set of constraints on the given design.

Proceedings Article•DOI•
06 Feb 1995
TL;DR: This work presents a method for graph partitioning that is suitable for parallel implementation and scales well with the number of processors and the problem size, and uses hierarchical partitioning.
Abstract: In order to realize the full potential of speed-up by parallelization, it is essential to partition a problem into small tasks with minimal interactions without making this process itself a bottleneck. We present a method for graph partitioning that is suitable for parallel implementation and scales well with the number of processors and the problem size. Our algorithm uses hierarchical partitioning. It exploits the parallel resources to minimize the dependence on the starting point with multiple starts at the higher levels of the hierarchy. These decrease at the lower levels as it zeroes in on the final partitioning. This is followed by a last-gasp phase that randomly collapses partitions and repartitions to further improve the quality of the final solution. Each individual 2-way partitioning step can be performed by any standard partitioning algorithm. Results are presented on a set of benchmarks representing connectivity graphs of device and circuit simulation problems.
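The hierarchical structure described above, where each 2-way step may be delegated to any standard partitioning routine, can be sketched as recursive bisection with a pluggable 2-way step. This is illustrative only: the placeholder bisection below simply halves a BFS ordering instead of minimizing the cut, and all names are invented.

```python
from collections import deque

def bfs_bisect(graph, nodes):
    """Placeholder 2-way step: halve a BFS ordering of the induced subgraph.

    A real implementation would use a standard min-cut bisection heuristic,
    possibly with the multiple starts the paper applies at higher levels.
    """
    order, seen = [], set()
    for start in nodes:
        if start in seen:
            continue
        q = deque([start]); seen.add(start)
        while q:
            u = q.popleft(); order.append(u)
            for v in graph.get(u, ()):
                if v in nodes and v not in seen:
                    seen.add(v); q.append(v)
    half = len(order) // 2
    return set(order[:half]), set(order[half:])

def recursive_bisection(graph, nodes, k):
    """Split `nodes` into k parts by recursively applying a 2-way step."""
    if k == 1:
        return [set(nodes)]
    a, b = bfs_bisect(graph, set(nodes))
    return (recursive_bisection(graph, a, k // 2) +
            recursive_bisection(graph, b, k - k // 2))

# An 8-node chain graph split into 4 parts.
g = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4],
     4: [3, 5], 5: [4, 6], 6: [5, 7], 7: [6]}
parts = recursive_bisection(g, set(g), 4)
print([sorted(p) for p in parts])
```

The recursion tree is also what makes the scheme parallel-friendly: the two halves at each level are independent subproblems.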

Journal Article•DOI•
01 Jan 1995
TL;DR: A new verification procedure is proposed, where increasingly more complex abstractions of the region automaton are iteratively constructed, and in many cases the procedure can be stopped early, and thus can avoid the state space explosion problem.
Abstract: Verification of real-time systems is a complex problem, requiring construction of a region automaton with a state space growing exponentially in the number of timing constraints and the sizes of constants in those constraints. However, some properties can be verified even when some quantitative timing information is abstracted. We propose a new verification procedure, where increasingly more complex abstractions of the region automaton are iteratively constructed. In many cases, the procedure can be stopped early, and thus can avoid the state space explosion problem.

Book Chapter•DOI•
01 Jan 1995
TL;DR: A typical section of an LUT architecture is shown, where the interconnections to realize the circuit are programmed using scarce wiring resources provided on the chip.
Abstract: Figure 7.1 shows a typical section of an LUT architecture. The interconnections to realize the circuit are programmed using scarce wiring resources provided on the chip. There are three kinds of interconnect resources: 1. long lines: run across the chip; mainly used for clocks and global signals.

Proceedings Article•DOI•
04 Jan 1995
TL;DR: A novel algorithm is proposed that accounts for false paths (over several time frames) in level-sensitive sequential circuits to obtain tighter bounds on the optimum clock schedule than previously obtainable.
Abstract: All existing algorithms for clock schedule optimization are conservative since they use only topological analysis to estimate the delays of paths between latches. This paper proposes a novel algorithm that accounts for false paths (over several time frames) in level-sensitive sequential circuits to obtain tighter bounds on the optimum clock schedule than previously obtainable.