
Showing papers by "Alberto Sangiovanni-Vincentelli published in 1996"


Book Chapter•DOI•
03 Aug 1996
TL;DR: VIS provides the capability to check the combinational equivalence of two designs and provides traditional verification in the form of a cycle-based simulator that uses BDD techniques.
Abstract: Abstraction: Manual abstraction can be performed by giving a file containing the names of variables to abstract. For each variable appearing in the file, a new primary input node is created to drive all the nodes that were previously driven by the variable. Abstracting a net effectively allows it to take any value in its range, at every clock cycle.

Fair CTL model checking and language emptiness check: VIS performs fair CTL model checking under Büchi fairness constraints. In addition, VIS can perform language emptiness checking by model checking the formula EG true. The language of a design is given by sequences over the set of reachable states that do not violate the fairness constraint. The language emptiness check can be used to perform language containment by expressing the set of bad behaviors as another component of the system. If model checking or language emptiness checking fails, VIS reports the failure with a counterexample (i.e., behavior seen in the system that does not satisfy the property, for model checking, or valid behavior seen in the system, for language emptiness). This is called the "debug" trace. Debug traces list a set of states that are on a path to a fair cycle and fail the CTL formula.

Equivalence checking: VIS provides the capability to check the combinational equivalence of two designs. An important usage of combinational equivalence is to provide a sanity check when re-synthesizing portions of a network. VIS also provides the capability to test the sequential equivalence of two designs. Sequential verification is done by building the product finite state machine and checking whether a state where the values of two corresponding outputs differ can be reached from the set of initial states of the product machine. If this happens, a debug trace is provided. Both combinational and sequential verification are implemented using BDD-based routines.

Simulation: VIS also provides traditional design verification in the form of a cycle-based simulator that uses BDD techniques. Since VIS performs both formal verification and simulation using the same data structures, consistency between them is ensured. VIS can generate random input patterns or accept user-specified input patterns. Any subtree of the specified hierarchy may be simulated.
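
The combinational equivalence check asks whether two netlists implement the same Boolean function. A minimal sketch follows; VIS does this symbolically with BDD-based routines, whereas this stand-in (with two invented implementations of a 2-to-1 mux, f and g) simply enumerates all input assignments, which is feasible only for tiny circuits.

```python
from itertools import product

# Toy stand-in for combinational equivalence checking. VIS uses BDDs; here
# we enumerate all 2^n input assignments, feasible only for small circuits.
# f is a 2-to-1 mux; g is a hypothetical re-synthesized version of it.

def f(a, b, s):
    return (a and not s) or (b and s)

def g(a, b, s):
    return (a or s) and (b or not s)

def combinationally_equivalent(f, g, n_inputs):
    """Check f == g on every one of the 2^n input assignments."""
    return all(f(*bits) == g(*bits)
               for bits in product([False, True], repeat=n_inputs))

print(combinationally_equivalent(f, g, 3))  # True: g is a valid re-synthesis
```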

655 citations


Journal Article•DOI•
TL;DR: The algorithm, Test Generation Using Satisfiability (TEGUS), solves a simplified test set characteristic equation using straightforward but powerful greedy heuristics, ordering the variables using depth-first search and selecting a variable from the next unsatisfied clause at each branching point.
Abstract: We present a robust, efficient algorithm for combinational test generation using a reduction to satisfiability (SAT). The algorithm, Test Generation Using Satisfiability (TEGUS), solves a simplified test set characteristic equation using straightforward but powerful greedy heuristics, ordering the variables using depth-first search and selecting a variable from the next unsatisfied clause at each branching point. For difficult faults, the computation of global implications is iterated, which finds more implications than previous approaches and subsumes structural heuristics such as unique sensitization. Without random tests or fault simulation, TEGUS completes on every fault in the ISCAS networks, demonstrating its robustness, and is ten times faster for those networks which have been completed by previous algorithms. Our implementation of TEGUS can be used as a baseline for comparing test generation algorithms; we present comparisons with 45 recently published algorithms. TEGUS combines the advantages of the elegant organization of SAT-based algorithms with the efficiency of structural algorithms.
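
The branching rule described above (select a variable from the next unsatisfied clause, explore depth-first) is easy to see in a toy DPLL-style solver. The sketch below is our reconstruction for illustration only; it omits TEGUS's iterated global implications and all structural information.

```python
# Minimal DPLL-style SAT search illustrating the TEGUS branching rule:
# at each decision point, branch on an unassigned variable taken from the
# next unsatisfied clause. Clauses are lists of nonzero ints; a negative
# literal denotes a complemented variable.

def solve(clauses, assignment=None):
    assignment = dict(assignment or {})  # copy so backtracking is implicit

    def value(lit):
        v = assignment.get(abs(lit))
        return None if v is None else (v if lit > 0 else not v)

    for clause in clauses:               # find the next unsatisfied clause
        vals = [value(l) for l in clause]
        if any(v is True for v in vals):
            continue                     # clause already satisfied
        free = [l for l, v in zip(clause, vals) if v is None]
        if not free:
            return None                  # clause falsified: backtrack
        lit = free[0]                    # branch here, satisfying phase first
        for phase in (lit > 0, lit < 0):
            assignment[abs(lit)] = phase
            result = solve(clauses, assignment)
            if result is not None:
                return result
        return None
    return assignment                    # every clause is satisfied

# (x1 or ~x2) and (x2 or x3) and (~x1 or ~x3)
print(solve([[1, -2], [2, 3], [-1, -3]]))  # e.g. {1: True, 2: True, 3: False}
```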

329 citations


Journal Article•DOI•
TL;DR: A methodology for the automatic synthesis of full-custom IC layout with analog constraints is presented; it guarantees that all performance constraints are met when feasible, or otherwise detects infeasibility as soon as possible, thus providing a robust and efficient design environment.
Abstract: A methodology for the automatic synthesis of full-custom IC layout with analog constraints is presented. The methodology guarantees that all performance constraints are met when feasible; otherwise, infeasibility is detected as soon as possible, thus providing a robust and efficient design environment. In the proposed approach, performance specifications are translated into lower-level bounds on parasitics or geometric parameters, using sensitivity analysis. The bounds can be used by a set of specialized layout tools performing stack generation, placement, routing, and compaction. For each tool, a detailed description is provided of its functionality, of the way constraints are mapped and enforced, and of its impact on the design flow. Examples drawn from industrial applications are reported to illustrate the effectiveness of the approach.
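
The sensitivity-based translation step lends itself to a one-line budget split: to first order the performance degradation is the sum of S_i * C_i over the parasitics, so an equal split of the budget yields the bound C_i <= budget / (n * |S_i|). The sketch below uses invented sensitivities and this equal-split policy; the paper's actual mapping is a more refined allocation than an equal split.

```python
import numpy as np

# Hypothetical first-order sensitivities of a bandwidth spec to three
# parasitic capacitances, in MHz per fF, and an allowed degradation budget.
S = np.array([2.0, 0.5, 1.2])      # |d(perf)/dC_i|, invented numbers
budget = 10.0                      # allowed bandwidth degradation, MHz

bounds = budget / (len(S) * S)     # per-parasitic capacitance bounds, fF
for i, b in enumerate(bounds):
    print(f"C_{i} <= {b:.2f} fF")  # e.g. C_0 <= 1.67 fF
```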

162 citations


Proceedings Article•DOI•
10 Nov 1996
TL;DR: A new representation for Boolean functions called Partitioned ROBDDs is presented, which divides the Boolean space into k partitions and represents the function over each partition as a separate ROBDD.
Abstract: We present a new representation for Boolean functions called Partitioned ROBDDs. In this representation we divide the Boolean space into 'k' partitions and represent a function over each partition as a separate ROBDD. We show that partitioned-ROBDDs are canonical and can be efficiently manipulated. Further they can be exponentially more compact than monolithic ROBDDs and even free BDDs. Moreover, at any given time, only one partition needs to be manipulated which further increases the space efficiency. In addition to showing the utility of partitioned-ROBDDs on special classes of functions, we provide automatic techniques for their construction. We show that for large circuits our techniques are more efficient in space as well as time over monolithic ROBDDs. Using these techniques, some complex industrial circuits could be verified for the first time.
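
A rough picture of the bookkeeping, with a toy canonical form standing in for ROBDDs: the Boolean space is split by window functions (here the two cofactors of one splitting variable), the function restricted to each window is stored separately, and operations such as equivalence checking proceed one partition at a time.

```python
from itertools import product

# Toy illustration of partitioned representations. A real implementation
# stores each partition as an ROBDD; here a partition is a canonical
# frozenset of satisfying minterms. The split variable and the example
# functions are arbitrary choices for illustration.

N = 3        # variables x0..x2
SPLIT = 0    # partition the space on the value of x0

def partitioned(f):
    """Map each window (x0=False / x0=True) to a canonical form of f there."""
    parts = {False: set(), True: set()}
    for bits in product([False, True], repeat=N):
        if f(*bits):
            parts[bits[SPLIT]].add(bits)
    return {w: frozenset(m) for w, m in parts.items()}

f = lambda a, b, c: (a and b) or c
g = lambda a, b, c: (c or a) and (c or b)   # same function, different form

# Only one partition needs to be examined (and held in memory) at a time.
print(all(partitioned(f)[w] == partitioned(g)[w] for w in (False, True)))
```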

127 citations


Proceedings Article•DOI•
10 Nov 1996
TL;DR: A denotational framework (a meta model) within which certain properties of models of computation can be understood and compared is given; it describes concurrent processes as sets of possible behaviors and gives compositions of processes as intersections of their behaviors.

Abstract: We give a denotational framework (a meta model) within which certain properties of models of computation can be understood and compared. It describes concurrent processes as sets of possible behaviors. Compositions of processes are given as intersections of their behaviors. The interaction between processes is through signals, which are collections of events. Each event is a value-tag pair, where the tags can come from a partially ordered or totally ordered set. Timed models are those in which the set of tags is totally ordered. Synchronous events share the same tag, and synchronous signals contain events with the same set of tags. Synchronous systems contain synchronous signals. Strict causality (in timed systems) and continuity (in untimed systems) ensure determinacy under certain technical conditions. The framework is used to compare certain essential features of various models of computation, including Kahn process networks, dataflow, sequential processes, concurrent sequential processes with rendezvous, Petri nets, and discrete-event systems.
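
The vocabulary above translates almost directly into executable definitions. The small encoding below is ours, not the paper's; it shows tags, timedness, and the synchrony test on three invented signals.

```python
# Tagged-signal sketch: an event is a (tag, value) pair, a signal is a set
# of events, and processes would be sets of tuples of signals.

def tags(signal):
    return {t for (t, _v) in signal}

def is_timed(signal):
    """Timed model: tags come from a totally ordered set (numbers here)."""
    return all(isinstance(t, (int, float)) for t in tags(signal))

def synchronous(s1, s2):
    """Synchronous signals contain events with the same set of tags."""
    return tags(s1) == tags(s2)

clock    = {(0, 'tick'), (1, 'tick'), (2, 'tick')}
counter  = {(0, 0), (1, 1), (2, 2)}
sporadic = {(0, 'req'), (2, 'req')}

print(is_timed(clock))               # True: integer tags are totally ordered
print(synchronous(clock, counter))   # True: same tag set {0, 1, 2}
print(synchronous(clock, sporadic))  # False
```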

117 citations


Journal Article•DOI•
TL;DR: A time-domain, non-Monte Carlo method for computer simulation of electrical noise in nonlinear dynamic circuits with arbitrary excitations and arbitrary large-signal waveforms is presented, based on results from the theory of stochastic differential equations.
Abstract: A time-domain, non-Monte Carlo method for computer simulation of electrical noise in nonlinear dynamic circuits with arbitrary excitations and arbitrary large-signal waveforms is presented. This time-domain noise simulation method is based on results from the theory of stochastic differential equations. The noise simulation method is general in the following sense. Any nonlinear dynamic circuit with any kind of excitation, which can be simulated by the transient analysis routine in a circuit simulator, can be simulated by our noise simulator in time-domain to produce the noise variances and covariances of circuit variables as a function of time, provided that noise models for the devices in the circuit are available. Noise correlations between circuit variables at different time points can also be calculated. Previous work on computer simulation of noise in electronic circuits is reviewed with comparisons to our method. Shot, thermal, and flicker noise models for integrated-circuit devices, in the context of our time-domain noise simulation method, are discussed. The implementation of this noise simulation method in a circuit simulator (SPICE) is described. Two examples of noise simulation (a CMOS inverter and a BJT active mixer) are given.
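
For a circuit linearized around its large-signal waveform, the variance computation described above amounts to integrating a Lyapunov-type ODE for the state covariance, dP/dt = A(t)P + PA(t)^T + B(t)B(t)^T, alongside the transient analysis. The sketch below applies this to a single-node RC circuit with resistor thermal noise; the scalar example and forward-Euler integrator are our simplification, not the paper's SPICE implementation.

```python
import numpy as np

def propagate_covariance(A_of_t, B_of_t, P0, t0, t1, steps=1000):
    """Forward-Euler integration of dP/dt = A P + P A^T + B B^T."""
    P = np.array(P0, dtype=float)
    dt = (t1 - t0) / steps
    for k in range(steps):
        t = t0 + k * dt
        A, B = A_of_t(t), B_of_t(t)
        P = P + dt * (A @ P + P @ A.T + B @ B.T)
    return P

# RC node driven by resistor thermal noise (two-sided current PSD 2kT/R):
# dv/dt = -v/(RC) + i_n/C.
R, C, kT = 1e3, 1e-12, 4.14e-21
A = lambda t: np.array([[-1.0 / (R * C)]])
B = lambda t: np.array([[np.sqrt(2 * kT / R) / C]])

P = propagate_covariance(A, B, [[0.0]], 0.0, 10 * R * C)
print(P[0, 0], kT / C)   # noise variance settles near the classical kT/C
```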

106 citations


Proceedings Article•DOI•
01 Jun 1996
TL;DR: This work presents two estimation methods at different levels of abstraction, covering both execution time and code size, for use in the POLIS hardware/software codesign system.
Abstract: The performance estimation of a target system at a higher level of abstraction is very important in hardware/software codesign. We focus on software performance estimation, including both the execution time and the code size. We present two estimation methods at different levels of abstraction for use in the POLIS hardware/software codesign system. The experimental results show that the accuracy of our methods is usually within ±20%.
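
The coarser of the two estimation styles can be pictured as a cost-table lookup: each basic block of the synthesized code is annotated with a cycle cost for the target processor, and costs accumulate along an execution path. The cost numbers and block mix below are invented for illustration; POLIS derives such numbers from the target's characteristics.

```python
# Hypothetical per-operation cycle costs for some target micro-controller.
CYCLES = {'assign': 2, 'branch': 3, 'call': 10, 'return': 8}

def estimate_path(trace):
    """trace: sequence of (operation, count) pairs along one execution path."""
    return sum(CYCLES[op] * n for op, n in trace)

hot_path = [('assign', 12), ('branch', 4), ('call', 1), ('return', 1)]
cycles = estimate_path(hot_path)
print(f"estimated cycles: {cycles}")             # 54
print(f"at 8 MHz: {cycles / 8e6 * 1e6:.2f} us")  # 6.75 us
```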

99 citations


Proceedings Article•DOI•
10 Nov 1996
TL;DR: A methodology for hierarchical statistical circuit characterization which does not rely upon circuit-level Monte Carlo simulation is presented; it permits the statistical characterization of large analog and mixed-signal systems.
Abstract: A methodology for hierarchical statistical circuit characterization which does not rely upon circuit-level Monte Carlo simulation is presented. The methodology uses principal component analysis, response surface methodology, and statistics to directly calculate the statistical distributions of higher-level parameters from the distributions of lower-level parameters. We have used the methodology to characterize a folded cascode operational amplifier and a phase-locked loop. This methodology permits the statistical characterization of large analog and mixed-signal systems, many of which are extremely time-consuming or impossible to characterize using existing methods.
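
The flow can be sketched in a few lines: lower-level statistics are reduced to a small number of principal components, a fitted response surface maps them to a higher-level performance, and the higher-level distribution is then obtained from the cheap surface instead of from circuit-level Monte Carlo. The quadratic surface, the Gaussian components, and the "offset" interpretation below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A fitted quadratic response surface mapping two principal-component
# scores of low-level device parameters to, say, amplifier offset (made up).
def response_surface(z):                 # z: (n, 2) array of PC scores
    return 1.0 + 0.5 * z[:, 0] - 0.2 * z[:, 1] + 0.1 * z[:, 0] * z[:, 1]

# Push the low-level distribution through the surface: evaluating the
# polynomial is cheap, so no circuit-level simulation is needed.
z = rng.standard_normal((100_000, 2))
perf = response_surface(z)
print(f"mean = {perf.mean():.3f}, std = {perf.std():.3f}")
```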

74 citations


Proceedings Article•DOI•
01 Jun 1996
TL;DR: A high performance BDD package that enables manipulation of very large BDDs by using an iterative breadth-first technique directed towards localizing the memory accesses to exploit the memory system hierarchy is presented.
Abstract: The success of binary decision diagram (BDD) based algorithms for verification depends on the availability of a high performance package to manipulate very large BDDs. State-of-the-art BDD packages, based on the conventional depth-first technique, limit the size of the BDDs due to disorderly memory access patterns that result in unacceptably high elapsed time when the BDD size exceeds the main memory capacity. We present a high performance BDD package that enables manipulation of very large BDDs by using an iterative breadth-first technique directed towards localizing the memory accesses to exploit the memory system hierarchy. The new memory-oriented performance features of this package are: 1) an architecture independent customized memory management scheme, 2) the ability to issue multiple independent BDD operations (superscalarity), and 3) the ability to perform multiple BDD operations even when the operands of some BDD operations are the result of some other operations yet to be completed (pipelining). A comprehensive set of BDD manipulation algorithms is implemented using the above techniques. Unlike the breadth-first algorithms presented in the literature, the new package is faster than the state-of-the-art BDD package by a factor of up to 15, even for BDD sizes that fit within the main memory. For BDD sizes that do not fit within the main memory, a performance improvement of up to a factor of 100 can be achieved.

69 citations


01 Jan 1996
TL;DR: This dissertation addresses the state reachability problem in FSMs (determining whether one set of states can reach another) and develops an algorithm to approximate a Boolean function by another function having a smaller BDD.
Abstract: This dissertation addresses three separate, but related, problems concerning the formal analysis of synchronous circuits and their associated finite state machines. The first problem is the logical analysis of synchronous circuits containing combinational cycles. The presence of such cycles can cause unstable behavior at the outputs of a circuit, but this is not necessarily always the case. This work determines when cycles are harmless, and when they are not. In particular, three classes of circuits are defined that trade off the time needed to decide membership in the class against the permissiveness of the class. For each class, the complexity of the corresponding decision problem is proven and a procedure to decide the class is given. In addition, if a circuit is determined to be within a given class, then a new circuit can be generated with the same input/output behavior, but without combinational cycles. This is an important utility, as many CAD tools do not accept circuits with combinational cycles. The second problem that is addressed is the CTL model checking of interacting FSMs. A state equivalence is presented that is defined with respect to a given CTL formula. Since it does not attempt to preserve all CTL formulas, as bisimulation does, we can expect to compute coarser equivalences. This equivalence is used to manage the size of the transition relations encountered when model checking a system of interacting FSMs. Specifically, the equivalence is used to reduce the size of each component FSM, so that their product will be smaller. We show how to apply the method whether an explicit representation is used for the FSMs or BDDs are used. Also, we show that in some cases this approach can detect if a formula passes or fails without composing all the component machines. The method is exact and completely automatic, and handles full CTL. These two problems are PSPACE-hard (in the number of flip-flops) to decide; approximate methods may be useful to find a solution in affordable CPU time. To demonstrate the use of approximate methods in logical analysis, we address the state reachability problem in FSMs, which is the problem of determining if one set of states can reach another. State reachability has broad applications in formal verification, synthesis, and testing of synchronous circuits. This work attacks the problem by making a series of under- and over-approximations to the state transition graph, using the over-approximations to guide the search in the under-approximations for a potential path from one state set to the other. Central to this method is an algorithm to approximate a Boolean function by another function having a smaller BDD.
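
The guidance idea behind the third contribution can be shown on an explicit graph: an over-approximation of the states that may reach the target is cheap to compute and prunes the exact forward search. The six-state graph and the over-approximate set below are invented; the dissertation works symbolically with BDDs.

```python
from collections import deque

EDGES = {0: [1, 2], 1: [3], 2: [4], 3: [5], 4: [], 5: []}
OVER = {0, 1, 3, 5}   # over-approximation of the states that may reach 5

def can_reach(src, dst):
    seen, frontier = {src}, deque([src])
    while frontier:
        s = frontier.popleft()
        if s == dst:
            return True
        for t in EDGES[s]:
            if t in OVER and t not in seen:   # prune states outside OVER
                seen.add(t)
                frontier.append(t)
    return False

print(can_reach(0, 5))   # True, found without ever exploring states 2 or 4
```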

58 citations


Proceedings Article•DOI•
01 Jun 1996
TL;DR: This paper presents a formal verification methodology for embedded systems, and demonstrates that abstraction and the separation of timing and functionality are crucial for the successful use of formal verification in the POLIS framework.
Abstract: Both timing and functional properties are essential to characterize the correct behavior of an embedded system. Verification is in general performed either by simulation or by breadboarding. Given the safety requirements of such systems, a formal proof that the properties are indeed satisfied is highly desirable. In this paper, we present a formal verification methodology for embedded systems. The formal model for the behavior of the system used in POLIS is a network of Codesign Finite State Machines (CFSMs). This model is translated into automata and verified using automata-theoretic techniques. An industrial embedded system is verified using the methodology. We demonstrate that abstractions and the separation of timing and functionality are crucial for the successful use of formal verification for this example. We also show that in POLIS these abstractions and the separation of timing and functionality can be achieved by simple syntactic modification of the representation of the system.

Proceedings Article•DOI•
10 Nov 1996
TL;DR: It is found that it is possible to predict signal interaction by signal functionality alone, leading to a significant amount of robust switching isolation, independent of parasitics introduced by layout or semiconductor process.
Abstract: Maintaining signal integrity in digital systems is becoming increasingly difficult due to the rising number of analog effects seen in deep submicron design. One such effect, the signal crosstalk problem, is now a serious design concern. Signals which couple electrically may not affect system behavior because of timing or function in the digital domain. If we can isolate observable coupling effects, then we can constrain layout synthesis to eliminate them. In this paper, we find that it is possible to predict signal interaction by signal functionality alone, leading to a significant amount of robust switching isolation, independent of parasitics introduced by layout or semiconductor process. We introduce techniques to predict signal interaction using functional sensitivity analysis. In general sequential networks, we find that significant switching isolation can be extracted with efficient sensitivity analysis algorithms, thus giving promise to the goal of synthesizing layout free from crosstalk effects.
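
At its core, the analysis asks whether two electrically coupled signals can ever switch at the same time, given their logic functions; if not, the coupling is unobservable and layout need not separate the wires. A brute-force version of that test over two invented signals sharing inputs a and b:

```python
from itertools import product

sig1 = lambda a, b: a and b
sig2 = lambda a, b: a and not b

def can_switch_together(f, g, n):
    """Does some input transition toggle f and g simultaneously?"""
    for old in product([False, True], repeat=n):
        for new in product([False, True], repeat=n):
            if f(*old) != f(*new) and g(*old) != g(*new):
                return True
    return False

# True here: the transition (1,1)->(1,0) toggles both signals, so these
# two wires may interact and layout must keep them apart (or prove timing).
print(can_switch_together(sig1, sig2, 2))
```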

Proceedings Article•DOI•
05 May 1996
TL;DR: A methodology is presented for generating compact models of substrate noise injection in complex logic networks and preliminary results demonstrate the validity of the assumptions and the accuracy of the approach on a set of standard benchmark circuits.
Abstract: A methodology is presented for generating compact models of substrate noise injection in complex logic networks. For a given gate library, the injection patterns associated with a gate and an input transition scheme are accurately evaluated using device-level simulation. Assuming spatial independence of all noise generating devices, the cumulative switching noise resulting from all injection patterns is efficiently computed using a gate-level event-driven simulator. The resulting injected signal is then sampled and translated into an energy spectrum which accounts for fundamental frequencies as well as glitch energy. Preliminary results demonstrate the validity of the assumptions and the accuracy of the approach on a set of standard benchmark circuits.
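
The last stage of the flow, sampling the cumulative injected waveform and translating it into an energy spectrum, is a direct FFT computation. The synthetic waveform below (a 50 MHz switching pattern plus one glitch) is our stand-in for the output of the gate-level event-driven simulator.

```python
import numpy as np

fs = 10e9                                   # assumed 10 GS/s sampling rate
t = np.arange(4096) / fs
wave = 0.1 * np.sign(np.sin(2 * np.pi * 50e6 * t))   # gate switching noise
wave[1000:1004] += 0.3                               # one injected glitch

spectrum = np.abs(np.fft.rfft(wave)) ** 2            # energy spectrum
freqs = np.fft.rfftfreq(len(wave), 1 / fs)
peak = freqs[np.argmax(spectrum[1:]) + 1]            # skip the DC bin
print(f"dominant injected tone: {peak / 1e6:.1f} MHz")   # near 50 MHz
```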

Proceedings Article•DOI•
07 Oct 1996
TL;DR: Algorithms for manipulating very large Binary Decision Diagrams (BDDs) on a network of workstations (NOW) are presented, with experimental results that demonstrate the capability of the approach and point towards its potential impact.
Abstract: The success of all binary decision diagram (BDD) based synthesis and verification algorithms depends on the ability to efficiently manipulate very large BDDs. We present algorithms for the manipulation of very large Binary Decision Diagrams (BDDs) on a network of workstations (NOW). A NOW provides a collection of main memories and disks which can be used effectively to create and manipulate very large BDDs. To make efficient use of the memory resources of a NOW, while completing execution in a reasonable amount of wall clock time, an extension of the breadth-first technique is used to manipulate BDDs. BDDs are partitioned such that the nodes for a set of consecutive variables are assigned to the same workstation. We present experimental results to demonstrate the capability of such an approach and point towards the potential impact of manipulating very large BDDs.

Journal Article•DOI•
TL;DR: In this article, an application of the methodology and of the various software tools embedded in the POLIS co-design system is presented in the realm of automotive electronics: a shock absorber controller, whose specification comes from an actual product.
Abstract: We present an application of the methodology and of the various software tools embedded in the POLIS co-design system. The application is in the realm of automotive electronics: a shock absorber controller, whose specification comes from an actual product. All aspects of the design process are closely examined, including high level language specification and automatic hardware and software synthesis. We analyze different software implementation styles, compare the results, and outline the future developments of our work.

Proceedings Article•DOI•
01 Jun 1996
TL;DR: A design methodology should, on the one hand, put to good use all techniques and methods developed thus far for verification, from formal verification to simulation, from visualization to timing analysis; on the other, it should have specific conceptual devices for dealing with correctness in the face of complexity.
Abstract: The complexity of electronic systems is rapidly reaching a point where it will be impossible to verify the correctness of a design without introducing a verification-aware discipline in the design process. Even though computers and design tools have made important advances, the use of these tools in the commonly practised design methodology is not enough to address the design correctness problem, since verification is almost always an afterthought in the mind of the designer. A design methodology should, on the one hand, put to good use all techniques and methods developed thus far for verification, from formal verification to simulation, from visualization to timing analysis; on the other, it should have specific conceptual devices for dealing with correctness in the face of complexity. This paper is organized as follows: we review the available verification tools. Formalization is investigated in several contexts. Abstraction is presented with a set of examples. Decomposition is introduced. Finally, a design methodology that includes all these aspects is proposed.

Proceedings Article•DOI•
05 May 1996
TL;DR: In this article, the authors present a discussion about the definition of phase noise for general oscillation waveforms; a numerical method for transistor-level simulation and characterization of phase noises in open-loop (free running) oscillators; application of the phase noise simulation method to harmonic, relaxation and ring oscillators and a technical discussion of the characterization results.
Abstract: This paper presents a discussion about the definition of phase noise for general oscillation waveforms; a numerical method for transistor-level simulation and characterization of phase noise in open-loop (free running) oscillators; application of the phase noise simulation method to harmonic, relaxation and ring oscillators and a technical discussion of the phase noise characterization results; comparisons of the results of phase noise simulations for harmonic, relaxation and ring oscillators with the results of related work in the literature.

Book Chapter•DOI•
06 Nov 1996
TL;DR: It is shown that for a large number of applications, it is more efficient to construct the ROBDD by a suitable combination of top-down and bottom-up approaches than by a purely bottom-up approach.
Abstract: In this paper, we address the problem of memory-efficient construction of ROBDDs for a given Boolean network. We show that for a large number of applications, it is more efficient to construct the ROBDD by a suitable combination of top-down and bottom-up approaches than a purely bottom-up approach. We first build a decomposed ROBDD of the target function and then follow it by a symbolic composition to get the final ROBDD. We propose two heuristic algorithms for decomposition. One is based on a topological analysis of the given netlist, while the other is purely functional, making no assumptions about the underlying circuit topology. We demonstrate the utility of our methods on standard benchmark circuits as well as some hard industrial circuits. Our results show that this method requires significantly less memory than the conventional bottom-up construction. In many cases, we are able to build the ROBDDs of outputs for which the conventional method fails. In addition, in most cases this memory reduction is accompanied by a significant speed up in the ROBDD construction process.
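
Structurally, the method builds small functions bottom-up at chosen decomposition points, builds the output top-down over those points treated as free variables, and then composes the two. In this sketch, plain Python closures stand in for the symbolic ROBDD operations; the functions chosen are arbitrary.

```python
# Bottom-up: functions at the chosen decomposition points.
d1 = lambda a, b: a ^ b
d2 = lambda c, d: c & d

# Top-down: the target expressed over the decomposition points, which are
# treated as free variables at this stage, keeping this piece small.
f_decomposed = lambda p1, p2: p1 | p2

# Composition step (the symbolic 'compose' operation in the real flow).
f = lambda a, b, c, d: f_decomposed(d1(a, b), d2(c, d))

print(f(1, 0, 1, 1))   # (1 xor 0) or (1 and 1) = 1
```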

Proceedings Article•DOI•
10 Nov 1996
TL;DR: This paper presents the complete design flow for a video driver system, where a jitter constraint is imposed at the system level and then propagated hierarchically to the circuit blocks and layout, using behavioral modeling and simulation.
Abstract: To accelerate the design cycle for analog and mixed-signal systems, we have proposed a top-down, constraint-driven design methodology. The key idea of the proposed methodology is hierarchically propagating constraints from performance specifications to layout. Consequently, it is essential to provide the tools and techniques that enable efficient constraint propagation. To illustrate the applicability of the proposed methodology to the design of larger systems, we present in this paper the complete design flow for a video driver system. Critical advantages of the methodology illustrated with this design example include avoiding costly low-level re-designs and getting working silicon parts from the first run. Following our approach, a jitter constraint is imposed at the system level and then propagated hierarchically to the circuit blocks and layout, using behavioral modeling and simulation. Experimental results are presented from working fabricated parts.

Journal Article•DOI•
TL;DR: This work proposes a local optimization algorithm that generates compact decision graphs by performing local changes in an existing graph until a minimum is reached and uses Rissanen’s minimum description length principle to control the tradeoff between accuracy in the training set and complexity of the description.
Abstract: We propose an algorithm for the inference of decision graphs from a set of labeled instances. In particular, we propose to infer decision graphs where the variables can only be tested in accordance with a given order and no redundant nodes exist. This type of graphs, reduced ordered decision graphs, can be used as canonical representations of Boolean functions and can be manipulated using algorithms developed for that purpose. This work proposes a local optimization algorithm that generates compact decision graphs by performing local changes in an existing graph until a minimum is reached. The algorithm uses Rissanen's minimum description length principle to control the tradeoff between accuracy in the training set and complexity of the description. Techniques for the selection of the initial decision graph and for the selection of an appropriate ordering of the variables are also presented. Experimental results obtained using this algorithm in two sets of examples are presented and analyzed.
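
The MDL trade-off steering the local search can be made concrete with a toy cost: bits to describe the graph plus bits to describe the training-set errors it makes, a local move being accepted only if the total drops. The encoding lengths below are simplified assumptions, not Rissanen's exact formulation.

```python
import math

def mdl_cost(n_nodes, n_errors, n_examples):
    model_bits = n_nodes * 10        # crude fixed per-node code length
    if n_errors == 0:
        data_bits = 0.0
    else:
        p = n_errors / n_examples    # entropy bound on coding the error set
        data_bits = n_examples * (-p * math.log2(p)
                                  - (1 - p) * math.log2(1 - p))
    return model_bits + data_bits

# A 20-node graph with 5/1000 training errors beats a 60-node perfect one:
print(mdl_cost(20, 5, 1000))   # ~245 bits
print(mdl_cost(60, 0, 1000))   # 600 bits
```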

Journal Article•DOI•
TL;DR: A module generator (DSYN) creates optimized digital/analog converter (DAC) layouts given a set of specifications including performance constraints, a description of the implementation technology, and a set of design parameters.
Abstract: A module generator (DSYN) creates optimized digital/analog converter (DAC) layouts given a set of specifications including performance constraints, a description of the implementation technology, and a set of design parameters. The generation process consists of a synthesis step followed by a layout step. During synthesis, a new constrained optimization method is coupled with a combination of circuit simulation and DAC design equations. The layout step uses stretching and tiling operations on a set of primitive cells. Prototypes have been demonstrated for an 8-b, 100-MS/s specification, driving a 37.5-ohm video load, and a static 10-b specification, driving a 4 mA full-scale output current. Both designs use a 5-V supply in a 1.2 µm CMOS process.

Journal Article•DOI•
TL;DR: This work derives analytical expressions for valid clocking intervals in terms of topological, 2-vector, and single vector delays, both the longest and the shortest, and shows that these intervals subsume Cotten's lower bound on valid clock period.
Abstract: It is known that wave-pipelined circuits offer high performance, because their maximum clock frequencies are limited only by the path delay differences of the circuits, as opposed to the longest path delays. For proper operation, precision in clock frequency is essential. Using a new representation, Timed Boolean Functions, we derive analytical expressions for valid clocking intervals in terms of topological, 2-vector, and single vector delays, both the longest and the shortest. These intervals take into account both circuit functionality and timing characteristics, thus eliminating the pessimism caused by long and short false paths, and include effects of circuit parameters such as delay variations, clock skews, and setup and hold times of flip-flops. In addition, we show that these intervals subsume Cotten's lower bound on the valid clock period. Further, we study the problem of computing all exact valid clocking intervals and its computational complexity by demonstrating discontinuity and nonmonotonicity of the harmonic number H(τ) (the number of valid simultaneous data waves allowed) as a function of the clock period τ. Finally, we propose algorithms to compute the exact valid intervals for a given set of harmonic numbers and demonstrate performance enhancement of balanced circuits from the ISCAS benchmarks with gate delay variations.
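
For orientation, the classical topological-delay constraints that these exact intervals refine can be written as follows (our simplified form, for a data wave launched at time 0 and sampled N clock cycles later; D_max and D_min are the longest and shortest path delays):

```latex
% A wave sampled N cycles after launch must have settled before setup,
% and must not be overrun by the next wave before hold (simplified form):
N\tau \;\ge\; D_{\max} + t_{\mathrm{setup}} + t_{\mathrm{skew}},
\qquad
N\tau \;\le\; \tau + D_{\min} - t_{\mathrm{hold}} - t_{\mathrm{skew}}.
```

Eliminating the wave number N gives Cotten's lower bound, τ ≥ (D_max - D_min) + t_setup + t_hold + 2 t_skew, which the paper shows its functionally derived intervals subsume while discarding the pessimism of false long and short paths.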

Journal Article•DOI•
01 Nov 1996
TL;DR: This work offers both low-level and high-level models for asynchronous circuits and the environment where they operate, together with strong equivalence results between the properties at the two levels, and outlines a design methodology based on these models.
Abstract: Characterization of the behavior of an asynchronous system depending on the delay of components and wires is a major task facing designers. Some of these delays are outside the designer's control, and in practice may have to be assumed unbounded. The existing literature offers a number of analysis and specification models, but lacks a unified framework to verify directly if the circuit specification admits a correct implementation under these hypotheses. Our aim is to fill exactly this gap, offering both low-level (analysis-oriented) and high-level (specification-oriented) models for asynchronous circuits and the environment where they operate, together with strong equivalence results between the properties at the two levels. One interesting side result is the precise characterization of classical static and dynamic hazards in terms of our model. Consequently the designer can check the specification and directly decide if the behavior of any implementation will depend, e.g., on the delays of the signals described by such specification. We also outline a design methodology based on our models, pointing out how they can be used to select appropriate high and low-level models depending on the desired characteristics of the system.

Proceedings Article•DOI•
10 Nov 1996
TL;DR: In this article, a three-dimensional Green's Function based substrate representation, in combination with the use of the Fast Fourier Transform, significantly speeds up the computation of sensitivities with respect to all parameters associated with a given architecture and is used in a number of physical optimization tools, such as placement and trend analysis for the estimation of the impact of technology migration and/or layout re-design.
Abstract: A number of methods are presented for highly efficient calculation of substrate current transport. A three-dimensional Green's Function based substrate representation, in combination with the use of the Fast Fourier Transform, significantly speeds up the computation of sensitivities with respect to all parameters associated with a given architecture. Substrate sensitivity analysis is used in a number of physical optimization tools, such as placement and trend analysis for the estimation of the impact of technology migration and/or layout re-design.
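
The role of the FFT can be seen from translation invariance: for a layered substrate, the Green's function between surface contacts depends only on their displacement, so evaluating the potentials induced by a pattern of injected currents is a two-dimensional (circular) convolution, which FFTs evaluate in near-linear rather than quadratic time in the number of grid points. The 1/(1+r) kernel below is a made-up stand-in for a real Green's function.

```python
import numpy as np

n = 64
x = np.arange(n)
dx = np.minimum(x, n - x)                   # periodic displacement, assumed
r = np.hypot(dx[:, None], dx[None, :])
kernel = 1.0 / (1.0 + r)                    # toy Green's function G(dx, dy)

currents = np.zeros((n, n))
currents[10, 10] = 1.0                      # injecting contact
currents[40, 50] = -0.5                     # collecting contact

# Potential everywhere = circular convolution of kernel with currents.
potential = np.real(np.fft.ifft2(np.fft.fft2(kernel) * np.fft.fft2(currents)))
print(potential[10, 10], potential[40, 50])
```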

Proceedings Article•DOI•
01 Jun 1996
TL;DR: A new formalism for the Engineering Change (EC) problem in a finite state machine (FSM) setting is proposed, and necessary and sufficient conditions for the existence of a solution to the problem are derived.
Abstract: We propose a new formalism for the Engineering Change (EC) problem in a finite state machine (FSM) setting. Given an implementation that violates the specification, the problem is to alter the behavior of the implementation so that it meets the specification. The implementation can be a pseudo-nondeterministic FSM while the specification may be a nondeterministic FSM. The EC problem is cast as the existence of an "appropriate" simulation relation from the implementation into the specification. We derive the necessary and sufficient conditions for the existence of a solution to the problem. We synthesize all possible solutions, if the EC is feasible. Our algorithm works in space which is linear, and time which is quadratic, in the product of the sizes of implementation and specification. Previous formulations of the problem which admit nondeterministic specifications, although more general, lead to an algorithm which is exponential. We have implemented our procedure using Reduced Ordered Binary Decision Diagrams.
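
The existence test rests on a fixpoint computation of a simulation relation: start from all implementation/specification state pairs and repeatedly delete pairs whose outputs disagree or whose successors cannot be matched. The sketch below handles only deterministic machines, a simplification of the paper's pseudo-nondeterministic/nondeterministic setting, and the two tiny machines are invented.

```python
# state -> {input: (next_state, output)}
IMPL = {'i0': {0: ('i0', 'a'), 1: ('i1', 'b')},
        'i1': {0: ('i0', 'a'), 1: ('i1', 'b')}}
SPEC = {'s0': {0: ('s0', 'a'), 1: ('s1', 'b')},
        's1': {0: ('s0', 'a'), 1: ('s1', 'b')}}

rel = {(p, q) for p in IMPL for q in SPEC}   # start from all pairs
changed = True
while changed:
    changed = False
    for (p, q) in list(rel):
        for inp in (0, 1):
            p2, out_p = IMPL[p][inp]
            q2, out_q = SPEC[q][inp]
            if out_p != out_q or (p2, q2) not in rel:
                rel.discard((p, q))          # pair cannot be simulated
                changed = True
                break

print(('i0', 's0') in rel)   # True: the initial pair survives, EC feasible
```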

Proceedings Article•DOI•
19 Jun 1996
TL;DR: A flexible board-level rapid-prototyping environment for embedded control applications is described, based on an Aptix board populated by Xilinx FPGA devices, a 68HC11 emulator, and Aptix programmable interconnect devices.
Abstract: This paper describes a flexible board-level rapid-prototyping environment for embedded control applications. The environment is based on an Aptix board populated by Xilinx FPGA devices, a 68HC11 emulator, and Aptix programmable interconnect devices. Given a design consisting of logic and of software running on a micro-controller that together implement a set of tasks, the prototype is obtained by programming the FPGA devices, the micro-controller emulator, and the Aptix devices. This environment, being based on programmable devices, offers the flexibility to perform engineering changes, the performance needed to validate complex systems, and the hardware set-up for field tests. The key point in our approach is the use of results of our previous research on software and hardware synthesis, as well as of some commercial tools, to provide the designer with fast programming data from a high-level description of the algorithms to be implemented. We demonstrate the effectiveness of the approach by showing a close-to-real-life example from the automotive world.

Proceedings Article•DOI•
10 Nov 1996
TL;DR: A procedure for testing all multiple stuck-faults in a logic circuit is proposed: pairs of input vectors detect target single stuck-faults independent of the occurrence of other faults, and a branch and bound procedure completes the test set; the technique is complete and applies to all circuits.
Abstract: We propose a novel procedure for testing all multiple stuck-faults in a logic circuit using two complementary algorithms. The first algorithm finds pairs of input vectors to detect the occurrence of target single stuck-faults independent of the occurrence of other faults. The second uses a sophisticated branch and bound procedure to complete the test set generation on the faults undetected by the first algorithm. The technique is complete and applies to all circuits. Experimental results presented in this paper demonstrate that compact and complete test sets can be quickly generated for standard benchmark circuits.


Proceedings Article•DOI•
03 Jan 1996
TL;DR: Four heuristic algorithms for finding good composition orders are detailed and compared on a set of standard benchmark circuits, offering a matrix of time-memory tradeoff points.
Abstract: Reduced Ordered Binary Decision Diagrams (ROBDDs) have traditionally been built in a bottom-up fashion. In this scheme, the intermediate peak memory utilization is often larger than the final ROBDD size, limiting the complexity of the circuits which can be processed using ROBDDs. Recently we showed that for a large number of applications, the peak memory requirement can be substantially reduced by a suitable combination of bottom-up (decomposition-based) and top-down (composition-based) approaches to building ROBDDs. In this paper, we focus on the composition process. We detail four heuristic algorithms for finding good composition orders, and compare their utility on a set of standard benchmark circuits. Our schemes offer a matrix of time-memory tradeoff points.

Book Chapter•DOI•
01 Jan 1996
TL;DR: Home personal computers will not be as pervasive as they are today because more dedicated electronic components will be more appealing and cost-effective for the final user.
Abstract: The electronics industry has been growing at an impressive rate for the past few years. A reason for its growth is the use of electronics components in almost all traditional systems such as automobiles, home appliances, and personal communication devices. In this framework, objects assume an electronic dimension that makes them more effective, more reliable, and less expensive. Home personal computers will not be as pervasive as they are today because more dedicated electronic components will be more appealing and cost-effective for the final user.