
Showing papers in "IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems in 1993"


Journal Article•DOI•
Farid N. Najm1•
TL;DR: It is shown how the density values at internal nodes can be used to study circuit reliability by estimating the average power and ground currents; the average power dissipation; the susceptibility to electromigration failures; and the extent of hot-electron degradation.
Abstract: Noting that a common element in most causes of runtime failure is the extent of circuit activity, i.e. the rate at which its nodes are switching, the author proposes a measure of activity, called the transition density, which may be defined as the average switching rate at a circuit node. An algorithm is also presented to propagate density values from the primary inputs to internal and output nodes. To illustrate the practical significance of this work, it is shown how the density values at internal nodes can be used to study circuit reliability by estimating the average power and ground currents; the average power dissipation; the susceptibility to electromigration failures; and the extent of hot-electron degradation. The density propagation algorithm has been implemented in a prototype density simulator which is used to assess the validity and feasibility of the approach experimentally. The results show that the approach is very efficient, and makes possible the analysis of VLSI circuits.
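The propagation step can be sketched for single gates. Under Najm's formulation, the output density of a gate is D(y) = Σᵢ P(∂y/∂xᵢ)·D(xᵢ), where ∂y/∂xᵢ is the Boolean difference of the output with respect to input i. A minimal Python sketch for 2-input gates, assuming statistically independent inputs (the function name is mine, not from the paper):

```python
def propagate_density(gate, p, d):
    """Propagate transition densities through a 2-input gate using
    D(y) = sum_i P(dy/dx_i) * D(x_i), where dy/dx_i is the Boolean
    difference of the output with respect to input i.
    p = (P(a), P(b)) signal probabilities, d = (D(a), D(b)) densities."""
    (pa, pb), (da, db) = p, d
    if gate == "and":      # dy/da = b, so P(dy/da) = P(b); symmetrically for b
        return pb * da + pa * db
    if gate == "or":       # dy/da = not b, so P(dy/da) = 1 - P(b)
        return (1 - pb) * da + (1 - pa) * db
    raise ValueError(gate)
```

For an AND gate whose inputs each have signal probability 0.5 and density 2 transitions per unit time, the output density is 0.5·2 + 0.5·2 = 2.0.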

430 citations


Journal Article•DOI•
TL;DR: Experimental results obtained by adding the proposed heuristics to a simple PODEM procedure and applying it to the ISCAS-85 and fully-scanned ISCAS-89 benchmark circuits are presented to substantiate the effectiveness of the proposed heuristics.
Abstract: Heuristics to aid the derivation of small test sets that detect single stuck-at faults in combinational logic circuits are proposed. The heuristics can be added to existing test pattern generators without compromising fault coverage. Experimental results obtained by adding the proposed heuristics to a simple PODEM procedure and applying it to the ISCAS-85 and fully-scanned ISCAS-89 benchmark circuits are presented to substantiate the effectiveness of the proposed heuristics.

332 citations


Journal Article•DOI•
TL;DR: An efficient convex optimization algorithm, guaranteed to find the exact solution to the convex programming problem, is used here; existing methods for computing the circuit delay as an Elmore time constant are also improved upon to achieve higher accuracy.
Abstract: A general sequential circuit consists of a number of combinational stages that lie between latches. For the circuit to meet a given clocking specification, it is necessary for each combinational stage to satisfy a certain delay requirement. Roughly speaking, increasing the sizes of some transistors in a stage reduces the delay, with the penalty of increased area. The problem of transistor sizing is to minimize the area of a combinational stage, subject to its delay being less than a given specification. Although this problem has been recognized as a convex programming problem, most existing approaches do not take full advantage of this fact, and often give nonoptimal results. An efficient convex optimization algorithm has been used here. This algorithm is guaranteed to find the exact solution to the convex programming problem. We have also improved upon existing methods for computing the circuit delay as an Elmore time constant, to achieve higher accuracy. CMOS circuit examples, including a combinational circuit with 832 transistors, are presented to demonstrate the efficacy of the new algorithm.
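The delay model used above can be illustrated on an RC chain: the Elmore time constant weights each node capacitance by the total resistance on the path between it and the source. A minimal sketch (function name mine); in transistor sizing, widening a device lowers its channel resistance but raises the capacitance it presents to the previous stage, which is the trade-off the convex program balances:

```python
def elmore_delay(rs, cs):
    """Elmore delay at the end of an RC chain: each node's capacitance
    is weighted by the cumulative resistance from the source to that node.
    rs[i] is the resistance into node i, cs[i] the capacitance at node i."""
    delay, r_path = 0.0, 0.0
    for r, c in zip(rs, cs):
        r_path += r          # total resistance from source to this node
        delay += r_path * c  # this node's contribution to the time constant
    return delay
```

For a two-section chain with unit resistances and capacitances, the delay is 1·1 + (1+1)·1 = 3.0.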

301 citations


Journal Article•DOI•
Ren-Song Tsay1•
TL;DR: An exact zero-skew clock routing algorithm using the Elmore delay model is presented. The approach is ideal for hierarchical construction of large systems: subsystems can be built in parallel and independently, then interconnected with exact zero skew.
Abstract: An exact zero-skew clock routing algorithm using the Elmore delay model is presented. The results have been verified with accurate waveform simulation. The authors first review a linear time delay computation method. A recursive bottom-up algorithm is then proposed for interconnecting two zero-skewed subtrees to a new tree with zero skew. The algorithm can be applied to single-staged clock trees, multistaged clock trees, and multi-chip system clock trees. The approach is ideal for hierarchical methods of constructing large systems. All subsystems can be constructed in parallel and independently, then interconnected with exact zero skew. Extensions to the routing of optimum nonzero-skew clock trees (for cycle stealing) and multiphased clock trees are also discussed.
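The merging step of the recursive bottom-up algorithm admits a closed form under the Elmore model. For two subtrees with root delays t1, t2 and load capacitances C1, C2, joined by a wire of length l with per-unit resistance r and capacitance c, the zero-skew tapping point sits at fraction x of the wire from subtree 1. A sketch under those assumptions (function names mine):

```python
def zero_skew_tap(t1, C1, t2, C2, r, c, l):
    """Tapping-point position x (fraction of wire length, measured from
    subtree 1) that equalizes the Elmore delays to both subtree roots."""
    return (t2 - t1 + r * l * (c * l / 2 + C2)) / (r * l * (c * l + C1 + C2))

def delay_through(t, C, r_seg, c_seg):
    """Delay from the tapping point to a subtree root through a wire
    segment of resistance r_seg and capacitance c_seg (pi model)."""
    return t + r_seg * (c_seg / 2 + C)
```

Solving t1 + x·l·r·(x·l·c/2 + C1) = t2 + (1−x)·l·r·((1−x)·l·c/2 + C2) for x gives the formula above; plugging the result back in, the two branch delays match exactly.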

219 citations


Journal Article•DOI•
Jacob Savir1, S. Patil1•
TL;DR: In this paper, several issues of skewed-load transition test are investigated, such as transition test calculus, detection probability of transition faults, transition fault coverage, and enhancement of transition test quality.
Abstract: Skewed-load transition test is a form of scan-based transition test where the second vector of the delay test pair is a one bit shift over the first vector in the pair. This situation occurs when testing the combinational logic residing between scan chains. In the skewed-load test protocol, in order not to disturb the logic initialized by the first vector of the delay test pair, the second vector of the pair (the one that launches the transition) is required to be the next (i.e., one-bit-shift) pattern in the scan chain. Although a skewed-load transition test is attractive from a timing point of view, there are various problems that may arise if this strategy is used. Here, several issues of skewed-load transition test are investigated. Issues such as transition test calculus, detection probability of transition faults, transition fault coverage, and enhancement of transition test quality are thoroughly studied.

207 citations


Journal Article•DOI•
TL;DR: Berkeley reliability tools (BERT) simulates the circuit degradation (drift) due to hot-electron degradation in MOSFETs and bipolar transistors and predicts circuit failure rates due to oxide breakdown and electromigration in CMOS, bipolar, and BiCMOS circuits.
Abstract: Berkeley reliability tools (BERT) simulates the circuit degradation (drift) due to hot-electron degradation in MOSFETs and bipolar transistors and predicts circuit failure rates due to oxide breakdown and electromigration in CMOS, bipolar, and BiCMOS circuits. With the increasing importance of reliability in today's and future technology, a reliability simulator such as this is expected to serve as the engine of design-for-reliability in a building-in-reliability paradigm. BERT works in conjunction with a circuit simulator such as SPICE in order to simulate reliability for actual circuits, and, like SPICE, acts as an interactive tool for design. BERT is introduced and the current work being done is summarized. BERT is used to study the reliability of a BiCMOS inverter chain, and performance data are presented.

202 citations


Journal Article•DOI•
TL;DR: Methods for the cost-effective design of combinational and sequential self-checking functional circuits and checkers are examined and the area overhead for all proposed design alternatives is studied in detail.
Abstract: Self-checking circuits can detect the presence of both transient and permanent faults. A self-checking circuit consists of a functional circuit that produces encoded output vectors and a checker that checks the output vectors. The checker has the ability to expose its own faults as well. The functional circuit can be either combinational or sequential. A self-checking system consists of an interconnection of self-checking circuits. The advantage of such a system is that errors can be caught as soon as they occur; thus, data contamination is prevented. Methods for the cost-effective design of combinational and sequential self-checking functional circuits and checkers are examined. The area overhead for all proposed design alternatives is studied in detail.

185 citations


Journal Article•DOI•
TL;DR: The application of globally convergent probability-one homotopy methods to various systems of nonlinear equations that arise in circuit simulation is discussed and the theoretical claims of global convergence for such methods are substantiated.
Abstract: Efficient and robust computation of one or more of the operating points of a nonlinear circuit is a necessary first step in a circuit simulator. The application of globally convergent probability-one homotopy methods to various systems of nonlinear equations that arise in circuit simulation is discussed. The coercivity conditions required for such methods are established using concepts from circuit theory. The theoretical claims of global convergence for such methods are substantiated by experiments with a collection of examples that have proved difficult for commercial simulation packages that do not use homotopy methods. Moreover, by careful design of the homotopy equations, the performance of the homotopy methods can be made quite reasonable. An extension to the steady-state problem in the time domain is also discussed.
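The basic idea can be sketched in the scalar case with the simple homotopy H(x, λ) = λ·f(x) + (1 − λ)·(x − a), which deforms the trivial problem x = a at λ = 0 into the target f(x) = 0 at λ = 1; the paper's probability-one machinery and circuit-specific homotopy constructions are considerably more elaborate. Function name mine:

```python
def homotopy_solve(f, df, a, steps=50, newton_iters=20):
    """Follow the homotopy H(x, lam) = lam*f(x) + (1-lam)*(x - a)
    from the trivial root x = a at lam = 0 to a root of f at lam = 1,
    using Newton's method as the corrector at each continuation step."""
    x = a
    for k in range(1, steps + 1):
        lam = k / steps
        for _ in range(newton_iters):
            h = lam * f(x) + (1 - lam) * (x - a)
            dh = lam * df(x) + (1 - lam)   # dH/dx at fixed lam
            x -= h / dh
    return x
```

Starting Newton's method directly from a poor guess can diverge on stiff device equations; tracking the homotopy path keeps each corrector step close to a known solution.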

184 citations


Journal Article•DOI•
TL;DR: A transitive-closure-based test generation algorithm is presented in which dependences derived from the transitive closure are used to reduce ternary relations to binary relations, which in turn dynamically update the transitive closure.
Abstract: A transitive-closure-based test generation algorithm is presented. A test is obtained by determining signal values that satisfy a Boolean equation derived from the neural network model of the circuit incorporating necessary conditions for fault activation and path sensitization. The algorithm is a sequence of two main steps that are repeatedly executed: transitive closure computation and decision-making. A key feature of the algorithm is that dependences derived from the transitive closure are used to reduce ternary relations to binary relations that in turn dynamically update the transitive closure. The signals are either determined from the transitive closure or are enumerated until the Boolean equation is satisfied. Experimental results on the ISCAS 1985 and the combinational parts of ISCAS 1989 benchmark circuits are presented to demonstrate efficient test generation and redundancy identification. Results on four state-of-the-art production VLSI circuits are also presented.
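The closure computation itself is standard Warshall-style reachability over a graph of binary implications between signal assignments; a minimal sketch (representation mine, not the paper's neural-network model):

```python
def transitive_closure(n, implications):
    """Warshall's algorithm over signal implications: reach[i][j] is
    True when assignment i (e.g. 'signal s = 1') forces assignment j."""
    reach = [[False] * n for _ in range(n)]
    for u, v in implications:
        reach[u][v] = True
    for k in range(n):                 # allow k as an intermediate node
        for i in range(n):
            if reach[i][k]:
                for j in range(n):
                    if reach[k][j]:
                        reach[i][j] = True
    return reach
```

During test generation, each newly learned binary implication is added and the closure updated, so forced assignments are discovered without enumeration.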

179 citations


Journal Article•DOI•
TL;DR: A simplification algorithm that iteratively reduces the number of products in ESOPs and then reduces the number of literals is presented; various rules are used to replace a pair of products with another.
Abstract: Minimization of AND-EXOR programmable logic arrays (PLAs) with input decoders corresponds to minimization of the number of products in Exclusive-OR sum-of-products expressions (ESOPs) for multiple-valued-input two-valued-output functions. A simplification algorithm for ESOPs that iteratively reduces the number of products and then the number of literals is presented. Various rules are used to replace a pair of products with another one. Many AND-EXOR PLAs for arithmetic circuits have been simplified. In most cases, AND-EXOR PLAs required fewer products than AND-OR PLAs.

135 citations


Journal Article•DOI•
TL;DR: An efficient method based on reachability analysis of the fault-free machine (three-phase ATPG) in addition to the powerful but more resource-demanding product machine traversal is presented.
Abstract: Finite state machine (FSM) verification based on implicit state enumeration can be extended to test generation and redundancy identification. The extended method constructs the product machine of two FSMs to be compared, and reachability analysis is performed by traversing the product machine to find any difference in I/O behavior. When an output difference is detected, the information obtained by reachability analysis is used to generate a test sequence. This method is complete, and it generates one of the shortest possible test sequences for a given fault. However, applying this method indiscriminately for all faults may result in unnecessary waste of computer resources. An efficient method based on reachability analysis of the fault-free machine (three-phase ATPG) in addition to the powerful but more resource-demanding product machine traversal is presented. The application of these algorithms to the problems of generating test sequences, identifying redundancies, and removing redundancies is reported.

Journal Article•DOI•
Janusz Rajski1, Jerzy Tyszer1•
TL;DR: An accumulator-based compaction (ABC) scheme for parallel compaction of test responses is presented, and it is shown that the asymptotic coverage drop depends both on the size of the accumulator and the probability of a fault injection.
Abstract: An accumulator-based compaction (ABC) scheme for parallel compaction of test responses is presented. In this scheme an accumulator with an n-bit binary adder is slightly modified such that the quality of compaction, defined by the asymptotic coverage drop, is similar to that offered by shift registers with irreducible polynomials or by cellular automata. A Markov-chain model is used to analyze both the asymptotic coverage drop introduced by this scheme and its transient behavior. It is shown that the asymptotic coverage drop depends both on the size of the accumulator and on the probability of a fault injection. The upper bound of the coverage drop during the transition phase is also provided. The proposed scheme is compatible with the width of the data path, and the test can be applied at the normal mode speed. The minimal hardware overhead involves only a one-bit register to implement the feedback between the carry-out and carry-in lines.
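A behavioral sketch of the compactor, assuming the feedback described above (the carry-out is stored in a one-bit register and fed back as the next carry-in); names are mine:

```python
def abc_compact(responses, n):
    """Compact a stream of n-bit test responses with an accumulator
    whose carry-out is fed back to the carry-in; the one-bit feedback
    register is the only hardware added to a plain accumulator.
    Returns the final signature (accumulator value, feedback bit)."""
    mask = (1 << n) - 1
    acc, carry = 0, 0
    for word in responses:
        total = acc + (word & mask) + carry
        carry = (total >> n) & 1   # carry-out, latched for the next cycle
        acc = total & mask
    return acc, carry
```

A faulty circuit produces a different response stream, which (with high probability) yields a different final signature; the Markov-chain analysis in the paper quantifies the residual aliasing.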

Journal Article•DOI•
Kwang-Ting Cheng1•
TL;DR: Experimental results on large benchmark circuits show that a high transition fault coverage can be achieved for partial scan circuits designed using the cycle-breaking technique; deterministic test generation for transition faults is required to reach such coverage.
Abstract: Addresses the problem of simulating and generating tests for transition faults in nonscan and partial scan synchronous sequential circuits. A transition fault model for sequential circuits is first proposed. In this fault model, a transition fault is characterized by the fault site, the fault type, and the fault size. The fault type is either slow-to-rise or slow-to-fall. The fault size is specified in units of clock cycles. Fault simulation and test generation algorithms for this fault model are presented. The fault simulation algorithm is a modification of PROOFS, a parallel, differential fault simulation algorithm for stuck faults. Experimental results show that neither a comprehensive functional verification sequence nor a test sequence generated by a sequential circuit test generator for stuck faults produces a high fault coverage for transition faults. Deterministic test generation for transition faults is required to raise the coverage to a reasonable level. With the use of a novel fault injection technique, tests for transition faults can be generated by using a stuck fault test generation algorithm with some modifications. Experimental results for ISCAS-89 benchmark circuits and some AT&T designs are presented. Modifications to test generation and fault simulation algorithms required for partial scan circuits are presented. Experimental results on large benchmark circuits show that a high transition fault coverage can be achieved for the partial scan circuits designed using the cycle breaking technique.

Journal Article•DOI•
TL;DR: In this article, a method for weighted pseudorandom test generation based on a deterministic test set is described, where only three easily generated weights (0, 0.5 and 1) are used, and a minimum number of shift register cells is used, thus leading to minimal hardware for built-in test applications.
Abstract: A method for weighted pseudorandom test generation based on a deterministic test set is described. The main advantages of the method described over existing methods are: (1) only three easily generated weights (0, 0.5 and 1) are used, (2) a minimum number of shift register cells is used, thus leading to minimal hardware for built-in-test applications, and (3) the weights are selected to allow the same coverage of target faults attained by the deterministic test set to be attained by weighted random patterns. The weights are computed by walking through the range of test generation approaches from pure random at one extreme to deterministic at the other extreme, dynamically selecting the weight assignments to correspond to the remaining faults at every stage. Hardware suitable for the generation of random patterns under the proposed method is described. The method is suitable for both combinational and sequential circuits. Experimental results are provided for ISCAS-85 and MCNC benchmark circuits.
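The weight selection can be sketched for fully specified deterministic tests: an input column that is constant over the remaining tests gets weight 0 or 1, while a varying column gets the unbiased weight 0.5. A minimal sketch (function name mine; the paper's stage-by-stage dynamic re-selection against remaining faults is not shown):

```python
def three_valued_weights(tests):
    """Derive 0 / 0.5 / 1 input weights from a deterministic test set
    given as equal-length bit strings: inputs fixed at the same value
    in every test are pinned, all others stay uniformly random."""
    weights = []
    for column in zip(*tests):          # iterate over input positions
        values = set(column)
        if values == {"0"}:
            weights.append(0.0)
        elif values == {"1"}:
            weights.append(1.0)
        else:
            weights.append(0.5)
    return weights
```

Restricting weights to {0, 0.5, 1} is what keeps the on-chip pattern generator cheap: pinned inputs need no random source at all.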

Journal Article•DOI•
TL;DR: Algorithms to automatically realize delays in combinational logic circuits to achieve wave pipelining are presented and the algorithms adjust gate speeds and insert a minimal number of active delay elements to balance input-output path lengths in a circuit.
Abstract: Algorithms to automatically realize delays in combinational logic circuits to achieve wave pipelining are presented. The algorithms adjust gate speeds and insert a minimal number of active delay elements to balance input-output path lengths in a circuit. For both normal and wave-pipelined circuits, the algorithms also optimally minimize power under delay constraints. The authors analyze the algorithms and comment on their implementation. They report experimental results, including the design and testing of a 63-bit population counter in CML bipolar technology. A brief analysis of circuit technologies shows that CML and super-buffered ECL without stacked structures are well suited for wave pipelining because they have uniform delay. Static CMOS and ordinary ECL, including stacked structures and emitter-followers, do exhibit some delay variation. A high degree of wave pipelining is still possible in those technologies if special design techniques are followed.

Journal Article•DOI•
TL;DR: A heuristic state assignment algorithm that results in highly gate-delay-fault testable sequential FSMs is developed and a two-time-frame expansion of the combinational logic of the circuit and the use of backtracking heuristics tailored for the problem are considered.
Abstract: The problems of test generation and synthesis aimed at producing VLSI sequential circuits that are delay-fault testable under a standard scan design methodology are considered. Theoretical results regarding the standard scan-delay testability of finite state machines (FSMs) described at the state transition graph (STG) level are given. It is shown that a one-hot coded and optimized FSM whose STG satisfies a certain property is guaranteed to be fully gate-delay-fault testable under standard scan. This result is extended to arbitrary-length encodings, and a heuristic state assignment algorithm that results in highly gate-delay-fault testable sequential FSMs is developed. The authors also consider the problem of delay test generation for large sequential circuits and modify a PODEM-based combinational test pattern generator. The modifications involve a two-time-frame expansion of the combinational logic of the circuit and the use of backtracking heuristics tailored for the problem. A version of the scan shifting technique is also used in the test pattern generator. Test generation, flip-flop ordering, flip-flop selection and test set compaction results on large benchmark circuits are presented.

Journal Article•DOI•
TL;DR: The authors introduce the notion of static cosensitization of paths which leads to necessary and sufficient conditions for determining the truth or falsity of a single path, or a set of paths.
Abstract: Addresses the problem of accurately computing the delay of a combinational logic circuit in the floating mode of operation. (In this mode the state of the circuit is considered to be unknown when a vector is applied at the inputs.) It is well known that using the length of the topologically longest path as an estimate of circuit delay may be pessimistic since this path may be false, i.e., it cannot propagate an event. Thus, the true delay corresponds to the length of the longest true path. This forces one to examine the conditions under which a path is true. The authors introduce the notion of static cosensitization of paths which leads to necessary and sufficient conditions for determining the truth or falsity of a single path, or a set of paths. The authors apply these results to develop a delay computation algorithm that has the unique feature that it is able to determine the truth or falsity of entire sets of paths simultaneously. This algorithm uses conventional stuck-at-fault testing techniques to arrive at a delay computation method that is both correct and computationally practical, even for particularly difficult circuits.

Journal Article•DOI•
TL;DR: Algorithms and a computer-aided design tool for technology mapping of both completely specified and incompletely specified logic networks are introduced and a novel matching algorithm, using ordered binary decision diagrams, is described.
Abstract: Algorithms and a computer-aided design tool, called Ceres, for technology mapping of both completely specified and incompletely specified logic networks are introduced. The algorithms are based on Boolean techniques for matching, i.e., for the recognition of the equivalence between a portion of a network and library cells. A novel matching algorithm, using ordered binary decision diagrams, is described. It exploits the notion of symmetry to achieve higher computational efficiency. A matching technique that takes advantage of don't-care conditions by means of a compatibility graph is also described. A strategy for timing-driven technology mapping, based on iterative improvement, is presented. Experimental results indicate that these techniques generate good-quality solutions and require short run times and limited memory space.

Journal Article•DOI•
TL;DR: SE-based synthesis explores the design space by repeatedly ripping up parts of a design in a probabilistic manner and reconstructing them using application-specific heuristics, combining rapid design iterations and probabilistic hill climbing to achieve effective design space exploration.
Abstract: A general optimization algorithm known as simulated evolution (SE) is applied to the tasks of scheduling and allocation in high level synthesis. Basically, SE-based synthesis explores the design space by repeatedly ripping up parts of a design in a probabilistic manner and reconstructing them using application-specific heuristics that combine rapid design iterations and probabilistic hill climbing to achieve effective design space exploration. Benchmark results are presented to demonstrate the effectiveness of this approach. The results of a number of experiments that provide insight into why SE-based synthesis works so well are given.

Journal Article•DOI•
TL;DR: It is shown how identifying clusters in a circuit can simplify two important CAD problems, system-level clustering and module (layout) generation, and what essential role such a system could play in aiding the high-level system designer.
Abstract: The authors point out that proper usage of regularity in digital systems leads to efficient as well as economical designs. This important question of regularity extraction is examined, and a general and efficient methodology for component clustering based on the concept of structural regularity is presented. While the concept of regularity can be employed to simplify many problems in the area of design automation, system- and logic-level applications are emphasized here. The authors show how identifying clusters in a circuit can simplify two important CAD problems: system-level clustering and module (layout) generation. A prototype system based on these ideas has been built, and some real-life examples are considered for testing. The results are encouraging; they demonstrate the essential role such a system could play in aiding the high-level system designer. Research is under way to explore some of the other promising applications that such a system could have.

Journal Article•DOI•
TL;DR: The authors present a simple linear time algorithm to compute a correct initial state for a retimed circuit that can be used whenever the initial state of the original circuit satisfies a simple condition.
Abstract: Retiming is an optimization technique for sequential circuits which consists in modifying the position of latches relative to blocks of combinational logic in order to minimize the maximum propagation delay between latches, or to meet a given delay requirement while minimizing the number of latches. If the initial state of the circuit is meaningful, one must compute an equivalent initial state for the retimed circuit after retiming. The authors present a simple linear time algorithm to compute a correct initial state for a retimed circuit that can be used whenever the initial state of the original circuit satisfies a simple condition. If this condition is not originally satisfied, it is shown how it can be automatically enforced by a logic synthesis tool with no need for user intervention.
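The flavor of the initial-state computation can be shown on a forward retiming move: when latches are moved from the inputs of a gate to its output, an equivalent initial state for the new latch is the gate function applied to the old latch states. A toy check on an AND gate (all names mine):

```python
def run_original(a_seq, b_seq, sa=1, sb=0):
    """Latches on both inputs of an AND gate, with initial states sa, sb."""
    out = []
    for a, b in zip(a_seq, b_seq):
        out.append(sa & sb)   # gate sees the latched values
        sa, sb = a, b         # latches load the current inputs
    return out

def run_retimed(a_seq, b_seq, s=1 & 0):
    """Latches moved forward across the gate: one latch on the AND
    output, initialized to AND applied to the old initial states."""
    out = []
    for a, b in zip(a_seq, b_seq):
        out.append(s)
        s = a & b
    return out
```

The two circuits produce identical output sequences for every input stream, which is exactly the equivalence the initial-state computation must preserve. The backward direction is the hard case the paper's condition addresses: a suitable pre-image of the state need not exist.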

Journal Article•DOI•
TL;DR: Three new path sensitization criteria are proposed in a general framework and an approximate criterion is also proposed and used to develop an efficient critical path algorithm for combinational circuits.
Abstract: An important aspect of the critical path problem is deciding whether a path is sensitizable. Three new path sensitization criteria are proposed in a general framework. Other path sensitization criteria can be presented in the same framework, enabling them to be compared with each other. An approximate criterion is also proposed and used to develop an efficient critical path algorithm for combinational circuits.

Journal Article•DOI•
TL;DR: The results of experiments in variable ordering using a practical algorithm, basically a depth-first traversal through a circuit from the output to the inputs, are presented.
Abstract: Ordered binary decision diagrams (OBDDs) use restricted decision trees with shared subgraphs. The ordering of variables is fixed throughout an OBDD diagram. However, the size of an OBDD is very sensitive to variable ordering, especially for large circuits. The results of experiments in variable ordering using an experimentally practical algorithm are presented. The algorithm is basically a depth-first traversal through a circuit from the output to the inputs. With this algorithm, circuits having more than 3000 gates and more than 100 inputs can be expressed in reasonable CPU time and with practical memory requirements.
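The ordering heuristic can be sketched directly: a depth-first traversal from the circuit output that records each primary input in the order it is first reached. A minimal sketch over a netlist-as-dictionary representation (mine, not the paper's data structure):

```python
def dfs_variable_order(netlist, output, inputs):
    """OBDD variable order from a depth-first traversal starting at the
    circuit output; netlist maps each gate name to its list of fanins,
    and inputs is the set of primary input names."""
    order, seen = [], set()

    def visit(node):
        if node in seen:
            return
        seen.add(node)
        if node in inputs:
            order.append(node)   # record inputs in first-reached order
        else:
            for fanin in netlist[node]:
                visit(fanin)

    visit(output)
    return order
```

The intuition is that inputs reached together through the same cone of logic end up adjacent in the order, which tends to keep the shared-subgraph structure of the OBDD small.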

Journal Article•DOI•
TL;DR: A general clock routing scheme that achieves extremely small clock skews while still using a reasonable amount of wirelength is presented, based on the construction of a binary tree using geometric matching.
Abstract: The authors point out that minimizing clock skew is important in the design of high-performance VLSI systems. A general clock routing scheme that achieves extremely small clock skews while still using a reasonable amount of wirelength is presented. The routing solution is based on the construction of a binary tree using geometric matching. For cell-based designs, the total wirelength of the clock routing tree is on average within a constant factor of the wirelength in an optimal Steiner tree, and in the worst case is bounded by O(√(l₁l₂)·√n) for n terminals arbitrarily distributed in the l₁×l₂ grid. The bottom-up construction readily extends to general cell layouts, where it also achieves essentially zero clock skew within reasonably bounded total wirelength. The algorithms have been tested on numerous random examples and also on layouts of industrial benchmark circuits. The results are very promising: the clock routing yields near-zero average clock skew while using total wirelength competitive with that used by previously known methods.
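One bottom-up level of the construction can be sketched as matching the terminals into nearby pairs and merging each pair at a tapping point (the midpoint here; the paper uses optimal geometric matching and balance-aware tapping points, so this greedy version is only illustrative):

```python
import math

def match_and_merge(points):
    """One bottom-up level of clock-tree construction: greedily match
    the closest remaining pair of points and merge it at its midpoint,
    yielding the tapping points for the next level up."""
    pts = list(points)
    merged = []
    while len(pts) > 1:
        best = None
        for i in range(len(pts)):          # exhaustive closest-pair search
            for j in range(i + 1, len(pts)):
                d = math.dist(pts[i], pts[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        (x1, y1), (x2, y2) = pts[i], pts[j]
        merged.append(((x1 + x2) / 2, (y1 + y2) / 2))
        for k in (j, i):                   # remove j first to keep i valid
            pts.pop(k)
    return merged + pts                    # odd leftover passes through
```

Applying the level repeatedly until one point remains yields a binary tree whose root-to-leaf path lengths are roughly balanced, which is the source of the small skew.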

Journal Article•DOI•
TL;DR: The results indicate that the statistical methods investigated potentially yield low detection and classification error rates.
Abstract: The standard multivariate techniques of hypothesis testing and discrimination analysis have been applied to detect and classify faults in a variety of linear IC designs. These techniques are potentially useful for tracking IC failures during processing or for assessing failure mechanisms of ICs once the circuits are in field use. The results indicate that the statistical methods investigated potentially yield low detection and classification error rates.

Journal Article•DOI•
TL;DR: A stochastic model that facilitates exploration of a wide range of FPGA routing architectures using a theoretical approach is described, and the routability predictions from the model are validated by comparing them with the results of a previously published experimental study on FPGA routability.
Abstract: One area of particular importance is the design of an FPGA routing architecture, which houses the user-programmable switches and wires that are used to interconnect the FPGA's logic resources. Because the routing switches consume significant chip area and introduce propagation delays, the design of the routing architecture greatly influences both the area utilization and speed performance of an FPGA. FPGA routing architectures have already been studied using experimental techniques. This paper describes a stochastic model that facilitates exploration of a wide range of FPGA routing architectures using a theoretical approach. In the stochastic model an FPGA is represented as an N*N array of logic blocks separated by both horizontal and vertical routing channels, similar to a Xilinx FPGA. A circuit to be routed is represented by additional parameters that specify the total number of connections, and each connection's length and trajectory. The stochastic model gives an analytic expression for the routability of the circuit in the FPGA. Practically speaking, routability can be viewed as the likelihood that a circuit can be successfully routed in a given FPGA. The routability predictions from the model are validated by comparing them with the results of a previously published experimental study on FPGA routability.

Journal Article•DOI•
Sy-Yen Kuo1•
TL;DR: Experimental results show that a large reduction in the number of critical areas and a significant improvement in yield are achieved, particularly for practical-size channels such as Deutsch's difficult problem.
Abstract: The author points out that the goal of a channel routing algorithm is to route all the nets with as few tracks as possible to minimize chip area and achieve 100% connection. However, the manufacturing yield may not reach a satisfactory level if care is not taken to reduce critical areas which are susceptible to defects. These critical areas are caused by the highly compacted adjacent wires and vias in the routing region. A channel routing algorithm, the yield optimizing routing (YOR) algorithm, is presented to deal with this problem. It systematically eliminates critical areas by floating, burying, and bumping net segments as well as shifting vias. The YOR algorithm also minimizes the number of vias, since vias in a chip will increase manufacturing complexity and hence degrade the yield. YOR has been implemented and applied to benchmark routing layouts in the literature. Experimental results show that a large reduction in the number of critical areas and a significant improvement in yield are achieved, particularly for practical-size channels such as Deutsch's difficult problem.

Journal Article•DOI•
TL;DR: It is shown that path selection is different from path sensitization, and an input-vector-oriented path selection algorithm is proposed; because it may be infeasible for complex circuits with many primary inputs, a path-oriented algorithm is also developed.
Abstract: The problem of selecting a set of paths to optimize the performance of a combinational circuit is studied, assuming that gate resizing and buffer insertion are the two possible optimizing techniques for reducing the delay of a circuit. The objective of the path selection problem is to select as small a set of paths as possible, to ease the optimization process, while guaranteeing that the delay of the circuit is no longer than a given threshold tau if the delays of all the selected paths are no longer than tau. It is shown that path selection is different from path sensitization. An input-vector-oriented path selection algorithm is proposed. Because it may be infeasible for complex circuits with many primary inputs, a path-oriented algorithm is also developed and implemented. Experimental results on ISCAS85 benchmark circuits show a potentially big improvement for the optimization process.

Journal Article•DOI•
TL;DR: A high-level process model of parallel simulation is presented and algorithms for parallel logic and fault simulation of synchronous gate-level designs are introduced, based on a partitioning approach that reduces the number of necessary synchronizations between processors.
Abstract: The authors define a general framework for the parallel simulation of digital systems and develop and evaluate tools for logic and fault simulation that have a good cost-performance ratio. They first review previous work and identify central issues. Then a high-level process model of parallel simulation is presented to clarify essential design choices. Algorithms for parallel logic and fault simulation of synchronous gate-level designs are introduced. The algorithms are based on a partitioning approach that reduces the number of necessary synchronizations between processors. A simple performance model characterizes the dependence on some crucial parameters. Experimental results for some large benchmarks are given, using prototype implementations for both message-passing and shared-memory machines.

Journal Article•DOI•
TL;DR: Experimental results are presented to show that oriented dynamic learning is far more efficient than dynamic learning in SOCRATES.
Abstract: An efficient technique for dynamic learning called oriented dynamic learning is proposed. Instead of learning being performed for almost all signals in the circuit, it is shown that it is possible to determine a subset of these signals to which all learning operations can be restricted. It is further shown that learning for this set of signals provides the same knowledge about the nonsolution areas in the decision trees as the dynamic learning of SOCRATES. High efficiency is achieved by limiting learning to certain learning lines that lie within a certain area of the circuit, called the active area. Experimental results are presented to show that oriented dynamic learning is far more efficient than dynamic learning in SOCRATES.