# Showing papers in "IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems" in 1995

••

[...]

Bell Labs

TL;DR: In this article, the Lanczos process is used to compute Pade approximations of Laplace-domain transfer functions of large linear networks, yielding the Pade via Lanczos (PVL) algorithm.

Abstract: In this paper, we introduce PVL, an algorithm for computing the Pade approximation of Laplace-domain transfer functions of large linear networks via a Lanczos process. The PVL algorithm has significantly superior numerical stability, while retaining the same efficiency as algorithms that compute the Pade approximation directly through moment matching, such as AWE and its derivatives. As a consequence, it produces more accurate and higher-order approximations, and it renders unnecessary many of the heuristics that AWE and its derivatives had to employ. The algorithm also computes an error bound that permits identification of the true poles and zeros of the original network. We present results of numerical experiments with the PVL algorithm for several large examples.
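
The Lanczos-to-Pade connection behind PVL can be illustrated in a few lines of numpy. The sketch below is not the authors' PVL implementation (the real algorithm adds look-ahead to survive breakdowns and works with the network matrices directly); it assumes a generic single-input, single-output form H(s) = c^T (I - sA)^{-1} r, and all function and variable names are ours:

```python
import numpy as np

def lanczos_reduction(A, r, c, q):
    # Unsymmetric (two-sided) Lanczos without breakdown handling.
    # Returns (T, d0): T is a q x q tridiagonal matrix with
    # d0 * (T^k)[0, 0] == c @ A^k @ r for k = 0 .. 2q-1.
    n = len(r)
    d0 = c @ r                                   # zeroth moment
    v = r / np.sqrt(abs(d0))
    w = np.sign(d0) * c / np.sqrt(abs(d0))       # so that w @ v == 1
    v_prev = np.zeros(n)
    w_prev = np.zeros(n)
    alpha = np.zeros(q)
    beta = np.zeros(q)                           # superdiagonal couplings
    gamma = np.zeros(q)                          # subdiagonal couplings
    for j in range(q):
        alpha[j] = w @ (A @ v)
        if j == q - 1:
            break
        # three-term recurrences for the right and left Lanczos vectors
        vh = A @ v - alpha[j] * v - beta[j] * v_prev
        wh = A.T @ w - alpha[j] * w - gamma[j] * w_prev
        d = wh @ vh
        gamma[j + 1] = np.sqrt(abs(d))
        beta[j + 1] = d / gamma[j + 1]           # keeps w_{j+1} @ v_{j+1} == 1
        v_prev, v = v, vh / gamma[j + 1]
        w_prev, w = w, wh / beta[j + 1]
    T = np.diag(alpha) + np.diag(beta[1:], 1) + np.diag(gamma[1:], -1)
    return T, d0

def h_reduced(T, d0, s):
    # Reduced-order transfer function d0 * e1^T (I - s T)^{-1} e1.
    e1 = np.zeros(len(T))
    e1[0] = 1.0
    return d0 * (e1 @ np.linalg.solve(np.eye(len(T)) - s * T, e1))
```

Running q = n steps reproduces the transfer function exactly (in exact arithmetic); smaller q matches the first 2q moments c^T A^k r, which is the Pade property the paper exploits.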

1,292 citations

••

[...]

TL;DR: The results indicate that more than an order of magnitude reduction in power can be achieved over current-day design methodologies while maintaining the system throughput; in some cases this can be accomplished while preserving or reducing the implementation area.

Abstract: The increasing demand for portable computing has elevated power consumption to be one of the most critical design parameters. A high-level synthesis system, HYPER-LP, is presented for minimizing power consumption in application specific datapath intensive CMOS circuits using a variety of architectural and computational transformations. The synthesis environment consists of high-level estimation of power consumption, a library of transformation primitives, and heuristic/probabilistic optimization search mechanisms for fast and efficient scanning of the design space. Examples with varying degree of computational complexity and structures are optimized and synthesized using the HYPER-LP system. The results indicate that more than an order of magnitude reduction in power can be achieved over current-day design methodologies while maintaining the system throughput; in some cases this can be accomplished while preserving or reducing the implementation area.

458 citations

••

[...]

TL;DR: A multipoint moment-matching, or complex frequency hopping (CFH) technique is introduced which extracts accurate dominant poles of a linear subnetwork up to any predefined maximum frequency and provides for a CPU/accuracy tradeoff.

Abstract: With increasing miniaturization and operating speeds, loss of signal integrity due to physical interconnects represents a major performance limiting factor of chip-, board-, or system-level design. Moment-matching techniques using Pade approximations have recently been applied to simulating modelled interconnect networks that include lossy coupled transmission lines and nonlinear terminations, giving a marked increase in efficiency over traditional simulation techniques. Nevertheless, moment-matching can be inaccurate in high-speed circuits due to critical properties of Pade approximations. Further, moment-generation for transmission line networks can be shown to have increasing numerical truncation error with higher order moments. These inaccuracies are reflected in both the frequency and transient response and there is no criterion for determining the limits of the error. In this paper, a multipoint moment-matching, or complex frequency hopping (CFH) technique is introduced which extracts accurate dominant poles of a linear subnetwork up to any predefined maximum frequency. The method generates a single transfer function for a large linear subnetwork and provides for a CPU/accuracy tradeoff. A new algorithm is also introduced for generating higher-order moments for transmission lines without incurring increasing truncation error. Several interconnect examples are considered which demonstrate the accuracy and efficiency in both the time and frequency domains of the new method.

330 citations

••

[...]

Osaka University

TL;DR: New cost-effective heuristics for the generation of minimal test sets that reduce the number of tests and allow tests generated earlier in the test generation process to be dropped are presented.

Abstract: This paper presents new cost-effective heuristics for the generation of minimal test sets. Both dynamic techniques, which are introduced into the test generation process, and a static technique, which is applied to already generated test sets, are used. The dynamic compaction techniques maximize the number of faults that a new test vector detects out of the yet-undetected faults as well as out of the already-detected ones. Thus, they reduce the number of tests and allow tests generated earlier in the test generation process to be dropped. The static compaction technique replaces N test vectors by M

220 citations

••

[...]

TL;DR: A pattern-independent, linear-time algorithm (iMax) is proposed that estimates, at every contact point, an upper-bound envelope of all possible current waveforms that result from the application of different input patterns to the circuit.

Abstract: Currents flowing in the power and ground (P&G) buses of CMOS digital circuits affect both circuit reliability and performance by causing excessive voltage drops. Excessive voltage drops manifest themselves as glitches on the P&G buses and cause erroneous logic signals and degradation in switching speeds. Maximum current estimates are needed at every contact point in the buses to study the severity of the voltage drop problems and to redesign the supply lines accordingly. These currents, however, depend on the specific input patterns that are applied to the circuit. Since it is prohibitively expensive to enumerate all possible input patterns, this problem has, for a long time, remained largely unsolved. In this paper, we propose a pattern-independent, linear-time algorithm (iMax) that estimates, at every contact point, an upper-bound envelope of all possible current waveforms that result from the application of different input patterns to the circuit. The algorithm is extremely efficient and produces good results for most circuits as is demonstrated by experimental results on several benchmark circuits. The accuracy of the algorithm can be further improved by resolving the signal correlations that exist inside a circuit. We also present a novel partial input enumeration (PIE) technique to resolve signal correlations and significantly improve the upper bounds for circuits where the bounds produced by iMax are not tight. We establish with extensive experimental results that these algorithms represent a good time-accuracy trade-off and are applicable to VLSI circuits.

156 citations

••

[...]

TL;DR: A polynomial-time optimal wiresizing algorithm for arbitrary interconnect tree structures under the Elmore delay model is developed that reduces interconnect delay by up to 51% when compared to the uniform-width solution of the same routing topology.

Abstract: In this paper, we study the optimal wiresizing problem under the Elmore delay model. We show that the optimal wiresizing solutions satisfy a number of interesting properties, including the separability, the monotone property, and the dominance property. Based on these properties, we have developed a polynomial-time optimal wiresizing algorithm for arbitrary interconnect tree structures under the Elmore delay model. Extensive experimental results have shown that our wiresizing solution reduces interconnect delay by up to 51% when compared to the uniform-width solution of the same routing topology. Furthermore, compared to the wiresizing solution based on a simpler RC delay model, our wiresizing solution reduces the total wiring area by up to 28% while further reducing the interconnect delays to the timing-critical sinks by up to 12%.
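
For context, the Elmore delay these wiresizing properties are derived from has a simple linear-time recurrence on an RC tree: accumulate downstream capacitance leaf-to-root, then accumulate delay root-to-leaf, with each tree edge contributing its resistance times the total capacitance below it. A minimal sketch (the array encoding and names are ours, not the paper's):

```python
def elmore_delays(parent, R, C):
    """Elmore delay at every node of an RC tree.
    parent[i]: parent of node i (root has parent -1)
    R[i]: resistance of the edge from parent[i] to i (R[root] unused)
    C[i]: capacitance at node i
    Nodes must be numbered so that parent[i] < i (topological order)."""
    n = len(parent)
    cdown = list(C)                  # total capacitance in each subtree
    for i in range(n - 1, 0, -1):    # accumulate leaves -> root
        cdown[parent[i]] += cdown[i]
    delay = [0.0] * n
    for i in range(1, n):            # root -> leaves: add R_edge * C_downstream
        delay[i] = delay[parent[i]] + R[i] * cdown[i]
    return delay
```

For a two-segment chain with unit resistances and capacitances, this gives the familiar delays R1(C1+C2) at the first node and R1(C1+C2) + R2·C2 at the second.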

126 citations

••

[...]

TL;DR: This paper presents a method for multilevel logic optimization for combinational and synchronous sequential circuits that can efficiently identify those wires for addition that would create more redundancies elsewhere in the network.

Abstract: This paper presents a method for multilevel logic optimization for combinational and synchronous sequential circuits. The circuits are optimized through iterative addition and removal of redundancies. Adding redundant wires to a circuit may cause one or many existing irredundant wires and/or gates to become redundant. If the amount of added redundancies is less than the amount of created redundancies, the transformation of adding followed by removing redundancies will result in a smaller circuit. Based upon the Automatic Test Pattern Generation (ATPG) techniques, the proposed method can efficiently identify those wires for addition that would create more redundancies elsewhere in the network. Experiments on ISCAS-85 combinational benchmark circuits show that best results are obtained for most of them. For sequential circuits, experimental results on MCNC FSM benchmarks and ISCAS-89 sequential benchmark circuits show that a significant amount of area reduction can be achieved beyond combinational optimization and sequential redundancy removal.

125 citations

••

[...]

TL;DR: A new approach for realistic worst-case analysis of VLSI circuit performances and a novel methodology for circuit performance optimization that is formulated as a constrained multicriteria optimization are presented.

Abstract: In this paper, we present a new approach for realistic worst-case analysis of VLSI circuit performances and a novel methodology for circuit performance optimization. Circuit performance measures are modeled as response surfaces of the designable and uncontrollable (noise) parameters. Worst-case analysis proceeds by first computing the worst-case circuit performance value and then determining the worst-case noise parameter values by solving a nonlinear programming problem. A new circuit optimization technique is developed to find an optimal design point at which all of the circuit specifications are met under worst-case conditions. This worst-case design optimization method is formulated as a constrained multicriteria optimization. The methodologies described in this paper are applied to several VLSI circuits to demonstrate their accuracy and efficiency.

116 citations

••

[...]

TL;DR: A new approach to cell-level analog circuit synthesis is presented that formulates analog synthesis as a Mixed-Integer Nonlinear Programming (MINLP) problem in order to allow simultaneous topology and parameter selection.

Abstract: A new approach to cell-level analog circuit synthesis is presented. This approach formulates analog synthesis as a Mixed-Integer Nonlinear Programming (MINLP) problem in order to allow simultaneous topology and parameter selection. Topology choices are represented as binary integer variables and design parameters (e.g., device sizes and bias voltages) as continuous variables. Examples using a Branch and Bound method to efficiently solve the MINLP problem for CMOS two-stage op amps are given.

106 citations

••

[...]

TL;DR: This paper describes a new method for exact hazard-free logic minimization of Boolean functions: a constrained version of the Quine-McCluskey algorithm that produces a minimum-cost sum-of-products implementation which is hazard-free for a given set of multiple-input changes.

Abstract: This paper describes a new method for exact hazard-free logic minimization of Boolean functions. Given an incompletely-specified Boolean function, the method produces a minimum-cost sum-of-products implementation which is hazard-free for a given set of multiple-input changes, if such a solution exists. The method is a constrained version of the Quine-McCluskey algorithm. It has been automated and applied to a number of examples. Results are compared with results of a comparable non-hazard-free method (espresso-exact). Overhead due to hazard elimination is shown to be negligible.
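
For contrast with the constrained method described above, the unconstrained Quine-McCluskey prime-implicant step is easy to sketch: repeatedly merge implicants that agree everywhere except one bit, and keep anything that never merges. This omits the hazard-freedom constraints and the minimum-cost covering step; the representation is ours:

```python
def prime_implicants(minterms, nbits):
    """Quine-McCluskey prime implicant generation (no covering step).
    Implicants are (value, dash_mask) pairs; dashed bit positions are
    0 in value and 1 in dash_mask."""
    terms, primes = {(m, 0) for m in minterms}, set()
    while terms:
        merged, nxt = set(), set()
        tl = sorted(terms)
        for i in range(len(tl)):
            for j in range(i + 1, len(tl)):
                (v1, m1), (v2, m2) = tl[i], tl[j]
                diff = v1 ^ v2
                # merge only if dashes agree and exactly one bit differs
                if m1 == m2 and diff and diff & (diff - 1) == 0:
                    nxt.add((v1 & v2, m1 | diff))     # dash out the bit
                    merged.update((tl[i], tl[j]))
        primes |= terms - merged                      # unmergeable => prime
        terms = nxt
    return primes

def cube(term, nbits):
    """Render an implicant as a 0/1/- string, most significant bit first."""
    v, m = term
    return "".join("-" if m >> b & 1 else str(v >> b & 1)
                   for b in reversed(range(nbits)))
```

For the three-variable function with minterms {0, 1, 2, 5, 6, 7}, this yields the six two-cell primes 00-, 0-0, -01, -10, 1-1, and 11-.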

98 citations

••

[...]

TL;DR: This work presents critical-sink routing tree (CSRT) constructions which exploit available critical-path information to yield high-performance routing trees that significantly improve over minimum Steiner routings in terms of delays to identified critical sinks.

Abstract: We present critical-sink routing tree (CSRT) constructions which exploit available critical-path information to yield high-performance routing trees. Our CS-Steiner and "global slack removal" algorithms together modify traditional Steiner tree constructions to optimize signal delay at identified critical sinks. We further propose an iterative Elmore routing tree (ERT) construction which optimizes Elmore delay directly, as opposed to heuristically abstracting linear or Elmore delay as in previous approaches. Extensive timing simulations on industry IC and MCM interconnect parameters show that our methods yield trees that significantly improve (by averages of up to 67%) over minimum Steiner routings in terms of delays to identified critical sinks. ERTs also serve as generic high-performance routing trees when no critical sink is specified: for 8-sink nets in standard IC (MCM) technology, we improve average sink delay by 19% (62%) and maximum sink delay by 22% (52%) over the minimum Steiner routing. These approaches provide simple, basic advances over existing performance-driven routing tree constructions. Our results are complemented by a detailed analysis of the accuracy and fidelity of the Elmore delay approximation; we also exactly assess the suboptimality of our heuristic tree constructions. In achieving the latter result, we develop a new characterization of Elmore-optimal routing trees, as well as a decomposition theorem for optimal Steiner trees, which are of independent interest.

••

[...]

TL;DR: The approach synthesizes instruction sets from application benchmarks, given a machine model, an objective function, and a set of design constraints; it is capable of synthesizing powerful instructions for modern pipelined microprocessors and runs in reasonable time with a modest amount of memory for large applications.

Abstract: An instruction set serves as the interface between hardware and software in a computer system. In an application-specific environment, the system performance can be improved by designing an instruction set that matches the characteristics of the hardware and the application. We present a systematic approach to generate application-specific instruction sets so that software applications can be efficiently mapped to a given pipelined micro-architecture. The approach synthesizes instruction sets from application benchmarks, given a machine model, an objective function, and a set of design constraints. In addition, assembly code is generated to show how the benchmarks can be compiled with the synthesized instruction set. The problem of designing instruction sets is formulated as a modified scheduling problem. A binary tuple is proposed to model the semantics of instructions and integrate the instruction formation process into the scheduling process. A simulated annealing scheme is used to solve for the schedules. Experiments have shown that the approach is capable of synthesizing powerful instructions for modern pipelined microprocessors, and running with reasonable time and a modest amount of memory for large applications.

••

[...]

TL;DR: Two active compaction methods based on essential faults, forced pair-merging and essential fault pruning, are developed to reduce a given test set; the latter achieves further compaction by removing a pattern after modifying other patterns of the test set to detect the essential faults of the target pattern.

Abstract: Test set compaction for combinational circuits is studied in this paper. Two active compaction methods based on essential faults are developed to reduce a given test set. The special feature is that the given test set will be adaptively renewed to increase the chance of compaction. In the first method, forced pair-merging, pairs of patterns are merged by modifying their incompatible specified bits without sacrificing the original fault coverage. The other method, essential fault pruning, achieves further compaction from removal of a pattern by modifying other patterns of the test set to detect the essential faults of the target pattern. With these two developed methods, the compacted test size on the ISCAS'85 benchmark circuits is smaller than that of COMPACTEST by more than 20%, and 12% smaller than that by ROTCO+COMPACTEST.
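
The merging step both methods build on reduces to test-cube compatibility: two patterns over {0, 1, X} can be merged when no position carries conflicting specified bits. A toy sketch of just that check (names are ours; the paper's forced pair-merging goes further and modifies incompatible bits while re-verifying fault coverage):

```python
def merge_cubes(a, b):
    """Merge two equal-length test cubes over {'0', '1', 'X'}.
    Returns the merged cube (specified bits win over X), or None
    if some position has conflicting specified bits."""
    out = []
    for x, y in zip(a, b):
        if x == y or y == "X":
            out.append(x)
        elif x == "X":
            out.append(y)
        else:
            return None            # 0 vs 1 conflict: incompatible
    return "".join(out)
```

For example, "01X" and "0X1" merge into the single pattern "011", while "01X" and "00X" conflict in the second position and cannot be merged.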

••

[...]

TL;DR: An analytical-model generator for interconnect capacitances is presented, which obtains analytical expressions of self and coupling capacitance of interconnects for commonly encountered configurations, based on a series of numerical simulations and a partial knowledge of the flux components associated with the configurations.

Abstract: An analytical-model generator for interconnect capacitances is presented. It obtains analytical expressions of self and coupling capacitances of interconnects for commonly encountered configurations, based on a series of numerical simulations and a partial knowledge of the flux components associated with the configurations. The configurations which are currently considered by this model generator are: (a) single line; (b) crossing lines; (c) parallel lines on the same layer; and (d) parallel lines on different layers (both overlapping and nonoverlapping).

••

[...]

TL;DR: This paper presents the first theoretical treatment of the min-cut replication problem, which is to determine replicated logic that minimizes cut size, and a polynomial time algorithm for determining min-cut replication sets for k-partitioned graphs is derived.

Abstract: Logic replication has been shown empirically to reduce pin count and partition size in partitioned networks. This paper presents the first theoretical treatment of the min-cut replication problem, which is to determine replicated logic that minimizes cut size. A polynomial time algorithm for determining min-cut replication sets for k-partitioned graphs is derived by reducing replication to the problem of finding a maximum flow. The algorithm is extended to hypergraphs and replication heuristics are proposed for the NP-hard problem with size constraints on partition components. These heuristics, which reduce the worst-case running time by a factor of O(k^2) over previous methods, are applied to designs that have been partitioned into multiple FPGA's. Experimental results demonstrate that min-cut replication provides substantial reductions in the numbers of FPGA's and pins required.
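
The reduction rests on the max-flow min-cut theorem: after computing a maximum flow, the set of nodes still reachable from the source in the residual graph is one side of a minimum cut, and that cut is what determines the replication set. A minimal Edmonds-Karp sketch of just this flow/cut machinery (graph encoding and names are ours, not the paper's hypergraph construction):

```python
from collections import deque

def max_flow_min_cut(cap, s, t):
    """Edmonds-Karp max flow. cap: dict mapping (u, v) -> capacity.
    Returns (flow value, set of nodes on the source side of a min cut)."""
    res = {}
    for (u, v), c in cap.items():
        res[(u, v)] = res.get((u, v), 0) + c
        res.setdefault((v, u), 0)            # residual back-edge
    adj = {}
    for u, v in res:
        adj.setdefault(u, []).append(v)
    flow = 0
    while True:
        prev = {s: None}                     # BFS for a shortest augmenting path
        q = deque([s])
        while q and t not in prev:
            u = q.popleft()
            for v in adj[u]:
                if v not in prev and res[(u, v)] > 0:
                    prev[v] = u
                    q.append(v)
        if t not in prev:
            break                            # no augmenting path left
        path, v = [], t
        while prev[v] is not None:
            path.append((prev[v], v))
            v = prev[v]
        push = min(res[e] for e in path)     # bottleneck capacity
        for u, v in path:
            res[(u, v)] -= push
            res[(v, u)] += push
        flow += push
    return flow, set(prev)                   # last BFS tree = source side of cut
```

On a small network the returned source side is exactly the node set whose outgoing replicated logic the paper's construction would compute.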

••

[...]

TL;DR: The problem-space genetic algorithm based datapath synthesis system (PSGA-Synth) combines a standard genetic algorithm with a known heuristic to search the large design space in an intelligent manner and has the ability to efficiently handle large problems.

Abstract: This paper presents a new approach to datapath synthesis based on a problem-space genetic algorithm (PSGA). The proposed technique performs concurrent scheduling and allocation of functional units, registers, and multiplexers with the objective of finding both a schedule and an allocation which minimizes the cost function of the hardware resources and the total time of execution. The problem-space genetic algorithm based datapath synthesis system (PSGA-Synth) combines a standard genetic algorithm with a known heuristic to search the large design space in an intelligent manner. PSGA-Synth handles multicycle functional units, structural pipelining, conditional code and loops, and provides a mechanism to specify lower and upper bounds on the number of control steps. The PSGA-Synth was tested on a set of problems selected from the literature, as well as larger problems created by us, with promising results. PSGA-Synth not only finds the best known results for all the test problems examined in a relatively small amount of CPU time, but also has the ability to efficiently handle large problems.

••

[...]

TL;DR: This paper addresses high-level synthesis methodologies for dedicated digital signal processing (DSP) architectures used in the iterative Loop-based Minnesota Architecture Synthesis (MARS) design system with a novel concurrent scheduling and resource allocation algorithm which exploits inter-iteration and intra-iteration precedence constraints.

Abstract: This paper addresses high-level synthesis methodologies for dedicated digital signal processing (DSP) architectures used in the iterative Loop-based Minnesota Architecture Synthesis (MARS) design system. We present a novel concurrent scheduling and resource allocation algorithm which exploits inter-iteration and intra-iteration precedence constraints. The novel algorithm implicitly performs algorithmic transformations, such as pipelining and retiming, on the data-flow graphs during the scheduling process to produce solutions which are as good as those previously published and which are obtained in less time. MARS is capable of producing optimal and near-optimal schedules in fractions of seconds. Previous synthesis systems have focused on DSP algorithms which have single or lumped delays in the recursive loops. In contrast, MARS is capable of generating valid architectures for algorithms which have randomly distributed delays. MARS exploits these delays to produce more efficient architectures and allows our system to be more general. We are able to synthesize architectures which meet the iteration bound of any algorithm by unfolding, retiming, and pipelining the original data-flow graph.

••

[...]

TL;DR: Timing simulations for a range of IC and MCM interconnect technologies show that the wirelength savings yield reduced signal delays when compared to shallow-light or standard minimum spanning tree and Steiner tree routing.

Abstract: Analysis of Elmore delay in distributed RC tree structures shows the influence of both tree cost and tree radius on signal delay in VLSI interconnects. We give new and efficient interconnection tree constructions that smoothly combine the minimum cost and the minimum radius objectives, by combining respectively optimal algorithms due to Prim (1957) and Dijkstra (1959). Previous "shallow-light" techniques are both less direct and less effective: in practice, our methods achieve uniformly superior cost-radius tradeoffs. Timing simulations for a range of IC and MCM interconnect technologies show that our wirelength savings yield reduced signal delays when compared to shallow-light or standard minimum spanning tree and Steiner tree routing.
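
The Prim/Dijkstra combination can be sketched as a one-parameter greedy rule: grow the tree from the root by repeatedly attaching the node v that minimizes c * dist(root, u) + w(u, v) over tree nodes u, which reduces to Prim's rule at c = 0 and Dijkstra's at c = 1. This is an illustrative reconstruction of the cost-radius knob under that assumption, not necessarily the paper's exact construction, and all names are ours:

```python
import math

def tradeoff_tree(w, root, c):
    """Spanning tree interpolating Prim (c=0) and Dijkstra (c=1).
    w: symmetric dict-of-dicts of edge weights. Returns a parent map."""
    dist = {root: 0.0}               # path length from root inside the tree
    parent = {root: None}
    outside = set(w) - {root}
    while outside:
        # attach the outside node with the cheapest blended key
        u_best, v_best, best = None, None, math.inf
        for u in parent:
            for v, wt in w[u].items():
                if v in outside and c * dist[u] + wt < best:
                    best, u_best, v_best = c * dist[u] + wt, u, v
        parent[v_best] = u_best
        dist[v_best] = dist[u_best] + w[u_best][v_best]
        outside.discard(v_best)
    return parent
```

On a triangle with edges a-b = 1, b-c = 1, a-c = 1.5, the c = 0 tree takes the cheap edge b-c (minimum cost) while the c = 1 tree takes the direct edge a-c (minimum radius), showing the two extremes of the tradeoff.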

••

[...]

Bell Labs

TL;DR: Experiments on ISCAS benchmarks show that using a small array size (typically, two to four blocks) the authors can identify a large number of sequentially untestable faults.

Abstract: We give two theorems for identifying untestable faults in sequential circuits. The first, the single-fault theorem, states that if a single fault in a combinational array is untestable then that fault is untestable in the sequential circuit. The array replicates the combinational logic and can have any finite length. We assume that the present state inputs of the left-most block are completely controllable. The next state outputs of the right-most block are considered observable. A combinational test pattern generator determines the detectability of single faults in the right-most block. The second theorem, called the multifault theorem, uses the array model with a multifault consisting of a single fault in every block. The theorem states that an untestable multifault in the array corresponds to an untestable single fault in the sequential circuit. For the array with a single block both theorems identify combinational redundancies. Experiments on ISCAS benchmarks show that using a small array size (typically, two to four blocks) we can identify a large number of sequentially untestable faults.

••

[...]

TL;DR: The paper presents two algorithms to generate test sequences that reduce the number of test clocks required to apply the test sequences and proposes approximate measures that can be used for selection of a target fault during sequential test generation.

Abstract: Scan designs alleviate the test generation problem for sequential circuits. However, scan operations substantially increase the total number of test clocks during the test application stage. Classical methods used to solve this problem perform test compaction and obtain fewer test vectors. In this paper we show that such a strategy does not always reduce the test clocks or test application time. Our approach is to associate a scan strategy function with each test vector during test generation for circuits with full or partial scan. The paper presents two algorithms to generate test sequences that reduce the number of test clocks required to apply the test sequences. The algorithms are based on: (1) heuristics that determine the need for scan operations; and (2) controlling the sequential test generation process by choosing an appropriate target fault. In this paper we define and investigate different scan strategies for full and partial scan designs. We propose approximate measures that can be used for selection of a target fault during sequential test generation. These concepts are integrated into the algorithms Test Application time Reduction for Full scan (TARF) and Test Application time Reduction for Partial scan (TARP). The algorithms are implemented, and their efficiencies are demonstrated by using them for a set of ISCAS sequential benchmark circuits. The experiments show that, in full scan designs, TARF-generated vectors require 36% fewer test clocks compared to the vectors from COMPACTEST, which produces near-optimal test sets. Similarly, for partial scan designs, TARP achieves over 30% cumulative test clock reduction compared to the results from FASTEST, which produced generally fewer vectors than other ATPG systems.

••

[...]

TL;DR: In this paper, a new conceptual model, called Program-State Machines (PSM), is introduced to support the specification of embedded systems, a task that existing specification languages make tedious and error-prone.

Abstract: VHDL and other hardware description languages are commonly used as specification languages during system design. However, the underlying model of those languages does not directly support the specification of embedded systems, making the task of specifying such systems tedious and error-prone. We introduce a new conceptual model, called Program-State Machines (PSM), that caters to embedded systems. We describe SpecCharts, a VHDL extension that supports capture of the PSM model. The extensions we describe can also be applied to other languages. SpecCharts can be easily incorporated into a VHDL design environment using automatic translation to VHDL. We highlight several experiments that demonstrate the advantages of significantly reduced specification time, fewer errors, and improved specification readability.

••

[...]

TL;DR: In this approach, SPICE-compatible lumped element RC substrate macromodels are efficiently generated from the circuit layout using a geometric construct called the Voronoi tessellation, and a model topology which automatically adapts itself to the local densities of substrate features associated with the noise coupling is derived.

Abstract: We present a modeling technique for assessing the impact of substrate-coupled switching noise in CMOS mixed-signal circuits. Since the magnitude of the noise problem is a function of the relative proximity of noisy and sensitive devices, design aids are required which can incorporate the switching noise effects at the post-layout phase of design verification. In our approach, SPICE-compatible lumped element RC substrate macromodels are efficiently generated from the circuit layout using a geometric construct called the Voronoi tessellation. The new models retain the accuracy of previously reported models, but contain orders of magnitude fewer circuit nodes, and are suitable for analyzing larger circuits. The node count reduction is realized by deriving a model topology which automatically adapts itself to the local densities of substrate features associated with the noise coupling. Our strategy has been verified using detailed 2-D device simulation, and successfully applied to some mixed-A/D circuit examples.

••

[...]

Philips

TL;DR: Modifications are presented that improve the effectiveness and the efficiency of the force-directed scheduling algorithm and its application in the design of high-throughput DSP systems, such as real-time video VLSI circuits.

Abstract: This paper discusses improved force-directed scheduling and its application in the design of high-throughput DSP systems, such as real-time video VLSI circuits. We present a mathematical justification of the technique of force-directed scheduling, introduced by Paulin and Knight (1989), and we show how the algorithm can be used to find cost-effective time assignments and resource allocations, allowing trade-offs between processing units and memories. Furthermore, we present modifications that improve the effectiveness and the efficiency of the algorithm. The significance of the improvements is illustrated by an empirical performance analysis based on a number of problem instances.

••

[...]

TL;DR: The research presented in this paper is concerned with the automation of analog integrated circuit design and, in particular, with a description of methods and techniques employed by the ISAID design system developed at Imperial College, UK.

Abstract: The research presented in this paper is concerned with the automation of analog integrated circuit design and, in particular, with a description of methods and techniques employed by the ISAID design system developed at Imperial College, UK. ISAID is comprised of two modules: the circuit generator and the circuit corrector. The circuit generator is based on newly developed methods that are used to handle hierarchical generation of topologies and size MOS transistors so that the performance of designed circuits compares satisfactorily with their specifications. To avoid long design times, simulation is used only after the generation of an initial circuit topology. Simulated performances may therefore be found to differ from those required. One novel feature of the proposed methodology is that in such cases a circuit corrector is invoked to correct the initial design. The circuit corrector is essentially a novel application of qualitative reasoning, which, without iterative simulation, analyzes performance trade-offs and thereby selects circuit adjustments (transistor size adjustments or topological modifications) that would improve the problematic performances. Several design examples have demonstrated the benefits of the ISAID design approach.

••

[...]

TL;DR: It is proved that 100% delay fault testability is not necessary to guarantee the speed of a combinational circuit and the test set size can be reduced while maintaining the delay fault coverage for the specified circuit speed.

Abstract: The main disadvantage of the path delay fault model is that to achieve 100% testability every path must be tested. Since the number of paths is usually exponential in circuit size, this implies very large test sets for most circuits. Not surprisingly, all known analysis and synthesis techniques for 100% path delay fault testability are computationally infeasible on large circuits. We prove that 100% delay fault testability is not necessary to guarantee the speed of a combinational circuit. There exist path delay faults which can never impact the circuit delay (computed using any correct timing analysis method) unless some other path delay faults also affect it. These are termed robust dependent delay faults and need not be considered in delay fault testing. Necessary and sufficient conditions under which a set of path delay faults is robust dependent are proved; this yields more accurate and increased delay fault coverage estimates than previously used. Next, assuming only the existence of robust delay fault tests for a very small set of paths, we show how the circuit speed (clock period) can be selected such that 100% robust delay fault coverage is achieved. This leads to a quantitative tradeoff between the testing effort (measured by the size of the test set) for a circuit and the verifiability of its performance. Finally, under a bounded delay model, we show that the test set size can be reduced while maintaining the delay fault coverage for the specified circuit speed. Examples and experimental results are given to show the effect of these three techniques on the amount of delay fault testing necessary to guarantee correct operation.

••

[...]

TL;DR: It is demonstrated that geometric embeddings of the circuit netlist can lead to high-quality k-way partitionings, and a new partitioning algorithm is presented that exploits both the geometric embedding and netlist information, as well as a restricted partitioning formulation that requires each cluster of the k-way partitioning to be contiguous in a given linear ordering.

Abstract: This paper presents effective algorithms for multiway partitioning. Confirming ideas originally due to Hall (1970), we demonstrate that geometric embeddings of the circuit netlist can lead to high-quality k-way partitionings. The netlist embeddings are derived via the computation of d eigenvectors of the Laplacian for a graph representation of the netlist. As Hall did not specify how to partition such geometric embeddings, we explore various geometric partitioning objectives and algorithms, and find that they are limited because they do not integrate topological information from the netlist. Thus, we also present a new partitioning algorithm that exploits both the geometric embedding and netlist information, as well as a restricted partitioning formulation that requires each cluster of the k-way partitioning to be contiguous in a given linear ordering. We begin with a d-dimensional spectral embedding and construct a heuristic 1-dimensional ordering of the modules (combining a space-filling curve with 3-Opt approaches originally proposed for the traveling salesman problem). We then apply dynamic programming to efficiently compute the optimal k-way split of the ordering for a variety of objective functions, including Scaled Cost and Absorption. This approach can transparently integrate user-specified cluster size bounds. Experiments show that this technique yields multiway partitionings with lower Scaled Cost than previous spectral approaches.
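The final stage described in this abstract, computing the optimal k-way split of a 1-D module ordering, can be sketched as a standard interval dynamic program. The `cost` function below is a hypothetical stand-in for objectives such as Scaled Cost or Absorption; the quadratic example cost and all names are illustrative, not taken from the paper.

```python
def optimal_k_way_split(cost, n, k):
    """Split positions 0..n-1 of a linear ordering into k contiguous
    clusters minimizing total cost, where cost(i, j) prices the
    cluster covering positions [i, j)."""
    INF = float("inf")
    # best[m][j]: minimum cost of covering the first j positions with m clusters
    best = [[INF] * (n + 1) for _ in range(k + 1)]
    back = [[-1] * (n + 1) for _ in range(k + 1)]
    best[0][0] = 0.0
    for m in range(1, k + 1):
        for j in range(m, n + 1):
            for i in range(m - 1, j):  # last cluster is [i, j)
                c = best[m - 1][i] + cost(i, j)
                if c < best[m][j]:
                    best[m][j], back[m][j] = c, i
    # Walk the backpointers to recover the split points.
    cuts, j = [], n
    for m in range(k, 0, -1):
        j = back[m][j]
        cuts.append(j)
    return best[k][n], sorted(cuts)[1:]  # drop the leading 0
```

For instance, with module weights `[1, 1, 1, 9]` and a cluster cost of (sum of weights) squared, a 2-way split cuts just before the heavy module. User-specified cluster size bounds integrate transparently by having `cost(i, j)` return infinity for out-of-range cluster sizes.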

••

[...]

TL;DR: A general simulation method for three-dimensional surface advancement has been developed and coupled with physical models for etching and deposition and can support very complex structures with tunnels or regions of material which are completely disconnected from other regions.

Abstract: A general simulation method for three-dimensional surface advancement has been developed and coupled with physical models for etching and deposition. The surface advancement algorithm is based on morphological operations derived from image processing which are performed on a cellular material representation. This method allows arbitrary changes of the actual geometry according to a precalculated etch or deposition rate distribution and can support very complex structures with tunnels or regions of material which are completely disconnected from other regions. Surface loops which result from a growing or etching surface intersecting with itself are inherently avoided. The etch or deposition rate distribution along the exposed surface is obtained from macroscopic point advancement models which consider information about flux distributions and surface reactions of directly and indirectly incident particles.
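A minimal 2-D sketch of the cellular idea behind this abstract: material is a boolean grid, and one isotropic deposition step is a morphological dilation of the material into the empty cells. The paper's method is 3-D and weights the advance by a precalculated rate distribution; this toy version assumes a uniform unit rate and a square structuring element, and all names are illustrative.

```python
def dilate(material, radius=1):
    """One isotropic deposition step on a cellular material grid:
    every empty cell within Chebyshev distance `radius` of a material
    cell becomes material (dilation with a square structuring element)."""
    rows, cols = len(material), len(material[0])
    out = [row[:] for row in material]
    for r in range(rows):
        for c in range(cols):
            if material[r][c]:
                continue  # already material
            for dr in range(-radius, radius + 1):
                for dc in range(-radius, radius + 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols and material[rr][cc]:
                        out[r][c] = True
                        break
                else:
                    continue
                break
    return out
```

Because each step only turns cells on (or, for etching, off), self-intersecting surface loops cannot arise; disconnected regions and tunnels are just separate groups of occupied cells.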

••

[...]

TL;DR: A theory of composition for symbolic trajectory evaluation is presented and it is shown how implementing this theory using a specialized theorem prover is very attractive.

Abstract: Formal hardware verification based on symbolic trajectory evaluation shows considerable promise in verifying medium- to large-scale VLSI designs with a high degree of automation. However, in order to verify today's designs, a method for composing partial verification results is needed. This paper presents a theory of composition for symbolic trajectory evaluation and shows how implementing this theory using a specialized theorem prover is very attractive. Symbolic trajectory evaluation is used to prove low-level properties of a circuit, and these properties are combined using the prover. Providing a powerful and flexible interface to a coherent system (with automatic assistance in parts) reduces the load on the human verifier. This hybrid approach, coupled with a powerful and simple data representation, increases the range of circuits which can be verified using trajectory evaluation. The paper concludes with two examples. One example is the complete verification of a 64-bit multiplier, which takes approximately 15 minutes on a SPARC 10 machine.

••

[...]

TL;DR: This work considers the problem of correcting multiple design errors in combinational circuits and in finite-state machines; the finite-state machine method is based on pairwise distinguishing sequences for specification and implementation states and employs the same hardware correction scheme as the combinational method.

Abstract: We consider the problem of correcting multiple design errors in combinational circuits and in finite-state machines. The correction method introduced for combinational circuits uses a single error correction scheme iteratively to correct multiple errors. It uses a heuristic measure that guides the selection of single, local circuit modifications that reduce the distance between the incorrect implementation and the specification. The distance is measured by the size of the correction hardware: a block of logic that can be added to the implementation in order to correct it without performing additional circuit modifications. The correction method for finite-state machines is based on the use of pairwise distinguishing sequences for specification and implementation states, and employs the same hardware correction scheme. Experimental results are presented to support the effectiveness of the proposed methods.

••

[...]

TL;DR: A new resonant-tunnel diode (RTD) large-signal DC model suitable for PSPICE simulation is presented, which gives better accuracy and has fewer convergence problems than other RTD DC models.

Abstract: A new resonant-tunnel diode (RTD) large-signal DC model suitable for PSPICE simulation is presented in this paper. For better accuracy, the model equations are deliberately chosen as combinations of Gaussian and/or exponential functions, and the model can be easily implemented in PSPICE using the FUNCTION statement. Most of the parameters required by this new model have explicit relations to the measured I-V curves and can be easily extracted. This new DC model has been successfully applied to simulating single RTD devices and an RTD-based three-state memory circuit. Compared to other RTD DC models, the presented model gives better accuracy and exhibits fewer convergence problems. In addition, the new model can be used to simulate hysteresis effects, and can easily incorporate AC effects.
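A hedged sketch of the kind of I-V curve such a Gaussian-plus-exponential model produces: a Gaussian term for the resonant peak (giving the negative-differential-resistance region) plus an exponential diode term for the rising valley current. All parameter names and values below are illustrative assumptions, not the paper's extracted parameters.

```python
import math

def rtd_current(v, i_peak=1e-3, v_peak=0.3, sigma=0.08,
                i_sat=1e-12, vt=0.026):
    """Illustrative RTD DC I-V sketch (parameters hypothetical):
    a Gaussian models the resonant peak, so current falls just past
    v_peak (negative differential resistance), while an exponential
    diode term supplies the rising valley current at higher bias."""
    resonant = i_peak * math.exp(-((v - v_peak) ** 2) / (2.0 * sigma ** 2))
    diode = i_sat * (math.exp(v / vt) - 1.0)
    return resonant + diode
```

The characteristic peak-to-valley shape appears directly: current rises toward `v_peak`, drops through the NDR region, then rises again as the diode term takes over, which is the behavior an RTD memory circuit exploits.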