Book•

Digital Systems Testing and Testable Design

TL;DR: The new edition of Breuer-Friedman's Diagnosis and Reliable Design of Digital Systems offers a comprehensive and state-of-the-art treatment of both testing and testable design.
Abstract: For many years, Breuer-Friedman's Diagnosis and Reliable Design of Digital Systems was the most widely used textbook in digital system testing and testable design. Now, Computer Science Press makes available a new and greatly expanded edition. Incorporating a significant amount of new material related to recently developed technologies, the new edition offers a comprehensive and state-of-the-art treatment of both testing and testable design.
Citations
Journal Article•DOI•
28 Apr 1996
TL;DR: A general method for determining whether a certain design is initializable, and for generating its initialization sequence, is presented, based on structural decomposition of the circuit and can handle both logical and functional initializability.
Abstract: A general method for determining whether a certain design is initializable, and for generating its initialization sequence, is presented in this paper. This method is based on structural decomposition of the circuit, and can handle both logical (using X-value simulation) and functional initializability. The routines developed are then used for ATPG of sequential circuits.

12 citations


Cites background from "Digital Systems Testing and Testabl..."

  • ...Most logic simulators use ternary logic [3], with three logic values: 0, 1 and X, which is referred to as X-value simulation....

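The ternary logic mentioned in the excerpt above can be sketched in a few lines. This is a minimal illustrative example, not code from the cited work: each gate evaluates over the values 0, 1 and X, with a controlling input (0 for AND, 1 for OR) dominating even an unknown input.

```python
# X-value (ternary) simulation sketch: gates over the values 0, 1 and X.
X = "X"  # the unknown logic value

def and3(a, b):
    # A controlling 0 forces the output regardless of the other input.
    if a == 0 or b == 0:
        return 0
    if a == X or b == X:
        return X
    return 1

def or3(a, b):
    # A controlling 1 forces the output regardless of the other input.
    if a == 1 or b == 1:
        return 1
    if a == X or b == X:
        return X
    return 0

def not3(a):
    # NOT of an unknown stays unknown.
    return X if a == X else 1 - a
```

Note that `and3(0, X)` returns 0 while `and3(1, X)` stays X, which is exactly why X-value simulation can prove that some flip-flops reach a known value during initialization while others do not.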

Proceedings Article•DOI•
10 Oct 2004
TL;DR: A fading scheme is proposed that can effectively reduce the potentially huge memory requirement and long running time without sacrificing much accuracy and show that sequential fault diagnosis can actually be done effectively and accurately with reasonable CPU time.
Abstract: Fault diagnosis algorithms for logic designs with only partial scan support remain inadequate so far because of the difficulties in dealing with the sequential fault effect. In this paper, we enhance our previous symbolic techniques to address such a challenge. Along with the baseline enhancement, we also propose a fading scheme that can effectively reduce the potentially huge memory requirement and long running time without sacrificing much accuracy. This fading algorithm incorporates a commonly used concept called 'local fault effect', using symbolic techniques. Experimental results show that sequential fault diagnosis can actually be done effectively and accurately with reasonable CPU time.

12 citations

Book Chapter•DOI•
Zurab Khasidashvili, Alexander Nadel•
06 Dec 2011
TL;DR: It is shown that the new invariant strengthening algorithm alone performs better than induction and interpolation, and that the absolutely best result is achieved when it is combined with interpolation.
Abstract: This paper proposes an efficient algorithm for the systematic learning of implications. This is done as part of a new search and restart strategy in the SAT solver. We evaluate the new algorithm within a number of applications, including BMC and induction with invariant strengthening for equivalence checking. We provide extensive experimental evidence attesting to a speedup of one and often two orders of magnitude with our algorithm, on a representative set of industrial and publicly available test suites, as compared to a basic version of invariant strengthening. Moreover, we show that the new invariant strengthening algorithm alone performs better than induction and interpolation, and that the absolutely best result is achieved when it is combined with interpolation. In addition, we experimentally demonstrate the superiority of an application of our new algorithm to BMC.

12 citations

Journal Article•DOI•
TL;DR: An automated design validation scheme for gate-level combinational and sequential circuits that borrows methods from simulation and test generation for physical faults, and verifies a circuit with respect to a modeled set of design errors is investigated.
Abstract: We investigate an automated design validation scheme for gate-level combinational and sequential circuits that borrows methods from simulation and test generation for physical faults, and verifies a circuit with respect to a modeled set of design errors. The error models used in prior research are examined and reduced to five types: gate substitution errors (GSEs), gate count errors (GCEs), input count errors (ICEs), wrong input errors (WIEs), and latch count errors (LCEs). Conditions are derived for a gate to be testable for GSEs, which lead to small, complete test sets for GSEs; near-minimal test sets are also derived for GCEs. We analyze undetectability in design errors and relate it to single stuck-line (SSL) redundancy. We show how to map all the foregoing error types into SSL faults, and describe an extensive set of experiments to evaluate the proposed method. These experiments demonstrate that high coverage of the modeled errors can be achieved with small test sets obtained with standard test generation and simulation tools for physical faults.

12 citations

Proceedings Article•DOI•
11 Dec 2009
TL;DR: A scalable SAT-based design debugging algorithm that uses interpolants to over-approximate sets of constraints that model the erroneous behavior and significantly reduces the number of simultaneous time-frames examined in the error trace.
Abstract: Given an erroneous design, functional verification returns an error trace exhibiting a mismatch between the specification and the implementation of a design. Automated design debugging uses these error traces to identify potentially erroneous modules causing the error. With the increasing size and complexity of modern VLSI designs, error traces have become longer and harder to analyze. At the same time, design debugging has become one of the most resource-intensive steps in the chip design cycle. This work proposes a scalable SAT-based design debugging algorithm that uses interpolants to over-approximate sets of constraints that model the erroneous behavior. The algorithm partitions the original problem into a sequence of smaller subproblems by using subsections of the error trace that are examined iteratively. This is made possible by using interpolants to properly constrain the erroneous behavior for each subproblem, significantly reducing the number of simultaneous time-frames examined in the error trace. The described method is shown to be complete and an additional technique is presented to improve the quality of the debugging results using multiple interpolants. Experiments on real designs show a 57% reduction in memory and 23% decrease in run-time compared to previous work.
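The time-frame view that the partitioning above operates on can be sketched simply: a sequential design is unrolled so that each step of the error trace becomes one combinational frame, and debugging then examines a window of frames at a time. The toggle-register transition function below is a hypothetical stand-in for a real design, not an example from the paper.

```python
def unroll(next_state, output, init_state, input_trace):
    """Unroll a sequential design over the time frames of an input trace.

    next_state(s, x) -> s'   transition function for one frame
    output(s, x) -> y        output function for one frame
    Returns the per-frame outputs, one per trace step.
    """
    state, frame_outputs = init_state, []
    for x in input_trace:
        frame_outputs.append(output(state, x))  # frame i output
        state = next_state(state, x)            # state carried to frame i+1
    return frame_outputs

# Hypothetical 1-bit toggle register: state flips whenever the input is 1.
toggle = lambda s, x: s ^ x
observe = lambda s, x: s
trace = unroll(toggle, observe, 0, [1, 0, 1])
```

In the interpolant-based scheme, the state carried between frames is what an interpolant over-approximates, so each subproblem can be solved over a short window instead of the whole unrolled trace.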

12 citations