Book

Digital Systems Testing and Testable Design

TL;DR: The new edition of Breuer-Friedman's Diagnosis and Reliable Design of Digital Systems offers comprehensive and state-of-the-art treatment of both testing and testable design.
Abstract: For many years, Breuer-Friedman's Diagnosis and Reliable Design of Digital Systems was the most widely used textbook in digital system testing and testable design. Now, Computer Science Press makes available a new and greatly expanded edition. Incorporating a significant amount of new material related to recently developed technologies, the new edition offers comprehensive and state-of-the-art treatment of both testing and testable design.
Citations
Proceedings ArticleDOI
03 Mar 2003
TL;DR: This work presents a novel success-driven learning algorithm which significantly accelerates an ATPG engine for enumerating all solutions (preimages) and effectively prunes redundant search space due to overlapped solutions and constructs a free BDD on the fly so that it becomes the representation of the preimage set at the end.
Abstract: Preimage computation is a key step in formal verification. Pure OBDD-based symbolic method is vulnerable to the space-explosion problem. On the other hand, conventional ATPG/SAT-based method can handle large designs but can suffer from time explosion. Unlike methods that combine ATPG/SAT and OBDD, we present a novel success-driven learning algorithm which significantly accelerates an ATPG engine for enumerating all solutions (preimages). The algorithm effectively prunes redundant search space due to overlapped solutions and constructs a free BDD on the fly so that it becomes the representation of the preimage set at the end. Experimental results have demonstrated the effectiveness of the approach, in which we are able to compute preimages for large sequential circuits, where OBDD-based methods fail.

44 citations


Cites background or methods from "Digital Systems Testing and Testabl..."

  • ...6 Experimental Results The success-driven learning algorithm together with a basic PODEM and two ATPG enhancements (improved backtrace with conflict check [12], conflict analysis [11]) were implemented in 5,000 lines of C code....


  • ...Search State Equivalence. ...specified), backtrace to PIs along all the X-paths in its fanin cone; 2. record every specified input to all unspecified gate-outputs encountered in this depth-first search; 3. record the unspecified PIs at the end of each X-path....


  • ...Notice that unlike conventional PODEM which returns either SUCCESS or FAIL, this function does not have a return value because we always enforce a backtrack when a solution is found so that it can continue to search for the next solution....


  • ...The function eL() is a standard function in ATPG that uses controllability and observability as heuristics to find next decision PIs through backtracing along X-paths [1]....


  • ...A naive way of using PODEM to compute preimages is to enforce a backtrack whenever a solution is found so that the algorithm could continue to search for the next solution until the entire search space is implicitly enumerated....

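The naive all-solutions scheme quoted above (force a backtrack whenever a solution is found, so the search continues until the space is implicitly enumerated) can be sketched in a few lines. This is only an illustration of that baseline, not the paper's success-driven learning algorithm: there is no learned knowledge and no BDD construction, and the 3-input circuit z = (a AND b) OR (NOT c) is a hypothetical example. Three-valued (0/1/X) evaluation prunes branches whose output is already decided, so each solution comes out as a cube in which undecided PIs stay X.

```python
X = None  # the "unknown" value in 3-valued (0/1/X) logic

def and3(a, b):
    if a == 0 or b == 0: return 0
    return X if a is X or b is X else 1

def or3(a, b):
    if a == 1 or b == 1: return 1
    return X if a is X or b is X else 0

def not3(a):
    return X if a is X else 1 - a

def output(asgn):
    # hypothetical circuit under test: z = (a AND b) OR (NOT c)
    return or3(and3(asgn['a'], asgn['b']), not3(asgn['c']))

def enumerate_preimage(pis, asgn):
    """Enumerate all input cubes producing z = 1 by branching on one
    unassigned PI at a time; on finding a solution, record the cube and
    backtrack immediately so the search continues with the next branch."""
    z = output(asgn)
    if z == 1:
        yield dict(asgn)   # solution cube; still-unassigned PIs remain X
        return             # forced backtrack: keep enumerating
    if z == 0:
        return             # conflict: prune this branch
    pi = next(p for p in pis if asgn[p] is X)
    for v in (0, 1):
        asgn[pi] = v
        yield from enumerate_preimage(pis, asgn)
    asgn[pi] = X           # undo the decision on backtrack

cubes = list(enumerate_preimage('abc', {'a': X, 'b': X, 'c': X}))
# yields 4 disjoint cubes covering the preimage of z = 1,
# including {a: 1, b: 1, c: X} where c is left unassigned
```

The success-driven algorithm in the paper improves on exactly this loop by recognizing previously seen (equivalent) search states, so overlapping solution subtrees are not re-explored.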

Proceedings ArticleDOI
01 Oct 2007
TL;DR: Practical techniques are presented that enable diagnosis of defective library cells in a failing die; accurately differentiating between cell-internal and interconnect defects leads to faster root-cause failure analysis at reduced cost.
Abstract: In this paper we present practical techniques that enable diagnosis of defective library cells in a failing die. Our technique can handle large industrial designs and practical situations such as compressed test patterns with multiple exercising conditions per pattern and sequence-dependent defects. Being able to accurately differentiate between cell-internal and interconnect defects leads to faster root-cause failure analysis at reduced cost. This capability was applied to an AMD graphics chip manufactured in TSMC's 90 nm technology. In all of the failing dies that underwent physical failure analysis, the defective library cell identified by diagnosis was verified to be correct by failure analysis. Currently this capability is successfully used to diagnose another design using TSMC's 65 nm technology.

43 citations


Cites background from "Digital Systems Testing and Testabl..."

  • ...Logic diagnosis tools today [1][2][3][4][5][13][18] can determine the most likely location inside a failing die from which the failures originate....


Proceedings ArticleDOI
22 Jun 1993
TL;DR: A gate-level transient fault simulation environment, developed based on realistic fault models, is demonstrated on ISCAS-89 sequential benchmark circuits.
Abstract: Mixed analog and digital mode simulators have been available for accurate transient fault simulation. However, they are not fast enough to simulate a large number of transient faults on a relatively large circuit in a reasonable amount of time. The authors describe a gate-level transient fault simulation environment which has been developed based on realistic fault models. The simulation environment uses a timing fault simulator as well as a zero-delay parallel fault simulator. The timing fault simulator uses high level models of the actual transient fault phenomenon and latch operation to accurately propagate the fault effects to the latch outputs, after which point the zero-delay parallel fault simulator is used to speed up the simulation without any loss in accuracy. The simulation environment is demonstrated on ISCAS-89 sequential benchmark circuits.

43 citations
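The zero-delay parallel fault simulator mentioned in the abstract rests on a classic bit-parallel trick: each signal is a machine word in which bit 0 carries the fault-free circuit and every other bit one faulty copy, so a single bitwise gate evaluation simulates all machines at once. A minimal sketch under assumed details (a hypothetical two-gate circuit and three example stuck-at faults, not the paper's actual environment):

```python
MACHINES = 4                  # bit 0: fault-free; bits 1-3: one fault each
ALL = (1 << MACHINES) - 1     # mask covering all simulated machines

def inject(val, sa0=0, sa1=0):
    # force the bits selected by the sa0/sa1 masks to 0/1 (stuck-at sites)
    return (val & ~sa0 & ALL) | sa1

def simulate(a, b):
    """Hypothetical circuit: c = a AND b, d = NOT c.
    Machine 1: a stuck-at-0; machine 2: c stuck-at-1; machine 3: b stuck-at-0."""
    a = inject(a, sa0=0b0010)          # a s-a-0 only in machine 1
    b = inject(b, sa0=0b1000)          # b s-a-0 only in machine 3
    c = inject(a & b, sa1=0b0100)      # c s-a-1 only in machine 2
    return ~c & ALL                    # d = NOT c, evaluated for all machines

d = simulate(ALL, ALL)        # apply pattern a=1, b=1 to every machine at once
good = d & 1                  # fault-free response (bit 0)
detected = [m for m in range(1, MACHINES) if ((d >> m) & 1) != good]
# a/0 and b/0 flip the output; c/1 is masked (c is already 1), so detected == [1, 3]
```

This is the zero-delay back end only; the paper's point is to run a timing-accurate simulator first, until the transient's effect is latched, and only then hand off to bit-parallel zero-delay simulation with no loss in accuracy.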

Journal ArticleDOI
TL;DR: In this work, fault location based on a fault dictionary is considered at the chip level and a method to derive small dictionaries without losing resolution of modeled faults is proposed, based on extended pass/fail analysis.
Abstract: In this work, fault location based on a fault dictionary is considered at the chip level. To justify the use of a precomputed dictionary in terms of computation time, the computational effort invested in computing a dictionary is first analyzed. The number of circuit diagnoses that need to be performed dynamically, without the use of precomputed knowledge, before the overall diagnosis effort exceeds the effort of computing a dictionary, is studied. Experimental results on ISCAS-85 circuits show that for relatively small numbers of diagnoses, a precomputed dictionary is more efficient than dynamic diagnosis. Next, a method to derive small dictionaries without losing resolution of modeled faults is proposed, based on extended pass/fail analysis. The same procedure is applicable for selecting internal observation points to increase the resolution of the test set. Methods to compact the resulting dictionary further, using compaction techniques generally applied to fault detection, are then described. Experimental results are presented to demonstrate the effectiveness of the proposed methods.

43 citations


Cites background or methods or result from "Digital Systems Testing and Testabl..."

  • ...For fault detection, test compaction must be done such that the signature of the fault-free response is different from the signature of any modeled fault, otherwise, some loss of fault coverage occurs due to aliasing [1]....


  • ...For fault detection, several methods to compact test data into a signature have been proposed earlier [1]....


  • ...In the case where the response of the circuit cannot be explained by any of the modeled faults, the fault best matching the observed response is selected, and the fault is assumed to occur at the same location [1], [2], [3], [4], [5], [6], [7], [8]....


  • ...The problem of fault location at the chip level has been considered in numerous works [1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17], [18], [19]....


  • ...Previous works can be found in [1], [2], [3], [6]....

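The pass/fail dictionary idea in the abstract can be illustrated on a toy circuit. Everything below is an assumed example (a single 2-input AND gate with single stuck-at faults), not the paper's method: precompute one pass/fail signature per modeled fault over the test set, then diagnose by picking the modeled fault whose signature best matches the observed response, exactly as in the quoted matching strategy.

```python
from itertools import product

def simulate(fault, pattern):
    # 2-input AND gate; fault is None (fault-free) or a stuck-at like 'a/0'
    a, b = pattern
    if fault == 'a/0': a = 0
    if fault == 'a/1': a = 1
    if fault == 'b/0': b = 0
    if fault == 'b/1': b = 1
    z = a & b
    if fault == 'z/0': z = 0
    if fault == 'z/1': z = 1
    return z

TESTS = list(product((0, 1), repeat=2))          # exhaustive 2-input test set
FAULTS = ['a/0', 'a/1', 'b/0', 'b/1', 'z/0', 'z/1']

# dictionary entry = pass/fail signature: True where the fault is detected
dictionary = {f: tuple(simulate(f, t) != simulate(None, t) for t in TESTS)
              for f in FAULTS}

def diagnose(observed):
    # modeled fault whose signature best matches the observed pass/fail response
    return max(FAULTS, key=lambda f: sum(x == y
                                         for x, y in zip(dictionary[f], observed)))
```

Even this tiny example shows the resolution issue the paper addresses: a/0, b/0 and z/0 are equivalent faults (all fail only on pattern 11) and share one dictionary entry, which is why the paper proposes extended pass/fail analysis and internal observation points to increase resolution.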

Proceedings ArticleDOI
07 Mar 2005
TL;DR: The design of Seq-SAT, an efficient sequential SAT solver with improved search strategies over Satori, is described, including a decision variable selection heuristic more suitable for solving sequential problems.
Abstract: A sequential SAT solver, Satori, was recently proposed (Iyer, M.K. et al., Proc. IEEE/ACM Int. Conf. on Computer-Aided Design, 2003) as an alternative to combinational SAT in verification applications. This paper describes the design of Seq-SAT, an efficient sequential SAT solver with improved search strategies over Satori. The major improvements include: (1) a new and better heuristic for minimizing the set of assignments to state variables; (2) a new priority-based search strategy and a flexible sequential search framework which integrates different search strategies; (3) a decision variable selection heuristic more suitable for solving the sequential problems. We present experimental results to demonstrate that our sequential SAT solver can achieve orders-of-magnitude speedup over Satori. We plan to release the source code of Seq-SAT.

43 citations


Cites methods from "Digital Systems Testing and Testabl..."

  • ...The algorithm follows a similar process as the D-algorithm [8] originally proposed for ATPG and utilizes 3-value simulation....


  • ...The two # columns show the numbers after state reduction with modified D-algorithm (step 1) and then, with three value simulation (step 2), respectively....


  • ...This approach is based on the assumption that the D-algorithm can usually find a solution containing much fewer assignments than that of a SAT solver....


  • ...Hence, the D-algorithm is used as a trace procedure, not as a search procedure....


  • ...From the results, we can see that the modified D-algorithm could reduce most of the unnecessary assignments to the state variables, so the number of 3-value simulation runs can be greatly reduced....

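The state-reduction step described in the quotes above, lifting unnecessary state-variable assignments back to X as long as 3-valued simulation still justifies the objective, can be sketched as a greedy loop. The circuit, the variable order, and the one-pass greedy strategy are illustrative assumptions, not Seq-SAT's actual procedure:

```python
X = None  # the "unknown" value in 3-valued (0/1/X) logic

def and3(a, b):
    if a == 0 or b == 0: return 0
    return X if a is X or b is X else 1

def or3(a, b):
    if a == 1 or b == 1: return 1
    return X if a is X or b is X else 0

def output(asgn):
    # hypothetical objective over state variables: z = (s0 AND s1) OR s2
    return or3(and3(asgn['s0'], asgn['s1']), asgn['s2'])

def minimize(asgn, target=1):
    """Greedily lift each assignment to X; keep the X only if 3-valued
    simulation still forces the output to the target value."""
    asgn = dict(asgn)
    for var in asgn:
        saved, asgn[var] = asgn[var], X
        if output(asgn) != target:
            asgn[var] = saved   # this assignment was necessary; restore it
    return asgn

minimal = minimize({'s0': 1, 's1': 1, 's2': 1})
# s2 = 1 alone justifies z = 1, so s0 and s1 are lifted to X
```

Fewer concrete state-variable assignments mean a more general solution state, which is what makes the subsequent sequential search (and state-equivalence pruning) cheaper.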