Book

Digital Systems Testing and Testable Design

TL;DR: The new edition of Breuer-Friedman's Diagnosis and Reliable Design of Digital Systems offers comprehensive and state-of-the-art treatment of both testing and testable design.
Abstract: For many years, Breuer-Friedman's Diagnosis and Reliable Design of Digital Systems was the most widely used textbook in digital system testing and testable design. Now, Computer Science Press makes available a new and greatly expanded edition. Incorporating a significant amount of new material related to recently developed technologies, the new edition offers comprehensive and state-of-the-art treatment of both testing and testable design.
Citations
Proceedings ArticleDOI
20 Sep 1994
TL;DR: This paper discusses algorithms and their complexity for diagnosing multiple failures using the information flow model, and provides a formal analysis of the multiple-failure problem in the context of one model-based approach.
Abstract: Model-based diagnostic systems have generally avoided the issue of multiple-failure diagnosis due to the computational complexity of covering all possible multiple faults while still providing an efficient diagnostic strategy. Optimization of decision trees is already known to be NP-complete, and the number of combinations of multiple faults only serves to exacerbate the problem. Nevertheless, model-based diagnosis is becoming popular, and the need for multiple-failure diagnosis is real. In this paper, we provide a formal analysis of the multiple-failure problem in the context of one model-based approach. Specifically, we discuss algorithms and their complexity for diagnosing multiple failures using the information flow model.
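
The covering formulation the abstract refers to lends itself to a short illustration. The sketch below is a minimal, hypothetical example of multiple-fault candidate generation by set covering; the fault signatures and the exhaustive enumeration are illustrative assumptions, not the paper's information flow model:

```python
from itertools import combinations

# Hypothetical fault signatures: fault -> set of tests that fail when the
# fault is present (illustrative only; the paper works with the information
# flow model rather than explicit signatures).
signatures = {
    "f1": {"t1", "t3"},
    "f2": {"t2"},
    "f3": {"t1", "t2"},
}

def multiple_fault_candidates(observed, signatures, max_size=2):
    """Enumerate fault sets up to max_size whose combined signature explains
    exactly the observed failing tests. The enumeration is exponential in
    max_size, which is the combinatorial blow-up the abstract refers to."""
    candidates = []
    for size in range(1, max_size + 1):
        for combo in combinations(signatures, size):
            covered = set().union(*(signatures[f] for f in combo))
            if covered == observed:
                candidates.append(combo)
    return candidates

print(multiple_fault_candidates({"t1", "t2", "t3"}, signatures))
# [('f1', 'f2'), ('f1', 'f3')]
```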

10 citations

Proceedings ArticleDOI
01 Nov 1996
TL;DR: A new approach to permanent fault tolerance, Heterogeneous Built-In-Resiliency (HBIR), is developed, and the effectiveness of the overall approach, the synthesis algorithms, and software implementations is demonstrated on a number of designs.
Abstract: Using the flexibility provided by multiple functionalities, we have developed a new approach to permanent fault tolerance: Heterogeneous Built-In-Resiliency (HBIR). HBIR processor synthesis imposes several unique tasks on the synthesis process: (i) latency determination targeting k-unit fault tolerance, (ii) application-to-faulty-unit matching, and (iii) HBIR scheduling and assignment algorithms. We address each of them and demonstrate the effectiveness of the overall approach, the synthesis algorithms, and software implementations on a number of designs.

10 citations

Proceedings ArticleDOI
08 Dec 2003
TL;DR: Experimental results indicate that bridging faults can be accurately diagnosed, delivering a reduction in the sizes of the ambiguity sets and full capture of the offending bridging fault.
Abstract: Although the stuck-at fault model is the standard fault model, the frequently occurring faults in some technologies are unintentional shorts, denoted as bridging faults. We outline a method that utilizes information from the stuck-at fault model to accurately diagnose bridging faults that affect two lines. The proposed method exploits the observation that the bridging fault response matches the stuck-at fault responses on the shorted lines for the failing test vectors, and it generates a candidate list that accounts for all failures. A further reduction in the size of the candidate set is achieved by extracting information from the test vectors that do not fail. The proposed method uses no layout information whatsoever. Nonetheless, the experimental results indicate that the bridging faults can be accurately diagnosed, delivering a reduction in the sizes of the ambiguity sets and full capture of the offending bridging fault.
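
The matching idea can be sketched briefly. The candidate generation below is a minimal illustration under assumed data; the stuck-at response dictionary, the line names, and the final pruning comment are hypothetical, not the authors' implementation:

```python
from itertools import combinations

# Hypothetical dictionary: line -> set of failing test vectors whose observed
# responses match a stuck-at fault on that line (either polarity).
stuck_at_matches = {
    "a": {"t1", "t2", "t4"},
    "b": {"t1", "t3"},
    "c": {"t3", "t4"},
}

def bridging_candidates(failing_tests, lines):
    """A line pair (x, y) is a candidate bridge if every failing test matches
    a stuck-at response on at least one of the two shorted lines."""
    candidates = []
    for x, y in combinations(lines, 2):
        explained = stuck_at_matches[x] | stuck_at_matches[y]
        if failing_tests <= explained:  # all failures accounted for
            candidates.append((x, y))
    return candidates

# Tests t1..t4 failed; only pairs that account for all four survive.
print(bridging_candidates({"t1", "t2", "t3", "t4"}, ["a", "b", "c"]))
# [('a', 'b'), ('a', 'c')]
# A further reduction, as in the abstract, would drop pairs that should
# also have failed some test vector that actually passed.
```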

10 citations


Cites background from "Digital Systems Testing and Testable Design"

  • ...AND), otherwise as an OR gate (wired-OR) [14]....


Proceedings ArticleDOI
16 Feb 2004
TL;DR: A new approach to fault collapsing is described and studied experimentally; it extends collapsing based on fault equivalence and fault dominance with a metric called the level of similarity between faults.
Abstract: We describe a new approach to fault collapsing that extends fault collapsing based on fault equivalence and fault dominance. The new approach is based on a metric called the level of similarity between faults. Informally, a fault f_j is said to be similar to a fault f_i with a level of similarity SL_{i,j} ≤ 1 if a fraction SL_{i,j} of the tests for f_i also detect f_j. If SL_{i,j} is high enough, one may exclude f_j from the set of target faults and rely on the tests for f_i (and tests for other faults) to detect f_j. We describe a procedure for fault collapsing based on the level of similarity, and study its effectiveness experimentally.
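
Since the metric is defined explicitly, it is easy to sketch. The detection sets and the 0.7 threshold below are hypothetical; only the SL_{i,j} formula comes from the abstract:

```python
# Hypothetical detection data: fault -> set of tests that detect it.
detects = {
    "f1": {"t1", "t2", "t3", "t4"},
    "f2": {"t2", "t3", "t4"},
}

def similarity(detects, f_i, f_j):
    """SL_{i,j}: the fraction of the tests for f_i that also detect f_j."""
    tests_i = detects[f_i]
    return len(tests_i & detects[f_j]) / len(tests_i) if tests_i else 0.0

# SL_{1,2} = 3/4 = 0.75: three of the four tests for f1 also detect f2.
# With an (assumed) threshold of 0.7, f2 is dropped from the target list
# and the tests for f1 are relied on to detect it.
targets = {"f1", "f2"}
if similarity(detects, "f1", "f2") >= 0.7:
    targets.discard("f2")
print(targets)  # {'f1'}
```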

10 citations

Proceedings ArticleDOI
06 Nov 1994
TL;DR: The Inversion Algorithm is an event-driven algorithm whose performance rivals or exceeds that of levelized compiled-code simulation, even at activity rates of 50% or more, despite the small size of its run-time code.
Abstract: The Inversion Algorithm is an event-driven algorithm whose performance rivals or exceeds that of levelized compiled-code simulation, even at activity rates of 50% or more. The Inversion Algorithm has several unique features, the most remarkable of which is the size of the run-time code. The basic algorithm can be implemented using no more than a page of run-time code, although in practice it is more efficient to provide several different variations of the basic algorithm. The run-time code is independent of the circuit under test, so the algorithm can be implemented either as a compiled-code or an interpreted simulator with little variation in performance. Because of the small size of the run-time code, the run-time portions of the Inversion Algorithm can be implemented in assembly language for peak efficiency and still be retargeted to new platforms with little effort.
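
The abstract does not spell out the algorithm's mechanics, but the event-driven style it builds on can be shown generically. The sketch below is a plain event-driven gate evaluation loop, not the Inversion Algorithm itself; the netlist and net names are hypothetical:

```python
from collections import deque

# Hypothetical two-gate netlist: gate -> (function, input nets, output net).
netlist = {
    "g1": (lambda a, b: a & b, ("a", "b"), "n1"),   # AND gate
    "g2": (lambda a, b: a | b, ("n1", "c"), "out"), # OR gate
}
fanout = {"a": ["g1"], "b": ["g1"], "n1": ["g2"], "c": ["g2"]}

def simulate(values, changed_nets):
    """Generic event-driven evaluation: only gates on the fanout of a net
    that changed are re-evaluated, and a gate whose output changes schedules
    further events downstream."""
    events = deque(changed_nets)
    while events:
        net = events.popleft()
        for gate in fanout.get(net, []):
            fn, ins, out = netlist[gate]
            new_value = fn(*(values[i] for i in ins))
            if values[out] != new_value:
                values[out] = new_value
                events.append(out)  # propagate the change
    return values

v = {"a": 0, "b": 0, "c": 0, "n1": 0, "out": 0}  # start from all-zero state
v["a"], v["b"] = 1, 1                            # apply new input values
print(simulate(v, ["a", "b"])["out"])            # 1 (c never re-evaluated)
```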

10 citations