Book

Digital Systems Testing and Testable Design

TL;DR: The new edition of Breuer-Friedman's Diagnosis and Reliable Design of Digital Systems offers comprehensive and state-of-the-art treatment of both testing and testable design.
Abstract: For many years, Breuer-Friedman's Diagnosis and Reliable Design of Digital Systems was the most widely used textbook in digital system testing and testable design. Now, Computer Science Press makes available a new and greatly expanded edition. Incorporating a significant amount of new material related to recently developed technologies, the new edition offers comprehensive and state-of-the-art treatment of both testing and testable design.
Citations
Journal ArticleDOI
TL;DR: Two low-cost approaches to graceful degradation-based permanent fault tolerance of ASPPs are presented and the effectiveness of the overall approach, the synthesis algorithms, and software implementations on a number of industrial-strength designs are demonstrated.
Abstract: Application Specific Programmable Processors (ASPPs) provide efficient implementation for any of m specified functionalities. Due to their flexibility and convenient performance-cost trade-offs, ASPPs are being developed by DSP, video, multimedia, and embedded IC manufacturers. In this paper, we present two low-cost approaches to graceful degradation-based permanent fault tolerance of ASPPs. ASPP fault tolerance constraints are incorporated during the scheduling, allocation, and assignment phases of behavioral synthesis. Graceful degradation is supported by implementing multiple schedules of the ASPP applications, each with a different throughput constraint. In this paper, we do not consider concurrent error detection. The first ASPP fault tolerance technique minimizes the hardware resources while guaranteeing that the ASPP remains operational in the presence of all k-unit faults. The second technique maximizes the ASPP fault tolerance subject to constraints on the hardware resources. These ASPP fault tolerance techniques impose several unique tasks, such as fault-tolerant scheduling, hardware allocation, and application-to-faulty-unit assignment. We address each of them and demonstrate the effectiveness of the overall approach, the synthesis algorithms, and software implementations on a number of industrial-strength designs.
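The k-unit fault guarantee of the first technique can be illustrated with a toy feasibility check. This is an invented sketch, not the paper's synthesis algorithm: the unit counts, application data, and the `tolerates_k_faults` helper are all hypothetical, and real behavioral synthesis schedules individual operations rather than bulk op counts.

```python
# Toy illustration (invented data, not the cited paper's algorithm):
# an allocation tolerates all k-unit faults if, for every set of k
# failed units, each application can still meet its step budget.
from itertools import combinations

def tolerates_k_faults(num_units, k, apps):
    """apps: list of (ops, max_steps). With some units failed, an app
    is schedulable iff ceil(ops / working_units) <= max_steps.
    (Units here are interchangeable; in general they need not be,
    which is why all k-subsets are enumerated.)"""
    for failed in combinations(range(num_units), k):
        working = num_units - len(failed)
        for ops, max_steps in apps:
            if -(-ops // working) > max_steps:   # ceiling division
                return False
    return True

apps = [(8, 4), (6, 3)]                      # (operations, step budget)
print(tolerates_k_faults(4, 1, apps))        # True: 3 units still suffice
print(tolerates_k_faults(4, 3, apps))        # False: 1 unit needs 8 > 4 steps
```

The same check, run with k as a variable, separates the paper's two problem directions: minimize units subject to k-fault tolerance, or maximize tolerated k subject to a unit budget.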

24 citations


Cites methods from "Digital Systems Testing and Testabl..."

  • ...We assume a widely used single stuck-at fault model [ 1 ]....

    [...]
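The single stuck-at fault model referenced in the snippet above can be sketched in a few lines. The three-input circuit, the `circuit` and `detects` helpers, and the line names are all invented for illustration; only the model itself (one line forced to a constant, detection by output difference) comes from the text.

```python
# Minimal illustration of the single stuck-at fault model: exactly one
# line is assumed permanently stuck at logic 0 or 1, and a test pattern
# detects the fault iff the faulty output differs from the good output.
# The circuit y = (a AND b) OR c is invented for this example.

def circuit(a, b, c, fault=None):
    """Evaluate the circuit, optionally forcing one line to a value."""
    def line(name, v):
        if fault is not None and fault[0] == name:
            return fault[1]          # line stuck at fault[1]
        return v
    a, b, c = line("a", a), line("b", b), line("c", c)
    n1 = line("n1", a & b)           # internal AND-gate output
    return line("y", n1 | c)         # primary output

def detects(pattern, fault):
    """True iff the pattern distinguishes faulty from fault-free circuit."""
    return circuit(*pattern) != circuit(*pattern, fault=fault)

print(detects((1, 1, 0), ("n1", 0)))   # True: output drops from 1 to 0
print(detects((1, 1, 0), ("n1", 1)))   # False: output is 1 either way
```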

Journal ArticleDOI
TL;DR: Using the STUMPS architecture, this paper shows a simple method of salvaging future test windows by adjusting their expected signatures to fit past observed errors.
Abstract: This paper uses the STUMPS architecture to study the properties of a new diagnostic procedure. According to the old procedure, the process stops at the end of each test window to compare the measured signature against its precomputed value. The old procedure also calls for the abandonment of all future test windows after the first failing one is encountered. This is due to the unavailability of expected future test window signatures in the presence of a previously captured error. This paper shows a simple method of salvaging future test windows by adjusting their expected signatures to fit past observed errors. Experiments conducted using this new procedure reveal an improvement of at least one order of magnitude in diagnostic resolution over what has been previously experienced.
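The salvaging idea rests on the fact that a signature register compresses each window deterministically from its starting state, so an expected signature can be recomputed from the observed (erroneous) state instead of the precomputed fault-free one. A minimal sketch, with an invented 8-bit register polynomial and made-up response data (not the paper's STUMPS configuration):

```python
# Sketch of window-signature adjustment. The register width, feedback
# taps, and response data are invented; only the principle matches the
# abstract: a window's expected signature depends deterministically on
# the register's starting state, so it can be recomputed from an
# observed erroneous state rather than the window being abandoned.

WIDTH, MASK, TAPS = 8, 0xFF, 0b00011101   # illustrative 8-bit MISR

def misr_step(state, response):
    """Shift the register once, folding in one response word."""
    fb = TAPS if (state >> (WIDTH - 1)) & 1 else 0
    return ((state << 1) & MASK) ^ fb ^ response

def window_signature(start_state, responses):
    """Compress one test window's responses into a signature."""
    s = start_state
    for r in responses:
        s = misr_step(s, r)
    return s

windows = [[0x3A, 0x15, 0x77], [0x01, 0x52, 0x2C]]   # fault-free data
expected, s = [], 0
for w in windows:                  # precomputed fault-free expectations
    s = window_signature(s, w)
    expected.append(s)

# Window 0 captured an error, leaving the register in a wrong state.
# Recompute window 1's expectation from that observed state instead of
# discarding the window:
observed_after_w0 = expected[0] ^ 0x40   # pretend one bit was flipped
adjusted_w1 = window_signature(observed_after_w0, windows[1])
```

Window 1 can now still be judged pass/fail against `adjusted_w1`, which is how later windows are salvaged rather than abandoned.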

24 citations

Proceedings ArticleDOI
01 Nov 1996
TL;DR: A very robust test design is achieved by systematically considering measurement noise, by selecting the most significant measurements, and by using the most meaningful samples, covering parametric as well as catastrophic faults.
Abstract: Test design of analog circuits based on statistical methods for decision making is a topic of growing interest. The major problem of such statistical approaches with respect to industrial applicability is the confidence with which the determined test criteria can be applied in production testing. This mainly concerns the treatment of measurement noise, the choice of measurements, and the required training and validation samples. These crucial topics are addressed in this paper. By exploiting experience from the statistical design of analog circuits and from pattern recognition methods, efficient solutions to these problems are provided. A very robust test design is achieved by systematically considering measurement noise, by selecting the most significant measurements, and by using the most meaningful samples. Moreover, both parametric and catastrophic faults are covered by applying digital testing methods.

23 citations


Cites background from "Digital Systems Testing and Testabl..."

  • ...The problem of finding a minimal number of additional tests, such that all faults that can be detected by the set of all measurements are also detected by the set of selected tests, can be formulated as a covering problem [25]....

    [...]

  • ...With the results of this fault simulation, a test set compaction [25] is performed, i....

    [...]
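The covering formulation in the snippets above is commonly attacked with a greedy heuristic: repeatedly pick the test that detects the most still-uncovered faults (minimum cover is NP-hard in general). A minimal sketch with invented fault-detection data, not the cited paper's actual test sets:

```python
# Greedy sketch of test selection as a covering problem: keep choosing
# the test that covers the most uncovered faults until every fault
# detectable by the full set is detected by the selection.
# The fault/test data below are made up for illustration.

def greedy_test_selection(detects):
    """detects: dict mapping test name -> set of faults it detects."""
    uncovered = set().union(*detects.values())   # all detectable faults
    selected = []
    while uncovered:
        # Pick the test detecting the most still-uncovered faults.
        best = max(detects, key=lambda t: len(detects[t] & uncovered))
        selected.append(best)
        uncovered -= detects[best]
    return selected

detects = {
    "t1": {"f1", "f2"},
    "t2": {"f2", "f3", "f4"},
    "t3": {"f4"},
    "t4": {"f1", "f5"},
}
print(greedy_test_selection(detects))   # -> ['t2', 't4']
```

The same routine doubles as a sketch of the test set compaction step mentioned in the second snippet: run fault simulation to fill `detects`, then keep only the selected tests.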

Proceedings ArticleDOI
14 Jun 2006
TL;DR: In this paper, a nonlinear model-based adaptive robust state fault detection scheme is proposed that combines online parameter adaptation with robust filter structures to reduce the extent of model uncertainty and improve the sensitivity of the fault detection scheme to faults.
Abstract: A goal in many applications is to combine a priori knowledge of the physical system with experimental data to detect faults at an early enough stage to conduct preventive maintenance. The information available beforehand is the mathematical model of the physical system, and the key issue in the design of model-based fault detection is the effect of model uncertainties, such as severe parametric uncertainties and unmodeled dynamics, on its performance. This paper presents the application of a nonlinear model-based adaptive robust state fault detection scheme that combines online parameter adaptation with robust filter structures to reduce the extent of model uncertainty and thereby improve the sensitivity of the fault detection scheme to faults. Simulation results are presented to demonstrate the superior performance of the proposed scheme in the early and reliable detection of incipient faults.
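The interplay of online parameter adaptation and residual-based detection can be illustrated on a toy first-order plant. This is a hedged sketch, not the paper's robust filter design: the plant model, forgetting factor, threshold, and fault are all invented, and plain recursive least squares stands in for the paper's adaptation law.

```python
# Toy residual-based fault detection with online parameter adaptation
# (invented example, not the cited paper's scheme): slow model error is
# absorbed by adapting the parameter estimate, while an abrupt fault
# produces a large one-step prediction residual that trips a threshold.

def detect_fault(ys, us, lam=0.95, threshold=0.5, warmup=3):
    """First-order plant model y[k] = a*y[k-1] + u[k]; estimate `a` by
    exponentially weighted recursive least squares; flag a fault when
    the prediction residual exceeds `threshold` (after a short warmup
    while the estimate converges)."""
    a_hat, p = 0.0, 100.0                  # estimate and covariance
    alarms = []
    for k in range(1, len(ys)):
        phi = ys[k - 1]                    # regressor
        r = ys[k] - (a_hat * phi + us[k])  # one-step prediction residual
        alarms.append(k > warmup and abs(r) > threshold)
        g = p * phi / (lam + phi * p * phi)    # RLS gain
        a_hat += g * r
        p = (p - g * phi * p) / lam
    return alarms

# Healthy plant with a = 0.8; an additive fault of +2 appears at k = 30.
ys, us = [1.0], [0.0]
for k in range(1, 60):
    fault = 2.0 if k >= 30 else 0.0
    us.append(0.1)
    ys.append(0.8 * ys[-1] + 0.1 + fault)

alarms = detect_fault(ys, us)
print(alarms.index(True) + 1)   # first alarm at k = 30, the fault onset
```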

23 citations

Journal ArticleDOI
TL;DR: A simple test structure that is a minor extension to current scan-test and built-in self-test structures and that can be used to estimate the error rate of a circuit, and an elegant mathematical model that describes the key parameters associated with this test process and incorporates bounds on the error in estimating error rate are developed.
Abstract: As feature size approaches molecular dimensions and the number of devices per chip reaches astronomical values, VLSI manufacturing yield significantly decreases. This motivates interest in new computing models. One such model is called error tolerance. Classically, during the postmanufacturing test process, chips are classified as bad (defective) or good. The main premise in error-tolerant computing is that some bad chips that fail classical go/no-go tests, and do indeed occasionally produce erroneous results, actually provide acceptable performance in some applications. Thus, new test techniques are needed to classify bad chips into categories based upon their degree of acceptability with respect to predetermined applications. One classification criterion is error rate. In this paper, we first describe a simple test structure that is a minor extension to current scan-test and built-in self-test structures and that can be used to estimate the error rate of a circuit. We then address three theoretical issues. First, we develop an elegant mathematical model that describes the key parameters associated with this test process and incorporates bounds on the error in estimating error rate and the level of confidence in this estimate. Next, we present an efficient testing procedure for estimating the error rate of a circuit under test. Finally, we address the problem of assigning bad chips to bins based on their error rate. We show that this can be done in an efficient, hence cost-effective, way and discuss the quality of our results in terms of such concepts as increased effective yield, yield loss, and test escape.
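The sample-size side of error-rate estimation can be sketched with a standard Hoeffding bound: apply n patterns, count erroneous responses, and take k/n as the estimate. This is an illustration, not the paper's exact model or bounds; `samples_needed`, the simulated chip, and its true error rate are invented.

```python
# Illustrative sketch (not the cited paper's model): estimate a chip's
# error rate by sampling. With n random patterns and k observed errors,
# the estimate k/n is within eps of the true rate with confidence
# 1 - delta once n >= ln(2/delta) / (2 * eps^2)  (Hoeffding bound).
import math
import random

def samples_needed(eps, delta):
    """Smallest n guaranteed sufficient by the Hoeffding bound."""
    return math.ceil(math.log(2 / delta) / (2 * eps ** 2))

def estimate_error_rate(true_rate, n, rng):
    """Simulate applying n patterns to a chip with the given true
    error rate; return the observed error-rate estimate."""
    k = sum(rng.random() < true_rate for _ in range(n))
    return k / n

rng = random.Random(0)                    # fixed seed for repeatability
n = samples_needed(eps=0.01, delta=0.05)  # -> 18445 patterns
est = estimate_error_rate(0.07, n, rng)
print(n, round(est, 3))                   # estimate lands near 0.07
```

Binning chips by error rate, as in the paper's last problem, then amounts to comparing such estimates (with their confidence intervals) against bin thresholds.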

23 citations


Cites background or methods from "Digital Systems Testing and Testabl..."

  • ...Since the focus of this paper is on estimating error rate, and not BIST, we ignore issues such as aliasing, reseeding, injecting deterministic test patterns, and fault coverage [ 4 ]....

    [...]

  • ...1.2 Built-In Self-Test (BIST) for Error Rate Currently, BIST is a common technique used to support the testing of a chip [ 4 ]....

    [...]

  • ...As an aside, it is known that it is relatively time consuming for an automatic test-pattern generation (ATPG) tool to generate a test for a random pattern resistant fault [ 4 ]....

    [...]

  • ...There are many variations of this test architecture that are nearly isomorphic in function to this BILBO technique and that are widely used in commercial systems (see [ 4, Chapter 11 ])....

    [...]