
Showing papers on "Fault coverage published in 1990"


Journal ArticleDOI
TL;DR: A structured definition of hardware- and software-fault-tolerant architectures is presented; three of the architectures are analyzed and evaluated, and a sidebar addresses the cost issues related to software fault tolerance.
Abstract: A structured definition of hardware- and software-fault-tolerant architectures is presented. Software-fault-tolerance methods are discussed, resulting in definitions for soft and solid faults. A soft software fault has a negligible likelihood of recurrence and is recoverable, whereas a solid software fault is recurrent under normal operations or cannot be recovered. A set of hardware- and software-fault-tolerant architectures is presented, and three of them are analyzed and evaluated. Architectures tolerating a single fault and architectures tolerating two consecutive faults are discussed separately. A sidebar addresses the cost issues related to software fault tolerance. The approach taken throughout is as general as possible, dealing with specific classes of faults or techniques only when necessary.

359 citations


Journal ArticleDOI
TL;DR: A method of partial scan design is presented in which the selection of scan flip-flops is aimed at breaking up the cyclic structure of the circuit.
Abstract: A method of partial scan design is presented in which the selection of scan flip-flops is aimed at breaking up the cyclic structure of the circuit. Experimental data are given to show that the test generation complexity may grow exponentially with the length of the cycles in the circuit. This complexity grows only linearly with the sequential depth. Graph-theoretic algorithms are presented to select a minimal set of flip-flops for eliminating cycles and reducing the sequential depth. Tests for the resulting circuit are generated by a sequential logic test generator. An independent control of the scan clock allows insertion of scan sequences within the vector sequence produced by the test generator. 98% fault coverage is obtained for a 5000-gate circuit by scanning just 5% of the flip-flops.
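The selection problem the abstract describes can be framed as finding a feedback vertex set in the flip-flop dependency graph: scan enough flip-flops that no cycle of unscanned flip-flops remains. The sketch below is a generic greedy heuristic under that framing, not the paper's actual graph-theoretic algorithm; the graph representation and the degree-product scoring rule are illustrative assumptions.

```python
# Hypothetical sketch: break all cycles in a flip-flop dependency graph
# (vertex = flip-flop, edge u -> v = combinational path from u to v) by
# greedily selecting flip-flops for scan. Not the paper's exact algorithm.

def find_cycle(graph, removed):
    """Return one cycle (list of vertices) avoiding `removed`, else None."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}

    def dfs(v, path):
        color[v] = GRAY
        path.append(v)
        for w in graph.get(v, ()):
            if w in removed:
                continue
            c = color.get(w, WHITE)
            if c == GRAY:                     # back edge: cycle found
                return path[path.index(w):]
            if c == WHITE:
                found = dfs(w, path)
                if found:
                    return found
        path.pop()
        color[v] = BLACK
        return None

    for start in graph:
        if start not in removed and color.get(start, WHITE) == WHITE:
            found = dfs(start, [])
            if found:
                return found
    return None

def break_cycles(graph):
    """Greedy feedback-vertex-set heuristic: scan flip-flops until the
    dependency graph is acyclic. Returns the set of scanned flip-flops."""
    scanned = set()
    while True:
        cycle = find_cycle(graph, scanned)
        if cycle is None:
            return scanned
        indeg, outdeg = {}, {}
        for v, succs in graph.items():
            if v in scanned:
                continue
            live = [w for w in succs if w not in scanned]
            outdeg[v] = len(live)
            for w in live:
                indeg[w] = indeg.get(w, 0) + 1
        # Break the cycle at its "busiest" vertex (max fanin * fanout),
        # a common greedy score for feedback vertex set heuristics.
        scanned.add(max(cycle, key=lambda v: indeg.get(v, 0) * outdeg.get(v, 0)))
```

On a toy graph with one cycle a→b→c→a plus a feeder d→a, the heuristic scans a single flip-flop and the remaining graph is acyclic, matching the abstract's point that a small scanned fraction can remove all cycles.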

346 citations


Journal ArticleDOI
TL;DR: FIAT is capable of emulating a variety of distributed system architectures and it provides the capabilities to monitor system behavior and inject faults for the purpose of experimental characterization and validation of a system's dependability.
Abstract: The results of several experiments conducted using the fault-injection-based automated testing (FIAT) system are presented. FIAT is capable of emulating a variety of distributed system architectures, and it provides the capabilities to monitor system behavior and inject faults for the purpose of experimental characterization and validation of a system's dependability. The experiments consist of exhaustively injecting three separate fault types into various locations, encompassing both the code and data portions of memory images, of two distinct applications executed with several different data values and sizes. Fault types are variations of memory bit faults. The results show that there are a limited number of system-level fault manifestations. These manifestations follow a normal distribution for each fault type. Error detection latencies are found to be normally distributed. The methodology can be used to predict the system-level fault responses during the system design stage.

253 citations


Journal ArticleDOI
R. Dekker1, F. Beenker1, L. Thijssen
TL;DR: A fault model for SRAMs based on physical spot defects, which are modeled as local disturbances in the layout of the SRAM, is presented and two linear test algorithms that cover 100% of the faults under the fault model are proposed.
Abstract: Testing static random access memories (SRAMs) for all possible failures is not feasible, and one must restrict the class of faults to be considered. This restricted class is called a fault model. A fault model for SRAMs based on physical spot defects, which are modeled as local disturbances in the layout of the SRAM, is presented. Two linear test algorithms that cover 100% of the faults under the fault model are proposed. A general solution is given for testing word-oriented SRAMs. The practical validity of the fault model and the two test algorithms is verified by a large number of actual wafer tests and device failure analyses.
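The paper's two linear algorithms are not reproduced here, but march tests are the standard linear-time style of SRAM test the abstract refers to. As an illustration, the well-known March C- algorithm (10n operations) is sketched below and run against a toy RAM model with an injected stuck-at fault; the memory interface is an assumption for the example.

```python
# Illustrative march test (March C-), not the paper's specific algorithms.
# Each element is a sweep direction plus a sequence of read/write operations
# applied at every address before moving on.

MARCH_C_MINUS = [
    ("up",   [("w", 0)]),
    ("up",   [("r", 0), ("w", 1)]),
    ("up",   [("r", 1), ("w", 0)]),
    ("down", [("r", 0), ("w", 1)]),
    ("down", [("r", 1), ("w", 0)]),
    ("down", [("r", 0)]),
]

def run_march(memory, n, elements=MARCH_C_MINUS):
    """Apply a march test to `memory` (read(a)/write(a, v) interface).
    Returns True if the memory passes, False as soon as a read mismatches."""
    for direction, ops in elements:
        addrs = range(n) if direction == "up" else range(n - 1, -1, -1)
        for a in addrs:
            for op, val in ops:
                if op == "w":
                    memory.write(a, val)
                elif memory.read(a) != val:
                    return False        # observed value differs: fault detected
    return True

class FaultyRAM:
    """Toy RAM model with an optional stuck-at fault at one cell."""
    def __init__(self, n, stuck_addr=None, stuck_val=0):
        self.cells = [0] * n
        self.stuck_addr, self.stuck_val = stuck_addr, stuck_val
    def write(self, a, v):
        if a != self.stuck_addr:        # writes to the stuck cell are lost
            self.cells[a] = v
    def read(self, a):
        return self.stuck_val if a == self.stuck_addr else self.cells[a]
```

A fault-free memory passes all six elements, while either polarity of stuck-at fault is caught by one of the read phases — the "linear test algorithm" property the abstract claims, since the operation count grows only linearly with memory size.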

242 citations


Journal ArticleDOI
01 Nov 1990
TL;DR: In this article, the authors describe a very accurate fault location technique which uses post-fault voltage and current derived at both line ends, independent of fault resistance and the method does not require any knowledge of source impedance.
Abstract: The authors describe a very accurate fault location technique which uses post-fault voltage and current derived at both line ends. Fault location is independent of fault resistance and the method does not require any knowledge of source impedance. It maintains high accuracy for untransposed lines and no fault type identification is required. The authors present the theory of the technique and the results of simulation studies to determine its performance.

239 citations


Proceedings ArticleDOI
10 Sep 1990
TL;DR: It is shown that stuck fault test generation, while inherently incapable of directly expressing many of the likely CMOS faults, was still able to generate a set of effective test patterns, and current testing produced test patterns that were consistently more effective in detecting bridging faults.
Abstract: The authors compare the performance of two test generation techniques, stuck fault testing and current testing, when applied to CMOS bridging faults. Accurate simulation of such faults mandated the development of several new design automation tools, including an analog-digital fault simulator. The results of this simulation are analyzed. It is shown that stuck fault test generation, while inherently incapable of directly expressing many of the likely CMOS faults, was still able to generate a set of effective test patterns. Current monitoring, however, by virtue of its more accurate model and less stringent detection criterion, was able to generate tests of measurably higher quality. It is concluded that the selection of one technique over the other becomes a cost tradeoff. Current testing produced test patterns that were consistently more effective in detecting bridging faults. This higher quality comes at higher start-up costs and higher costs per chip design.

188 citations


Journal ArticleDOI
TL;DR: A novel test generation technique for large circuits with high fault coverage requirements is described and preliminary results suggest that for circuits composed of datapath elements, speed improvements of three orders of magnitude over conventional techniques may be possible.
Abstract: A novel test generation technique for large circuits with high fault coverage requirements is described. The technique is particularly appropriate for circuits designed by silicon compilers. Circuit modules and signals are described at a high descriptive level. Test data for modules are described by predefined stimulus/response packages that are processed symbolically using techniques derived from artificial intelligence. The packages contain sequences of stimulus and response vectors which are propagated as units. Since many test vectors are processed simultaneously, a substantial increase in test generation speed can be achieved. A prototype test generator which uses the technique to generate tests for acyclic circuits has been implemented. Preliminary results from this program suggest that for circuits composed of datapath elements, speed improvements of three orders of magnitude over conventional techniques may be possible.

180 citations


Proceedings ArticleDOI
24 Jun 1990
TL;DR: This paper describes PROOFS, a super fast fault simulator for synchronous sequential circuits that minimizes the memory requirements, reduces the number of events that need to be evaluated, and simplifies the complexity of the software implementation.
Abstract: A super-fast fault simulator for synchronous sequential circuits, called PROOFS, is described. PROOFS achieves high performance by combining all the advantages of differential fault simulation, single fault propagation, and parallel fault simulation, while minimizing their individual disadvantages. PROOFS minimizes the memory requirements, reduces the number of events that need to be evaluated, and simplifies the complexity of the software implementation. PROOFS requires an average of one fifth the memory required for concurrent fault simulation and runs 6 to 67 times faster on the ISCAS sequential benchmarks.
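PROOFS combines several techniques, but one ingredient named in the abstract, parallel fault simulation, can be sketched compactly: each bit position of a machine word carries one "fault machine's" value for a signal, so one bitwise operation evaluates a gate for many faults at once. The netlist format and fault list below are invented for illustration and are far simpler than what a real simulator handles.

```python
# Minimal bit-packed (parallel) fault simulation sketch. Bit 0 of every
# signal word is the fault-free circuit; bit i+1 is the circuit with fault i
# (a stuck-at fault on one signal) injected. Illustrative, not PROOFS itself.

def simulate(netlist, inputs, faults, width):
    """netlist: topologically ordered list of (out, op, in1, in2) with ops
    'AND', 'OR', 'NOT' (in2 ignored for NOT). inputs: name -> 0/1.
    faults: list of (signal, stuck_value); width must be len(faults) + 1.
    Returns the set of fault indices detected at the last gate's output."""
    mask = (1 << width) - 1
    # Replicate each input bit across all fault machines: -1 & mask == mask.
    val = {name: -v & mask for name, v in inputs.items()}

    def inject(name):
        # Force the faulty value in the bit position of each fault on `name`.
        for i, (sig, sv) in enumerate(faults):
            if sig == name:
                bit = 1 << (i + 1)
                val[name] = (val[name] | bit) if sv else (val[name] & ~bit)

    for name in inputs:
        inject(name)
    for out, op, a, b in netlist:
        if op == "AND":
            val[out] = val[a] & val[b]
        elif op == "OR":
            val[out] = val[a] | val[b]
        else:                               # NOT
            val[out] = ~val[a] & mask
        inject(out)

    word = val[netlist[-1][0]]
    ref = -(word & 1) & mask                # fault-free value, replicated
    diff = word ^ ref                       # bits where a fault machine differs
    return {i for i in range(len(faults)) if diff >> (i + 1) & 1}
```

For c = a AND b followed by d = NOT c with a = b = 1, the stuck-at faults a/0 and c/0 flip the output and are detected in a single packed pass, while b stuck-at-1 is undetectable by this vector.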

145 citations


Proceedings ArticleDOI
10 Sep 1990
TL;DR: Results for some medium-sized sequential circuits which show a very large improvement in fault coverage obtained by optimally selecting a small fraction of the flip-flops in the circuit are presented.
Abstract: The problem of selecting flip-flops for inclusion into a partial scan path is formulated as an optimization problem. Scan flip-flops result in layout and delay overheads. Hence, scan flip-flops have to be chosen such that the net cost associated with these overheads is bounded by some user-specified limit. The problem then reduces to choosing a set of flip-flops which gives the best improvement in testability, while keeping the cost bounded. Cost functions are proposed for a standard cell design approach to model the effects of the overheads. Profit functions for three different testability criteria are proposed, and the optimization methodology for each is discussed. The optimization process is modeled on the lines of the 0/1 knapsack problem. Results for some medium-sized sequential circuits which show a very large improvement in fault coverage obtained by optimally selecting a small fraction of the flip-flops in the circuit are presented.
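The abstract's framing maps directly onto the textbook 0/1 knapsack: each flip-flop has a scan cost (layout/delay overhead) and a testability profit, and the goal is maximum total profit under a user-specified cost limit. The dynamic program below is the standard knapsack solution; the cost and profit values in the example are illustrative placeholders, not the paper's cost or profit functions.

```python
# Textbook 0/1 knapsack DP for scan flip-flop selection: maximize testability
# profit subject to an integer cost budget. Costs/profits are placeholders.

def select_scan_flipflops(costs, profits, budget):
    """Returns (best_profit, sorted list of chosen flip-flop indices)."""
    n = len(costs)
    dp = [0] * (budget + 1)                 # dp[c] = best profit with cost <= c
    keep = [[False] * (budget + 1) for _ in range(n)]
    for i in range(n):
        for c in range(budget, costs[i] - 1, -1):   # descending: item used once
            cand = dp[c - costs[i]] + profits[i]
            if cand > dp[c]:
                dp[c] = cand
                keep[i][c] = True
    # Trace back which flip-flops were chosen.
    chosen, c = [], budget
    for i in range(n - 1, -1, -1):
        if keep[i][c]:
            chosen.append(i)
            c -= costs[i]
    return dp[budget], sorted(chosen)
```

With costs [3, 2, 2], profits [10, 6, 6], and budget 4, the DP prefers the two cheap flip-flops (total profit 12) over the single expensive one (profit 10), which is exactly the bounded-cost trade-off the formulation captures.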

145 citations


Journal ArticleDOI
TL;DR: It is shown that the problem can be solved using several distributions instead of a single one, and an efficient procedure for computing the optimized input probabilities is presented.
Abstract: The test of integrated circuits by random patterns is very attractive, since no expensive test pattern generation is necessary and tests can be applied with a self-test technique or externally using linear feedback shift registers. Unfortunately, not all circuits are random-testable, because either the fault coverage is too low or the required test length too large. In many cases the random test lengths can be reduced by orders of magnitude using weighted random patterns. However, there are also some circuits for which no single optimal set of weights exists. A set of weights defines a distribution of the random patterns. It is shown that the problem can be solved using several distributions instead of a single one, and an efficient procedure for computing the optimized input probabilities is presented. If a sufficient number of distributions is applied, then all combinational circuits can be tested randomly with moderate test lengths. The patterns can be produced by an external chip, and an optimized test schedule for circuits with a scan path can be obtained. Formulas are derived to determine strong bounds on the probability of detecting all faults.
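The idea of several distributions can be sketched concretely: each distribution assigns every input its own probability of being 1, and patterns are drawn from each distribution in turn. The weights below are illustrative, not computed by the paper's optimization procedure. A wide AND gate shows why biasing matters: its output stuck-at-0 fault needs the all-ones pattern, which uniform weights almost never produce.

```python
# Weighted random pattern generation with multiple weight distributions.
# weight_sets[d][i] = P(input i == 1) under distribution d (illustrative).

import random

def weighted_patterns(weight_sets, patterns_per_set, seed=0):
    """Yield bit-tuples, `patterns_per_set` patterns per distribution."""
    rng = random.Random(seed)               # seeded for reproducibility
    for weights in weight_sets:
        for _ in range(patterns_per_set):
            yield tuple(1 if rng.random() < w else 0 for w in weights)
```

For an 8-input AND gate, the all-ones pattern has probability 0.5^8 ≈ 0.004 under uniform weights but 0.95^8 ≈ 0.66 under weights biased toward 1, so adding a biased distribution detects the hard fault within a short test length — the abstract's point that several distributions rescue circuits with no single good weight set.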

130 citations


Journal ArticleDOI
TL;DR: It is shown that testability is still guaranteed even if only a small part of the flip-flops is integrated into a scan path, and that the overall test application time decreases in comparison with a complete scan path.
Abstract: The scan design is the most widely used technique for ensuring the testability of sequential circuits. In this article it is shown that testability is still guaranteed even if only a small part of the flip-flops is integrated into a scan path. An algorithm is presented for selecting a minimal number of flip-flops which must be directly accessible. The direct accessibility ensures that, for each fault, the necessary test sequence is bounded linearly in the circuit size. Since the underlying problem is NP-complete, efficient heuristics are implemented to compute suboptimal solutions. Moreover, a new algorithm is presented to map a sequential circuit into a minimal combinational one, such that test pattern generation for both circuit representations is equivalent and the fast combinational ATPG methods can be applied. For all benchmark circuits investigated, this approach results in a significant reduction of the hardware overhead, and complete fault coverage is still obtained. Remarkably, the overall test application time decreases in comparison with a complete scan path, since the width of the shifted patterns is shorter and the number of patterns increases only to a small extent.

Proceedings ArticleDOI
26 Jun 1990
TL;DR: A simulation model of the IBM RT PC was developed and injected with 18900 gate-level transient faults, showing several distinct classes of program-level error behavior, including program flow changes, incorrect memory bus traffic, and undetected but corrupted program state.
Abstract: Effects of gate-level faults on program behavior are described and used as a basis for fault models at the program level. A simulation model of the IBM RT PC was developed and injected with 18900 gate-level transient faults. A comparison of the system state of good and faulted runs was made to observe internal propagation of errors, while memory traffic and program flow comparisons detected errors in program behavior. Results show several distinct classes of program-level error behavior, including program flow changes, incorrect memory bus traffic, and undetected but corrupted program state. Additionally, the dependencies of fault location, injection time, and workload on error detection coverage are reported. For the IBM RT PC, the error detection latency was shown to follow a Weibull distribution dependent on the error detection mechanism and the two selected workloads. These results aid in the understanding of the effects of gate-level faults and allow for the generation and validation of new fault models, fault injection methods, and error detection mechanisms.

Journal ArticleDOI
TL;DR: A relation between the average fault coverage and circuit testability is developed and the statistical formulation allows computation of coverage for deterministic and random vectors.
Abstract: A relation between the average fault coverage and circuit testability is developed. The statistical formulation allows computation of coverage for deterministic and random vectors. The following applications of this analysis are discussed: determination of circuit testability from fault simulation, coverage prediction from testability analysis, prediction of test length, and test generation by fault sampling.

Proceedings Article
01 Jan 1990
TL;DR: In this paper, simplified ATPG and fault simulation algorithms, reduced test set sizes, and increased fault coverage are achieved with IDDQ testing for stuck-at faults, which also detects logically redundant and multiple stuck-at faults and improves the detection of non-stuck-at-fault defects.
Abstract: Simplified ATPG and fault simulation algorithms, reduced test set sizes, and increased fault coverage are achieved with IDDQ testing for stuck-at faults. In addition, IDDQ testing will detect logically redundant and multiple stuck-at faults and improve the detection of non-stuck-at-fault defects.

Journal ArticleDOI
TL;DR: It is shown that parallel processing of HTD faults does indeed result in high fault coverage, which is otherwise not achievable by a uniprocessor algorithm, and the parallel algorithm exhibits superlinear speedups in some cases due to search anomalies.
Abstract: For circuits of VLSI complexity, test generation time can be prohibitive. Most of the time is consumed by hard-to-detect (HTD) faults, which might remain undetected even after a large number of backtracks. The problems inherent in a uniprocessor implementation of a test generation algorithm are identified, and a parallel test generation method which tries to achieve a high fault coverage for HTD faults in a reasonable amount of time is proposed. A dynamic search space allocation strategy which allocates disjoint search spaces to minimize the redundant work is proposed. The search space allocation strategy tries to utilize the partial solutions generated by other processors to increase the probability of searching in a solution area. The parallel test generation algorithm has been implemented on an Intel iPSC/2 hypercube. It is shown that parallel processing of HTD faults does indeed result in high fault coverage, which is otherwise not achievable by a uniprocessor algorithm. The parallel algorithm exhibits superlinear speedups in some cases due to search anomalies.

Proceedings ArticleDOI
01 Jan 1990
TL;DR: An algorithm is presented that reduces functional test sets to only those that are sufficient to find out whether a circuit contains a parametric fault, demonstrating that drastic reductions in test time can be achieved without sacrificing fault coverage.
Abstract: Given the high cost of testing analog circuit functionality, it is proposed that tests for analog circuits should be designed to detect faults. An algorithm is presented that reduces functional test sets to only those that are sufficient to find out whether a circuit contains a parametric fault. Examples demonstrate that drastic reductions in test time can be achieved without sacrificing fault coverage.

Proceedings ArticleDOI
10 Sep 1990
TL;DR: An ATPG (automatic test pattern generation) system that can efficiently create a high-coverage test for extremely large scan designs is described, formed by optimally combining a fast fault simulator with a powerful test generator.
Abstract: An ATPG (automatic test pattern generation) system that can efficiently create a high-coverage test for extremely large scan designs is described. This system is formed by optimally combining a fast fault simulator with a powerful test generator. For the ISCAS85 and ISCAS89 circuits, this ATPG system created a test for all testable faults and identified all redundant faults without a single aborted fault. This represents the first time this has been achieved for the ISCAS89 designs, and the performance of this ATPG system is significantly better than published results. Performing ATPG for the largest ISCAS89 designs, which contained about 25000 gates, required only 3 min of CPU time on an Apollo DN3550 workstation. The data collected for the ISCAS designs showed that the ATPG CPU time increased linearly with gate count. This strongly suggests that ATPG can be efficiently performed for circuits of 100000 and even one million gates.

Journal ArticleDOI
TL;DR: The authors have delimited, for every reconvergent fan-out stem, a region of the circuit outside of which the stem fault does not have to be simulated, and experimental results are shown for the well-known benchmark circuits.
Abstract: An exact fault simulation can be achieved by simulating only the faults on reconvergent fan-out stems, while determining the detectability of faults on other lines by critical path tracing within fan-out-free regions. The authors have delimited, for every reconvergent fan-out stem, a region of the circuit outside of which the stem fault does not have to be simulated. Lines on the boundary of such a stem region, called exit lines, have the following property: if the stem fault is detected on the line and the line is critical with respect to a primary output, then the stem fault is detected at that primary output. Any fault simulation technique can be used to simulate the stem fault within its stem region. The fault simulation complexity of a circuit is shown to be directly related to the number and size of stem regions in the circuit. The concept of stem regions has been used as a framework for an efficient fault simulator for combinational circuits. The concept allows a static reduction of the circuit area of explicit analysis for single- as well as multiple-output circuits. A dynamic reduction of processing steps is also achieved as the fault simulation progresses and fault coverage increases. The simulation algorithm is described, and experimental results are shown for the well-known benchmark circuits.

Proceedings ArticleDOI
Kwang-Ting Cheng1, J.-Y. Jou1
10 Sep 1990
TL;DR: The authors developed an automatic test generation algorithm and built a test generation system using a single-transition fault model; the effectiveness of the method is shown by experimental results on a set of benchmark finite-state machines.
Abstract: A functional test generation method for finite-state machines is described. A functional fault model, called the single-transition fault model, on the state transition level is used. In this model, a fault causes a single transition to a wrong destination state. A fault-collapsing technique for this fault model is also described. For each state transition, a small subset of states is selected as the faulty destination states so that the number of modeled faults for test generation is minimized. On the basis of this fault model, the authors developed an automatic test generation algorithm and built a test generation system. The effectiveness of this method is shown by experimental results on a set of benchmark finite-state machines. A 100% stuck-at fault coverage is achieved by the proposed method for several machines, and a very high coverage (>97%) is also obtained for other machines. In comparison with a gate-level test generator STG3, the test generation time is sped up by a factor of 100.
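The single-transition fault model is easy to make concrete: a fault leaves outputs intact but redirects one transition to a wrong destination state, and a test sequence detects the fault exactly when the good and faulty machines produce different output sequences. The toy Mealy machine below is invented for illustration; it is not one of the paper's benchmarks, and the paper's test generation and fault collapsing are not reproduced.

```python
# Single-transition fault model sketch for a Mealy FSM:
# trans maps (state, input symbol) -> (next_state, output).

def run_fsm(trans, start, inputs):
    """Simulate the machine and return its output sequence."""
    state, out = start, []
    for sym in inputs:
        state, o = trans[(state, sym)]
        out.append(o)
    return out

def detects(trans, start, fault, test):
    """fault = ((state, symbol), wrong_dest): the faulty machine emits the
    same output on that transition but moves to wrong_dest instead."""
    key, wrong = fault
    faulty = dict(trans)
    faulty[key] = (wrong, trans[key][1])
    return run_fsm(trans, start, test) != run_fsm(faulty, start, test)
```

For a two-state machine, redirecting A --1--> B back to A is invisible to the single-input test [0] but exposed by [1, 0], which first exercises the faulty transition and then applies an input that distinguishes the reached states by their outputs — the transfer-then-distinguish structure functional FSM tests rely on.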

Patent
Jared L. Zerbe1
19 Sep 1990
TL;DR: RAM built-in self-test logic is presented in which a linear feedback shift register (LFSR) generates the test data and, preferably, the addresses during memory self-testing, with successive data sequences offset relative to address sequences to increase fault coverage.
Abstract: RAM Built-In Self-Test logic is presented that utilizes a linear feedback shift register (LFSR) to generate data. Preferably, an LFSR is also utilized for address generation during memory self-testing. More than one cycle is implemented with offset of successive data sequences relative to address sequences to increase fault coverage. Memory storage is utilized in the data generation to enable a reduced area of the data generation circuitry.
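The LFSR at the heart of such BIST logic is compact to sketch. The Fibonacci form below shifts the register and feeds back the XOR of selected tap bits; with taps chosen for a maximal-length configuration, a 4-bit register cycles through all 15 nonzero states, which is what makes a single LFSR useful for both data and address generation. The tap choice here is a standard textbook example, not taken from the patent.

```python
# Fibonacci-style LFSR sketch. With width=4 and taps [3, 2] the sequence is
# maximal-length: any nonzero seed visits all 15 nonzero 4-bit states.

def lfsr(seed, taps, width):
    """Infinite generator of successive LFSR register states.
    taps: bit positions XORed together to form the feedback bit."""
    state = seed
    mask = (1 << width) - 1
    while True:
        yield state
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1          # feedback = XOR of tap bits
        state = ((state << 1) | fb) & mask  # shift left, insert feedback
```

Offsetting the data LFSR's sequence relative to the address LFSR's sequence across cycles, as the abstract describes, changes which data value lands at which address on each pass, exercising cell-to-cell interactions a single fixed pairing would miss.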

Proceedings ArticleDOI
26 Jun 1990
TL;DR: A novel algorithm-based fault tolerance scheme is proposed for fast Fourier transform (FFT) networks and it is shown that the proposed scheme achieves 100% fault coverage theoretically.
Abstract: A novel algorithm-based fault tolerance scheme is proposed for fast Fourier transform (FFT) networks. It is shown that the proposed scheme achieves 100% fault coverage theoretically. An accurate measure of the fault coverage for FFT networks is provided by taking the roundoff error into account. It is shown that the proposed scheme maintains the low hardware overhead and high throughput of J.Y. Jou and J.A. Abraham's scheme and, at the same time, increases the fault coverage significantly.
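The flavor of algorithm-based fault tolerance for FFTs can be shown with two exact DFT identities used as checksums: X[0] = Σx[n] and ΣX[k] = N·x[0]. A corrupted output violates the second identity, and because floating-point arithmetic only satisfies the identities up to roundoff, a tolerance is needed — which is why the abstract emphasizes accounting for roundoff error in the coverage measure. This is a generic illustration of the checksum idea, not the paper's (or Jou and Abraham's) actual encoding.

```python
# ABFT-style checksum check for an FFT, using two exact DFT identities.
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def abft_check(x, X, tol=1e-9):
    """Concurrent error check via X[0] == sum(x) and sum(X) == n * x[0].
    tol absorbs floating-point roundoff; a real fault produces a much
    larger violation than roundoff does."""
    n = len(x)
    return (abs(sum(x) - X[0]) <= tol and
            abs(sum(X) - n * x[0]) <= tol)
```

Injecting an error into any single output term breaks the ΣX[k] = N·x[0] identity, so the check catches it; choosing the tolerance to separate roundoff from genuine faults is exactly the coverage-versus-roundoff trade-off the paper quantifies.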

Patent
24 Oct 1990
TL;DR: In this article, an alarm sequence generator is used to test the correctness of a fault model and generate a user interface from which specific components can be selected for failure at specified times.
Abstract: In a real-time diagnostic system, an alarm sequence generator is used to test the correctness of a fault model. The fault model describes an industrial process being monitored. The alarm sequence generator reads the fault model and generates a user interface, from which specific components can be selected for failure at specified times. The alarm sequence generator assembles all alarms that are causally downstream from the selected set of faulty components and determines which alarms should be turned on based on probabilistic and temporal information in the fault model. The timed alarm sequence can be used by an expert to measure the correctness of a particular model, or can be used as input into a diagnostic system to measure the correctness of the diagnostic system.

Proceedings ArticleDOI
12 Mar 1990
TL;DR: A new technique is introduced to improve the diagnostic capabilities of a traditional automatic test pattern generation (ATPG) and the experimental results showing its effectiveness are finally presented.
Abstract: This paper addresses the generation of test patterns having diagnostic properties. The authors' goal is to produce patterns able not only to detect, but also to distinguish, faults in combinational circuits. A general formalization of the problem is first given; a new technique is then introduced to improve the diagnostic capabilities of a traditional automatic test pattern generation (ATPG) system; the experimental results showing its effectiveness are finally presented.

Proceedings ArticleDOI
10 Sep 1990
TL;DR: The authors discuss the significant improvements that were achieved when a conventional ATPG (automatic test pattern generation) algorithm was modified to generate test sets suitable for IDDQ testing, including increased SAF coverage, reduced vector set sizes, coverage of logically redundant SAFs and multiple SAFs, and reduced CPU cost for ATPG and fault simulation.
Abstract: The authors discuss the significant improvements that were achieved when a conventional ATPG (automatic test pattern generation) algorithm was modified to generate test sets suitable for IDDQ testing. These improvements include increased SAF (stuck-at-fault) coverage, reduced vector set sizes, coverage of logically redundant SAFs and multiple SAFs, increased coverage of CMOS IC non-SAF defects, and reduced CPU cost for ATPG and fault simulation. This reduction in computational complexity for IDDQ-based ATPG enables test generation for much larger circuits than previously possible. Additionally, untestable faults can be further categorized to identify SAFs that are truly 'don't-care faults,' thereby offering a more realistic assessment of actual fault coverage.

Proceedings ArticleDOI
10 Sep 1990
TL;DR: A heuristic algorithm is introduced for solving the arrangement of latches in a scan-path design to improve the coverage of delay faults, and preliminary experimental results show that the proposed algorithm can find a LAM with better fault coverage than randomly selected ones.
Abstract: A problem involving the arrangement of latches in a scan-path design to improve the coverage of delay faults is described. The problem is NP-hard, and a heuristic algorithm is introduced for solving this arrangement problem. A necessary and sufficient condition is also given to determine whether there is a scan path to implement a given delay-fault test pair. Only LAM (latch-arrangement-mapping) implementable test pairs need to be simulated by a delay-fault simulator for a semi-completed LSSD (level-sensitive-scan-design) circuit. Preliminary experimental results show that the proposed algorithm can find a LAM with better fault coverage than randomly selected ones.

Proceedings ArticleDOI
10 Sep 1990
TL;DR: It is shown that, with only an incremental effort during wafer probe, data collection that can be used to compare the relative accuracy of different models over a range of fault coverage is possible.
Abstract: The authors report on an experiment to verify the accuracy of reject ratio predictions by the available approaches. The data collection effort includes instrumenting the wafer probe test to obtain chip failures as a function of applied vectors and running a fault simulator to obtain the cumulative fault coverage of these vectors. The accuracy of reject ratio predictions is judged by assuming earlier stopping points for the wafer probe, thereby gaining a measure of confidence in the final predicted value. The results of five different analyses are reported for over 70000 tested dies of a CMOS VLSI device. The five methods discussed predicted values for the reject ratio that vary by an order of magnitude at high values of fault coverage. It is shown that, with only an incremental effort during wafer probe, data collection that can be used to compare the relative accuracy of different models over a range of fault coverage is possible.
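One classical reject-ratio model of the kind such studies compare is the Williams-Brown formula, DL = 1 − Y^(1−T), which predicts the defect level (fraction of defective parts that still pass the test) from the process yield Y and the fault coverage T. Whether this exact formula is among the paper's five methods is not stated here; the numbers in the example are illustrative.

```python
# Williams-Brown defect-level model: DL = 1 - Y**(1 - T).
# Y = process yield (0..1), T = fault coverage (0..1).

def williams_brown(yield_, coverage):
    """Predicted reject ratio (defect level) for given yield and coverage."""
    return 1.0 - yield_ ** (1.0 - coverage)
```

At Y = 0.5, raising coverage from 0 to 99% drops the predicted defect level from 50% to roughly 0.7% (about 7000 defective parts per million), which illustrates why the paper evaluates model accuracy specifically at high fault coverage, where the predictions of different models diverge most in relative terms.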

Proceedings ArticleDOI
Hyung Ki Lee1, Dong Sam Ha1
24 Jun 1990
TL;DR: SOPRANO, a highly efficient automatic test pattern generator for stuck-open (SOP) faults in CMOS combinational circuits, is described; it achieves high SOP fault coverage and short processing time.
Abstract: In this paper, we describe a highly efficient automatic test pattern generator for stuck-open (SOP) faults, called SOPRANO, in CMOS combinational circuits. The key idea of SOPRANO is to convert a CMOS circuit into an equivalent gate level circuit and SOP faults into the equivalent stuck-at faults. Then SOPRANO derives test patterns for SOP faults using a gate level test pattern generator. Several techniques to reduce the test set size are introduced in SOPRANO. Experimental results performed on eight benchmark circuits show that SOPRANO achieves high SOP fault coverage and short processing time.

Proceedings ArticleDOI
11 Nov 1990
TL;DR: Empirical testability difference (ETD), a measure of the potential improvement in the overall testability of the circuit, is used to successively select storage elements for scan to obtain maximum fault coverage for the number of scan elements selected.
Abstract: The objective of the partial scan method proposed is to obtain maximum fault coverage for the number of scan elements selected. Empirical testability difference (ETD), a measure of the potential improvement in the overall testability of the circuit, is used to successively select storage elements for scan. ETD is calculated by using testability measures based on empirical evaluation of the circuit with the actual test sequence generator. In addition, ETD focuses on the hard-to-detect faults rather than all faults once such faults are known. The method has been extensively tested with ten of the sequential circuits given by F. Brglez et al. (1989) using the FASTEST provided by T. Kelsey and K. Saluja (1989). The results of these tests indicate that ETD yields on average either 27% of the number of uncovered faults for the same number of scan elements or 21% fewer scan elements for the same fault coverage compared to the other methods studied.

Proceedings ArticleDOI
10 Sep 1990
TL;DR: A novel linear-time algorithm for identifying a large set of faults that are undetectable by a given test vector, intended as a simple, fast preprocessing step to be performed after a test vector has been generated, but before the (often lengthy) process of fault simulation begins.
Abstract: The authors propose a novel linear-time algorithm for identifying, in a large combinational circuit, a large set of faults that are undetectable by a given test vector. Although this so-called X-algorithm does not identify all the undetectable faults, empirical evidence is offered to show that the reduction in the number of remaining faults to be simulated is significant. The algorithm is intended as a simple, fast preprocessing step to be performed after a test vector has been generated, but before the (often lengthy) process of fault simulation begins. The empirical results indicate that the X-algorithm is both useful (indicated by the utility factor) and good (indicated by the effectiveness factor). It provides as much as a 50% reduction in the number of faults that need to be simulated. Moreover, the algorithm seems to identify a large fraction of the undetectable faults.

Proceedings ArticleDOI
26 Jun 1990
TL;DR: A class of n-unit multiprocessor systems with O(n log n) interconnecting links is constructed, and a distributed probabilistic fault diagnosis algorithm whose probability of correctness converges to 1 as n → ∞ is proposed.
Abstract: A class of n-unit multiprocessor systems with O(n log n) interconnecting links is constructed, and a distributed probabilistic fault diagnosis algorithm whose probability of correctness converges to 1 as n to infinity is proposed. For small probability of unit failure, a distributed diagnosis whose probability also converges to 1 as the size of the system grows is proposed for the hypercube. On the other hand, it is proved that if a class of systems has fewer than kn log n links for a small constant k, the probability of correctness of every fault diagnosis converges to 0 as n to infinity . By combining the probabilistic and the distributed approach the authors' model of fault diagnosis removes the major drawbacks of the PMC (Preparata-Metze-Chien) model: the assumption of tests with complete fault coverage and the assumption of a fault-free central monitoring unit capable of performing diagnosis. >