
Showing papers on "Fault coverage published in 1986"


Book
01 Jan 1986
TL;DR: A comprehensive textbook on digital logic testing, covering simulation, fault simulation, automatic test pattern generation, sequential logic test, automatic test equipment, design-for-testability, built-in self-test, memory test, IDDQ testing, and behavioral test and verification.
Abstract: Preface.
1 Introduction. 1.1 Introduction. 1.2 Quality. 1.3 The Test. 1.4 The Design Process. 1.5 Design Automation. 1.6 Estimating Yield. 1.7 Measuring Test Effectiveness. 1.8 The Economics of Test. 1.9 Case Studies. 1.9.1 The Effectiveness of Fault Simulation. 1.9.2 Evaluating Test Decisions. 1.10 Summary. Problems. References.
2 Simulation. 2.1 Introduction. 2.2 Background. 2.3 The Simulation Hierarchy. 2.4 The Logic Symbols. 2.5 Sequential Circuit Behavior. 2.6 The Compiled Simulator. 2.6.1 Ternary Simulation. 2.6.2 Sequential Circuit Simulation. 2.6.3 Timing Considerations. 2.6.4 Hazards. 2.6.5 Hazard Detection. 2.7 Event-Driven Simulation. 2.7.1 Zero-Delay Simulation. 2.7.2 Unit-Delay Simulation. 2.7.3 Nominal-Delay Simulation. 2.8 Multiple-Valued Simulation. 2.9 Implementing the Nominal-Delay Simulator. 2.9.1 The Scheduler. 2.9.2 The Descriptor Cell. 2.9.3 Evaluation Techniques. 2.9.4 Race Detection in Nominal-Delay Simulation. 2.9.5 Min-Max Timing. 2.10 Switch-Level Simulation. 2.11 Binary Decision Diagrams. 2.11.1 Introduction. 2.11.2 The Reduce Operation. 2.11.3 The Apply Operation. 2.12 Cycle Simulation. 2.13 Timing Verification. 2.13.1 Path Enumeration. 2.13.2 Block-Oriented Analysis. 2.14 Summary. Problems. References.
3 Fault Simulation. 3.1 Introduction. 3.2 Approaches to Testing. 3.3 Analysis of a Faulted Circuit. 3.3.1 Analysis at the Component Level. 3.3.2 Gate-Level Symbols. 3.3.3 Analysis at the Gate Level. 3.4 The Stuck-At Fault Model. 3.4.1 The AND Gate Fault Model. 3.4.2 The OR Gate Fault Model. 3.4.3 The Inverter Fault Model. 3.4.4 The Tri-State Fault Model. 3.4.5 Fault Equivalence and Dominance. 3.5 The Fault Simulator: An Overview. 3.6 Parallel Fault Processing. 3.6.1 Parallel Fault Simulation. 3.6.2 Performance Enhancements. 3.6.3 Parallel Pattern Single Fault Propagation. 3.7 Concurrent Fault Simulation. 3.7.1 An Example of Concurrent Simulation. 3.7.2 The Concurrent Fault Simulation Algorithm. 3.7.3 Concurrent Fault Simulation: Further Considerations. 3.8 Delay Fault Simulation. 3.9 Differential Fault Simulation. 3.10 Deductive Fault Simulation. 3.11 Statistical Fault Analysis. 3.12 Fault Simulation Performance. 3.13 Summary. Problems. References.
4 Automatic Test Pattern Generation. 4.1 Introduction. 4.2 The Sensitized Path. 4.2.1 The Sensitized Path: An Example. 4.2.2 Analysis of the Sensitized Path Method. 4.3 The D-Algorithm. 4.3.1 The D-Algorithm: An Analysis. 4.3.2 The Primitive D-Cubes of Failure. 4.3.3 Propagation D-Cubes. 4.3.4 Justification and Implication. 4.3.5 The D-Intersection. 4.4 Testdetect. 4.5 The Subscripted D-Algorithm. 4.6 PODEM. 4.7 FAN. 4.8 Socrates. 4.9 The Critical Path. 4.10 Critical Path Tracing. 4.11 Boolean Differences. 4.12 Boolean Satisfiability. 4.13 Using BDDs for ATPG. 4.13.1 The BDD XOR Operation. 4.13.2 Faulting the BDD Graph. 4.14 Summary. Problems. References.
5 Sequential Logic Test. 5.1 Introduction. 5.2 Test Problems Caused by Sequential Logic. 5.2.1 The Effects of Memory. 5.2.2 Timing Considerations. 5.3 Sequential Test Methods. 5.3.1 Seshu's Heuristics. 5.3.2 The Iterative Test Generator. 5.3.3 The 9-Value ITG. 5.3.4 The Critical Path. 5.3.5 Extended Backtrace. 5.3.6 Sequential Path Sensitization. 5.4 Sequential Logic Test Complexity. 5.4.1 Acyclic Sequential Circuits. 5.4.2 The Balanced Acyclic Circuit. 5.4.3 The General Sequential Circuit. 5.5 Experiments with Sequential Machines. 5.6 A Theoretical Limit on Sequential Testability. 5.7 Summary. Problems. References.
6 Automatic Test Equipment. 6.1 Introduction. 6.2 Basic Tester Architectures. 6.2.1 The Static Tester. 6.2.2 The Dynamic Tester. 6.3 The Standard Test Interface Language. 6.4 Using the Tester. 6.5 The Electron Beam Probe. 6.6 Manufacturing Test. 6.7 Developing a Board Test Strategy. 6.8 The In-Circuit Tester. 6.9 The PCB Tester. 6.9.1 Emulating the Tester. 6.9.2 The Reference Tester. 6.9.3 Diagnostic Tools. 6.10 The Test Plan. 6.11 Visual Inspection. 6.12 Test Cost. 6.13 Summary. Problems. References.
7 Developing a Test Strategy. 7.1 Introduction. 7.2 The Test Triad. 7.3 Overview of the Design and Test Process. 7.4 A Testbench. 7.4.1 The Circuit Description. 7.4.2 The Test Stimulus Description. 7.5 Fault Modeling. 7.5.1 Checkpoint Faults. 7.5.2 Delay Faults. 7.5.3 Redundant Faults. 7.5.4 Bridging Faults. 7.5.5 Manufacturing Faults. 7.6 Technology-Related Faults. 7.6.1 MOS. 7.6.2 CMOS. 7.6.3 Fault Coverage Results in Equivalent Circuits. 7.7 The Fault Simulator. 7.7.1 Random Patterns. 7.7.2 Seed Vectors. 7.7.3 Fault Sampling. 7.7.4 Fault-List Partitioning. 7.7.5 Distributed Fault Simulation. 7.7.6 Iterative Fault Simulation. 7.7.7 Incremental Fault Simulation. 7.7.8 Circuit Initialization. 7.7.9 Fault Coverage Profiles. 7.7.10 Fault Dictionaries. 7.7.11 Fault Dropping. 7.8 Behavioral Fault Modeling. 7.8.1 Behavioral MUX. 7.8.2 Algorithmic Test Development. 7.8.3 Behavioral Fault Simulation. 7.8.4 Toggle Coverage. 7.8.5 Code Coverage. 7.9 The Test Pattern Generator. 7.9.1 Trapped Faults. 7.9.2 SOFTG. 7.9.3 The Imply Operation. 7.9.4 Comprehension Versus Resolution. 7.9.5 Probable Detected Faults. 7.9.6 Test Pattern Compaction. 7.9.7 Test Counting. 7.10 Miscellaneous Considerations. 7.10.1 The ATPG/Fault Simulator Link. 7.10.2 ATPG User Controls. 7.10.3 Fault-List Management. 7.11 Summary. Problems. References.
8 Design-For-Testability. 8.1 Introduction. 8.2 Ad Hoc Design-for-Testability Rules. 8.2.1 Some Testability Problems. 8.2.2 Some Ad Hoc Solutions. 8.3 Controllability/Observability Analysis. 8.3.1 SCOAP. 8.3.2 Other Testability Measures. 8.3.3 Test Measure Effectiveness. 8.3.4 Using the Test Pattern Generator. 8.4 The Scan Path. 8.4.1 Overview. 8.4.2 Types of Scan-Flops. 8.4.3 Level-Sensitive Scan Design. 8.4.4 Scan Compliance. 8.4.5 Scan-Testing Circuits with Memory. 8.4.6 Implementing Scan Path. 8.5 The Partial Scan Path. 8.6 Scan Solutions for PCBs. 8.6.1 The NAND Tree. 8.6.2 The 1149.1 Boundary Scan. 8.7 Summary. Problems. References.
9 Built-In Self-Test. 9.1 Introduction. 9.2 Benefits of BIST. 9.3 The Basic Self-Test Paradigm. 9.3.1 A Mathematical Basis for Self-Test. 9.3.2 Implementing the LFSR. 9.3.3 The Multiple Input Signature Register (MISR). 9.3.4 The BILBO. 9.4 Random Pattern Effectiveness. 9.4.1 Determining Coverage. 9.4.2 Circuit Partitioning. 9.4.3 Weighted Random Patterns. 9.4.4 Aliasing. 9.4.5 Some BIST Results. 9.5 Self-Test Applications. 9.5.1 Microprocessor-Based Signature Analysis. 9.5.2 Self-Test Using MISR/Parallel SRSG (STUMPS). 9.5.3 STUMPS in the ES/9000 System. 9.5.4 STUMPS in the S/390 Microprocessor. 9.5.5 The Macrolan Chip. 9.5.6 Partial BIST. 9.6 Remote Test. 9.6.1 The Test Controller. 9.6.2 The Desktop Management Interface. 9.7 Black-Box Testing. 9.7.1 The Ordering Relation. 9.7.2 The Microprocessor Matrix. 9.7.3 Graph Methods. 9.8 Fault Tolerance. 9.8.1 Performance Monitoring. 9.8.2 Self-Checking Circuits. 9.8.3 Burst Error Correction. 9.8.4 Triple Modular Redundancy. 9.8.5 Software Implemented Fault Tolerance. 9.9 Summary. Problems. References.
10 Memory Test. 10.1 Introduction. 10.2 Semiconductor Memory Organization. 10.3 Memory Test Patterns. 10.4 Memory Faults. 10.5 Memory Self-Test. 10.5.1 A GALPAT Implementation. 10.5.2 The 9N and 13N Algorithms. 10.5.3 Self-Test for BIST. 10.5.4 Parallel Test for Memories. 10.5.5 Weak Read-Write. 10.6 Repairable Memories. 10.7 Error Correcting Codes. 10.7.1 Vector Spaces. 10.7.2 The Hamming Codes. 10.7.3 ECC Implementation. 10.7.4 Reliability Improvements. 10.7.5 Iterated Codes. 10.8 Summary. Problems. References.
11 IDDQ. 11.1 Introduction. 11.2 Background. 11.3 Selecting Vectors. 11.3.1 Toggle Count. 11.3.2 The Quietest Method. 11.4 Choosing a Threshold. 11.5 Measuring Current. 11.6 IDDQ Versus Burn-In. 11.7 Problems with Large Circuits. 11.8 Summary. Problems. References.
12 Behavioral Test and Verification. 12.1 Introduction. 12.2 Design Verification: An Overview. 12.3 Simulation. 12.3.1 Performance Enhancements. 12.3.2 HDL Extensions and C++. 12.3.3 Co-design and Co-verification. 12.4 Measuring Simulation Thoroughness. 12.4.1 Coverage Evaluation. 12.4.2 Design Error Modeling. 12.5 Random Stimulus Generation. 12.6 The Behavioral ATPG. 12.6.1 Overview. 12.6.2 The RTL Circuit Image. 12.6.3 The Library of Parameterized Modules. 12.6.4 Some Basic Behavioral Processing Algorithms. 12.7 The Sequential Circuit Test Search System (SCIRTSS). 12.7.1 A State Traversal Problem. 12.7.2 The Petri Net. 12.8 The Test Design Expert. 12.8.1 An Overview of TDX. 12.8.2 DEPOT. 12.8.3 The Fault Simulator. 12.8.4 Building Goal Trees. 12.8.5 Sequential Conflicts in Goal Trees. 12.8.6 Goal Processing for a Microprocessor. 12.8.7 Bidirectional Goal Search. 12.8.8 Constraint Propagation. 12.8.9 Pitfalls When Building Goal Trees. 12.8.10 MaxGoal Versus MinGoal. 12.8.11 Functional Walk. 12.8.12 Learn Mode. 12.8.13 DFT in TDX. 12.9 Design Verification. 12.9.1 Formal Verification. 12.9.2 Theorem Proving. 12.9.3 Equivalence Checking. 12.9.4 Model Checking. 12.9.5 Symbolic Simulation. 12.10 Summary. Problems. References.
Index.
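The parallel fault simulation technique covered in Chapter 3 can be sketched in a few lines: pack the fault-free machine and many single stuck-at faults into the bit positions of one machine word, so a single pass through the circuit evaluates all of them at once. The circuit (y = (a AND b) OR c) and fault list below are hypothetical, chosen only for illustration; this is not the book's code.

```python
# Sketch of parallel fault simulation: the good machine occupies bit 0 of a
# machine word and each single stuck-at fault occupies one higher bit.

FAULTS = [("a", 0), ("a", 1), ("b", 0), ("b", 1), ("c", 0), ("c", 1),
          ("ab", 0), ("ab", 1), ("y", 0), ("y", 1)]
WIDTH = len(FAULTS) + 1
ONES = (1 << WIDTH) - 1

def inject(word, signal):
    """Force the bit of each fault that is attached to `signal`."""
    for i, (sig, val) in enumerate(FAULTS, start=1):
        if sig == signal:
            word = (word & ~(1 << i)) | (val << i)
    return word

def simulate_parallel(a, b, c):
    """Evaluate y = (a AND b) OR c; bit i of the result is the output of
    faulty machine i (bit 0 is the good machine)."""
    va = inject(ONES * a, "a")          # replicate each input across all slots
    vb = inject(ONES * b, "b")
    vc = inject(ONES * c, "c")
    vab = inject(va & vb, "ab")
    return inject(vab | vc, "y")

def detected(vy):
    """Faults whose output bit differs from the good-machine bit."""
    good = ONES if vy & 1 else 0
    return [FAULTS[i - 1] for i in range(1, WIDTH) if (vy ^ good) >> i & 1]

print(detected(simulate_parallel(1, 1, 0)))
# → [('a', 0), ('b', 0), ('ab', 0), ('y', 0)]
```

One word-wide evaluation per gate replaces one evaluation per fault, which is the source of the speedup the parallel technique offers over serial fault simulation.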

219 citations


Proceedings ArticleDOI
02 Jul 1986
TL;DR: A test generation system capable of high fault coverage in complex sequential circuits; its single-path sensitization technique dynamically expands to multi-path sensitization in reconvergent fan-out structures, and conflict analysis reduces back-tracking.
Abstract: This paper describes a test generation system capable of high fault coverage in complex sequential circuits. Sequential logic is efficiently processed by a unidirectional time flow approach. This single path sensitization technique dynamically expands to multi-path sensitization in reconvergent fan-out structures. Sophisticated conflict analysis is used to reduce back-tracking. User guidance is also accepted to further improve performance.
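As a hedged illustration of what any such test generator must ultimately produce (this is not the paper's algorithm), the sketch below finds, by exhaustive search over a hypothetical three-input circuit, an input vector that makes the faulty machine's output differ from the good machine's:

```python
from itertools import product

# Hypothetical circuit, for illustration only: y = (a AND b) OR ((NOT b) AND c)
def circuit(a, b, c, fault=None):
    """fault = (signal_name, stuck_value); None means fault-free."""
    def v(name, val):
        return fault[1] if fault and fault[0] == name else val
    a, b, c = v("a", a), v("b", b), v("c", c)
    n1 = v("n1", a & b)
    n2 = v("n2", (1 - b) & c)
    return v("y", n1 | n2)

def generate_test(fault):
    """Return the first input vector that distinguishes the faulty circuit
    from the fault-free one, or None if the fault is undetectable."""
    for vec in product((0, 1), repeat=3):
        if circuit(*vec) != circuit(*vec, fault=fault):
            return vec
    return None

print(generate_test(("n1", 0)))  # → (1, 1, 0): a=b=1 excites, b=1 blocks n2
```

Exhaustive search is exponential in the number of inputs; path sensitization techniques such as the one described above exist precisely to reach the same answer without enumerating the whole input space.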

122 citations


Journal ArticleDOI
TL;DR: Two algorithms are proposed for self-testing of embedded RAMs, both of which can detect a large variety of stuck-at and non-stuck-at faults.
Abstract: The authors present a built-in self-test (BIST) method for testing embedded memories. Two algorithms are proposed for self-testing of embedded RAMs, both of which can detect a large variety of stuck-at and non-stuck-at faults. The hardware implementation of the methods requires a hardware test-pattern generator, which produces address, data, and read/write inputs. The output responses of the memory can be compressed by using a parallel input signature analyzer, or they can be compared with expected responses by an output comparator. The layout of memories has been considered in the design of the additional BIST circuitry. The authors conclude by evaluating the two schemes on the basis of area overhead, performance degradation, fault coverage, test application time, and testing of the self-test circuitry. The BIST overhead is very low and the test time is quite short. Six devices incorporating one of the test schemes have been manufactured and are in the field.
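A minimal sketch of the style of test such a hardware pattern generator might apply — the march elements here are a generic illustration, not the paper's two algorithms — run against a simulated RAM containing one stuck-at-0 cell:

```python
# Hedged illustration: a simple march-style RAM test against a simulated
# memory with a single stuck-at-0 cell (fault model chosen for illustration).

class FaultyRAM:
    def __init__(self, size, stuck_at0=None):
        self.mem = [0] * size
        self.stuck = stuck_at0          # address of a stuck-at-0 cell, if any

    def write(self, addr, bit):
        self.mem[addr] = 0 if addr == self.stuck else bit

    def read(self, addr):
        return self.mem[addr]

def march_test(ram, size):
    """March elements: up(w0); up(r0, w1); down(r1, w0); up(r0).
    Returns the sorted addresses where a read mismatched."""
    errors = []
    for a in range(size):               # up(w0)
        ram.write(a, 0)
    for a in range(size):               # up(r0, w1)
        if ram.read(a) != 0:
            errors.append(a)
        ram.write(a, 1)
    for a in reversed(range(size)):     # down(r1, w0)
        if ram.read(a) != 1:
            errors.append(a)
        ram.write(a, 0)
    for a in range(size):               # up(r0)
        if ram.read(a) != 0:
            errors.append(a)
    return sorted(set(errors))

print(march_test(FaultyRAM(16, stuck_at0=5), 16))  # → [5]
```

Because every cell is read in both states and in both address orders, march tests of this family also catch many transition and coupling faults that a single read/write pass would miss.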

96 citations


Journal ArticleDOI
TL;DR: A new methodology is presented for indirectly measuring fault latency, the distribution of fault latency is derived from the methodology, and the knowledge of faultLatency is applied to the analysis of two important examples.
Abstract: The time interval between the occurrence of a fault and the detection of the error caused by the fault is divided by the generation of that error into two parts: fault latency and error latency. Since the moment of error generation is not directly observable, all related works in the literature have dealt with only the sum of fault and error latencies, thereby making the analysis of their separate effects impossible. To remedy this deficiency, we 1) present a new methodology for indirectly measuring fault latency, 2) derive the distribution of fault latency from the methodology, and 3) apply the knowledge of fault latency to the analysis of two important examples.

65 citations


Proceedings Article
01 Jan 1986

62 citations


Journal ArticleDOI
TL;DR: It is shown that the enumeration of errors missed by ACT for a unit under test is equivalent to the number of restricted partitions of a number, which indicates that with ACT a better control over fault coverage can be obtained.
Abstract: A new test data reduction technique called accumulator compression testing (ACT) is proposed. ACT is an extension of syndrome testing. It is shown that the enumeration of errors missed by ACT for a unit under test is equivalent to the number of restricted partitions of a number. Asymptotic results are obtained for independent and dependent error modes. Comparison is made between signature analysis (SA) and ACT. Theoretical results indicate that with ACT a better control over fault coverage can be obtained than with SA. Experimental results are supportive of this indication. Built-in self-test for processor environments may be feasible with ACT. However, for general VLSI circuits the complexity of ACT may be a problem, as an adder is necessary.
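The core of ACT is easy to state: compress the unit's output stream by summing it in an accumulator, where syndrome testing merely counts 1s. The sketch below uses made-up response data and also shows the masking phenomenon the paper enumerates — two errors that cancel in the sum:

```python
# Illustrative contrast (hypothetical response data, not the paper's
# experiments) between syndrome testing and accumulator compression testing.

def syndrome(bits):
    """Syndrome testing: count the 1s in a serial response stream."""
    return sum(bits)

def act_signature(words, width=8):
    """ACT: add each output word into a width-bit accumulator."""
    acc = 0
    for w in words:
        acc = (acc + w) % (1 << width)
    return acc

good   = [3, 7, 7, 1, 0, 5]
faulty = [3, 7, 6, 2, 0, 5]   # two errors (-1 and +1) that cancel in the sum
print(act_signature(good), act_signature(faulty))  # → 23 23: this fault aliases
```

Counting the error patterns that alias this way (those whose contributions sum to zero modulo the accumulator width) is exactly the restricted-partition enumeration the paper analyzes.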

61 citations


Patent
09 Dec 1986
TL;DR: In this paper, a method for modeling complementary metal oxide semiconductor (CMOS) combinatorial logic circuits by Boolean gates taking into account circuit behavior effects due to charge storing and static hazards is presented.
Abstract: A method for modeling complementary metal oxide semiconductor (CMOS) combinatorial logic circuits by Boolean gates taking into account circuit behavior effects due to charge storing and static hazards. Models are developed for both the faultless and faulty operation of each circuit. According to a further aspect of the invention, these models are used in a simulation procedure to evaluate the fault coverage of a large scale integrated circuit design built using a plurality of these circuits. In the evaluation procedure the faulty model is used only for a particular circuit whose failure performance is being tested and the faultless model is utilized for all other circuits. This procedure is continued until all of the individual gate circuits have been evaluated.

56 citations


Patent
31 Jan 1986
TL;DR: In this article, the authors present a system for concurrent evaluation of the effect of multiple faults in a logic design being evaluated, particularly useful in the design of very large scale integrated circuits for developing a compact input test set which will permit locating a predetermined percentage of all theoretically possible fault conditions in the manufactured chips.
Abstract: A system for concurrent evaluation of the effect of multiple faults in a logic design being evaluated is particularly useful in the design of very large scale integrated circuits for developing a compact input test set which will permit locating a predetermined percentage of all theoretically possible fault conditions in the manufactured chips. The system includes logic evaluation hardware for simulating a given logic design and evaluating the complete operation thereof prior to committing the design to chip fabrication. In addition, and concurrently with the logic design evaluation, the system includes means for storing a large number of predetermined fault conditions for each gate in the design, for evaluating the "fault operation" for each fault condition for each gate, comparing the corresponding results against the "good machine" operation, and storing the fault operation if different from the good operation. By repeating the process on an event-driven basis from gate to subsequently affected gates throughout the design, a file of all fault effects can be developed, from which an input test set for the logic design can be developed based on considerations of the required percentage of all possible faults to be detected and the time that can be allowed for testing of each chip. Special hardware is provided for identifying and eliminating hyperactive or oscillating faults to maintain processing efficiency.

56 citations


Journal ArticleDOI
TL;DR: A new technique for designing easily testable PLA's is presented that consists of the addition of input lines in such a way that, in test mode, any single product line can be activated and its associated circuitry and device can be tested.
Abstract: A new technique for designing easily testable PLA's is presented. The salient features of this technique are: 1) low overhead, 2) high fault coverage, 3) simple design, and 4) little or no impact on normal operation of PLA's. This technique consists of the addition of input lines in such a way that, in test mode, any single product line can be activated and its associated circuitry and device can be tested. Using this technique, all multiple stuck-at faults, as well as all multiple extra and multiple missing device faults, are detected.
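The mechanism can be sketched abstractly — the personality matrix below is invented purely for illustration. The added test inputs act as a selector that disables all but one product line, so each AND-plane row and its OR-plane connections are observed in isolation:

```python
# Hedged sketch of a testable PLA (hypothetical personality matrix).
# Each product term is a dict of input -> required value; the OR plane
# lists the product terms feeding each output.
PRODUCTS = [{"a": 1, "b": 1}, {"b": 0, "c": 1}, {"a": 0, "c": 0}]
OR_PLANE = {"y0": [0, 1], "y1": [1, 2]}

def eval_pla(inputs, select=None):
    """Normal mode: select is None. Test mode: `select` emulates the added
    input lines by forcing every other product line inactive."""
    lines = []
    for i, term in enumerate(PRODUCTS):
        active = all(inputs[v] == val for v, val in term.items())
        if select is not None and i != select:
            active = False
        lines.append(active)
    return {y: any(lines[i] for i in terms) for y, terms in OR_PLANE.items()}

# Test mode: satisfy product line 1 and observe it alone on both outputs.
print(eval_pla({"a": 1, "b": 0, "c": 1}, select=1))
# → {'y0': True, 'y1': True}
```

With only one line active at a time, a stuck or missing/extra device on that line changes an output directly, which is why the scheme detects the multiple-fault classes listed above.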

45 citations


Proceedings Article
01 Jan 1986
TL;DR: In this article, a parity bit signature particularly well suited for exhaustive testing techniques is defined and discussed, and the general problem of evaluating its effectiveness relative to a given implementation is discussed.
Abstract: A parity bit signature particularly well suited for exhaustive testing techniques is defined and discussed. The discussion is concerned not only with the proposed parity bit signature itself, but also with the general problem of evaluating its effectiveness relative to a given implementation. In addition to such desirable properties as uniformity and ease of implementation, it is shown to be especially amenable to efficient fault coverage calculations.
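As a hedged sketch of the idea (the example functions are hypothetical), a parity bit signature XORs the circuit output over all 2^n input vectors; a fault is caught exactly when it flips the output for an odd number of vectors:

```python
from itertools import product

def parity_signature(fn, n):
    """XOR the single-bit output of fn over all 2**n exhaustive inputs."""
    sig = 0
    for vec in product((0, 1), repeat=n):
        sig ^= fn(*vec)
    return sig

def good(a, b, c):
    return a & b & c        # one minterm -> parity 1

def faulty(a, b, c):
    return a & b            # models c stuck-at-1: two minterms -> parity 0

print(parity_signature(good, 3), parity_signature(faulty, 3))  # → 1 0
```

The signature equals the parity of the function's minterm count, which is what makes fault coverage calculations tractable: a fault escapes only if it adds or removes an even number of minterms.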

37 citations



Proceedings ArticleDOI
02 Jul 1986
TL;DR: SLS, a large capacity, high performance switch level simulator, developed to run on an IBM System/370 architecture, that uses a model which closely reflects the behavior of MOS circuits is described.
Abstract: We describe SLS, a large capacity, high performance switch level simulator, developed to run on an IBM System/370 architecture, that uses a model which closely reflects the behavior of MOS circuits. This performance is the result of mixing a compiled model with the more traditional approach of event-driven simulation control, together with very efficient algorithms for evaluating the steady state response of the circuit. SLS is used for design verification/checking applications and for estimating fault coverage.

Journal ArticleDOI
TL;DR: Provably conservative (and optimistic) reliability models can be systematically derived from more complex models, and incorporate a reduced state space and fewer transitions, and have solutions that are more cost- effective than those of the original complex models.
Abstract: Provably conservative (and optimistic) reliability models can be systematically derived from more complex models. These derived models incorporate a reduced state space and fewer transitions, and, therefore, have solutions that are more cost- effective than those of the original complex models. The designer can extensively explore the design space without incurring the expense of solving multiple complex models. A conservative- optimistic pair of derived models produces a band that includes the solution to the complex model. Sensitivity analysis can be performed on this pair of models to determine those parameters of the original model that are most sensitive to change (i.e., uncertainty) and hence warrant further expense in obtaining tighter specifications.

Journal ArticleDOI
TL;DR: A new algorithm is proposed based on complementation and the tautology check of a logic cover, derived from the PLA personality matrix, to achieve the best balance between run time and test-set compactness.
Abstract: PLATYPUS (PLA Test pattern generation and logic simulation tool) is an efficient tool for large PLA's which is interfaced with other existing PLA tools such as the folding program PLEASURE [12] and the logic minimizer ESPRESSO II-C [11] developed at the University of California at Berkeley. A new algorithm is proposed based on complementation and the tautology check of a logic cover derived from the PLA personality matrix. Both complementation and tautology check are performed by advanced logic manipulation algorithms used in the logic minimization program ESPRESSO II-C [11]. The algorithm is exact, i.e., every testable crosspoint fault is tested, and maximum fault coverage is guaranteed. A quick preprocess, biased random test generation, is followed by the proposed algorithm to achieve the best balance between run time and test-set compactness. The program is refined at various stages by many powerful heuristics in the areas of fault processing order, backend fault simulation, "don't-care" bit fixing, and on-the-fly test compaction. Both single stuck-at and crosspoint fault models are supported. PLATYPUS can also be used as a logic simulation tool and redundancy identifier. Test pattern generation has been performed by PLATYPUS on a large number of industrial PLA's.
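The tautology check at the heart of this approach can be sketched with ESPRESSO-style recursive cofactoring, greatly simplified here (the cube representation is an assumption, not ESPRESSO's): a cover is a tautology iff both of its cofactors with respect to a splitting variable are tautologies:

```python
# Minimal sketch of a recursive tautology check on a cube cover.
# A cube is a dict var -> required value; an empty cube covers everything.

def cofactor(cover, var, val):
    """Drop cubes that conflict with var=val; remove var from the rest."""
    out = []
    for cube in cover:
        if cube.get(var, val) != val:
            continue
        out.append({v: x for v, x in cube.items() if v != var})
    return out

def is_tautology(cover, variables):
    if any(len(cube) == 0 for cube in cover):
        return True                      # a literal-free cube covers all points
    if not cover or not variables:
        return False
    v, rest = variables[0], variables[1:]
    return (is_tautology(cofactor(cover, v, 0), rest) and
            is_tautology(cofactor(cover, v, 1), rest))

print(is_tautology([{"a": 1}, {"a": 0}], ["a"]))                 # a OR NOT a → True
print(is_tautology([{"a": 1, "b": 1}, {"a": 0}], ["a", "b"]))    # misses a=1,b=0 → False
```

Crosspoint-fault test generation reduces to such checks: a candidate fault is untestable precisely when the faulty cover's difference from the good cover is empty, which the complement/tautology machinery decides without enumerating minterms.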

01 Jul 1986
TL;DR: An experiment to accurately study the fault latency in the memory subsystem using real memory data from a VAX 11/780 at the University of Illinois and an analysis of a variance model to quantify the relative influence of various workload measures on the evaluated latency is given.
Abstract: Fault latency is the time between the physical occurrence of a fault and its corruption of data, causing an error. The measure of this time is difficult to obtain because the time of occurrence of a fault and the exact moment of generation of an error are not known. This paper describes an experiment to accurately study the fault latency in the memory subsystem. The experiment employs real memory data from a VAX 11/780 at the University of Illinois. Fault latency distributions are generated for s-a-0 and s-a-1 permanent fault models. Results show that the mean fault latency of a s-a-0 fault is nearly 5 times that of the s-a-1 fault. Large variations in fault latency are found for different regions in memory. An analysis of a variance model to quantify the relative influence of various workload measures on the evaluated latency is also given.

Proceedings ArticleDOI
K. Kishida, F. Shirotori, Y. Ikemoto, S. Ishiyama, T. Hayashi 
02 Jul 1986
TL;DR: The delay test system features easy-to-use operation for providing test data, including a fail-safe design against scan-design-rule violations, quick turnaround time for test data generation, and support for delay fault analysis.
Abstract: This paper presents a delay test system which detects delay faults in LSI chips. The fault model and the measure of fault coverage are defined. The system features easy-to-use operation for providing test data, including a fail-safe design against scan-design-rule violations, quick turnaround time for test data generation, and support for delay fault analysis. The delay test was applied to the LSIs of the M-68X series computers, confirming its effectiveness in assuring maximum computer-system performance.

Proceedings ArticleDOI
02 Jul 1986
TL;DR: The presented design-for-testability method guarantees a 100 percent fault coverage with respect to multiple stuck-at faults and multiple missing/extra crosspoint faults.
Abstract: A method for designing easily testable PLA's with low overhead is presented. The method is based on a reduction of product lines and the addition of a small number of inputs. The required additional hardware is calculated using a statistical cooling algorithm. The presented design-for-testability method guarantees a 100 percent fault coverage with respect to multiple stuck-at faults and multiple missing/extra crosspoint faults.

Journal ArticleDOI
Rajski, Tyszer
TL;DR: It is shown that the multiple fault coverage ratio of Tc increases with the increasing number m of rows of a PLA and for m = 24 Tc detects 99 percent of all contact faults of size 8 or less.
Abstract: It is relatively easy to generate a complete single contact fault detection test set Tc for a PLA. However, such a test set may fail to detect all multiple faults due to the phenomenon of masking. In previous papers attempting to quantitatively predict the multiple fault coverage capability of a single fault detection test set Tc in PLA's, it was proved that every multiple contact fault in an irredundant PLA is detected by Tc if the multiple fault does not contain any four-way masking cycle. In this correspondence, the masking relations are studied in detail and it is shown that Tc in fact detects a significant percentage of faults with four-way masking. Based on these results, more realistic bounds on the coverage capability of Tc are determined. It is shown that the multiple fault coverage ratio of Tc increases with the increasing number m of rows of a PLA, and for m = 24, Tc detects 99 percent of all contact faults of size 8 or less.

Proceedings Article
01 Jan 1986
TL;DR: In this paper, a case where a single intermittent failure is not detected by a test set with 100% stuck-at fault coverage is presented, and a stress-strength analysis is presented to explain the experimental results.
Abstract: : Intermittent failures are studied by stressing (temperature, supply voltage and extra loading) good parts. The behavior of the chips under stress is similar to that of a marginal chip under normal operating conditions. The experiments show that most intermittent failures are pattern-sensitive for both sequential and combinational circuits. The stuck-at fault model is shown to be inappropriate to describe intermittent failures. This paper presents a case where a single intermittent failure is not detected by a test set with 100% single stuck-at fault coverage. A stress-strength analysis is presented to explain the experimental results. Keywords: Intermittent failures, Pattern sensitive faults, Soft failures, Integrated circuit reliability, Intermittent fault model.

Journal ArticleDOI
TL;DR: Fault detection, isolation, and recovery methodology employed in the Fault Tolerant Multiprocessor is described and results were found to be in close agreement with earlier assumptions made in reliability modeling.
Abstract: The Fault Tolerant Multiprocessor is a highly reliable computer designed to meet a goal of 10^-9 failures per hour. To a large extent, this level of reliability depends upon the ability to detect and isolate faults rapidly and accurately, purge the faulty module, and dynamically reconfigure the remaining good modules. This paper describes the fault detection, isolation, and recovery methodology employed in the Fault Tolerant Multiprocessor. The second part of the paper deals with experimental results obtained by actually injecting faults at the pin level in the Fault Tolerant Multiprocessor. Over 21,000 faults were injected in the central processing unit, memory, bus interface circuits, and error detection, masking, and error reporting circuits of one line replaceable unit of the multiprocessor. Detection, isolation, and reconfiguration times for each fault were recorded. These results were found to be in close agreement with earlier assumptions made in reliability modeling. The experimental results are summarized in this paper.


Journal ArticleDOI
TL;DR: This presentation identifies and illustrates causes of system failure; major topics include the loss of fault coverage due to imperfect implementation information and failure modes that are not accurately represented by the single-stuck fault assumption.

Proceedings ArticleDOI
02 Jul 1986
TL;DR: In this article, a fault coverage estimation technique for mixed-level circuits is described, and the implementation of a FAULT Coverage Estimation (FACE) system is described for a combinational multiple-input multiple-output functional block.
Abstract: A fault coverage estimation technique for mixed-level circuits is described. Observability formulae for combinational multiple-input, multiple-output functional blocks are derived. Special procedures for estimating CMOS circuit transistor fault detection probability are developed, and the implementation of a FAult Coverage Estimation (FACE) system is described.
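FACE derives its estimates from analytic observability formulae; as a hedged illustration of the quantity being estimated, this sketch measures one fault's detection probability by random sampling on a hypothetical block, where the exact answer is 3/16:

```python
import random

# Hypothetical block, for illustration only: y = (a AND b) OR (c AND d),
# with internal node n1 = a AND b subject to a stuck-at-0 fault.

def block(a, b, c, d, fault_n1=None):
    n1 = fault_n1 if fault_n1 is not None else (a & b)
    return n1 | (c & d)

def detection_probability(trials=10_000, seed=1):
    """Fraction of uniform random vectors on which n1 stuck-at-0 is visible:
    requires a=b=1 (excitation) and c&d=0 (propagation) -> exactly 3/16."""
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        a, b, c, d = (random.randint(0, 1) for _ in range(4))
        if block(a, b, c, d) != block(a, b, c, d, fault_n1=0):
            hits += 1
    return hits / trials

print(detection_probability())  # close to 3/16 = 0.1875
```

An analytic technique like FACE computes this probability directly from controllability and observability expressions instead of paying for thousands of sample simulations per fault.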

Journal ArticleDOI
George Weitzenfeld
TL;DR: In this article, a parametric analysis was carried out to investigate the effect of various factors on the ground potential rise caused by the fault current and its distribution among neutral conductors and the earth.
Abstract: One of the most frequent faults in power systems is the single line-to-ground fault. During such an event, large fault current circulates through the system, grounding network and the earth, returning to generating sources. It is this type of fault that is being referred to in this paper as ground fault or, simply, fault. Considering a power system of a relatively simple network configuration, and using the double-sided elimination method, direct solution for the fault current and its distribution among neutral conductors and the earth is derived. The entire faulted subsystem is modelled considering its grounding network as an integral part. The solution is general and the computer memory and time requirements are very modest. A parametric analysis was carried out to investigate the effect of various factors on the ground potential rise caused by the fault current.
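The paper solves the full faulted network directly; for orientation, the textbook symmetrical-components result gives the single line-to-ground fault current as If = 3E / (Z1 + Z2 + Z0 + 3Zf). All numeric values below are assumed, chosen only to illustrate the formula:

```python
# Hedged illustration of the classical single line-to-ground fault current
# formula (symmetrical components). Every value here is an assumption.

E  = 66_000 / 3 ** 0.5             # phase-to-neutral source voltage, V (assumed)
Z1 = complex(0.5, 5.0)             # positive-sequence impedance, ohms (assumed)
Z2 = complex(0.5, 5.0)             # negative-sequence impedance, ohms (assumed)
Z0 = complex(1.0, 12.0)            # zero-sequence impedance, incl. earth path (assumed)
Zf = complex(2.0, 0.0)             # fault/grounding impedance, ohms (assumed)

If = 3 * E / (Z1 + Z2 + Z0 + 3 * Zf)
print(abs(If))                     # fault current magnitude, A (roughly 4.9 kA here)
```

The zero-sequence term is where the grounding network enters, which is why the paper models the grounding network as an integral part of the faulted subsystem when computing ground potential rise.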

Journal ArticleDOI
TL;DR: In this paper, the effects of measurement threshold on fault coverage and the resultant failure rates at the assembled board and product levels are discussed and illustrated, as well as the importance of using the standard 100 megohm threshold.
Abstract: The increasing complexity of today's Printed Circuit Boards inevitably leads to higher failure rates at the assembled board and product levels. Electrical testing of PCBs at the bare board level always results in early identification of failures, and thus in increased production economies. The escalating costs to identify faults at various levels of the PCB manufacturing and assembly process are discussed and illustrated. The typical PCB spectrum of faults, including contamination, shorts, opens, and holes is discussed, along with reliable methods of fault removal. In addition to the economic factors such as capital equipment cost, programming, fixturing, and operating costs, several technical factors are discussed and illustrated. These include test comprehensiveness and test speed. Since fault coverage is directly proportional to the measurement capability of the test equipment, the effects of measurement threshold on fault coverage and the resultant failure rates at the assembled board and product levels are discussed and illustrated. The importance of using the standard 100 megohm threshold is also discussed and illustrated.

DOI
01 Jan 1986
TL;DR: The method presented does not require a fault dictionary, fault enumeration or knowledge of the values expected in the fault-free circuit, and it makes possible applications such as obtaining faults not detected by a given test, the identification of faults which cannot be modelled as stuck-at faults and other applications characteristic of this type of analysis.
Abstract: In the paper we develop an approach to fault diagnosis in combinational circuits yielding a new method based on an effect-cause analysis. In our method the circuit under test N* is studied by using a description of its behaviour called the operation map. Depending on the set of tests applied, this description may allow the fault in N* to be detected before, and independently of, being located. The elimination of inputs in the operation map allows us to find the fault situations in N* (causes) which are compatible with the applied test and the obtained response (the effect). The method presented does not require a fault dictionary, fault enumeration or knowledge of the values expected in the fault-free circuit, and it makes possible applications such as obtaining faults not detected by a given test (including redundant faults), the identification of faults which cannot be modelled as stuck-at faults and other applications characteristic of this type of analysis.


Journal ArticleDOI
TL;DR: Some general ideas on the theme of the inverse system transfer characteristic for fault detection are investigated, leading to improved fault tolerance for highly reliable safety system software.

Journal ArticleDOI
Zeung Nam Bien, Myung-Joong Youn, Myung Jin Chung, J.H. Kim, B.C. Moon, Bok-Man Kim 
TL;DR: A new fault diagnostic algorithm for locating a fault and estimating its magnitude in control systems is proposed for use in fault-tolerant control systems, and is found to be highly effective for control systems with multiple PID controllers.