
Showing papers on "Fault coverage published in 1994"


Journal ArticleDOI
Jacob Savir1, S. Patil1
TL;DR: It is shown that the broad-side method is inferior to the skewed-load method, which is another form of scan-based transition test; there is, however, merit in combining the skewed-load method with the broad-side method to achieve a higher transition fault coverage.
Abstract: A broad-side delay test is a form of a scan-based delay test, where the first vector of the pair is scanned into the chain and the second vector of the pair is the combinational circuit's response to this first vector. This delay test form is called "broad-side" since the second vector of the delay test pair is provided in a broad-side fashion, namely through the logic. This paper concentrates on several issues concerning broad-side delay test. It analyzes the effectiveness of broad-side delay test; shows how to compute broad-side delay test vectors; shows how to generate broad-side delay test vectors using existing tools that were aimed at stuck-at faults; shows how to compute the detection probability of a transition fault using broad-side pseudo-random patterns; shows the results of experiments conducted on the ISCAS sequential benchmarks; and discusses some concerns of the broad-side delay test strategy. It is shown that the broad-side method is inferior to the skewed-load method, which is another form of scan-based transition test. There is, however, merit in combining the skewed-load method with the broad-side method. This combined method will achieve a higher transition fault coverage than each individual method alone.
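The core of the broad-side idea can be sketched in a few lines: the second vector of the pair is not scanned in but computed as the combinational logic's response to the first. The three-bit block below is a made-up stand-in for a real circuit.

```python
# Minimal sketch of forming a broad-side pair: the second vector is the
# combinational logic's response to the first. The 3-bit block is illustrative.
def broadside_pair(logic, v1):
    """Return the broad-side delay test pair (v1, logic(v1))."""
    return v1, logic(v1)

def toy_logic(bits):                 # y = (a AND b, b XOR c, NOT c)
    a, b, c = bits
    return (a & b, b ^ c, 1 - c)

pair = broadside_pair(toy_logic, (1, 0, 1))
```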

296 citations


Journal ArticleDOI
TL;DR: A test generation method for a single nondeterministic finite-state machine (NFSM) is developed, which is an improved and generalized version of the Wp-method that generates test sequences only for deterministic finite-state machines.
Abstract: Presents a method of generating test sequences for concurrent programs and communication protocols that are modeled as communicating nondeterministic finite-state machines (CNFSMs). A conformance relation, called trace-equivalence, is defined within this model, serving as a guide to test generation. A test generation method for a single nondeterministic finite-state machine (NFSM) is developed, which is an improved and generalized version of the Wp-method that generates test sequences only for deterministic finite-state machines. It is applicable to both nondeterministic and deterministic finite-state machines. When applied to deterministic finite-state machines, it usually yields smaller test suites with full fault coverage than the existing methods that also provide full fault coverage, provided that the number of states in implementation NFSMs is bounded by a known integer. For a system of CNFSMs, the test sequences are generated in the following manner: a system of CNFSMs is first reduced into a single NFSM by reachability analysis; then the test sequences are generated from the resulting NFSM using the generalized Wp-method.
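The characterization-set idea underlying the W/Wp-family can be illustrated on a toy machine: a W-set is a set of input sequences whose outputs tell every pair of states apart, and a basic test suite concatenates state-cover prefixes with W elements. The two-state FSM, inputs, and W-set below are all illustrative, not the paper's construction.

```python
# Toy deterministic FSM: transition (state, input) -> (next state, output).
TRANS = {
    ('s0', 'a'): ('s1', 0), ('s0', 'b'): ('s0', 1),
    ('s1', 'a'): ('s0', 1), ('s1', 'b'): ('s1', 0),
}

def run(state, seq):
    """Apply an input sequence, returning the final state and output tuple."""
    out = []
    for x in seq:
        state, o = TRANS[(state, x)]
        out.append(o)
    return state, tuple(out)

# W-set: input sequences whose outputs distinguish every pair of states.
W = [('a',)]

def distinguishes(W, s, t):
    return any(run(s, w)[1] != run(t, w)[1] for w in W)

# A basic suite concatenates state-cover prefixes with W elements.
STATE_COVER = [(), ('a',)]           # reaches s0 and s1 from the initial s0
suite = [p + w for p in STATE_COVER for w in W]
```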

276 citations


Journal ArticleDOI
TL;DR: The authors propose a new alarm structure, propose a general model for representing the network, and give two algorithms which can solve the alarm correlation and fault identification problem in the presence of multiple faults.
Abstract: Presents an approach for modeling and solving the problem of fault identification and alarm correlation in large communication networks. A single fault in a large network may result in a large number of alarms, and it is often very difficult to isolate the true cause of the fault. This appears to be one of the most important difficulties in managing faults in today's networks. The problem may become worse in the case of multiple faults. The authors present a general methodology for solving the alarm correlation and fault identification problem. They propose a new alarm structure, propose a general model for representing the network, and give two algorithms which can solve the alarm correlation and fault identification problem in the presence of multiple faults. These algorithms differ in the degree of accuracy achieved in identifying the fault, and in the degree of complexity required for implementation.
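One simple way to frame the task, in the spirit of the abstract, is as an explanation problem: each candidate fault explains a set of alarms, and a diagnosis is a small set of faults covering the observed alarms. The fault names, alarm sets, and greedy strategy below are illustrative, not the paper's algorithms.

```python
# Toy fault/alarm model: each candidate fault explains a set of alarms.
FAULT_ALARMS = {
    'link_1_down':   {'A1', 'A2', 'A3'},
    'router_R_down': {'A2', 'A3', 'A4', 'A5'},
    'card_C_fail':   {'A5'},
}

def correlate(observed):
    """Greedily pick faults that explain the most still-unexplained alarms."""
    remaining, diagnosis = set(observed), []
    while remaining:
        fault, covered = max(FAULT_ALARMS.items(),
                             key=lambda kv: len(kv[1] & remaining))
        if not covered & remaining:
            break                      # nothing explains the rest
        diagnosis.append(fault)
        remaining -= covered
    return diagnosis, remaining

diagnosis, unexplained = correlate({'A2', 'A3', 'A4', 'A5'})
```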

222 citations


Journal ArticleDOI
TL;DR: Fault tolerance is increasingly important for robots, especially those in remote or hazardous environments; robots need the ability to effectively detect and tolerate internal failures in order to continue performing their tasks without the need for immediate human intervention.

201 citations


Journal ArticleDOI
TL;DR: The approach presented involves injecting transient faults into integrated circuits by using heavy-ion radiation from a Californium-252 source, enabling faults to be injected at internal locations in VLSI circuits.
Abstract: Fault injection is an effective method for studying the effects of faults in computer systems and for validating fault-handling mechanisms. The approach presented involves injecting transient faults into integrated circuits by using heavy-ion radiation from a Californium-252 source. The proliferation of safety-critical and fault-tolerant systems using VLSI technology makes such attempts to inject faults at internal locations in VLSI circuits increasingly important.

188 citations


Journal ArticleDOI
TL;DR: Algorithms for fault-driven test set selection are presented based on an analysis of the types of tests needed for different types of faults, and a major reduction in testing time should come from reducing the number of specification tests that need to be performed.
Abstract: Analog testing is a difficult task without a clear-cut methodology. Analog circuits are tested for satisfying their specifications, not for faults. Given the high cost of testing analog specifications, it is proposed that tests for analog circuits should be designed to detect faults. Therefore analog fault modeling is discussed. Based on an analysis of the types of tests needed for different types of faults, algorithms for fault-driven test set selection are presented. A major reduction in testing time should come from reducing the number of specification tests that need to be performed. Hence algorithms are presented for minimizing specification testing time. After specification testing time is minimized, the resulting test sets are supplemented with some simple, possibly non-specification, tests to achieve 100% fault coverage. Examples indicate that fault-driven test set development can lead to drastic reductions in production testing time.
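The selection problem behind this idea can be sketched as a weighted cover: each test detects some faults at some cost, and we want full fault coverage at minimal testing time, which tends to drop expensive specification tests in favor of cheap ones. The test names, fault lists, and times below are made up; the greedy ratio heuristic is a simple stand-in for the paper's algorithms.

```python
# Greedy sketch of fault-driven test selection under a cost model.
TESTS = {
    'dc_offset':  ({'F1', 'F2'}, 1.0),        # (faults detected, test time in s)
    'gain_sweep': ({'F2', 'F3', 'F4'}, 5.0),
    'thd_spec':   ({'F3', 'F4'}, 20.0),
    'simple_ac':  ({'F4', 'F5'}, 0.5),
}

def select_tests(tests, faults):
    """Pick tests by best faults-per-second ratio until all faults are covered."""
    remaining, chosen, total = set(faults), [], 0.0
    while remaining:
        name, (det, cost) = max(
            tests.items(), key=lambda kv: len(kv[1][0] & remaining) / kv[1][1])
        if not det & remaining:
            break                      # some fault is undetectable by any test
        chosen.append(name)
        total += cost
        remaining -= det
    return chosen, total

chosen, total = select_tests(TESTS, {'F1', 'F2', 'F3', 'F4', 'F5'})
```

Note how the 20-second `thd_spec` specification test is never selected: its faults are already covered by cheaper tests, which is exactly the reduction the abstract argues for.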

182 citations


Proceedings ArticleDOI
06 Jun 1994
TL;DR: A genetic algorithm (GA) framework for sequential circuit test generation that evolves candidate test vectors and sequences, using a fault simulator to compute the fitness of each candidate test.
Abstract: Test generation using deterministic fault-oriented algorithms is highly complex and time-consuming. New approaches are needed to augment the existing techniques, both to reduce execution time and to improve fault coverage. In this work, we describe a genetic algorithm (GA) framework for sequential circuit test generation. The GA evolves candidate test vectors and sequences, using a fault simulator to compute the fitness of each candidate test. Various GA parameters are studied, including alphabet size, fitness function, generation gap, population size, and mutation rate, as well as selection and crossover schemes. High fault coverages were obtained for most of the ISCAS89 sequential benchmark circuits, and execution times were significantly lower than in a deterministic test generator in most cases.
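The GA framework described here can be sketched compactly: candidate test vectors are bit strings, fitness is the number of faults a vector detects, and selection/crossover/mutation evolve the population. The "fault simulator" below is a made-up stand-in (fault i fires on a 1-to-0 bit pair), not the paper's simulator.

```python
import random

random.seed(1)
N_BITS = 16

# Stand-in "fault simulator": fault i is detected when bit i is 1 and the
# next (cyclically) bit is 0. A real flow would fault-simulate the circuit.
def detected(vec):
    return {i for i in range(N_BITS) if vec[i] == 1 and vec[(i + 1) % N_BITS] == 0}

def fitness(vec):
    return len(detected(vec))

def evolve(pop_size=20, gens=30, mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_BITS)  # one-point crossover
            children.append([1 - g if random.random() < mut else g
                             for g in a[:cut] + b[cut:]])
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

The parameters studied in the paper (alphabet size, generation gap, crossover scheme, and so on) correspond to the knobs of `evolve` above.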

162 citations


Proceedings ArticleDOI
06 Nov 1994
TL;DR: This model allows us to relate a test coverage measure directly to the defect coverage, and shows how the defect density controls the time-to-next-failure.
Abstract: Models the relationship between testing effort, coverage and reliability, and presents a logarithmic model that relates testing effort to test coverage: statement (or block) coverage, branch (or decision) coverage, computation use (c-use) coverage, or predicate use (p-use) coverage. The model is based on the hypothesis that the enumerables (like branches or blocks) for any coverage measure have different detectability, just like the individual defects. This model allows us to relate a test coverage measure directly to the defect coverage. Data sets for programs with real defects are used to validate the model. The results are consistent with the known inclusion relationships among block, branch and p-use coverage measures. We show how the defect density controls the time-to-next-failure. The model can eliminate variables like the test application strategy from consideration. It is suitable for high-reliability applications where automatic (or manual) test generation is used to cover enumerables which have not yet been tested.
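The qualitative shape of such a logarithmic effort-coverage relation is easy to demonstrate: equal increments of testing effort buy ever-smaller coverage gains. The functional form and parameter values below are illustrative only, not the fitted model from the paper.

```python
import math

def coverage(effort, a=0.18, b=0.9):
    """Illustrative logarithmic effort-coverage curve (a, b are made up)."""
    return min(1.0, a * math.log(1.0 + b * effort))

# One extra unit of effort is worth much more early than late:
gain_early = coverage(2) - coverage(1)
gain_late = coverage(102) - coverage(101)
```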

106 citations


Journal ArticleDOI
TL;DR: A new fault location system for multi-terminal single transmission lines is presented, together with an algorithm for synchronizing asynchronous sampling data; EMTP simulation results are also given.
Abstract: Conventional fault location systems which use one-terminal AC voltages and currents are difficult to apply to multi-terminal power systems. This paper discusses a new fault location system for multi-terminal single transmission lines. Asynchronous sampling at each terminal is preferred in order to simplify the transmission equipment and an algorithm for synchronizing the asynchronous sampling data is presented. Another algorithm is presented which converts the original multi-terminal power system by progressive conversion to a system with one fewer terminals to arrive at a 2-terminal system containing the fault. An effective fault locating system can be constructed by combining these algorithms with existing reactive power locating operations. EMTP simulation results are presented.

101 citations


Journal ArticleDOI
TL;DR: This framework provides a basis for understanding transient fault problems in digital systems and can be helpful in selecting optimum techniques to mask or eliminate transient fault effects in developed systems.
Abstract: It is hard to shield systems effectively from transient faults (fault avoidance techniques). So some other means must be employed to assure appropriate levels of transient fault tolerance (insensitivity to transient faults). They are based on fault-masking and fault recovery ideas. Having analyzed this problem, the author identifies critical design points and outlines some practical solutions that refer to efficient on-line detectors (detecting errors during the system operation) and error handling procedures. This framework provides a basis for understanding transient fault problems in digital systems. It can be helpful in selecting optimum techniques to mask or eliminate transient fault effects in developed systems.

94 citations


Proceedings ArticleDOI
02 Oct 1994
TL;DR: An approach based on Genetic Algorithms, suitable for even the largest benchmark circuits, is described together with a prototype system named GATTO, and its effectiveness (in terms of result quality and CPU time requirements) for circuits previously unmanageable is illustrated.
Abstract: This paper is concerned with the question of automated test pattern generation for large synchronous sequential circuits and describes an approach based on Genetic Algorithms suitable for even the largest benchmark circuits, together with a prototype system named GATTO. Its effectiveness (in terms of result quality and CPU time requirements) for circuits previously unmanageable is illustrated. The flexibility of the new approach enables users to easily trade off fault coverage and CPU time to suit their needs.

Proceedings ArticleDOI
06 Nov 1994
TL;DR: Using the technique presented here an efficient static test set for analog and mixed-signal ICs can be constructed, reducing both the test time and the packaging cost.
Abstract: Static tests are key in reducing the current high cost of testing analog and mixed-signal ICs. A new DC test generation technique for detecting catastrophic failures in this class of circuits is presented. To include the effect of tolerance of parameters during testing, the test generation problem is formulated as a minimax optimization problem, and solved iteratively as successive linear programming problems. An analytical fault modeling technique, based on manufacturing defect statistics is used to derive the fault list for the test generation. Using the technique presented here an efficient static test set for analog and mixed-signal ICs can be constructed, reducing both the test time and the packaging cost.

Proceedings ArticleDOI
06 Nov 1994
TL;DR: It is found that there is little or no reduction in the FDE of a test set when its size is reduced while the all-uses coverage is kept constant, suggesting, indirectly, that coverage is more correlated than the size with theFDE.
Abstract: Size and code coverage are two important attributes that characterize a set of tests. When a program P is executed on elements of a test set T, we can observe the fault-detecting capacity of T for P. We can also observe the degree to which T induces code coverage on P according to some coverage criterion. We would like to know whether it is the size of T or the coverage of T on P which determines the fault detection effectiveness (FDE) of T for P. We found that there is little or no reduction in the FDE of a test set when its size is reduced while the all-uses coverage is kept constant. These data suggest, indirectly, that coverage is more correlated than the size with the FDE. To further investigate this suggestion, we report an empirical study to compare the statistical correlation between (1) FDE and coverage, and (2) FDE and the size. Results from our experiments indicate that the correlation between FDE and block coverage is higher than that between FDE and size.
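The statistical comparison the study performs amounts to computing two Pearson correlations and comparing them. The five data points below are fabricated purely to exercise the computation (FDE constructed to track coverage, with size scrambled); they are not the paper's measurements.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Fabricated illustration: five test sets where FDE tracks block coverage
# closely while size is a poor predictor.
sizes     = [10, 80, 40, 60, 25]
block_cov = [0.30, 0.35, 0.62, 0.70, 0.55]
fde       = [0.20, 0.28, 0.58, 0.69, 0.50]
```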

Proceedings ArticleDOI
16 Aug 1994
TL;DR: Some results on the testability of a system whose fault behavior is modeled by a nondeterministic automaton are presented and issues pertaining to testability such as optimal sensor configuration and the infimal partition of the fault space are discussed.
Abstract: Automated fault diagnosis for a complex system is often a very difficult task. Before proceeding with fault diagnosis, we need to make sure that the given sensor configuration has the capability of assisting the diagnostician in performing the fault diagnosis in an efficient manner. In this paper, we present some results on the testability of a system whose fault behavior is modeled by a nondeterministic automaton. We discuss the issues pertaining to testability such as optimal sensor configuration and the infimal partition of the fault space. We also present a manufacturing process example to illustrate the application of the results presented in the paper.

Proceedings ArticleDOI
02 Oct 1994
TL;DR: This paper describes the design of an efficient weighted random pattern system and various heuristics that affect the performance of the system are discussed and an experimental evaluation is provided.
Abstract: This paper describes the design of an efficient weighted random pattern system. The performance of the system is measured by the number of weight sets and the number of weighted random patterns required for high fault coverage. Various heuristics that affect the performance of the system are discussed and an experimental evaluation is provided.
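The basic effect weighted random patterns exploit is easy to show: a random-pattern-resistant fault (for example, one requiring many specific 1s, as at the inputs of an AND tree) is hit far more often once the bit probabilities are biased. The 8-bit width, weights, and toy detection condition below are illustrative, not the paper's heuristics.

```python
import random

random.seed(0)

def weighted_patterns(weights, n):
    """Draw n patterns; bit i is 1 with probability weights[i]."""
    return [[1 if random.random() < w else 0 for w in weights] for _ in range(n)]

# Toy hard fault: detected only when the first six bits are all 1
# (as at the inputs of a 6-input AND tree).
def detects_and_tree(p):
    return all(p[:6])

uniform = weighted_patterns([0.5] * 8, 2000)
biased = weighted_patterns([0.9] * 6 + [0.5] * 2, 2000)
hits_uniform = sum(map(detects_and_tree, uniform))
hits_biased = sum(map(detects_and_tree, biased))
```

With uniform weights the fault is hit with probability 1/64 per pattern; with the biased weights the probability rises to 0.9^6, roughly a 34x improvement, which is why far fewer weighted patterns suffice for high fault coverage.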

Journal ArticleDOI
01 Sep 1994
TL;DR: An expert system and critic are presented which together form a novel and intelligent fault tolerance framework integrating fault detection and tolerance routines with dynamic fault tree analysis.
Abstract: Fault tolerance is of increasing importance for modern robots. The ability to detect and tolerate failures enables robots to effectively cope with internal failures and continue performing assigned tasks without the need for immediate human intervention. To monitor fault tolerance actions performed by lower level routines and to provide higher level information about a robot's recovery capabilities, we present an expert system and critic which together form a novel and intelligent fault tolerance framework integrating fault detection and tolerance routines with dynamic fault tree analysis. A higher level, operating system inspired critic layer provides a buffer between robot fault tolerant operations and the user. The expert system gives the framework the modularity and flexibility to quickly convert between a variety of robot structures and tasks. It also provides a standard interface to the fault detection and tolerance software and a more intelligent means of monitoring the progress of failure and recovery throughout the robot system. The expert system further allows for prioritization of tasks so that recovery can take precedence over less pressing goals. Fault trees are used as a standard database to reveal the components essential to fault detection and tolerance within a system and detail the interconnection between failures in the system. The trees are also used quantitatively to provide a dynamic estimate of the probability of failure of the entire system or various subsystems.
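The quantitative use of fault trees mentioned at the end reduces to evaluating gate nodes over component failure probabilities. The tree structure and probabilities below are made up for illustration; the paper's trees are, of course, specific to the robot system.

```python
# Tiny fault-tree evaluation sketch: AND gates multiply subsystem failure
# probabilities (redundancy must all fail); OR gates combine them as
# 1 - prod(1 - p) (any single failure brings the subsystem down).
def failure_prob(node):
    if isinstance(node, float):
        return node                       # leaf: component failure probability
    gate, children = node
    ps = [failure_prob(c) for c in children]
    out = 1.0
    if gate == 'AND':
        for p in ps:
            out *= p
        return out
    for p in ps:                          # OR gate
        out *= (1.0 - p)
    return 1.0 - out

# Illustrative robot arm: fails if the controller fails OR both redundant
# encoders fail.
tree = ('OR', [0.01, ('AND', [0.05, 0.05])])
p_sys = failure_prob(tree)
```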

Proceedings ArticleDOI
02 Oct 1994
TL;DR: The testability of analog circuits in the frequency domain is studied by introducing the analog fault observability concept; the number of measured output parameters necessary for testing is reduced to one or two without a loss in fault coverage.
Abstract: A technique for multifrequency test vector generation using testability analysis and output response detection by adding a translation built-in self test (T-BIST) is presented. We study the testability of analog circuits in the frequency domain by introducing the analog fault observability concept. This testability evaluation will be helpful in generating the test vectors and for selecting test nodes for the various types of faults. In the proposed approach test vector generation and test point selection allow a significant reduction in the number of measured output parameters necessary for testing (to one or two parameters) without a loss in fault coverage. The T-BIST approach consists of verifying whether or not the tested parameters for the given test vector are within the acceptance range. This technique is based on the conversion of each detected parameter to a DC voltage. Results are presented for different practical filters for which a complete test solution was achieved.

Patent
04 May 1994
TL;DR: In this article, a test vector generation and fault simulation (TGFS) comparator is implemented in the PLD or FPGA consisting of a partitioned sub-circuit configuration, and a multiplicity of copies of the same configuration each with a single and different fault introduced in it.
Abstract: An electronic circuit test vector generation and fault simulation apparatus is constructed with programmable logic devices (PLD) or field programmable gate array (FPGA) devices and messaging buses carrying data and function calls. A test generation and fault simulation (TGFS) comparator is implemented in the PLD or FPGA consisting of a partitioned sub-circuit configuration, and a multiplicity of copies of the same configuration each with a single and different fault introduced in it. The method for test vector generation involves determining test vectors that flag each of the faults as determined by a comparison of the outputs of the good and single fault configurations. Further, the method handles both combinational as well as sequential type circuits which require generating a multiplicity of test vectors for each fault. The successful test vectors are then propagated to the inputs and outputs of the electronic circuit, through driver and receiver sub-circuits, modeled via their corresponding TGFS comparators, by means of input/output/function messaging buses. A method of fault simulation utilizing the TGFS comparators working under a fault specific approach determines the fault coverage of the test vectors.

Journal ArticleDOI
TL;DR: A method to estimate the coverage of path delay faults of a given test set, without enumerating paths, is proposed, which is polynomial in the number of lines in the circuit, and thus allows circuits with large numbers of paths to be considered under the path delay fault model.
Abstract: A method to estimate the coverage of path delay faults of a given test set, without enumerating paths, is proposed. The method is polynomial in the number of lines in the circuit, and thus allows circuits with large numbers of paths to be considered under the path delay fault model. Several levels of approximation, with increasing accuracy and increasing polynomial complexity, are proposed. Experimental results are presented to show the effectiveness and accuracy of the estimate in evaluating the path delay fault coverage. Combining this nonenumerative estimation method with a test generation method for path delay faults would yield a cost effective method to consider path delay faults in large circuits, which are beyond the capabilities of existing test generation and fault simulation procedures that are based on enumeration of paths.

Proceedings ArticleDOI
02 Oct 1994
TL;DR: A testability analysis procedure for complex analogue circuits, based on layout-dependent fault models extracted from process defect statistics, is presented; it is concluded that the fault coverage achieved by the existing production test can be improved by a supplementary test based on power supply variations.
Abstract: A testability analysis procedure for complex analogue circuits is presented based on layout dependent fault models extracted from process defect statistics. The technique has been applied to a mixed-signal phase locked loop circuit and a number of test methodologies have been evaluated including the existing production test. It is concluded that the fault coverage achieved by this test can be improved by the use of a supplementary test based on power supply variations.

Journal ArticleDOI
TL;DR: The issue of test selection is investigated with respect to a general distributed test architecture containing distributed interfaces; it is shown that the methods given in the literature cannot ensure the same fault coverage as the corresponding original testing methods.

Proceedings ArticleDOI
02 Oct 1994
TL;DR: Experimental results are presented in this paper depicting that the proposed GLFSR can attain fault coverage equivalent to the LFSR, but with significantly fewer patterns.
Abstract: A new and effective pseudo-random test pattern generator, termed GLFSR, is introduced. These are linear feedback shift registers over a Galois field GF(2/sup /spl delta//), (/spl delta/>1). Unlike conventional LFSRs, which are over GF(2), these generators are not equivalent to cellular arrays, and are shown to achieve significantly higher fault coverage. Experimental results are presented in this paper depicting that the proposed GLFSR can attain fault coverage equivalent to the LFSR, but with significantly fewer patterns. Specifically, results obtained demonstrate that in combinational circuits, for both stuck-at as well as transition faults, the proposed GLFSR outperforms all conventional pattern generators. Moreover, these experimental results are validated by certain randomness tests which demonstrate that the patterns generated by GLFSR achieve a higher degree of randomness.

Journal ArticleDOI
25 Apr 1994
TL;DR: This work analyzes the probability that an arbitrary pseudo-random test sequence of short length detects all faults, and chooses the shortest subsequence that includes test patterns for all the faults of interest, hence resulting in 100% fault coverage.
Abstract: When using Built-In Self Test (BIST) for testing VLSI circuits, a major concern is the generation of proper test patterns that detect the faults of interest. Usually a linear feedback shift register (LFSR) is used to generate test patterns. We first analyze the probability that an arbitrary pseudo-random test sequence of short length detects all faults. The term short is relative to the probability of detecting the fault having the fewest test patterns. We then show how to guide the search for an initial state (seed) for a LFSR with a given primitive feedback polynomial so that all the faults of interest are detected by a minimum length test sequence. Our algorithm is based on finding the location of test patterns in the sequence generated by this LFSR. This is accomplished using the theory of discrete logarithms. We then select the shortest subsequence that includes test patterns for all the faults of interest, hence resulting in 100% fault coverage.
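For small registers, the effect of the seed-selection idea can be reproduced by brute force instead of discrete logarithms: generate the full LFSR state sequence, locate the positions of the patterns each fault needs, and pick the shortest (cyclic) window covering every fault; the seed is then the state at the start of that window. The 4-bit register and fault table below are made up.

```python
# Brute-force stand-in for the paper's discrete-logarithm machinery.
def lfsr_sequence(seed, taps, nbits):
    """States of a Fibonacci LFSR, one full period starting from the seed."""
    state, seen, seq = seed, set(), []
    while state not in seen:
        seen.add(state)
        seq.append(state)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << nbits) - 1)
    return seq

# x^4 + x + 1 is primitive over GF(2): taps at bits 3 and 0, period 15.
seq = lfsr_sequence(0b0001, taps=(3, 0), nbits=4)

# Toy fault table: patterns that detect each fault.
fault_tests = {'f1': {0b1001, 0b0110}, 'f2': {0b0011}, 'f3': {0b1100}}

def shortest_window(seq, fault_tests):
    """(start index, length) of the shortest cyclic window covering all faults."""
    n, best = len(seq), None
    for start in range(n):
        need = set(fault_tests)
        for length in range(1, n + 1):
            pat = seq[(start + length - 1) % n]
            need -= {f for f in need if pat in fault_tests[f]}
            if not need:
                if best is None or length < best[1]:
                    best = (start, length)
                break
    return best
```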

Proceedings ArticleDOI
25 Apr 1994
TL;DR: It was found that several multilevel, synthesized, robust path delay testable circuits require impractically long pseudo-random test sequences; weighted random testing techniques developed for robust path delay testing are therefore applied.
Abstract: Importance of delay testing is growing especially for high speed circuits. Delay testing using automatic test equipment is expensive. Built-in self-test can significantly reduce the cost of comprehensive delay testing by replacing the test equipment. It was found that several multilevel, synthesized, robust path delay testable circuits require impractically long pseudo-random test sequences. Weighted random testing techniques have been developed for robust path delay testing. The proposed technique is successfully applied to these circuits and 100% robust path delay fault coverage is obtained using only 1-2 sets of weights.

Journal ArticleDOI
TL;DR: A basic framework to characterize the behavior of two-dimensional (2-D) cellular automata (CA) has been proposed and a method of synthesizing 2-D CAs to generate patterns of specified length has been reported.
Abstract: A basic framework to characterize the behavior of two-dimensional (2-D) cellular automata (CA) has been proposed. The performance of the regular structure of the 2-D CA has been evaluated for pseudo-random pattern generation. The potential increase in the local neighborhood structure for 2-D CA has led to better randomness of the generated patterns as compared to LFSR and 1-D CA. The quality of the random patterns generated with 2-D CA based built-in-self-test (BIST) structure has been evaluated by comparing the fault coverage on several benchmark circuits. Also a method of synthesizing 2-D CAs to generate patterns of specified length has been reported. The patterns generated can serve as a very good source of random two-dimensional sequences and of variable-length parallel patterns having virtually no correlation among the bit patterns.
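A 2-D additive CA of the general kind discussed here is simple to simulate: each cell XORs itself with its von Neumann neighbours, and successive grid states are flattened into parallel test patterns. The specific rule, null boundary, and 3x3 size below are illustrative choices, not the paper's synthesized CA.

```python
# Sketch of a 2-D additive CA (XOR of the von Neumann neighbourhood,
# null boundary conditions).
def step(grid):
    rows, cols = len(grid), len(grid[0])
    def cell(r, c):
        return grid[r][c] if 0 <= r < rows and 0 <= c < cols else 0
    return [[cell(r, c) ^ cell(r - 1, c) ^ cell(r + 1, c)
             ^ cell(r, c - 1) ^ cell(r, c + 1)
             for c in range(cols)] for r in range(rows)]

def patterns(grid, n):
    """Flatten n successive CA states into parallel test patterns."""
    out = []
    for _ in range(n):
        out.append([b for row in grid for b in row])
        grid = step(grid)
    return out

pats = patterns([[0, 0, 1], [0, 0, 0], [0, 0, 0]], 8)
```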

Proceedings ArticleDOI
06 Nov 1994
TL;DR: This paper proposes a correction technique for simulation-based ATPG based on identifying the diverging state and on computing a fault cluster (faults close to each other) which has been used to generate tests with very high fault coverage.
Abstract: Simulation-based test vector generators require much less computer time than deterministic ATPG but they generate longer test sequences and sometimes achieve lower fault coverage. This is due to the divergence in the search process. In this paper, we propose a correction technique for simulation-based ATPG. This technique is based on identifying the diverging state and on computing a fault cluster (faults close to each other). A set of candidate faults from the cluster is targeted with a deterministic ATPG and the resulting test sequence is used to restart the search process of the simulation-based technique. The above process is repeated until all faults are detected or proven to be redundant/untestable. The program implementing this approach has been used to generate tests with very high fault coverage, and runs about 10 times faster than traditional deterministic techniques with very good test quality in terms of test length and fault coverage.

Proceedings ArticleDOI
23 Sep 1994
TL;DR: This paper presents a testability improvement method for digital systems described in VHDL behavioral specification based on testability analysis at register-transfer (RT) level which reflects test pattern generation costs, fault coverage and test application time.
Abstract: This paper presents a testability improvement method for digital systems described in VHDL behavioral specification. The method is based on testability analysis at register-transfer (RT) level which reflects test pattern generation costs, fault coverage and test application time. The testability is measured by controllability and observability, and determined by the structure of a design, the depth from I/O ports and the functional units used. In our approach, hard-to-test parts are detected by a testability analysis algorithm and transformed by some known DFT techniques. Our experimental results show that testability improvement transformations guided by the RT level testability analysis have a strong correlation to ATPG results at gate level.

Journal ArticleDOI
TL;DR: In this paper, a model-based technique is applied to the problem of detecting degraded performance in a military turbofan engine from take-off acceleration-type transients; the important conclusion from this work is that good fault coverage can be gleaned from the resultant pseudo-steady-state gain estimates derived in this way.
Abstract: Reliable methods for diagnosing faults and detecting degraded performance in gas turbine engines are continually being sought. In this paper, a model-based technique is applied to the problem of detecting degraded performance in a military turbofan engine from take-off acceleration-type transients. In the past, difficulty has been experienced in isolating the effects of some of the physical processes involved. One such effect is the influence of the bulk metal temperature on the measured engine parameters during large power excursions. It will be shown that the model-based technique provides a simple and convenient way of separating this effect from the faster dynamic components. The important conclusion from this work is that good fault coverage can be gleaned from the resultant pseudo-steady-state gain estimates derived in this way.

Proceedings ArticleDOI
M. Roncken1
03 Nov 1994
TL;DR: This work investigates how the partial scan principles from the synchronous test world can be adapted to asynchronous circuits, and shows that asynchronous partial scan design can be approached as a high-level design activity.
Abstract: We present a design-for-testability method for asynchronous circuits based on partial scan. More specifically, we investigate how the partial scan principles from the synchronous test world can be adapted to asynchronous circuits, and we show that asynchronous partial scan design can be approached as a high-level design activity. The method is demonstrated on an asynchronous error corrector for the DCC player. It has been used effectively in the production and application-mode tests of this 155 k transistor chip-set. In particular, it has led to a high stuck-at output fault coverage of 99.9% in a short test time of 64 msec at the expense of less than 3% additional area.

Proceedings ArticleDOI
10 Oct 1994
TL;DR: The methods presented can easily handle large and complex behavioral descriptions with loops, conditionals, and allow different scheduling constructs, such as pipelining, multicycling, and chaining.
Abstract: Most existing behavioral synthesis systems put emphasis on optimizing area and performance. Only recently has some research been done to consider testability during behavioral synthesis. In our previous work, we integrated hierarchical testability with behavioral synthesis of simple digital data path circuits to synthesize highly testable circuits (S. Bhatia and N.K. Jha, 1994). In the current work, we consider the testability of complete controller and data path during behavioral synthesis. The methods presented can easily handle large and complex behavioral descriptions with loops, conditionals, and allow different scheduling constructs, such as pipelining, multicycling, and chaining. The test set for the combined controller/data path is generated during synthesis in a very short time and near 100% fault coverage is obtained for almost all the synthesized circuits at practically zero overheads.