
Showing papers on "Fault coverage" published in 1989


Journal ArticleDOI
TL;DR: This study shows that a test sequence produced by T-method has a poor fault detection capability, whereas test sequences produced by U-, D-, and W-methods have comparable (superior to that for T-method) fault coverage on several classes of randomly generated machines used in this study.
Abstract: The authors present a detailed study of four formal methods (T-, U-, D-, and W-methods) for generating test sequences for protocols. Applications of these methods to the NBS Class 4 Transport Protocol are discussed. An estimation of fault coverage of four protocol-test-sequence generation techniques using Monte Carlo simulation is also presented. The ability of a test sequence to decide whether a protocol implementation conforms to its specification heavily relies on the range of faults that it can capture. Conformance is defined at two levels, namely, weak and strong conformance. This study shows that a test sequence produced by T-method has a poor fault detection capability, whereas test sequences produced by U-, D-, and W-methods have comparable (superior to that for T-method) fault coverage on several classes of randomly generated machines used in this study. Also, some problems with a straightforward application of the four protocol-test-sequence generation methods to real-world communication protocols are pointed out.
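As an illustration of the Monte Carlo estimation described above, one can inject random transition faults into a toy Mealy machine and count how many mutants a candidate test sequence distinguishes from the specification. The machine, the mutation model, and the sequence below are hypothetical; this is a sketch of the idea, not the paper's experimental setup.

```python
import random

# Toy Mealy machine: (state, input) -> (next_state, output).  Hypothetical.
SPEC = {
    ('A', 0): ('B', 1), ('A', 1): ('A', 0),
    ('B', 0): ('C', 0), ('B', 1): ('A', 1),
    ('C', 0): ('A', 1), ('C', 1): ('B', 0),
}
STATES = ('A', 'B', 'C')

def run(machine, start, inputs):
    """Apply an input sequence and collect the output sequence it produces."""
    state, outputs = start, []
    for x in inputs:
        state, y = machine[(state, x)]
        outputs.append(y)
    return outputs

def random_mutant(spec, rng):
    """Perturb one transition: either its tail state or its output bit."""
    mutant = dict(spec)
    key = rng.choice(list(spec))
    nxt, out = spec[key]
    if rng.random() < 0.5:
        mutant[key] = (rng.choice(STATES), out)   # transfer fault (may be a no-op)
    else:
        mutant[key] = (nxt, 1 - out)              # output fault
    return mutant

def estimated_coverage(test_sequence, trials=10000, seed=0):
    """Fraction of random mutants whose output differs from the specification."""
    rng = random.Random(seed)
    expected = run(SPEC, 'A', test_sequence)
    caught = sum(run(random_mutant(SPEC, rng), 'A', test_sequence) != expected
                 for _ in range(trials))
    return caught / trials

print(estimated_coverage([0, 1, 0, 0, 1, 1, 0, 1]))
```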

402 citations


Journal ArticleDOI
TL;DR: The authors describe a method for diagnosing the failures observed in testing VLSI designs that use the scan-path structure by simulating selected faults after testing using a fault simulator that allows the application of several patterns in parallel.
Abstract: The authors describe a method for diagnosing the failures observed in testing VLSI designs that use the scan-path structure. Diagnosis consists of simulating selected faults after testing using a fault simulator that allows the application of several patterns in parallel. The method is also suitable for signature-based random-pattern testing. The authors discuss diagnostic fault simulation, fault-list generation, relating faults to defects, diagnostic strategy, and random-pattern failures, and they report some experimental results to indicate the procedure's power.

289 citations


Book
01 Sep 1989
TL;DR: In this article, auxiliary signals are used for improving fault detection in the chemical process, and a sequential probability ratio test is used to detect faults in a chemical process with auxiliary signals.
Abstract: Contents: Preliminaries; Sequential probability ratio test; Auxiliary signals for improving fault detection; Extension to multiple hypothesis testing; Modelling and identification of the chemical process; Fault detection and diagnosis in the chemical process; Conclusions and further research.

132 citations


Proceedings ArticleDOI
29 Aug 1989
TL;DR: The authors introduce WARP, a weighted test generation system that includes a canonical circuit for resolving weights to any desired precision, analyze pattern coverage, and report benchmark results on fault coverage differences between CARs and LFSRs.
Abstract: The authors introduce WARP, a weighted test generation system that includes a canonical circuit for resolving weights to any desired precision. Either cellular automata registers (CARs) or linear feedback shift registers (LFSRs) are used as a source of random patterns, and optionally, it is possible to permute and linearly combine random bits from the source to control inputs to the weighting circuit. The authors analyze pattern coverage and conclude with benchmark results on fault coverage differences between CARs and LFSRs.
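One common way to realize a weighting circuit from an equiprobable bit source such as an LFSR or CAR is to compare a few source bits against a threshold, which resolves the weight to 1/2^k for k bits of precision. The sketch below assumes that comparator realization; it is not necessarily WARP's canonical circuit.

```python
import random

def weighted_bit(weight, precision_bits, rng):
    """
    Emit a 1 with probability close to `weight`, using only equiprobable
    source bits such as an LFSR or CAR would supply.  A k-bit comparator
    resolves the weight to a granularity of 1/2**k.
    """
    threshold = round(weight * (1 << precision_bits))
    value = 0
    for _ in range(precision_bits):
        value = (value << 1) | rng.getrandbits(1)   # next pseudorandom source bit
    return 1 if value < threshold else 0

def weighted_pattern(weights, precision_bits=4, rng=None):
    """One weighted random test pattern, one weight per circuit input."""
    rng = rng or random.Random(1)
    return [weighted_bit(w, precision_bits, rng) for w in weights]

# Inputs biased towards 1 where, say, stuck-at-0 faults are hard to excite.
print(weighted_pattern([0.5, 0.875, 0.125, 0.75]))
```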

103 citations


Journal ArticleDOI
TL;DR: Experimental results are presented showing the effectiveness of applying a concurrent fault simulator to automatic test vector generation for combinational and sequential circuits.
Abstract: A description is given of the application of a concurrent fault simulator to automatic test vector generation. As faults are simulated in the fault simulator, a cost function is simultaneously computed. A simple cost function is the distance (in terms of the number of gates and flip-flops) of a fault effect from a primary output. The input vector is then modified to reduce the cost function until a test is found. Experimental results are presented showing the effectiveness of this method in generating tests for combinational and sequential circuits. By defining suitable cost functions, it has been possible to generate: (1) initialization sequences; (2) tests for a group of faults; and (3) a test for a given fault. Even asynchronous sequential circuits can be handled by this approach.
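A minimal sketch of the cost-driven search loop described above, with the simulator-computed cost replaced by a stand-in function; the circuit model, trial budget, and acceptance rule are assumptions for illustration, not the paper's implementation.

```python
import random

def guided_test_search(cost, n_inputs, max_trials=1000, seed=0):
    """
    Greedy search in the style described above: flip one input bit at a
    time and keep the change whenever the cost does not increase.
    `cost(vector)` is assumed to return 0 once the vector is a test and
    otherwise the distance of the fault effect from a primary output.
    """
    rng = random.Random(seed)
    vector = [rng.randint(0, 1) for _ in range(n_inputs)]
    best = cost(vector)
    for _ in range(max_trials):
        if best == 0:
            return vector                 # fault effect reached a primary output
        i = rng.randrange(n_inputs)
        vector[i] ^= 1                    # trial modification of one input
        trial = cost(vector)
        if trial <= best:
            best = trial                  # accept the change
        else:
            vector[i] ^= 1                # reject: undo the flip
    return None                           # no test found within the budget

# Stand-in for the simulator-computed cost: Hamming distance from 1 0 1 1.
target = [1, 0, 1, 1]
print(guided_test_search(lambda v: sum(a != b for a, b in zip(v, target)), 4))
```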

97 citations


Proceedings ArticleDOI
01 Aug 1989
TL;DR: A uniqueness criterion is discussed to capture a desirable fault coverage for finite-state machine (FSM) test sequences, and it is shown that test sequences generated by the UIOv approach and the DS approach always satisfy this criterion.
Abstract: This paper shows that the Unique Input/Output (UIO) approach and the Distinguishing Sequence (DS) approach for the conformance testing of protocol implementations do not always produce identical fault coverages, contrary to a previous claim. In the UIO approach, when UIO sequences and signatures are not unique in an implementation, they may not be able to detect erroneous states in the implementation. The UIO approach is revised here with the addition of a verification procedure to ensure that the UIO sequences are all unique in an implementation. Since signatures are generally not unique, this revision requires substituting the use of a signature for a state, S, with a set of input/output sequences, IO(S,K)s, unique to S, each of which distinguishes S from at least one other state, K. Verification is then applied to the IO(S,K)s. Fault coverage in the revised UIO (UIOv) approach is better than that in the original approach. A uniqueness criterion is discussed here to capture a desirable fault coverage for finite-state machine (FSM) test sequences. This criterion ensures the detection of any faulty FSM implementation provided that its number of states does not exceed that of the specified FSM. It is shown that test sequences generated by the UIOv approach and the DS approach always satisfy the uniqueness criterion. In fact, the DS approach is a special case of the UIOv approach; however, the UIOv approach has wider applicability and is generally applicable to k-distinguishable FSMs.
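For a concrete picture of what a UIO sequence is, the brute-force sketch below searches, for each state of a small hypothetical Mealy machine, for the shortest input sequence whose output no other state can reproduce. Real UIO generation and the UIOv verification step are more involved; this only illustrates the uniqueness property being exploited.

```python
from itertools import product

# Hypothetical Mealy machine: (state, input) -> (next_state, output).
FSM = {
    ('S1', 'a'): ('S2', 0), ('S1', 'b'): ('S1', 1),
    ('S2', 'a'): ('S3', 1), ('S2', 'b'): ('S1', 0),
    ('S3', 'a'): ('S1', 0), ('S3', 'b'): ('S2', 0),
}
STATES, INPUTS = ('S1', 'S2', 'S3'), ('a', 'b')

def outputs(state, seq):
    """Output sequence the machine produces from `state` under inputs `seq`."""
    out = []
    for x in seq:
        state, y = FSM[(state, x)]
        out.append(y)
    return tuple(out)

def uio(state, max_len=4):
    """Shortest input sequence whose outputs distinguish `state` from all others."""
    for length in range(1, max_len + 1):
        for seq in product(INPUTS, repeat=length):
            if all(outputs(other, seq) != outputs(state, seq)
                   for other in STATES if other != state):
                return seq
    return None   # no UIO of length <= max_len (some states may lack one)

for s in STATES:
    print(s, uio(s))     # e.g. S1 -> ('b',), S2 -> ('a',), S3 -> ('a', 'a')
```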

96 citations


Journal ArticleDOI
TL;DR: In this article, it is shown that extremely high single-stuck fault coverage is necessary for high-quality products and that the dependence of quality on test coverage is linear rather than exponential.
Abstract: It is shown that extremely high single-stuck fault coverage is necessary for high-quality products. Even 100% single-stuck fault coverage may not guarantee adequate quality. Results are presented that extend previous work and show that for high required IC quality, process yield has a negligible effect on required test thoroughness. The extensions consist of: removing the assumption of a one-to-one correspondence between chip defects and single-stuck faults; demonstrating that for high quality levels the dependence of quality on test coverage is linear rather than exponential and that for high yields, the dependence of quality on yield is also linear; and showing that the yield used in the calculations should be functional rather than die yield. The theoretical results are compared with data obtained from measurements at a production IC facility.
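The classical starting point that this work extends is the Williams-Brown relation DL = 1 - Y^(1-T) between defect level, functional yield Y, and fault coverage T. A small calculation with illustrative numbers shows why very high coverage is needed:

```python
def defect_level(functional_yield, fault_coverage):
    """Williams-Brown estimate: DL = 1 - Y ** (1 - T)."""
    return 1.0 - functional_yield ** (1.0 - fault_coverage)

for T in (0.90, 0.99, 0.999, 1.0):
    dl = defect_level(0.5, T)          # 50% functional yield, illustrative
    print(f"coverage {T:.3f} -> defect level {dl * 1e6:8.0f} DPM")
```

For T near 1 this expression is approximately (1 - T) * ln(1/Y), i.e. linear in the uncovered fraction, which is the near-linear behaviour the abstract highlights.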

87 citations


Journal ArticleDOI
TL;DR: In this scheme, self-checking techniques and built-in self-test techniques are combined in an original way to take advantage of each other; the result is a unified BIST scheme (UBIST) that allows high fault coverage for all tests needed for integrated circuits.
Abstract: An original built-in self-test (BIST) scheme is proposed aimed at covering some of the shortcomings of self-checking circuits and applicable to all tests needed for integrated circuits. In this scheme, self-checking techniques and built-in self-test techniques are combined in an original way to take advantage of each other. The result is a unified BIST scheme (UBIST), allowing high fault coverage for all tests needed for integrated circuits, e.g., offline test (design verification, manufacturing test, maintenance test) and online concurrent error detection. An important concept introduced is that of self-exercising checkers. The strongly code-disjoint property of the checkers is ensured for a very large class of fault hypotheses by internal test pattern generation, and the design of the checkers is simplified.

67 citations


Journal ArticleDOI
TL;DR: In a new approach to fault detection in combinational networks based on Reed-Muller (RM) transforms, an upper bound is found for the minimum number of test patterns required to detect a fault.
Abstract: A new approach for fault detection in combinational networks based on Reed-Muller (RM) transforms is presented. An upper bound on the number of RM spectral coefficients required to be verified for detection of multiple stuck-at faults and single bridging faults at the input lines of an n-input network is shown to be n. The time complexity (time required to test a network) for detection of multiple terminal faults and the storage required for storing the test are determined. An upper bound is found for the minimum number of test patterns required to detect a fault. The authors present standard tests based on this result, with a simple test generation procedure and upper bounds on minimal numbers of test patterns.
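For reference, the positive-polarity Reed-Muller spectral coefficients that such tests verify can be computed from a truth table with a simple GF(2) butterfly transform. The function below is an arbitrary example, not one from the paper:

```python
def reed_muller_spectrum(truth_table):
    """
    Positive-polarity Reed-Muller (ANF) coefficients of a Boolean function,
    computed by the in-place GF(2) 'butterfly' transform.  truth_table[i]
    is f at the input whose binary encoding is i; output index i marks the
    product term the coefficient belongs to (bit j set = variable x_j present).
    """
    coeffs = list(truth_table)
    n = len(coeffs)
    assert n and n & (n - 1) == 0, "truth table length must be a power of two"
    step = 1
    while step < n:
        for block in range(0, n, 2 * step):
            for i in range(block, block + step):
                coeffs[i + step] ^= coeffs[i]   # XOR fold, arithmetic modulo 2
        step *= 2
    return coeffs

# Example: f(x2, x1, x0) = x0 XOR (x1 AND x2)
f = [(i & 1) ^ (((i >> 1) & 1) & ((i >> 2) & 1)) for i in range(8)]
print(reed_muller_spectrum(f))   # 1s at indices 1 (term x0) and 6 (term x1*x2)
```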

65 citations


Proceedings ArticleDOI
05 Nov 1989
TL;DR: The authors demonstrate that test vectors can be generated using realistic defect models and actual IC layouts, which should lead to test vectors with a higher defect detectability.
Abstract: Conventionally, test vectors are generated using gate-level models to represent the circuit design and abstract fault models (e.g. the stuck-fault model) to describe all of the processing defects causing circuit failure. The authors demonstrate that test vectors can be generated using realistic defect models and actual IC layouts, which should lead to test vectors with a higher defect detectability. The layout-driven generation of the faults has a computational complexity which is similar to that of design-rule checking, i.e. O(n log n).

62 citations


Proceedings ArticleDOI
29 Aug 1989
TL;DR: The authors propose a load-balancing method which uses static partitioning initially and then dynamic allocation of work for processors which become idle and present experimental results based on an implementation on the Intel iPSC/2 hypercube multiprocessor using the ISCAS combinational benchmark circuits.
Abstract: The authors address the issues involved in providing an integrated test generation/fault simulation environment on a parallel processor. They propose heuristics to partition faults for parallel test generation with minimization of the overall run time and test length as an objective. For efficient utilization of available processors, the work load has to be balanced at all times. Since it is very difficult to predict a priori how difficult it is to generate a test for a particular fault, the authors propose a load-balancing method which uses static partitioning initially and then dynamic allocation of work for processors which become idle. They present experimental results based on an implementation on the Intel iPSC/2 hypercube multiprocessor using the ISCAS combinational benchmark circuits. The main contribution of the work described is to show that if one is not careful in the design of a parallel algorithm, apart from inefficient utilization of available processors, degradation in the quality of solutions can occur.

Proceedings ArticleDOI
01 Jun 1989
TL;DR: In this article, a testability solution is proposed in which externally accessible test points are pre-designed into cells that comprise the VLSI designs, accessed through an on-chip grid of orthogonal probe and sense lines.
Abstract: A new testability solution is proposed in which externally accessible test points are pre-designed into cells that comprise the VLSI designs. The test points are accessed through an on-chip grid of orthogonal probe and sense lines. The resultant VLSI design consists of a large number of test points through which test signals on every cell on the IC can be measured or modified to a limited extent. The sizable number of test points improves the testability of the designs by a very large factor. Additionally, analog measurement and signal injection capabilities allow detection of practical CMOS fault modes such as opens, shorts, open or closed FETs and even noise margins. The large observability of CrossCheck based designs reduces the automatic test pattern generation problem to one of providing control only. Several ISCAS benchmark designs are analyzed using CrossCheck cell libraries and fault models. The results show that over 97 percent coverage of a broad range of fault modes, such as opens and shorts, can be obtained on VLSI CMOS designs without the need for large computing resources.

Journal ArticleDOI
TL;DR: A way to merge boundary scan with the built-in self-test (BIST) of printed circuit boards is proposed; the advantages of the CA register are its modularity, which allows modification without major redesign, its higher stuck-at fault coverage, and its higher transition fault coverage.
Abstract: The authors propose a way to merge boundary scan with the built-in self-test (BIST) of printed circuit boards. Their boundary-scan structure is based on Version 2.0 of the Joint Test Action Group's recommendations for boundary scan and incorporates BIST using a register based on cellular automata (CA) techniques. They examine test patterns generated from this register and the more conventional linear-feedback shift register. The advantages of the CA register, or CAR, are its modularity, which allows modification without major redesign, its higher stuck-at fault coverage, and its higher transition fault coverage.
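A cellular-automaton register of the kind compared here is typically a one-dimensional array whose cells update by rules 90 and 150 with null boundaries; suitably chosen hybrids are known to produce maximal-length pseudorandom sequences with less correlation between neighbouring bits than an LFSR. The rule assignment and seed below are illustrative, not the register used in the paper:

```python
def car_step(state, rules):
    """
    One clock of a rule-90/150 hybrid cellular-automaton register with
    null boundary conditions: rule 90 is left XOR right, rule 150 is
    left XOR self XOR right.
    """
    n = len(state)
    nxt = []
    for i, rule in enumerate(rules):
        left = state[i - 1] if i > 0 else 0
        right = state[i + 1] if i < n - 1 else 0
        bit = left ^ right
        if rule == 150:
            bit ^= state[i]
        nxt.append(bit)
    return nxt

state = [1, 0, 0, 1, 0, 1, 0, 0]                  # any non-zero seed
rules = [90, 150, 90, 150, 90, 150, 90, 150]      # illustrative hybrid assignment
for _ in range(5):                                # successive test patterns
    print(state)
    state = car_step(state, rules)
```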

Proceedings ArticleDOI
29 Aug 1989
TL;DR: The authors present the extension of the ATPG system SOCRATES to hierarchical test pattern generation, which is based upon HLPs and the strategy of dynamically expanding the HLPs to their gate-level realization, at most one at a time.
Abstract: It is demonstrated that the exploitation of high-level primitives (HLPs) and, in particular, of the knowledge concerning their function in ATPG (automatic test pattern generation) leads to significant improvements in implication, unique sensitization, and multiple backtrace. Motivated by this observation and the necessity of covering all faults inside HLPs, the authors present the extension of the ATPG system SOCRATES to hierarchical test pattern generation, which is based upon HLPs and the strategy of dynamically expanding the HLPs to their gate-level realization, at most one at a time. Experimental results have substantiated that the proposed approach performs significantly better in terms of CPU time, elapsed time, fault coverage, and memory requirements than a gate-level ATPG algorithm. It is expected that the extended SOCRATES algorithm will be capable of coping with circuits consisting of 100,000 gates and more within reasonable times, even in a workstation environment.

Proceedings ArticleDOI
12 Apr 1989
TL;DR: In this article, a low-cost self-test and self-diagnosis architecture for locating both defective chips and bad interconnects on a printed-circuit board is presented.
Abstract: The authors present a low-cost self-test and self-diagnosis architecture for locating both defective chips and bad interconnects on a printed-circuit board. It is assumed that the boundary scan method developed by the Joint Test Action Group (JTAG) is applied to all chips on the board. To achieve high fault coverage, the proposed method uses pseudorandom patterns from a cellular automaton to locate defective chips, and walking sequences to locate bad interconnects. It is shown that the effectiveness of this method depends on the type of circuits to be tested.
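Walking sequences for interconnect testing drive one net at a time to the opposite value of all the others, so a short or open shows up as a mismatch between driven and observed values. A toy sketch follows; the board model and the wired-OR short behaviour are assumptions for illustration.

```python
def walking_one_patterns(n_nets):
    """Walking-1 sequence: net i is driven to 1 while every other net is 0."""
    return [[1 if j == i else 0 for j in range(n_nets)] for i in range(n_nets)]

def diagnose(patterns, observe):
    """Nets whose observed value ever differs from the driven value are suspects."""
    suspects = set()
    for pattern in patterns:
        seen = observe(pattern)              # board-level readback (stubbed below)
        suspects |= {i for i, (d, s) in enumerate(zip(pattern, seen)) if d != s}
    return sorted(suspects)

# Toy board with a short between nets 1 and 2, assumed to behave as wired-OR.
def shorted_board(pattern):
    out = list(pattern)
    out[1] = out[2] = pattern[1] | pattern[2]
    return out

print(diagnose(walking_one_patterns(4), shorted_board))   # -> [1, 2]
```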

Proceedings ArticleDOI
05 Nov 1989
TL;DR: A method is proposed to determine all the possible ranges of detected fault sizes, thereby maximizing the fault coverage of a given test sequence, and methods are given to achieve such coverages wherever possible.
Abstract: Existing methodologies for determining gate delay fault coverages through the computation of detected fault sizes are shown to have certain deficiencies. A method is proposed to determine all the possible ranges of detected fault sizes, thereby maximizing the fault coverage of a given test sequence. The ultimate goal of ensuring that the coverage for a particular fault extends up to the actual circuit slack is explored, and methods are given to achieve such coverages wherever possible. Results of experiments performed to evaluate the practical benefits of the proposed methods are reported.

Journal ArticleDOI
TL;DR: The authors propose a statistical model for measuring delay-fault coverage that provides a figure of merit for delay testing in the same way that fault coverage provides one for the testing of single stuck-at faults.
Abstract: The authors propose a statistical model for measuring delay-fault coverage. The model provides a figure of merit for delay testing in the same way that fault coverage provides one for the testing of single stuck-at faults. The model measures test effectiveness in terms of the propagation delay of the path to be tested, the size of the delay defect, and the system clock interval, and then combines the data for all delay faults to measure total delay-fault coverage. The authors also propose a model for measuring the defect level as a function of the manufacturing yield and the predictions of the statistical delay-fault coverage model.
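In the spirit of the model described above, each delay fault can be graded by whether the tested path's propagation delay plus the defect size exceeds the system clock interval; averaging over an assumed defect-size distribution gives a per-fault coverage figure. The numbers and the uniform size samples below are purely illustrative:

```python
def delay_fault_coverage(path_delay, clock_period, defect_sizes):
    """
    Fraction of delay-defect sizes a test along this path catches: the
    fault is observed only if path delay plus defect size exceeds the
    clock interval.  `defect_sizes` are samples of an assumed distribution.
    """
    detected = [s for s in defect_sizes if path_delay + s > clock_period]
    return len(detected) / len(defect_sizes)

# Illustrative numbers: a 10 ns clock and defect sizes from 0.5 ns to 8 ns.
sizes = [0.5 * k for k in range(1, 17)]
for d in (9.5, 7.0, 4.0):                     # propagation delays of tested paths
    c = delay_fault_coverage(d, 10.0, sizes)
    print(f"path delay {d:4.1f} ns -> per-fault coverage {c:.2f}")
```

Combining such per-fault figures over all delay faults is what gives the total delay-fault coverage the model defines.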

Proceedings ArticleDOI
29 Aug 1989
TL;DR: A technique is described for detecting and locating faults in analogue circuits by checking that the measurements are consistent with the circuit function; representing values as ranges overcomes the difficulty of capturing the uncertainty inherent in any analogue design or measurement.
Abstract: The authors describe a technique for detecting and locating faults in analogue circuits by checking that the measurements are consistent with the circuit function. The unique representation used accommodates the imprecise nature of analogue circuits. A model of the circuit is formed from the constraints imposed by the behavior of the components and the interconnections. The values of parameters within the circuit are deduced by propagating the effects of measurements through this model. Faults are implied from the detection of inconsistencies and located by suspending constraints within the model. The method does not use fault simulation and is therefore applicable to any type of fault. It is able to detect performance variations, as well as catastrophic failures. Values are represented as ranges within which the true value lies. This overcomes the difficulty of representing the uncertainty inherent in any analogue design or measurements. The method has been successfully used to detect and locate a number of faults in several circuits.
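The range-based consistency check can be illustrated with interval arithmetic on a two-resistor voltage divider: component tolerances and the measurement window define intervals, and an empty intersection between the predicted and measured output ranges implies a fault. The circuit, tolerances, and measurement below are hypothetical:

```python
def interval_div(a, b):
    """Quotient of two positive intervals (lo, hi)."""
    return (a[0] / b[1], a[1] / b[0])

def interval_mul(a, b):
    """Product of two positive intervals (lo, hi)."""
    return (a[0] * b[0], a[1] * b[1])

def intersect(a, b):
    """Intersection of two intervals, or None if they are inconsistent."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

# Voltage divider Vout = Vin * R2 / (R1 + R2), all components within tolerance.
Vin = (4.95, 5.05)             # volts
R1 = (950.0, 1050.0)           # nominal 1 kOhm, 5%
R2 = (1900.0, 2100.0)          # nominal 2 kOhm, 5%
Rsum = (R1[0] + R2[0], R1[1] + R2[1])
Vout_predicted = interval_mul(Vin, interval_div(R2, Rsum))

Vout_measured = (2.10, 2.20)   # measurement window reported by the tester
print("predicted Vout range:", Vout_predicted)
print("consistent" if intersect(Vout_predicted, Vout_measured) else "fault implied")
```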

Proceedings ArticleDOI
21 Jun 1989
TL;DR: In this paper, a method is described for selecting a minimal set of directly accessible flip-flops; since this selection problem is NP-complete, suboptimal solutions are derived using heuristics.
Abstract: A method is described for selecting a minimal set of directly accessible flip-flops. Since this problem turns out to be NP-complete, suboptimal solutions can be derived using some heuristics. An algorithm is presented to compute the corresponding weights of the patterns, which are time-dependent in some cases. The entire approach is validated with the help of examples. Only 10-40% of the flip-flops have to be integrated into a partial scan path or into a built-in self-test register to obtain nearly complete fault coverage by weighted random patterns.
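The selection problem resembles breaking all cycles in the flip-flop dependency graph (the S-graph). The greedy heuristic sketched below is one plausible approach to that sub-problem, not necessarily the authors' method, and it omits the weight-computation step entirely:

```python
def has_cycle(graph):
    """Standard DFS cycle check on a dict of vertex -> set of successors."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {v: WHITE for v in graph}
    def visit(v):
        colour[v] = GREY
        for w in graph[v]:
            if w in colour and (colour[w] == GREY or
                                (colour[w] == WHITE and visit(w))):
                return True
        colour[v] = BLACK
        return False
    return any(colour[v] == WHITE and visit(v) for v in graph)

def greedy_scan_selection(s_graph):
    """
    Pick flip-flops to scan until the S-graph (flip-flop -> flip-flops it
    feeds) has no directed cycle left.  The score in-degree * out-degree is
    a crude proxy for how much feedback a flip-flop carries.
    """
    graph = {f: set(t) for f, t in s_graph.items()}
    scan = set()
    while has_cycle(graph):
        best = max(graph, key=lambda f: len(graph[f]) *
                   sum(f in targets for targets in graph.values()))
        scan.add(best)
        graph.pop(best)                       # scanning the flip-flop cuts the node
        for targets in graph.values():
            targets.discard(best)
    return scan

# Hypothetical S-graph with one feedback loop F1 -> F2 -> F3 -> F1.
print(greedy_scan_selection({'F1': {'F2'}, 'F2': {'F3'}, 'F3': {'F1'}, 'F4': {'F1'}}))
```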

Proceedings ArticleDOI
29 Aug 1989
TL;DR: The authors have proposed and implemented a dynamic framework and a method for hierarchically generating test patterns from a hierarchical net list and developed a module-oriented decision-making algorithm, MODEM, which entails a dynamic calculus and procedures for a single generic module.
Abstract: The authors have proposed and implemented a dynamic framework and a method for hierarchically generating test patterns from a hierarchical net list. They have shown consistent gains in CPU time over the traditional gate-level implementation while maintaining identical levels of fault coverage. In generating and characterizing modules for a large and varied set of hierarchical benchmarks, the authors benefited considerably from the consistent representations that are available during synthesis from a high-level description or when modules are generated by a process of technology mapping into standard cells. The authors introduced the concept of a single generic module which is hierarchical; the traditional AND, OR, NAND, and NOR are included implicitly. They developed a module-oriented decision-making algorithm, MODEM, which entails a dynamic calculus and procedures such as implication, error propagation, line justification, and probabilistic testability measures for a single generic module. Without loss of generality they adapted the control flow and basic features of PODEM in the first implementation of MODEM.

Proceedings ArticleDOI
02 Oct 1989
TL;DR: A complete test pattern generation system for path delay faults, based on PODEM with a 5-valued logic, is presented, along with criteria and efficient algorithms to prune the number of paths considered for test generation.
Abstract: A complete test pattern generation system for path delay faults is presented. The test pattern generator is based on PODEM using a 5-valued logic. Techniques to prune the search space for test pattern generation are proposed. Since the number of paths for test generation can be exponential in the number of lines in the network, criteria and efficient algorithms to prune the number of paths for test generation are presented. The test generation system is evaluated using the ISCAS combinational benchmark circuits.

Patent
07 Nov 1989
TL;DR: In this article, an adaptive inference system is used to detect and locate faults in an electrical or electronic device or assembly, where a position-dependent, time-ordered test is performed upon the device and assembly to provide a comprehensive error analysis including an array of error data and information that is time interdependent.
Abstract: A method of using an adaptive inference system to detect and locate faults in an electrical or electronic device or assembly. A position-dependent, time-ordered test is performed upon the device or assembly to provide a comprehensive error analysis. The error analysis includes an array of error data and information that is time interdependent. Once fault data is stored in memory, a newly-detected fault can be compared with the stored faults. A relationship between the stored fault data and the detected fault is determined. The system indicates the cause of the detected fault to the operator based on stored fault data that is most probably related to the detected fault. Possibilities of faults within the device or assembly are then displayed. This system analysis and range of potential causes can be evaluated by an operator. In this manner, faults not having been contemplated by stored data and information in the adaptive inference system and not bearing a direct relationship to a problem being reviewed can be identified.

Journal ArticleDOI
05 Nov 1989
TL;DR: The authors present an approach to parallel processing of test generation for logic circuits in a loosely coupled distributed network of general-purpose computers and derive the expressions of optimal granularity in cases of both static and dynamic task allocation.
Abstract: The problem of test generation for logic circuits is known to be NP-hard, and hence it is very hard to speed up the test generation process due to its backtracking mechanism. The authors present an approach to parallel processing of test generation for logic circuits in a loosely coupled distributed network of general-purpose computers. They analyze the effects of the allocation of target faults to processors, the optimal granularity (grain size of target faults), and the speedup ratio of the multiple-processor system to a single-processor system. To analyze the case in which a test pattern generated for one fault can also be a test pattern for other faults if fault simulation is performed, they introduce a ratio of newly processed faults to target faults and derive the expressions of optimal granularity in cases of both static and dynamic task allocation. They also derive an expression of the speedup of a multiple-processor system in the homogeneous case. The analysis indicates that the speedup approaches N, the number of servers, if the data transfer time per fault and the waiting time per communication are much smaller than the processing time per fault and if the decrease ratio of newly processed faults due to overlapped processing is much smaller than the ratio of newly processed faults.
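A simplified reading of that conclusion can be put into numbers: if only a fraction of the targeted faults still needs explicit test generation after fault simulation, and processors duplicate a little of each other's work and pay a small communication cost per fault, the speedup stays close to N only while those overheads are small. The model and parameter values below are illustrative, not the paper's derived expressions:

```python
def estimated_speedup(n_servers, t_proc, t_comm, new_ratio, overlap_ratio):
    """
    Illustrative model: after fault simulation only `new_ratio` of the
    targeted faults needs explicit test generation; overlapped work between
    processors adds `overlap_ratio`, and each fault costs `t_comm` in data
    transfer and waiting on top of `t_proc` of processing.  Speedup tends
    to n_servers as t_comm and overlap_ratio become negligible.
    """
    per_fault_single = new_ratio * t_proc
    per_fault_parallel = (new_ratio + overlap_ratio) * t_proc + t_comm
    return n_servers * per_fault_single / per_fault_parallel

for n in (2, 8, 32):
    s = estimated_speedup(n, t_proc=1.0, t_comm=0.02, new_ratio=0.3, overlap_ratio=0.03)
    print(f"{n:2d} processors -> estimated speedup {s:.1f}")
```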

Proceedings ArticleDOI
29 Aug 1989
TL;DR: It is shown how the test-detect principle can be adapted to the parallel-patterns technique for combinational fault simulation; the dominant-test-detect approach has proved to be effective both for small sets of patterns, as might be used in automatic test pattern generation, and for larger pattern sets that might be used in built-in self-test.
Abstract: It is shown how the test-detect principle can be adapted to the parallel-patterns technique for combinational fault simulation. Several techniques for implementing a parallel-test-detect simulator are presented, with techniques based on dominator analysis providing the fastest fault simulation results. The dominant-test-detect approach has proved to be effective both for small sets of patterns, as might be used in automatic test pattern generation, and for larger pattern sets that might be used in built-in self-test.
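The parallel-patterns half of this idea packs one bit per pattern into a machine word so that a whole word of patterns is simulated per pass; faults are then detected by comparing faulty and fault-free word values. The sketch below shows only that bit-parallel mechanism on a tiny hand-written netlist; the test-detect/dominator analysis itself is not sketched.

```python
import random

WORD = 32                            # simulate 32 patterns per pass
MASK = (1 << WORD) - 1

def simulate(a, b, c, fault=None):
    """
    Bitwise evaluation of a tiny netlist, z = (a AND b) OR (NOT c), over
    WORD patterns at once.  `fault` is an optional (net_name, stuck_value).
    """
    def inject(name, value):
        return (MASK if fault[1] else 0) if fault and fault[0] == name else value
    a, b, c = inject('a', a), inject('b', b), inject('c', c)
    g1 = inject('g1', a & b)
    g2 = inject('g2', ~c & MASK)
    return inject('z', g1 | g2)

rng = random.Random(7)
a, b, c = (rng.getrandbits(WORD) for _ in range(3))
good = simulate(a, b, c)

faults = [(net, v) for net in ('a', 'b', 'c', 'g1', 'g2', 'z') for v in (0, 1)]
detected = [f for f in faults if simulate(a, b, c, f) != good]
print(f"{len(detected)}/{len(faults)} stuck-at faults detected by these {WORD} patterns")
```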

Journal ArticleDOI
TL;DR: The results indicate that a composite test generation strategy that uses multiple guidance heuristics not only provides better fault coverage but also reduces the average time taken to generate a test or determine that a given fault is undetectable.
Abstract: The results of an extensive study of five existing testability measures when used to aid heuristics for automatic test pattern generation algorithms are presented. Each measure was evaluated using over 60,000 faults in circuits of varying size and complexity. The performance of these measures was rated using several different criteria. Based on these results the performance of a composite test generation strategy that uses multiple guidance heuristics was evaluated. The results indicate that this strategy not only provides better fault coverage but also reduces the average time taken to generate a test or determine that a given fault is undetectable.

Proceedings ArticleDOI
05 Nov 1989
TL;DR: The authors propose design for testability at the logic synthesis level by producing a reduced-feedback or pipeline-like structure which is easily analyzed by a sequential circuit test generator.
Abstract: The authors propose design for testability at the logic synthesis level. Their state assignment is aimed at producing a reduced-feedback or pipeline-like structure which is easily analyzed by a sequential circuit test generator. State variables are assigned one at a time such that a state variable depends only on primary inputs and the previously assigned state variables. This results in a purely pipeline structure for finite-memory or definite machines. For other machines, the number of cycles in the implemented structure is minimized. The authors give several examples to compare their reduced-feedback synthesis with another method that is aimed at reducing the amount of logic in a multilevel implementation. Results show a marked improvement in test generation time and fault coverage; in terms of logic their method did just as well as the other method.

Proceedings ArticleDOI
01 Jun 1989
TL;DR: The method is shown to correctly classify definitely detectable faults which are mis-classified by methods recently reported elsewhere, and the effect of the delay fault is explicitly described by the new waveform method.
Abstract: A new, simplified waveform method is presented for delay fault testing. The method enables accurate calculation of a delay fault detection threshold for definitely detectable faults, and a delay fault range for possibly detectable faults. The method is shown to correctly classify definitely detectable faults which are mis-classified by methods recently reported elsewhere [1] [2]. A quantitative delay fault model with variable fault size is used, and the effect of the delay fault is explicitly described by the new waveform method. The calculation of the detectable delay size threshold occurs in linear time for any definitely detectable fault.

Journal ArticleDOI
F. Brglez, D. Bryan, J. Calhoun, G. Kedem, R. Lisanke
TL;DR: The authors show that by compiling from a unified design specification followed by logic synthesis it is possible to reduce the problem of automatic test-pattern generation and present a language-based design capture and logic synthesis with hierarchical test pattern generation and redundancy removal techniques.
Abstract: The authors present an integrated, compiler-driven approach to digital chip design that automates mask layout and test-pattern generation for 100% stuck-at fault coverage. This approach is well suited for designs where it is most important to minimize the design cycle time rather than the silicon area. The authors show that by compiling from a unified design specification followed by logic synthesis it is possible to reduce the problem of automatic test-pattern generation. They present a language-based design capture and logic synthesis with hierarchical test pattern generation and redundancy removal techniques. A section on benchmark results highlights the close coupling of a language-based design specification, logic synthesis, and testability.

Proceedings ArticleDOI
01 Jun 1989
TL;DR: A parallel test generation algorithm is proposed which tries to achieve a high fault coverage for HTD faults in a reasonable amount of time and exhibits superlinear speedups in some cases due to search anomalies.
Abstract: For circuits of VLSI complexity, test generation time can be prohibitive. Most of the time is consumed by hard-to-detect (HTD) faults which might remain undetected even after a large number of backtracks. We identify the problems inherent in a uniprocessor implementation of a test generation algorithm and propose a parallel test generation algorithm which tries to achieve a high fault coverage for HTD faults in a reasonable amount of time. A dynamic search space allocation strategy is used which ensures that the search spaces allocated to different processors are disjoint. The parallel test generation algorithm has been implemented on an Intel iPSC/2 hypercube. Results are presented using the ISCAS combinational benchmark circuits which conclusively prove that parallel processing of HTD faults does indeed result in high fault coverage which is otherwise not achievable by a uniprocessor algorithm in limited CPU time. The parallel algorithm exhibits superlinear speedups in some cases due to search anomalies.