
Showing papers on "Fault coverage published in 1996"


Journal ArticleDOI
TL;DR: The combinatorial design method substantially reduces testing costs, and in several experiments the method demonstrated good code coverage and fault detection ability.
Abstract: The combinatorial design method substantially reduces testing costs. The authors describe an application in which the method reduced test plan development from one month to less than a week. In several experiments, the method demonstrated good code coverage and fault detection ability.
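
The covering-design idea behind this result can be sketched in a few lines: generate a small suite in which every pair of parameter values appears in at least one test. The greedy generator below is a hypothetical illustration (parameter names and values invented), not the authors' tool.

# Hypothetical sketch of pairwise (2-way) combinatorial test selection:
# greedily pick full test cases until every pair of parameter values is covered.
from itertools import combinations, product

def pairwise_suite(params):
    """params: dict name -> list of values. Returns a small list of test cases."""
    names = list(params)
    uncovered = {((a, va), (b, vb))
                 for a, b in combinations(names, 2)
                 for va in params[a] for vb in params[b]}
    suite = []
    while uncovered:
        # choose the candidate test covering the most uncovered pairs
        best = max(product(*params.values()),
                   key=lambda t: sum(1 for p in combinations(zip(names, t), 2)
                                     if p in uncovered))
        suite.append(dict(zip(names, best)))
        uncovered -= set(combinations(zip(names, best), 2))
    return suite

print(pairwise_suite({"os": ["linux", "win"], "db": ["pg", "my"], "net": ["tcp", "udp"]}))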

398 citations


Journal ArticleDOI
TL;DR: HOPE is an efficient parallel fault simulator for synchronous sequential circuits that employs the parallel version of the single fault propagation technique; it is based on the earlier fault simulator PROOFS, which employs several heuristics to efficiently drop faults and to avoid simulation of many inactive faults.
Abstract: HOPE is an efficient parallel fault simulator for synchronous sequential circuits that employs the parallel version of the single fault propagation technique. HOPE is based on an earlier fault simulator called PROOFS, which employs several heuristics to efficiently drop faults and to avoid simulation of many inactive faults. In this paper, we propose three new techniques that substantially speed up parallel fault simulation: (1) reduction of faults simulated in parallel through mapping nonstem faults to stem faults, (2) a new fault injection method called functional fault injection, and (3) a combination of a static fault ordering method and a dynamic fault ordering method. Based on our experiments, our fault simulator, HOPE, which incorporates the proposed techniques, is about 1.6 times faster than PROOFS for 16 benchmark circuits.
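
For intuition, word-parallel fault simulation packs one circuit copy into each bit of a machine word. The toy sketch below (hypothetical netlist format; HOPE's heuristics and functional fault injection are not reproduced) evaluates 31 faulty machines plus the good one per pass.

# Toy sketch of parallel (word-level) fault simulation: bit i of every signal
# word carries the value seen by faulty machine i (bit 0 is the fault-free one).
MASK_W = 32  # simulate up to 31 faults plus the good circuit per pass

def simulate(gates, inputs, faults):
    """gates: list of (out, op, ins) in topological order;
    faults: list of (signal, stuck_value)."""
    full = (1 << MASK_W) - 1
    val = {s: (full if v else 0) for s, v in inputs.items()}

    def inject(sig, word):
        for i, (fsig, stuck) in enumerate(faults, start=1):
            if fsig == sig:               # force bit i to the stuck-at value
                word = word | (1 << i) if stuck else word & ~(1 << i)
        return word & full

    for s in list(val):
        val[s] = inject(s, val[s])
    for out, op, ins in gates:
        a = val[ins[0]]
        w = a & val[ins[1]] if op == "AND" else \
            a | val[ins[1]] if op == "OR" else ~a
        val[out] = inject(out, w & full)
    return val

v = simulate([("n1", "AND", ["a", "b"]), ("o", "NOT", ["n1"])],
             {"a": 1, "b": 1}, [("a", 0), ("n1", 1)])
# a fault is detected at output o if its bit differs from the good bit 0
good = v["o"] & 1
print([i for i in range(1, 3) if ((v["o"] >> i) & 1) != good])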

301 citations


Journal ArticleDOI
TL;DR: A technique for power system fault location estimation is described which uses data from both ends of a transmission line, does not require the data to be synchronized, and can be easily applied for offline analysis.
Abstract: A technique for power system fault location estimation which uses data from both ends of a transmission line and which does not require the data to be synchronized is described. The technique fully utilizes the advantages of digital technology and numerical relaying which are available today and can easily be applied for offline analysis. This technique allows for accurate estimation of the fault location irrespective of the fault type, fault resistance, load currents, and source impedances. Use of two-terminal data allows the algorithm to eliminate previous assumptions in fault location estimation, thus increasing the accuracy of the estimate. The described scheme does not require real-time communications, only offline post-fault analysis. The paper also presents fault analysis techniques utilizing the additional communicated information.
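
The classical two-terminal relation underlying such methods can be written down directly. The sketch below assumes synchronized phasors; the paper's method additionally estimates the synchronization angle, which this simplified sketch omits, and all values are illustrative.

# V_S - m*Z*I_S = V_F = V_R - (1-m)*Z*I_R  =>  solve for per-unit distance m.
def fault_distance(Vs, Is, Vr, Ir, Z):
    m = (Vs - Vr + Z * Ir) / (Z * (Is + Ir))
    return m.real  # imaginary residual indicates measurement/model error

# hypothetical lumped-impedance line with a fault at 0.4 per unit
Z = 0.1 + 0.5j          # total line impedance (ohm)
Is, Ir = 10 - 2j, 6 - 1j
Vf = 1.0 + 0.2j
Vs = Vf + 0.4 * Z * Is
Vr = Vf + 0.6 * Z * Ir
print(fault_distance(Vs, Is, Vr, Ir, Z))  # ~0.4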

280 citations


Patent
18 Oct 1996
TL;DR: In this article, the authors propose a method and apparatus for correlating faults in a networking system, where a database of fault rules is maintained, along with associated probable causes and possible solutions, for determining the occurrence of faults defined by the fault rules.
Abstract: A method and apparatus for correlating faults in a networking system. A database of fault rules is maintained, along with associated probable causes and possible solutions, for determining the occurrence of faults defined by the fault rules. The fault rules include a fault identifier, an occurrence threshold specifying a minimum number of occurrences of fault events in the networking system in order to identify the fault, and a time threshold in which the occurrences of the fault events must occur in order to correlate the fault. Occurrences of fault events in the networking system are detected and correlated by determining matched fault rules which match the fault events and generating a fault report upon determining that the number of occurrences for the matched fault rules within the time threshold is greater than or equal to the occurrence threshold for the matched fault rules.
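
A minimal sketch of this correlation loop, with hypothetical rule fields following the abstract (occurrence threshold plus sliding time window):

from collections import defaultdict

RULES = {"LINK_DOWN": {"occurrences": 3, "window_s": 60,
                       "cause": "flapping interface", "fix": "check cabling"}}

events = defaultdict(list)  # fault id -> timestamps of matched events

def on_event(fault_id, ts):
    rule = RULES.get(fault_id)
    if rule is None:
        return None
    hits = events[fault_id]
    hits.append(ts)
    # keep only occurrences inside the sliding time window
    hits[:] = [t for t in hits if ts - t <= rule["window_s"]]
    if len(hits) >= rule["occurrences"]:
        hits.clear()  # report once, then restart correlation
        return {"fault": fault_id, "cause": rule["cause"], "fix": rule["fix"]}
    return None

for t in (0, 10, 90, 100, 105):
    report = on_event("LINK_DOWN", t)
    if report:
        print(t, report)   # fires at t=105: three events within 60 s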

256 citations


Proceedings ArticleDOI
10 Nov 1996
TL;DR: A scan-based BIST scheme is presented which guarantees complete fault coverage with very low hardware overhead, and it is shown that the output of an LFSR which feeds a scan path has to be modified only at a few bits in order to transform the random patterns into a complete test set.
Abstract: A scan-based BIST scheme is presented which guarantees complete fault coverage with very low hardware overhead. A probabilistic analysis shows that the output of an LFSR which feeds a scan path has to be modified only at a few bits in order to transform the random patterns into a complete test set. These modifications may be implemented by a bit-flipping function which has the LFSR-state as an input, and flips the value shifted into the scan path at certain times. A procedure is described for synthesizing the additional bit-flipping circuitry, and the experimental results indicate that this mixed-mode BIST scheme requires less hardware for complete fault coverage than all the other scan-based BIST approaches published so far.
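
The mechanism can be illustrated in a few lines: an LFSR produces the scan stream, and a function of the LFSR state inverts the serial bit at a few chosen states. The polynomial and flip states below are invented for illustration.

def lfsr_stream(state, taps=(4, 3), width=4):
    while True:
        yield state, state & 1                      # serial output = LSB
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1
        state = (state >> 1) | (fb << (width - 1))

FLIP_STATES = {0b1011, 0b0110}   # states where the bit-flipping function fires

def scan_bits(seed, n):
    gen = lfsr_stream(seed)
    out = []
    for _ in range(n):
        st, bit = next(gen)
        out.append(bit ^ (1 if st in FLIP_STATES else 0))
    return out

print(scan_bits(0b1001, 15))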

235 citations


Proceedings ArticleDOI
20 Oct 1996
TL;DR: Experimental results indicate that complete fault coverage can be obtained with low hardware overhead.
Abstract: This paper presents a low-overhead scheme for the built-in self-test (BIST) of circuits with scan. Complete (100%) fault coverage is obtained without modifying the function logic and without degrading system performance (beyond using scan). Deterministic test cubes that detect the random-pattern-resistant faults are embedded in a pseudo-random sequence of bits generated by a linear feedback shift register (LFSR). This is accomplished by altering the pseudo-random sequence by adding logic at the LFSR's serial output to "fix" certain bits. A procedure for synthesizing the bit-fixing logic for embedding the test cubes is described. Experimental results indicate that complete fault coverage can be obtained with low hardware overhead. Also, the proposed approach permits the use of small LFSRs for generating the pseudo-random bit sequence. The faults that are not detected because of linear dependencies in the LFSR can be detected by embedding deterministic cubes at the expense of additional bit-fixing logic. Data is presented showing how much additional logic is required for different size LFSRs.
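
To see why only a few bits need fixing, one can slide a deterministic test cube along the LFSR's serial stream and count conflicting care bits; the bit-fixing logic must force exactly those positions. A hypothetical sketch with an illustrative stream and cube:

def conflicts(stream, cube):
    """cube: dict position -> required bit. Returns (best offset, #fixes)."""
    span = max(cube) + 1
    best = None
    for off in range(len(stream) - span + 1):
        fixes = sum(stream[off + p] != b for p, b in cube.items())
        if best is None or fixes < best[1]:
            best = (off, fixes)
    return best

stream = [1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0]
cube = {0: 1, 2: 1, 5: 0}   # care bits only; other positions are don't-cares
print(conflicts(stream, cube))   # best alignment needs the fewest fixed bits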

203 citations


Proceedings ArticleDOI
28 Apr 1996
TL;DR: A new approach for Field Programmable Gate Array (FPGA) testing is presented that exploits the reprogrammability of FPGAs to create Built-In Self-Test (BIST) logic only during off-line test, achieving BIST without any area overhead or performance penalties to the system function implemented by the FPGA.
Abstract: We present a new approach for Field Programmable Gate Array (FPGA) testing that exploits the reprogrammability of FPGAs to create Built-In Self-Test (BIST) logic only during off-line test. As a result, BIST is achieved without any area overhead or performance penalties to the system function implemented by the FPGA. Our approach is applicable to all levels of testing, achieves maximal fault coverage, and all tests are applied at-speed. We describe the BIST architecture used to test all the programmable logic blocks in an FPGA and the configurations required to implement our approach using a commercial FPGA. We also discuss implementation problems caused by CAD tool limitations and limited architectural resources, and we describe techniques which overcome these limitations.

167 citations


Proceedings ArticleDOI
20 Oct 1996
TL;DR: Experimental results demonstrate that complete or near-complete stuck-at fault coverage can be achieved by the proposed technique with the insertion of a few test points and a minimum number of phases.
Abstract: This paper presents a novel test point insertion technique which, unlike the previous ones, is based on a constructive methodology. A divide and conquer approach is used to partition the entire test into multiple phases. In each phase a group of test points targeting a specific set of faults is selected. Control points within a particular phase are enabled by fixed values, resulting in a simple and natural sharing of the logic driving them. Experimental results demonstrate that complete or near-complete stuck-at fault coverage can be achieved by the proposed technique with the insertion of a few test points and a minimum number of phases.

159 citations


Journal ArticleDOI
TL;DR: This paper presents an efficient algorithm for identifying f-redundant path delay faults and a sufficient condition for functional redundancy, showing that a significant percentage of path delay faults are f-redundant for ISCAS'85 benchmark circuits.
Abstract: Recently published results have shown that, for many circuits, only a small percentage of path delay faults is robust testable. Among the robust untestable faults, a significant percentage is not nonrobust testable either. In this paper, we take a closer look at the properties of these nonrobust untestable faults with the goal of determining whether and how these faults should be tested. We define a path delay fault to be functional redundant (f-redundant) if, regardless of the delays at all other signals, the circuit performance will not be determined by the path. These paths are false paths regardless of the delays of all signals; therefore, they cannot and need not be tested. We present a sufficient condition for functional redundancy. We show that nonrobust untestable faults are not necessarily f-redundant. For those nonrobust untestable but functional irredundant (f-irredundant) faults, the corresponding path may become a true path, and thus may determine the circuit performance under the faulty condition. We present an efficient algorithm for identifying f-redundant path delay faults. Results show that a significant percentage of path delay faults are f-redundant for ISCAS'85 benchmark circuits. Identification of f-redundant faults has two important applications: 1) it provides a more realistic fault coverage measure (the number of detected faults divided by the total number of f-irredundant faults); 2) for circuits with a large number of paths, where testing only a subset of paths is common practice, the path selection process can be guided to avoid selecting f-redundant paths. To illustrate this application, we present an algorithm for selecting a set of f-irredundant path delay faults that includes at least one of the longest f-irredundant paths for each signal in the circuit.
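
The realistic coverage measure of application 1), restating the abstract's definition in LaTeX:

\[
  \mathrm{FC} \;=\; \frac{\#\,\text{detected path delay faults}}
                         {\#\,\text{f-irredundant path delay faults}}
\]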

148 citations


Journal ArticleDOI
TL;DR: The authors describe their technique for injecting faults into a system's VHDL behavioral level model, and evaluate an embedded control system providing fail safe operation in the railway industry.
Abstract: Designers are realizing the advantages of performing fault injection early, using simulation to inject faults into a model of the design rather than the actual system. The authors describe their technique for injecting faults into a system's VHDL behavioral level model. To demonstrate the technique, they evaluate an embedded control system providing fail safe operation in the railway industry.

143 citations


Journal ArticleDOI
TL;DR: A gate-level transient fault simulation environment developed on the basis of realistic fault models is described; it can be used for any transient fault which can be modeled as a transient pulse of some width.
Abstract: Mixed analog and digital mode simulators have been available for accurate α-particle-induced transient fault simulation. However, they are not fast enough to simulate a large number of transient faults on a relatively large circuit in a reasonable amount of time. In this paper, we describe a gate-level transient fault simulation environment which has been developed based on realistic fault models. Although the environment was developed for α-particle-induced transient faults, the methodology can be used for any transient fault which can be modeled as a transient pulse of some width. The simulation environment uses a gate level timing fault simulator as well as a zero-delay parallel fault simulator. The timing fault simulator uses logic level models of the actual transient fault phenomenon and latch operation to accurately propagate the fault effects to the latch outputs, after which point the zero-delay parallel fault simulator is used to speed up the simulation without any loss in accuracy. The environment is demonstrated on a set of ISCAS-89 sequential benchmark circuits.
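
A toy model of the timing side of such an environment: a transient pulse is attenuated by each gate's inertial delay along the path and is latched only if still wide enough at the latch input. The attenuation model and numbers below are assumptions for illustration, not the paper's models.

def propagate(width, inertial_delays, setup_hold=0.2):
    for d in inertial_delays:       # path from struck node to the latch
        if width <= d:              # pulse narrower than inertial delay dies
            return 0.0
        width -= d                  # simple attenuation model (assumption)
    return width if width >= setup_hold else 0.0

# 0.9 ns pulse through gates with inertial delays 0.2, 0.3, 0.1 ns
w = propagate(0.9, [0.2, 0.3, 0.1])
print("latched" if w else "filtered", w)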

Proceedings ArticleDOI
28 Apr 1996
TL;DR: This paper presents an innovative method for inserting test points in the circuit-under-test to obtain complete fault coverage for a specified set of test patterns using a path tracing procedure.
Abstract: This paper presents an innovative method for inserting test points in the circuit-under-test to obtain complete fault coverage for a specified set of test patterns. Rather than using probabilistic techniques for test point placement, a path tracing procedure is used to place both control and observation points. Rather than adding extra scan elements to drive the control points, a few of the existing primary inputs to the circuit are ANDed together to form signals that drive the control points. By selecting which patterns the control point is activated for, the effectiveness of each control point is maximized. A comparison is made with the best previously published results for other test point insertion methods, and it is shown that the proposed method requires fewer test points and less overhead to achieve the same or better fault coverage.
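
The control-point construction can be sketched as a search for existing primary inputs whose AND is 1 exactly on the activating patterns; inputs and patterns below are hypothetical:

def and_gate_inputs(patterns, activate):
    """patterns: list of bit-tuples; activate: indices needing the point on."""
    n = len(patterns[0])
    # candidate PIs: 1 in every activating pattern
    cand = [i for i in range(n) if all(patterns[p][i] for p in activate)]
    # the AND must stay 0 for all remaining patterns
    ok = all(any(patterns[q][i] == 0 for i in cand)
             for q in range(len(patterns)) if q not in activate)
    return cand if ok and cand else None

pats = [(1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1)]
print(and_gate_inputs(pats, activate={3}))   # AND of PIs 0, 1, 2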

Proceedings ArticleDOI
01 Jun 1996
TL;DR: The ability to significantly reduce the length of the test sequences indicates that it may be possible to reduce test generation time if superfluous input vectors are not generated.
Abstract: We propose three static compaction techniques for test sequences of synchronous sequential circuits. We apply the proposed techniques to test sequences generated for benchmark circuits by various test generation procedures. The results show that the test sequences generated by all the test generation procedures considered can be significantly compacted. The compacted sequences thus have shorter test application times and smaller memory requirements. As a by-product, the fault coverage is sometimes increased as well. More importantly, the ability to significantly reduce the length of the test sequences indicates that it may be possible to reduce test generation time if superfluous input vectors are not generated.
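
One of the simplest static compaction styles, vector omission with re-fault-simulation, conveys the idea; the paper proposes three specific techniques, and this generic sketch is not necessarily one of them. The toy coverage function stands in for a real fault simulator.

def compact(sequence, coverage):
    base = coverage(sequence)
    seq = list(sequence)
    i = 0
    while i < len(seq):
        trial = seq[:i] + seq[i + 1:]
        if coverage(trial) >= base:   # coverage kept (or even improved)
            seq = trial               # drop the superfluous vector
        else:
            i += 1
    return seq

# toy "fault simulator": coverage = number of distinct vectors seen
cov = lambda s: len(set(s))
print(compact(["v1", "v1", "v2", "v3", "v2"], cov))  # duplicates removed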

Proceedings ArticleDOI
28 Apr 1996
TL;DR: An overview of the most important and commonly used fault models, including the industry's popular disturb fault model, is given, and a methodology to design tests for realistic linked faults is presented, resulting in the new tests March LR, March LRD and March LRDD.
Abstract: Many march tests have already been designed to cover faults of different fault models. The complexity of these tests arises when linked faults are taken into consideration. This paper gives an overview of the most important and commonly used fault models, including the industry's popular disturb fault model. The fault coverage of march tests is analysed in a novel way, i.e., in terms of their detection capabilities for simple faults and for linked faults, whereby the infinite class of linked faults is reduced to a set of realistic linked faults. Thereafter the paper presents a methodology to design tests for realistic linked faults, resulting in the new tests March LR, March LRD and March LRDD. These new tests are shown to be more efficient and to offer a higher fault coverage than comparable existing tests.
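
A march test is just a sequence of (address order, operation list) elements. The runner below executes the well-known March C- algorithm on a fault-free memory model; the March LR family above uses the same element structure.

MARCH_C_MINUS = [("up", ["w0"]), ("up", ["r0", "w1"]), ("up", ["r1", "w0"]),
                 ("down", ["r0", "w1"]), ("down", ["r1", "w0"]),
                 ("down", ["r0"])]

def run_march(mem, elements):
    for order, ops in elements:
        addrs = range(len(mem)) if order == "up" else reversed(range(len(mem)))
        for a in addrs:
            for op in ops:
                kind, bit = op[0], int(op[1])
                if kind == "w":
                    mem[a] = bit
                elif mem[a] != bit:          # read miscompare -> fault
                    return f"fault at address {a}, expected {bit}"
    return "pass"

print(run_march([0] * 8, MARCH_C_MINUS))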

Proceedings ArticleDOI
28 Apr 1996
TL;DR: Various classes of segment delay fault tests are defined that offer a trade-off between fault coverage and quality, and an efficient algorithm is presented to compute the number of segments of any possible length in a circuit.
Abstract: We propose a segment delay fault model to represent any general delay defect ranging from a spot defect to a distributed defect. The segment length, L, is a parameter that can be chosen based on available statistics about the types of manufacturing defects. Once L is chosen, the fault list contains all segments of length L and paths whose entire lengths are less than L. Both rising and falling transitions at the origin of segments are considered. Choosing segments of a small length can prevent an explosion of the number of faults considered. At the same time, a defect over a segment may be large enough to affect any path passing through it. We present an efficient algorithm to compute the number of segments of any possible length in a circuit. We define various classes of segment delay fault tests (robust, transition, and non-robust) that offer a trade-off between fault coverage and quality.
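
Counting segments of a given length reduces to dynamic programming over the circuit DAG: seg[v][k] is the number of partial paths of exactly k lines ending at node v. The netlist below is hypothetical, lengths are counted in edges, and the paper's algorithm may differ in details.

from collections import defaultdict

def count_segments(edges, nodes_topo, L):
    preds = defaultdict(list)
    for u, v in edges:
        preds[v].append(u)
    seg = {v: defaultdict(int) for v in nodes_topo}
    for v in nodes_topo:
        seg[v][0] = 1                      # every node starts a segment
        for u in preds[v]:
            for k, c in list(seg[u].items()):
                seg[v][k + 1] += c         # extend predecessors' segments
    return sum(seg[v][L] for v in nodes_topo)

edges = [("a", "c"), ("b", "c"), ("c", "d"), ("c", "e")]
print(count_segments(edges, ["a", "b", "c", "d", "e"], 2))  # 4: a-c-d, a-c-e, b-c-d, b-c-e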

Journal ArticleDOI
TL;DR: A prototype system named GATTO is used to assess the effectiveness of the approach in terms of result quality and CPU time requirements and the results are the best ones reported in the literature for most of the largest standard benchmark circuits.
Abstract: This paper deals with automated test pattern generation for large synchronous sequential circuits and describes an approach based on genetic algorithms. A prototype system named GATTO is used to assess the effectiveness of the approach in terms of result quality and CPU time requirements. An account is also given of a distributed version of the same algorithm, named GATTO*. Being based on the PVM library, it runs on any network of workstations and is able to either reduce the required time or improve the result quality with respect to the monoprocessor version. In the latter case, in terms of fault coverage, the results are the best ones reported in the literature for most of the largest standard benchmark circuits. The flexibility of GATTO enables users to easily trade off fault coverage and CPU time to suit their needs.

Book ChapterDOI
11 Sep 1996
TL;DR: A method for deriving tests with respect to the reduction relation with full fault coverage for deterministic implementations is proposed based on certain properties of the product of specification and implementation machines.
Abstract: In this paper, conformance testing of protocols specified as nondeterministic finite state machines is considered. Protocol implementations are assumed to be deterministic. In this testing scenario, the conformance relation becomes a preorder, the so-called reduction relation between FSMs. The reduction relation requires that an implementation machine produce a (sub)set of the output sequences that can be produced by its specification machine in response to every input sequence. A method is proposed for deriving tests with respect to the reduction relation with full fault coverage for deterministic implementations, based on certain properties of the product of the specification and implementation machines.
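
A simulation-style check over the product of the two machines illustrates the construction. Note this pairwise check is sufficient but, for nondeterministic specifications, a complete trace-inclusion check needs a subset construction; the machines below are invented and assumed completely specified.

def is_reduction(impl, spec, s0_impl, s0_spec, inputs):
    """impl: deterministic dict (state, inp) -> (out, next);
    spec: dict (state, inp) -> set of (out, next) pairs."""
    seen, stack = set(), [(s0_impl, s0_spec)]
    while stack:
        si, ss = stack.pop()
        if (si, ss) in seen:
            continue
        seen.add((si, ss))
        for x in inputs:
            out, ni = impl[(si, x)]
            # spec successors producing the same output on input x
            matches = {n for (o, n) in spec[(ss, x)] if o == out}
            if not matches:
                return False
            stack.extend((ni, ns) for ns in matches)
    return True

spec = {(0, "a"): {("0", 0), ("1", 1)}, (1, "a"): {("1", 1)}}
impl = {(0, "a"): ("1", 1), (1, "a"): ("1", 1)}
print(is_reduction(impl, spec, 0, 0, ["a"]))  # True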

Journal ArticleDOI
TL;DR: The theoretical analysis shows that this transparent BIST technique does not decrease the fault coverage for modeled faults, behaves better for unmodeled ones, and does not increase aliasing with respect to the initial test algorithm.
Abstract: I present the theoretical aspects of a technique called transparent BIST for RAMs. This technique applies to any RAM test algorithm and transforms it into a transparent one. The advantage of transparent test algorithms is that testing preserves the contents of the RAM. The transparent test algorithm is then used to implement a transparent BIST. This kind of BIST is very suitable for periodic testing of RAMs. The theoretical analysis shows that this transparent BIST technique does not decrease the fault coverage for modeled faults, behaves better for unmodeled ones, and does not increase aliasing with respect to the initial test algorithm. Furthermore, transparent BIST involves only slightly higher area overhead with respect to standard BIST. Thus, transparent BIST becomes more attractive than standard BIST since it can be used for both fabrication testing and periodic testing.
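
The transparency idea in miniature: every write stores a function (here the complement) of the initial cell content, so an even number of complementing sweeps restores the memory, and a signature over the read values replaces fixed expected data. This is a minimal illustration, not the paper's exact transformation.

def transparent_pass(mem):
    sig = 0
    for a in range(len(mem)):        # read c, write ~c  (first sweep)
        sig = (sig * 31 + mem[a]) & 0xFFFF
        mem[a] ^= 1
    for a in range(len(mem)):        # read ~c, write c  (restoring sweep)
        sig = (sig * 31 + mem[a]) & 0xFFFF
        mem[a] ^= 1
    return sig                       # compare against a predicted signature

m = [1, 0, 1, 1]
print(transparent_pass(m), m)        # contents restored: [1, 0, 1, 1]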

Journal ArticleDOI
TL;DR: Basic ideas underlying the techniques for fault coverage analysis and assurance mainly developed in the context of protocol conformance testing based on finite state models are analyzed.
Abstract: Testing is a trade-off between increased confidence in the correctness of the implementation under test and constraints on the amount of time and effort that can be spent in testing. Therefore, the coverage, or adequacy of the test suite, becomes a very important issue. In this paper, we analyze basic ideas underlying the techniques for fault coverage analysis and assurance mainly developed in the context of protocol conformance testing based on finite state models. Special attention is paid to parameters which determine the testability of a given specification and influence the length of a test suite which guarantees complete fault coverage. We also point out certain issues which need further study.

Journal ArticleDOI
TL;DR: A simple test generation technique is described which derives sinusoidal test waveforms that detect several fault classes and shows that certain stimuli will provoke variations in delay, rise time, and overshoot that indicate faulty behavior.
Abstract: This paper describes a simple test generation technique which derives sinusoidal test waveforms that detect several fault classes. In addition, the authors show that certain stimuli will provoke variations in delay, rise time, and overshoot that indicate faulty behavior. Simple algorithms compute the different parameters.

Book ChapterDOI
01 Jan 1996
TL;DR: This paper elaborates various fault models for testing in context in a composite system represented as two communicating FSMs: a component FSM and a context machine that models the remaining part of the system, which is assumed to be correctly implemented.
Abstract: We focus in this paper on the problem of modeling faults located in a given component embedded within a composite system. The system is represented as two communicating FSMs, a component FSM and a context machine that models the remaining part of the system which is assumed to be correctly implemented. We elaborate various fault models for testing in context. The existing FSM-based methods are assessed for their applicability to derive tests complete w.r.t. the fault models appropriate for testing in context.

Proceedings ArticleDOI
R.V. White1, F.M. Miles1
03 Mar 1996
TL;DR: In this article, the authors present a tutorial covering redundancy, fault isolation, fault detection and annunciation, and on-line repair principles for distributed power systems, highlighting special considerations for high availability and fault tolerance.
Abstract: The demand for continuously available electronic systems increases every day. Transaction processing, communications systems, and critical processes all require nonstop, fault tolerant operation. Creating a fault tolerant or highly available system can be achieved by following four basic principles: redundancy, fault isolation, fault detection and annunciation, and on-line repair. This paper is a tutorial that presents those four principles after reviewing some fundamentals of reliability and availability. It concludes with an expanded discussion on implementing redundancy. Special considerations for high availability and fault tolerance in distributed power systems are highlighted.
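
The classical availability relations such tutorials review (standard formulas, not quoted from the paper) are, in LaTeX: the steady-state availability of one unit, and of n redundant units of which any one suffices.

\[
  A \;=\; \frac{\mathrm{MTBF}}{\mathrm{MTBF} + \mathrm{MTTR}},
  \qquad
  A_{\text{redundant}} \;=\; 1 - (1 - A)^{n}
\]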

Patent
11 Jan 1996
TL;DR: In this article, a software application fault identification method and system is presented, which includes software and accompanying computer hardware platforms for detecting software application faults, determining a severity of the faults, and identifying a source of the fault.
Abstract: A software application fault identification method and system. The method and system include software and accompanying computer hardware platforms for detecting a software application fault, determining a severity of the fault, and identifying a source of the fault. The method and system further include software and accompanying computer hardware platforms for generating an alarm message signal based upon the detected fault, the severity determined and the identified source, as well as transmitting the alarm message signal to a remote monitoring station.

Proceedings ArticleDOI
30 Oct 1996
TL;DR: Using the prototype tool, Visual C-Patrol (VCP), it is shown that test coverage can be increased through an "assertion violation" technique for injecting software faults during execution, substantially increasing test branch coverage in four software systems studied.
Abstract: During testing, it is nearly impossible to run all statements or branches of a program. It is especially difficult to test the code used to respond to exceptional conditions. This untested code, often the error recovery code, tends to be an error-prone part of a system. We show that test coverage can be increased through an "assertion violation" technique for injecting software faults during execution. Using our prototype tool, Visual C-Patrol (VCP), we were able to substantially increase test branch coverage in four software systems studied.
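
The assertion-violation idea, transplanted to a tiny Python analogue (VCP itself instruments C programs): force an error at a chosen point so the recovery branch executes. The resource opener below is hypothetical.

from contextlib import contextmanager

def read_config(open_fn):
    try:
        return open_fn("config.ini")
    except OSError:
        return "defaults"            # the recovery branch we want covered

@contextmanager
def inject_fault():
    # a callable that always raises, standing in for the instrumented call
    yield lambda path: (_ for _ in ()).throw(OSError(f"injected: {path}"))

with inject_fault() as faulty_open:
    assert read_config(faulty_open) == "defaults"   # recovery path exercised
print("recovery branch covered")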

Journal ArticleDOI
Mogens Blanke1
TL;DR: The method is based on an analysis of component failure modes and their effects. It provides decision tables for fault handling, presents the propagation of component faults, and shows where fault handling can be applied to stop the migration of a fault.

Proceedings ArticleDOI
28 Apr 1996
TL;DR: Application of H-SCAN to RT-level designs and fault simulation using the test patterns generated by H-SCAN shows fault coverage comparable to full-scan testing, with significant reduction in test area overhead and test application time when compared to a traditional gate-level full-scan implementation.
Abstract: This paper presents H-SCAN, a practical testing methodology that can be easily applied to a high-level design specification. H-SCAN allows the use of combinational test patterns without the high area and test application time overheads associated with full-scan testing. Connectivities between registers existing in an RT-level design are exploited to reduce the area overhead associated with implementing a scan scheme. Test application time is significantly reduced by using the parallelism inherent in the design, and eliminating the pin constraint of parallel scan schemes by analyzing the test responses on-chip using existing comparators. The proposed method also includes generating appropriate sequential test vectors from combinational test vectors generated by a combinational ATPG program. Application of H-SCAN to RT-level designs and fault simulation using the test patterns generated by H-SCAN shows fault coverage comparable to full-scan testing, with significant reduction in test area overhead and test application time when compared to a traditional gate-level full-scan implementation.

Journal ArticleDOI
TL;DR: In this article, the authors study the testability of analog circuits in the frequency domain by introducing the analog fault observability concept, which combines a structural testing methodology with functionality verification to increase the test effectiveness and consequently the design manufacturability and reliability.
Abstract: We study the testability of analog circuits in the frequency domain by introducing the analog fault observability concept. The proposed algorithm indicates the set of adequate test frequencies and test nodes to increase fault observability. This approach combines a structural testing methodology with functionality verification to increase the test effectiveness and consequently the design manufacturability and reliability. We analyze the cases of single, double, and multiple faults. Concepts such as fault masking, fault dominance, fault equivalence, and non-observable faults in analog circuits are defined and then used to evaluate testability. The theoretical aspect is based on the sensitivity approach.
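
The quantity such sensitivity-based approaches build on is the classical normalized sensitivity of the network function T to a component value p (standard definition, in LaTeX):

\[
  S_{p}^{T}(\omega) \;=\; \frac{\partial T(j\omega)}{\partial p}\cdot\frac{p}{T(j\omega)}
\]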

Journal ArticleDOI
TL;DR: An optimization search method based on genetic algorithms for finding combinational PSCs that have better fault coverage and/or lower area costs than the commonly-used parity function is described.
Abstract: We address test data compaction for built-in self-test (BIST). The thrust of the work focuses on BIST space compaction, a process increasingly required when a large number of internal circuit nodes need to be monitored during test but where area limitations preclude the association of observation latches for all the monitored nodes. We introduce a general class of space compactors denoted as programmable space compactors (PSCs). Programmability enables highly-effective space compactors to be designed for circuits under test (CUT) subjected to a specific set of test patterns. Circuit-specific information such as the fault-free and expected faulty behavior of a circuit are used to choose PSCs that have better fault coverage and/or lower area costs than the commonly-used parity function. Finding optimal PSCs is a difficult task since the space of possible PSC functions is extremely large and grows exponentially with the number of lines (nodes) to be compacted. We describe an optimization search method based on genetic algorithms for finding combinational PSCs. The factors used to assess the effectiveness of a PSC are its fault coverage and implementation area.

Proceedings ArticleDOI
10 Nov 1996
TL;DR: This testing procedure and design for testability (DFT) technique is general enough to handle RTL control flow intensive circuits like protocol handlers as well as data flow intensive circuits like digital filters; it makes the combined controller-data path highly testable and does not require any external behavioral information.
Abstract: In this paper, we present a technique for extracting functional (control/data flow) information from register transfer level (RTL) controller/data path circuits and illustrate its use in design for hierarchical testability of these circuits. This testing procedure and design for testability (DFT) technique is general enough to handle RTL control flow intensive circuits like protocol handlers as well as data flow intensive circuits like digital filters. It makes the combined controller-data path highly testable and does not require any external behavioral information. This scheme has the advantages of low area/delay/power overheads (average of 3.2%, 0.9% and 4.1%, respectively, for benchmarks), high fault coverage (over 99% for most cases), very low test generation times (because it is independent of bit-width), and the advantage of at-speed testing. Experiments show a 2-to-4 (1-to-3) orders of magnitude test generation time advantage over an efficient gate-level sequential test generator (combinational test generator that assumes full scan).

Proceedings ArticleDOI
18 Aug 1996
TL;DR: An FPGA-based hardware emulation system is shown to boost the speed of fault simulation for sequential circuits and a parallel fault emulation approach is proposed, in which faults that are not activated or with short propagation distance are screened off before fault emulation, and non-stem faults are collapsed into their equivalent stem faults.
Abstract: An FPGA-based hardware emulation system is shown to boost the speed of fault simulation for sequential circuits. The circuit is downloaded into the emulation system which emulates the faulty circuit's behavior by synthesizing from the good circuit and the given fault list in a novel way. Fault injection is made easy by shifting the content of a fault injection chain, with which we get rid of the highly time-consuming bit-stream regeneration process. Experimental results for ISCAS-89 benchmark circuits show that the fault emulator is about twenty times faster than HOPE (parallel fault simulator). A parallel fault emulation approach is also proposed, in which faults that are not activated or with short propagation distance are screened off before fault emulation, and non-stem faults are collapsed into their equivalent stem faults, further reducing the number of faults actually emulated.