
Showing papers on "Fault coverage" published in 1998


Book
01 Jan 1998
TL;DR: This book provides a comprehensive treatment of delay fault testing, covering test application schemes for delay defects, delay fault models, path delay fault classification, delay fault simulation, test generation for path delay faults, and design and synthesis for delay fault testability.
Abstract: Foreword. Preface. 1. Introduction. 2. Test Application Schemes for Testing Delay Defects. 3. Delay Fault Models. 4. Case Studies on Delay Testing. 5. Path Delay Fault Classification. 6. Delay Fault Simulation. 7. Test Generation for Path Delay Faults. 8. Design for Delay Fault Testability. 9. Synthesis for Delay Fault Testability. 10. Conclusions and Future Work. References. Index.

255 citations


Journal ArticleDOI
TL;DR: As the size of a test set is reduced, while the code coverage is kept constant, there is little or no reduction in the fault detection effectiveness of the new test set so generated.
Abstract: Given a test set T to test a program P, there are at least two attributes of T that determine its fault detection effectiveness. One attribute is the size of T measured as the number of test cases in T. Another attribute is the code coverage measured when P is executed on all elements of T. The fault detection effectiveness of T is the ratio of the number of faults guaranteed to result in program failure when P is executed on T to the total number of faults present in P. An empirical study was conducted to determine the relative importance of the size and coverage attributes in affecting the fault detection effectiveness of a randomly selected test set for some program P. Results from this study indicate that as the size of a test set is reduced, while the code coverage is kept constant, there is little or no reduction in the fault detection effectiveness of the new test set so generated. For the study reported, of the two attributes mentioned above, the code coverage attribute of a test set is more important than its size attribute. © 1998 John Wiley & Sons, Ltd.

211 citations
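
To make the two attributes concrete, here is a minimal sketch in Python of the study's reduction experiment: test cases are modeled as sets of coverage items and detected faults (all data is invented for illustration), a test set is shrunk while holding code coverage constant, and the effectiveness ratio defined in the abstract is recomputed.

```python
import random

random.seed(7)
ITEMS = range(20)       # coverage items, e.g. branches of P (illustrative)
FAULTS = range(15)      # faults present in P (illustrative)

# Hypothetical data: each test case exercises some items and exposes some faults.
coverage = {t: set(random.sample(ITEMS, 6)) for t in range(30)}
detects = {t: set(random.sample(FAULTS, 2)) for t in range(30)}

def effectiveness(tests):
    """Ratio of faults guaranteed to cause failure on the test set
    to the total number of faults present in P."""
    exposed = set().union(*(detects[t] for t in tests))
    return len(exposed) / len(FAULTS)

def reduce_keeping_coverage(tests):
    """Drop test cases whose coverage is subsumed by the remaining ones,
    so code coverage stays constant while the size shrinks."""
    kept = list(tests)
    for t in list(kept):
        rest = set().union(*(coverage[u] for u in kept if u != t))
        if coverage[t] <= rest:
            kept.remove(t)      # t adds no new coverage; remove it
    return kept

full = list(range(30))
small = reduce_keeping_coverage(full)
print(len(full), round(effectiveness(full), 2))
print(len(small), round(effectiveness(small), 2))  # size drops; compare ratios
```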


Journal ArticleDOI
TL;DR: It has been shown, for a fanout-free circuit under test, that the transition test generation cost for a fault is the minimum number of transitions required to test a given stuck-at fault.
Abstract: An automatic test pattern generation (ATPG) algorithm is proposed that reduces switching activity (between successive test vectors) during test application. The main objective is to permit safe and inexpensive testing of low power circuits and bare die that might otherwise require expensive heat removal equipment for testing at high speeds. Three new cost functions, namely transition controllability, observability, and test generation costs, have been defined. It has been shown, for a fanout-free circuit under test, that the transition test generation cost for a fault is the minimum number of transitions required to test a given stuck-at fault. The proposed algorithm has been implemented and the generated tests are compared with those generated by a standard PODEM implementation for the larger ISCAS85 benchmark circuits. The results clearly demonstrate that the tests generated using the proposed ATPG can decrease the average number of (weighted) transitions between successive test vectors by a factor of 2 to 23.

166 citations
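
A small sketch of the quantity this ATPG minimizes: the (weighted) number of transitions between successive test vectors. The per-input weights are a hypothetical stand-in for the circuit-derived cost functions the paper defines.

```python
def transitions(v1, v2, weights=None):
    """Count (optionally weighted) bit transitions between successive test
    vectors given as equal-length bit strings, e.g. '0110'.  The weights
    approximate the downstream switching each input transition causes;
    a real tool would derive them from circuit structure."""
    if weights is None:
        weights = [1] * len(v1)
    return sum(w for a, b, w in zip(v1, v2, weights) if a != b)

def total_test_activity(vectors, weights=None):
    """Total switching activity accumulated over a test sequence."""
    return sum(transitions(a, b, weights) for a, b in zip(vectors, vectors[1:]))

tests = ["0000", "0001", "1111", "1110"]
print(total_test_activity(tests))  # 1 + 3 + 1 = 5 unweighted transitions
```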


Proceedings ArticleDOI
01 Nov 1998
TL;DR: An empirical evaluation of the fault-detecting ability of two white-box software testing techniques: decision coverage (branch testing) and the all-uses data flow testing criterion supports the belief that these testing techniques can be more effective than random testing.
Abstract: This paper reports on an empirical evaluation of the fault-detecting ability of two white-box software testing techniques: decision coverage (branch testing) and the all-uses data flow testing criterion. Each subject program was tested using a very large number of randomly generated test sets. For each test set, the extent to which it satisfied the given testing criterion was measured and it was determined whether or not the test set detected a program fault. These data were used to explore the relationship between the coverage achieved by test sets and the likelihood that they will detect a fault. Previous experiments of this nature have used relatively small subject programs and/or have used programs with seeded faults. In contrast, the subjects used here were eight versions of an antenna configuration program written for the European Space Agency, each consisting of over 10,000 lines of C code. For each of the subject programs studied, the likelihood of detecting a fault increased sharply as very high coverage levels were reached. Thus, this data supports the belief that these testing techniques can be more effective than random testing. However, the magnitudes of the increases were rather inconsistent and it was difficult to achieve high coverage levels.

164 citations


Proceedings ArticleDOI
18 Oct 1998
TL;DR: This work presents a versatile automatic functional test generation methodology for microprocessors that can be applied to both design validation and manufacturing test, especially in high speed "native" mode.
Abstract: New methodologies based on functional testing and built-in self-test can narrow the gap between necessary solutions and existing techniques for processor validation and testing. We present a versatile automatic functional test generation methodology for microprocessors. The generated assembly instruction sequences can be applied to both design validation and manufacturing test, especially in high speed "native" mode. All the functional capabilities of complex processors can be exercised, leading to high quality validation sequences and manufacturing tests with high fault coverage. The tests can also be applied in a built-in self-test fashion. Experimental results on two microprocessors show that this method is very effective in generating high quality manufacturing tests as well as in functional design validation.

163 citations


Journal ArticleDOI
TL;DR: An efficient scheme to compress and decompress in parallel deterministic test patterns for circuits with multiple scan chains is presented, achieving complete fault coverage for any fault model for which test cubes are obtainable.
Abstract: The paper presents an efficient scheme to compress and decompress in parallel deterministic test patterns for circuits with multiple scan chains. It employs a boundary-scan-based environment for high quality testing with flexible trade-offs between test data volume and test application time, while achieving complete fault coverage for any fault model for which test cubes are obtainable. It also reduces bandwidth requirements, as all test cube transfers involve compressed data. The test patterns are generated by the reseeding of a two-dimensional hardware structure which comprises a linear feedback shift register (LFSR), a network of exclusive-or (XOR) gates used to scramble the bits of test vectors, and extra feedbacks which allow including internal scan flip-flops into the decompressor structure to minimize the area overhead. The test data decompressor operates in two modes: pseudorandom and deterministic. In the first mode, the pseudorandom pattern generator (PRPG) is used purely as a generator of test vectors. In the latter case, variable-length seeds are serially scanned through the boundary-scan interface into the PRPG and parts of internal scan chains and, subsequently, a decompression is performed in parallel by means of the PRPG and selected scan flip-flops interconnected to form the decompression device. Extensive experiments with the largest ISCAS'89 benchmarks show that the proposed technique greatly reduces the amount of test data in a cost-effective manner.

130 citations
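
A minimal sketch of the reseeding idea underlying such decompressors: a short seed loaded into an LFSR expands into a much longer scan pattern, so the tester ships only seeds. The XOR scrambling network, scan-chain feedbacks, and dual pseudorandom/deterministic modes of the paper are omitted; the polynomial taps and sizes below are illustrative. In practice a seed is found by solving linear equations over GF(2) so that the expanded stream matches the specified bits of a test cube.

```python
def lfsr_expand(seed_bits, taps, length):
    """Expand an LFSR seed into `length` output bits (Fibonacci LFSR).

    seed_bits: list of 0/1 initial register contents (the stored seed).
    taps: state positions XORed to form the feedback bit (illustrative
    polynomial, not a specific design from the paper)."""
    state = list(seed_bits)
    out = []
    for _ in range(length):
        out.append(state[-1])              # shift out the last bit
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]          # feedback enters at the front
    return out

# A 16-bit seed decompresses into a 64-bit scan pattern: the tester stores
# only the seed; the on-chip LFSR reconstructs the full vector.
seed = [1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1]
pattern = lfsr_expand(seed, taps=[0, 2, 3, 5], length=64)
print("".join(map(str, pattern)))
```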


Proceedings ArticleDOI
31 May 1998
TL;DR: The proposed approach is based on a re-ordering of the vectors in the test sequence to minimize the switching activity of the circuit during test application and guarantees a decrease in power consumption and heat dissipation.
Abstract: This paper considers the problem of testing VLSI integrated circuits without exceeding their power ratings during test. The proposed approach is based on a re-ordering of the vectors in the test sequence to minimize the switching activity of the circuit during test application. Our technique uses the Hamming distance between test vectors and guarantees a decrease in power consumption and heat dissipation without modifying the initial fault coverage. Results of experiments are presented at the end of this paper and show a reduction of the circuit activity ranging from 8.2% to 54.1% during test application.

120 citations
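
A sketch of the reordering idea: because the vector set itself is unchanged, fault coverage is preserved, while a greedy nearest-neighbour order reduces the total Hamming distance between successive vectors. The paper's exact ordering heuristic may differ; this is the simplest instance.

```python
def hamming(a, b):
    """Hamming distance between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def reorder(vectors):
    """Greedy nearest-neighbour ordering: repeatedly append the remaining
    vector closest (in Hamming distance) to the last one chosen."""
    remaining = list(vectors)
    order = [remaining.pop(0)]
    while remaining:
        nxt = min(remaining, key=lambda v: hamming(order[-1], v))
        remaining.remove(nxt)
        order.append(nxt)
    return order

tests = ["0000", "1111", "0001", "1110", "0011"]
before = sum(hamming(a, b) for a, b in zip(tests, tests[1:]))
ordered = reorder(tests)
after = sum(hamming(a, b) for a, b in zip(ordered, ordered[1:]))
print(before, after)  # same vectors (same fault coverage), fewer transitions
```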


Patent
31 Dec 1998
TL;DR: In this paper, the authors present a design methodology for generating a test die for a product die, including the step of concurrently designing test circuitry and product circuitry in a unified design.
Abstract: One embodiment of the present invention concerns a design methodology for generating a test die for a product die, including the step of concurrently designing test circuitry and product circuitry in a unified design. The test circuitry can be designed to provide a high degree of fault coverage for the corresponding product circuitry, generally without regard to the amount of silicon area that will be required by the test circuitry. The design methodology then partitions the unified design into the test die and the product die. The test die includes the test circuitry and the product die includes the product circuitry. The product and test die may then be fabricated on separate semiconductor wafers. By partitioning the product circuitry and test circuitry into separate die, embedded test circuitry can be either eliminated or minimized on the product die. This will tend to decrease the size of the product die and decrease the cost of manufacturing the product die while maintaining a high degree of test coverage of the product circuits within the product die.

112 citations


Proceedings ArticleDOI
23 Jun 1998
TL;DR: This paper compares two fault injection techniques, scan chain implemented fault injection (SCIFI) and fault injection in a VHDL software simulation model of a system, and presents a newly developed tool called FIMBUL (Fault Injection and Monitoring using BUilt in Logic).
Abstract: This paper compares two fault injection techniques: scan chain implemented fault injection (SCIFI), i.e. fault injection in a physical system using built-in test logic, and fault injection in a VHDL software simulation model of a system. The fault injections were used to evaluate the error detection mechanisms included in the Thor RISC microprocessor, developed by Saab Ericsson Space AB. The Thor microprocessor uses several advanced error detection mechanisms including control flow checking, stack range checking and variable constraint checking. A newly developed tool called FIMBUL (Fault Injection and Monitoring using BUilt in Logic), which uses the Test Access Port (TAP) of the Thor CPU to do fault injection, is presented. The simulations were carried out using the MEFISTO-C tool and a highly detailed VHDL model of the Thor processor. The results show that the larger fault set available in the simulations caused only minor differences in the error detection distribution compared to SCIFI, and that the overall error coverage was lower using SCIFI (90-94% vs. 94-96% using simulation-based fault injection).

106 citations


Patent
11 Dec 1998
TL;DR: In this paper, the authors propose a method for improving the fault coverage of manufacturing tests for integrated circuits having structures such as embedded memories, where the integrated circuit die of a semiconductor wafer are provided with a fuse array or other circuitry capable of storing an identification number.
Abstract: A method for improving the fault coverage of manufacturing tests for integrated circuits having structures such as embedded memories. In the disclosed embodiment of the invention, the integrated circuit die of a semiconductor wafer are provided with a fuse array or other circuitry capable of storing an identification number. The integrated circuit die also include an embedded memory or similar circuit and built-in self-test (BIST) and built-in self-repair (BISR) circuitry. At a point early in the manufacturing test process, the fuse array of each integrated circuit die is encoded with an identification number to differentiate the die from other die of the wafer or wafer lot. The integrity of the embedded memory of each integrated circuit die is then tested at the wafer level under a variety of operating conditions via the BIST and BISR circuitry. The results of these tests are stored in automatic test equipment (ATE) and associated with a particular integrated circuit die via the identification number of the die. The manufacturing test process then continues for the packaged integrated circuits. As with the unsingulated die, the packaged parts are subjected to one or more sets of stress factors, with data being gathered at each stage. Again, test results (e.g., faulty memory locations as determined by the BIST circuitry) are correlated to specific packaged parts via the identification number of the integrated circuit die. The test results of the various stages are next compared to determine if any detected repairable failures are uniform across the various operating conditions. In general, the assumption is made that an integrated circuit (IC) which exhibits different failure mechanisms at different stages of the testing/manufacturing process is questionable, and the part is discarded.

99 citations


Proceedings ArticleDOI
23 Feb 1998
TL;DR: A new approach for testing word-oriented memories is presented, distinguishing between inter-word and intra-word faults and allowing for a systematic way of converting tests for bit-oriented memories to tests for word-oriented memories.
Abstract: Most memory test algorithms are optimized tests for a particular memory technology, and a particular set of fault models, under the assumption that the memory is bit-oriented; i.e., read and write operations affect only a single bit in the memory. Traditionally, word-oriented memories have been tested by repeated application of a test for bit-oriented memories, whereby a different data background (which depends on the intra-word fault model used) is used during each iteration. This results in time inefficiencies and limited fault coverage. A new approach for testing word-oriented memories is presented, distinguishing between inter-word and intra-word faults and allowing for a systematic way of converting tests for bit-oriented memories to tests for word-oriented memories. The conversion consists of concatenating the bit-oriented test for inter-word faults with a test for intra-word faults. This approach results in more efficient tests with complete coverage of the targeted faults. Because most memories have an external data path which is wider than one bit, word-oriented memory tests are very important.
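
The traditional scheme the paper improves on can be sketched as follows: a bit-oriented March-like test is repeated once per data background, where the standard background set for a w-bit word is all-zeros plus alternating patterns of period 1, 2, 4, and so on. The concrete March elements below are illustrative, not the paper's conversion.

```python
def backgrounds(width):
    """Standard data backgrounds for intra-word fault detection: all-zeros
    plus alternating patterns of period 1, 2, 4, ...
    (log2(width)+1 backgrounds for a width-bit word)."""
    bgs, period = [0], 1
    while period < width:
        bg = 0
        for bit in range(width):
            if (bit // period) % 2:
                bg |= 1 << bit
        bgs.append(bg)
        period *= 2
    return bgs

def march_element(mem, addresses, ops, bg, width):
    """One March element: 'w0'/'r0' use the background itself as data,
    'w1'/'r1' its complement; r-operations check the expected value."""
    mask = (1 << width) - 1
    for a in addresses:
        for op in ops:
            data = bg if op[1] == "0" else bg ^ mask
            if op[0] == "w":
                mem[a] = data
            else:
                assert mem[a] == data, f"fault at address {a}"

# MATS+-like test repeated for every background on an 8-word, 4-bit memory.
width, size = 4, 8
for bg in backgrounds(width):
    mem = {}
    march_element(mem, range(size), ["w0"], bg, width)                    # up(w0)
    march_element(mem, range(size), ["r0", "w1"], bg, width)              # up(r0,w1)
    march_element(mem, range(size - 1, -1, -1), ["r1", "w0"], bg, width)  # down(r1,w0)
print("fault-free memory passes all backgrounds")
```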

Proceedings ArticleDOI
18 Oct 1998
TL;DR: The paper will experimentally show that the test patterns generated at the behavioral level provide a very high stuck-at fault coverage when applied to different gate-level implementations of the given VHDL behavioral specification.
Abstract: This paper proposes a behavioral-level test pattern generation algorithm for behavioral VHDL descriptions. The proposed approach is based on the comparison between the implicit description of the fault-free behavior and the faulty behavior, obtained through a new behavioral fault model. The paper will experimentally show that the test patterns generated at the behavioral level provide a very high stuck-at fault coverage when applied to different gate-level implementations of the given VHDL behavioral specification. Gate-level ATPGs applied on these same circuits obtain lower fault coverage, in particular when considering circuits with hard to detect faults.

Journal ArticleDOI
TL;DR: In this article, two numerical algorithms for fault location and distance protection using data from one end of a transmission line are presented; they are relatively simple and easy to implement in on-line applications.
Abstract: Two numerical algorithms for fault location and distance protection which use data from one end of a transmission line are presented. Both algorithms require only current signals as input data. Voltage signals are unnecessary for determining the unknown distance to the fault. The solution for the most frequent fault type, the phase-to-ground fault, is presented. The algorithms are relatively simple and easy to implement in on-line applications. The algorithms allow for accurate calculation of the fault location irrespective of the fault resistance and load. To illustrate the features of the new algorithms, steady-state and dynamic tests are presented.

Proceedings ArticleDOI
26 Apr 1998
TL;DR: An ATPG technique that reduces power dissipation during the test of sequential circuits by 70% on average with respect to test patterns generated without regard to heat dissipation.
Abstract: This paper proposes an ATPG technique that reduces power dissipation during the test of sequential circuits. The proposed approach exploits some redundancy introduced during the test pattern generation phase and selects a subset of sequences able to reduce the consumed power without reducing the fault coverage. The method is composed of three independent steps: redundant test pattern generation, power consumption measurement, and optimal test sequence selection. The experimental results gathered on the ISCAS benchmark circuits show that our approach decreases the power consumption by 70% on average with respect to the original test patterns, which were generated ignoring the heat dissipation problem.
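
A sketch of the selection step under a greedy set-cover heuristic (the paper's actual optimisation may differ): among the redundant sequences, repeatedly keep the one with the best new-faults-per-unit-power ratio until the original fault coverage is reached.

```python
def select_sequences(sequences):
    """Greedy selection of test sequences that preserves fault coverage at
    low power.  sequences: list of (name, set_of_detected_faults, power).
    Illustrative heuristic only."""
    target = set().union(*(faults for _, faults, _ in sequences))
    chosen, covered = [], set()
    while covered != target:
        # pick the sequence with the best (new faults / power) ratio
        name, faults, power = max(
            (s for s in sequences if s[1] - covered),
            key=lambda s: len(s[1] - covered) / s[2])
        chosen.append(name)
        covered |= faults
    return chosen

seqs = [("s1", {1, 2, 3}, 10.0),
        ("s2", {3, 4}, 2.0),
        ("s3", {1, 2}, 3.0),
        ("s4", {4, 5}, 4.0),
        ("s5", {5}, 1.0)]
print(select_sequences(seqs))  # keeps coverage of faults {1..5} at low power
```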

Proceedings ArticleDOI
23 Jun 1998
TL;DR: MEFISTO-L, a fault injection tool for VHDL models under development at LAAS, is described and applied in support of the proposed strategy for testing fault tolerance mechanisms (FTMs).
Abstract: The early assessment of the adequacy of fault tolerance mechanisms (FTMs), and the subsequent removal of fault tolerance deficiency faults (ftd-faults), are essential tasks in the design process of dependable computer systems. The paper is centered on the description and application of the features of MEFISTO-L, the fault injection tool for VHDL models, being developed at LAAS for supporting the strategy that we have proposed for testing FTMs. The paper first describes the overall testing framework in which MEFISTO-L is incorporated. The main guidelines for the design of MEFISTO-L and its objectives, attributes, implementation and use are then described. Special attention is given to the main original and innovative features: i) the embedded VHDL code analyzer, ii) the observation and injection mechanisms, iii) their synchronization, and iv) their automatic placement in the target VHDL model.

Journal ArticleDOI
TL;DR: In this paper, a low-cost vectorless test solution, known as oscillation test, is investigated to test the operational amplifier (op amp), one of the most commonly encountered analog building blocks.
Abstract: The operational amplifier (op amp) is one of the most commonly encountered analog building blocks. In this paper, the problem of testing an integrated op amp is treated. A new low-cost vectorless test solution, known as oscillation test, is investigated to test the op amp. During the test mode, the op amp is converted into a circuit that oscillates, and the oscillation frequency is evaluated to monitor faults. The tolerance band of the oscillation frequency is determined using a Monte Carlo analysis taking into account the nominal tolerance of all important technology and design parameters. Faults in the op amps under test which cause the oscillation frequency to exit the tolerance band can therefore be detected. Some Design for Testability (DfT) rules to rearrange op amps to form oscillators are presented, and the related practical problems and limitations are discussed. The oscillation frequency can be easily and precisely evaluated using purely digital circuitry. The simulation and practical implementation results confirm that the presented techniques ensure a high fault coverage with a low area overhead.
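
A minimal sketch of the tolerance-band construction: Monte Carlo sampling of component values within their nominal tolerances yields the band of fault-free oscillation frequencies, and a device is flagged when its measured frequency leaves the band. The oscillator model and all numeric values below are invented placeholders, not the paper's circuits.

```python
import random
from math import pi

def osc_frequency(r, c):
    """Toy oscillator model, f = 1/(2*pi*R*C); the real frequency expression
    depends on how the DfT rules reconfigure the op amp."""
    return 1.0 / (2 * pi * r * c)

R_NOM, C_NOM = 10e3, 10e-9    # nominal component values, illustrative only
TOL_R, TOL_C = 0.05, 0.10     # assumed technology/design tolerances

random.seed(1)
freqs = [osc_frequency(R_NOM * random.uniform(1 - TOL_R, 1 + TOL_R),
                       C_NOM * random.uniform(1 - TOL_C, 1 + TOL_C))
         for _ in range(10_000)]
f_lo, f_hi = min(freqs), max(freqs)
print(f"tolerance band: {f_lo:.0f} .. {f_hi:.0f} Hz")

def is_faulty(measured_f):
    """Flag devices whose oscillation frequency exits the tolerance band."""
    return not f_lo <= measured_f <= f_hi

print(is_faulty(osc_frequency(R_NOM, C_NOM)))      # False: inside the band
print(is_faulty(osc_frequency(3 * R_NOM, C_NOM)))  # True: fault shifts f out
```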

Proceedings ArticleDOI
18 Oct 1998
TL;DR: It is shown that a deterministic test set can be encoded as initial values of an accumulator-based structure, and all testable faults can be detected within a given test length by carefully selecting the seeds of the accumulator.
Abstract: Most built-in self test (BIST) solutions require specialized test pattern generation hardware which may introduce significant area overhead and performance degradation. Recently, some authors proposed test pattern generation on chip by means of functional units, such as adders or multipliers, that are also used in system mode. These schemes generate pseudo-random or pseudo-exhaustive patterns for serial or parallel BIST. If the circuit under test contains random-pattern-resistant faults, a deterministic test pattern generator is necessary to obtain complete fault coverage. In this paper it is shown that a deterministic test set can be encoded as initial values of an accumulator-based structure, and all testable faults can be detected within a given test length by carefully selecting the seeds of the accumulator. A ROM is added for storing the seeds, and the control logic of the accumulator is modified. In most cases the size of the ROM is less than the size required by traditional LFSR-based reseeding approaches.
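
A sketch of accumulator-based pattern generation: each stored seed is expanded into a burst of patterns by repeated modular addition, and the paper's contribution lies in choosing the seeds so that every deterministic test pattern appears within the allotted test length. The seed and increment values here are arbitrary illustrations.

```python
def accumulator_patterns(seed, increment, width, count):
    """Generate test patterns as successive states of a width-bit
    accumulator: acc(i+1) = (acc(i) + increment) mod 2**width."""
    mask = (1 << width) - 1
    acc, out = seed & mask, []
    for _ in range(count):
        out.append(acc)
        acc = (acc + increment) & mask
    return out

# Two stored seeds, each expanded into a short burst of 8-bit patterns.
for seed in (0x3A, 0xC1):
    burst = accumulator_patterns(seed, increment=0x4D, width=8, count=5)
    print([f"{p:08b}" for p in burst])
```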

Proceedings ArticleDOI
26 Apr 1998
TL;DR: It is shown that statistical encoding of test sets can be combined with low-cost pattern decoding for deterministic BIST and provides higher fault coverage than pseudorandom testing with shorter test application time.
Abstract: We present a new approach to built-in self-test of sequential circuits using precomputed test sets. Our approach is especially suited to circuits containing a large number of flip-flops but few primary inputs. Such circuits are often encountered as embedded cores and filters for digital signal processing, and are inherently difficult to test. We show that statistical encoding of test sets can be combined with low-cost pattern decoding for deterministic BIST. This approach exploits recent advances in sequential circuit ATPG and unlike other BIST schemes, does not require access to gate-level models of the circuit under test. Experimental results show that the proposed method provides higher fault coverage than pseudorandom testing with shorter test application time.
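
The statistical-encoding half of the approach can be sketched with a classic Huffman code over fixed-width blocks of the (typically highly repetitive) test stream; the paper's actual code and its low-cost on-chip decoder may differ in detail.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code from symbol frequencies (classic algorithm):
    frequent symbols get short codewords, rare ones get long codewords."""
    heap = [[freq, i, {sym: ""}]
            for i, (sym, freq) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, [f1 + f2, next_id, merged])
        next_id += 1
    return heap[0][2]

# Split a highly repetitive scan stream into 4-bit blocks and encode them.
stream = "0000" * 40 + "1111" * 8 + "0110" * 2
blocks = [stream[i:i + 4] for i in range(0, len(stream), 4)]
code = huffman_code(blocks)
encoded = "".join(code[b] for b in blocks)
print(len(stream), "bits ->", len(encoded), "bits", code)
```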

Proceedings ArticleDOI
18 Oct 1998
TL;DR: A deterministic BIST scheme for circuits with multiple scan paths is presented and a procedure is described for synthesizing a pattern generator which stimulates all scan chains simultaneously and guarantees complete fault coverage.
Abstract: A deterministic BIST scheme for circuits with multiple scan paths is presented. A procedure is described for synthesizing a pattern generator which stimulates all scan chains simultaneously and guarantees complete fault coverage. The new scheme may require less chip area than a classical LFSR-based approach while better or even complete fault coverage is obtained at the same time.

Journal ArticleDOI
TL;DR: A fast fault simulation approach based on ordinary logic emulation that reduces the number of faults actually emulated by screening off faults not activated or with short propagation distances before emulation, and by collapsing nonstem faults into their equivalent stem faults.
Abstract: A fast fault simulation approach based on ordinary logic emulation is proposed. The circuit configured into our system to emulate the faulty circuit's behaviour is synthesized from the good circuit and the given fault list in a novel way. Fault injection is made easy by shifting the content of a fault injection scan chain or by selecting the output of a parallel fault injection selector, which eliminates the time-consuming bit-stream regeneration process. Experimental results for ISCAS-89 benchmark circuits show that our serial fault emulator is about 20 times faster than HOPE, and our analysis shows that the speedup grows with the circuit size. Two hybrid fault emulation approaches are also proposed. The first reduces the number of faults actually emulated by screening off faults that are not activated or have short propagation distances before emulation, and by collapsing nonstem faults into their equivalent stem faults. The second reduces the hardware requirement of the fault emulator by incorporating an ordinary fault simulator.

Proceedings ArticleDOI
24 Aug 1998
TL;DR: It is shown how to design address sequence generators and address-dependent data for March tests that generate all the patterns required for the detection of those faults.
Abstract: New fault models, such as the unrestored write and the false write-through faults, and suitable test algorithms have recently been developed by several authors. These tests are applied in addition to March tests. Since a March test algorithm can be implemented in many different ways and still be effective in detecting its target faults, we have what we call degrees of freedom in the test space. In this paper it is shown that, for commonly used memory organizations, tests for the unrestored write and false write-through faults can be integrated into March test sequences. It is shown how to design address sequence generators and address-dependent data for March tests that generate all the patterns required for the detection of those faults. The detection properties of the original March tests are retained. The additional overhead in terms of silicon area and timing for an on-chip realization of a built-in March self-test with the added fault detection features is negligible, and the test application time remains unchanged.
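
For readers unfamiliar with March notation, here is a sketch of a March test runner in which the address order and the address-to-physical mapping are explicit parameters, i.e. the "degrees of freedom" the paper exploits. March C- and the stuck-at demonstration fault are standard textbook examples, not the paper's UW/FWT-specific sequences.

```python
UP, DOWN = 1, -1

def run_march(mem_read, mem_write, size, elements, addr_order=None):
    """Execute a March test.  elements: list of (direction, [ops]) where an
    op is 'r0', 'r1', 'w0' or 'w1'.  addr_order maps a logical index to a
    physical address (identity here); the paper designs special address
    sequences and data backgrounds over exactly such free parameters."""
    addr_order = addr_order or (lambda i: i)
    for direction, ops in elements:
        idx = range(size) if direction == UP else range(size - 1, -1, -1)
        for i in idx:
            a = addr_order(i)
            for op in ops:
                if op[0] == "w":
                    mem_write(a, int(op[1]))
                elif mem_read(a) != int(op[1]):
                    return f"fault detected at address {a}"
    return "pass"

# March C-: up(w0) up(r0,w1) up(r1,w0) down(r0,w1) down(r1,w0) up(r0)
MARCH_C = [(UP, ["w0"]), (UP, ["r0", "w1"]), (UP, ["r1", "w0"]),
           (DOWN, ["r0", "w1"]), (DOWN, ["r1", "w0"]), (UP, ["r0"])]

mem = {}
print(run_march(mem.get, mem.__setitem__, 16, MARCH_C))              # pass
mem_sa0 = {}
write_sa0 = lambda a, v: mem_sa0.__setitem__(a, 0 if a == 5 else v)  # stuck-at-0 cell
print(run_march(mem_sa0.get, write_sa0, 16, MARCH_C))  # fault detected at address 5
```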

Book
01 Jan 1998
TL;DR: This book treats random testing and built-in self-test of digital circuits, covering fault models, test generation methods, random test length estimation for combinational and sequential circuits, RAMs, and microprocessors, signature analysis, and design for random testability.
Abstract: Random testing and built-in self-test. Models for digital circuits and fault models. Basic concepts and test generation methods. Performance measurements for a test sequence. Basic principles of random testing. Random test length for combinational circuits. Random test length for sequential circuits. Random test length for RAMs. Random test length for microprocessors. Generation of random test sequences. Experimental results. Signature analysis. Design for random testability. Appendices: A - Random pattern sources. B - Calculation of a probability of complete fault coverage. C - Finite Markov chains. D - Black-box fault model. E - Exact calculation of activities. F - Comparing asynchronous and synchronous test. G - Proofs of properties 7.1, 7.2 and 12.3. H - Microprocessor Motorola 6800. I - Pseudorandom testing. J - Random testing of delay faults. K - Subsequences of required lengths. L - Diagnosis from random testing. M - Conjecture about multiple faults. Exercises. Solutions to exercises.
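
The random-test-length chapters rest on a standard relation: a fault with per-vector detection probability p escapes N independent random vectors with probability (1-p)^N, so the length needed for confidence c is N = ceil(ln(1-c)/ln(1-p)). A two-line check:

```python
from math import ceil, log

def random_test_length(p, confidence):
    """Smallest N with 1 - (1 - p)**N >= confidence: the number of random
    vectors needed to detect a fault of per-vector detection probability p."""
    return ceil(log(1 - confidence) / log(1 - p))

# A hard fault (p = 1/2**16) needs roughly 300k random vectors at 99% confidence.
for p in (0.01, 1 / 2**16):
    print(p, random_test_length(p, 0.99))
```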

Journal ArticleDOI
TL;DR: It is shown that by using parity prediction, on-line error detection can be incorporated into these multipliers with very low hardware overheads; since the overheads are essentially independent of m, they are particularly low for large values of m.
Abstract: In this paper, error detection is applied to four finite field bit-serial multipliers. It is shown that by using parity prediction, on-line error detection can be incorporated into these multipliers with very low hardware overheads. These hardware overheads are generally independent of m and comprise only a handful of gates, so for large values of m the relative overheads are particularly low. The fault coverage of the presented structures has been investigated by simulation experiment and shown to range between 90% and 94.3%.
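
A software sketch of parity prediction on an MSB-first bit-serial GF(2^m) multiplier: the predictor tracks the accumulator parity using only cheap observations (the accumulator MSB, the parity of operand b, and the parity of the reduction polynomial), and a mismatch with the actual parity signals an error. This illustrates the principle rather than reproducing the paper's four multiplier structures; note that a single parity bit only catches odd-weight errors, consistent with the sub-100% coverage reported.

```python
def parity(x):
    return bin(x).count("1") & 1

def bitserial_mult_with_parity_check(a, b, poly, m, flip_cycle=None):
    """MSB-first bit-serial multiplication in GF(2^m): each cycle the
    accumulator is multiplied by x (shift + conditional reduction by `poly`)
    and b is conditionally added.  Shifting preserves parity, reduction XORs
    parity(poly), adding b XORs parity(b) -- so the predictor needs only a
    handful of gates.  flip_cycle injects a single-bit fault for demo."""
    acc, p_pred = 0, 0
    for cycle in range(m - 1, -1, -1):
        msb = (acc >> (m - 1)) & 1
        acc <<= 1
        if (acc >> m) & 1:
            acc ^= poly                      # modular reduction by poly
        ai = (a >> cycle) & 1
        if ai:
            acc ^= b                         # conditional add of operand b
        p_pred ^= (msb & parity(poly)) ^ (ai & parity(b))
        if cycle == flip_cycle:
            acc ^= 1                         # injected single-bit fault
        if parity(acc) != p_pred:
            return acc, "error detected"
    return acc, "ok"

POLY, M = 0b10011, 4                         # GF(2^4), x^4 + x + 1
print(bitserial_mult_with_parity_check(0b0110, 0b0101, POLY, M))
print(bitserial_mult_with_parity_check(0b0110, 0b0101, POLY, M, flip_cycle=2))
```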

Proceedings ArticleDOI
18 Oct 1998
TL;DR: The results of very detailed studies of pattern- and timing-dependent failures from the 309 dies in the retest of an experimental test chip show that multiple-detect single stuck fault test sets have high transition fault coverage.
Abstract: This paper presents the results of very detailed studies of pattern- and timing-dependent failures from the 309 dies in the retest of an experimental test chip. 22 out of the 50 CUTs with pattern-dependent failures had test escapes if the test sets were reordered. Some timing-dependent failures became timing-independent combinational (TIC) defects at very low voltage. Multiple-detect single stuck fault test sets have high transition fault coverage. Most dies with TIC or non-TIC defects were close to gross failures or next to the wafer periphery.

Proceedings ArticleDOI
23 Jun 1998
TL;DR: Formal risk analysis, an approach for automatically generating a fault tree from finite state machine-based descriptions of a system, is presented and is the basis for subsequent improvements of the system design and quantitative analysis of safety and liveness requirements in the presence of failures.
Abstract: Usually, fault tree analyses are performed manually. They are based on documents that describe the system. Considerable knowledge, system insight, and overview are necessary to consider the many failure modes and the dependencies between system components and their functionality all at once. Often, the behavior is too complicated to fully comprehend all possible failure consequences. Manual fault tree analysis is error-prone, costly, and not necessarily complete. Formal risk analysis, an approach for automatically generating a fault tree from finite state machine-based descriptions of a system, is presented. The generated fault tree is complete with respect to all failures assumed possible. It is the basis for subsequent improvements of the system design and for quantitative analysis of safety and liveness requirements in the presence of failures. A case study of formal risk analysis, the automatic generation of a fault tree for all sensor failures of a production cell's elevating rotary table, is discussed.
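
A toy sketch of the generation idea: explore a state machine whose transitions are optionally labelled with failure events; every event set that drives the machine from its initial state into the hazard state becomes an AND-term under the top event, and the top event is their OR. The elevating-rotary-table states and sensor failures below are invented stand-ins for the case study, and the depth bound replaces the paper's exhaustive formal analysis.

```python
def generate_fault_tree(transitions, initial, hazard, max_depth=6):
    """Collect minimal cut sets: sets of failure events that can drive the
    FSM from `initial` into `hazard`.  Top event = OR over cut sets, each
    cut set = AND over its failure events.  Depth-bounded sketch only."""
    cut_sets = set()

    def walk(state, events, depth):
        if state == hazard:
            cut_sets.add(events)
            return
        if depth == max_depth:
            return
        for src, event, dst in transitions:
            if src == state:
                walk(dst, events | {event} if event else events, depth + 1)

    walk(initial, frozenset(), 0)
    # keep only minimal cut sets (no proper superset of another)
    return [set(c) for c in cut_sets if not any(o < c for o in cut_sets)]

# Invented fragment: None-labelled transitions are normal behaviour,
# the others are sensor failure modes leading toward the hazard 'overrun'.
fsm = [("idle", None, "lifting"),
       ("lifting", None, "at_top"),
       ("lifting", "pos_sensor_stuck", "overrun"),
       ("at_top", "angle_sensor_drift", "misaligned"),
       ("misaligned", None, "overrun"),
       ("at_top", None, "idle")]
print(generate_fault_tree(fsm, "idle", "overrun"))
# top event 'overrun' = OR of the printed single-failure AND-terms
```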

Proceedings ArticleDOI
19 Jan 1998
TL;DR: A set of collapsing rules based on the analysis of the assembly code and of the behavior of a fault-free run of the system is proposed; the rules reduce the fault list length and the fault injection time without decreasing the accuracy of the results.
Abstract: Fault injection has become a popular approach to evaluate and possibly improve the dependability of computer-based systems. One of the main issues to be solved when setting up a fault injection experiment is the generation of a list of faults to be injected that is truly representative of the whole set of possible faults. This paper proposes a set of collapsing rules based on the analysis of the assembly code and of the behavior of a fault-free run of the system, useful for reducing the fault list length and the fault injection time without decreasing the accuracy of the results. The approach is suitable to be adapted to microprocessor-based systems and is independent of the method used to generate the fault list to be collapsed.
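
One trace-based collapsing rule can be sketched as follows. The rule shown, dropping faults that are overwritten before ever being read, is a representative example; the paper defines a set of such rules over the assembly code and the golden-run trace.

```python
READ, WRITE = "R", "W"

def collapse_fault_list(candidates, trace):
    """Collapse a fault list using a fault-free execution trace.

    candidates: (time, location) pairs meaning 'flip location at time'.
    trace: chronological (time, op, location) accesses from the golden run.

    Rule: a fault injected into a location whose next access after the
    injection time is a WRITE is overwritten before it can propagate, so it
    is collapsed as equivalent to 'no fault'.  Locations never accessed
    again are collapsed for the same reason."""
    kept = []
    for t_inj, loc in candidates:
        nxt = next((op for t, op, l in trace if l == loc and t > t_inj), None)
        if nxt == READ:
            kept.append((t_inj, loc))   # may influence the computation
    return kept

trace = [(1, WRITE, "r1"), (2, WRITE, "r2"), (3, READ, "r1"),
         (5, WRITE, "r1"), (7, READ, "r2")]
cands = [(0, "r1"), (2, "r1"), (4, "r1"), (3, "r2")]
print(collapse_fault_list(cands, trace))
# (0,'r1') and (4,'r1') dropped (overwritten); (2,'r1') and (3,'r2') kept
```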

Journal ArticleDOI
TL;DR: Experimental results show that BIST TPGs based on input reduction achieve complete stuck-at fault coverage in practical test lengths (≤2^30) for many benchmark circuits.
Abstract: A new technique called input reduction is proposed for built-in self test (BIST) test pattern generator (TPG) design and test set compaction. This technique analyzes the circuit function and identifies sets of compatible and inversely compatible inputs; inputs in each set can be combined into a test signal in the test mode without sacrificing fault coverage, even if they belong to the same circuit cone. The test signals are used to design BIST TPGs that guarantee the detection of all detectable stuck-at faults in practical test lengths. A deterministic test set generated for the reduced circuit obtained by combining inputs into test signals is usually more compact than that generated for the original circuit. Experimental results show that BIST TPGs based on input reduction achieve complete stuck-at fault coverage in practical test lengths (≤2^30) for many benchmark circuits. These are achieved with low area overhead and performance penalty to the circuit under test. Results also show that the memory storage and test application time for external testing using deterministic test sets can be reduced by as much as 85%.
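
A simplified sketch of the compatibility analysis: working from test cubes (vectors with don't-cares) rather than from the circuit function, two inputs are grouped when no cube requires them to take different specified values, so each group can be driven by a single TPG signal. The paper's analysis is stronger; it works on the circuit function itself and also exploits inversely compatible inputs while proving coverage is preserved.

```python
def compatible_classes(cubes, n_inputs):
    """Group circuit inputs into compatibility classes from a set of test
    cubes (strings over '0', '1', 'X').  Inputs join a class when no cube
    assigns them conflicting specified values, so the whole class can share
    one test signal.  Simplified stand-in for the paper's analysis."""
    classes = []                 # each class: input indices driven together
    for i in range(n_inputs):
        for cls in classes:
            if all(c[i] == "X" or c[j] == "X" or c[i] == c[j]
                   for c in cubes for j in cls):
                cls.append(i)
                break
        else:
            classes.append([i])  # i conflicts with every existing class
    return classes

cubes = ["1X0X", "X10X", "0X1X", "XXX1"]
print(compatible_classes(cubes, 4))   # e.g. [[0, 1, 3], [2]]
```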

Journal ArticleDOI
TL;DR: A new approach for the multiple fault location in linear analog circuits is proposed, based on the k-fault hypothesis and provided with efficient algorithms for fault location even in the case of low-testability circuits.
Abstract: A new approach for the multiple fault location in linear analog circuits is proposed. It combines classical numerical procedures with symbolic analysis techniques, which is particularly useful in the parametric fault diagnosis field. The proposed approach is based on the k-fault hypothesis and is provided with efficient algorithms for fault location even in the case of low-testability circuits. The developed algorithms have been used for realizing a software package prototype which implements a fully automated system for the fault location in linear analog circuits of moderate size.

Journal ArticleDOI
TL;DR: This scheme identifies a suitable control and data flow from the register-transfer level circuit, and uses it to test each embedded element in the circuit by symbolically justifying its precomputed test set from the system primary inputs to the element inputs and symbolically propagating the output response to the system primary outputs.
Abstract: In this paper, we present a technique for extracting functional (control/data flow) information from register-transfer level controller/data path circuits, and illustrate its use in design for hierarchical testability of these circuits. This scheme does not require any additional behavioral information. It identifies a suitable control and data flow from the register-transfer level circuit, and uses it to test each embedded element in the circuit by symbolically justifying its precomputed test set from the system primary inputs to the element inputs and symbolically propagating the output response to the system primary outputs. When symbolic justification and propagation become difficult, it inserts test multiplexers at suitable points to increase the symbolic controllability and observability of the circuit. These test multiplexers are mostly restricted to off-critical paths. Testability analysis and insertion are completely based on the register-transfer level circuit and the functional information automatically extracted from it, and are independent of the data path bit width owing to their symbolic nature. Furthermore, the data path test set is obtained as a byproduct of this analysis without any further search. Unlike many other design-for-testability techniques, this scheme makes the combined controller-data path very highly testable. It is general enough to handle control-flow-intensive register-transfer level circuits like protocol handlers as well as data-flow-intensive circuits like digital filters. It results in low area/delay/power overheads, high fault coverage, and very low test generation times (because it is symbolic and independent of bit width). Also, a large part of our system-level test sets can be applied at speed. Experimental results on many benchmarks show the average area, delay, and power overheads for testability to be 3.1, 1.0, and 4.2%, respectively. Over 99% fault coverage is obtained in most cases with a two to four orders of magnitude test generation time advantage over an efficient gate-level sequential test pattern generator and a one to three orders of magnitude advantage over an efficient gate-level combinational test pattern generator (that assumes full scan). In addition, the test application times obtained for our method are comparable with those of gate-level sequential test pattern generators, and up to two orders of magnitude smaller than designs using full scan.

Journal ArticleDOI
TL;DR: A modular yet integrated approach to the problem of fast fault detection and classification that can be model-based or model-free, and which would be applicable to arbitrary dynamic systems.
Abstract: This paper presents a modular yet integrated approach to the problem of fast fault detection and classification. Although the specific application example studied here is a power system, the method would be applicable to arbitrary dynamic systems. The approach is quite flexible in the sense that it can be model-based or model-free. In the model-free case, we emphasize the use of concepts from signal processing and wavelet theory to create fast and sensitive fault indicators. If a model is available then conventionally generated residuals can serve as fault indicators. The indicators can then be analyzed by standard statistical hypothesis testing or by artificial neural networks to create intelligent decision rules. After a detection, the fault indicator is processed by a Kohonen network to classify the fault. The approach described here is expected to be of wide applicability. Results of computer experiments with simulated faulty transmission lines are included.
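
A minimal model-free sketch in the spirit of the wavelet indicators: the energy of first-level Haar detail coefficients over a sliding window spikes at abrupt waveform changes, and a threshold (fixed here; derived by statistical hypothesis testing on fault-free data in the paper's framework) raises the detection flag. The simulated waveform and all constants are illustrative.

```python
import math

def haar_detail(window):
    """First-level Haar wavelet detail coefficients (pairwise differences):
    small on smooth signals, large across abrupt changes."""
    return [(window[i] - window[i + 1]) / 2
            for i in range(0, len(window) - 1, 2)]

def fault_indicator(window):
    """Detail-coefficient energy over the window: a fast, sensitive
    fault indicator in the model-free setting."""
    return sum(d * d for d in haar_detail(window))

# Simulated line current: a sinusoid with an abrupt fault transient.
wave = [math.sin(0.1 * t) + (2.0 if 201 <= t < 211 else 0.0)
        for t in range(400)]
THRESHOLD = 0.5     # in practice set from fault-free statistics
for start in range(0, len(wave) - 16, 16):
    e = fault_indicator(wave[start:start + 16])
    if e > THRESHOLD:
        print(f"fault flagged in window t={start}..{start + 15}, energy={e:.2f}")
```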