
Showing papers on "Fault coverage published in 2004"


Proceedings ArticleDOI
26 Oct 2004
TL;DR: Case study information on ATPG- and DFT-based solutions for test power reduction is presented and ICs have been observed to fail at specified minimum operating voltages during structured at-speed testing while passing all other forms of test.
Abstract: It is a well-known phenomenon that test power consumption may exceed that of functional operation. ICs have been observed to fail at specified minimum operating voltages during structured at-speed testing while passing all other forms of test. Methods exist to reduce power without dramatically increasing pattern volume for a given coverage. We present case study information on ATPG- and DFT-based solutions for test power reduction.

285 citations


Journal Article
TL;DR: A survey of fault injection techniques is presented, with a comparison of the different injection techniques and an overview of the different tools.
Abstract: Fault-tolerant circuits are currently required in several major application sectors. Alongside, and complementary to, other possible approaches such as formal proving or analytical modeling, whose applicability and accuracy are significantly restricted in the case of complex fault-tolerant systems, fault injection has been recognized as particularly attractive and valuable. Fault injection provides a method of assessing the dependability of a system under test. It involves inserting faults into a system and monitoring the system to determine its behavior in response to a fault. Several fault injection techniques have been proposed and experimented with in practice. They can be grouped into hardware-based, software-based, simulation-based, emulation-based, and hybrid fault injection. This paper presents a survey of fault injection techniques, with a comparison of the different injection techniques and an overview of the different tools.

234 citations
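As a concrete illustration of the simulation-based category above, here is a minimal sketch of fault injection into a toy gate-level netlist: each input vector is simulated once fault-free and once with a stuck-at value forced on a chosen net, and an output mismatch counts as detection. The circuit, net names, and fault model are illustrative assumptions, not taken from the survey.

```python
from itertools import product

# Toy netlist: gate name -> (function, input names). Primary inputs: a, b, c.
NETLIST = {
    "n1": (lambda x, y: x & y, ("a", "b")),
    "n2": (lambda x, y: x | y, ("n1", "c")),
    "out": (lambda x: 1 - x, ("n2",)),
}

def simulate(inputs, fault=None):
    """Evaluate the netlist; `fault` is (net, stuck_value) or None."""
    values = dict(inputs)
    if fault and fault[0] in values:          # fault on a primary input
        values[fault[0]] = fault[1]
    for net, (fn, ins) in NETLIST.items():    # assumes topological order
        values[net] = fn(*(values[i] for i in ins))
        if fault and net == fault[0]:         # inject stuck-at fault here
            values[net] = fault[1]
    return values["out"]

def detected(fault):
    """A fault is detected if some input vector changes the output."""
    for a, b, c in product((0, 1), repeat=3):
        vec = {"a": a, "b": b, "c": c}
        if simulate(vec) != simulate(vec, fault):
            return vec                        # first detecting vector
    return None

print(detected(("n1", 0)))   # a vector exposing n1 stuck-at-0, or None
```

Hardware- and emulation-based techniques force the same kind of corruption on a physical or FPGA-mapped circuit instead of a software model; the golden-versus-faulty comparison is unchanged.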


Journal ArticleDOI
TL;DR: In this paper, an effective fault location algorithm and intelligent fault diagnosis scheme are proposed, which first identifies fault locations using an iterative estimation of load and fault current at each line section, then an actual location is identified, applying the current pattern matching rules.
Abstract: In this paper, an effective fault location algorithm and intelligent fault diagnosis scheme are proposed. The proposed scheme first identifies fault locations using an iterative estimation of load and fault current at each line section. Then an actual location is identified by applying the current pattern matching rules. If necessary, a comparison of the interrupted load with the actual load follows and generates the final diagnosis decision. The effect of load uncertainty and fault resistance has been carefully investigated through simulations, and the results turn out to be very satisfactory.

218 citations


Proceedings ArticleDOI
04 Oct 2004
TL;DR: This work examines fault tolerant communication algorithms for use in the NoC domain and finds that the redundant random walk algorithm offers significantly reduced overhead while maintaining useful levels of fault tolerance.
Abstract: As technology scales, fault tolerance is becoming a key concern in on-chip communication. Consequently, this work examines fault tolerant communication algorithms for use in the NoC domain. Two different flooding algorithms and a random walk algorithm are investigated. We show that the flood-based fault tolerant algorithms have an exceedingly high communication overhead. We find that the redundant random walk algorithm offers significantly reduced overhead while maintaining useful levels of fault tolerance. We then compare the implementation costs of these algorithms, both in terms of area as well as in energy consumption, and show that the flooding algorithms consume an order of magnitude more energy per message transmitted.

214 citations
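The redundant random walk idea can be sketched in a few lines: several copies of a message each perform an independent random walk over a mesh, avoiding links marked faulty, and delivery succeeds if any copy reaches the destination within a hop budget. All parameters (mesh size, fault fraction, walker count, TTL) are illustrative assumptions, not the paper's experimental setup.

```python
import random

def walk(pos, dst, ttl, rng, n, ok):
    """One random walk on an n x n mesh; True if dst reached within ttl hops."""
    for _ in range(ttl):
        if pos == dst:
            return True
        x, y = pos
        nbrs = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= x + dx < n and 0 <= y + dy < n and ok(pos, (x + dx, y + dy))]
        if not nbrs:
            return False                      # walker is boxed in by faults
        pos = rng.choice(nbrs)
    return pos == dst

def random_walk_delivery(n=8, faulty_frac=0.1, walkers=4, ttl=200,
                         trials=500, seed=1):
    """Fraction of corner-to-corner messages delivered by redundant walkers."""
    rng = random.Random(seed)
    links = set()                             # undirected mesh links
    for x in range(n):
        for y in range(n):
            if x + 1 < n: links.add(((x, y), (x + 1, y)))
            if y + 1 < n: links.add(((x, y), (x, y + 1)))
    faulty = set(rng.sample(sorted(links), int(faulty_frac * len(links))))

    def ok(u, v):
        return (u, v) not in faulty and (v, u) not in faulty

    src, dst = (0, 0), (n - 1, n - 1)
    delivered = sum(any(walk(src, dst, ttl, rng, n, ok) for _ in range(walkers))
                    for _ in range(trials))
    return delivered / trials

print(random_walk_delivery())
```

Unlike flooding, each message injects only `walkers` packets rather than one per link, which is the overhead advantage the paper quantifies.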


Journal ArticleDOI
TL;DR: A detailed simulation-based characterization of QCA defects and a study of their effects at the logic level are presented, along with a testing technique that requires only a constant number of test vectors to achieve 100% fault coverage with respect to the fault list of the original design.
Abstract: There has been considerable research on quantum dot cellular automata (QCA) as a new computing scheme in the nanoscale regimes. The basic logic element of this technology is the majority voter. In this paper, a detailed simulation-based characterization of QCA defects and study of their effects at logic level are presented. Testing of these QCA devices at logic level is investigated and compared with conventional CMOS-based designs. Unique testing features of designs based on this technology are presented and interesting properties have been identified. A testing technique is presented; it requires only a constant number of test vectors to achieve 100% fault coverage with respect to the fault list of the original design. A design-for-test scheme is also presented, which results in the generation of a reduced test set at 100% fault coverage.

172 citations


Journal ArticleDOI
TL;DR: In this paper, a fault diagnosis and accommodation system (FDAS) for open-frame underwater vehicles is proposed, in which fault detector units (FDUs) associated with each thruster monitor its state, and the controller uses the information provided by the FDUs to accommodate faults and perform an appropriate control reallocation.

170 citations


Patent
30 Nov 2004
TL;DR: In this article, a test assembly for testing product circuitry of a product die is presented. The test circuitry is designed to provide a high degree of fault coverage for the corresponding product circuitry, generally without regard to the amount of silicon area required by the test circuitry.
Abstract: One embodiment of the present invention concerns a test assembly for testing product circuitry of a product die. In one embodiment, the test assembly includes a test die and an interconnection substrate for electrically coupling the test die to a host controller that communicates with the test die. The test die may be designed according to a design methodology that includes the step of concurrently designing the test circuitry and the product circuitry in a unified design. The test circuitry can be designed to provide a high degree of fault coverage for the corresponding product circuitry, generally without regard to the amount of silicon area that will be required by the test circuitry. The design methodology then partitions the unified design into the test die and the product die. The test die includes the test circuitry and the product die includes the product circuitry. The product and test die may then be fabricated on separate semiconductor wafers. By partitioning the product circuitry and test circuitry into separate die, embedded test circuitry can be either eliminated or minimized on the product die. This will tend to decrease the size of the product die and decrease the cost of manufacturing the product die while maintaining a high degree of test coverage of the product circuits within the product die. The test die can be used to test multiple product die on one or more wafers.

150 citations


Proceedings ArticleDOI
26 Oct 2004
TL;DR: Experiments show that testing transition faults through the longest paths can be done with a reasonable test set size; the test generation efficiency is evaluated on ISCAS89 benchmark circuits and industrial designs.
Abstract: To detect the smallest delay faults at a fault site, the longest path(s) through it must be tested at full speed. Existing test generation tools are inefficient at automatically identifying the longest testable paths due to the high computational complexity. In this work, a test generation methodology for scan-based synchronous sequential circuits is presented, under two at-speed test strategies used in industry. The two strategies are compared and the test generation efficiency is evaluated on ISCAS89 benchmark circuits and industrial designs. Experiments show that testing transition faults through the longest paths can be done with a reasonable test set size.

121 citations


Proceedings ArticleDOI
26 Oct 2004
TL;DR: This work proposes an effective method for applying fine-delay fault testing in order to improve defect coverage, especially of resistive opens, by grouping conventional delay-fault patterns into sets of almost equal-length paths, which narrows the overall path-length distribution and allows running the pattern sets at a higher speed, thus enabling the detection of small delay faults.
Abstract: This work proposes an effective method for applying fine-delay fault testing in order to improve defect coverage, especially of resistive opens. The method is based on grouping conventional delay-fault patterns into sets of almost equal-length paths. This narrows the overall path-length distribution and allows running the pattern sets at a higher speed, thus enabling the detection of small delay faults. These small delay faults are otherwise undetectable because they are masked by longer paths. A requirement for this method is to have hazard-free paths. To obtain these (almost) hazard-free paths, we use a fast and simple postprocessing step that filters out paths with hazards. The experimental data shows the effectiveness and the necessity of this filtering process.

102 citations


Journal ArticleDOI
TL;DR: A method for identifying the X inputs of test vectors in a given test set by using fault simulation and procedures similar to implication and justification of automatic test pattern generation (ATPG) algorithms is proposed.
Abstract: Given a test set for stuck-at faults of a combinational circuit or a full-scan sequential circuit, some of the primary input values may be changed to the opposite logic values without losing fault coverage. We can regard such input values as don't care (X). In this paper, we propose a method for identifying the X inputs of test vectors in a given test set. While there are generally many combinations of X inputs in the test set, the proposed method finds one including as many X inputs as possible, by using fault simulation and procedures similar to the implication and justification of automatic test pattern generation (ATPG) algorithms. Experimental results for ISCAS benchmark circuits show that approximately 69% of the inputs of uncompacted test sets could be X on average. Even for highly compacted test sets, the method found that approximately 48% of inputs are X.

101 citations
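A brute-force version of the X-identification idea can be sketched as follows: for a tiny combinational circuit, a primary input bit of a test vector is declared X if forcing it to the opposite value leaves the set of faults detected by the whole test set unchanged. The circuit, fault list, and test set are made up for illustration; the paper achieves this far more efficiently with implication and justification rather than re-simulation.

```python
from itertools import product

# Toy circuit out = (a AND b) OR c with a single-stuck-at fault injected.
def circuit(a, b, c, fault=None):
    vals = {"a": a, "b": b, "c": c}
    if fault and fault[0] in vals:
        vals[fault[0]] = fault[1]
    vals["n1"] = vals["a"] & vals["b"]
    if fault and fault[0] == "n1": vals["n1"] = fault[1]
    return vals["n1"] | vals["c"]

FAULTS = [(net, v) for net in ("a", "b", "c", "n1") for v in (0, 1)]

def detects(vec, fault):
    return circuit(*vec) != circuit(*vec, fault)

def covered(test_set):
    return {f for f in FAULTS for v in test_set if detects(v, f)}

def find_x_bits(test_set):
    """A bit is X if forcing it to either logic value preserves coverage."""
    base = covered(test_set)
    xs = []
    for i, vec in enumerate(test_set):
        for bit in range(3):
            flipped = list(vec); flipped[bit] ^= 1
            trial = test_set[:i] + [tuple(flipped)] + test_set[i + 1:]
            if covered(trial) >= base:        # no detection lost
                xs.append((i, "abc"[bit]))
    return xs

tests = [(1, 1, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0)]
print(find_x_bits(tests))   # (vector index, input name) pairs that can be X
```

The X bits found this way are exactly the positions a compression or low-power fill strategy is then free to assign.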


Proceedings ArticleDOI
16 Feb 2004
TL;DR: A novel scan-based delay test approach, referred to as the hybrid delay scan, that combines the advantages of the skewed-load and broad-side approaches and does not require a strong buffer or buffer tree to drive the fast switching scan enable signal.
Abstract: A novel scan-based delay test approach, referred to as the hybrid delay scan, is proposed in this paper. The proposed scan-based delay testing method combines the advantages of the skewed-load and broad-side approaches. Unlike the skewed-load approach, whose design requirement is often too costly to meet due to the fast switching scan enable signal, the hybrid delay scan does not require a strong buffer or buffer tree to drive the fast switching scan enable signal. Hardware overhead added to standard scan designs to implement the hybrid approach is negligible. Since the fast scan enable signal is internally generated, no external pin is required. Transition delay fault coverage achieved by the hybrid approach is equal to or higher than that achieved by the broad-side approach for all ISCAS 89 benchmark circuits. On average, about 4.5% improvement in fault coverage is obtained by the hybrid approach over the broad-side approach.

Proceedings ArticleDOI
28 Jun 2004
TL;DR: This paper illustrates exhaustive fault simulation using a new startup algorithm for the time-triggered architecture (TTA) and shows that this approach is fast enough to be deployed in the design loop.
Abstract: The increasing performance of modern model-checking tools offers high potential for the computer-aided design of fault-tolerant algorithms. Instead of relying on human imagination to generate taxing failure scenarios to probe a fault-tolerant algorithm during development, we define the fault behavior of a faulty process at its interfaces to the remaining system and use model checking to automatically examine all possible failure scenarios. We call this approach "exhaustive fault simulation". In this paper we illustrate exhaustive fault simulation using a new startup algorithm for the time-triggered architecture (TTA) and show that this approach is fast enough to be deployed in the design loop. We use the SAL toolset from SRI for our experiments and describe an approach to modeling and analyzing fault-tolerant algorithms that exploits the capabilities of tools such as this.

Proceedings ArticleDOI
25 Apr 2004
TL;DR: This paper is the first paper of its kind that treats the scan enable signal as a test data signal during the scan operation of a test pattern and shows that the extra flexibility of reconfiguring the scan chains every shift cycle reduces the number of different configurations required by RSSA while keeping test coverage the same.
Abstract: This paper extends the reconfigurable shared scan-in architecture (RSSA) to provide the additional ability to change values on the scan configuration signals (scan enable signals) during the scan operation on a per-shift basis. We show that the extra flexibility of reconfiguring the scan chains every shift cycle reduces the number of different configurations required by RSSA while keeping test coverage the same. In addition, a simpler analysis can be used to construct the scan chains. This is the first paper of its kind to treat the scan enable signal as a test data signal during the scan operation of a test pattern. Results are presented on some ISCAS as well as industrial circuits.

Proceedings ArticleDOI
Wu-Tung Cheng, Kun-Han Tsai, Yu Huang, Nagesh Tamarapalli, Janusz Rajski
15 Nov 2004
TL;DR: The proposed methodology enables seamless reuse of the existing standard ATPG based diagnosis infrastructure with compressed test data and indicates that the diagnostic resolution of devices with embedded compression is comparable with that of devices without embedded compression.
Abstract: In scan test environment, designs with embedded compression techniques can achieve dramatic reduction in test data volume and test application time. However, performing fault diagnosis with the reduced test data becomes a challenge. In this paper, we provide a general methodology based on circuit transformation technique that can be applied for performing fault diagnosis in the context of any compression technique. The proposed methodology enables seamless reuse of the existing standard ATPG based diagnosis infrastructure with compressed test data. Experimental results indicate that the diagnostic resolution of devices with embedded compression is comparable with that of devices without embedded compression.

Proceedings ArticleDOI
27 Jan 2004
TL;DR: A SAT-based ATPG tool targeting a path-oriented transition fault model, utilizing an efficient false-path pruning technique to identify the longest sensitizable path through each fault site, which can be orders of magnitude faster than a commercial ATPG tool.
Abstract: This paper presents a SAT-based ATPG tool targeting a path-oriented transition fault model. Under this fault model, a transition fault is detected through the longest sensitizable path. In the ATPG process, we utilize an efficient false-path pruning technique to identify the longest sensitizable path through each fault site. We demonstrate that our new SAT-based ATPG can be orders of magnitude faster than a commercial ATPG tool. To demonstrate the quality of the tests generated by our approach, we compare its resulting test set to three other test sets: a single-detection transition fault test set, a multiple-detection transition fault test set, and a traditional critical path test set added to the single-detection set. The superiority of our approach is demonstrated through various experiments based on statistical delay simulation and defect injection using benchmark circuits.

Proceedings ArticleDOI
15 Nov 2004
TL;DR: It is shown that a unique March test solution can ensure the complete coverage of all the faults induced by the resistive-open defects in the SRAM core-cells, which simplifies considerably the problem of delay fault testing in this part of SRAM memories.
Abstract: In this paper we present an exhaustive analysis of resistive-open defects in the core-cells of SRAM memories. These defects, which appear more frequently in VDSM technologies, induce a modification of the timing within the memory (delay faults). Among the faults induced by such resistive-open defects are static and dynamic read destructive faults (RDF), deceptive read destructive faults (DRDF), incorrect read faults (IRF), and transition faults (TF). Each of them requires specific test conditions, and different kinds of March tests are needed to cover all these faults (TF, RDF, DRDF and IRF). In this paper, we show that a unique March test solution can ensure complete coverage of all the faults induced by the resistive-open defects in the SRAM core-cells. This solution considerably simplifies the problem of delay fault testing in this part of SRAM memories.
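The flavor of March testing can be sketched on a simulated column of SRAM cells with an injected up-transition fault. March C- is used here purely for illustration; the paper derives its own March sequence with specific conditions for the delay-related faults.

```python
# Simulated word of SRAM cells where one cell cannot make the 0 -> 1
# transition (a transition fault, one of the fault types listed above).
class FaultyMemory:
    def __init__(self, size, tf_cell=None):
        self.bits = [0] * size
        self.tf_cell = tf_cell            # cell with a 0 -> 1 transition fault

    def write(self, addr, val):
        if addr == self.tf_cell and self.bits[addr] == 0 and val == 1:
            return                        # up-transition silently fails
        self.bits[addr] = val

    def read(self, addr):
        return self.bits[addr]

def march_cminus(mem, size):
    """{up(w0); up(r0,w1); up(r1,w0); down(r0,w1); down(r1,w0); up(r0)}
    Returns the list of (address, expected, got) mismatches."""
    fails = []
    def r(addr, exp):
        got = mem.read(addr)
        if got != exp:
            fails.append((addr, exp, got))
    up, down = range(size), range(size - 1, -1, -1)
    for a in up:   mem.write(a, 0)
    for a in up:   r(a, 0); mem.write(a, 1)
    for a in up:   r(a, 1); mem.write(a, 0)
    for a in down: r(a, 0); mem.write(a, 1)
    for a in down: r(a, 1); mem.write(a, 0)
    for a in up:   r(a, 0)
    return fails

print(march_cminus(FaultyMemory(8, tf_cell=3), 8))   # fault at cell 3 caught
```

Every read in a March element has a known expected value, which is what lets a single linear pass over the array expose the fault.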

Proceedings ArticleDOI
15 Nov 2004
TL;DR: It is shown how by consciously creating scan paths prior to logic synthesis, both the transition delay fault coverage and circuit speed can be improved.
Abstract: This paper introduces a new method to construct functional scan chains at the register-transfer level aimed at increasing the delay fault coverage when using the skewed-load test application strategy. It is shown how by consciously creating scan paths prior to logic synthesis, both the transition delay fault coverage and circuit speed can be improved.

Proceedings ArticleDOI
28 Jun 2004
TL;DR: A circuit fault detection and isolation technique for quasi delay-insensitive asynchronous circuits where a large class of faults are tolerated, and the remaining faults can be both detected easily and isolated to a small region of the design.
Abstract: This paper presents a circuit fault detection and isolation technique for quasi delay-insensitive asynchronous circuits. We achieve fault isolation by a combination of physical layout and circuit techniques. The asynchronous nature of quasi delay-insensitive circuits combined with layout techniques makes the design tolerant to delay faults. Circuit techniques are used to make sections of the design robust to nondelay faults. The combination of these is an asynchronous defect-tolerant circuit where a large class of faults are tolerated, and the remaining faults can be both detected easily and isolated to a small region of the design.

Proceedings ArticleDOI
01 Jul 2004
TL;DR: A new compile-time analysis that enables a testing methodology for white-box coverage testing of error recovery code in Java web services using compiler-directed fault injection, and incorporates refinements that establish sufficient context sensitivity to ensure relatively precise def-use links.
Abstract: This paper presents a new compile-time analysis that enables a testing methodology for white-box coverage testing of error recovery code (i.e., exception handlers) in Java web services using compiler-directed fault injection. The analysis allows compiler-generated instrumentation to guide the fault injection and to record the recovery code exercised. (An injected fault is experienced as a Java exception.) The analysis (i) identifies the exception-flow 'def-uses' to be tested in this manner, (ii) determines the kind of fault to be requested at a program point, and (iii) finds appropriate locations for code instrumentation. The analysis incorporates refinements that establish sufficient context sensitivity to ensure relatively precise def-use links and to eliminate some spurious def-uses due to demonstrably infeasible control flow. A runtime test harness calculates test coverage of these links using an exception def-catch metric. Experiments with the methodology demonstrate the utility of the increased precision in obtaining good test coverage on a set of moderately-sized Java web services benchmarks.

Patent
Hsueh Chi Shen
23 Apr 2004
TL;DR: In this article, a system and method for detecting a fault and identifying a remedy for the fault in real-time in a semiconductor product manufacturing facility is presented, which includes importing data from a manufacturing device and data representing a plurality of different manufacturing devices into an analysis tool.
Abstract: A system and method for detecting a fault and identifying a remedy for the fault in real-time in a semiconductor product manufacturing facility are provided. In one example, the method includes importing data from a manufacturing device and data representing a plurality of different manufacturing devices into an analysis tool. The imported data is analyzed using the analysis tool to determine if a fault exists in the manufacturing device's operation and, if a fault exists, the fault is classified and a remedy for the fault is identified based at least partly on the classification. Configuration data used to control the manufacturing device may be updated, and the update may apply the remedy to the configuration information. The manufacturing device's operation may then be modified using the updated configuration data.

Patent
02 Mar 2004
TL;DR: In this article, a method of fault identification on a semiconductor manufacturing tool includes monitoring tool sensor output, establishing a fingerprint of tool states based on a plurality of sensor outputs, capturing sensor data indicative of fault conditions, building a library of such fault fingerprints, comparing the present tool fingerprint with the fault fingerprints to identify a fault condition, and estimating the effect of such a fault condition on process output.
Abstract: A method of fault identification on a semiconductor manufacturing tool includes monitoring tool sensor output, establishing a fingerprint of tool states based on the plurality of sensor outputs, capturing sensor data indicative of fault conditions, building a library of such fault fingerprints, comparing the present tool fingerprint with the fault fingerprints to identify a fault condition, and estimating the effect of such a fault condition on process output. The fault library is constructed by inducing faults in a systematic way or by adding fingerprints of known faults after they occur.
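A minimal sketch of the fingerprint-matching step might look as follows, with a hypothetical library of three-sensor fingerprints and nearest-neighbor classification; the sensor names, values, and distance threshold are invented for illustration, and a real implementation would normalize sensor scales and use far richer statistics.

```python
import math

# Hypothetical fingerprints: mean values of three tool sensors
# (chamber pressure, RF power, chuck temperature).
LIBRARY = {
    "normal":       (2.0, 450.0, 65.0),
    "mfc_drift":    (2.6, 450.0, 65.0),
    "rf_mismatch":  (2.0, 395.0, 66.0),
    "heater_fault": (2.0, 450.0, 80.0),
}

def classify(fingerprint, max_dist=20.0):
    """Return the nearest library fingerprint, or 'unknown' if none is close."""
    name, d = min(((k, math.dist(v, fingerprint)) for k, v in LIBRARY.items()),
                  key=lambda kv: kv[1])
    return name if d <= max_dist else "unknown"

print(classify((2.55, 451.0, 64.8)))   # near the mfc_drift fingerprint
print(classify((9.9, 100.0, 10.0)))    # matches nothing in the library
```

The "unknown" branch corresponds to the patent's second library-building path: an unmatched fingerprint can be labeled after the fact and added to the library.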

Proceedings ArticleDOI
23 May 2004
TL;DR: An efficient algorithm to check whether two faults are equivalent is presented and if they are not equivalent, the algorithm returns a test vector that distinguishes them.
Abstract: Fault equivalence is an essential concept in digital design with significance in fault diagnosis, diagnostic test generation, testability analysis and logic synthesis. In this paper, an efficient algorithm to check whether two faults are equivalent is presented. If they are not equivalent, the algorithm returns a test vector that distinguishes them. The proposed approach is complete since for every pair of faults it either proves equivalence or it returns a distinguishing vector. This is performed with a simple hardware construction and a sequence of simulation/ATPG-based steps. Experiments on benchmark circuits demonstrate the competitiveness of the proposed method.
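The equivalence check can be illustrated by brute force on a toy three-input circuit: two stuck-at faults are equivalent iff no input vector produces different outputs under the two faulty machines, and any differing vector is a distinguishing vector. The paper avoids this enumeration with a hardware construction and simulation/ATPG steps; the circuit and fault pairs below are made up.

```python
from itertools import product

# Toy circuit out = (a AND b) OR c with an optional stuck-at fault.
def circuit(a, b, c, fault=None):
    vals = {"a": a, "b": b, "c": c}
    if fault and fault[0] in vals:
        vals[fault[0]] = fault[1]
    vals["n1"] = vals["a"] & vals["b"]
    if fault and fault[0] == "n1": vals["n1"] = fault[1]
    return vals["n1"] | vals["c"]

def distinguishing_vector(f1, f2):
    """Vector where the two faulty machines differ, or None if equivalent."""
    for vec in product((0, 1), repeat=3):
        if circuit(*vec, f1) != circuit(*vec, f2):
            return vec
    return None

# a stuck-at-0 and b stuck-at-0 both force n1 = 0: equivalent here.
print(distinguishing_vector(("a", 0), ("b", 0)))   # None
# n1 stuck-at-0 vs c stuck-at-0 behave differently on some vector.
print(distinguishing_vector(("n1", 0), ("c", 0)))
```

Distinguishing vectors are exactly what diagnostic test generation needs: applying them on the tester separates the two fault candidates.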

Proceedings ArticleDOI
26 Oct 2004
TL;DR: This paper evaluates N-detect scan ATPG patterns for their impact to test quality through simulation and fallout from production on a Pentium 4 processor using 90 nm manufacturing technology.
Abstract: This paper evaluates N-detect scan ATPG patterns for their impact to test quality through simulation and fallout from production on a Pentium 4 processor using 90 nm manufacturing technology. An incremental ATPG flow is used to generate N-detect test patterns. The generated patterns were applied in production with flows to determine overlap in fallout to different tests. The generated N-detect test patterns are then evaluated based on different metrics. The metrics include signal states, bridge fault coverage, stuck-at fault coverage and fault detection profile. The correlation between the different metrics is studied. Data from production fallout shows the effectiveness of N-detect tests. Further, the correlation between fallout data and the different metrics is analyzed.

Journal ArticleDOI
TL;DR: A complete analysis of linked faults (LFs), based on the concept of fault primitives, is presented, such that the whole space of LFs is investigated, accounted for, and validated; the resulting March SL test is very attractive industrially.
Abstract: The analysis of linked faults (LFs), which are faults that influence each other's behavior such that masking can occur, has proven to be a source of new memory tests characterized by increased fault coverage. However, many newly reported fault models have not been investigated from the point of view of LFs. This paper presents a complete analysis of LFs, based on the concept of fault primitives, such that the whole space of LFs is investigated, accounted for, and validated. Some simulated defective circuits showing linked-fault behavior are also presented. The paper establishes detection conditions along with new tests to detect each fault class. The tests are merged into a single test, March SL, detecting all considered LFs. Preliminary test results, based on Intel advanced caches, show that its fault coverage is high compared with all other traditional tests and that it detects some unique faults; this makes March SL very attractive industrially.

Proceedings ArticleDOI
15 Nov 2004
TL;DR: A multiple-capture-orders method is developed to guarantee the full scan fault coverage and a test architecture based on a ring control structure is adopted which makes the test control very simple and requires very low area overhead.
Abstract: This paper proposes a method to reduce the excess power dissipation during scan testing. The proposed method divides a scan chain into a number of sub-chains, and enables only one sub-chain at a time for both the scan and capture operations. To efficiently deal with the data dependence problem during the capture cycles, we develop a multiple-capture-orders method to guarantee the full scan fault coverage. A test pattern generation procedure is developed to reduce the test application time, and a test architecture based on a ring control structure is adopted, which makes the test control very simple and requires very low area overhead. Experimental results for large ISCAS'89 benchmark circuits show that the proposed method can reduce average and peak power by 86.8% and 66.1% on average, respectively, when 8 sub-chains are used.

Journal ArticleDOI
TL;DR: Experimental results show that the proposed scheme reduces tester storage requirements and tester bandwidth requirements by orders of magnitude compared to conventional external testing, but requires much less area overhead than a full BIST implementation providing the same fault coverage.
Abstract: This paper presents a new test data-compression scheme that is a hybrid approach between external testing and built-in self-test (BIST). The proposed approach is based on weighted pseudorandom testing and uses a novel approach for compressing and storing the weight sets. Three levels of compression are used to greatly reduce test costs. Experimental results show that the proposed scheme reduces tester storage requirements and tester bandwidth requirements by orders of magnitude compared to conventional external testing, but requires much less area overhead than a full BIST implementation providing the same fault coverage. No test points or any modifications are made to the function logic. The paper describes the proposed hybrid BIST architecture as well as two different ways of storing the weight sets, which are an integral part of this scheme.
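Why weighting the pseudorandom patterns pays off can be seen on a toy random-pattern-resistant circuit, an 8-input AND gate: exciting an input stuck-at-0 fault requires the all-ones pattern, which uniform patterns hit with probability 1/256 but heavily weighted patterns hit almost every time. The weights, pattern count, and circuit are illustrative assumptions, not values from the paper.

```python
import random

def and8(bits, fault=None):
    """8-input AND with an optional (input_index, stuck_value) fault."""
    if fault is not None:
        bits = list(bits)
        bits[fault[0]] = fault[1]
    out = 1
    for b in bits:
        out &= b
    return out

FAULTS = [(i, 0) for i in range(8)]     # each input stuck-at-0

def coverage(weights, n_patterns=64, seed=7):
    """Fraction of FAULTS detected by n_patterns weighted random vectors."""
    rng = random.Random(seed)
    detected = set()
    for _ in range(n_patterns):
        vec = [1 if rng.random() < w else 0 for w in weights]
        for f in FAULTS:
            if and8(vec) != and8(vec, f):
                detected.add(f)
    return len(detected) / len(FAULTS)

print("uniform :", coverage([0.5] * 8))    # all-ones is a rare event
print("weighted:", coverage([0.95] * 8))   # all-ones appears almost surely
```

Storing one compact weight per scan input (plus the compressed weight sets the paper describes) is what lets the hybrid scheme trade tester storage against on-chip BIST area.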

Proceedings ArticleDOI
25 Apr 2004
TL;DR: A characterization of the defects shows that very few defective chips act as if they had a single-stuck fault present and that most of the defects cause sequence-dependent behavior.
Abstract: LSI Logic has designed and manufactured two test chips at CRC. These test chips were used to investigate the characteristics of actual production defects and the effectiveness of various test techniques in detecting their presence. This paper presents a characterization of the defects showing that very few defective chips act as if they had a single-stuck fault present and that most of the defects cause sequence-dependent behavior. A variety of techniques are used to reduce the size of test sets for digital chips. They typically rely on preserving the single-stuck-fault coverage of the test set. This strategy doesn't guarantee that the defect coverage is retained. This paper presents data obtained from applying a variety of test sets on two chips (Murphy and ELF35) and recording the test escapes. The reductions in test size can thus be compared with the increases in test escapes. The data shows that, even when the fault coverage is preserved, there is a penalty in test quality. Also presented is data showing the effect of reducing the fault coverage. Techniques studied include various single-stuck-fault models, including inserting faults at the inputs of complex gates such as adders, multiplexers, etc. This technique is compatible with the use of structural RTL netlists. Other techniques presented include compaction techniques and don't-care bit assignment strategies.

Patent
27 Oct 2004
TL;DR: In this paper, a method, apparatus, and computer program product diagnosing and resolving faults is disclosed, and a disclosed fault management architecture includes a fault manager suitable having diagnostic engines and fault correction agents.
Abstract: A method, apparatus, and computer program product diagnosing and resolving faults is disclosed. A disclosed fault management architecture includes a fault manager suitable having diagnostic engines and fault correction agents. The diagnostic engines receive error information and identify associated fault possibilities. The fault possibility information is passed to fault correction agents, which diagnose and resolve the associated faults. The architecture uses logs to track the status of error information, the status of fault management exercises, and the fault status of system resources. Additionally, a soft error rate discriminator can be employed to track and resolve soft (correctible) errors in the system. The architecture is extensible allowing additional diagnostic engines and agents to be plugged in to the architecture without interrupting the normal operational flow of the computer system.

Proceedings ArticleDOI
23 May 2004
TL;DR: This article describes a novel approach to fault diagnosis suitable for at-speed testing of board-level interconnect faults, based on a new parallel test pattern generator and a specific fault-detecting sequence.
Abstract: This article describes a novel approach to fault diagnosis suitable for at-speed testing of board-level interconnect faults. This approach is based on a new parallel test pattern generator and a specific fault-detecting sequence. The test sequence has three major advantages. First, it detects both static and dynamic faults on interconnects. Second, it allows precise on-chip at-speed fault diagnosis of interconnect faults. Third, the hardware implementation of both the test generator and the response analyzer is very efficient in terms of silicon area.

Proceedings ArticleDOI
16 Feb 2004
TL;DR: A new scan architecture is proposed to reduce test time and volume while retaining the original scan input count and promises a substantial reduction in test cost for large circuits.
Abstract: Scan-based designs are widely used to decrease the complexity of the test generation process; nonetheless, they increase test time and volume. A new scan architecture is proposed to reduce test time and volume while retaining the original scan input count. The proposed architecture allows the use of the captured response as a template for the next pattern with only the necessary bits of the captured response being updated while observing the full captured response. The theoretical and experimental analysis promises a substantial reduction in test cost for large circuits.