
Showing papers on "Fault coverage published in 2005"


Proceedings ArticleDOI
20 Mar 2005
TL;DR: A novel, software-only, transient-fault-detection technique, called SWIFT, which efficiently manages redundancy by reclaiming unused instruction-level resources present during the execution of most programs and provides a high level of protection and performance with an enhanced control-flow checking mechanism.
Abstract: To improve performance and reduce power, processor designers employ advances that shrink feature sizes, lower voltage levels, reduce noise margins, and increase clock rates. However, these advances make processors more susceptible to transient faults that can affect correctness. While reliable systems typically employ hardware techniques to address soft-errors, software techniques can provide a lower-cost and more flexible alternative. This paper presents a novel, software-only, transient-fault-detection technique, called SWIFT. SWIFT efficiently manages redundancy by reclaiming unused instruction-level resources present during the execution of most programs. SWIFT also provides a high level of protection and performance with an enhanced control-flow checking mechanism. We evaluate an implementation of SWIFT on an Itanium 2 which demonstrates exceptional fault coverage with a reasonable performance cost. Compared to the best known single-threaded approach utilizing an ECC memory system, SWIFT demonstrates a 51% average speedup.
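For intuition only, here is a toy sketch (in Python, not the compiler-level Itanium 2 transformation the paper describes) of the duplicate-and-compare idea behind SWIFT-style software fault detection; all names and values are illustrative.

```python
# Conceptual illustration only: SWIFT duplicates computation at the instruction
# level and compares the redundant copies before a value is committed (e.g.,
# stored to memory). This toy version mirrors that idea in plain Python.

class SoftErrorDetected(Exception):
    pass

def checked_store(memory, addr, value, shadow_value):
    """Compare the two redundant copies before committing to memory."""
    if value != shadow_value:
        raise SoftErrorDetected(f"mismatch before store to {addr}: {value} != {shadow_value}")
    memory[addr] = value

def compute_and_store(memory, addr, a, b):
    # Original and duplicated ("shadow") computation use separate copies of the
    # inputs, so a transient fault affecting one copy is caught at the check.
    a2, b2 = a, b                 # shadow copies of the inputs
    result = a * b + 1            # original instruction stream
    result_shadow = a2 * b2 + 1   # duplicated instruction stream
    checked_store(memory, addr, result, result_shadow)

memory = {}
compute_and_store(memory, 0x10, 6, 7)
print(memory)   # {16: 43}
```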

729 citations


Journal ArticleDOI
TL;DR: In this paper, a fuzzy-logic-based algorithm to identify the type of faults for digital distance protection system has been developed, which is able to accurately identify the phase(s) involved in all ten types of shunt faults that may occur in a transmission line under different fault resistances, inception angle, and loading levels.
Abstract: In this paper, a fuzzy-logic-based algorithm to identify the type of faults for digital distance protection system has been developed. The proposed technique is able to accurately identify the phase(s) involved in all ten types of shunt faults that may occur in a transmission line under different fault resistances, inception angle, and loading levels. The proposed method needs only three line-current measurements available at the relay location and can perform the fault classification task in about a half-cycle period. Thus, the proposed technique is well suited for implementation in a digital distance protection scheme.
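The paper's classifier is fuzzy-logic based; the sketch below is only a crude threshold illustration of how the three line currents and the derived zero-sequence current can indicate the faulted phase(s) and ground involvement. The thresholds and signal values are made up.

```python
# Illustrative sketch only (not the paper's fuzzy-logic classifier): classify
# a shunt fault from the three phase-current phasors by flagging phases whose
# current rises well above load level and checking the zero-sequence current
# for ground involvement. Thresholds are arbitrary placeholders.

def classify_fault(i_a, i_b, i_c, i_load=100.0, phase_k=2.0, ground_k=0.2):
    phases = {"A": i_a, "B": i_b, "C": i_c}
    faulted = [p for p, i in phases.items() if abs(i) > phase_k * i_load]
    i0 = (i_a + i_b + i_c) / 3          # zero-sequence current
    grounded = abs(i0) > ground_k * i_load
    if not faulted:
        return "no fault detected"
    return "".join(faulted) + ("-G" if grounded else "")

# Example: phase A current far above load, with zero-sequence present -> "A-G"
print(classify_fault(800 + 0j, 95 + 5j, 98 - 4j))
```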

221 citations


Journal ArticleDOI
TL;DR: This paper presents a high-level, functional component-oriented, software-based self-testing methodology for embedded processors and validate the effectiveness and efficiency of the proposed methodology by completely applying it on two different processor implementations of a popular RISC instruction set architecture.
Abstract: Embedded processor testing techniques based on the execution of self-test programs have been recently proposed as an effective alternative to classic external tester-based testing and pure hardware built-in self-test (BIST) approaches. Software-based self-testing is a nonintrusive testing approach and provides at-speed testing capability without any hardware or performance overheads. In this paper, we first present a high-level, functional component-oriented, software-based self-testing methodology for embedded processors. The proposed methodology aims at high structural fault coverage with low test development and test application cost. Then, we validate the effectiveness of the proposed methodology as a low-cost alternative to structural software-based self-testing methodologies based on automatic test pattern generation and pseudorandom testing. Finally, we demonstrate the effectiveness and efficiency of the proposed methodology by completely applying it to two different processor implementations of a popular RISC instruction set architecture, including several gate-level implementations.

188 citations


Proceedings ArticleDOI
05 Dec 2005
TL;DR: This work presents a value-driven approach to system-level test case prioritization called the prioritization of requirements for test (PORT), which prioritizes system test cases based upon four factors: requirements volatility, customer priority, implementation complexity, and fault proneness of the requirements.
Abstract: Test case prioritization techniques have been shown to be beneficial for improving regression-testing activities. With prioritization, the rate of fault detection is improved, thus allowing testers to detect faults earlier in the system-testing phase. Most of the prioritization techniques to date have been code coverage-based. These techniques may treat all faults equally. We build upon prior test case prioritization research with two main goals: (1) to improve user-perceived software quality in a cost effective way by considering potential defect severity and (2) to improve the rate of detection of severe faults during system-level testing of new code and regression testing of existing code. We present a value-driven approach to system-level test case prioritization called the prioritization of requirements for test (PORT). PORT prioritizes system test cases based upon four factors: requirements volatility, customer priority, implementation complexity, and fault proneness of the requirements. We conducted a PORT case study on four projects developed by students in an advanced graduate software testing class. Our results show that PORT prioritization at the system level improves the rate of detection of severe faults. Additionally, customer priority was shown to be one of the most important prioritization factors contributing to the improved rate of fault detection.
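A minimal sketch of how a PORT-style value-driven ordering could be computed from the four factors named in the abstract; the equal weights, scores, and data layout below are assumptions for illustration, not the paper's exact scheme.

```python
# Minimal sketch of PORT-style value-driven prioritization, assuming each
# requirement is scored as a weighted sum of the four factors and each test
# case inherits the score of the requirements it covers.

WEIGHTS = {"volatility": 0.25, "customer_priority": 0.25,
           "complexity": 0.25, "fault_proneness": 0.25}

requirements = {
    "R1": {"volatility": 0.2, "customer_priority": 0.9, "complexity": 0.4, "fault_proneness": 0.7},
    "R2": {"volatility": 0.8, "customer_priority": 0.3, "complexity": 0.6, "fault_proneness": 0.2},
}

test_cases = {"T1": ["R1"], "T2": ["R2"], "T3": ["R1", "R2"]}

def requirement_score(factors):
    return sum(WEIGHTS[k] * v for k, v in factors.items())

def test_priority(covered):
    # A test that exercises several high-value requirements ranks higher.
    return sum(requirement_score(requirements[r]) for r in covered)

ordered = sorted(test_cases, key=lambda t: test_priority(test_cases[t]), reverse=True)
print(ordered)   # ['T3', 'T1', 'T2']
```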

186 citations


Proceedings ArticleDOI
01 May 2005
TL;DR: Experimental results show the effectiveness of the novel low-capture-power X-filling method in reducing capture power dissipation without any impact on area, timing, and fault coverage.
Abstract: Research on low-power scan testing has been focused on the shift mode, with little or no consideration given to the capture mode power. However, high switching activity when capturing a test response can cause excessive IR drop, resulting in significant yield loss. This paper addresses this problem with a novel low-capture-power X-filling method by assigning 0's and 1's to unspecified (X) bits in a test cube to reduce the switching activity in capture mode. This method can be easily incorporated into any test generation flow, where test cubes are obtained during ATPG or by X-bit identification. Experimental results show the effectiveness of this method in reducing capture power dissipation without any impact on area, timing, and fault coverage.
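A small sketch of the X-filling idea, under the simplifying assumption that capture power is proxied by the Hamming distance between the scanned-in stimulus and the captured response; the greedy per-bit fill and the toy next-state function are illustrative only, not the paper's algorithm.

```python
# Minimal sketch of low-capture-power X-filling: fill the unspecified (X) bits
# of a test cube so that the scan cells toggle as little as possible when the
# response is captured. Capture switching is approximated here by the Hamming
# distance between stimulus and response.

def capture_transitions(stimulus, next_state):
    response = next_state(stimulus)
    return sum(s != r for s, r in zip(stimulus, response))

def fill_x_bits(cube, next_state):
    """cube: list of 0, 1, or 'X'. Fill X bits to reduce capture transitions."""
    filled = [0 if b == "X" else b for b in cube]       # start from an all-0 fill
    for i, b in enumerate(cube):
        if b != "X":
            continue
        best = min((0, 1), key=lambda v: capture_transitions(
            filled[:i] + [v] + filled[i + 1:], next_state))
        filled[i] = best
    return filled

# Toy next-state function: each cell captures the OR of itself and its left neighbour.
def toy_next_state(bits):
    return [bits[i] | (bits[i - 1] if i > 0 else 0) for i in range(len(bits))]

cube = [1, "X", "X", 0, "X"]
print(fill_x_bits(cube, toy_next_state))
```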

183 citations


Proceedings ArticleDOI
08 Nov 2005
TL;DR: A novel low-capture-power X-filling method of assigning 0's and 1's to unspecified (X) bits in a test cube obtained during ATPG to improve the applicability of scan-based at-speed testing by reducing the risk of test yield loss.
Abstract: Scan-based at-speed testing is a key technology to guarantee timing-related test quality in the deep submicron era. However, its applicability is being severely challenged since significant yield loss may occur from circuit malfunction due to excessive IR drop caused by high power dissipation when a test response is captured. This paper addresses this critical problem with a novel low-capture-power X-filling method of assigning 0's and 1's to unspecified (X) bits in a test cube obtained during ATPG. This method reduces the circuit switching activity in capture mode and can be easily incorporated into any test generation flow to achieve capture power reduction without any area, timing, or fault coverage impact. Test vectors generated with this practical method greatly improve the applicability of scan-based at-speed testing by reducing the risk of test yield loss

144 citations


Proceedings ArticleDOI
05 Apr 2005
TL;DR: In this paper, the authors describe one- and two-ended impedance-based fault location experiences using simple reactance, Takagi, zero-sequence current with angle correction, and two-ended negative-sequence methods.
Abstract: In this paper, we describe one- and two-ended impedance-based fault location experiences. We define terms associated with fault location and describe several impedance-based methods of fault location (simple reactance, Takagi, zero-sequence current with angle correction, and two-ended negative-sequence). We examine several system faults and analyze the performance of the fault locators given possible sources of error (short fault window, nonhomogeneous system, incorrect fault type selection, etc.). Finally, we show the laboratory testing results of a two-ended method, where we automatically extracted a two-ended fault location estimate from a single end.
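For reference, the single-ended estimates mentioned above (simple reactance and Takagi) have standard textbook forms, sketched below with made-up phasors; sign conventions and the choice of loop quantities per fault type may differ from the paper.

```python
# Textbook single-ended impedance-based fault-location estimates (per-unit
# distance m along a line of positive-sequence impedance Z_L1).

def simple_reactance(v_relay, i_relay, z_line1):
    # m = Im(V/I) / Im(Z_L1): ignores fault resistance, so load flow and
    # remote infeed introduce error (the "reactance effect").
    return (v_relay / i_relay).imag / z_line1.imag

def takagi(v_relay, i_relay, i_prefault, z_line1):
    # m = Im(V * conj(dI)) / Im(Z_L1 * I * conj(dI)), with dI = I_fault - I_prefault.
    # Using the superposition current reduces the fault-resistance error on a
    # homogeneous system.
    d_i = i_relay - i_prefault
    num = (v_relay * d_i.conjugate()).imag
    den = (z_line1 * i_relay * d_i.conjugate()).imag
    return num / den

# Example with made-up phasors (volts, amps, ohms):
z_l1 = complex(2.0, 20.0)
v = complex(4000.0, 1500.0)
i = complex(300.0, -120.0)
i_pre = complex(80.0, -20.0)
print(simple_reactance(v, i, z_l1), takagi(v, i, i_pre, z_l1))
```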

132 citations


Proceedings ArticleDOI
08 Nov 2005
TL;DR: A scalable test strategy for the routers in a NoC, based on partial scan and on an IEEE 1500-compliant test wrapper is proposed, which takes advantage of the regular design of the NoC to reduce both test area overhead and test time.
Abstract: Network-on-chip has recently emerged as alternative communication architecture for complex system chip and different aspects regarding NoC design have been studied in the literature. However, the test of the NoC itself for manufacturing faults has been marginally tackled. This paper proposes a scalable test strategy for the routers in a NoC, based on partial scan and on an IEEE 1500-compliant test wrapper. The proposed test strategy takes advantage of the regular design of the NoC to reduce both test area overhead and test time. Experimental results show that a good tradeoff of area overhead, fault coverage, test data volume, and test time is achieved by the proposed technique. Furthermore, the method can be applied for large NoC sizes and it does not depend on the network routing and control algorithms, which makes the method suitable to test a large class of network models

123 citations


Journal ArticleDOI
TL;DR: This analysis of the relationships between variable and literal faults, and among literal, operator, term, and expression faults, produces a richer set of findings that interpret previous empirical results, can be applied to the design and evaluation of test methods, and inform the way that test cases should be prioritized for earlier detection of faults.
Abstract: Kuhn, followed by Tsuchiya and Kikuno, have developed a hierarchy of relationships among several common types of faults (such as variable and expression faults) for specification-based testing by studying the corresponding fault detection conditions. Their analytical results can help explain the relative effectiveness of various fault-based testing techniques previously proposed in the literature. This article extends and complements their studies by analyzing the relationships between variable and literal faults, and among literal, operator, term, and expression faults. Our analysis is more comprehensive and produces a richer set of findings that interpret previous empirical results, can be applied to the design and evaluation of test methods, and inform the way that test cases should be prioritized for earlier detection of faults. Although this work originated from the detection of faults related to specifications, our results are equally applicable to program-based predicate testing that involves logic expressions.
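To make the fault classes concrete, here is a small illustrative example (the expression and test values are made up, not taken from the article) showing a variable, literal, operator, and term fault applied to one Boolean specification.

```python
# Illustration of the fault classes discussed above on a small Boolean
# specification S(a, b, c, d) = a·b + ¬c·d. Each faulty version differs from
# S in the way its class name suggests.

def spec(a, b, c, d):            return (a and b) or ((not c) and d)

def variable_fault(a, b, c, d):  return (a and d) or ((not c) and d)   # variable b replaced by d
def literal_fault(a, b, c, d):   return (a and b) or (c and d)         # literal ¬c replaced by c
def operator_fault(a, b, c, d):  return (a and b) and ((not c) and d)  # OR replaced by AND
def term_fault(a, b, c, d):      return (a and b)                      # term ¬c·d omitted

# A single test case separates some fault classes from the specification but
# not others, which is what the detection-condition analysis formalizes.
t = (False, True, False, True)   # S(t) is True
print(spec(*t), variable_fault(*t), literal_fault(*t), operator_fault(*t), term_fault(*t))
```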

122 citations


Proceedings ArticleDOI
25 Sep 2005
TL;DR: This paper presents a novel approach to test suite reduction that attempts to selectively keep redundant tests in the reduced suites by modifying an existing heuristic for test suite minimization.
Abstract: Software testing is a critical part of software development. Test suite sizes may grow significantly with subsequent modifications to the software over time. Due to time and resource constraints for testing, test suite minimization techniques attempt to remove those test cases from the test suite that have become redundant over time since the requirements covered by them are also covered by other test cases in the test suite. Prior work has shown that test suite minimization techniques can severely compromise the fault detection effectiveness of test suites. In this paper, we present a novel approach to test suite reduction that attempts to selectively keep redundant tests in the reduced suites. We implemented our technique by modifying an existing heuristic for test suite minimization. Our experiments show that our approach can significantly improve the fault detection effectiveness of reduced suites without severely affecting the extent of test suite size reduction.

119 citations


Proceedings ArticleDOI
07 Mar 2005
TL;DR: The paper presents a functional coverage based test generation technique for pipelined architectures that combines a general graph-theoretic model that can capture the structure and behavior of a wide variety of pipeline processors and a functional fault model that is used to define the functional coverage.
Abstract: Functional verification of microprocessors is one of the most complex and expensive tasks in the current system-on-chip design process. A significant bottleneck in the validation of such systems is the lack of a suitable functional coverage metric. This paper presents a functional coverage based test generation technique for pipelined architectures. The proposed methodology makes three important contributions. First, a general graph-theoretic model is developed that can capture the structure and behavior (instruction-set) of a wide variety of pipelined processors. Second, we propose a functional fault model that is used to define the functional coverage for pipelined architectures. Finally, test generation procedures are presented that accept the graph model of the architecture as input and generate test programs to detect all the faults in the functional fault model. Our experimental results on two pipelined processor models demonstrate that the number of test programs generated by our approach to obtain a fault coverage is an order of magnitude less than those generated by traditional random or constrained-random test generation techniques.

Proceedings ArticleDOI
03 Oct 2005
TL;DR: The proposed lock & key technique provides security while not negatively impacting the design's fault coverage, and requires only that a small area overhead penalty is incurred for a significant return in security.
Abstract: Scan test has been a common and useful method for testing VLSI designs due to the high controllability and observability it provides. These same properties have recently been shown to also be a security threat to the intellectual property on a chip (Yang et al., 2004). In order to defend from scan based attacks, we present the lock & key technique. Our proposed technique provides security while not negatively impacting the design's fault coverage. This technique requires only that a small area overhead penalty is incurred for a significant return in security. Lock & key divides the already present scan chain into smaller subchains of equal length that are controlled by an internal test security controller. When a malicious user attempts to manipulate the scan chain, the test security controller goes into insecure mode and enables each subchain in an unpredictable sequence making controllability and observability of the circuit under test very difficult. We present and analyze the design of the lock & key techniques to show that this is a flexible option to secure scan designs for various levels of security

Proceedings ArticleDOI
03 Oct 2005
TL;DR: This work proposes new techniques to determine illegal states of circuits that can be used during ATPG to prohibit tests that use such states; the resulting tests are essentially functional or pseudofunctional.
Abstract: In designs using DFT, such as scan, some of the faults that are untestable in the circuit without DFT become testable after DFT insertion. Additionally, scan tests may scan in illegal or unreachable states that cause nonfunctional operation of the circuit during test. This may cause higher than normal power dissipation and demands on supply current. We propose new techniques to determine illegal states of circuits that can be used during ATPG to prohibit tests using such states. The resulting tests are essentially functional or pseudofunctional.

Book
01 Jan 2005
TL;DR: The book covers fault and fault modelling, test stimulus generation, fault diagnosis methodology, and design for testability and built-in self-test.
Abstract: Fault and Fault Modelling.- Test Stimulus Generation.- Fault Diagnosis Methodology.- Design for Testability and Built-In Self-Test.

Journal ArticleDOI
TL;DR: In this paper, a general architecture for fault tolerant control is proposed that is based on the (primary) YJBK parameterization of all stabilizing compensators and uses the dual YJBK parameterization to quantify the performance of the fault tolerant system.
Abstract: A general architecture for fault tolerant control is proposed. The architecture is based on the (primary) YJBK parameterization of all stabilizing compensators and uses the dual YJBK parameterization to quantify the performance of the fault tolerant system. The approach suggested can be applied for additive faults, parametric faults and for system structural changes. The modelling for each of these fault classes is described. The method allows for design of passive as well as for active fault handling. Also, the related design method can be fitted either to guarantee stability or to achieve graceful degradation in the sense of guaranteed degraded performance. A number of fault diagnosis problems, fault tolerant control problems, and feedback control with fault rejection problems are formulated/considered, mainly from a fault modelling point of view. The method is illustrated on a servo example including an additive fault and a parametric fault.
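For readers unfamiliar with the parameterizations, the standard textbook forms are sketched below; notation and sign conventions may differ from the paper.

```latex
% Standard form of the YJBK parameterizations referred to above. With right
% coprime factorizations G = N M^{-1} of the nominal plant and K_0 = U V^{-1}
% of the nominal controller satisfying the Bezout identity, all stabilizing
% controllers and all plants stabilized by K_0 can be written as
\[
  K(Q) = (U + MQ)(V + NQ)^{-1}, \qquad
  G(S) = (N + VS)(M + US)^{-1}, \qquad Q,\, S \in \mathcal{RH}_\infty .
\]
% The dual parameter S vanishes for the fault-free nominal plant, which is why
% it can be used to quantify how far a faulty closed loop has drifted from
% nominal performance.
```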

Journal ArticleDOI
15 May 2005
TL;DR: This study hypothesizes that the estimation of code coverage on testing effectiveness varies under different testing profiles, and employs coverage testing and mutation testing in this experiment to investigate the relationship between code coverage and fault detection capability under differentTesting profiles.
Abstract: Software testing is a key procedure to ensure high quality and reliability of software programs. The key issue in software testing is the selection and evaluation of different test cases. Code coverage has been proposed as an estimator for testing effectiveness, but it remains a controversial topic which lacks support from empirical data. In this study, we hypothesize that the estimation of code coverage on testing effectiveness varies under different testing profiles. To evaluate the performance of code coverage, we employ coverage testing and mutation testing in our experiment to investigate the relationship between code coverage and fault detection capability under different testing profiles. From our experimental data, code coverage is simply a moderate indicator for the capability of fault detection on the whole test set. However, it is clearly a good estimator for the fault detection of exceptional test cases, but a poor one for test cases in normal operations. For other testing profiles, such as functional testing and random testing, the correlation between code coverage and fault coverage is higher in functional testing than in random testing, although these different testing profiles are complementary in the whole test set. The effects of different coverage metrics are also addressed in our experiment.

Journal ArticleDOI
TL;DR: An efficient method based on ordered binary decision diagrams (OBDD) is presented for evaluating multistate system reliability and Griffith's importance measures, which can be regarded as the importance of a system-component state of a multistate system subject to imperfect fault-coverage with various performance requirements.
Abstract: Algorithms for evaluating the reliability of a complex system such as a multistate fault-tolerant computer system have become more important. They are designed to obtain the complete results quickly and accurately even when there exist a number of dependencies such as shared loads (reconfiguration), degradation, and common-cause failures. This paper presents an efficient method based on ordered binary decision diagram (OBDD) for evaluating the multistate system reliability and the Griffith's importance measures which can be regarded as the importance of a system-component state of a multistate system subject to imperfect fault-coverage with various performance requirements. This method combined with the conditional probability methods can handle the dependencies among the combinatorial performance requirements of system modules and find solutions for multistate imperfect coverage model. The main advantage of the method is that its time complexity is equivalent to that of the methods for perfect coverage model and it is very helpful for the optimal design of a multistate fault-tolerant system.
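As background, the core step of any OBDD-based reliability evaluation is a probability pass over the diagram via Shannon decomposition; the toy sketch below shows that step for a simple binary-state 2-out-of-3 system and does not reproduce the paper's multistate or imperfect-coverage extensions.

```python
# Sketch of the basic building block behind OBDD-based reliability evaluation:
# once the system's structure function is held as a BDD, its probability is
# obtained by a recursive pass using the Shannon decomposition
#   P(f) = p_x * P(f_{x=1}) + (1 - p_x) * P(f_{x=0}).

# BDD node: ("var", name, low_child, high_child) or a terminal True/False.
def bdd_prob(node, p):
    if node is True:
        return 1.0
    if node is False:
        return 0.0
    _, name, low, high = node
    px = p[name]                       # probability that component `name` works
    return px * bdd_prob(high, p) + (1.0 - px) * bdd_prob(low, p)

# Structure function of a 2-out-of-3 system built by hand as a BDD over x1 < x2 < x3.
n3 = ("var", "x3", False, True)
n2_hi = ("var", "x2", n3, True)        # x1 works: need x2 or x3
n2_lo = ("var", "x2", False, n3)       # x1 failed: need x2 and x3
root = ("var", "x1", n2_lo, n2_hi)

p = {"x1": 0.9, "x2": 0.9, "x3": 0.9}
print(bdd_prob(root, p))               # 0.972 for a 2-out-of-3 system with p = 0.9
```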

Journal ArticleDOI
TL;DR: A model-free incremental diagnosis algorithm is outlined, which alleviates the need for an explicit fault model, and extensive results on combinational and full-scan sequential benchmark circuits confirm its resolution and performance.
Abstract: Fault diagnosis is important in improving the circuit-design process and the manufacturing yield. Diagnosis of today's complex defects is a challenging problem due to the explosion of the underlying solution space with the increasing number of fault locations and fault models. To tackle this complexity, an incremental diagnosis method is proposed. This method captures faulty lines one at a time using the novel linear-time single-fault diagnosis algorithms. To capture complex fault effects, a model-free incremental diagnosis algorithm is outlined, which alleviates the need for an explicit fault model. To demonstrate the applicability of the proposed method, experiments on multiple stuck-at faults, open-interconnects and bridging faults are performed. Extensive results on combinational and full-scan sequential benchmark circuits confirm its resolution and performance.

Journal ArticleDOI
TL;DR: A new compile-time analysis that enables a testing methodology for white-box coverage testing of error recovery code of server applications written in Java, using compiler-directed fault injection, incorporating refinements that establish sufficient context sensitivity to ensure relatively precise def-use links.
Abstract: This paper presents a new compile-time analysis that enables a testing methodology for white-box coverage testing of error recovery code (i.e., exception handlers) of server applications written in Java, using compiler-directed fault injection. The analysis allows compiler-generated instrumentation to guide the fault injection and to record the recovery code exercised. (An injected fault is experienced as a Java exception.) The analysis 1) identifies the exception-flow "def-uses" to be tested in this manner, 2) determines the kind of fault to be requested at a program point, and 3) finds appropriate locations for code instrumentation. The analysis incorporates refinements that establish sufficient context sensitivity to ensure relatively precise def-use links and to eliminate some spurious def-uses due to demonstrably infeasible control flow. A runtime test harness calculates test coverage of these links using an exception def-catch metric. Experiments with the methodology demonstrate the utility of the increased precision in obtaining good test coverage on a set of moderately sized server benchmarks.

Proceedings ArticleDOI
H.H. Chen1
05 Dec 2005
TL;DR: A test algorithm is developed for SoC design to perform self-testing and set stopping criteria in a hierarchical and parallel manner to increase fault coverage and reduce testing time.
Abstract: This paper describes a hierarchical built-in self-test (BIST) method for testing an integrated system chip with a global BIST controller, multiple local BIST circuits for each macro, and data/control paths to perform the system-on-chip (SoC) test operations. The global BIST controller is composed of programmable devices for storing the test patterns and programming the test commands, a state machine for executing the test sequence for each macro in an orderly manner, a dynamic random access memory (DRAM) for collecting the feedback data from the local BIST circuits, and a built-in processor for conducting intra-macro and inter-macro testing via programs from an external tester. A test algorithm is also developed for SoC design to perform self-testing and set stopping criteria in a hierarchical and parallel manner to increase fault coverage and reduce testing time.

Proceedings ArticleDOI
18 Dec 2005
TL;DR: This paper shows a side-channel attack on LFSR-based stream ciphers using scan chains and proposes building the scan chains in a tree-based pattern with a self-checking compactor to prevent such scan-based attacks.
Abstract: Scan-based testing is a powerful and popular test technique. However, the scan chain can be used by an attacker to decipher the cryptogram. The present paper shows such a side-channel attack on LFSR-based stream ciphers using scan chains. The paper subsequently discusses a strategy to build the scan chains in a tree-based pattern with a self-checking compactor. It is shown that such a structure prevents such scan-based attacks without compromising fault coverage.

Proceedings ArticleDOI
25 Sep 2005
TL;DR: Results of experiments on test suites for the space antenna-steering application show significant reduction in test suite size at the cost of a moderate loss in fault detection effectiveness.
Abstract: Test suite reduction is an important test maintenance activity that attempts to reduce the size of a test suite with respect to some criteria. Emerging trends in software development such as component reuse, multi-language implementations, and stringent performance requirements present new challenges for existing reduction techniques that may limit their applicability. A test suite reduction technique that is not affected by these challenges is presented; it is based on dynamically generated language-independent information that can be collected with little run-time overhead. Specifically, test cases from the suite being reduced are executed on the application under test and the call stacks produced during execution are recorded. These call stacks are then used as a coverage requirement in a test suite reduction algorithm. Results of experiments on test suites for the space antenna-steering application show significant reduction in test suite size at the cost of a moderate loss in fault detection effectiveness.
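A minimal sketch of the reduction idea: treat each test's observed call stacks as its coverage requirements and run a plain greedy set-cover reduction. The call-stack representation and the greedy heuristic here are simplifications, not the paper's exact algorithm or collection mechanism.

```python
# Minimal sketch: the distinct call stacks observed while running each test
# become its coverage requirements; an ordinary greedy set-cover reduction then
# picks a smaller suite covering every observed stack. Call stacks are shown
# simply as tuples of function names.

def greedy_reduce(stacks_per_test):
    """stacks_per_test: dict test_name -> set of call-stack tuples."""
    required = set().union(*stacks_per_test.values())
    reduced, covered = [], set()
    while covered != required:
        # Pick the test covering the most not-yet-covered call stacks.
        best = max(stacks_per_test, key=lambda t: len(stacks_per_test[t] - covered))
        if not stacks_per_test[best] - covered:
            break
        reduced.append(best)
        covered |= stacks_per_test[best]
    return reduced

suite = {
    "t1": {("main", "parse"), ("main", "parse", "lex")},
    "t2": {("main", "parse"), ("main", "steer")},
    "t3": {("main", "steer"), ("main", "steer", "pid")},
}
print(greedy_reduce(suite))   # ['t1', 't3'] covers every observed call stack
```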

Proceedings ArticleDOI
15 May 2005
TL;DR: An extended symptom-fault-action model is proposed to incorporate actions into fault reasoning process to tackle the above problem and shows both performance and accuracy of fault reasoning can be greatly improved by taking actions.
Abstract: Fault localization is a core element in fault management. Many fault reasoning techniques use deterministic or probabilistic symptom-fault causality model for fault diagnoses and localization. Symptom-fault map is commonly used to describe symptom-fault causality in fault reasoning. However, due to lost and spurious symptoms in fault reasoning systems that passively collect symptoms, the performance and accuracy of the fault localization can be significantly degraded. In this paper, we propose an extended symptom-fault-action model to incorporate actions into fault reasoning process to tackle the above problem. This technique is called active integrated fault reasoning (AIR), which contains three modules: fault reasoning, fidelity evaluation and action selection. Corresponding fault reasoning and action selection algorithms are elaborated. Simulation study shows both performance and accuracy of fault reasoning can be greatly improved by taking actions, especially when the rate of spurious and lost symptoms is high.

Proceedings ArticleDOI
30 Nov 2005
TL;DR: New models of saboteurs and mutants that can be easily applicable in VFIT, a fault injection tool developed by the Fault-Tolerant Systems Research Group (GSTF) of the Technical University of Valencia are presented.
Abstract: Fault injection techniques based on the use of VHDL as a design language offer important advantages with regard to other fault injection techniques. First, as they can be applied during the design phase of the system, they allow reducing the time-to-market. Second, this type of technique presents high controllability and reachability. Among the different techniques, those based on the use of saboteurs and mutants are especially attractive due to their high capability of fault modeling. However, it is difficult to implement these techniques automatically in a fault injection tool, mainly the insertion of saboteurs and the generation of mutants. In this paper, we present new models of saboteurs and mutants that are easily applicable in VFIT, a fault injection tool developed by the Fault-Tolerant Systems Research Group (GSTF) of the Technical University of Valencia.

Proceedings ArticleDOI
08 Nov 2005
TL;DR: This work presents a programmable memory BIST architecture that allows a vast variety of memory tests to be selected after fabrication, at an area cost similar to traditional memory BIST schemes.
Abstract: In modern SoCs, embedded memories contain the large majority of defects. In addition, defect types are becoming more complex and diverse and may escape detection during fabrication test. As a matter of fact, memories have to be tested by test algorithms achieving very high fault coverage. Fixing the test algorithm during the design phase may not be compatible with this goal, as thorough screening inspection or customer returns may reveal unexpected fault types after fabrication. A programmable BIST approach that allows a vast variety of memory tests to be selected after fabrication is therefore desirable, but may lead to unacceptable area cost. In this work, we present a programmable memory BIST architecture offering such flexibility at an area cost similar to traditional memory BIST schemes.

Journal ArticleDOI
TL;DR: In this paper, transient pattern analysis is explored as a tool for fault detection and diagnosis of an HVAC system, and the results show that the evolution of fault residuals forms clear and distinct patterns that can be used to isolate faults.

Proceedings ArticleDOI
01 May 2005
TL;DR: Two efficient test solutions for the process variation related failures in SRAM are proposed: modification of March sequence, and a low-overhead DFT circuit to complement the March test for an overall test time reduction of 29%, compared to the existing test technique with similar fault coverage.
Abstract: In this paper, we have made a complete analysis of the emerging SRAM failure mechanisms due to process variations and mapped them to fault models. We have proposed two efficient test solutions for the process variation related failures in SRAM: (a) modification of March sequence, and (b) a low-overhead DFT circuit to complement the March test for an overall test time reduction of 29%, compared to the existing test technique with similar fault coverage.
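For context, the sketch below shows how a standard March test is applied to a memory model; it uses the classic March C- sequence, not the modified sequence proposed in the paper.

```python
# A March test is a sequence of (address order, operations) elements applied to
# every memory cell. The sequence below is the classic March C-; "+1"/"-1"
# stand for ascending/descending address order, and the "any order" element is
# taken as ascending here.

MARCH_C_MINUS = [
    ("+1", ["w0"]),
    ("+1", ["r0", "w1"]),
    ("+1", ["r1", "w0"]),
    ("-1", ["r0", "w1"]),
    ("-1", ["r1", "w0"]),
    ("+1", ["r0"]),
]

def run_march(memory_read, memory_write, size, algorithm=MARCH_C_MINUS):
    """Return True if the memory passes, False on the first miscompare."""
    for order, ops in algorithm:
        addresses = range(size) if order == "+1" else range(size - 1, -1, -1)
        for addr in addresses:
            for op in ops:
                kind, value = op[0], int(op[1])
                if kind == "w":
                    memory_write(addr, value)
                elif memory_read(addr) != value:   # kind == "r"
                    return False
    return True

# Fault-free toy memory passes:
mem = [0] * 16
print(run_march(lambda a: mem[a], lambda a, v: mem.__setitem__(a, v), len(mem)))  # True
```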

Proceedings ArticleDOI
06 Jun 2005
TL;DR: In this paper, the authors discuss and show that the location and number of Fault Indicators (FIs) affect distribution reliability indices such as SAIFI, SAIDI, and CAIFI.
Abstract: Reducing failure rates and applying effective fault management can improve reliability indices in distribution systems. One way to improve the reliability of distribution networks in the fault management procedure is to install Fault Indicators (FIs) on overhead primary networks. FIs allow operators to quickly identify the location of a fault on overhead line feeders. FIs can shorten fault localization and therefore reduce outage duration and outage cost. In this paper, the modelling of FIs in reliability assessment and the computation of related indices such as SAIFI, SAIDI, and CAIFI is introduced. Using model development and case studies, it is discussed and shown that the location and number of FIs affect distribution reliability indices.
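For reference, the reliability indices named above have standard (IEEE 1366-style) definitions, sketched below; the notation is generic and may differ from the paper.

```latex
% Standard definitions of the indices mentioned above, where \lambda_i and U_i
% are the failure rate and annual outage duration at load point i, and N_i is
% the number of customers served there:
\[
  \mathrm{SAIFI} = \frac{\sum_i \lambda_i N_i}{\sum_i N_i}, \qquad
  \mathrm{SAIDI} = \frac{\sum_i U_i N_i}{\sum_i N_i}, \qquad
  \mathrm{CAIFI} = \frac{\sum_i \lambda_i N_i}{\text{number of distinct customers affected}} .
\]
% Faster fault localization with FIs shortens switching/repair time and hence
% U_i, which is how FI placement feeds into SAIDI in the case studies.
```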

Proceedings ArticleDOI
08 Nov 2005
TL;DR: New scan flip-flops are proposed to improve delay fault coverage for circuits with scan using broadside tests and a circuit topology based Flip-flop selection procedure that offers a scalable method for increasing the transition fault coverage.
Abstract: Testing of delay faults requires two-pattern tests. Broadside and skewed-load testing are two approaches to test for delay faults in scan designs. The broadside approach is often preferred over the skewed-load approach in designs that also use the system clock for scan operations, since skewed-load requires a fast (at-speed) scan enable signal while broadside testing does not. In this paper, we propose new scan flip-flops to improve delay fault coverage for circuits with scan using broadside tests. The proposed flip-flops do not require a control signal to switch at-speed. This is a distinct advantage as the design effort required for timing closure of such control signals is significant. We also propose a circuit-topology-based flip-flop selection procedure that offers a scalable method for increasing the transition fault coverage. Experimental results on industrial circuits are included.

Proceedings ArticleDOI
08 Nov 2005
TL;DR: This paper introduces a method which extends the use of available gate level stuck-at fault diagnosis tools to stuck-open fault diagnosis, and transforms the transistor level circuit description to a gate level description where stuck- open faults are represented by stuck- at faults.
Abstract: While most of the fault diagnosis tools are based on gate level fault models, for instance the stuck-at model, many faults are actually at the transistor level. The stuck-open fault is one example. In this paper we introduce a method which extends the use of available gate level stuck-at fault diagnosis tools to stuck-open fault diagnosis. The method transforms the transistor level circuit description to a gate level description where stuck-open faults are represented by stuck-at faults, so that the stuck-open faults can be diagnosed directly by any of the stuck-at fault diagnosis tools. The transformation is only performed on selected gates and thus has little extra computational cost. This method also applies to the diagnosis of multiple stuck-open faults within a gate. Successful diagnosis results are presented using wafer test data and an internal diagnosis tool from Philips