
Showing papers on "Fault coverage" published in 2000


Proceedings ArticleDOI
01 Aug 2000
TL;DR: Can prioritization techniques be effective when aimed at specific modified versions; what tradeoffs exist between fine-granularity and coarse-granularity prioritization techniques; and can the incorporation of measures of fault proneness into prioritization techniques improve their effectiveness?
Abstract: Test case prioritization techniques schedule test cases in an order that increases their effectiveness in meeting some performance goal. One performance goal, rate of fault detection, is a measure of how quickly faults are detected within the testing process; an improved rate of fault detection can provide faster feedback on the system under test, and let software engineers begin locating and correcting faults earlier than might otherwise be possible. In previous work, we reported the results of studies that showed that prioritization techniques can significantly improve rate of fault detection. Those studies, however, raised several additional questions: (1) can prioritization techniques be effective when aimed at specific modified versions; (2) what tradeoffs exist between fine granularity and coarse granularity prioritization techniques; (3) can the incorporation of measures of fault proneness into prioritization techniques improve their effectiveness? This paper reports the results of new experiments addressing these questions.
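A common way this line of work quantifies rate of fault detection is the APFD (Average Percentage of Faults Detected) metric, which rewards orders that expose faults early. A minimal sketch, assuming a known test-by-fault detection matrix (the function and toy data below are illustrative, not from the paper):

```python
def apfd(order, fault_matrix):
    """Average Percentage of Faults Detected for a test ordering.

    order        -- list of test indices in execution order
    fault_matrix -- fault_matrix[t][f] is True if test t detects fault f;
                    assumes every fault is detected by at least one test
    """
    n, m = len(order), len(fault_matrix[0])
    # tf[f]: 1-based position in `order` of the first test detecting fault f
    tf = [next(i + 1 for i, t in enumerate(order) if fault_matrix[t][f])
          for f in range(m)]
    return 1 - sum(tf) / (n * m) + 1 / (2 * n)

# Toy suite: 3 tests, 2 faults; running test 2 first exposes both faults sooner.
detects = [[True, False], [False, False], [True, True]]
print(apfd([0, 1, 2], detects))  # 0.5   (slower fault detection)
print(apfd([2, 0, 1], detects))  # ~0.83 (faster fault detection)
```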

783 citations


Proceedings ArticleDOI
01 May 2000
TL;DR: The concept of the sphere of replication is introduced, which abstracts both the physical redundancy of a lockstepped system and the logical redundancy of an SRT processor, and two mechanisms, slack fetch and branch outcome queue, are proposed and evaluated that enhance the performance of an SRT processor by allowing one thread to prefetch cache misses and branch results for the other thread.
Abstract: Smaller feature sizes, reduced voltage levels, higher transistor counts, and reduced noise margins make future generations of microprocessors increasingly prone to transient hardware faults. Most commercial fault-tolerant computers use fully replicated hardware components to detect microprocessor faults. The components are lockstepped (cycle-by-cycle synchronized) to ensure that, in each cycle, they perform the same operation on the same inputs, producing the same outputs in the absence of faults. Unfortunately, for a given hardware budget, full replication reduces performance by statically partitioning resources among redundant operations. We demonstrate that a Simultaneous and Redundantly Threaded (SRT) processor, derived from a Simultaneous Multithreaded (SMT) processor, provides transient fault coverage with significantly higher performance. An SRT processor provides transient fault coverage by running identical copies of the same program simultaneously as independent threads. An SRT processor provides higher performance because it dynamically schedules its hardware resources among the redundant copies. However, dynamic scheduling makes it difficult to implement lockstepping, because corresponding instructions from redundant threads may not execute in the same cycle or in the same order. This paper makes four contributions to the design of SRT processors. First, we introduce the concept of the sphere of replication, which abstracts both the physical redundancy of a lockstepped system and the logical redundancy of an SRT processor. This framework aids in identifying the scope of fault coverage and the input and output values requiring special handling. Second, we identify two viable spheres of replication in an SRT processor, and show that one of them provides fault detection while checking only committed stores and uncached loads. Third, we identify the need for consistent replication of load values, and propose and evaluate two new mechanisms for satisfying this requirement. Finally, we propose and evaluate two mechanisms, slack fetch and branch outcome queue, that enhance the performance of an SRT processor by allowing one thread to prefetch cache misses and branch results for the other thread. Our results with 11 SPEC95 benchmarks show that an SRT processor can outperform an equivalently sized, on-chip, hardware-replicated solution by 16% on average, with a maximum benefit of up to 29%.
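The fault-detection mechanism at the sphere-of-replication boundary can be pictured as a comparator on the values leaving the sphere. A minimal sketch, assuming only that committed stores from the leading and trailing threads are queued and compared before release (the thread model is illustrative, not the paper's microarchitecture):

```python
from collections import deque

class StoreComparator:
    """Compares committed stores from two redundant threads before they leave
    the sphere of replication; any mismatch signals a transient fault."""
    def __init__(self):
        self.pending = deque()        # stores committed by the leading thread

    def commit(self, thread_id, addr, value):
        if thread_id == 0:            # leading thread: queue and wait
            self.pending.append((addr, value))
            return None
        expected = self.pending.popleft()
        if expected != (addr, value):
            raise RuntimeError(f"transient fault: {expected} != {(addr, value)}")
        return (addr, value)          # verified store may update memory

chk = StoreComparator()
chk.commit(0, 0x100, 42)              # leading thread commits first
print(chk.commit(1, 0x100, 42))       # trailing thread matches: store released
```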

672 citations


Journal ArticleDOI
TL;DR: In this paper, a fault estimation and compensation method is proposed to compensate for actuator and sensor faults in highly automated systems. However, the method reaches its limits when an actuator is completely lost.
Abstract: The general fault-tolerant control method described in the article addresses actuator and sensor faults, which often affect highly automated systems. These faults correspond to a loss of actuator effectiveness or faulty sensor measurements. After describing these faults, the article proposes a fault estimation and compensation method. In addition to providing information to operators concerning the system operating conditions, the fault diagnosis module is especially important in fault-tolerant control systems, where one needs to know exactly which element is faulty to react safely. The method's ability to compensate for such faults is illustrated by applying it to a winding machine, which represents a subsystem of many industrial systems. The results show that once the fault is detected and isolated, it is easy to reduce its effect on the system, and process control is resumed with degraded performance close to nominal. Thus, stopping the system immediately can be avoided. However, the limits of this method are reached when there is a complete loss of an actuator; in this case, only hardware redundancy is effective and can ensure performance reliability. The method proposed here assumes the availability of the state variables for measurement.
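Loss of actuator effectiveness is commonly modeled multiplicatively, with the achieved input scaled by an unknown gain; estimation then amounts to recovering that gain, and compensation to rescaling the command. A static sketch under that assumption (the numbers and the `actuator` function are invented; the article's winding-machine application uses a dynamic, on-line estimator):

```python
# Actuator fault model: achieved input = (1 - loss) * commanded input.
nominal_gain = 2.0     # plant output per unit command, assumed known
loss = 0.4             # true, unknown loss of actuator effectiveness

def actuator(u):       # hypothetical faulty actuator
    return nominal_gain * (1 - loss) * u

u_test = 1.0
response = actuator(u_test)                        # measured response
loss_hat = 1 - response / (nominal_gain * u_test)  # fault estimation
print(f"estimated loss: {loss_hat:.2f}")

u_desired = 5.0
u_cmd = u_desired / (1 - loss_hat)                 # fault compensation
print(f"achieved: {actuator(u_cmd):.2f}, target: {nominal_gain * u_desired:.2f}")
```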

269 citations


Proceedings ArticleDOI
27 Mar 2000
TL;DR: NFTAPE, as discussed by the authors, is a tool for composing automated fault injection experiments from available lightweight fault injectors, triggers, monitors, and other components. It helps to solve two problems: no single tool is sufficient for injecting all necessary fault models, and it is difficult to port such tools to new systems.
Abstract: Many fault injection tools are available for dependability assessment. Although these tools are good at injecting a single fault model into a single system, they suffer from two main limitations for use in distributed systems: (1) no single tool is sufficient for injecting all necessary fault models; (2) it is difficult to port these tools to new systems. NFTAPE, a tool for composing automated fault injection experiments from available lightweight fault injectors, triggers, monitors, and other components, helps to solve these problems. We have conducted experiments using NFTAPE with several types of lightweight fault injectors, including driver-based, debugger-based, target-specific, simulation-based, hardware-based, and performance-fault injections. Two example experiments are described in this paper. The first uses a hardware fault injector with a Myrinet LAN; the other uses a Software Implemented Fault Injection (SWIFI) fault injector to target a space-imaging application.
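The compositional idea behind such a tool, with lightweight injectors, triggers, and monitors behind small common interfaces, can be sketched as follows; the class names and the `run_campaign` driver are illustrative and not NFTAPE's actual API:

```python
from abc import ABC, abstractmethod

class Trigger(ABC):
    @abstractmethod
    def fired(self, t: float) -> bool: ...

class Injector(ABC):
    @abstractmethod
    def inject(self, target) -> None: ...

class TimeTrigger(Trigger):
    def __init__(self, at): self.at = at
    def fired(self, t): return t >= self.at

class BitFlipInjector(Injector):
    """Lightweight injector: flips one bit of a register in a target model."""
    def __init__(self, reg, bit): self.reg, self.bit = reg, bit
    def inject(self, target): target[self.reg] ^= 1 << self.bit

def run_campaign(target, trigger, injector, monitor, steps=10):
    """Compose one experiment from interchangeable components."""
    for t in range(steps):
        if trigger.fired(t):
            injector.inject(target)
            monitor(t, target)
            break

regs = {"r1": 0b1010}
run_campaign(regs, TimeTrigger(3), BitFlipInjector("r1", 0),
             monitor=lambda t, tgt: print(f"t={t}: injected, r1={tgt['r1']:04b}"))
```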

222 citations


Journal ArticleDOI
TL;DR: The design modifications include some gating logic for masking the scan path activity during shifting, and the synthesis of additional logic for suppressing random patterns which do not contribute to increase the fault coverage.
Abstract: Power consumption of digital systems may increase significantly during testing. In this paper, systems equipped with a scan-based built-in self-test like the STUMPS architecture are analyzed, the modules and modes with the highest power consumption are identified, and design modifications to reduce power consumption are proposed. The design modifications include some gating logic for masking the scan path activity during shifting, and the synthesis of additional logic for suppressing random patterns which do not contribute to increase the fault coverage. These design changes reduce power consumption during BIST by several orders of magnitude, at very low cost in terms of area and performance.
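The second modification amounts to fault-simulating the pseudo-random sequence and suppressing any pattern that detects no new faults. In the paper this selection is synthesized into gating logic; the sketch below only shows the selection principle, with `detected_by` standing in for the fault-simulation step:

```python
def useful_patterns(patterns, detected_by):
    """Keep only pseudo-random patterns that increase fault coverage.

    detected_by(p) returns the set of faults pattern p detects
    (the fault-simulation step, abstracted away here)."""
    covered, kept = set(), []
    for p in patterns:
        new = detected_by(p) - covered
        if new:                       # suppress patterns adding no coverage
            covered |= new
            kept.append(p)
    return kept, covered

# Toy pattern -> detected-faults table
table = {0b00: {1, 2}, 0b01: {2}, 0b10: {3}, 0b11: set()}
kept, cov = useful_patterns([0b00, 0b01, 0b10, 0b11], table.get)
print(kept, cov)   # [0, 2] {1, 2, 3}: patterns 0b01 and 0b11 are suppressed
```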

188 citations


Proceedings ArticleDOI
30 Apr 2000
TL;DR: Using this notation, the space of all possible memory faults has been constructed and it has been shown that this space is infinite, and contains the currently established fault models.
Abstract: This paper presents a notation for describing functional fault models, which may occur in memory devices. Using this notation, the space of all possible memory faults has been constructed. It has been shown that this space is infinite, and contains the currently established fault models. New fault models in this space have been identified and verified using resistive and capacitive defect injection and simulation of a DRAM model.
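Functional memory fault models in this line of work are often written as fault primitives of the form <S/F/R>, with S the sensitizing operation sequence, F the resulting faulty cell value, and R the read result. The paper's exact notation is not given in the abstract, so the encoding below is only an illustrative guess at the general shape:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FaultPrimitive:
    """<S/F/R>: sensitizing sequence / faulty cell value / read result."""
    S: str   # e.g. "0w1" = cell holds 0, then a write-1 is applied
    F: int   # value the victim cell actually ends up holding
    R: str   # value a concluding read returns, or '-' if S ends in a write

    def __str__(self):
        return f"<{self.S}/{self.F}/{self.R}>"

# Two classical single-cell fault models expressed in this shape:
sa0 = FaultPrimitive("1", 0, "-")    # stuck-at-0: cell meant to hold 1 holds 0
utf = FaultPrimitive("0w1", 0, "-")  # up-transition fault: the write-1 fails
print(sa0, utf)
```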

187 citations


Proceedings ArticleDOI
03 Oct 2000
TL;DR: The data presented shows that N-detect test sets are particularly effective for both timing and hard failures, and the paper reports on the use of IDDq tests and VLV tests for detecting defects whose presence doesn't interfere with normal operation during manufacturing test, but which cause early-life failure.
Abstract: This paper studies some manufacturing test data collected for an experimental digital IC. Test results for a large variety of single-stuck fault based test sets are shown and compared with a number of test sets based on other fault models. The defects present in the chips studied are characterized based on the chip tester responses. The data presented shows that N-detect test sets are particularly effective for both timing and hard failures. In these test sets each single-stuck fault is detected by at least N different test patterns. We also present data on the use of IDDq tests and VLV (very low voltage) tests for detecting defects whose presence doesn't interfere with normal operation during manufacturing test, but which cause early life failure.
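An N-detect test set requires every single-stuck fault to be detected by at least N distinct patterns. One simple greedy construction from a detection matrix, purely illustrative (the paper evaluates such sets rather than prescribing how to build them):

```python
def n_detect(patterns, detects, n):
    """Greedily pick patterns until every fault is detected >= n times.

    detects[p] is the set of faults pattern p detects."""
    count = {f: 0 for p in patterns for f in detects[p]}
    chosen = []
    for p in sorted(patterns, key=lambda p: -len(detects[p])):
        if any(count[f] < n for f in detects[p]):
            chosen.append(p)
            for f in detects[p]:
                count[f] += 1
    short = [f for f, c in count.items() if c < n]
    return chosen, short   # `short`: faults not n-detectable with these patterns

detects = {"t1": {1, 2}, "t2": {1, 3}, "t3": {2, 3}, "t4": {1}}
print(n_detect(list(detects), detects, n=2))   # (['t1', 't2', 't3'], [])
```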

182 citations


Proceedings ArticleDOI
03 Oct 2000
TL;DR: A logic diagnosis tool is presented with applicability to a spectrum of logic DFT, ATPG and test strategies, including full/almost-full-scan circuits with combinational ATPG, partial-scan and non-scan circuits with sequential ATPG, and functional patterns in general.
Abstract: Logic fault diagnosis or fault isolation is the process of analyzing the failing logic portions of an integrated circuit to isolate the cause of failure. Fault diagnosis plays an important role in multiple applications at different stages of design and manufacturing. A logic diagnosis tool with applicability to a spectrum of logic DFT, ATPG and test strategies including full/almost fullscan circuits with combinational ATPG, partial-scan and non-scan circuits with sequential ATPG and to functional patterns in general is presented. Novel features incorporated into the tool include static and dynamic structural processing for partial-scan circuits, windowed fault simulation, and diagnostic models for open defects and cover algorithms for multiple fault diagnosis. Experimental results include simulation results on processor functional blocks and silicon results on chipsets and processors from artificially induced defects and production fallout.

157 citations


Proceedings ArticleDOI
17 Apr 2000
TL;DR: An on-line, multi-level fault-tolerant (FT) technique is presented for system functions and applications mapped to partially and dynamically reconfigurable FPGAs, based on the roving self-testing areas (STARs) fault detection/location strategy.
Abstract: In this paper we present an on-line, multi-level fault tolerant (FT) technique for system functions and applications mapped to partially and dynamically reconfigurable FPGAs. Our method is based on the roving self testing areas (STARs) fault detection/location strategy presented in Abramovici et al. (1999). In STARs, partial reconfiguration is used to modify the configuration of the area under test without affecting the configuration of the system function, while dynamic reconfiguration allows uninterrupted execution of the system function as reconfiguration takes place. In this paper we take this one step further. Once a fault (or multiple faults) is detected, we dynamically reconfigure the working area application around the fault with no additional system function interruption (other than the interruption when a STAR moves to a new location). We also apply the concept of partially usable blocks to increase fault tolerance. Our method has been successfully implemented and demonstrated on the ORCA 2CA series FPGAs from Lucent Technologies.

147 citations


Proceedings ArticleDOI
03 Oct 2000
TL;DR: This paper proposes a technique of combining LBIST and deterministic ATPG to form "hybrid test patterns" which merge pseudo-random and Deterministic test data which reduce the number of pseudorandom patterns by orders of magnitude, thus addressing power issues.
Abstract: A common approach for large industrial designs is to use logic built-in self-test (LBIST) followed by test data from an external tester. Because the fault coverage with LBIST alone is not sufficient, there is a need to top up the fault coverage with additional deterministic test patterns from an external tester. This paper proposes a technique of combining LBIST and deterministic ATPG to form "hybrid test patterns" which merge pseudo-random and deterministic test data. Experiments have been done on the Motorola PowerPC(TM) microprocessor core to study the proposed hybrid test patterns. Hybrid test patterns provide several advantages: they (1) can be applied using the STUMPS architecture (Bardell, 82) with a minor modification, (2) significantly reduce external test data stored in tester memory, and (3) reduce the number of pseudorandom patterns by orders of magnitude, thus addressing power issues.

129 citations


Proceedings ArticleDOI
Said Hamdioui, A.J. Van De Goor
04 Dec 2000
TL;DR: A new march test detecting all realistic faults, with a test length of 14n, is introduced, and its fault coverage is compared with that of other known tests.
Abstract: In this paper a complete analysis of spot defects in industrial SRAMs is presented. All possible defects are simulated, and the resulting electrical faults are transformed into functional fault models. The existence of the commonly used theoretical memory fault models is verified and new ones are presented. Finally, a new march test detecting all realistic faults, with a test length of 14n, is introduced, and its fault coverage is compared with that of other known tests.
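March tests are sequences of march elements, each pairing an address order with read/write operations applied to every cell. The abstract does not spell out the paper's 14n test, so as an illustration the sketch below executes the classical 10n March C- on a toy memory model and flags a stuck-at cell:

```python
# March C-: {up(w0); up(r0,w1); up(r1,w0); down(r0,w1); down(r1,w0); up(r0)}
MARCH_C_MINUS = [
    ("up",   ["w0"]),
    ("up",   ["r0", "w1"]),
    ("up",   ["r1", "w0"]),
    ("down", ["r0", "w1"]),
    ("down", ["r1", "w0"]),
    ("up",   ["r0"]),
]

def run_march(mem, elements):
    """Apply march elements to `mem` (a list of bits); return failing addresses."""
    fails = set()
    for order, ops in elements:
        addrs = range(len(mem)) if order == "up" else range(len(mem) - 1, -1, -1)
        for a in addrs:
            for op in ops:
                if op[0] == "w":
                    mem[a] = int(op[1])
                elif mem[a] != int(op[1]):   # read, compare with expected value
                    fails.add(a)
    return fails

class StuckAt0(list):                        # cell 3 ignores all writes
    def __setitem__(self, i, v):
        super().__setitem__(i, 0 if i == 3 else v)

print(run_march([0] * 8, MARCH_C_MINUS))            # fault-free: set()
print(run_march(StuckAt0([0] * 8), MARCH_C_MINUS))  # stuck-at-0 cell: {3}
```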

Book ChapterDOI
19 Jun 2000
TL;DR: In this paper, a test suite is complete with respect to a given fault model when each implementation from the fault domain passes it if and only if the postulated conformance relation holds between the implementation and its specification.
Abstract: The annotated bibliography highlights work in the area of algorithmic test generation from formal specifications with guaranteed fault coverage, i.e., fault model-driven test derivation. A fault model is understood as a triple, comprising a finite state specification, conformance relation and fault domain that is the set of possible implementations. The fault model can be specialized to Input/Output FSM, Labeled Transition System, or Input/Output Automaton and to a number of conformance relations such as FSM equivalence, reduction or quasiequivalence, trace inclusion or trace equivalence and others. The fault domain usually reflects test assumptions, as an example, it can be the universe of all possible I/O FSMs with a given number of states, a classical fault domain in FSM-based testing. A test suite is complete with respect to a given fault model when each implementation from the fault domain passes it if and only if the postulated conformance relation holds between the implementation and its specification. A complete test suite is said to provide fault coverage guarantee for a given fault model.
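A worked miniature of this completeness definition: with FSM equivalence as the conformance relation and a two-element fault domain, a suite is complete iff every implementation passes it exactly when it is equivalent to the specification. The machines and test below are invented for illustration (equivalence is decided by bounded exploration, which suffices for these state counts):

```python
from itertools import product

def run(fsm, word, state=0):
    """fsm[state][symbol] = (next_state, output); return the output word."""
    out = []
    for a in word:
        state, o = fsm[state][a]
        out.append(o)
    return tuple(out)

def equivalent(m1, m2, alphabet, depth):
    """Decide equivalence by checking all words up to `depth` (enough here,
    since depth exceeds the product of the two state counts)."""
    return all(run(m1, w) == run(m2, w)
               for n in range(1, depth + 1)
               for w in product(alphabet, repeat=n))

def complete(spec, fault_domain, tests, alphabet, depth=4):
    """Complete iff each implementation in the fault domain passes the suite
    exactly when the conformance relation (here: equivalence) holds."""
    for impl in fault_domain:
        passes = all(run(impl, t) == run(spec, t) for t in tests)
        if passes != equivalent(impl, spec, alphabet, depth):
            return False
    return True

# 2-state spec: input 'a' toggles the state; output = state before the move.
spec   = [{"a": (1, 0)}, {"a": (0, 1)}]
mutant = [{"a": (0, 0)}, {"a": (0, 1)}]   # faulty transition: stuck in state 0
print(complete(spec, [spec, mutant], tests=["aa"], alphabet="a"))   # True
```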

Journal ArticleDOI
TL;DR: A software modification strategy allowing on-line detection of transient errors based on a set of rules for introducing redundancy in the high-level code, which is therefore particularly suited for low-cost safety-critical microprocessor-based applications.
Abstract: This paper deals with a software modification strategy allowing on-line detection of transient errors. Being based on a set of rules for introducing redundancy in the high-level code, the method can be completely automated, and is therefore particularly suited for low-cost safety-critical microprocessor-based applications. Experimental results are presented and discussed, demonstrating the effectiveness of the approach in terms of fault detection capabilities.
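A typical pair of such rules duplicates every variable, mirrors every write onto both copies, and checks consistency at every read, so a transient flip of either copy is detected. A hand-transformed fragment in the spirit of the approach (the paper's actual rule set is richer; this rendering is illustrative):

```python
class DetectedError(Exception):
    pass

def checked(v0, v1):
    """Read rule: compare the two copies before every use."""
    if v0 != v1:
        raise DetectedError("transient error detected")
    return v0

# Original high-level code:   x = a + b;  y = x * 2
# Transformed code: every variable exists twice and every write is duplicated.
a0 = a1 = 3
b0 = b1 = 4
x0 = checked(a0, a1) + checked(b0, b1); x1 = checked(a0, a1) + checked(b0, b1)
y0 = checked(x0, x1) * 2;               y1 = checked(x0, x1) * 2
print(y0, y1)          # 14 14

a1 ^= 1                # simulate a transient bit flip in one copy of `a`
try:
    z0 = checked(a0, a1) + 1
except DetectedError as e:
    print(e)           # the flip is caught at the next read
```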

Proceedings ArticleDOI
30 Apr 2000
TL;DR: This paper proposes an algorithm to design a test pattern generator based on cellular automata for testing combinational circuits that effectively reduces power consumption while attaining high fault coverage and experimental results show that this approach reduces the power consumed during test by 34% on average.
Abstract: In the last decade, researchers devoted much effort to reducing the average power consumption in VLSI systems during normal operation mode, while power consumption during test operation mode was usually neglected. However, during test application, circuits are subjected to an activity level higher than the normal one: the extra power consumption due to test application may thus cause severe hazards to circuit reliability. Moreover, it can dramatically shorten battery life when periodic testing of battery-powered systems is considered. In this paper we propose an algorithm to design a test pattern generator based on cellular automata for testing combinational circuits that effectively reduces power consumption while attaining high fault coverage. Experimental results show that our approach reduces the power consumed during test by 34% on average, without affecting fault coverage, test length and area overhead.
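Test pattern generators of this kind are one-dimensional cellular automata whose cells update from a neighborhood rule, commonly a hybrid of rules 90 and 150. The paper's contribution is choosing the configuration to minimize switching activity; the sketch below only shows how such a generator emits patterns, with an arbitrary rule assignment:

```python
def ca_step(state, rules):
    """One step of a 1-D hybrid cellular automaton with null boundaries.
    rule 90:  s'_i = s_{i-1} XOR s_{i+1}
    rule 150: s'_i = s_{i-1} XOR s_i XOR s_{i+1}"""
    n = len(state)
    nxt = []
    for i in range(n):
        left = state[i - 1] if i > 0 else 0
        right = state[i + 1] if i < n - 1 else 0
        center = state[i] if rules[i] == 150 else 0
        nxt.append(left ^ center ^ right)
    return nxt

state = [1, 0, 0, 1, 0, 1, 1, 0]              # seed loaded at test start
rules = [90, 150, 90, 150, 90, 150, 90, 150]  # arbitrary hybrid assignment
for _ in range(4):                            # emit four test patterns
    print("".join(map(str, state)))
    state = ca_step(state, rules)
```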

Journal ArticleDOI
03 Oct 2000
TL;DR: Experimental results show that complete fault coverage can be achieved for industrial circuits up to 100 K gates with 10,000 test patterns, at a total area cost for BIST hardware of typically 5–15%.
Abstract: We present the application of a deterministic logic BIST scheme on state-of-the-art industrial circuits. Experimental results show that complete fault coverage can be achieved for industrial circuits up to 100 K gates with 10000 test patterns, at a total area cost for BIST hardware of typically 5%-15%. It is demonstrated that a tradeoff is possible between test quality, test time, and silicon area. In contrast to BIST schemes based on test point insertion no modifications of the circuit under test are required, complete fault efficiency is guaranteed, and the impact on the design process is minimized.

Journal ArticleDOI
TL;DR: In this paper, a fault transient detector unit at the relaying point is used to capture fault generated high frequency transient signals contained in the primary currents, and the decision to trip is based on the relative arrival times of these high frequency components as they propagate through the system.
Abstract: This paper presents a new technique for high-speed protection of transmission lines, the positional protection technique. The technique uses a fault transient detector unit at the relaying point to capture fault-generated high-frequency transient signals contained in the primary currents. The decision to trip is based on the relative arrival times of these high-frequency components as they propagate through the system. Extensive simulation studies of the technique were carried out to examine the response to different power system and fault conditions. Results show that the scheme is insensitive to fault type, fault resistance, fault inception angle and system source configuration, and that it is able to offer both very high accuracy and speed in fault detection.

Journal ArticleDOI
TL;DR: This work outlines the strategies needed by the supervisors of a hierarchical controller for dealing with faults and adverse environmental conditions on an automated highway system and gives examples of their detailed operation.
Abstract: A hierarchical controller for dealing with faults and adverse environmental conditions on an automated highway system is proposed. The controller extends a previous control hierarchy designed to work under normal conditions of operation. The faults are classified according to the capabilities remaining on the vehicle or roadside after the fault has occurred. Information about these capabilities is used by supervisors in each of the layers of the hierarchy to select appropriate fault handling strategies. We outline the strategies needed by the supervisors and give examples of their detailed operation.

Proceedings ArticleDOI
30 Apr 2000
TL;DR: A functional self-test technique that is deterministic in nature is proposed; by targeting the structural test needs of manageable components with the aid of processor functionality, it enables at-speed testing of GHz processors with low-speed testers.
Abstract: At-speed testing is becoming increasingly difficult with external testers as the speed of microprocessors approaches the GHz range. One solution to this problem is built-in self-test. However, due to their reliance on random patterns, current logic BIST techniques are not able to deal with large designs without adding high test overhead. In this paper, we propose a functional self-test technique that is deterministic in nature. By targeting the structural test need of manageable components with the aid of processor functionality, this technique has the fault coverage advantage of deterministic structural testing and the at-speed advantage of functional testing. Most importantly, by relieving testers from test application, it enables at-speed testing of GHz processors with low speed testers. We have demonstrated our methodology on a simple accumulator-based microprocessor. The results show that with the proposed technique, we are able to apply high-quality at-speed tests with no test overhead.

Patent
16 Jun 2000
TL;DR: In this article, a test system with a shielded enclosure and a common air interface for testing the transmit and receive functionality of wireless communication devices such as mobile phones is presented, where the test antenna is designed to maximize coupling with the antenna(s) of one or more types of wireless devices, while also minimizing variations in test measurements that might result from the particularized location of batteries or processing circuitry within such devices.
Abstract: A test system having a shielded enclosure and common air interface for testing the transmit and receive functionality of wireless communication devices such as mobile phones. A test system according to the present invention provides improved fault coverage by permitting robust testing of the entire signal path of a wireless device, including the antenna structure. In one embodiment of the invention, an RF-shielded enclosure having a test chamber is provided. The structure of the shielded enclosure blocks ambient RF energy from entering the test chamber and interfering with testing operations. The shielded enclosure may be lined with an RF absorbing material to improve test repeatability. A novel test antenna structure is disposed in the test chamber for wirelessly communicating test signals to a device under test. The test antenna is designed to maximize coupling with the antenna(s) of one or more types of wireless devices, while also minimizing variations in test measurements that might result from the particularized location of batteries or processing circuitry within such devices. The test antenna is coupled to an RF connector that provides a connection point for an external test set functioning as a base-station simulator for use in testing all or a subset of the features of the device under test. The test antenna may be formed on a printed circuit board. In one such embodiment, the elements of the test antenna are formed on a printed circuit board, with the element nearest the antenna of the device under test formed on the side of the circuit board nearest the device under test. The remainder of the test antenna is formed on the opposite side of the printed circuit board.

Proceedings ArticleDOI
30 Apr 2000
TL;DR: This work proposes low power BIST schemes for datapath architectures built around multiplier-accumulator pairs, based on deterministic test patterns, and finds that these schemes are more efficient than pseudorandom BIST for the same high fault coverage target.
Abstract: Power in processing cores (microprocessors, DSPs) is primarily consumed in the functional modules of the datapath. Among these modules, multipliers consume the largest amount of power due to their size and complexity. We propose low power BIST schemes for datapath architectures built around multiplier-accumulator pairs, based on deterministic test patterns. Two alternatives are proposed depending on whether the target is low energy dissipation during a BIST session or low power dissipation (i.e. average energy dissipation between successive test vectors). The proposed BIST schemes are more efficient than pseudorandom BIST for the same high fault coverage target. Up to 78.33% energy saving is achieved by the proposed low energy BIST scheme and up to 82.22% power saving is achieved by the proposed low power BIST scheme, compared with pseudorandom BIST.

Proceedings ArticleDOI
03 Oct 2000
TL;DR: The experimental results for two microprocessors indicate that the test instruction sequences can be successfully generated for a high percentage of testable path delay faults.
Abstract: This paper addresses the problem of testing path delay faults in a microprocessor core using its instruction set. We propose to self-test a processor core by running an automatically synthesized test program which can achieve a high path delay fault coverage. This paper discusses the method and the prototype software framework for synthesizing such a test program. Based on the processor's instruction set architecture, micro-architecture, RTL netlist as well as gate-level netlist on which the path delay faults are modeled, the method generates deterministic tests (in the form of instruction sequences) by cleverly combining structural and instruction-level test generation techniques. The experimental results for two microprocessors indicate that the test instruction sequences can be successfully generated for a high percentage of testable path delay faults.

Proceedings ArticleDOI
01 Jun 2000
TL;DR: A new fault representation mechanism for digital circuits based on fault tuples, which shows a 17% reduction of average CPU time when performing sim ulation on all fault types simultaneously, as opposed to individually.
Abstract: We introduce a new fault representation mechanism for digital circuits based on fault tuples. A fault tuple is a simple 3-element condition for a signal line, its value, and clock cycle constraint. AND-OR expressions of fault tuples are used to represent arbitrary misbehaviors. A fault simulator based on fault tuples was used to conduct experiments on benchmark circuits. Simulation results show that a 17% reduction of average CPU time is achieved when performing simulation on all fault types simultaneously, as opposed to individually. We expect further improvements in speedup when the shared characteristics of the various fault types are better exploited.
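Concretely, a single tuple pins one line to one value in one clock cycle, and an OR of ANDed tuples describes a misbehavior such as a slow-to-rise transition. A small sketch of one possible encoding (illustrative, not the paper's data structures):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FaultTuple:
    line: str    # signal line name
    value: int   # value constrained on the line
    cycle: int   # clock-cycle constraint

    def holds(self, trace):
        """trace[cycle][line] holds simulated signal values."""
        return trace[self.cycle][self.line] == self.value

def misbehaves(expr, trace):
    """expr is an OR-list of AND-lists (clauses) of fault tuples."""
    return any(all(t.holds(trace) for t in clause) for clause in expr)

# Slow-to-rise on line 'g': g is 0 in cycle 0 AND (faultily) still 0 in cycle 1.
slow_to_rise_g = [[FaultTuple("g", 0, 0), FaultTuple("g", 0, 1)]]

trace = [{"g": 0}, {"g": 0}]               # g failed to rise in cycle 1
print(misbehaves(slow_to_rise_g, trace))   # True
```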

Proceedings ArticleDOI
30 Apr 2000
TL;DR: The synthesis algorithm for synthesizing BIST test pattern generators using the C-compatibility technique into ATOM, an advanced ATPG system for combinational circuits achieves 100% stuck-at fault coverage in much smaller test application time than the previously published counter-based exhaustive BIST pattern generators.
Abstract: This paper presents a new technique, called C-compatibility, for reducing the test application time of the counter-based exhaustive built-in-self-test (BIST) test pattern generators. This technique reduces the test application time by reducing the size of the binary counter used in the test pattern generators. We have incorporated the synthesis algorithm for synthesizing BIST test pattern generators using the C-compatibility technique into ATOM, an advanced ATPG system for combinational circuits. The experimental results showed that the test pattern generators synthesized using this technique for the ISCAS 85 and full scan versions of the ISCAS 89 benchmark circuits achieve 100% stuck-at fault coverage in much smaller test application time than the previously published counter-based exhaustive BIST pattern generators.

Journal ArticleDOI
TL;DR: A new coverage metric for delay fault tests is proposed, which resembles path delay test and not the gate or transition delay test, and the maximum number of tests (or faults) is limited to twice the number of lines.
Abstract: We propose a new coverage metric for delay fault tests. The coverage is measured for each line with a rising and a falling transition, but the test criterion differs from that of the slow-to-rise and slow-to-fall transition faults. A line is tested by a line delay test, which is a robust path delay test for the longest sensitizable path producing a given transition on the target line. Thus, the test criterion resembles path delay test and not the gate or transition delay test. Yet, the maximum number of tests (or faults) is limited to twice the number of lines. In a two-pass test-generation procedure, we first attempt delay tests for a minimal set of longest paths for all lines. Fault simulation is used to determine the coverage metric. For uncovered lines, in the second pass, several paths of decreasing lengths are targeted. We give results for several benchmark circuits.
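Since each line contributes exactly one rising and one falling target, the fault universe has size twice the number of lines and the metric is simple bookkeeping; a sketch (the robust path-delay test generation that proves a pair covered is the hard part and is not shown):

```python
def line_delay_coverage(lines, covered):
    """covered: set of (line, transition) pairs proven by a robust line delay
    test; the fault universe has exactly two targets per line."""
    universe = {(l, tr) for l in lines for tr in ("rise", "fall")}
    return len(covered & universe) / len(universe)

lines = ["a", "b", "c"]
covered = {("a", "rise"), ("a", "fall"), ("b", "rise")}
print(f"{line_delay_coverage(lines, covered):.0%}")   # 50%
```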

Proceedings ArticleDOI
01 Jan 2000
TL;DR: This paper presents a new approach for parametric fault simulation and test vector generation that utilizes the process information and the sensitivity of the circuit principal components in order to generate statistical models of the fault-free and the faulty circuit.
Abstract: Process variation has long been the major cause of failure in analog circuits, where small deviations in component values cause large deviations in the measured output parameters. This paper presents a new approach for parametric fault simulation and test vector generation. The proposed approach utilizes the process information and the sensitivity of the circuit principal components in order to generate statistical models of the fault-free and the faulty circuit. The obtained information is then used as a measurement to quantify the testability of the circuit. This approach, extended by hard fault testing, has been implemented as an automated tool set for IC testing called FaultMaxx and TestMaxx.

Patent
29 Jun 2000
TL;DR: In this article, a shadow mode created by a fault handling virtual machine is invoked to bring the computer system back to a normal operating state in which the component or action causing the initial nonrecoverable fault is avoided.
Abstract: Methods, apparatus, and computer program products are disclosed for analyzing and recovering from severe to catastrophic faults in a computer system. When a fault occurs that cannot be handled by the computer system's normal fault handling processes, a shadow mode created by a fault handling virtual machine is invoked. The fault handling virtual machine executes only when the normally nonrecoverable fault is encountered and executes as a triangulated or shadow mode on the system. Once shadow mode is invoked, fault context data is collected on the system and used to analyze and recover from the fault. More specifically, one or more post-fault stable states are constructed by the fault handling virtual machine. These stable states are used to bring the computer system back to a normal operating state in which the component or action causing the initial nonrecoverable fault is avoided. Persistent faults may be encountered while the virtual machine is attempting to recover from the initial fault.

Journal ArticleDOI
TL;DR: A new fault-tolerant control approach is presented, based on the on-line estimation of an eventual fault and the addition of a new control law to the nominal control law in order to reduce the fault effect once this fault is detected and isolated.
Abstract: Fault-tolerant control or reconfigurable control systems are generally based on a nominal control law associated with a fault detection and isolation module. A general review of techniques dealing with this problem is given and a new fault-tolerant control approach is presented. This method is based on the on-line estimation of an eventual fault and the addition of a new control law to the nominal control law in order to reduce the fault effect once this fault is detected and isolated. The performances of this method depend on the time delay between the occurrence of the fault and its detection and isolation. A modified approach is then proposed in order to avoid the problems generated by delays, false alarms or non-detection inherent to diagnosis techniques. These methods are applied to a pilot plant and their performances are compared and discussed.
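In this scheme the applied input becomes u = u_nom + u_add, with u_add cancelling the estimated fault effect once detection and isolation have occurred. A minimal sketch for an additive actuator fault on a first-order plant, where the estimate is switched into the loop only after a detection delay (the plant, gains, and delays are invented for illustration):

```python
# Plant: x' = -x + u + f, with an additive actuator fault f appearing at t = 2 s.
# The fault is reconstructed from the measured state derivative; the corrective
# term u_add = -f_hat enters the loop only after the detection/isolation delay.
dt, t_fault, t_detect = 0.001, 2.0, 2.2
x, f_true, f_hat = 0.0, 0.8, 0.0
for k in range(5000):
    t = k * dt
    u_nom = 2.0 * (1.0 - x)                       # nominal tracking law
    u = u_nom + (-f_hat if t >= t_detect else 0.0)
    f = f_true if t >= t_fault else 0.0
    x_new = x + dt * (-x + u + f)                 # plant step (Euler)
    x_dot = (x_new - x) / dt                      # measured derivative
    f_hat += 10 * dt * ((x_dot + x - u) - f_hat)  # low-pass fault estimate
    x = x_new
print(f"f_hat = {f_hat:.2f} (true {f_true}), x = {x:.3f}")
```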

Proceedings ArticleDOI
30 Apr 2000
TL;DR: The experimental results for two microprocessors (Parwan and DLX) indicate that a significant percentage of structurally testable paths are functionally untestable and thus need not be tested.
Abstract: This paper addresses the problem of testing path delay faults in a microprocessor using instructions. It is observed that a structurally testable path (i.e., a path testable through at-speed scan) in a microprocessor might not be testable by its instructions simply because no instruction sequence can produce the desired test sequence which can sensitize the paths and capture the fault effect into the destination output/flip-flop at-speed. These paths are called functionally untestable paths. We discuss the impact of delay defects on the functionally untestable paths on the overall circuit performance and illustrate that they do not need to be tested if the delay defect does not cause the path delay to exceed twice the clock period. Identification of such paths helps determine the achievable path delay fault coverage and reduce the subsequent test generation effort. The experimental results for two microprocessors (Parwan and DLX) indicate that a significant percentage of structurally testable paths are functionally untestable and thus need not be tested.

Proceedings ArticleDOI
03 Oct 2000
TL;DR: The results suggest that untargeted test patterns perform almost as well as those targeted on a transition fault model, despite appearing to have a much lower fault coverage.
Abstract: This paper reflects on some recent results that show the value of delay-fault tests on a deep sub-micron process. However, the results also suggest that untargeted test patterns perform almost as well as those targeted on a transition fault model, despite appearing to have a much lower fault coverage. This leads to an examination of the defect mechanisms in deep sub-micron ICs, in particular the relationship of crosstalk and power-rail coupling to resistive opens and resistive bridges. A number of new fault mechanisms are described. The paper shows the importance of initialization conditions for resistive opens and the importance of noise margins with resistive bridges. These noise-margin considerations throw doubt on the idea, used by other authors, of the "critical resistance" of a bridge.

Journal ArticleDOI
TL;DR: A testable EXOR-Sum-of-Products (ESOP) circuit realization and a simple, universal test set which detects all single stuck-at faults in the internal lines and the primary inputs/outputs of the realization are given.
Abstract: A testable EXOR-Sum-of-Products (ESOP) circuit realization and a simple, universal test set which detects all single stuck-at faults in the internal lines and the primary inputs/outputs of the realization are given. Since ESOP is the most general form of AND-EXOR representations, our realization and test set are more versatile than those described by other researchers for the restricted GRM, FPRM, and PPRM forms of AND-EXOR circuits. Our circuit realization requires only two extra inputs for controllability and one extra output for observability. The cardinality of our test set for an n input circuit is (n+6). For Built-in Self-Test (BIST) applications, we show that our test set can be generated internally as easily as a pseudorandom pattern and that it provides 100 percent single stuck-at fault coverage. In addition, our test set requires a much shorter test cycle than a comparable pseudoexhaustive or pseudorandom test set.