
Showing papers on "Automatic test pattern generation published in 2001"


Journal ArticleDOI
TL;DR: Test case prioritization techniques schedule test cases for execution in an order that attempts to increase their effectiveness at meeting some performance goal, such as rate of fault detection, a measure of how quickly faults are detected within the testing process.
Abstract: Test case prioritization techniques schedule test cases for execution in an order that attempts to increase their effectiveness at meeting some performance goal. Various goals are possible; one involves rate of fault detection, a measure of how quickly faults are detected within the testing process. An improved rate of fault detection during testing can provide faster feedback on the system under test and let software engineers begin correcting faults earlier than might otherwise be possible. One application of prioritization techniques involves regression testing, the retesting of software following modifications; in this context, prioritization techniques can take advantage of information gathered about the previous execution of test cases to obtain test case orderings. We describe several techniques for using test execution information to prioritize test cases for regression testing, including: 1) techniques that order test cases based on their total coverage of code components; 2) techniques that order test cases based on their coverage of code components not previously covered; and 3) techniques that order test cases based on their estimated ability to reveal faults in the code components that they cover. We report the results of several experiments in which we applied these techniques to various test suites for various programs and measured the rates of fault detection achieved by the prioritized test suites, comparing those rates to the rates achieved by untreated, randomly ordered, and optimally ordered suites.
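The "total" and "additional" coverage orderings lend themselves to a short sketch. The following is a minimal illustration, assuming only a mapping from each test case to the set of code components it covers (names and interfaces are mine, not the authors' implementation):

```python
def total_coverage_order(coverage):
    """Order tests by total number of covered components, descending."""
    return sorted(coverage, key=lambda t: len(coverage[t]), reverse=True)

def additional_coverage_order(coverage):
    """Greedy 'additional' ordering: repeatedly pick the test covering the
    most not-yet-covered components; when nothing new can be added, reset
    the covered set and continue with the remaining tests."""
    remaining = {t: c for t, c in coverage.items() if c}  # skip empty tests
    covered, order = set(), []
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        if not remaining[best] - covered:
            covered = set()            # reset and re-rank the remainder
            continue
        order.append(best)
        covered |= remaining.pop(best)
    return order

if __name__ == "__main__":
    cov = {"t1": {1, 2, 3}, "t2": {3, 4}, "t3": {5}, "t4": {1, 2}}
    print(total_coverage_order(cov))       # ['t1', 't2', 't4', 't3']
    print(additional_coverage_order(cov))  # ['t1', 't2', 't3', 't4']
```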

1,200 citations


Proceedings ArticleDOI
30 Oct 2001
TL;DR: Techniques are presented in this paper that allow for substantial compression of Automatic Test Pattern Generation (ATPG) produced test vectors, allowing for a more than 10-fold reduction in tester scan buffer data volume on ATPG compacted tests.
Abstract: Rapid increases in the wire-able gate counts of ASICs stress existing manufacturing test equipment in terms of test data volume and test capacity. Techniques are presented in this paper that allow for substantial compression of Automatic Test Pattern Generation (ATPG) produced test vectors. We show compression efficiencies allowing a more than 10-fold reduction in tester scan buffer data volume on ATPG compacted tests. In addition, we obtain almost a 2× scan test time reduction. By implementing these techniques for production testing of huge-gate-count ASICs, IBM will continue using existing automated test equipment (ATE), avoiding costly upgrades and replacements.

368 citations


Proceedings ArticleDOI
01 Oct 2001
TL;DR: A safe regression-test-selection technique that, based on the use of a suitable representation, handles the features of the Java language and also handles incomplete programs.
Abstract: Regression testing is applied to modified software to provide confidence that the changed parts behave as intended and that the unchanged parts have not been adversely affected by the modifications. To reduce the cost of regression testing, test cases are selected from the test suite that was used to test the original version of the software---this process is called regression test selection. A safe regression-test-selection algorithm selects every test case in the test suite that may reveal a fault in the modified software. This paper presents a safe regression-test-selection technique that, based on the use of a suitable representation, handles the features of the Java language. Unlike other safe regression-test-selection techniques, the presented technique also handles incomplete programs. The technique can thus be safely applied in the (very common) case of Java software that uses external libraries of components; the analysis of the external code is not required for the technique to select test cases for such software. The paper also describes RETEST, a regression-test-selection tool implementing the technique; empirical results show that the algorithm can be effective in reducing the size of the test suite.

344 citations


Proceedings ArticleDOI
19 Nov 2001
TL;DR: The specific SmartBIST implementation shown in this paper guarantees that all test cubes can be successfully encoded by the modified ATPG algorithm irrespective of the number and position of the care bits.
Abstract: SmartBIST is a name for a family of streaming scan test pattern decoders that are suitable for on-chip integration. The automatic test pattern generation (ATPG) algorithms are modified to generate scan test stimulus vectors in a highly compacted source format that is compatible with the SmartBIST decoder hardware. The compacted stimulus vectors are streamed from automatic test equipment (ATE) to the decoder, which expands the data stream in real-time into fully expanded scan test vectors. SmartBIST encoding and decoding use simple algebraic techniques similar to those used for LFSR-coding (also known as LFSR-reseeding). The specific SmartBIST implementation shown in this paper guarantees that all test cubes can be successfully encoded by the modified ATPG algorithm irrespective of the number and position of the care bits.
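The algebraic encoding can be pictured as solving a linear system over GF(2): every bit an LFSR feeds into the scan chain is a fixed XOR of seed bits, so the care bits of a test cube become equations in the unknown seed. A toy sketch under assumed parameters (the taps, sizes, and function names are illustrative, not the SmartBIST hardware):

```python
def lfsr_rows(n, taps, length):
    """Symbolic LFSR: bit i of each mask marks seed bit i's contribution
    (over GF(2)) to the value scanned out at that clock."""
    state = [1 << i for i in range(n)]
    rows = []
    for _ in range(length):
        rows.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
    return rows

def solve_seed(n, taps, cube):
    """cube maps scan-load position -> required bit value. Returns an n-bit
    seed whose expansion matches every care bit, or None if the care bits
    are not encodable (the paper's modified ATPG guarantees this never
    happens for its hardware)."""
    rows = lfsr_rows(n, taps, max(cube) + 1)
    pivots = {}                               # pivot column -> (mask, rhs)
    for pos, val in sorted(cube.items()):
        mask, v = rows[pos], val
        for col, (pm, pv) in pivots.items():  # reduce against known pivots
            if (mask >> col) & 1:
                mask, v = mask ^ pm, v ^ pv
        if mask == 0:
            if v:
                return None
            continue
        col = mask.bit_length() - 1
        for c in list(pivots):                # keep the system fully reduced
            pm, pv = pivots[c]
            if (pm >> col) & 1:
                pivots[c] = (pm ^ mask, pv ^ v)
        pivots[col] = (mask, v)
    seed = [0] * n
    for col, (_, rhs) in pivots.items():      # free seed bits default to 0
        seed[col] = rhs
    return seed

if __name__ == "__main__":
    cube = {0: 1, 2: 0, 7: 1, 11: 1, 15: 0}   # care bits of a 16-bit load
    print(solve_seed(8, (0, 2, 3, 4), cube))
```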

285 citations


Proceedings ArticleDOI
30 Oct 2001
TL;DR: Delay fault test application via enhanced scan and skewed load techniques is shown to allow scan-based delay tests to be applied that are unrealizable in normal operation; rather than higher coverage being a positive feature, it is shown to have a negative impact on yield and designer productivity.
Abstract: Delay fault test application via enhanced scan and skewed load techniques is shown to allow scan-based delay tests to be applied that are unrealizable in normal operation. Rather than higher coverage being a positive feature, it is shown to have a negative impact on yield and designer productivity. The use of functionally justified tests is defended by both a motivating example and data from benchmark circuits. Implications on overhead, yield, timing optimization, and test debug are discussed.

252 citations


Proceedings ArticleDOI
22 Jun 2001
TL;DR: A test pattern compression scheme is proposed in order to reduce test data volume and application time and increase the number of scan chains that can be supported by an ATE by utilizing an on-chip decompressor.
Abstract: A test pattern compression scheme is proposed in order to reduce test data volume and application time. The number of scan chains that can be supported by an ATE is significantly increased by utilizing an on-chip decompressor. The functionality of the ATE is kept intact by moving the decompression task to the circuit under test. While the number of virtual scan chains visible to the ATE is kept small, the number of internal scan chains driven by the decompressed pattern sequence can be significantly increased.

244 citations


Proceedings ArticleDOI
01 Jan 2001
TL;DR: This work presents a method for automatically generating test cases according to structural coverage criteria, and shows how a model checker can be used to automatically generate complete test sequences that provide a pre-defined coverage of any software development artifact that can be represented as a finite state model.
Abstract: We present a method for automatically generating test cases according to structural coverage criteria. We show how a model checker can be used to automatically generate complete test sequences that provide a pre-defined coverage of any software development artifact that can be represented as a finite state model. Our goal is to help reduce the high cost of developing test cases for safety-critical software applications that require a certain level of coverage for certification, e.g. safety-critical avionics systems that need to demonstrate MC/DC (modified condition and decision) coverage of the code. We define a formal framework which is suitable for modeling software artifacts like requirements models, software specifications or implementations. We then show how various structural coverage criteria can be formalized and used to make a model checker provide test sequences to achieve this coverage. To illustrate our approach, we demonstrate how a model checker can be used to generate test sequences for MC/DC coverage of a small example.
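The usual encoding trick (a sketch of the general idea, not this paper's exact formalization) is the "trap property": assert that a coverage obligation can never be met, and let the model checker's counterexample trace serve as the test sequence. For branch-level obligations this is a one-liner per branch; full MC/DC obligations additionally need auxiliary state to capture each condition's independent effect, which is what the paper's framework formalizes.

```python
def trap_properties(decisions):
    """For each named decision expression (in the model's syntax, assumed
    here to be plain strings), claim each branch outcome is unreachable; a
    counterexample to such a claim is a test sequence reaching it."""
    props = []
    for name, expr in decisions.items():
        props.append((f"{name} true",  f"G !({expr})"))
        props.append((f"{name} false", f"G !(!({expr}))"))
    return props

if __name__ == "__main__":
    for label, prop in trap_properties({"overspeed": "speed > limit"}):
        print(f"{label}: {prop}")
```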

198 citations


Proceedings ArticleDOI
29 Mar 2001
TL;DR: A new low power test-per-clock BIST test pattern generator that provides test vectors which can reduce the switching activity during test operation and numerous advantages can be found in applying such a technique during BIST.
Abstract: In this paper, we present a new low power test-per-clock BIST test pattern generator that provides test vectors which can reduce the switching activity during test operation. The proposed low power/energy BIST technique is based on a modified clock scheme for the TPG and the clock tree feeding the TPG. Numerous advantages can be found in applying such a technique during BIST.

154 citations


Proceedings ArticleDOI
30 Oct 2001
TL;DR: The proposed technique handles both stuck-at and timing failures (transition faults and hold time faults) and improves the diagnostic resolution by ranking the suspect scan cells inside a range of scan cells.
Abstract: In this paper, we present a scan chain fault diagnosis procedure. The diagnosis for a single scan chain fault is performed in three steps. The first step uses special chain test patterns to determine both the faulty chain and the fault type in the faulty chain. The second step uses a novel procedure to generate special test patterns to identify the suspect scan cell within a range of scan cells. Unlike previously proposed methods that restrict the location of the faulty scan cell only from the scan chain output side, our method restricts the location of the faulty scan cell from both the scan chain output side and the scan chain input side. Hence the number of suspect scan cells is reduced significantly in this step. The final step further improves the diagnostic resolution by ranking the suspect scan cells inside this range. The proposed technique handles both stuck-at and timing failures (transition faults and hold time faults). The extension of the procedure to diagnose multiple faults is discussed. The experimental results show the effectiveness of the proposed method.
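A toy shift model (my construction, not the paper's diagnosis procedure) illustrates why the observed unload can bound the faulty cell, and why one side alone is not always enough:

```python
def shift_through(n, load, fault_pos=None, stuck=1):
    """Shift `load` through an n-cell scan chain and unload it; the cell at
    `fault_pos` (if any) is forced to `stuck` on every clock."""
    chain = [0] * n
    observed = []
    for bit in load + [0] * n:               # extra clocks flush the chain
        if fault_pos is not None:
            chain[fault_pos] = stuck
        observed.append(chain[-1])           # scan-out reads the last cell
        chain = [bit] + chain[:-1]
    return observed

def output_side_candidates(n, load, observed, stuck=1):
    """Cells whose stuck-at model reproduces the observed unload."""
    return [k for k in range(n)
            if shift_through(n, load, k, stuck) == observed]

if __name__ == "__main__":
    load = [0, 0, 1, 1, 0, 0, 1, 1]
    obs = shift_through(8, load, fault_pos=5, stuck=1)
    print(output_side_candidates(8, load, obs))   # [5]: pinned in this toy
    # A stuck-at-0 cell with an all-zero chain would unload all zeros for
    # every position, which is why the paper also bounds the cell from the
    # scan-input side using capture patterns.
```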

147 citations


Proceedings ArticleDOI
30 Oct 2001
TL;DR: The McKinley processor is the result of a joint design effort between Intel and Hewlett-Packard engineers, and is the second processor implementation of the Itanium™ Processor Family.
Abstract: The McKinley processor is the result of a joint design effort between Intel and Hewlett-Packard engineers, and is the second processor implementation of the Itanium™ processor family (IPF) architecture. This paper describes the methodology developed for testing a complex high-performance microprocessor design. An overview of the processor is presented, along with the goals for the test methodology. Details of the test control blocks, scan methodology, and clocking are given. The scan-latch design, trade-offs and verification processes are discussed, along with some details of ATPG modeling and memory array testing. Finally, some results are presented.

139 citations


Proceedings ArticleDOI
08 Oct 2001
TL;DR: The initial experience shows that this approach to requirement-based test generation may provide significant benefits in terms of a reduction in the number of test cases and an increase in the quality of a test suite.
Abstract: Testing large software systems is very laborious and expensive. Model-based test generation techniques are used to automatically generate tests for large software systems. However, these techniques require manually created system models that are used for test generation. In addition, generated test cases are not associated with individual requirements. In this paper, we present a novel approach to requirement-based test generation. The approach accepts a software specification as a set of individual requirements expressed in textual and SDL formats (a common practice in the industry). From these requirements, a system model is automatically created with requirement information mapped to the model. The system model is used to automatically generate test cases related to individual requirements. Several test generation strategies are presented. The approach is extended to requirement-based regression test generation related to changes on the requirement level. Our initial experience shows that this approach may provide significant benefits in terms of a reduction in the number of test cases and an increase in the quality of a test suite.

Proceedings ArticleDOI
30 Oct 2001
TL;DR: A novel architecture for scan-based mixed mode BIST relies on a two-dimensional compression scheme, which combines the advantages of known vertical and horizontal compression techniques.
Abstract: A novel architecture for scan-based mixed mode BIST is presented. To reduce the storage requirements for the deterministic patterns it relies on a two-dimensional compression scheme, which combines the advantages of known vertical and horizontal compression techniques. To reduce both the number of patterns to be stored and the number of bits to be stored for each pattern, deterministic test cubes are encoded as seeds of an LFSR (horizontal compression), and the seeds are again compressed into seeds of a folding counter sequence (vertical compression). The proposed BIST architecture is fully compatible with standard scan design, simple and flexible, so that sharing between several logic cores is possible. Experimental results show that the proposed scheme requires less test data storage than previously published approaches providing the same flexibility and scan compatibility.

Proceedings ArticleDOI
22 Jun 2001
TL;DR: Experimental results over MCNC benchmarks show that this approach outperforms SIS and other BDD-based decomposition methods in terms of area and delay of the resulting circuits with comparable CPU time.
Abstract: We propose a new BDD-based method for decomposition of multi-output incompletely specified logic functions into netlists of two-input logic gates. The algorithm uses the internal don't-cares during the decomposition to produce compact well-balanced netlists with short delay. The resulting netlists are provably non-redundant and facilitate test pattern generation. Experimental results over MCNC benchmarks show that our approach outperforms SIS and other BDD-based decomposition methods in terms of area and delay of the resulting circuits with comparable CPU time.

Journal ArticleDOI
TL;DR: The Poirot tool isolates and diagnoses defects through fault modeling and simulation, and functional and sequential test pattern applications show success with circuits having a high degree of observability.
Abstract: The Poirot tool isolates and diagnoses defects through fault modeling and simulation. Along with a carefully selected partitioning strategy, functional and sequential test pattern applications show success with circuits having a high degree of observability.

Journal ArticleDOI
TL;DR: This paper proposes a new pattern generation technique for delay testing and dynamic timing analysis that can take into account the impact of the power supply noise on the signal propagation delays and shows that the new patterns produce significantly longer delays on the selected paths.
Abstract: Noise effects such as power supply and crosstalk noise can significantly impact the performance of deep submicrometer designs. Existing delay testing and timing analysis techniques cannot capture the effects of noise on the signal/cell delays. Therefore, these techniques cannot capture the worst case timing scenarios and the predicted circuit performance might not reflect the worst case circuit delay. More accurate and efficient timing analysis and delay testing strategies need to be developed to predict and guarantee the performance of deep submicrometer designs. In this paper, we propose a new pattern generation technique for delay testing and dynamic timing analysis that can take into account the impact of the power supply noise on the signal propagation delays. In addition to sensitizing the selected paths, the new patterns also cause high power supply noise on the nodes in these paths. Thus, they also cause longer propagation delays for the nodes along the paths. Our experimental results on benchmark circuits show that the new patterns produce significantly longer delays on the selected paths compared to the patterns derived using existing pattern generation methods.

Proceedings ArticleDOI
30 Oct 2001
TL;DR: A transition fault test set, TARO, in which each Transition fault is propagated to All the Reachable Outputs, is created and the experimental results show that an input pattern sequence is needed to detect the defects in the eight "challenge" Murphy chips.
Abstract: The test results of eight "challenge" Murphy chips that escaped either at least one of the 100% single stuck-at fault test sets or the 100% transition fault test set were analyzed. The results show that: (1) an input pattern sequence is needed to detect the defects in the eight chips; (2) the detection of a transition fault depends on the outputs at which it is observed; (3) a transition fault test set is more effective if each transition fault is detected more than once. A transition fault test set, TARO, in which each Transition fault is propagated to All the Reachable Outputs, is created and the experimental results are presented. The TARO test set detected all eight "challenge" Murphy chips.

Journal ArticleDOI
TL;DR: A low-overhead scheme for achieving complete (100%) fault coverage during built-in self test of circuits with scan is presented and experimental results indicate that complete fault coverage can be obtained with low hardware overhead.
Abstract: A low-overhead scheme for achieving complete (100%) fault coverage during built-in self test of circuits with scan is presented. It does not require modifying the function logic and does not degrade system performance (beyond using scan). Deterministic test cubes that detect the random-pattern-resistant (r.p.r.) faults are embedded in a pseudorandom sequence of bits generated by a linear feedback shift register (LFSR). This is accomplished by altering the pseudorandom sequence by adding logic at the LFSR's serial output to "fix" certain bits. A procedure for synthesizing the bit-fixing logic for embedding the test cubes is described. Experimental results indicate that complete fault coverage can be obtained with low hardware overhead. Further reduction in overhead is possible by using a special correlating automatic test pattern generation procedure that is described for finding test cubes for the r.p.r. faults in a way that maximizes bitwise correlation.

Journal ArticleDOI
TL;DR: An algorithm for generating test patterns automatically from functional register-transfer level (RTL) circuits that target detection of stuck-at faults in the circuit at the logic level, using a data structure named assignment decision diagram that has been proposed previously in the field of high-level synthesis.
Abstract: In this paper, we present an algorithm for generating test patterns automatically from functional register-transfer level (RTL) circuits, targeting detection of stuck-at faults in the circuit at the logic level. In order to do this, we utilize a data structure named the assignment decision diagram, which has been proposed previously in the field of high-level synthesis. With the advent of RTL synthesis tools, functional RTL designs are now widely used in the industry to cut design turnaround time. This paper addresses the problem of test pattern generation directly at this level due to a number of advantages inherent at the RTL. Since the number of primitive elements at the RTL is usually less than at the logic level, the problem size is reduced, leading to a reduction in test-generation time over logic-level automatic test pattern generation (ATPG). Also, a reduction in the number of backtracks can lead to improved fault coverage and reduced test application time over logic-level techniques. The test patterns thus generated can also be used to perform RTL-RTL and RTL-logic validation. The algorithm is very versatile and can tackle almost any type of single-clock design, although performance varies according to the design style. It gracefully degrades to an inefficient logic-level ATPG algorithm if it is applied to a logic-level circuit. Experimental results demonstrate that a more than 1000-fold reduction in test-generation time can be achieved by this algorithm on certain types of RTL circuits without any compromise in fault coverage.

Proceedings ArticleDOI
30 Oct 2001
TL;DR: This paper analyzes how compactors affect test and diagnosis and shows that compactors can be designed to actually improve the testability of certain faults, while providing full diagnosis capability.
Abstract: Originally developed decades ago, logic built-in self-test (BIST) evolved and is now increasingly being adopted to cope with rapid growth in design size and complexity. Compared to deterministic pattern test, logic BIST requires many more test patterns, and therefore, increased test time unless many more internal scan chains can be shifted in parallel. To match this large number of scan chains, the width of the signature analyzer would have to be enlarged, which would result in large area overhead and signature storage space. Instead, a combinational space-compactor is inserted between the scan chain outputs and the signature analyzer inputs. However, the compactor may deteriorate the ability to test and diagnose the design. This paper analyzes how compactors affect test and diagnosis and shows that compactors can be designed to actually improve the testability of certain faults, while providing full diagnosis capability. Algorithms that allow automated design of optimal compactors are presented and results are discussed.
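As a deliberately naive reference point (my sketch, not the paper's compactor design), a plain parity tree shows both the width reduction and the aliasing risk the paper's designs must avoid:

```python
def xor_compact(scan_out_slice):
    """One clock of an XOR-tree space compactor: many chain outputs
    collapse into a single signature-analyzer input bit."""
    sig = 0
    for bit in scan_out_slice:
        sig ^= bit
    return sig

# Two simultaneous erroneous bits cancel in a plain parity tree:
good, bad = [0, 1, 1, 0], [1, 0, 1, 0]            # two bits differ
print(xor_compact(good) == xor_compact(bad))      # True: the error aliases
```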

Proceedings ArticleDOI
22 Jun 2001
TL;DR: RFN, a formal property verification tool based on abstraction refinement, is developed to verify various properties of real-world RTL designs containing approximately 5,000 registers, which represents an order of magnitude improvement over previous results.
Abstract: We present RFN, a formal property verification tool based on abstraction refinement. Abstraction refinement is a strategy for property verification. It iteratively refines an abstract model to better approximate the behavior of the original design, in the hope that the abstract model alone will provide enough evidence to prove or disprove the property. However, previous work on abstraction refinement was only demonstrated on designs with up to 500 registers. We developed RFN to verify real-world designs that may contain thousands of registers. RFN differs from the previous work in several ways. First, instead of relying on a single engine, RFN employs multiple formal verification engines, including a BDD-ATPG hybrid engine and a conventional BDD-based fixpoint engine, for finding error traces or proving properties on the abstract model. Second, RFN uses a novel two-phase process involving 3-valued simulation and sequential ATPG to determine how to refine the abstract model. Third, RFN avoids a weakness of other abstraction-refinement algorithms---the difficulty of finding error traces on the original design---by utilizing the error trace of the abstract model to guide sequential ATPG to find an error trace on the original design. We implemented and applied a prototype of RFN to verify various properties of real-world RTL designs containing approximately 5,000 registers, which represents an order of magnitude improvement over previous results. On these designs, we successfully proved a few properties and discovered a design violation.

Proceedings ArticleDOI
22 Jun 2001
TL;DR: In this article, the authors present a technique to reduce both test data volume and scan power dissipation using test data compression for system-on-a-chip testing by using Golomb coding of precomputed test sets.
Abstract: We present a novel technique to reduce both test data volume and scan power dissipation using test data compression for system-on-a-chip testing. Power dissipation during test mode using ATPG-compacted test patterns is much higher than during functional mode. We show that Golomb coding of precomputed test sets leads to significant savings in peak and average power, without requiring either a slower scan clock or blocking logic in the scan cells. We also improve upon prior work on Golomb coding by showing that a separate cyclical scan register is not necessary for pattern decompression. Experimental results for the larger ISCAS 89 benchmarks show that reduced test data volume and low power scan testing can indeed be achieved in all cases.
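Golomb coding pays off because the (difference) test data is dominated by long runs of 0s. A minimal encoder under assumed parameters (the paper's parameter selection and difference-vector construction are omitted):

```python
def golomb_encode(bits, m=4):
    """Encode runs of 0s (each terminated by a 1) with group size m.
    Power-of-two m gives a fixed-width remainder field (Rice coding).
    Assumes the stream ends with a 1; pad the test set if it does not."""
    assert m & (m - 1) == 0
    rbits = m.bit_length() - 1
    out, run = [], 0
    for b in bits:
        if b == 0:
            run += 1
            continue
        q, r = divmod(run, m)
        out += [1] * q + [0]                              # unary quotient
        out += [int(c) for c in format(r, f"0{rbits}b")]  # binary remainder
        run = 0
    return out

diff = [0,0,0,0,1, 0,0,1, 1, 0,0,0,0,0,0,0,1]      # sparse difference vector
print(len(diff), "->", len(golomb_encode(diff)))   # 17 -> 14 bits
```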

Proceedings ArticleDOI
30 Oct 2001
TL;DR: Experimental results show that the proposed BIST schemes can attain 100% fault coverage for all benchmark circuits with drastically reduced test sequence lengths, achieved at low hardware cost even for benchmark circuits that have a large number of scan inputs.
Abstract: Two novel scan-based BIST architectures, namely parallel-fixing and serial-fixing BIST, which can be implemented at very low hardware cost even for random-pattern-resistant circuits that have a large number of scan elements, are proposed. Both of the proposed BIST schemes use 3-weight weighted random BIST techniques to reduce test sequence lengths by improving the detection probabilities of random-pattern-resistant faults. A special ATPG is used to generate suitable test cube sets that lead to BIST circuits that require minimum hardware overhead. Experimental results show that the proposed BIST schemes can attain 100% fault coverage for all benchmark circuits with drastically reduced test sequence lengths. This reduction in test sequence length is achieved at low hardware cost even for benchmark circuits that have a large number of scan inputs.
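A possible reading of the 3-weight idea, sketched with my own simplifications (a single weight profile rather than the multiple test sessions a real scheme would use): each scan input is fixed to 0, fixed to 1, or left purely random, with the fixed values taken from deterministic cubes for the random-pattern-resistant faults.

```python
import random

def weights_from_cubes(cubes):
    """Derive a 3-weight profile: fix an input where all cubes agree,
    otherwise leave it random (weight 1/2). Conflicting cubes would be
    split into separate sessions in a real scheme."""
    n = len(cubes[0])
    profile = []
    for i in range(n):
        vals = {c[i] for c in cubes if c[i] is not None}
        profile.append(vals.pop() if len(vals) == 1 else None)
    return profile

def weighted_pattern(profile):
    """One pattern: fixed bits as specified, the rest equiprobable."""
    return [random.randint(0, 1) if w is None else w for w in profile]

cubes = [[1, None, 0, 1], [1, 0, 1, 1]]      # None = don't care
prof = weights_from_cubes(cubes)
print(prof, weighted_pattern(prof))          # [1, 0, None, 1]; bit 2 random
```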

Proceedings ArticleDOI
04 Nov 2001
TL;DR: A method for identifying X inputs of test vectors in a given test set by using fault simulation and procedures similar to implication and justification of ATPG algorithms is proposed.
Abstract: Given a test set for stuck-at faults, some primary input values may be changed to the opposite logic values without losing fault coverage. We can regard such input values as don't-cares (X). In this paper, we propose a method for identifying the X inputs of test vectors in a given test set. While there are generally many combinations of X inputs in the test set, the proposed method finds one that includes as many X inputs as possible, by using fault simulation and procedures similar to the implication and justification steps of ATPG algorithms. Experimental results for ISCAS benchmark circuits show that approximately 66% of the inputs of uncompacted test sets could be X on average. Even for compacted test sets, the method found that approximately 47% of the inputs are X. Finally, we discuss how logic values can be reassigned to the identified X inputs in several applications to make the test vectors more desirable.
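A brute-force stand-in for the identification criterion (the paper uses implication- and justification-style reasoning instead of repeated simulation, and handles combinations of Xs; `fault_sim` is an assumed interface):

```python
def identify_x(tests, fault_sim):
    """A bit is a candidate X if flipping it alone loses no coverage.
    `fault_sim(tests) -> set of detected faults` is assumed given;
    interactions between multiple Xs are ignored in this sketch."""
    baseline = fault_sim(tests)
    x_bits = []
    for i, vec in enumerate(tests):
        for j in range(len(vec)):
            trial = [list(v) for v in tests]
            trial[i][j] ^= 1
            if fault_sim(trial) >= baseline:   # superset: nothing lost
                x_bits.append((i, j))
    return x_bits
```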

Proceedings ArticleDOI
30 Oct 2001
TL;DR: An automatic test pattern generation (ATPG) method is presented for a scan-based test architecture which minimizes ATE storage requirements and reduces the bandwidth between the automatic test equipment and the chip under test.
Abstract: An automatic test pattern generation (ATPG) method is presented for a scan-based test architecture which minimizes ATE storage requirements and reduces the bandwidth between the automatic test equipment (ATE) and the chip under test. To generate tailored deterministic test patterns, a standard ATPG tool performing dynamic compaction and allowing constraints on circuit inputs is used. The combination of an appropriate test architecture and the tailored test patterns reduces the test data volume up to two orders of magnitude compared with standard compacted test sets.

Proceedings ArticleDOI
13 Mar 2001
TL;DR: A method for the generation of effective programs for the self-test of a processor is described; the method can be partially automated and combines ideas from traditional functional approaches and from the ATPG field.
Abstract: Testing is a crucial issue in the SOC development and production process. A popular solution for SOCs that include microprocessor cores is based on making them execute a test program, thus implementing a very attractive BIST solution. This paper describes a method for the generation of effective programs for the self-test of a processor. The method can be partially automated and combines ideas from traditional functional approaches and from the ATPG field. We assess the feasibility and effectiveness of the method by applying it to an 8051 core.

Proceedings ArticleDOI
22 Jun 2001
TL;DR: The proposed delay model accurately captures the effect of the targeted delay phenomena over a wide range of transition times and skews and captures the effects of more variables than table lookup methods can handle.
Abstract: We present a new model to capture the delay phenomena associated with simultaneous to-controlling transitions. The proposed delay model accurately captures the effect of the targeted delay phenomena over a wide range of transition times and skews. It also captures the effects of more variables than table lookup methods can handle. The model helps improve the accuracy of static timing analysis, incremental timing refinement, and timing-based ATPG.

Journal ArticleDOI
TL;DR: The proposed improvement allows us to drop tests without simulating them based on the fact that the faults they detect will be detected by tests that will be simulated later, hence the name of the improved procedure: forward-looking fault simulation.
Abstract: Fault simulation of a test set in an order different from the order of generation (e.g., reverse- or random-order fault simulation) is used as a fast and effective method to drop unnecessary tests from a test set in order to reduce its size. We propose an improvement to this type of fault simulation process that makes it even more effective in reducing the test-set size. The proposed improvement allows us to drop tests without simulating them based on the fact that the faults they detect will be detected by tests that will be simulated later, hence the name of the improved procedure: forward-looking fault simulation. We present experimental results to demonstrate the effectiveness of the proposed improvement.
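For contrast, here is the baseline reverse-order dropping that forward-looking fault simulation improves on, with `detects` as an assumed precomputed interface; the forward-looking refinement additionally drops a test without simulating it when every fault it detects is known to be caught by a test simulated later.

```python
def reverse_order_drop(tests, detects):
    """Simulate tests in reverse order; keep a test only if it detects a
    fault not yet detected by a kept test. `detects[t]` is the set of
    faults test t detects (assumed available from simulation)."""
    kept, covered = [], set()
    for t in reversed(tests):
        if detects[t] - covered:
            kept.append(t)
            covered |= detects[t]
    return kept[::-1]

tests = ["t1", "t2", "t3"]
detects = {"t1": {"f1", "f2"}, "t2": {"f1"}, "t3": {"f2", "f3"}}
print(reverse_order_drop(tests, detects))   # ['t2', 't3']: t1 is dropped
```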

Patent
Joseph T. Apuzzo, John Paul Marino, Curtis L. Hoskins, Timothy L. Race, Hemant R. Suri
23 Oct 2001
TL;DR: In this article, a functional testing and evaluation technique is provided employing an abstraction matrix that describes a complex software component to be tested, and test cases are derived from the at least one test case scenario and used to test the software component.
Abstract: A functional testing and evaluation technique is provided employing an abstraction matrix that describes a complex software component to be tested. The abstraction matrix includes at least one test case scenario and mapped expected results therefor. Test cases are derived from the at least one test case scenario and used to test the software component, thereby generating test results. The test results are automatically evaluated using the abstraction matrix. The evaluating includes comparing a test case to the at least one test case scenario of the abstraction matrix and, if a match is found, comparing the test result for that test case with the mapped expected result therefor in the abstraction matrix.

Proceedings ArticleDOI
29 Mar 2001
TL;DR: This paper presents a new test resource partitioning scheme that is a hybrid approach between external testing and BIST, based on weighted pseudo-random testing and uses a novel approach for compressing and storing the weight sets.
Abstract: This paper presents a new test resource partitioning scheme that is a hybrid approach between external testing and BIST. It reduces tester storage requirements and tester bandwidth requirements by orders of magnitude compared to conventional external testing, but requires much less area overhead than a full BIST implementation providing the same fault coverage. The proposed approach is based on weighted pseudo-random testing and uses a novel approach for compressing and storing the weight sets. Three levels of compression are used to greatly reduce test costs. No test points or any modifications are made to the function logic. The proposed scheme requires adding only a small amount of additional hardware to the STUMPS architecture. Experimental results comparing the proposed approach with other approaches are presented.

Proceedings ArticleDOI
30 Oct 2001
TL;DR: The study focuses on the location and distribution of probable bridging defects and attempts to explain the findings in the context of the characteristics of the design and its implementation.
Abstract: We present an experimental study of bridging fault locations on the Intel Pentium™ 4 CPU as determined by an inductive fault analysis tool. The study focuses on the location and distribution of probable bridging defects and attempts to explain the findings in the context of the characteristics of the design and its implementation. The coverage obtained against these faults by manually generated functional patterns is compared against that achieved by ATPG vectors.