
Showing papers on "Automatic test pattern generation published in 2008"


Journal ArticleDOI
TL;DR: A novel method to automatically generate test cases based on UML state models is presented, which is fully automatic and the generated test cases satisfy transition path coverage criteria.
Abstract: UML is widely accepted and used by industry for modelling and design of software systems. A novel method to automatically generate test cases based on UML state models is presented. The present approach exploits the control and data flow logic available in the UML state diagram to generate test data. The state machine graph is traversed and the conditional predicates on every transition are selected. These conditional predicates are then transformed, and a function minimisation technique is applied to generate test cases. The present test data generation scheme is fully automatic, and the generated test cases satisfy the transition path coverage criterion. The generated test cases can be used to test class-level as well as cluster-level state-dependent behaviours.
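The function-minimisation step can be pictured with a Korel-style branch distance: a guard such as x > y is turned into a numeric distance that is zero exactly when the guard holds, and a search procedure then minimises that distance to find satisfying test data. The sketch below is a generic illustration of this idea, not the paper's algorithm; the guard, the unit-step hill climb, and the starting point are all assumptions.

```python
def branch_distance(x, y):
    # Distance to satisfying the guard "x > y": zero when the guard holds,
    # otherwise proportional to how far the inputs are from flipping it.
    return 0 if x > y else (y - x) + 1

def minimise(guard_dist, x0, y0, max_iters=1000):
    # Naive hill climbing on the branch distance; a stand-in for the
    # function-minimisation step described in the abstract.
    x, y = x0, y0
    for _ in range(max_iters):
        d = guard_dist(x, y)
        if d == 0:
            return x, y
        # Probe unit moves in each direction and keep the best one.
        candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        x, y = min(candidates, key=lambda c: guard_dist(*c))
    return None

print(minimise(branch_distance, 0, 10))  # -> (11, 10)
```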

86 citations


Journal ArticleDOI
TL;DR: This paper presents a technique that applies structural knowledge about the circuit during the transformation of the ATPG problem into CNF for Boolean SAT solvers, and shows that the size of the problem instances decreases, as does the run time of the ATPG process.
Abstract: Due to the rapidly growing size of integrated circuits, there is a need for new algorithms for automatic test pattern generation (ATPG). While classical algorithms reach their limit, there have been recent advances in algorithms to solve Boolean Satisfiability (SAT). Because Boolean SAT solvers work on conjunctive normal forms (CNFs), the problem has to be transformed. During transformation, relevant information about the problem might get lost and, therefore, is not available in the solving process. In this paper, we present a technique that applies structural knowledge about the circuit during the transformation. As a result, the size of the problem instances decreases, as does the run time of the ATPG process. The technique was implemented, and experimental results are presented. The approach was combined with the ATPG framework of NXP Semiconductors. It is shown that the overall performance of an industrial framework can be significantly improved. Further experiments show the benefits with regard to the efficiency and robustness of the combined approach.
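The CNF transformation the abstract refers to is typically a Tseitin-style encoding, in which each gate becomes a small set of clauses over an auxiliary variable; the paper's contribution is exploiting circuit structure during this step, which the plain encoding below does not do. A minimal sketch (the toy circuit, variable numbering, and DIMACS-style integer literals are illustrative assumptions):

```python
def tseitin_and(out, a, b):
    # Clauses for out <-> (a AND b); positive ints are variables,
    # negative ints their negations (DIMACS convention).
    return [[-a, -b, out], [a, -out], [b, -out]]

def tseitin_or(out, a, b):
    # Clauses for out <-> (a OR b).
    return [[a, b, -out], [-a, out], [-b, out]]

# Hypothetical toy circuit: y = (x1 AND x2) OR x3. Constraining y to 1
# asks a SAT solver for an input pattern that drives the output high --
# the flavor of query an ATPG-to-SAT transformation produces.
x1, x2, x3, g1, y = 1, 2, 3, 4, 5
cnf = tseitin_and(g1, x1, x2) + tseitin_or(y, g1, x3) + [[y]]
print(len(cnf), "clauses")  # -> 7 clauses
```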

86 citations



Proceedings ArticleDOI
27 Apr 2008
TL;DR: It is shown that, for the same pattern count, the selected patterns are more effective than timing-aware ATPG for detecting small delay defects caused by resistive shorts, resistive opens, and process variations.
Abstract: Timing-related defects are becoming increasingly important in nanometer technology designs. Small delay variations induced by crosstalk, process variations, power-supply noise, as well as resistive opens and shorts can potentially cause timing failures in a design, thereby leading to quality and reliability concerns. We present a test-grading technique to leverage the method of output deviations for screening small-delay defects (SDDs). A new gate-delay defect probability measure is defined to model delay variations for nanometer technologies. The proposed technique intelligently selects the best set of patterns for SDD detection from an n-detect pattern set generated using timing-unaware automatic test-pattern generation (ATPG). It offers significantly lower computational complexity and it excites a larger number of long paths compared to previously proposed timing-aware ATPG methods. We show that, for the same pattern count, the selected patterns are more effective than timing-aware ATPG for detecting small delay defects caused by resistive shorts, resistive opens, and process variations.
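The selection step can be pictured as grading every pattern in the n-detect set and keeping the best-scoring ones under a fixed pattern-count budget. The deviation-based score itself is defined in the paper; the pattern names, scores, and budget below are invented placeholders for that metric.

```python
def select_patterns(scores, budget):
    # Greedy selection: keep the `budget` highest-scoring patterns from an
    # n-detect set; `scores` maps pattern id -> deviation-style grade.
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:budget]

# Hypothetical grades for four patterns of an n-detect set.
scores = {"p0": 0.12, "p1": 0.48, "p2": 0.31, "p3": 0.05}
print(select_patterns(scores, 2))  # -> ['p1', 'p2']
```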

74 citations


Proceedings ArticleDOI
11 May 2008
TL;DR: This contribution introduces the risk-based testing technique RiteDAP, which automatically generates system test cases from activity diagrams and prioritizes those test cases based on risk and the results of applying the technique to a practical example are presented.
Abstract: In practice, available testing budgets limit the number of test cases that can be executed. Thus, a representative subset of all possible test cases must be chosen to guarantee adequate coverage of a test object. In risk-based testing, the probability of a fault and the damage that the fault can cause when it leads to a failure are considered for test case prioritization. Existing approaches for risk-based testing provide guidelines for deriving test cases. However, those guidelines lack the level of detail and precision needed for automation. In this contribution, we introduce the risk-based testing technique RiteDAP, which automatically generates system test cases from activity diagrams and prioritizes those test cases based on risk. The results of applying the technique to a practical example are presented, and the ability of different prioritization strategies to uncover faults is evaluated.
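Risk-based prioritization generally orders tests by the product of fault probability and expected damage. The sketch below illustrates only that ordering rule; RiteDAP derives its risk values from annotated activity diagrams, and the test names and numbers here are invented.

```python
def prioritise(tests):
    # Risk-based ordering: risk = fault probability x expected damage,
    # highest risk first. A generic sketch of the prioritization rule.
    return sorted(tests, key=lambda t: t["prob"] * t["damage"], reverse=True)

# Hypothetical system tests with assumed probability/damage estimates.
tests = [
    {"name": "t_login",  "prob": 0.2, "damage": 9.0},   # risk 1.8
    {"name": "t_search", "prob": 0.6, "damage": 1.0},   # risk 0.6
    {"name": "t_pay",    "prob": 0.3, "damage": 8.0},   # risk 2.4
]
print([t["name"] for t in prioritise(tests)])  # -> ['t_pay', 't_login', 't_search']
```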

68 citations


Journal ArticleDOI
TL;DR: The efficacy of the proposed approach is assessed on a number of programs, and the empirical results indicate that its performance is significantly better than that of existing dynamic test data generation methods.

65 citations


Proceedings ArticleDOI
01 Oct 2008
TL;DR: The classical triple modular redundancy (TMR) fault tolerant architecture is used as a case study, and a new way to implement the TMR architecture is proposed that makes it very effective for yield improvement purposes.
Abstract: With the technology entering the nano dimension, manufacturing processes are less and less reliable, thus drastically impacting the yield. A possible solution to alleviate this problem in the future could consist in using fault tolerant architectures to tolerate manufacturing defects. In this paper, we use the classical triple modular redundancy (TMR) fault tolerant architecture as a case study. Firstly, we analyze the conditions that make the use of TMR architectures interesting for yield improvement purposes. In the second part of the paper, we investigate the test requirements for the TMR architecture and propose a solution for generating test patterns for this type of architecture. Finally, we propose a new way to implement the TMR architecture that makes it very effective for yield improvement purposes. Experimental results are provided on ISCAS and ITC benchmark circuits to prove the efficiency of the proposed approach in terms of yield improvement with a low area overhead.
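The core of a TMR architecture is a 2-of-3 majority voter that masks a defect confined to a single replica. A minimal bitwise sketch of that voting rule (the example word and fault mask are arbitrary):

```python
def tmr_vote(a, b, c):
    # Bitwise 2-of-3 majority voter: the output bit is whatever at least
    # two of the three replicas agree on, masking a single faulty replica.
    return (a & b) | (a & c) | (b & c)

# A manufacturing defect corrupts bits in one replica only; the voted
# output is unaffected.
golden = 0b1011
faulty = golden ^ 0b0100   # single replica with a flipped bit
print(bin(tmr_vote(golden, golden, faulty)))  # -> 0b1011
```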

60 citations


Proceedings ArticleDOI
10 Mar 2008
TL;DR: A pattern compaction technique that considers the layout and gate distribution when generating transition delay fault patterns, which will prevent large IR-drop in high demand regions while still allowing a suitable amount of switching to occur elsewhere on the chip to prevent fault coverage loss.
Abstract: Market and customer demands have continued to push the limits of CMOS performance. At-speed test has become a common method to ensure these high-performance chips are shipped to customers fault-free. However, at-speed tests have been known to create higher-than-average switching activity, which normally is not accounted for in the design of the power supply network. This potentially creates conditions for additional delay in the chip, causing it to fail during test. In this paper, we propose a pattern compaction technique that considers the layout and gate distribution when generating transition delay fault patterns. The technique focuses on evenly distributing the switching activity generated by the patterns across the layout, rather than allowing the high switching activity in a small area of the chip that could occur with conventional delay fault pattern generation. Due to the relationship between switching activity and IR-drop, the reduction of switching will prevent large IR-drop in high-demand regions while still allowing a suitable amount of switching to occur elsewhere on the chip to prevent fault coverage loss. This even distribution of switching on the chip will also help avoid hot spots.

59 citations


Journal ArticleDOI
TL;DR: A new approach to prioritize test cases that takes into account the coverage requirements present in the relevant slices of the outputs of test cases is presented, which provides interesting insights into the effectiveness of using relevant slices for test case prioritization.

57 citations


Journal ArticleDOI
TL;DR: This article sketches a brief history of test technology research, tracking the evolution of compression technology that has led to the success of scan compression, and presents the important concepts at a high level on a coarse timeline.
Abstract: The beginnings of the modern-day IC test trace back to the introduction of such fundamental concepts as scan, stuck-at faults, and the D-algorithm. Since then, several subsequent technologies have made significant improvements to the state of the art. Today, IC test has evolved into a multifaceted industry that supports innovation, but growing design sizes have driven up test data volume and test application time. Scan compression technology has proven to be a powerful antidote to this problem, as it has catalyzed reductions in test data volume and test application time of up to 100 times. This article sketches a brief history of test technology research, tracking the evolution of compression technology that has led to the success of scan compression. It is not our intent to identify specific inventors on a fine-grained timeline. Instead, we present the important concepts at a high level, on a coarse timeline. Starting in 1998 and continuing to the present, numerous scan-compression-related inventions have had a major impact on the test landscape. However, this article also is not a survey of the various scan compression methods. Rather, we focus on the evolution of the types of constructs used to create breakthrough solutions.

55 citations


Proceedings ArticleDOI
03 Dec 2008
TL;DR: Empirical evidence demonstrates that G2Way in some cases outperforms existing strategies, including AETG and its variants, in terms of the number of generated test data within reasonable execution time.
Abstract: Our continuous dependence on software (i.e., to assist as well as facilitate our daily chores) often raises dependability issues, particularly when software is employed in harsh, life-threatening, or (safety-)critical applications. Here, rigorous software testing becomes immensely important. Many combinations of possible input parameters, hardware/software environments, and system conditions need to be tested and verified for conformance. Due to resource constraints as well as time and cost factors, considering all exhaustive test possibilities would be impossible (i.e., due to the combinatorial explosion problem). Earlier work suggests that a pairwise sampling strategy (i.e., based on two-way parameter interaction) can be effective. Building on and complementing earlier work, this paper discusses an efficient pairwise test data generation strategy, called G2Way. In doing so, this paper demonstrates the correctness of G2Way and compares its effectiveness against existing strategies including AETG and its variations, IPO, SA, GA, ACA, and All Pairs. Empirical evidence demonstrates that G2Way in some cases outperforms the other strategies in terms of the number of generated test data within reasonable execution time.
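Greedy pairwise strategies in the AETG family repeatedly add the candidate test that covers the most still-uncovered parameter-value pairs. The sketch below illustrates that family of strategies, not G2Way's actual algorithm; the parameter domains are invented, and the exhaustive candidate scan only suits tiny configuration spaces.

```python
from itertools import combinations, product

def pairwise_suite(domains):
    # Greedy 2-way (pairwise) test generation: while uncovered value pairs
    # remain, add the full combination covering the most of them.
    uncovered = {((i, v), (j, w))
                 for (i, d1), (j, d2) in combinations(list(enumerate(domains)), 2)
                 for v in d1 for w in d2}
    suite = []
    while uncovered:
        best = max(product(*domains),
                   key=lambda t: len(uncovered & set(combinations(enumerate(t), 2))))
        suite.append(best)
        uncovered -= set(combinations(enumerate(best), 2))
    return suite

# Three hypothetical parameters, two values each: 8 exhaustive tests,
# but far fewer suffice to cover every pair of values.
suite = pairwise_suite([[0, 1], ["a", "b"], ["x", "y"]])
print(len(suite))  # -> 4
```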

Proceedings ArticleDOI
27 Apr 2008
TL;DR: A highly optimized branch-and-bound algorithm to determine the values to be assigned to the aggressor lines is used to reduce both the ATPG effort and the number of aborts; the resulting test sets are smaller and achieve a higher defect coverage than stuck-at n-detection test sets, and are robust against process variations.
Abstract: We present a fully automated flow to generate test patterns for interconnect open defects. Both inter-layer opens (open-via defects) and arbitrary intra-layer opens can be targeted. An aggressor-victim model used in industry is employed to describe the electrical behavior of the open defect. The flow is implemented using standard commercial tools for parameter extraction (PEX) and test generation (ATPG). A highly optimized branch-and-bound algorithm to determine the values to be assigned to the aggressor lines is used to reduce both the ATPG effort and the number of aborts. The resulting test sets are smaller and achieve a higher defect coverage than stuck-at n-detection test sets, and are robust against process variations.

Proceedings ArticleDOI
12 Nov 2008
TL;DR: A method to test composite Web services described in BPEL is proposed, along with an algorithm to generate test cases based on simulation, where the exploration is guided by test purposes.
Abstract: In order to specify the composition of Web services, WSBPEL was defined as an orchestration language by an international standards consortium. In this paper, we propose a method to test composite Web services described in BPEL. As a first step, the BPEL specification is transformed into an Intermediate Format (IF) model that is based on timed automata, which enables modeling of timing constraints. We defined a conformance relation between two timed automata (of the implementation and the specification) and then proposed an algorithm to generate test cases. Test case generation is based on simulation, where the exploration is guided by test purposes. The proposed method was implemented in a set of tools, which were applied to a common Web service as a case study.

Proceedings ArticleDOI
01 Oct 2008
TL;DR: This work explores an adaptive strategy, by introducing a technique that prunes the test set based on a test correlation analysis, which delivers significant test time reductions while attaining higher test quality compared to previous adaptive test methodologies.
Abstract: The ever-increasing complexity of mixed-signal circuits imposes an increasingly complicated and comprehensive parametric test requirement, resulting in a highly lengthened manufacturing test phase. Attaining parametric test cost reduction with no test quality degradation constitutes a critical challenge during test development. The capability of parametric test data to capture systematic process variations engenders a highly accurate prediction of the efficiency of each test for a particular lot of chips even on the basis of a small quantity of characterized data. The predicted test efficiency further enables the adjustment of the test set and test order, leading to an early detection of faults. We explore such an adaptive strategy, by introducing a technique that prunes the test set based on a test correlation analysis. A test selection algorithm is proposed to identify the minimum set of tests that delivers a satisfactory defect coverage. A probabilistic measure that reflects the defect detection efficiency is used to order the test set so as to enhance the probability of an early detection of faulty chips. The test sequence is further optimized during the testing process by dynamically adjusting the initial test order to adapt to the local defect pattern fluctuations in the lot of chips under test. Experimental results show that the proposed technique delivers significant test time reductions while attaining higher test quality compared to previous adaptive test methodologies.

Journal ArticleDOI
TL;DR: This paper presents the use of the output deviation as a surrogate coverage metric for pattern modeling and test grading, and shows that, for the ISCAS benchmark circuits and as compared to other reordering methods, the proposed method provides "steeper" coverage curves for different fault models.
Abstract: At-speed functional testing, delay testing, and n-detection test sets are being used today to detect deep-submicrometer defects. However, the resulting test data volumes are too high; the 2005 International Roadmap for Semiconductors predicts that test-application times will be 30 times larger in 2010 than they are today. In addition, many new types of defects cannot be accurately modeled using existing fault models. Therefore, there is a need to model the quality of test patterns such that they can be quickly assessed for defect screening. Test selection is required to choose the most effective pattern sequences from large test sets. Current industry practice for test selection is based on fault grading, which is computationally expensive and must also be repeated for every fault model. Moreover, although efficient methods exist today for fault-oriented test generation, there is a lack of understanding on how best to combine the test sets thus obtained, i.e., derive the most effective union of the individual test sets without simply taking all the patterns for each fault model. This paper presents the use of the output deviation as a surrogate coverage metric for pattern modeling and test grading. A flexible, but general, probabilistic fault model is used to generate a probability map for the circuit, which can subsequently be used for test-pattern reordering. The output deviations resulting from the probability map(s) are used as a coverage metric to model test patterns; the higher the deviation, the better the quality of the test pattern. We show that, for the ISCAS benchmark circuits and as compared to other reordering methods, the proposed method provides "steeper" coverage curves for different fault models.
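Probability maps of this kind are built by propagating signal probabilities through the gates of the circuit. The sketch below shows only that propagation step under an input-independence assumption; the paper's deviation metric and its probabilistic fault model are richer than this, and the toy circuit is invented.

```python
def and_p(pa, pb):
    # Probability the AND output is 1, assuming independent inputs.
    return pa * pb

def or_p(pa, pb):
    # Probability the OR output is 1, assuming independent inputs.
    return pa + pb - pa * pb

def not_p(pa):
    return 1.0 - pa

# Toy circuit y = NOT(x1 AND x2) OR x3 with uniformly random inputs.
p_x1 = p_x2 = p_x3 = 0.5
p_g = not_p(and_p(p_x1, p_x2))   # internal gate: 0.75
p_y = or_p(p_g, p_x3)            # output
print(p_y)  # -> 0.875
```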

Journal ArticleDOI
TL;DR: An efficient fault simulation procedure for this model is described and an efficient test generation procedure is discussed that combines tests for transition faults along the target paths in order to obtain tests that satisfy the requirements of the new model.
Abstract: We propose a new path delay fault model called the transition path delay fault model. This model addresses the following issue. The path delay fault model captures small extra delays, such that each one by itself will not cause the circuit to fail, but their cumulative effect along a path from inputs to outputs can result in faulty behavior. However, non-robust tests for path delay faults may not detect situations where the cumulative effect of small extra delays is sufficient to cause faulty behavior after any number of extra delays are accumulated along a subpath. Under the new path delay fault model, a path delay fault is detected when all the single transition faults along the path are detected by the same test. This ensures that if the accumulation of small extra delays along a subpath is sufficient to cause faulty behavior, the faulty behavior will be detected due to the detection of a transition fault at the end of the subpath. We discuss the new model and present experimental results to demonstrate its viability as an alternative to the standard path delay fault model. We describe an efficient fault simulation procedure for this model. We also describe test generation procedures. An efficient test generation procedure we discuss combines tests for transition faults along the target paths in order to obtain tests that satisfy the requirements of the new model.

Proceedings ArticleDOI
08 Dec 2008
TL;DR: This work is the first to solve the yield loss caused by excessive power supply noise in at-speed scan testing by proposing a novel integrated ATPG scheme that efficiently and effectively performs compressible X-filling.
Abstract: Yield loss caused by excessive power supply noise has become a serious problem in at-speed scan testing. Although X-filling techniques are available to reduce the launch cycle switching activity, their performance may not be satisfactory in the linear-decompressor-based test compression environment. This work is the first to solve this problem by proposing a novel integrated ATPG scheme that efficiently and effectively performs compressible X-filling. Related theoretical principles are established, based on which the problem size is substantially reduced. The proposed scheme is validated by large benchmark circuits as well as an industry design in the embedded deterministic test (EDT) environment.
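X-filling assigns values to the unspecified (X) bits of a test cube, and a common baseline for cutting launch switching is adjacent fill, which repeats the last specified value along the scan chain. The sketch below shows only that baseline; the paper's contribution is making the fill compressible by a linear decompressor, which this toy does not model, and the example cube is invented.

```python
def adjacent_fill(cube):
    # Fill don't-care ('X') bits with the last specified value so that
    # consecutive scan cells toggle as little as possible.
    filled, last = [], '0'
    for bit in cube:
        if bit == 'X':
            filled.append(last)
        else:
            filled.append(bit)
            last = bit
    return ''.join(filled)

def transitions(s):
    # Number of adjacent-bit toggles, a proxy for scan switching activity.
    return sum(1 for a, b in zip(s, s[1:]) if a != b)

cube = "1XX0XX1X"
filled = adjacent_fill(cube)
print(filled, transitions(filled))  # -> 11100011 2
```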


Journal ArticleDOI
TL;DR: The experimental results on two pipelined processor models demonstrate several orders-of-magnitude reduction in overall validation effort by drastically reducing both test-generation time and number of test programs required to achieve a coverage goal.
Abstract: Functional validation is a major bottleneck in pipelined processor design due to the combined effects of increasing design complexity and lack of efficient techniques for directed test generation. Directed test vectors can reduce overall validation effort, since shorter tests can obtain the same coverage goal compared to the random tests. This article presents a specification-driven directed test generation methodology. The proposed methodology makes three important contributions. First, a general graph model is developed that can capture the structure and behavior (instruction set) of a wide variety of pipelined processors. The graph model is generated from the processor specification. Next, we propose a functional fault model that is used to define the functional coverage for pipelined architectures. Finally, we propose two complementary test generation techniques: test generation using model checking, and test generation using template-based procedures. These test generation techniques accept the graph model of the architecture as input and generate test programs to detect all the faults in the functional fault model. Our experimental results on two pipelined processor models demonstrate several orders-of-magnitude reduction in overall validation effort by drastically reducing both test-generation time and number of test programs required to achieve a coverage goal.

Journal ArticleDOI
TL;DR: Novel and efficient methods for built-in self-test (BIST) of field-programmable gate arrays (FPGAs) for detection and diagnosis of permanent faults in current, as well as emerging, technologies that are expected to have high fault densities are presented.
Abstract: We present novel and efficient methods for built-in self-test (BIST) of field-programmable gate arrays (FPGAs) for detection and diagnosis of permanent faults in current, as well as emerging, technologies that are expected to have high fault densities. Our basic BIST methods can be used in both online and offline testing scenarios, although we focus on the former in this paper. We present 1- and 2-diagnosable BISTer designs that make up a ROving TEster (ROTE). Due to their provable diagnosabilities, these BISTers can avoid time-intensive adaptive diagnosis without significantly compromising diagnostic coverage (the percentage of faults correctly diagnosed). We also develop functional testing methods that test programmable logic blocks (PLBs) in only two circuit functions that will be mapped to them as the ROTE moves across a functioning FPGA. We extend our basic BISTer designs to those with test-pattern generators (TPGs) using multiple PLBs to more efficiently test the complex PLBs of current commercial FPGAs, and we also prove the diagnosabilities of these designs. Simulation results show that our 1-diagnosable functional-test-based BISTer with a three-PLB TPG has very high diagnostic coverage. For example, for a random-fault distribution, our nonadaptive-diagnosis methods provide diagnostic coverages of 96% and 88% at fault densities of 10% and 25%, respectively, whereas the previous best nonadaptive-diagnosis method, the STAR-3t2 BISTer, has diagnostic coverages of about 75% and 55% at these fault densities.

Proceedings ArticleDOI
30 Sep 2008
TL;DR: A method to identify a small, but important, subset of scan cells that are "likely" to capture an X, place them on separate "X-chains", create a combinational unload compressor tuned for these X-chains, and modify test generation to take advantage of this circuit is presented.
Abstract: Scan testing and scan compression are key to realizing cost reduction and quality control of ever more complex designs. However, compression can be limited if the density of unknown (X) values is high. We present a method to identify a small, but important, subset of scan cells that are "likely" to capture an X, place them on separate "X-chains", create a combinational unload compressor tuned for these X-chains, and modify test generation to take advantage of this circuit. This method is fully integrated in the design-for-test (DFT) flow, requires no additional user input and has negligible impact on area and timing. Test generation results on industrial designs demonstrate significantly increased compression, with no loss of coverage, for designs with high X-densities.

Proceedings ArticleDOI
27 Apr 2008
TL;DR: This work presents the first step in developing an alternative test methodology for scan cell internal faults, and defines a new flush test that is shown to add 2.3% and 8.8% to the stuck-at and stuck-on fault coverage, respectively.
Abstract: Scan chains contain approximately 50% of the logic transistors in large industrial designs. Yet, faults in the scan cells are not directly targeted by scan tests and are assumed to be detected by flush tests. Reported results of targeting the scan cell internal faults using checking sequences show such tests to be about 4.5 times longer than scan stuck-at test sets, and they require a sequential test generator, even for full-scan circuits. We present the first step in developing an alternative test methodology for scan cell internal faults. The fault detection capability of existing tests (flush tests, stuck-at tests, and transition delay fault tests) is quantified. Existing tests are shown to have similar coverage to checking sequences. A new flush test, viz. the half-speed flush test, is defined. This new test is shown to add 2.3% and 8.8% to the stuck-at and stuck-on fault coverage, respectively.

Proceedings ArticleDOI
08 Dec 2008
TL;DR: Experimental results show that interconnect-delay variations can have a significant impact on the long paths that must be targeted for the detection of SDDs, and the proposed pattern-grading and pattern-selection method is more effective than a commercial timing-aware ATPG tool, and requires considerably less CPU time.
Abstract: Timing-related failures in high-performance integrated circuits are being increasingly dominated by small-delay defects (SDDs). Such delay faults are caused by process variations, crosstalk, power-supply noise, and defects such as resistive shorts and opens. Recently, the concept of output deviations has been presented as a surrogate long-path coverage metric for SDDs. However, this approach is focused only on delay variations for logic gates and it ignores chip layout, interconnect defects, and delay variations on interconnects. We present a layout-aware output deviations metric that can easily handle interconnect delay variations. Experimental results show that interconnect-delay variations can have a significant impact on the long paths that must be targeted for the detection of SDDs. For the same pattern count, the proposed pattern-grading and pattern-selection method is more effective than a commercial timing-aware ATPG tool for SDDs, and requires considerably less CPU time.

Proceedings ArticleDOI
08 Dec 2008
TL;DR: This work presents a novel technique to generate a capture signal on-chip, with programmable delay, which enables faster-than-at-speed test and can be easily incorporated into current scan-based delay test methods.
Abstract: The increasing gap between modern chip frequencies and the test clock frequencies provided by external test equipment makes at-speed delay testing a challenge. We present a novel technique to generate a capture signal on-chip, with programmable delay, which enables faster-than-at-speed test. The test clock frequency can be programmed as part of the test vector itself. Since the test clock frequency can be controlled, it is no longer necessary to depend only on long paths for detecting small delay defects, which provides flexibility in selecting test paths. The technique has minimal overhead in terms of area and design effort and can be easily incorporated into current scan-based delay test methods.

Proceedings ArticleDOI
10 Nov 2008
TL;DR: This paper proposes a novel capture power-aware test compression scheme that is able to keep scan capture power under a safe limit with little loss in test compression ratio.
Abstract: Large test data volume and high test power are two of the major concerns for the industry when testing large integrated circuits. With given test cubes in scan-based testing, the "don't-care" bits can be exploited for test data compression and/or test power reduction. Prior work either targets only one of these two issues or considers reducing test data volume and scan shift power together. In this paper, we propose a novel capture-power-aware test compression scheme that is able to keep scan capture power under a safe limit with little loss in test compression ratio. Experimental results on benchmark circuits demonstrate the efficacy of the proposed approach.

Book ChapterDOI
01 Sep 2008
TL;DR: An approach for the construction of feature test models expressed in the CSP process algebra, from use cases described in a controlled natural language, which automatically generates test cases for both individual features and feature interactions in the context of an industrial cooperation with Motorola Inc.
Abstract: We introduce an approach for the construction of feature test models expressed in the CSP process algebra, from use cases described in a controlled natural language. From these models, our strategy automatically generates test cases for both individual features and feature interactions, in the context of an industrial cooperation with Motorola Inc., where each feature represents a mobile device functionality. The test case generation can be guided by test purposes, which allow selection based on particular traces of interest. More generally, we characterise a testing theory in terms of CSP: test models, test purposes, test cases, test execution, test verdicts and soundness are entirely defined in terms of CSP processes and refinement notions. We have also developed a tool, ATG, which mechanises the entire generation process.

Proceedings ArticleDOI
24 Nov 2008
TL;DR: CTX effectively reduces launch switching activity, thus yield loss risk, even with a small number of don't care (X) bits as in test compression, without any impact on test data volume, fault coverage, performance, and circuit design.
Abstract: At-speed scan testing is susceptible to yield loss risk due to power supply noise caused by excessive launch switching activity. This paper proposes a novel two-stage scheme, namely CTX (Clock-Gating-Based Test Relaxation and X-Filling), for reducing switching activity when the test stimulus is launched. Test relaxation and X-filling are conducted (1) to make as many FFs inactive as possible by disabling the corresponding clock-control signals of the clock-gating circuitry in Stage-1 (Clock-Disabling), and (2) to make as many of the remaining active FFs as possible have equal input and output values in Stage-2 (FF-Silencing). CTX effectively reduces launch switching activity, and thus yield loss risk, even with a small number of don't-care (X) bits as in test compression, without any impact on test data volume, fault coverage, performance, or circuit design.
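The FF-silencing stage described above can be sketched as follows (a minimal illustration of the idea, not the CTX implementation; the bit-string representation is an assumption for the example):

```python
def ff_silence(stimulus, responses):
    """For each flip-flop, fill an unspecified ('X') stimulus bit with the FF's
    current output value so that the FF does not toggle at launch."""
    return ''.join(out if stim == 'X' else stim
                   for stim, out in zip(stimulus, responses))

def launch_switching(stimulus, responses):
    """Count FFs whose next-state input differs from their output,
    i.e. FFs that toggle when the launch clock fires."""
    return sum(s != o for s, o in zip(stimulus, responses))

stim, resp = "1X0XX1", "110010"
filled = ff_silence(stim, resp)
print(filled)                            # "110011"
print(launch_switching(filled, resp))    # 1 toggle; only the specified bits can switch
```

Only the bits that the test cube actually specifies can still cause launch switching; every X bit has been "silenced" by matching its FF's present state.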

Proceedings ArticleDOI
S. Eichenberger1, J. Geuzebroek1, C. Hora1, B. Kruseman1, Ananta K. Majhi1 
08 Dec 2008
TL;DR: In this paper, the authors present a methodology for applying volume scan diagnosis, known from the field of yield learning, to the domain of test quality learning, which allows the learning cycle to be drastically accelerated.
Abstract: With test quality being an imperative, this paper presents a methodology for applying volume scan diagnosis - known from the field of yield learning - to the domain of test quality learning. Volume diagnosis drastically accelerates the learning cycle. We give guidelines on how to improve test pattern generation strategies and try to answer which defects can be addressed deterministically with adequate fault models, and where probabilistic methods such as N-detect need to be applied. The paper is based on a detailed analysis of scan diagnosis data from a production volume of well over one million devices of a 90 nm product.

Proceedings ArticleDOI
17 Dec 2008
TL;DR: This paper generates test scenarios from activity diagrams that fully satisfy test adequacy criteria, and then generates test cases by analyzing the respective sequence and class diagrams of each scenario to achieve maximum path coverage.
Abstract: Testing of software is a time-consuming activity that requires a great deal of planning and resources. Model-based testing is gaining importance as a research issue. In scenario-based testing, test scenarios are used for generating test cases, test drivers, etc. UML is widely used to describe analysis and design specifications in software development, and UML models are an important source of information for test case design. UML activity diagrams describe the realization of an operation in the design phase and also support the description of parallel activities and the synchronization among them. In this paper, we generate test scenarios from activity diagrams that satisfy the test adequacy criteria. Finally, we generate test cases by analyzing the respective sequence and class diagrams of each scenario, achieving maximum path coverage. In our approach, the cost of test model creation is also reduced because the design is reused.
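The path-coverage criterion mentioned above can be sketched by enumerating paths through a graph representation of an activity diagram (a toy illustration, not the paper's algorithm; the diagram and node names are hypothetical):

```python
def all_paths(graph, start, end, path=None):
    """Depth-first enumeration of all loop-free paths from start to end --
    one simple way to derive test scenarios for a path-coverage criterion."""
    path = (path or []) + [start]
    if start == end:
        return [path]
    paths = []
    for nxt in graph.get(start, []):
        if nxt not in path:  # skip nodes already on the path (loop-free scenarios)
            paths.extend(all_paths(graph, nxt, end, path))
    return paths

# Hypothetical activity diagram: a decision node branching into two actions.
activity = {"start": ["check"], "check": ["approve", "reject"],
            "approve": ["end"], "reject": ["end"]}
for p in all_paths(activity, "start", "end"):
    print(" -> ".join(p))
```

Each enumerated path becomes one test scenario; the paper then derives concrete test cases for a scenario from its associated sequence and class diagrams.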

Journal ArticleDOI
TL;DR: This paper proposes a new approach to reduce test data volume and test cycle count in scan-based testing by assuming a 1-to-1 scan configuration, in which the number of internal scan chains equals the number of external scan I/O ports or test channels from the ATE.
Abstract: IC testing based on a full-scan design methodology and ATPG is the most widely used test strategy today. However, rapidly growing test costs are severely challenging the applicability of scan-based testing. Both test data size and the number of test cycles increase drastically as circuit size grows and feature size shrinks. For a full-scan circuit, test data volume and test cycle count are both proportional to the number of test patterns N and the longest scan chain length L. To reduce test data volume and test cycle count, we can reduce N, L, or both. Earlier proposals focused on reducing the number of test patterns N through pattern compaction. All these proposals assume a 1-to-1 scan configuration, in which the number of internal scan chains equals the number of external scan I/O ports or test channels (two ports per channel) from the ATE. Some researchers have shown that ATPG for a circuit with multiple clocks using the multicapture clocking scheme, as opposed to one-hot clocking, generates a reduced number of test patterns.
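The N and L dependence described above can be made concrete with first-order cost estimates (the formulas are common rules of thumb, assumed here for illustration, not taken from the paper):

```python
def scan_test_cost(n_patterns, chain_len, n_chains):
    """First-order estimates for full-scan testing:
    data volume ~ N * L * (number of chains) bits,
    test cycles ~ N * (L + 1) for shift plus one capture per pattern."""
    data_bits = n_patterns * chain_len * n_chains
    cycles = n_patterns * (chain_len + 1)
    return data_bits, cycles

# Splitting one chain of 10,000 FFs into 100 chains of 100 FFs, same N:
print(scan_test_cost(1000, 10_000, 1))   # (10000000, 10001000)
print(scan_test_cost(1000, 100, 100))    # (10000000, 101000)
```

Shortening L by adding chains cuts test cycles roughly in proportion, which is why moving beyond the 1-to-1 scan configuration (more internal chains than ATE channels) is attractive even when the raw data volume is unchanged.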