
Showing papers on "Automatic test pattern generation published in 2012"


Journal ArticleDOI
TL;DR: The μtest prototype generates test suites that find significantly more seeded defects than the original manually written test suites, and is optimized toward finding defects modeled by mutation operators rather than covering code.
Abstract: To assess the quality of test suites, mutation analysis seeds artificial defects (mutations) into programs; a nondetected mutation indicates a weakness in the test suite. We present an automated approach to generate unit tests that detect these mutations for object-oriented classes. This has two advantages: First, the resulting test suite is optimized toward finding defects modeled by mutation operators rather than covering code. Second, the state change caused by mutations induces oracles that precisely detect the mutants. Evaluated on 10 open source libraries, our μtest prototype generates test suites that find significantly more seeded defects than the original manually written test suites.

300 citations
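A minimal sketch of the mutation-analysis loop the abstract describes: seed a defect, run the tests, and check whether any oracle disagrees. The mutant and test suite below are invented for illustration; this is not the μtest tool itself.

```python
# Toy mutation analysis: a mutant is "killed" (detected) if any test's
# recorded oracle disagrees with the mutant's output.

def original(a, b):
    return a + b

def mutant(a, b):
    return a - b   # seeded defect: '+' mutated to '-'

# A test suite as (inputs, expected-output) pairs; the oracles were
# recorded from the original program.
tests = [((2, 3), 5), ((0, 0), 0), ((4, 4), 8)]

def kills(program, tests):
    return any(program(*inp) != expected for inp, expected in tests)

print("mutant detected:", kills(mutant, tests))   # True: mutant(2, 3) == -1
```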


Proceedings ArticleDOI
10 Dec 2012
TL;DR: An automated and systematic approach for testing and debugging networks called "Automatic Test Packet Generation" (ATPG) is proposed; it reads router configurations, generates a device-independent model, and finds that a small number of test packets suffices to test all rules in these networks.
Abstract: Networks are getting larger and more complex; yet administrators rely on rudimentary tools such as ping and traceroute to debug problems. We propose an automated and systematic approach for testing and debugging networks called "Automatic Test Packet Generation" (ATPG). ATPG reads router configurations and generates a device-independent model. The model is used to generate a minimum set of test packets to (minimally) exercise every link in the network or (maximally) exercise every rule in the network. Test packets are sent periodically, and detected failures trigger a separate mechanism to localize the fault. ATPG can detect both functional problems (e.g., an incorrect firewall rule) and performance problems (e.g., a congested queue). ATPG complements but goes beyond earlier work in static checking (which cannot detect liveness or performance faults) or fault localization (which only localizes faults given liveness results). We describe our prototype ATPG implementation and results on two real-world data sets: Stanford University's backbone network and Internet2. We find that a small number of test packets suffices to test all rules in these networks: for example, 4,000 packets can cover all rules in the Stanford backbone network, while 54 are enough to cover all links. Sending 4,000 test packets 10 times per second consumes less than 1% of link capacity. ATPG code and the data sets are publicly available [1].

284 citations
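Selecting a minimum set of test packets is essentially a set-cover problem; the sketch below shows the greedy selection idea on invented rule sets (ATPG derives the real ones from router configurations).

```python
# Greedy packet selection: pick, at each step, the candidate packet
# that exercises the most still-uncovered forwarding rules.

packets = {
    "p1": {"r1", "r2"},
    "p2": {"r2", "r3", "r4"},
    "p3": {"r4", "r5"},
    "p4": {"r1", "r5"},
}

def select_test_packets(packets):
    uncovered = set().union(*packets.values())
    chosen = []
    while uncovered:
        best = max(packets, key=lambda p: len(packets[p] & uncovered))
        chosen.append(best)
        uncovered -= packets[best]
    return chosen

print(select_test_packets(packets))   # ['p2', 'p4'] covers r1..r5
```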


Proceedings ArticleDOI
15 Jul 2012
TL;DR: This work proposes a novel approach that combines model-based and combinatorial testing in order to generate executable and effective test cases from a model, and introduces a post-optimization algorithm that can guarantee the combinatorial criterion of choice on the whole set of test paths extracted from the model.
Abstract: Model-based testing relies on the assumption that effective adequacy criteria can be defined in terms of model coverage achieved by a set of test paths. However, such test paths are only abstract test cases and input test data must be specified to make them concrete. We propose a novel approach that combines model-based and combinatorial testing in order to generate executable and effective test cases from a model. Our approach starts from a finite state model and applies model-based testing to generate test paths that represent sequences of events to be executed against the system under test. Such paths are transformed to classification trees, enriched with domain input specifications such as data types and partitions. Finally, executable test cases are generated from those trees using t-way combinatorial criteria. While test cases that satisfy a combinatorial criterion can be generated for each individual test path obtained from the model, we introduce a post-optimization algorithm that can guarantee the combinatorial criterion of choice on the whole set of test paths extracted from the model. The resulting test suite is smaller, but it still satisfies the same adequacy criterion. We developed a tool and used it to evaluate our approach on 6 subject systems of various types and sizes, to study the effectiveness of the generated test suites, the reduction achieved by the post-optimization algorithm, as well as the effort required to produce them.

89 citations
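To make the final step concrete, here is a greedy sketch of t-way (here pairwise, t = 2) test selection over invented input partitions for a single abstract test path; real tools use classification trees and stronger heuristics.

```python
# Greedy pairwise (t=2) covering: keep adding the candidate test that
# covers the most not-yet-covered parameter-value pairs.
from itertools import combinations, product

domains = {"browser": ["ff", "chrome"], "os": ["linux", "win"],
           "net": ["wifi", "lan"]}

def pairs(test):
    keys = sorted(test)
    return {((a, test[a]), (b, test[b])) for a, b in combinations(keys, 2)}

def pairwise_suite(domains):
    keys = sorted(domains)
    candidates = [dict(zip(keys, vals))
                  for vals in product(*(domains[k] for k in keys))]
    required = set().union(*(pairs(t) for t in candidates))
    suite, covered = [], set()
    while covered != required:
        best = max(candidates, key=lambda t: len(pairs(t) - covered))
        suite.append(best)
        covered |= pairs(best)
    return suite

suite = pairwise_suite(domains)
print(len(suite), "tests cover all 2-way interactions (exhaustive: 8)")
```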


Proceedings ArticleDOI
05 Nov 2012
TL;DR: A defect-oriented cell-aware (CA) library characterization and pattern-generation flow, applied to 1,900 cells of a 32-nm technology, enables detection of cell-internal bridges and opens that cause static, gross-delay, and small-delay defects.
Abstract: This paper describes a new approach for significantly improving overall defect coverage for CMOS-based designs. We present results from a defect-oriented cell-aware (CA) library characterization and pattern-generation flow and its application to 1,900 cells of a 32-nm technology. The CA flow enabled us to detect cell-internal bridges and opens that caused static, gross-delay, and small-delay defects. We present high-volume production test results from a 32-nm notebook processor to which CA test patterns were applied, including the defect-rate reduction in PPM that was achieved after testing 800,000 parts. We also present cell-internal diagnosis and physical failure analysis results from one failing part.

70 citations


Journal ArticleDOI
TL;DR: A static dependence analysis derived from program slicing that can be used to support search space reduction is proposed, and evidence is provided to support the claim that input domain reduction has a significant effect on the performance of local, global, and hybrid search, while a purely random search is unaffected.
Abstract: Search-Based Test Data Generation reformulates testing goals as fitness functions so that test input generation can be automated by some chosen search-based optimization algorithm. The optimization algorithm searches the space of potential inputs, seeking those that are “fit for purpose,” guided by the fitness function. The search space of potential inputs can be very large, even for very small systems under test. Its size is, of course, a key determining factor affecting the performance of any search-based approach. However, despite the large volume of work on Search-Based Software Testing, the literature contains little that concerns the performance impact of search space reduction. This paper proposes a static dependence analysis derived from program slicing that can be used to support search space reduction. The paper presents both a theoretical and empirical analysis of the application of this approach to open source and industrial production code. The results provide evidence to support the claim that input domain reduction has a significant effect on the performance of local, global, and hybrid search, while a purely random search is unaffected.

60 citations
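As a concrete instance of the technique the abstract builds on, the following sketch hill-climbs a standard branch-distance fitness toward inputs that cover a target branch; the program under test and all constants are invented.

```python
# Hill-climbing test data generation for the branch
# `if x == 2 * y and y > 10`; fitness 0 means the branch is covered.
import random

def branch_distance(x, y):
    return abs(x - 2 * y) + max(0, 11 - y)   # 0 iff x == 2y and y > 10

def hill_climb(steps=10_000):
    x, y = random.randint(-1000, 1000), random.randint(-1000, 1000)
    best = branch_distance(x, y)
    for _ in range(steps):
        nx, ny = x + random.randint(-5, 5), y + random.randint(-5, 5)
        d = branch_distance(nx, ny)
        if d < best:                 # accept only improving neighbours
            x, y, best = nx, ny, d
        if best == 0:
            break
    return (x, y), best

inputs, dist = hill_climb()
print("test input:", inputs, "distance:", dist)   # e.g. (24, 12), 0

# Input-domain reduction (the paper's contribution): a dependence
# analysis would note this branch depends only on x and y, so any
# other program inputs could be dropped from the search space.
```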


Journal ArticleDOI
TL;DR: It is found that if testers have a budgetary concern on the number of test cases for regression testing, the use of test case prioritization can save up to 40% of test case executions for commit builds without significantly affecting the effectiveness of fault localization.
Abstract: Context: Effective test case prioritization shortens the time to detect failures, and yet the use of fewer test cases may compromise the effectiveness of subsequent fault localization. Objective: The paper aims at finding whether several previously identified effectiveness factors of test case prioritization techniques, namely strategy, coverage granularity, and time cost, have observable consequences on the effectiveness of statistical fault localization techniques. Method: This paper uses a controlled experiment to examine these factors. The experiment includes 16 test case prioritization techniques and four statistical fault localization techniques, using the Siemens suite of programs as well as grep, gzip, sed, and flex as subjects. The experiment studies the effects of the percentage of code examined to locate faults in these benchmark subjects after a given number of failures have been observed. Results: We find that if testers have a budgetary concern on the number of test cases for regression testing, the use of test case prioritization can save up to 40% of test case executions for commit builds without significantly affecting the effectiveness of fault localization. A statistical fault localization technique using a smaller fraction of a prioritized test suite is found to compromise its effectiveness seriously. Despite the presence of some variations, the inclusion of more failed test cases generally improves fault localization effectiveness during the integration process. Interestingly, during the variation periods, adding more failed test cases actually deteriorates fault localization effectiveness. In terms of strategies, Random is found to be the most effective, followed by the ART and Additional strategies, while the Total strategy is the least effective. We do not observe sufficient empirical evidence to conclude that using different coverage granularity levels has different overall effects. Conclusion: The paper empirically identifies strategy and time cost of test case prioritization techniques as key factors affecting the effectiveness of statistical fault localization, while coverage granularity is not a significant factor. It also identifies a mid-range deterioration in fault localization effectiveness when adding more test cases to facilitate debugging.

51 citations
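Two of the prioritization strategies examined in the experiment, Total and Additional, reduce to simple greedy orderings over per-test coverage data; a sketch with invented coverage sets:

```python
# "Total" orders tests by raw coverage size; "Additional" greedily
# picks the test adding the most not-yet-covered statements.

coverage = {"t1": {1, 2, 3}, "t2": {3, 4}, "t3": {1, 2, 3, 4, 5},
            "t4": {5, 6}}

def total_order(cov):
    return sorted(cov, key=lambda t: len(cov[t]), reverse=True)

def additional_order(cov):
    remaining, covered, order = dict(cov), set(), []
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        order.append(best)
        covered |= remaining.pop(best)
    return order

print(total_order(coverage))       # ['t3', 't1', 't2', 't4']
print(additional_order(coverage))  # ['t3', 't4', 't1', 't2']
```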


Proceedings ArticleDOI
05 Nov 2012
TL;DR: An automatic test pattern generation algorithm is presented that considers waveforms and their propagation on each relevant line of the circuit and is capable of automatically generating a formal redundancy proof for undetectable small-delay faults; to the best of the authors' knowledge, this is the first such algorithm that is both scalable and complete.
Abstract: The detection of small-delay faults is traditionally performed by sensitizing transitions on a path of sufficient length from an input to an output of the circuit going through the fault site. While this approach allows efficient test generation algorithms, it may result in both false positives and false negatives, i.e., undetected faults are classified as detected, or detectable faults are classified as undetectable. We present an automatic test pattern generation algorithm which considers waveforms and their propagation on each relevant line of the circuit. The model incorporates individual delays for each gate and filtering of small glitches. The algorithm is based on an optimized encoding of the test generation problem as a Boolean satisfiability (SAT) instance and is implemented in the tool WaveSAT. Experimental results for ISCAS-85, ITC-99 and industrial circuits show that no known definition of path sensitization can eliminate false positives and false negatives at the same time, thus resulting in inadequate small-delay fault detection. WaveSAT generates a test if the fault is testable and is also capable of automatically generating a formal redundancy proof for undetectable small-delay faults; to the best of our knowledge this is the first such algorithm that is both scalable and complete.

45 citations
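The SAT encoding at the core of such ATPG tools can be shown at miniature scale: encode a good and a faulty copy of a two-gate circuit, assert that the outputs differ, and let the solver produce the test. This omits WaveSAT's waveform and timing model entirely and assumes the python-sat package is available.

```python
# SAT-based stuck-at ATPG on a toy circuit: N1 = A AND B, OUT = N1 OR C,
# fault: N1 stuck-at-0 (so the faulty output OUTF equals C).
from pysat.solvers import Glucose3

A, B, C, N1, OUT, OUTF = 1, 2, 3, 4, 5, 6   # CNF variable numbers

s = Glucose3()
# good circuit: N1 = A AND B (Tseitin encoding)
s.add_clause([-N1, A]); s.add_clause([-N1, B]); s.add_clause([N1, -A, -B])
# good circuit: OUT = N1 OR C
s.add_clause([OUT, -N1]); s.add_clause([OUT, -C]); s.add_clause([-OUT, N1, C])
# faulty circuit: N1 stuck-at-0, hence OUTF = C
s.add_clause([-OUTF, C]); s.add_clause([OUTF, -C])
# detection condition: OUT XOR OUTF
s.add_clause([OUT, OUTF]); s.add_clause([-OUT, -OUTF])

if s.solve():
    m = set(s.get_model())
    print({n: (v in m) for n, v in [("a", A), ("b", B), ("c", C)]})
    # -> {'a': True, 'b': True, 'c': False}, the unique test for this fault
else:
    print("fault is untestable (redundant)")
```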


Journal ArticleDOI
TL;DR: In this article, an adaptive test scheme for analog circuits that capitalizes on alternate test to achieve a low cost for the majority of fabricated devices is presented, where the small fraction of devices for which the alternate test decision may be prone to error are identified and further action is taken.
Abstract: Adaptive test is a promising approach for test cost reduction. This article presents an adaptive test scheme for analog circuits that capitalizes on alternate test to achieve a low cost for the majority of fabricated devices. The small fraction of devices for which the alternate test decision may be prone to error are identified and further action is taken.

44 citations


Proceedings ArticleDOI
05 Nov 2012
TL;DR: This paper leverages and extends the 3D DfT wrapper for logic dies such that, in conjunction with the boundary scan features in the Wide-I/O DRAM(s) stacked on top of it, testing of the logic-memory interconnects is enabled.
Abstract: Three-dimensional (3D) die stacking is an emerging integration technology which brings benefits with respect to heterogeneous integration, inter-die interconnect density, performance, and energy efficiency, and component size and yield. In the past, we have described, for logic-on-logic die stacks, a 3D DfT (Design-for-Test) architecture and corresponding automation, based on die-level wrappers. Memory-on-logic stacks are among the first 3D products that will come to the market. Recently, JEDEC has released a standard for stackable Wide-I/O Mobile DRAMs (Dynamic Random Access Memories) which specifies the logic-memory interface. The standard includes boundary scan features in the DRAM memories. In this paper, we leverage and extend the 3D DfT wrapper for logic dies, such that, in conjunction with the boundary scan features in the Wide-I/O DRAM(s) stacked on top of it, testing the logic-memory interconnects is enabled. A dedicated Interconnect ATPG (Automatic Test Pattern Generation) algorithm is used to deliver effective and efficient dedicated test patterns. We have verified our proposed DfT extension on an industrial design and shown that the silicon area cost of the extended wrapper with JEDEC Wide-I/O interconnect test support is negligible.

42 citations
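For flavor, one classical interconnect test scheme (a counting sequence: give each wire a unique code, avoiding all-zeros and all-ones, and apply it bit column by bit column) can be generated in a few lines. The paper's dedicated interconnect ATPG algorithm is not necessarily this scheme.

```python
# Counting-sequence interconnect test: each of N wires gets a unique
# nonzero code; pattern k drives bit k of every wire's code. Wires
# shorted together then produce responses that no longer match their
# unique codes, exposing the defect.
import math

def counting_sequence(n_wires):
    bits = math.ceil(math.log2(n_wires + 2))   # +2 skips all-0/all-1 codes
    codes = list(range(1, n_wires + 1))        # unique nonzero codes
    return [[(c >> k) & 1 for c in codes] for k in range(bits)]

for p in counting_sequence(6):
    print(p)   # 3 parallel test patterns suffice for 6 wires
```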


Proceedings ArticleDOI
28 May 2012
TL;DR: This tutorial gives an introduction to a new defect-oriented test method called cell-aware, which takes the layout of standard library cells into account when creating the cell-aware ATPG library view, which can then be used in a normal chip design flow to generate production test patterns.
Abstract: This tutorial gives an introduction to a new defect-oriented test method called cell-aware. This new cell-aware method takes the layout of standard library cells into account when creating the cell-aware ATPG library view. The tutorial covers the whole cell-aware library characterization flow, consisting of a layout extraction step, an analog fault simulation step covering all cell-internal bridges and opens, and a cell-aware synthesis step to create the new cell-aware ATPG library views, which can finally be used in a normal chip design flow to generate production test patterns. These cell-aware production test patterns have significantly higher quality than state-of-the-art patterns. Finally, production test results from several hundred thousand tested ICs are presented, showing a significant reduction of DPPM rates.

41 citations


Proceedings ArticleDOI
19 Nov 2012
TL;DR: Two approaches to reduce the test data volume (TDV) are proposed: one requires no additional hardware, and the second is based on new DFT hardware named background chains.
Abstract: Test data compression has become a dominant approach to reducing test cost today. The majority of test compression schemes are based on the fact that the generated test cubes have very few specified bits. This paper studies additional test cube properties and utilizes them to reduce the test data volume (TDV) further. Two approaches are proposed in this paper. The first requires no additional hardware; the second is based on new DFT hardware, named background chains. The proposed approaches can be combined with other test compression schemes to achieve additional TDV reduction. Experimental results based on embedded deterministic test (EDT) show that the proposed approaches achieve significant TDV reduction for industrial designs.
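The property the paper starts from, that test cubes are mostly don't cares, is easy to illustrate: storing only the specified (position, value) pairs is already a large reduction. The cube below is invented.

```python
# Illustrating test cube sparsity; 'X' marks unspecified (don't care) bits.

cube = "XX1XXXX0XXXXXXX1XXXXXXXXXXXX0XXX"

specified = [(i, b) for i, b in enumerate(cube) if b != "X"]
density = len(specified) / len(cube)
print(f"specified-bit density: {density:.1%}")   # 12.5%
print("sparse form:", specified)   # 4 pairs instead of a 32-bit vector
```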

Patent
09 Jul 2012
TL;DR: In this article, a method and system for generating and processing test cases for effective black box testing of software applications is provided, where test cases are automatically generated based on parameters that are identified from automated manual test cases associated with business models.
Abstract: A method and system for generating and processing test cases for effective black box testing of software applications is provided. Test cases are automatically generated based on parameters that are identified from automated manual test cases associated with business models. The generated automated test cases cover one or more paths in the business models. Further, the automated test cases are optimized by determining minimal path covered by the automated test cases in the business models. The optimization is performed based on analysis of the one or more paths covered by the automated test cases in the business models. Furthermore, code coverage data of the optimized test cases are obtained by execution of the optimized test cases. Finally, based on the code coverage data and predetermined conditions, the optimized test cases are analyzed for at least prioritization and further optimization of the optimized test cases for effective black box testing.

Proceedings ArticleDOI
05 Nov 2012
TL;DR: SATSEQ is presented, a timing-aware ATPG system for small-delay faults in non-scan circuits that provides detection of small-delay faults through the longest functional paths and generates the shortest possible sub-sequences per fault.
Abstract: We present SATSEQ, a timing-aware ATPG system for small-delay faults in non-scan circuits. The tool identifies the longest paths suitable for functional fault propagation and generates the shortest possible sub-sequences per fault. Based on advanced model-checking techniques, SATSEQ provides detection of small-delay faults through the longest functional paths. All test sequences start at the circuit's initial state; therefore, overtesting is avoided. Moreover, potential invalidation of the fault detection is taken into account. Experimental results show high detection and better performance than scan testing in terms of test application time and overtesting-avoidance.

Journal ArticleDOI
TL;DR: This ATPG algorithm is based on Boolean Satisfiability (SAT) and utilizes the stuck-at fault model for representing signaling faults, and a weighted partial Max-SAT formulation is used to enable efficient selection of the most effective drug.
Abstract: Cancer and other gene-related diseases are usually caused by a failure in the signaling pathway between genes and cells. These failures can occur in different areas of the gene regulatory network, but can be abstracted as faults in the regulatory function. For effective cancer treatment, it is imperative to identify the faults and select appropriate drugs to treat them. In this paper, we present an extensible Max-SAT based automatic test pattern generation (ATPG) algorithm for cancer therapy. This ATPG algorithm is based on Boolean Satisfiability (SAT) and utilizes the stuck-at fault model for representing signaling faults. A weighted partial Max-SAT formulation is used to enable efficient selection of the most effective drug. Several usage cases are presented for fault identification and drug selection. These cases include the identification of testable faults, optimal drug selection for single/multiple known faults, and optimal drug selection for overall fault coverage. Experimental results on growth factor (GF) signaling pathways demonstrate that our algorithm is flexible and can yield an exact solution for each feature in well under 1 second.
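A toy weighted partial Max-SAT instance in the spirit of the drug-selection step: hard clauses require every identified fault to be covered by some drug, and soft clauses charge each drug its cost, so the optimum is a cheapest covering drug set. This assumes the python-sat package; the fault/drug data is invented.

```python
# Weighted partial Max-SAT drug selection with the RC2 solver.
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

D1, D2, D3 = 1, 2, 3           # Boolean vars: "drug i is selected"
wcnf = WCNF()
wcnf.append([D1, D2])          # hard: fault f1 treated by drug 1 or 2
wcnf.append([D2, D3])          # hard: fault f2 treated by drug 2 or 3
wcnf.append([-D1], weight=4)   # soft: avoid drug 1 (cost 4)
wcnf.append([-D2], weight=3)   # soft: avoid drug 2 (cost 3)
wcnf.append([-D3], weight=5)   # soft: avoid drug 3 (cost 5)

rc2 = RC2(wcnf)
model = rc2.compute()
print("selected drugs:", [v for v in model if v > 0])   # [2]
print("total cost:", rc2.cost)                          # 3
rc2.delete()
```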

Proceedings ArticleDOI
05 Nov 2012
TL;DR: This paper addresses the issue of vulnerability to IR-drop-induced yield loss in nano-scale designs with a novel per-cell dynamic IR-drop estimation method that achieves both high accuracy and high time efficiency.
Abstract: As operating frequency increases and supply voltage decreases in nano-scale designs, their vulnerability to IR-drop-induced yield loss has become increasingly apparent. It is therefore necessary to consider the delay-increase effect of IR-drop during at-speed scan testing. However, precise IR-drop analysis consumes significant amounts of time. This paper addresses the issue with a novel per-cell dynamic IR-drop estimation method. Instead of performing time-consuming IR-drop analysis for each pattern one by one, the proposed method uses a global cycle-average power profile for each pattern and dynamic IR-drop profiles for a few representative patterns, so the total computation time is effectively reduced. Experimental results on benchmark circuits demonstrate that the proposed method achieves both high accuracy and high time efficiency.

Journal ArticleDOI
TL;DR: This paper integrates the SAT solver in a novel way that leverages the conflict analysis of modern SAT solvers, which provides more than 4X speedup without special optimizations of the SAT solver for this particular application.
Abstract: In the face of large-scale process variations, statistical timing methodology has advanced significantly over the last few years, and statistical path selection takes advantage of it in at-speed testing. In deterministic path selection, the separation of path selection and test generation is known to require time-consuming iteration between the two processes. This paper shows that in statistical path selection this is not only still the case, but the quality of results can also be severely degraded even after the iteration. To deal with this issue, we consider testability in the first place by integrating a satisfiability (SAT) solver, and this necessitates a new statistical path selection method. We integrate the SAT solver in a novel way that leverages the conflict analysis of modern SAT solvers, which provides more than 4X speedup without special optimizations of the SAT solver for this particular application. Our proposed method is based on a generalized path criticality metric whose properties allow efficient pruning. Our experimental results show that the proposed method achieves 47% better quality of results on average, and up to 361X speedup compared to statistical path selection followed by test generation.

Proceedings ArticleDOI
18 Apr 2012
TL;DR: A new test generation approach which has the ability to reduce the test set size significantly and employs the robustness of SAT-solvers to primarily push test compaction is proposed.
Abstract: The test set size is a highly important factor in the post-production test of circuits. A high pattern count in the test set leads to long test application times and exorbitant test costs. We propose a new test generation approach which is able to reduce the test set size significantly. In contrast to previous SAT-based ATPG techniques, which focused on dealing with hard single faults, the proposed approach employs the robustness of SAT solvers primarily to push test compaction. Furthermore, we introduce a concept for flexibly integrating the novel technique into an existing industrial flow to reduce the pattern count. Experimental results on large industrial circuits show that the approach is able to reduce the pattern count by up to 63% compared to state-of-the-art dynamic compaction techniques.

Proceedings ArticleDOI
03 Sep 2012
TL;DR: A novel test case selection strategy based on Diversity Maximization Speedup (DMS), which orders a set of unlabeled test cases in a way that maximizes the effectiveness of a fault localization technique and can help existing fault localization techniques reduce their debugging cost.
Abstract: Fault localization is useful for reducing debugging effort. However, many fault localization techniques require a non-trivial number of test cases with oracles, which can determine whether a program behaves correctly for every test input. Test oracle creation is expensive because it can take substantial manual labeling effort. Given a number of test cases to be executed, it is challenging to minimize the number of test cases requiring manual labeling and in the meantime achieve good fault localization accuracy. To address this challenge, this paper presents a novel test case selection strategy based on Diversity Maximization Speedup (DMS). DMS orders a set of unlabeled test cases in a way that maximizes the effectiveness of a fault localization technique. Developers are only expected to label a much smaller number of test cases along this ordering to achieve good fault localization results. Our experiments with more than 250 bugs from the Software-artifact Infrastructure Repository show (1) that DMS can help existing fault localization techniques achieve comparable accuracy with, on average, 67% fewer labeled test cases than the previously best test case prioritization techniques, and (2) that given a labeling budget (i.e., a fixed number of labeled test cases), DMS can help existing fault localization techniques reduce their debugging cost (in terms of the amount of code that needs to be inspected to locate faults). We conduct hypothesis tests and show that the savings in debugging cost achieved for the real C programs are statistically significant.

Proceedings ArticleDOI
23 Apr 2012
TL;DR: Several heuristics are proposed to constrain the SMT formula to further reduce the search space, including fault selection, excitation constraint, reduced primary output vector, and cone-of-influence reduction.
Abstract: A diagnostic test pattern generator using a Satisfiability Modulo Theory (SMT) solver is proposed. Rather than targeting a single fault pair at a time, the proposed SMT approach can distinguish multiple fault pairs in a single instance. Several heuristics are proposed to constrain the SMT formula to further reduce the search space, including fault selection, excitation constraint, reduced primary output vector, and cone-of-influence reduction. Experimental results for the ISCAS85 and full-scan versions of ISCAS89 benchmark circuits show that fewer diagnostic vectors are generated compared with conventional diagnostic test generation methods. Up to 73% reduction in the number of vectors generated can be achieved in large circuits.
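The core query of diagnostic test generation fits in a few solver calls: ask for an input on which two faulty copies of the circuit disagree, so the resulting vector distinguishes that fault pair. A minimal sketch using the z3-solver package; the paper's multi-pair formulation and its heuristics are not reproduced.

```python
# Distinguishing-vector generation for out = (a AND b) OR c:
# f1 = AND-gate output stuck-at-0, f2 = input c stuck-at-1.
from z3 import Bools, Solver, And, Or, BoolVal, Xor, sat

a, b, c = Bools("a b c")
out_f1 = Or(BoolVal(False), c)          # f1: AND output forced to 0
out_f2 = Or(And(a, b), BoolVal(True))   # f2: input c forced to 1

s = Solver()
s.add(Xor(out_f1, out_f2))              # the two faulty outputs must differ
if s.check() == sat:
    m = s.model()
    print({str(v): m.evaluate(v, model_completion=True) for v in (a, b, c)})
    # any returned model has c = False, which distinguishes f1 from f2
else:
    print("faults are indistinguishable")
```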

Proceedings ArticleDOI
05 Nov 2012
TL;DR: This paper proposes a novel scheme to manage capture power in a pinpoint manner for achieving guaranteed capture power safety, improved small-delay test capability, and minimal test cost impact in at-speed scan test generation.
Abstract: This paper proposes a novel scheme to manage capture power in a pinpoint manner for achieving guaranteed capture power safety, improved small-delay test capability, and minimal test cost impact in at-speed scan test generation. First, switching activity around each long path sensitized by a test vector is checked to characterize the path as hot (with excessively high switching activity), warm (with normal/functional switching activity), or cold (with excessively low switching activity). Then, X-restoration/X-filling-based rescue is conducted on the test vector to reduce switching activity around hot paths. If the rescue is insufficient to turn a hot path into a warm path, masking is then applied to the expected test response data to instruct the tester to ignore the potentially false test response value from the hot path, thus achieving guaranteed capture power safety. Finally, X-restoration/X-filling-based warm-up is conducted on the test vector to increase switching activity around cold paths to improve their small-delay test capability. This novel approach of pinpoint capture power management has significant advantages over the conventional approach of global capture power management, as demonstrated by evaluation results on large ITC'99 benchmark circuits and detailed path delay analysis.

Patent
19 Dec 2012
TL;DR: In this article, the authors present a system and method to estimate both the time and the number of resources required to execute a test suite or a subset of a test suite in parallel, with the objective of providing a balanced workload distribution.
Abstract: A system and method is disclosed to estimate both the time and the number of resources required to execute a test suite or a subset of a test suite in parallel, with the objective of providing a balanced workload distribution. The present invention partitions a test suite for parallelization, given the dependencies that exist between test cases and their test execution times.

Journal ArticleDOI
TL;DR: The defect level model uses the behavior-attribution results of the current failing population to guide test-set customization to minimize defect level for a given constraint on test costs, or alternatively, ensure that defect level does not exceed some predetermined threshold.
Abstract: This paper describes a method for improving the test quality of digital circuits on a per-design basis by: 1) monitoring the defect behaviors that occur through volume diagnosis; and 2) changing the test patterns to match the identified behaviors. To characterize the behavior of a defect (i.e., the conditions under which a defect is activated), physically-aware diagnosis is employed to extract the set of signal lines relevant to defect activation. Then, based on the set of signal lines derived, the defect is attributed to one of several behavior categories. Our defect level model uses the behavior-attribution results of the current failing population to guide test-set customization to minimize defect level for a given constraint on test costs, or alternatively, to ensure that defect level does not exceed some predetermined threshold. Circuit-level simulation involving various types of defects shows that defect level can be reduced by 30% using this method. Simulation experiments on actual chips also demonstrate quality improvement.

Journal ArticleDOI
TL;DR: A new scan architecture is proposed to compress test stimulus data, compact test responses, and reduce test application time for launch-on-capture (LOC) delay testing.
Abstract: Test data compression is a much more difficult problem for launch-on-capture (LOC) delay testing, because the test data volume for LOC delay testing is much larger than that of stuck-at fault testing, and LOC delay fault test generation in the two-frame circuit model can specify many more inputs. A new scan architecture is proposed to compress test stimulus data, compact test responses, and reduce test application time for LOC delay fault testing. The new scan architecture merges a number of scan flip-flops into the same group, where all scan flip-flops in the same group are assigned the same values for all test pairs. Sufficient conditions are presented for including any pair of scan flip-flops in the same group for LOC transition, non-robust path delay, and robust path delay fault testing. Test data for LOC delay testing based on the new scan architecture can be compressed significantly, and test application time can also be reduced greatly. Sufficient conditions are presented to construct a test response compactor for LOC transition, non-robust, and robust path delay fault testing. A folded scan forest and test response compactor are constructed for further test data compression. Experimental results are presented to show the effectiveness of the method.

Journal ArticleDOI
TL;DR: A static linear behavior (SLB) analog fault model for switched-capacitor (SC) circuits that covers concurrent multiple parametric faults and catastrophic faults, and addresses the impractically long fault simulation time issue.
Abstract: This paper proposes a static linear behavior (SLB) analog fault model for switched-capacitor (SC) circuits. The SC circuits under test (CUT) are divided into functional macros including the operational amplifiers, the capacitors, and the switches. Each macro has specified design parameters from a design perspective. These design parameters constitute a parameter set which determines the practical transfer function of the CUT. The SLB fault model defines a CUT as faulty if its parameter set results in transfer functions whose frequency responses are out of the design specification. We analyzed the fault effects of the macros and derived their faulty signal-flow graph models, with which the faulty transfer function templates of the CUT can be automatically generated. Based on the templates, we proposed a test procedure that can estimate all the parameters in the parameter set so as to test the CUT with multiple faults. Different from the conventional single fault assumption, the proposed SLB fault model covers concurrent multiple parametric faults and catastrophic faults. In addition, it does not need to conduct fault simulations before test as conventional analog fault models do. As a result, it addresses the impractically long fault simulation time issue. A fully-differential low-pass SC biquad filter was adopted as an example to demonstrate how to design and use efficient multitone tests to test for the parameter set. The multitone test results acquired during the test procedure also reveal the distortion and noise performance of the CUT, even though the SLB fault model does not include them.

Proceedings ArticleDOI
03 Oct 2012
TL;DR: A procedure is presented that assigns the don't cares in a given test cube so as to minimize the resulting polynomial found by the Berlekamp-Massey algorithm.
Abstract: In built-in test pattern generation, a test cube is usually encoded or compressed by a seed vector that is used as the initial state of a Linear Feedback Shift Register (LFSR). The seed vector is found by solving a linear system of equations using a fixed (but arbitrarily chosen) characteristic polynomial for the LFSR. In contrast, finding the LFSR characteristic polynomial to generate a given test cube provides more design freedom but results in a non-linear system of equations. In this paper, we address the latter problem using the Berlekamp-Massey (BM) algorithm. The BM algorithm is very efficient and obviates the need to solve a non-linear system, but it cannot work with don't care values. We therefore present a procedure that assigns the don't cares in a given test cube so as to minimize the resulting polynomial found by BM. Experimental results demonstrate substantial improvement over a previous technique that assigns the don't cares greedily.
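For reference, the Berlekamp-Massey algorithm over GF(2) that the paper builds on: it returns the length and connection polynomial of the shortest LFSR generating a fully specified bit sequence. The paper's don't-care assignment procedure on top of it is not shown.

```python
def berlekamp_massey(bits):
    """Return (L, poly) of the shortest LFSR over GF(2) generating `bits`."""
    n = len(bits)
    c, b = [0] * n, [0] * n        # current / previous connection polys
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        d = bits[i]                # discrepancy at position i
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:                      # fix discrepancy: c ^= b * x^(i-m)
            t = c[:]
            for j in range(n - (i - m)):
                c[i - m + j] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L, c[:L + 1]

# An m-sequence of the 3-bit LFSR with polynomial 1 + x^2 + x^3 is
# recovered exactly:
print(berlekamp_massey([1, 0, 0, 1, 0, 1, 1]))   # (3, [1, 0, 1, 1])
```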

Proceedings ArticleDOI
30 May 2012
TL;DR: The results show that PSO-based method outperforms other algorithms such as GA both in the coverage effect of test data and the convergence speed.
Abstract: Automated generation of test data has always been a challenging problem in the area of software testing. Recently, meta-heuristic search (MHS) techniques have been proven to be a powerful tool for solving this problem. In this paper, we introduce an up-to-date search technique, particle swarm optimization (PSO), to address it. After the basic idea of PSO is presented, the overall framework of PSO-based test data generation is discussed. Here, the inputs of the program under test are encoded into particles. During the search process, the PSO algorithm is used to generate test inputs with the highest possible coverage rate. Once a set of test inputs is produced, a test driver feeds them into the program for execution and simultaneously collects coverage information. Then, the value of the fitness function for branch coverage can be calculated based on such information, which directs the algorithm's optimization in the next iteration. In order to validate our method, five real-world programs are used for experimental analysis. The results show that the PSO-based method outperforms other algorithms such as GA in both the coverage of test data and the convergence speed.
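A bare-bones version of the framework described above: a particle swarm minimizes a branch-distance fitness until it finds inputs covering a target branch. The program under test and all constants are invented.

```python
# PSO driving test data toward the branch `if x * y == 12 and x < y`.
import random

def fitness(p):
    x, y = p
    return abs(x * y - 12) + max(0.0, x - y + 1)   # 0 => branch covered

def pso(n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-10, 10) for _ in range(2)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=fitness)
    for _ in range(iters):
        for i in range(n):
            for d in range(2):   # standard PSO velocity update
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=fitness)
    return gbest, fitness(gbest)

best, fit = pso()
print("best test input:", best, "fitness:", fit)   # e.g. near x=3, y=4
```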

Proceedings ArticleDOI
19 Nov 2012
TL;DR: A novel timing-aware ATPG approach based on Pseudo-Boolean Optimization is proposed; experimental results show that a hazard-free robust test can be efficiently found for most testable timing-critical faults without much reduction in path length.
Abstract: Advances in the chip manufacturing process impose new requirements for post-production test. Small Delay Defects (SDDs) have become a serious problem during chip testing. Timing-aware ATPG is typically used to generate tests for this kind of defects. Here, the faults are detected through the longest path. In this paper, a novel timing-aware ATPG approach is proposed which is based on Pseudo-Boolean Optimization (PBO) in order to leverage the recent advances in solving techniques in this field. Additionally, the PBO-based approach is able to cope with the generation of hazard-free robust tests by extending the problem formulation. As a result, the faults are detected through the longest robustly testable path, i.e. independently from other delay faults. Experimental results show that a hazard-free robust test can be efficiently found for most testable timing-critical faults without much reduction in path length.

Book
31 Jan 2012
TL;DR: A fast and highly fault-efficient SAT-based ATPG framework is presented which is also able to generate high-quality delay tests such as robust path delay tests, as well as tests with long propagation paths to detect small delay defects.
Abstract: This book provides an overview of automatic test pattern generation (ATPG) and introduces novel techniques to complement classical ATPG, based on Boolean Satisfiability (SAT). A fast and highly fault-efficient SAT-based ATPG framework is presented which is also able to generate high-quality delay tests such as robust path delay tests, as well as tests with long propagation paths to detect small delay defects. The aim of the techniques and methodologies presented in this book is to improve SAT-based ATPG in order to make it applicable in industrial practice. Readers will learn to improve the performance and robustness of the overall test generation process, so that the ATPG algorithm will reliably generate test patterns for most targeted faults in acceptable run time, meeting the high fault coverage demands of industry. The techniques and improvements presented in this book provide the following advantages: a comprehensive introduction to test generation and Boolean Satisfiability (SAT); a highly fault-efficient SAT-based ATPG framework; circuit-oriented SAT solving techniques, which make use of structural information and are able to accelerate the search process significantly; SAT formulations for the prevalent delay fault models, in addition to the classical stuck-at fault model; and an industrial perspective on the state of the art in testing together with SAT, two topics typically treated separately.

Journal ArticleDOI
Irith Pomeranz1
TL;DR: This work modifies the test data so as to compact it as well as increase the fault coverage, which results in improved fault coverage compared to the use of broadside (or skewed-load) tests alone, and in reduced test data volume compared with the case where broadside and skewed-load tests are stored separately.
Abstract: Skewed-load and broadside tests complement each other and allow higher delay fault coverage to be achieved for a standard-scan circuit that supports both types of tests. The difference between the two types of tests is mainly in the test application process. The input test data required for both of them are similar. This similarity is used in this work to compute compact input test data that can be used as a basis for forming both types of tests. It results in improved fault coverage compared to the use of broadside (or skewed-load) tests alone, and in reduced test data volume compared to the case where broadside and skewed-load tests are stored separately. Experimental results are presented using a procedure that accepts a test set of any type, and computes input test data suitable for the application of both types of tests. The procedure modifies the test data so as to compact it as well as increase the fault coverage. The procedure is applied to a broadside test set and to mixed test sets that consist of both types of tests.

Proceedings ArticleDOI
23 Apr 2012
TL;DR: A compact test generation and application method for reversible circuits which achieves high (100%) fault coverage and can be adopted for BIST implementations.
Abstract: Reversibility as an inherent requirement of quantum computation motivates further research on reversible logic. Due to anticipated high failure rates for such technologies, thorough testing is a must for these circuits. In this paper, we present a compact test generation and application method for reversible circuits which achieves high (100%) fault coverage and can be adopted for BIST implementations. In this method, the next test pattern is the response of the reversible circuit to the previous test pattern. A test generation algorithm to minimize test time and achieve 100% fault coverage is also presented. Simulation results on a set of reversible benchmark circuits confirm that this approach can detect all single missing/repeated gate faults as well as the majority of multiple faults.
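The pattern-feedback idea on a toy reversible circuit: since the circuit is a bijection, feeding each response back as the next stimulus walks a cycle of distinct patterns with no stored vectors, which is what makes the method BIST-friendly. The circuit below is invented.

```python
# Toy reversible circuit: a CNOT gate followed by a Toffoli gate.

def circuit(bits):
    a, b, c = bits
    b ^= a           # CNOT: a controls b
    c ^= a & b       # Toffoli: a and b control c
    return (a, b, c)

pattern, seen = (1, 0, 0), []
while pattern not in seen:
    seen.append(pattern)
    pattern = circuit(pattern)   # response becomes the next test pattern

print(seen)   # the cycle of test patterns applied by the BIST scheme
```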