
Showing papers on "Automatic test pattern generation published in 2011"


Proceedings ArticleDOI
05 Jun 2011
TL;DR: A complementary flow is presented to verify the presence of Trojans in 3PIPs by identifying suspicious signals (SS) through formal verification, coverage analysis, redundant circuit removal, sequential automatic test pattern generation (ATPG), and equivalence theorems.
Abstract: Intellectual property (IP) blocks are designed by hundreds of IP vendors distributed across the world. Such IPs cannot be assumed trusted, as hardware Trojans can be maliciously inserted into them, and they could end up in military, financial and other critical applications. It is extremely difficult to detect Trojans in third-party IPs (3PIPs) with conventional verification methods or with methods developed for detecting Trojans in fabricated ICs. This paper first discusses the difficulties of detecting Trojans in 3PIPs. A complementary flow is then presented to verify the presence of Trojans in 3PIPs by identifying suspicious signals (SS) through formal verification, coverage analysis, redundant circuit removal, sequential automatic test pattern generation (ATPG), and equivalence theorems. Experimental results, shown in the paper for many Trojans inserted into an RS232 circuit, demonstrate the efficiency of the flow.
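To illustrate the set-narrowing structure of such a flow, here is a minimal Python sketch; the signal names and the per-step results are hypothetical stand-ins for the output of real formal verification, coverage analysis, redundant-circuit removal and sequential ATPG tools, and it mirrors only the filtering idea, not the paper's exact procedure.

```python
# Minimal illustration of suspicious-signal (SS) narrowing. Signal names and
# the per-step results are hypothetical stand-ins for real tool output; only
# the set-filtering structure of the flow is shown.

all_signals = {"rx_ready", "tx_done", "parity_err", "trig_cnt", "payload_en"}

# Signals exercised while the functional properties were formally verified /
# covered are considered trustworthy (hypothetical result).
covered_by_verification = {"rx_ready", "tx_done", "parity_err"}
suspicious = all_signals - covered_by_verification

# Signals that sequential ATPG can justify and propagate, and that prove
# equivalent to known-good logic, are removed from suspicion (hypothetical result).
cleared_by_atpg_and_equivalence = {"trig_cnt"}
suspicious -= cleared_by_atpg_and_equivalence

print("Signals to inspect for Trojan logic:", sorted(suspicious))
```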

214 citations


Journal ArticleDOI
TL;DR: This paper develops a new family of coverage criteria for GUI testing grounded in combinatorial interaction testing and shows that by increasing the event combinations tested and by controlling the relative positions of events defined by the new criteria, it can detect a large number of faults that were undetectable by earlier techniques.
Abstract: Graphical user interfaces (GUIs), due to their event-driven nature, present an enormous and potentially unbounded way for users to interact with software. During testing, it is important to “adequately cover” this interaction space. In this paper, we develop a new family of coverage criteria for GUI testing grounded in combinatorial interaction testing. The key motivation of using combinatorial techniques is that they enable us to incorporate “context” into the criteria in terms of event combinations, sequence length, and by including all possible positions for each event. Our new criteria range in both efficiency (measured by the size of the test suite) and effectiveness (the ability of the test suites to detect faults). In a case study on eight applications, we automatically generate test cases and systematically explore the impact of context, as captured by our new criteria. Our study shows that by increasing the event combinations tested and by controlling the relative positions of events defined by the new criteria, we can detect a large number of faults that were undetectable by earlier techniques.
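A minimal sketch of the underlying coverage notion, assuming a handful of hypothetical GUI events: every t-way combination of events, at every combination of relative positions within a sequence of fixed length, forms a coverage requirement that some test case must satisfy.

```python
# Sketch of combinatorial coverage requirements for GUI event sequences:
# every t-way combination of events, at every relative position within a
# sequence of a given length, must be exercised by some test case.
# Event names are hypothetical; a real GUI model would supply them.
from itertools import combinations, product

events = ["Open", "Paste", "Undo", "Save"]
t = 2              # strength: pairs of events
seq_len = 3        # length of the generated event sequences

# Coverage requirements: (positions, event tuple) pairs.
requirements = {
    (positions, combo)
    for positions in combinations(range(seq_len), t)
    for combo in product(events, repeat=t)
}

def covered(test_sequence, requirement):
    positions, combo = requirement
    return tuple(test_sequence[p] for p in positions) == combo

suite = [("Open", "Paste", "Save"), ("Open", "Undo", "Save")]
hit = {r for r in requirements for tc in suite if covered(tc, r)}
print(f"{len(hit)}/{len(requirements)} requirements covered")
```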

184 citations


Patent
Franco Cesari
29 Dec 2011
TL;DR: In this paper, an embodiment of an additional functional logic circuit block, named "inter-domain on chip clock controller" (icOCC), interfaced with every suitably adapted clock-gating circuit (OCC) of the different clock domains is presented.
Abstract: An embodiment is directed to extended test coverage of complex multi-clock-domain integrated circuits without forgoing a structured and repeatable standard approach, thus avoiding custom solutions and freeing the designer to implement the RTL code while respecting only a generally small set of mandatory rules identified by the DFT engineer. Such an embodiment is achieved by introducing into the test circuit an additional functional logic circuit block, named “inter-domain on chip clock controller” (icOCC), interfaced with every suitably adapted clock-gating circuit (OCC) of the different clock domains. The icOCC actuates synchronization among the different OCCs that source the test clock signals, coming from an external ATE or ATPG tool and from internal at-speed test clock generators, to the respective circuitries of the distinct clock domains. Scan structures such as the OCCs, scan chains, etc., may be instantiated at gate pre-scan level, with low impact on the functional RTL code written by the designer.

94 citations


Proceedings ArticleDOI
06 Nov 2011
TL;DR: A mixed symbolic execution based approach that is unique in how it favors program paths associated with a performance measure of interest, operates in an iterative-deepening beam-search fashion to discard paths that are unlikely to lead to high-load tests, and generates a test suite of a given size and level of diversity.
Abstract: Load tests aim to validate whether system performance is acceptable under peak conditions. Existing test generation techniques induce load by increasing the size or rate of the input. Ignoring the particular input values, however, may lead to test suites that grossly mischaracterize a system's performance. To address this limitation we introduce a mixed symbolic execution based approach that is unique in how it 1) favors program paths associated with a performance measure of interest, 2) operates in an iterative-deepening beam-search fashion to discard paths that are unlikely to lead to high-load tests, and 3) generates a test suite of a given size and level of diversity. An assessment of the approach shows it generates test suites that induce program response times and memory consumption several times worse than the compared alternatives, it scales to large and complex inputs, and it exposes a diversity of resource consuming program behavior.
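A generic sketch of the iterative-deepening beam-search idea (not the paper's symbolic-execution machinery): at each depth the surviving partial paths are extended by every branch choice, scored by a toy load estimate that stands in for the performance measure, and pruned to a fixed beam width.

```python
# Generic sketch of the iterative-deepening beam-search idea: at each depth,
# extend the surviving partial paths by all branch choices, score them with a
# load estimate, and keep only the best `beam_width`. The score function is a
# toy stand-in for the symbolic-execution performance measure of interest.

def estimate_load(path):
    # Hypothetical load estimate: "expensive" branch choices score higher.
    return sum(3 if choice == "loop" else 1 for choice in path)

def beam_search(branch_choices, max_depth, beam_width):
    frontier = [()]                        # start with the empty path
    for _ in range(max_depth):             # deepen one level at a time
        candidates = [path + (c,) for path in frontier for c in branch_choices]
        candidates.sort(key=estimate_load, reverse=True)
        frontier = candidates[:beam_width]  # discard unpromising paths
    return frontier

best_paths = beam_search(["loop", "alloc", "skip"], max_depth=4, beam_width=3)
for p in best_paths:
    print(estimate_load(p), p)
```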

89 citations


Journal ArticleDOI
TL;DR: Test case selection in model‐based testing is discussed focusing on the use of a similarity function, and it is shown that similarity‐based selection can be more effective than random selection when applied to automatically generated test suites.
Abstract: Test case selection in model-based testing is discussed, focusing on the use of a similarity function. Automatically generated test suites usually contain redundant test cases. The reason is that test generation algorithms are usually based on structural coverage criteria that are applied exhaustively. These criteria are of little help in detecting redundant test cases, and the resulting suites are often impractical due to the huge number of test cases that can be generated. Both problems are addressed by applying a similarity function. The idea is to keep in the suite the least similar test cases, according to a goal defined in terms of the intended size of the test suite. The strategy presented is compared with random selection by considering transition-based and fault-based coverage. The results show that, in most of the cases, similarity-based selection can be more effective than random selection when applied to automatically generated test suites. Copyright © 2009 John Wiley & Sons, Ltd.
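A greedy sketch of similarity-based selection under simple assumptions: each test case is reduced to its set of covered transitions, similarity is measured with a Jaccard function (one possible choice), and the most similar pair repeatedly loses one member until the intended suite size is reached.

```python
# Greedy sketch of similarity-based selection: repeatedly find the most
# similar pair of test cases (Jaccard similarity over covered transitions,
# a hypothetical choice of similarity function) and drop one of them until
# the intended suite size is reached.
from itertools import combinations

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 1.0

def select(test_suite, target_size):
    """Keep the `target_size` least similar test cases."""
    suite = dict(test_suite)                      # name -> set of covered transitions
    while len(suite) > target_size:
        most_similar_pair = max(combinations(suite, 2),
                                key=lambda p: jaccard(suite[p[0]], suite[p[1]]))
        del suite[most_similar_pair[0]]           # drop one test of the most similar pair
    return suite

tests = {
    "t1": {"s0->s1", "s1->s2"},
    "t2": {"s0->s1", "s1->s2", "s2->s0"},
    "t3": {"s0->s3"},
}
print(sorted(select(tests, target_size=2)))       # t3 survives; t1/t2 are near-duplicates
```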

86 citations


Proceedings ArticleDOI
14 Mar 2011
TL;DR: This paper develops a multi-level logic synthesis algorithm for error tolerant applications that minimizes the cost of the circuit by exploiting the budget for approximations provided by error tolerance.
Abstract: Starting from a functional description or a gate level circuit, the goal of the multi-level logic optimization is to obtain a version of the circuit that implements the original function at a lower cost. For error tolerant applications — images, video, audio, graphics, and games — it is known that errors at the outputs are tolerable provided that their severities are within application-specified thresholds. In this paper, we perform application level analysis to show that significant errors at the circuit level are tolerable. Then we develop a multi-level logic synthesis algorithm for error tolerant applications that minimizes the cost of the circuit by exploiting the budget for approximations provided by error tolerance. We use circuit area as the cost metric and use a test generation algorithm to select faults that introduce errors of low severities but provide significant area reductions. Selected faults are injected to simplify the circuit for the experiments. Results show that our approach provides significant reductions in circuit area even for modest error tolerance budgets.
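A rough sketch of the fault-selection step under the simplifying assumption that error severities add up: candidate faults (with hypothetical severity and area numbers) are picked greedily by area saved per unit of severity while the accumulated severity stays within the application-specified budget.

```python
# Greedy sketch of selecting faults to inject for approximation: pick faults
# with the best area reduction whose combined error severity stays within the
# application-specified threshold. Severity and area numbers are hypothetical;
# the paper derives them from application-level analysis and test generation.

candidate_faults = [
    # (fault name, error severity if injected, gate area saved)
    ("u12/out stuck-at-0", 0.8, 14),
    ("u7/out stuck-at-1", 2.5, 30),
    ("u3/in1 stuck-at-0", 0.4, 6),
    ("u9/out stuck-at-1", 1.9, 9),
]

def select_faults(candidates, severity_budget):
    chosen, used = [], 0.0
    # Prefer faults that buy the most area per unit of error severity.
    for name, severity, area in sorted(candidates, key=lambda c: c[2] / c[1], reverse=True):
        if used + severity <= severity_budget:
            chosen.append(name)
            used += severity
    return chosen

print(select_faults(candidate_faults, severity_budget=3.0))
```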

72 citations


Proceedings ArticleDOI
29 Nov 2011
TL;DR: An extensive experiment is proposed, based on the state-of-the-art SPLOT feature model repository, showing that the proposed tool scales over variability spaces with millions of configurations and covers pairwise interactions with fewer configurations than other available tools.
Abstract: Feature models are commonly used to specify variability in software product lines. Several tools support feature models for variability management at different steps in the development process. However, tool support for test configuration generation is currently limited. This test generation task consists in systematically selecting a set of configurations that represent a relevant sample of the variability space and that can be used to test the product line. In this paper we propose a tool to analyze feature models and automatically generate a set of configurations that cover all pairwise interactions between features. The tool relies on constraint programming to generate configurations that satisfy all constraints imposed by the feature model and to minimize the set of test configurations. This work also proposes an extensive experiment, based on the state-of-the-art SPLOT feature model repository, showing that the tool scales over variability spaces with millions of configurations and covers pairwise interactions with fewer configurations than other available tools.
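For illustration, a greedy pairwise-sampling sketch over a tiny hypothetical feature model; it conveys the coverage bookkeeping only, not the constraint-programming minimisation the tool actually relies on.

```python
# Greedy sketch of pairwise configuration sampling (not the constraint-programming
# minimisation used by the tool): keep adding the candidate configuration that
# covers the most still-uncovered feature pairs, skipping configurations that
# violate the (hypothetical) feature-model constraints.
from itertools import combinations, product

features = ["GPS", "Camera", "Bluetooth"]

def valid(config):
    # Hypothetical cross-tree constraint: Camera requires Bluetooth.
    return not (config["Camera"] and not config["Bluetooth"])

all_configs = [dict(zip(features, values)) for values in product([True, False], repeat=3)]
candidates = [c for c in all_configs if valid(c)]

def pairs(config):
    return {((f1, config[f1]), (f2, config[f2])) for f1, f2 in combinations(features, 2)}

uncovered = set().union(*(pairs(c) for c in candidates))
sample = []
while uncovered:
    best = max(candidates, key=lambda c: len(pairs(c) & uncovered))
    sample.append(best)
    uncovered -= pairs(best)

print(f"{len(sample)} configurations cover all valid pairwise interactions")
```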

68 citations


Journal ArticleDOI
TL;DR: It is demonstrated that compression ratios can be an order of magnitude higher if the cube merging continues despite conflicts on certain positions, and that test clusters make it possible to deliver test patterns in a flexible power-aware fashion.
Abstract: Embedded deterministic test-based compression uses cube merging to reduce the pattern count, the amount of test data, and the test time. It gradually expands a test pattern by incorporating compatible test cubes. This paper demonstrates that compression ratios can be an order of magnitude higher if the cube merging continues despite conflicts on certain positions. Our novel solution produces test clusters, each comprising a parent pattern and a number of its derivatives obtained by imposing extra bits on it. In order to load scan chains with patterns that feature the original test cubes, only the data necessary to recreate parent patterns, together with information regarding the locations and values of the corresponding conflicting bits, are required. A test controller can then deliver tests by repeatedly applying the same parent pattern, every time using a different control pattern to decide whether a given scan chain receives data from the parent pattern or another pattern is used instead to recover the content of the original test cube. Compression of incompatible test cubes preserves all benefits of continuous-flow decompression and offers compression ratios on the order of 1000× with encoding efficiency much higher than 1.0. We also demonstrate that test clusters make it possible to deliver test patterns in a flexible power-aware fashion. This framework achieves significant reductions in switching activity during scan loading as well as additional test data volume reductions due to the encoding algorithms employed to compress parent and control vectors.
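A toy sketch of the clustering idea: compatible specified bits of the test cubes are merged into a parent pattern, and for each cube the conflicting positions and values are recorded as overrides that recreate the original cube; the cubes are tiny and hypothetical, and the real scheme additionally compresses the parent and control data.

```python
# Sketch of clustering test cubes around a parent pattern: compatible specified
# bits are merged into the parent; for each cube, the positions where it
# conflicts with the parent are recorded so the cube can be recreated by
# overriding those bits. Cubes are tiny and hypothetical ('-' = don't-care).

cubes = ["1-0-1", "1-1-1", "0-0-0"]

def build_cluster(cubes):
    width = len(cubes[0])
    parent = ["-"] * width
    overrides = []                       # per cube: {position: value}
    for cube in cubes:
        diff = {}
        for i, bit in enumerate(cube):
            if bit == "-":
                continue
            if parent[i] == "-":
                parent[i] = bit          # merge a compatible specified bit
            elif parent[i] != bit:
                diff[i] = bit            # conflicting bit: keep as an override
        overrides.append(diff)
    return "".join(parent), overrides

parent, overrides = build_cluster(cubes)
print("parent pattern :", parent)
for cube, diff in zip(cubes, overrides):
    print(f"cube {cube} recreated with overrides {diff}")
```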

45 citations


Proceedings ArticleDOI
01 Nov 2011
TL;DR: An enhanced XML-based automated approach for generating test cases from activity diagrams that saves time and effort, increases the quality of the generated test cases, and thereby improves the overall performance of the testing process.
Abstract: Test case generation is a core phase in any testing process, so automating it plays a tremendous role in reducing the time and effort spent during testing. This paper proposes an enhanced XML-based automated approach for generating test cases from activity diagrams. The proposed architecture creates a special table called the Activity Dependency Table (ADT) for each XML file. The ADT covers all the functionalities in the activity diagram and handles decisions, loops, forks, joins, merges, objects and conditional threads. It then automatically generates a directed graph called the Activity Dependency Graph (ADG), which is used in conjunction with the ADT to extract all the possible final test cases. The proposed model validates the generated test paths during the generation process to ensure that a hybrid coverage criterion is met. The generated test cases can be sent to any requirements management tool to be traced against the requirements. The proposed model is prototyped on 30 differently sized activity diagrams in different domains, and an experimental evaluation of the proposed model is carried out as well. The approach saves time and effort, increases the quality of the generated test cases, and thereby improves the overall performance of the testing process. Moreover, the generated test cases can be executed on the system under test using any automatic test execution tool.
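An illustrative sketch of the path-extraction step: a small hypothetical graph stands in for the ADG derived from the ADT, and bounded depth-first traversal enumerates candidate test paths; the actual approach also handles forks, joins, merges and a hybrid coverage criterion.

```python
# Illustrative sketch of deriving test paths from an activity dependency graph
# (ADG). The graph below is a hypothetical stand-in for one generated from an
# Activity Dependency Table; the real approach additionally handles loops,
# fork/join/merge nodes and validates paths against a hybrid coverage criterion.

adg = {
    "start": ["enter_credentials"],
    "enter_credentials": ["validate"],
    "validate": ["show_dashboard", "show_error"],   # decision node
    "show_error": ["enter_credentials"],            # loop back (bounded below)
    "show_dashboard": ["end"],
}

def test_paths(graph, node="start", path=None, max_visits=2):
    path = (path or []) + [node]
    if node == "end":
        yield path
        return
    for succ in graph.get(node, []):
        if path.count(succ) < max_visits:            # bound loop traversals
            yield from test_paths(graph, succ, path, max_visits)

for p in test_paths(adg):
    print(" -> ".join(p))
```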

43 citations


Journal ArticleDOI
TL;DR: A new power-aware test scheduling scheme is proposed, which is extended to cases for multiple port ATEs and Experimental results are presented to show the effectiveness of the proposed method in reducing the NoC test cost and test data volume by comparing to the previous methods.
Abstract: Reuse of network-on-chip (NoC) for test data and test response delivery is attractive. However, previous techniques do not effectively use the bandwidths of the network by delivering test packets to all cores separately, which can make very much test cost and test data volume. The NoC core testing problem is formulated as a unicast-based multicast problem in order to reduce test data delivery time in the NoC. Test response data are forwarded back to the automated test equipment (ATE) via the communication channels using the reverse paths of test data delivery, which are compacted on the way from each processor to the ATE. A new power-aware test scheduling scheme is proposed, which is extended to cases for multiple port ATEs. Test data is further compressed before delivering and a low-power test application scheme is used for the cores because power produced by cores is the bottleneck of NoC test. Experimental results are presented to show the effectiveness of the proposed method in reducing the NoC test cost and test data volume by comparing to the previous methods.

43 citations


Proceedings ArticleDOI
01 May 2011
TL;DR: A novel ATPG technique is proposed in which all fault models of interest are concurrently targeted in a single ATPG run; it is independent of any special ATPG tool or scan compression technique and requires no change or additional support in an existing ATPG system.
Abstract: ATPG tool generated patterns are a major component of test data for large SOCs. With increasing chip sizes, higher integration involving IP cores, and the need for patterns targeting multiple fault models for better defect coverage in newer technologies, the issues of adequate coverage and reasonable test data volume and application time dominate the economics of test. We address the problem of generating a compact set of test patterns across multiple fault models. Traditional approaches use separate ATPG for each fault model and minimize patterns either during pattern generation through static or dynamic compaction, or after pattern generation by simulating all patterns over all fault models for static compaction. We propose a novel ATPG technique where all fault models of interest are concurrently targeted in a single ATPG run. Patterns are generated in small intervals, each consisting of 16, 32 or 64 patterns. In each interval, fault-model-specific ATPG setups generate separate pattern sets for their respective fault models. An effectiveness criterion then selects exactly one of those pattern sets: the selected set covers untargeted faults that would have required the most additional patterns. Pattern generation intervals are repeated until the required coverage for faults of all models of interest is achieved. The sum of all selected interval pattern sets constitutes the overall test set for the DUT. Experiments on industrial circuits show pattern count reductions of 21% to 68%. The technique is independent of any special ATPG tool or scan compression technique and requires no change or additional support in an existing ATPG system.
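A skeleton of the interval-based selection loop, with faults and pattern sets reduced to abstract placeholders and the effectiveness criterion simplified to "detects the most still-undetected faults" (the paper instead prefers the set whose faults would need the most additional patterns).

```python
# Skeleton of the interval-based multi-model ATPG loop. Fault models, faults
# and pattern sets are abstract placeholders (each interval is a single
# placeholder pattern whose detected faults are drawn at random), and the
# effectiveness criterion is simplified relative to the paper.
import random

random.seed(1)
fault_models = {
    "stuck-at":   {f"sa{i}" for i in range(40)},
    "transition": {f"tr{i}" for i in range(40)},
}
undetected = {m: set(faults) for m, faults in fault_models.items()}
test_set = []

while any(undetected.values()):
    # Each model-specific ATPG setup generates one candidate interval pattern set.
    candidates = {
        model: {
            "patterns": [f"{model}-pat{len(test_set)}"],
            "detected": {m: {f for f in undetected[m] if random.random() < 0.3}
                         for m in undetected},
        }
        for model in fault_models
    }
    # Effectiveness criterion (simplified): pick the interval detecting most faults.
    best = max(candidates.values(),
               key=lambda c: sum(len(v) for v in c["detected"].values()))
    test_set.extend(best["patterns"])
    for m in undetected:
        undetected[m] -= best["detected"][m]

print(f"{len(test_set)} interval pattern sets selected")
```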

Proceedings ArticleDOI
04 Jul 2011
TL;DR: Complementary approaches for automatic test pattern generation for reversible circuits are introduced and evaluated, including a simulation-based technique as well as methods based on Boolean satisfiability and pseudo-Boolean optimization.
Abstract: Research in the domain of reversible circuits has attracted significant interest in recent years, not least because of promising applications, e.g., in quantum computation and low-power design. First physical realizations are already available, motivating the development of efficient testing methods for this kind of circuit. In this paper, complementary approaches for automatic test pattern generation for reversible circuits are introduced and evaluated. Besides a simulation-based technique, methods based on Boolean satisfiability and pseudo-Boolean optimization are applied. Experiments on large reversible circuits show the suitability of the proposed approaches with respect to different application scenarios and test goals.
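A toy sketch of the simulation-based route for a hypothetical three-line reversible circuit: for each single missing-gate fault, any input assignment whose faulty output differs from the fault-free output is a test; the SAT and pseudo-Boolean formulations are not shown.

```python
# Toy simulation-based ATPG sketch for reversible circuits: for each single
# missing-gate fault, keep any input vector whose faulty output differs from
# the fault-free output. The 3-line Toffoli/CNOT/NOT circuit below is a
# hypothetical example.
from itertools import product

# Each gate: (set of control lines, target line).
circuit = [({0, 1}, 2), ({2}, 0), (set(), 1)]

def simulate(gates, bits):
    state = list(bits)
    for controls, target in gates:
        if all(state[c] for c in controls):
            state[target] ^= 1
    return tuple(state)

def tests_for_missing_gate(circuit, missing_index):
    faulty = circuit[:missing_index] + circuit[missing_index + 1:]
    return [bits for bits in product((0, 1), repeat=3)
            if simulate(circuit, bits) != simulate(faulty, bits)]

for i in range(len(circuit)):
    print(f"gate {i} missing: detected by {tests_for_missing_gate(circuit, i)}")
```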

Proceedings ArticleDOI
01 Dec 2011
TL;DR: This paper introduces an automated test case generation approach for industrial automation applications which are specified by UML state chart diagrams and presents a prototype application of the presented approach for a sorting machine.
Abstract: The need for increasing flexibility of industrial automation system products leads to a trend of shifting functional behavior from hardware solutions to software components. This trend causes an increasing complexity of software components and a need for comprehensive and automated testing approaches to ensure the requested quality level. Nevertheless, a key task in software testing is to identify appropriate test cases, which typically requires high effort for test case generation and rework effort for adapting test cases when requirements change. Semi-automated derivation of test cases based on models, like UML, can support test case generation. In this paper we introduce an automated test case generation approach for industrial automation applications which are specified by UML state chart diagrams. In addition, we present a prototype application of the approach for a sorting machine. Major results show that state charts (a) can support efficient test case generation and (b) enable automated code generation of test cases for the industrial automation domain.

Journal ArticleDOI
TL;DR: Results show the effectiveness of the proposed test methods, which employ two test metrics (one based on a discrimination factor using normalized Euclidean distances, the other utilizing Mahalanobis distances), against three other test methods: one based on the root-mean-square value of the measured signal, one utilizing the harmonic magnitude components of the measured signal spectrum, and one based on the WT of the measured signal.
Abstract: Methods for testing both parametric and catastrophic faults in analog and mixed-signal circuits are presented. They are based on the wavelet transform (WT) of the measured signal, be it the supply current (IPS) or the output voltage (VOUT) waveform. The tolerance limit for the good or reference circuit, which affects fault detectability, is set by statistically processing data obtained from a set of fault-free circuits. In the wavelet analysis, two test metrics are introduced, one based on a discrimination factor using normalized Euclidean distances and the other utilizing Mahalanobis distances. Both metrics rely on wavelet energy computation. Simulation results from the application of the proposed test methods to known analog and mixed-signal circuit benchmarks are given. In addition, experimental results from testing actual circuits and from production-line testing of a commercial electronic circuit are presented. These results show the effectiveness of the proposed test methods employing the two test metrics against three other test methods, namely, a test method based on the root-mean-square value of the measured signal, a test method utilizing the harmonic magnitude components of the measured signal spectrum, and a method based on the WT of the measured signal.
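A synthetic sketch of the two metrics, using a hand-rolled Haar decomposition instead of a production wavelet library: per-level wavelet energies of a measured waveform are compared against fault-free statistics via a normalized Euclidean distance and a Mahalanobis distance.

```python
# Sketch of the two wavelet-energy test metrics: per-level Haar wavelet
# energies of a measured waveform are compared against statistics from
# fault-free circuits using a normalized Euclidean distance and a Mahalanobis
# distance. Signals are synthetic; a real flow would use measured IPS/VOUT
# waveforms and a full wavelet library.
import numpy as np

def haar_energies(signal, levels=3):
    s, energies = np.asarray(signal, dtype=float), []
    for _ in range(levels):
        detail = (s[0::2] - s[1::2]) / np.sqrt(2)     # Haar detail coefficients
        s = (s[0::2] + s[1::2]) / np.sqrt(2)          # Haar approximation
        energies.append(np.sum(detail ** 2))
    return np.array(energies)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
good = np.array([haar_energies(np.sin(2 * np.pi * 5 * t) + 0.05 * rng.standard_normal(256))
                 for _ in range(50)])                  # fault-free population
mu, sigma = good.mean(axis=0), good.std(axis=0)
cov_inv = np.linalg.inv(np.cov(good, rowvar=False))

dut = haar_energies(np.sin(2 * np.pi * 5 * t) + 0.4 * rng.standard_normal(256))

norm_euclidean = np.sqrt(np.sum(((dut - mu) / sigma) ** 2))
mahalanobis = np.sqrt((dut - mu) @ cov_inv @ (dut - mu))
print(f"normalized Euclidean distance: {norm_euclidean:.2f}")
print(f"Mahalanobis distance:          {mahalanobis:.2f}")
```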

Patent
Xijiang Lin, Kun-Han Tsai, Mark Kassab, Chen Wang, Janusz Rajski
31 Oct 2011
TL;DR: In this paper, a timing-aware automatic test pattern generation (ATPG) is proposed to improve the quality of a test set generated for detecting delay defects or holding time defects.
Abstract: Disclosed herein are exemplary methods, apparatus, and systems for performing timing-aware automatic test pattern generation (ATPG) that can be used, for example, to improve the quality of a test set generated for detecting delay defects or holding time defects. In certain embodiments, timing information derived from various sources (e.g. from Standard Delay Format (SDF) files) is integrated into an ATPG tool. The timing information can be used to guide the test generator to detect the faults through certain paths (e.g., paths having a selected length, or range of lengths, such as the longest or shortest paths). To avoid propagating the faults through similar paths repeatedly, a weighted random method can be used to improve the path coverage during test generation. Experimental results show that significant test quality improvement can be achieved when applying embodiments of timing-aware ATPG to industrial designs.
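A tiny sketch of the weighted random idea under the assumption that longer paths should simply be weighted by their delay: among candidate propagation paths for a fault, one is drawn with delay-proportional probability so long paths are preferred without always reusing the same path.

```python
# Tiny sketch of weighted random path selection for timing-aware ATPG: among
# candidate propagation paths for a fault, pick one with probability weighted
# by its delay, so long paths are preferred without always reusing the same
# path. Path delays are hypothetical; a real flow derives them from SDF timing.
import random

random.seed(7)
candidate_paths = {"P1": 4.8, "P2": 4.5, "P3": 2.1}   # path -> delay (ns)

def pick_path(paths):
    names, delays = zip(*paths.items())
    return random.choices(names, weights=delays, k=1)[0]

picks = [pick_path(candidate_paths) for _ in range(1000)]
for name in candidate_paths:
    print(name, picks.count(name))
```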

Proceedings ArticleDOI
01 May 2011
TL;DR: This scheme is the first of its kind for achieving guaranteed launch safety with minimal impact on test quality and test costs, which is the ultimate goal of power-aware at-speed scan test generation.
Abstract: At-speed scan testing may suffer from severe yield loss due to the launch safety problem, where test responses are invalidated by excessive launch switching activity (LSA) caused by test stimulus launching in the at-speed test cycle. However, previous low-power test generation techniques can only reduce LSA to some extent and cannot guarantee launch safety. This paper proposes a novel and practical power-aware test generation flow featuring guaranteed launch safety. The basic idea is to enhance ATPG with a unique two-phase (rescue & mask) scheme that targets the real cause of the launch safety problem, i.e., the excessive LSA in the neighboring areas (namely impact areas) around long paths sensitized by a test vector. The rescue phase reduces excessive LSA in impact areas in a focused manner, and the mask phase excludes from use in fault detection the uncertain test response at the endpoint of any long sensitized path that still has excessive LSA in its impact area after the rescue phase is executed. This scheme is the first of its kind to achieve guaranteed launch safety with minimal impact on test quality and test costs, which is the ultimate goal of power-aware at-speed scan test generation.

Proceedings ArticleDOI
01 Sep 2011
TL;DR: This paper focuses on a new approach to significantly improve the overall defect coverage for CMOS-based designs, with the final goal of eliminating any system-level test.
Abstract: This paper focuses on a new approach to significantly improve the overall defect coverage for CMOS-based designs, with the final goal of eliminating any system-level test. The methodology describes the pattern generation flow for detecting cell-internal small-delay defects caused by cell-internal resistive bridges. Results have been evaluated on 1,900 library cells of a 32-nm technology. First production test results are presented, evaluating the additional defect detections achieved with different fault models on a 45-nm design.

Proceedings ArticleDOI
Irith Pomeranz
01 May 2011
TL;DR: Experimental results demonstrate that the procedure is able to reduce the sizes of available mixed test sets significantly and modifies the types of significant numbers of tests before including them in the compacted test set.
Abstract: Test sets that consist of both broadside and skewed-load tests provide improved delay fault coverage for standard-scan circuits. This paper describes a static test compaction procedure for such mixed test sets. The unique feature of the procedure is that it can modify the type of a test (from broadside to skewed-load or from skewed-load to broadside) if this contributes to test compaction. Experimental results demonstrate that the procedure is able to reduce the sizes of available mixed test sets significantly. Moreover, it modifies the types of significant numbers of tests before including them in the compacted test set.

Journal ArticleDOI
TL;DR: The formal model uses a fixed time bound and exploits fault detection circuitry to cope with the complexity of the underlying sequential equivalence check, and it returns lower and upper bounds on the robustness of a digital circuit with respect to transient faults.
Abstract: Continuously shrinking feature sizes result in an increasing susceptibility of circuits to transient faults, e.g., due to environmental radiation. Approaches to implement fault tolerance are known. But assessing the fault tolerance of a given implementation is a hard verification problem. Here, we propose the use of formal methods to assess the robustness of a digital circuit with respect to transient faults. Our formal model uses a fixed bound in time and exploits fault detection circuitry to cope with the complexity of the underlying sequential equivalence check. As a result, a lower and an upper bound on the robustness are returned together with vulnerable components. The underlying algorithm and techniques to improve the efficiency are presented. In experiments, we evaluate the method on circuits with different fault detection mechanisms.

Proceedings ArticleDOI
01 Nov 2011
TL;DR: This work proposes an automatic test case generation approach to verify system behavior in erroneous situations using fault injection, simulating component (device) defects during runtime, and demonstrates its applicability on a laboratory plant.
Abstract: The development of PLC control software in machine and plant automation is facing increasing challenges, since more and more functionality and safety aspects are in the control software's responsibility. Reliability and robustness of reactive systems in long-term operation is being influenced by physical conditions. These aspects must be considered at an early development stage in order to reduce development costs and fulfill quality requirements at the same time. We propose an automatic test case generation approach to verify the system behavior in erroneous situations using fault injection, simulating component (device) defects during runtime. We focus on the generation of a reduced set of meaningful test cases to be executed in a simulated environment to increase reliability. The applicability is demonstrated on a laboratory plant.

Journal ArticleDOI
TL;DR: A novel technique for fault detection as well as fault location in a reversible combinational circuit under the missing-gate fault model is presented, which admits a universal test set (UTS) of size (n+1) that detects all single missing-gate faults (SMGFs), repeated-gate faults (RGFs), and partial missing-gate faults (PMGFs) in the circuit.

Proceedings ArticleDOI
06 Nov 2011
TL;DR: This paper presents an approach to improve the effectiveness of spectrum based fault localization by incorporating the relative importance of different test cases in the calculation of suspiciousness scores.
Abstract: Spectrum based fault localization techniques such as Tarantula and Ochiai calculate the suspiciousness score of a program statement using the number of failing and passing test cases that execute the statement. These techniques implicitly assume that all test cases are equally important. However, research on test case generation and selection techniques has shown that using certain test cases can lead to more effective fault localization than others. In this paper, we present an approach to improve the effectiveness of spectrum based fault localization by incorporating the relative importance of different test cases in the calculation of suspiciousness scores.
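One plausible way to fold test-case weights into Ochiai, sketched below with made-up coverage spectra: the raw counts of failing and passing tests covering a statement are replaced by sums of test weights (not necessarily the paper's exact formulation).

```python
# Sketch of folding test-case weights into the Ochiai suspiciousness score:
# raw counts of failing/passing tests covering a statement are replaced by
# sums of test weights. Coverage spectra and weights are made up, and this is
# one plausible weighting, not necessarily the paper's exact formulation.
from math import sqrt

# test -> (covered statements, passed?, weight)
tests = {
    "t1": ({1, 2, 3}, False, 0.9),
    "t2": ({1, 3},    True,  0.4),
    "t3": ({2, 3, 4}, False, 0.6),
    "t4": ({4},       True,  1.0),
}

total_fail_w = sum(w for _, passed, w in tests.values() if not passed)

def weighted_ochiai(stmt):
    fail_w = sum(w for cov, passed, w in tests.values() if stmt in cov and not passed)
    pass_w = sum(w for cov, passed, w in tests.values() if stmt in cov and passed)
    denom = sqrt(total_fail_w * (fail_w + pass_w))
    return fail_w / denom if denom else 0.0

for stmt in sorted({s for cov, _, _ in tests.values() for s in cov}):
    print(f"statement {stmt}: suspiciousness {weighted_ochiai(stmt):.2f}")
```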

Proceedings ArticleDOI
23 May 2011
TL;DR: A novel flow is proposed to determine the functional power to be used as upper and lower test power limits during at-speed delay testing, and to compare the above-mentioned test scheme with the power consumed during the functional operation mode of a given circuit.
Abstract: High power consumption during test may lead to yield loss and premature aging. In particular, excessive peak power during at-speed delay fault testing represents an important issue. In the literature, several techniques have been proposed to reduce peak power consumption during at-speed LOC or LOS delay testing. On the other hand, some experiments have shown that too much test power reduction might lead to test escapes and reliability problems. So, in order to avoid any yield loss or test escape due to power issues during test, the test power has to match the power consumed during functional mode. In the literature, some techniques have been proposed to apply test vectors that mimic functional operation from the switching activity point of view. The process consists of shifting in a test vector (at low speed) and then applying several successive at-speed clock cycles before capturing the test response. In this paper, we propose a novel flow to determine the functional power to be used as test power (upper and lower) limits during at-speed delay testing. This flow is also used to compare the above-mentioned test scheme with the power consumed during the functional operation mode of a given circuit. The proposed methodology has been validated on an Intel MC8051 microcontroller synthesized in a 65nm industrial technology.

Proceedings ArticleDOI
20 Nov 2011
TL;DR: A versatile method that enumerates all, or a user-specified number of, longest sensitisable paths in the whole circuit or through specific components; encoding the path search as a SAT instance allows the method not only to benefit from recent advances in SAT-solving technology, but also to avoid some of the drawbacks of previous structural approaches.
Abstract: We present a versatile method that enumerates all or a user-specified number of longest sensitisable paths in the whole circuit or through specific components. The path information can be used for design and test of circuits affected by statistical process variations. The algorithm encodes all aspects of the path search as an instance of the Boolean Satisfiability Problem (SAT), which allows the method not only to benefit from recent advances in SAT-solving technology, but also to avoid some of the drawbacks of previous structural approaches. Experimental results for academic and industrial benchmark circuits demonstrate the method's accuracy and scalability.

Proceedings ArticleDOI
27 Jun 2011
TL;DR: A multi-fidelity system of computer codes for the analysis and design of vehicles having extensive areas of laminar flow is under development at the NASA Langley Research Center, and two new components have been developed.
Abstract: A multi-fidelity system of computer codes for the analysis and design of vehicles having extensive areas of laminar flow is under development at the NASA Langley Research Center. The overall approach consists of the loose coupling of a flow solver, a transition prediction method and a design module using shell scripts, along with interface modules to prepare the input for each method. This approach allows the user to select the flow solver and transition prediction module, as well as run mode for each code, based on the fidelity most compatible with the problem and available resources. The design module can be any method that designs to a specified target pressure distribution. In addition to the interface modules, two new components have been developed: 1) an efficient, empirical transition prediction module (MATTC) that provides n-factor growth distributions without requiring boundary layer information; and 2) an automated target pressure generation code (ATPG) that develops a target pressure distribution that meets a variety of flow and geometry constraints. The ATPG code also includes empirical estimates of several drag components to allow the optimization of the target pressure distribution. The current system has been developed for the design of subsonic and transonic airfoils and wings, but may be extendable to other speed ranges and components. Several analysis and design examples are included to demonstrate the current capabilities of the system.

Proceedings ArticleDOI
01 Sep 2011
TL;DR: This paper presents a novel framework of faster-than-at-speed test to minimize the slack of the sensitized path for each fault, and presents methods to maximize the sensitized path delay and further reduce the pattern count under a constraint on the allowable slack size.
Abstract: Faster-than-at-speed testing is an effective approach to screen small delay defects (SDDs) and increase test quality and in-field reliability. This paper presents a novel framework of faster-than-at-speed test to minimize the slack of the sensitized path for each fault. The basic strategy is to use multiple faster-than-at-speed test timings with endpoint masking for each pattern. By performing a detailed analysis of the sensitized path delay for active faults and active endpoints in each pattern, we can minimize the slack for the detectable faults while preventing a large increase in pattern count. We also present methods to maximize the sensitized path delay and further reduce the pattern count under a constraint on the allowable slack size, instead of minimizing the slack. Experimental results for ITC'99 benchmark circuits show the effectiveness of the proposed methods in terms of slack size and sensitized path delay for detectable faults, statistical delay quality level (SDQL) and pattern count.
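A sketch of the multiple-timing strategy with made-up delays and periods: each endpoint of a pattern is observed at the fastest available test period that still exceeds its sensitized path delay plus a guard band, and is masked in every faster application of that pattern.

```python
# Sketch of the multiple-timing strategy: each endpoint of a pattern is
# observed at the fastest available test period that still exceeds its
# sensitized path delay (plus a guard band); endpoints not assigned to a
# timing are masked for that application. Delays and periods are made up.
from collections import defaultdict

available_periods = [2.0, 2.5, 3.0, 4.0]          # ns; 4.0 = nominal at-speed period
margin = 0.1                                       # ns guard band

# endpoint -> sensitized path delay (ns) for one test pattern
sensitized_delay = {"FF_a": 1.7, "FF_b": 2.3, "FF_c": 3.6}

applications = defaultdict(list)                   # test period -> observed endpoints
for endpoint, delay in sensitized_delay.items():
    fastest = min(p for p in available_periods if delay + margin <= p)
    applications[fastest].append(endpoint)

for period in sorted(applications):
    observed = applications[period]
    masked = sorted(set(sensitized_delay) - set(observed))
    slack = {ep: round(period - sensitized_delay[ep], 2) for ep in observed}
    print(f"apply pattern at {period} ns: observe {observed}, mask {masked}, slack {slack}")
```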

Journal ArticleDOI
TL;DR: A thermal-image matter-element approach used to design a circuit board signal fault diagnosis system is presented; it can achieve fast fault determination with reduced manpower.
Abstract: This paper presents a thermal-image matter-element approach used to design a circuit board signal fault diagnosis system. When a circuit element presents faults, the temperature distribution becomes skewed. Therefore, extension theory is used to build several kinds of thermal-image matter-element models of faulty circuits. According to the matter-element and correlation function, the fault type in the circuit under test is detected by analyzing the correlation degree between the typical fault models and the test circuit boards. This new method can achieve fast fault determination with reduced manpower.

Proceedings ArticleDOI
14 Mar 2011
TL;DR: A methodology to avoid power droop during scan capture without compromising at-speed test coverage is presented, based on the use of a low area overhead hardware controller to control the clock gates.
Abstract: Excessive power dissipation caused by a large amount of switching activity has been a major issue in scan-based testing. For large designs, the excessive switching activity during the launch cycle can cause severe power droop, which cannot be recovered before the capture cycle, rendering at-speed scan testing more susceptible to power droop. In this paper, we present a methodology to avoid power droop during scan capture without compromising at-speed test coverage. It is based on the use of a low-area-overhead hardware controller to control the clock gates. The methodology is ATPG (Automatic Test Pattern Generation)-independent, hence pattern generation time is not affected and pattern manipulation is not required. The effectiveness of this technique is demonstrated on several industrial designs.

Book ChapterDOI
19 Sep 2011
TL;DR: This paper describes an approach combining fault- and model-based testing, realized in the European project MOGENTES using UML state machines for representing requirements, and discusses results of its application to a use case from the automotive domain.
Abstract: In principle, automated test case generation, both from source code and from models, is a fairly evolved technology that is on its way to common use in industrial testing and quality assessment of safety-related, software-intensive systems. However, common coverage measures such as branch or MC/DC for source code, and states or transitions for state-based models, provide only very limited information about the covered (implementation) faults. Fault-based test case generation tries to improve this situation by explicitly aiming to detect faults. This paper describes an approach combining fault- and model-based testing which has been realized in the European project MOGENTES, using UML state machines for representing requirements, and discusses results of its application to a use case from the automotive domain.

Proceedings ArticleDOI
01 May 2011
TL;DR: BIST with dynamic clock showed about 19% test time reduction for the largest ISCAS89 circuits in which the hardware activity monitor and scan clock control required about 2–3% hardware overhead.
Abstract: We dynamically monitor per cycle scan activity to speed up the scan clock for low activity cycles without exceeding the specified peak power budget. The activity monitor is implemented either as on-chip hardware or through pre-simulated and stored test data. In either case a handshake protocol controls the rate of test data flow between the automatic test equipment (ATE) and device under test (DUT). The test time reduction accomplished depends upon an average activity factor α. For low α, about 50% test time reduction is analytically shown. With moderate activity, α = 0.5, simulated test data gives about 25% test time reduction for ITC02 benchmarks. For full scan s38584, the dynamic scan clock control reduced the test time by 19% when fully specified ATPG vectors were used and by 43% for vectors with don't cares. BIST with dynamic clock showed about 19% test time reduction for the largest ISCAS89 circuits in which the hardware activity monitor and scan clock control required about 2–3% hardware overhead.
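A simplified sketch of the per-cycle idea, with placeholder scan data and a crude toggle-count activity model: a shift cycle uses the fast clock whenever its estimated activity stays under the budget, otherwise the nominal clock, and the resulting shift-time reduction is reported.

```python
# Sketch of dynamic scan-clock control: per-cycle scan switching activity is
# estimated from the shift data, and a cycle is shifted with the fast clock
# whenever its activity stays below the peak-power budget, otherwise with the
# nominal clock. Scan data, the toggle-count activity model and the periods
# are simplified placeholders.

scan_in_bits = "1100101110001111010101"    # bits shifted in, one per cycle
fast_period, slow_period = 10, 25          # ns
activity_budget = 0.5                      # max fraction of chain cells toggling

def per_cycle_activity(bits, chain_length=8):
    """Toggle count inside the sliding window of the chain, normalised by chain length."""
    activities = []
    for cycle in range(len(bits)):
        window = bits[max(0, cycle - chain_length + 1): cycle + 1]
        toggles = sum(a != b for a, b in zip(window, window[1:]))
        activities.append(toggles / chain_length)
    return activities

activities = per_cycle_activity(scan_in_bits)
periods = [fast_period if a <= activity_budget else slow_period for a in activities]
baseline = slow_period * len(scan_in_bits)
print(f"shift time {sum(periods)} ns vs {baseline} ns at the nominal clock "
      f"({100 * (1 - sum(periods) / baseline):.0f}% reduction)")
```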