
Showing papers on "Automatic test pattern generation" published in 2013


Proceedings ArticleDOI
22 Sep 2013
TL;DR: The results show that the test cases prioritized using ROCKET (Prioritization for Continuous Regression Testing) provide faster fault detection and increase the regression fault detection rate, revealing 30% more faults for 20% of the test suite executed, compared to manually prioritized test cases.
Abstract: Regression testing in a continuous integration environment is bounded by tight time constraints. To satisfy these constraints and achieve testing goals, test cases must be ordered efficiently for execution. Prioritization techniques are commonly used to order test cases to reflect their importance according to one or more criteria; reduced testing time and a high fault detection rate are two such criteria. In this paper, we present a case study of a test prioritization approach, ROCKET (Prioritization for Continuous Regression Testing), to improve the efficiency of continuous regression testing of industrial video conferencing software. ROCKET orders test cases based on historical failure data, test execution time, and domain-specific heuristics. It uses a weighted function to compute test priority. Weights are higher for tests that uncovered regression faults in recent iterations of testing and that reduce the time to fault detection. The results of the study show that the test cases prioritized using ROCKET (1) provide faster fault detection, and (2) increase the regression fault detection rate, revealing 30% more faults for 20% of the test suite executed, compared to manually prioritized test cases.
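
The paper does not reproduce its exact weighting here, so the sketch below is only a minimal illustration of history-and-time-weighted prioritization in ROCKET's spirit; the `TestCase` fields, the weight values, and the execution-time bonus are assumptions, not ROCKET's actual parameters.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    name: str
    exec_time: float                                  # seconds
    failures: list = field(default_factory=list)      # 1 = failed, 0 = passed; most recent last

def priority(tc, history_weights=(0.7, 0.2, 0.1), time_budget=60.0):
    """Weighted score: recent failures count more, cheap tests get a bonus.
    Weights and the time term are illustrative, not ROCKET's actual values."""
    recent = list(reversed(tc.failures))[:len(history_weights)]
    fail_score = sum(w * f for w, f in zip(history_weights, recent))
    time_score = max(0.0, 1.0 - tc.exec_time / time_budget)
    return fail_score + 0.5 * time_score

suite = [
    TestCase("t_call_setup", 12.0, failures=[0, 1, 1]),
    TestCase("t_video_mux",  45.0, failures=[0, 0, 0]),
    TestCase("t_ui_smoke",    3.0, failures=[1, 0, 0]),
]
for tc in sorted(suite, key=priority, reverse=True):
    print(f"{tc.name}: {priority(tc):.2f}")
```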

145 citations


Proceedings ArticleDOI
06 Jul 2013
TL;DR: Weight-based Genetic Algorithms (GAs) are applied to minimize the test suite for testing a product while preserving the fault detection capability and testing coverage of the original test suite; among the three variants evaluated, Random-Weighted GA (RWGA) achieved significantly better performance.
Abstract: Test minimization techniques aim at identifying and eliminating redundant test cases from test suites in order to reduce the total number of test cases to execute, thereby improving the efficiency of testing. In the context of software product lines, we can save effort and cost in the selection and minimization of test cases for testing a specific product by modeling the product line. However, minimizing the test suite for a product requires addressing two potential issues: 1) the minimized test suite may not cover all test requirements compared with the original suite; 2) the minimized test suite may have less fault-revealing capability than the original suite. In this paper, we apply weight-based Genetic Algorithms (GAs) to minimize the test suite for testing a product, while preserving the fault detection capability and testing coverage of the original test suite. The underlying challenge is to define an appropriate fitness function that preserves the coverage of complex testing criteria (e.g., the Combinatorial Interaction Testing criterion). Based on the defined fitness function, we empirically evaluated three different weight-based GAs on an industrial case study provided by Cisco Systems, Inc. Norway. We also present results of applying the three weight-based GAs to five existing case studies from the literature. Based on these case studies, we conclude that among the three weight-based GAs, Random-Weighted GA (RWGA) achieved significantly better performance than the others.
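
A minimal sketch of what such a weighted fitness function might look like; the three objectives, their representation (`covers`, `fault_power`), and the RWGA-style random weight drawing are illustrative assumptions, not the paper's actual function.

```python
import random

def fitness(selected, tests, requirements, weights=None):
    """Weighted fitness for a candidate minimized suite ('selected' is a list
    of test indices). 'covers' and 'fault_power' are stand-ins for coverage
    of testing criteria and fault-revealing capability."""
    if weights is None:
        # RWGA flavor: draw fresh random weights, normalized to sum to 1.
        raw = [random.random() for _ in range(3)]
        weights = [x / sum(raw) for x in raw]
    w_cov, w_fault, w_size = weights
    covered = set().union(*(tests[i]["covers"] for i in selected)) if selected else set()
    coverage = len(covered) / len(requirements)
    fault = (sum(tests[i]["fault_power"] for i in selected)
             / sum(t["fault_power"] for t in tests))
    size = 1.0 - len(selected) / len(tests)      # reward smaller suites
    return w_cov * coverage + w_fault * fault + w_size * size

tests = [
    {"covers": {"r1", "r2"}, "fault_power": 3},
    {"covers": {"r2"},       "fault_power": 1},
    {"covers": {"r3"},       "fault_power": 2},
]
print(fitness([0, 2], tests, requirements={"r1", "r2", "r3"}))
```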

105 citations


Journal ArticleDOI
TL;DR: A family of test case prioritization techniques that use the dependency information from a test suite to prioritize that test suite, which increases the rate of fault detection compared to the rates achieved by the untreated order, random orders, and test suites ordered using existing "coarse-grained" techniques based on function coverage.
Abstract: Test case prioritization is the process of ordering the execution of test cases to achieve a certain goal, such as increasing the rate of fault detection. Increasing the rate of fault detection can provide earlier feedback to system developers, improving fault fixing activity and, ultimately, software delivery. Many existing test case prioritization techniques assume that tests can be run in any order. However, due to functional dependencies that may exist between some test cases (that is, one test case must be executed before another), this is often not the case. In this paper, we present a family of test case prioritization techniques that use the dependency information from a test suite to prioritize that test suite. The nature of the techniques preserves the dependencies in the test ordering. The hypothesis of this work is that dependencies between tests are representative of interactions in the system under test, and that executing complex interactions earlier is likely to increase the fault detection rate, compared to arbitrary test orderings. Empirical evaluations on six systems built toward industry use demonstrate that these techniques increase the rate of fault detection compared to the rates achieved by the untreated order, random orders, and test suites ordered using existing "coarse-grained" techniques based on function coverage.
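
One way to preserve dependencies while still favoring important tests is a priority-driven topological ordering; the sketch below assumes per-test scores and a prerequisite map, both of which are stand-ins rather than the paper's specific techniques.

```python
from heapq import heappush, heappop

def prioritize(scores, deps):
    """Order tests by descending score while never scheduling a test before
    its prerequisites. 'deps' maps test -> set of tests that must run first.
    The scoring itself (e.g. coverage, history) is left abstract."""
    indeg = {t: len(deps.get(t, ())) for t in scores}
    children = {t: [] for t in scores}
    for t, prereqs in deps.items():
        for p in prereqs:
            children[p].append(t)
    ready = []                               # max-heap via negated score
    for t, d in indeg.items():
        if d == 0:
            heappush(ready, (-scores[t], t))
    order = []
    while ready:
        _, t = heappop(ready)
        order.append(t)
        for c in children[t]:
            indeg[c] -= 1
            if indeg[c] == 0:
                heappush(ready, (-scores[c], c))
    return order

# Toy example: t3 depends on t1, so t3 cannot jump the queue past t1.
print(prioritize({"t1": 0.2, "t2": 0.9, "t3": 0.8}, {"t3": {"t1"}}))
# -> ['t2', 't1', 't3']
```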

68 citations


Proceedings ArticleDOI
11 Nov 2013
TL;DR: This work extends the search-based test generation tool EVOSUITE to use entropy in the fitness function of its underlying genetic algorithm and applies it to seven real faults, achieving a 91% average reduction in the number of diagnosis candidates that must be inspected to find the true fault.
Abstract: Spectrum-based Bayesian reasoning can effectively rank candidate fault locations based on passing/failing test cases, but the diagnostic quality highly depends on the size and diversity of the underlying test suite. As test suites in practice often do not exhibit the necessary properties, we present a technique to extend existing test suites with new test cases that optimize the diagnostic quality. We apply probability theory concepts to guide test case generation using entropy, such that the amount of uncertainty in the diagnostic ranking is minimized. Our ENTBUG prototype extends the search-based test generation tool EVOSUITE to use entropy in the fitness function of its underlying genetic algorithm, and we applied it to seven real faults. Empirical results show that our approach reduces the entropy of the diagnostic ranking by 49% on average (compared to using the original test suite), leading to a 91% average reduction in the number of diagnosis candidates that must be inspected to find the true fault.
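
The uncertainty measure at the core of this kind of approach is the Shannon entropy of the diagnostic ranking. A minimal sketch; the posterior vectors are invented, and ENTBUG's actual fitness combines this measure with its other search objectives.

```python
import math

def diagnostic_entropy(posteriors):
    """Shannon entropy (in bits) of a diagnostic ranking, given the
    normalized posterior probability of each candidate fault location.
    Lower entropy = a more decisive ranking; an entropy-driven fitness
    rewards test cases whose expected outcome reduces it."""
    return -sum(p * math.log2(p) for p in posteriors if p > 0)

# A flat ranking is maximally uncertain; a peaked one is nearly decided.
print(diagnostic_entropy([0.25, 0.25, 0.25, 0.25]))   # 2.0 bits
print(diagnostic_entropy([0.97, 0.01, 0.01, 0.01]))   # ~0.24 bits
```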

65 citations


Proceedings ArticleDOI
18 Nov 2013
TL;DR: An enhanced dynamic test compaction approach which leverages the high implicative power of modern SAT solvers and achieves high compaction; for certain benchmarks, even smaller test sets than the currently best known results are obtained.
Abstract: Automatic Test Pattern Generation (ATPG) based on Boolean Satisfiability (SAT) is a robust alternative to classical structural ATPG. Due to the powerful reasoning engines of modern SAT solvers, SAT-based algorithms typically provide high test coverage because of their ability to reliably classify hard-to-detect faults. However, a drawback of SAT-based ATPG is its limited test compaction ability. In this paper, we propose an enhanced dynamic test compaction approach which leverages the high implicative power of modern SAT solvers. Fault detection constraints are encoded into the SAT instance, and a formal optimization procedure is applied to increase the detection ability of the generated tests. Experiments show that the proposed approach achieves high compaction; for certain benchmarks, even smaller test sets than the currently best known results are obtained.
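
The core SAT-based ATPG idea (though not the paper's compaction procedure) is to encode a fault-free and a faulty copy of the circuit plus the constraint that their outputs differ; any satisfying assignment is a test for the fault. A toy sketch, assuming the python-sat (pysat) package and an invented two-gate circuit:

```python
# pip install python-sat
from pysat.solvers import Glucose3

# Toy circuit: y = a AND b; z = y OR c. Target fault: y stuck-at-0.
# Variable numbering: a=1, b=2, c=3, y_good=4, z_good=5, y_fault=6, z_fault=7
cnf = [
    [-4, 1], [-4, 2], [4, -1, -2],      # y_good  <-> a AND b (Tseitin)
    [-5, 4, 3], [5, -4], [5, -3],       # z_good  <-> y_good OR c
    [-6],                               # fault site: y_fault forced to 0
    [-7, 6, 3], [7, -6], [7, -3],       # z_fault <-> y_fault OR c
    [5, 7], [-5, -7],                   # detection: z_good != z_fault
]
with Glucose3(bootstrap_with=cnf) as solver:
    if solver.solve():
        assign = {abs(v): v > 0 for v in solver.get_model()}
        print("test pattern: a=%d b=%d c=%d"
              % (assign[1], assign[2], assign[3]))   # expect a=1 b=1 c=0
    else:
        print("fault undetectable")
```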

49 citations


Journal ArticleDOI
TL;DR: Experimental results of the differential scan attacks employed in this paper suggest that tools using X-masking and X-tolerance are vulnerable and leak information about the secret key.
Abstract: Test compression is widely used for reducing test time and cost of a very large scale integration circuit. It is also claimed to provide security against scan-based side-channel attacks. This paper pursues the legitimacy of this claim and presents scan attack vulnerabilities of test compression schemes used in commercial electronic design automation tools. A publicly available advanced encryption standard design is used and test compression structures provided by Synopsys, Cadence, and Mentor Graphics design for testability tools are inserted into the design. Experimental results of the differential scan attacks employed in this paper suggest that tools using X-masking and X-tolerance are vulnerable and leak information about the secret key. Differential scan attacks on these schemes have been demonstrated to have a best case success rate of 94.22% and 74.94%, respectively, for a random scan design. On the other hand, time compaction seems to be the strongest choice with the best case success rate of 3.55%. In addition, similar attacks are also performed on existing scan attack countermeasures proposed in the literature, thus experimentally evaluating their practical security. Finally, a suitable countermeasure is proposed and compared to the previously proposed countermeasures.

40 citations


Proceedings ArticleDOI
27 May 2013
TL;DR: This is the first automated method for efficient pattern retargeting in complex reconfigurable scan architectures such as P1687-based networks; it achieves an access time reduction of up to 88×, and of 2.4× on average, with respect to unoptimized satisfying solutions.
Abstract: Efficient access to on-chip instrumentation is a key enabler for post-silicon validation, debug, bring-up, or diagnosis. Reconfigurable scan networks, as proposed, e.g., by IEEE Std. P1687, emerge as an effective and affordable means to cope with the increasing complexity of on-chip infrastructure. To access an element in a reconfigurable scan network, a scan-in bit sequence must be generated according to the current state and structure of the network. Due to sequential and combinational dependencies, the scan pattern generation process (pattern retargeting) poses a complex decision and optimization problem. This work presents a method for scan pattern generation with reduced access time. We map the access time reduction to a pseudo-Boolean optimization problem, which enables the use of efficient solvers to exhaustively explore the search space of valid scan-in sequences. This is the first automated method for efficient pattern retargeting in complex reconfigurable scan architectures such as P1687-based networks. It supports concurrent access to multiple target scan registers (access merging) and generates reduced (short) scan-in sequences, considering all sequential and combinational dependencies. The proposed method achieves an access time reduction of up to 88×, and of 2.4× on average, with respect to unoptimized satisfying solutions.
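
Pseudo-Boolean optimization here means minimizing a linear cost over Boolean decision variables subject to clause constraints. The brute-force sketch below only illustrates that formulation on an invented three-variable toy; the paper uses a real PBO solver on the actual network structure.

```python
from itertools import product

def pbo_min(n_vars, clauses, costs):
    """Brute-force pseudo-Boolean optimization: minimize a linear cost
    subject to CNF constraints. Stands in for a real PBO solver and is
    only workable for toy instances. clauses: lists of signed 1-based
    literals; costs: per-variable cost incurred when the variable is set."""
    best, best_assign = None, None
    for bits in product([False, True], repeat=n_vars):
        val = lambda lit: bits[abs(lit) - 1] == (lit > 0)
        if all(any(val(l) for l in clause) for clause in clauses):
            cost = sum(c for b, c in zip(bits, costs) if b)
            if best is None or cost < best:
                best, best_assign = cost, bits
    return best, best_assign

# Invented retargeting flavor: x1/x2 = "route through segment 1/2",
# x3 = "extra configuration cycle". Reaching the target needs segment 1
# or 2 (clause [1, 2]); segment 2 forces the extra cycle (clause [-2, 3]).
# Costs are scan cycles; the optimum picks the cheaper valid route.
print(pbo_min(3, [[1, 2], [-2, 3]], costs=[4, 1, 5]))
# -> (4, (True, False, False))
```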

35 citations


Proceedings ArticleDOI
18 Mar 2013
TL;DR: A novel and general technique for automated test generation that combines tight bounds with incremental SAT solving; together with the testing-criterion-driven approach implemented in the prototype tool FAJITA, it enables the effective generation of test suites for container classes with rich contracts, more efficiently than other state-of-the-art tools.
Abstract: We present a novel and general technique for automated test generation that combines tight bounds with incremental SAT solving. The proposed technique uses incremental SAT to build test suites targeting a specific testing criterion, amongst various black-box and white-box criteria. As our experimental results show, the combination of tight bounds with incremental SAT, and the testing criterion driven approach implemented in our prototype tool FAJITA, enable us to effectively generate test suites for container classes with rich contracts, more efficiently than other state-of-the-art tools.
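
The payoff of incremental SAT in this setting is that one solver instance, loaded once with the encoded program and bounds, can be queried per coverage goal via assumptions, reusing learned clauses across queries instead of restarting. A sketch of that call pattern, assuming the python-sat (pysat) package; the clauses and goals are invented, not FAJITA's encoding.

```python
from pysat.solvers import Minisat22

# One encoding of the program under its bounds, loaded once...
base = [[1, 2], [-1, 3], [-2, 3]]
targets = [[3], [-3], [1, -2]]   # e.g. one assumption set per coverage goal
with Minisat22(bootstrap_with=base) as solver:
    for goal in targets:
        # ...then one cheap solve() per goal, reusing learned clauses.
        if solver.solve(assumptions=goal):
            print("test for", goal, "->", solver.get_model())
        else:
            print("goal", goal, "is infeasible under the bounds")
```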

33 citations


Journal ArticleDOI
TL;DR: Execution traces collected while running training programs on the processor under test are used to simplify mappings and functional constraint extraction for the ports of inner components, which facilitates constrained structural test generation at the gate level and automatic test instruction generation (ATIG) even for hidden control logic (HCL).
Abstract: Software-based self-testing (SBST) has been a promising method for processor testing, but the complexity of state-of-the-art processors still poses great challenges for SBST. This paper utilizes execution traces collected while running training programs on the processor under test to simplify mappings and functional constraint extraction for the ports of inner components, which facilitates constrained structural test generation at the gate level and automatic test instruction generation (ATIG) even for hidden control logic (HCL). In addition, for sequential HCL, we present a test routine generation technique based on an extended finite state machine, so that structural patterns for combinational subcircuits in the sequential HCL can be mapped into the test routines to form a test program. Experimental results demonstrate that the proposed ATIG method can achieve good structural fault coverage with compact test programs on modern processors.

33 citations


Journal ArticleDOI
TL;DR: The recent methods rely on fewer and longer test cases to reduce the overall test suite length, while the traditional methods produce more and shorter test cases.
Abstract: Context: Testing from finite state machines has been investigated due to its well-founded and sound theory as well as its practical applications. There has been recurrent interest in developing methods capable of generating test suites that detect all faults in a given fault domain. However, the proposal of new methods motivates comparison with traditional methods. Objective: We compare methods that generate complete test suites from finite state machines. The test suites produced by the W, HSI, H, SPY, and P methods are analyzed in different configurations. Method: Complete and partial machines were randomly generated with varying numbers of states, inputs, outputs, and transitions. These different configurations were used to compare test suite characteristics (number of resets, test case length) and the test suite length (i.e., the sum of the lengths of its test cases). The fault detection ratio was evaluated using mutation testing to produce faulty implementations with an extra state. Results: On average, the recent methods (H, SPY, and P) produced longer test cases but smaller test suites than the traditional methods (W, HSI). The recent methods generated test suites of similar length, though P produced slightly smaller test suites. The SPY and P methods had the highest fault detection ratios and HSI had the lowest. For all methods, there was a positive correlation between the number of resets and the test suite length, and between the test case length and the fault detection ratio. Conclusion: The recent methods rely on fewer, longer test cases to reduce the overall test suite length, while the traditional methods produce more, shorter test cases. Longer test cases correlate with the fault detection ratio, which favored SPY, though all methods achieved a ratio of over 92%.

32 citations


Book ChapterDOI
29 Sep 2013
TL;DR: This paper challenges the idea of using coverage criteria for test selection and instead proposes an approach based on fault models; a general fault model is developed and instantiated to describe existing fault models, and test case derivation is shown by example.
Abstract: Because they are comparatively easy to implement, structural coverage criteria are commonly used for test derivation in model- and code-based testing. However, there is a lack of compelling evidence that they are useful for finding faults, specifically so when compared to random testing. This paper challenges the idea of using coverage criteria for test selection and instead proposes an approach based on fault models. We define a general fault model as a transformation from correct to incorrect programs and/or a partition of the input data space. Thereby, we carry the idea of fault injection over from test assessment to test derivation. We instantiate the developed general fault model to describe existing fault models. We also show by example how to derive test cases.
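
The classic concrete instance of "fault model as a program transformation" is a mutation operator. A toy sketch of that reading; the programs and the boundary input are invented.

```python
def is_adult(age):
    return age > 17            # correct program

def is_adult_mutant(age):
    return age >= 17           # transformed (faulty) variant: ">" -> ">="

# A fault-model-driven test targets the input subdomain the transformation
# affects: here only age == 17 distinguishes the two programs.
assert is_adult(17) != is_adult_mutant(17)
print("age=17 kills the >/>= mutant")
```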

Journal ArticleDOI
TL;DR: This paper presents an efficient pattern evaluation and selection procedure for screening SDDs that are caused by physical defects and by delays added to paths by process variations and crosstalk, and demonstrates that this method sensitizes more long paths (LPs), detects more SDDs with a much smaller pattern count, and needs less CPU runtime compared with a commercial timing-aware ATPG tool.
Abstract: The population of small-delay defects (SDDs) in integrated circuits increases significantly as technology scales to 65 nm and below. Therefore, testing for SDDs is necessary to ensure the quality and reliability of high-performance integrated circuits fabricated with the latest technologies. Commercial timing-aware automatic test pattern generation (ATPG) tools have been developed for SDD detection. However, they only use static timing analysis reports in the form of the standard delay format for path-length calculation and neglect important underlying causes, such as process variations, crosstalk, and power-supply noise, which can also introduce small delays into the circuit and impact the timing of targeted paths. In this paper, we present an efficient pattern evaluation and selection procedure for screening SDDs that are caused by physical defects and by delays added to paths by process variations and crosstalk. In this procedure, the best patterns for SDDs are selected from a large repository test set. Experimental results demonstrate that our method sensitizes more long paths (LPs), detects more SDDs with a much smaller pattern count, and needs less CPU runtime compared with a commercial timing-aware ATPG tool.

Proceedings ArticleDOI
18 Nov 2013
TL;DR: The proposed ATPG reduces external test set sizes and test data volumes by 24% in comparison to those obtained by a state-of-the-art commercial ATPG for BIST-ready designs.
Abstract: In this work we consider ATPG methods tailored to BIST-ready designs to improve the compression of external tests for such designs. The proposed ATPG reduces external test set sizes and test data volumes by 24% in comparison to those obtained by a state-of-the-art commercial ATPG for BIST-ready designs.

Proceedings ArticleDOI
29 Apr 2013
TL;DR: This work proposes the first approach for automated testing of flow-based microfluidic biochips that are designed using membrane-based valves for flow control and achieves 100% coverage of faults that model defects in channels and valves.
Abstract: Recent advances in flow-based microfluidics have led to the emergence of biochemistry-on-a-chip as a new paradigm in clinical diagnostics and biomolecular recognition. However, a potential roadblock in the deployment of microfluidic biochips is the lack of test techniques to screen defective devices before they are used for biochemical analysis. Defective chips lead to repetition of experiments, which is undesirable due to high reagent cost and limited availability of samples. Prior work on fault detection in biochips has been limited to digital (“droplet”) microfluidics and other electrode-based technology platforms. We propose the first approach for automated testing of flow-based microfluidic biochips that are designed using membrane-based valves for flow control. The proposed test technique is based on a behavioral abstraction of physical defects in microchannels and valves. The flow paths and flow control in the microfluidic device are modeled as a logic circuit composed of Boolean gates, which allows us to carry out test generation using standard ATPG tools. The tests derived using the logic circuit model are then mapped to fluidic operations involving pumps and pressure meters in the biochip. Feedback from pressure meters can be compared to expected responses based on the logic circuit model, whereby the types and positions of defects are identified. We show how a fabricated biochip can be tested using the proposed method, and we achieve 100% coverage of faults that model defects in channels and valves.
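
The key abstraction is that valves in series behave like AND gates on "pressure reaches here" signals and merging channels like OR gates, so physical defects map to stuck-at faults that standard ATPG can target. A toy illustration; the topology and names are invented.

```python
# Pressure reaching a node as a Boolean function of pump and valve inputs,
# so a channel network becomes a gate-level netlist.
def pressure_at_sink(pump, valve_a_open, valve_b_open):
    branch_a = pump and valve_a_open      # valve in series = AND
    branch_b = pump and valve_b_open
    return branch_a or branch_b           # merging channels = OR

# A stuck-closed valve behaves like a stuck-at-0 fault on its AND output:
# the test (pump=1, valve_a=1, valve_b=0) propagates the difference to the
# sink's pressure meter, exactly like ATPG propagating a fault to an output.
print(pressure_at_sink(True, True, False))  # True; False if valve A stuck closed
```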

Proceedings ArticleDOI
29 Apr 2013
TL;DR: A simulation-based X-filling method, Bit-Flip, is proposed to maximize the power supply noise during PKLPG test, and the results demonstrate that the method can significantly increase effective WSA while limiting the fill rate.
Abstract: Pseudo-functional K Longest Paths Per Gate (KLPG) test (PKLPG) is proposed to generate delay tests that exercise the longest paths while having power supply noise similar to that seen during normal functional operation. Our experimental results show that PKLPG is more vulnerable to under-testing than the traditional two-cycle transition fault test. In this work, a simulation-based X-filling method, Bit-Flip, is proposed to maximize the power supply noise during PKLPG test. Given a set of partially-specified scan patterns, random filling is done and then an iterative procedure is invoked to flip some of the filled bits, to increase the effective weighted switching activity (WSA). Experimental results on both compacted and uncompacted test patterns are presented. The results demonstrate that our method can significantly increase effective WSA while limiting the fill rate.
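
A minimal sketch of the flip-and-keep loop described above; the switching-activity metric here is a crude adjacent-toggle count standing in for effective WSA, and the acceptance rule is an assumption.

```python
import random

def toggles(bits):
    """Stand-in for effective WSA: adjacent toggles in the scan vector.
    The real metric weights switching by fanout and timing windows."""
    return sum(b1 != b2 for b1, b2 in zip(bits, bits[1:]))

def bit_flip_fill(pattern, iters=200, seed=0):
    """Bit-Flip flavored X-filling: randomly fill the X (None) bits, then
    iteratively flip filled bits, keeping flips that do not decrease the
    activity estimate. Care bits (0/1 in the input) are never touched."""
    rng = random.Random(seed)
    x_pos = [i for i, b in enumerate(pattern) if b is None]
    bits = [rng.randint(0, 1) if b is None else b for b in pattern]
    if not x_pos:
        return bits, toggles(bits)
    score = toggles(bits)
    for _ in range(iters):
        i = rng.choice(x_pos)
        bits[i] ^= 1
        new = toggles(bits)
        if new >= score:
            score = new          # keep the flip
        else:
            bits[i] ^= 1         # revert
    return bits, score

pattern = [1, None, None, 0, None, 1, None, None]   # None marks an X bit
print(bit_flip_fill(pattern))
```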

Proceedings ArticleDOI
08 Jul 2013
TL;DR: A smart test controller is presented that is able to prevent all known scan attacks; it requires no additional signals, is transparent to the designer, and requires no modifications of the test protocol and procedure.
Abstract: Structural testing is an important step in the production of integrated circuits. The most common DfT technique is the insertion of scan chains, which increases the observability and the controllability of the circuit's internal nodes. Nevertheless, malicious users can use the scan chains to observe confidential data stored in devices implementing cryptographic primitives. Therefore, scan chains inserted in secure ICs can be considered a source of information leakage. Several countermeasures exist to cope with this type of problem. However, they either introduce high area overheads or require modifications to the original design or the test protocol. In this paper we present a smart test controller that is able to prevent all known scan attacks. The controller does not require any additional signals, it is transparent to the designer, and it does not require any modifications of the test protocol and procedure. Moreover, it introduces a very small area overhead.

Proceedings ArticleDOI
18 Mar 2013
TL;DR: An improved inductive fault analysis approach is used to locate potential faults at the layout level and calculate the relative probability of each fault, which can be used to improve fault coverage or the defect resilience of the circuit.
Abstract: High test quality can be achieved through defect-oriented testing using an analog fault modeling approach. However, this approach is computationally demanding and typically hard to apply to large-scale circuits. In this work, we use an improved inductive fault analysis approach to locate potential faults at the layout level and calculate the relative probability of each fault. Our proposed method yields actionable results such as the fault coverage of each test, potential faults, and the probability of each fault. We show that the computational requirement can be significantly reduced by incorporating fault probabilities. These results can be used to improve fault coverage or to improve the defect resilience of the circuit.

Proceedings ArticleDOI
18 Nov 2013
TL;DR: A concolic testing approach to generating post-silicon tests with virtual prototypes: device states under test are identified from concrete executions of a virtual prototype based on the concept of a device transaction, the virtual prototype is symbolically executed from these device states to generate tests, and the generated tests are issued concretely to the silicon device.
Abstract: Post-silicon validation is a crucial stage in the system development cycle. To accelerate post-silicon validation, high-quality tests should be ready before the first silicon prototype becomes available. In this paper, we present a concolic testing approach to generation of post-silicon tests with virtual prototypes. We identify device states under test from concrete executions of a virtual prototype based on the concept of device transaction, symbolically execute the virtual prototype from these device states to generate tests, and issue the generated tests concretely to the silicon device. We have applied this approach to virtual prototypes of three network adapters to generate their tests. The generated test cases have been issued to both virtual prototypes and silicon devices. We observed significant coverage improvement with generated test cases. Furthermore, we detected 20 inconsistencies between virtual prototypes and silicon devices, each of which reveals a virtual prototype or silicon device defect.

Proceedings ArticleDOI
29 Apr 2013
TL;DR: This work presents for the first time a framework that yields provably optimal test cubes by using the theory of quantified Boolean formulas (QBF) and demonstrates the quality gain of the proposed method.
Abstract: Circuits that employ test pattern compression rely on test cubes to achieve high compression ratios. The fewer inputs of a test pattern are specified, the better it can be compacted, and hence the lower the test application time. Although there exist previous approaches to generate such test cubes, none of them are optimal. We present for the first time a framework that yields provably optimal test cubes by using the theory of quantified Boolean formulas (QBF). Extensive comparisons with previous methods demonstrate the quality gain of the proposed method.

Proceedings ArticleDOI
08 Jul 2013
TL;DR: A novel at-speed test technique called Pulse-Vanishing test (PV-test), in which a short-duration pulse signal is applied at the driver end of an interposer wire under test; failure of the pulse to reach the other end indicates the presence of a delay fault.
Abstract: Testing the speed of post-bond interposer wires in a 2.5-D stacked IC is essential for silicon debugging, yield learning, and even for fault tolerance. In this paper, we present a novel at-speed test technique called Pulse-Vanishing test (PV-test), in which a short-duration pulse signal is applied to an interposer wire under test at the driver end. If the pulse signal can successfully propagate through the interposer wire and reach the other end, then the interposer wire is considered fault-free. Otherwise, it indicates the presence of a delay fault. This new test technique has several technical merits. For example, the Design-for-Testability (DfT) circuit for an interposer wire is similar to a boundary scan cell and can be controlled through the scan chain. Also, it can be easily adapted to perform at-speed Built-In Self-Test (BIST) supporting on-the-spot diagnosis.

Proceedings ArticleDOI
18 Nov 2013
TL;DR: PACOST combines concrete simulation and symbolic simulation in a path constraint solver to generate a set of valid input vectors for exploring different simulation paths, followed by next state selection considering abstract distances.
Abstract: Test generation for hard-to-reach states has been one of the hardest tasks in functional verification. In this paper, we present PACOST, a PAth Constraint Solving based Test generation method which operates in an abstraction-guided simulation framework to cover hard-to-reach states. PACOST combines concrete simulation and symbolic simulation in a path constraint solver to generate a set of valid input vectors for exploring different simulation paths, followed by next-state selection considering abstract distances. In addition, two backtracking strategies are proposed to alleviate the dead-end problem and ensure fast convergence to the target state. Experimental results show that PACOST is effective in covering hard-to-reach states.

Journal ArticleDOI
TL;DR: The experimental results confirm that the proposed method can effectively generate test data that not only traverse the target path but also detect faults lying in it.
Abstract: The aim of software testing is to find faults in a program under test, so generating test data that can expose the faults of a program is very important. To date, current studies on generating test data for path coverage do not perform well in detecting low probability faults on the covered path. The automatic generation of test data for both path coverage and fault detection using genetic algorithms is the focus of this study. To this end, the problem is first formulated as a bi-objective optimization problem with one constraint, whose objectives are the number of faults detected in the traversed path and the risk level of these faults, and whose constraint is that the traversed path must be the target path. An evolutionary algorithm is employed to solve the formulated model, and several types of fault detection methods are given. Finally, the proposed method is applied to several real-world programs, and compared with a random method and an evolutionary optimization method in the following three aspects: the number of generations and the time consumption needed to generate desired test data, and the success rate of detecting faults. The experimental results confirm that the proposed method can effectively generate test data that not only traverse the target path but also detect faults lying in it.
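
A common GA treatment of such a constrained bi-objective model is to scalarize the objectives and enforce the path constraint as a hard penalty; the sketch below assumes that treatment (the weights, the `execute` stub, and the penalty handling are illustrative, not necessarily the paper's exact formulation).

```python
def fitness(candidate, target_path, execute, w_count=1.0, w_risk=0.5):
    """Scalarized stand-in for the bi-objective model. 'execute' runs the
    program on a candidate input and returns (path, faults), where faults
    is a list of (fault_id, risk_level). Inputs off the target path get
    zero fitness (the constraint as a hard penalty)."""
    path, faults = execute(candidate)
    if path != target_path:
        return 0.0
    return w_count * len(faults) + w_risk * sum(risk for _, risk in faults)

# Hypothetical instrumented program: branch on x, report a low-probability
# fault with a risk level when x crosses a boundary.
def execute(x):
    path = ("b1", "b2") if x > 0 else ("b1", "b3")
    faults = [("f_overflow", 3)] if x > 100 else []
    return path, faults

print(fitness(5, ("b1", "b2"), execute))     # on-path, no fault  -> 0.0
print(fitness(150, ("b1", "b2"), execute))   # on-path, risky fault -> 2.5
```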

Journal ArticleDOI
TL;DR: An adaptive test flow for mixed-signal circuits that aims at optimizing the test set on a per-device basis so that more test resources can be devoted to marginal devices while passing devices that are not marginal with less testing is presented.
Abstract: We present an adaptive test flow for mixed-signal circuits that aims at optimizing the test set on a per-device basis, so that more test resources can be devoted to marginal devices while devices that are not marginal are passed with less testing. Cumulative statistics of the process are monitored using a differential entropy-based approach and updated only when necessary. Thus, process shift is captured and continuously incorporated into the analysis. We also include provisions to identify potentially defective devices and test them more extensively, since these devices do not conform to the learned collective information. We conduct experiments on a low-noise amplifier circuit in simulation, and apply our techniques to production data from two distinct industrial circuits. Both the simulation results and the results on large-scale production data show that adaptive test provides the best tradeoff between test time and test quality, as measured in terms of defective parts per million.

Journal ArticleDOI
TL;DR: An iterative approach to DBFDI is presented that is capable of recovering the model, detecting the fault pertaining to the particular cause of the model loss, and providing accurate unfolding-in-time of the finer details of the fault, thereby completing the picture of fault detection and estimation for the system under test.

Journal ArticleDOI
TL;DR: An algorithm that selects a small number of test patterns for small delay defects from a large N-detect test set using static upper and lower bound analysis to quickly estimate the sensitized path length so that the central processing unit (CPU) time can be reduced.
Abstract: This letter proposes an algorithm that selects a small number of test patterns for small delay defects from a large N-detect test set. The algorithm uses static upper and lower bound analysis to quickly estimate the sensitized path length, so that the central processing unit (CPU) time can be reduced. By ignoring easy faults, only a partial fault dictionary, instead of a complete fault dictionary, is built for test pattern selection. Experimental results on large International Test Conference benchmark circuits show that, with very similar quality, the selected test set is 46% smaller and requires 42% less CPU time than timing-aware automated test pattern generation (ATPG). With the proposed selection algorithm, small delay defect test sets are no longer very expensive to apply.
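
The bound analysis pays off because a comparison against a path-length threshold is often decided by the static interval alone, so exact (slow) tracing is reserved for ambiguous patterns. A sketch of that pruning pattern; the pattern tuples, the threshold, and the exact-tracing stub are invented.

```python
def exact_path_length(name):
    """Stand-in for expensive timing-aware path tracing."""
    return {"p3": 0.92}.get(name, 0.0)

def select_for_sdd(patterns, threshold):
    """Each pattern carries a static (lower, upper) estimate of the longest
    sensitized path, as a fraction of the clock period. Bounds that already
    decide the comparison avoid exact tracing entirely."""
    keep, ambiguous = [], []
    for name, lo, hi in patterns:
        if lo >= threshold:
            keep.append(name)          # surely long enough: keep, no tracing
        elif hi < threshold:
            pass                       # surely too short: drop, no tracing
        else:
            ambiguous.append(name)     # bounds straddle the threshold
    keep += [n for n in ambiguous if exact_path_length(n) >= threshold]
    return keep

pats = [("p1", 0.95, 0.99), ("p2", 0.40, 0.60), ("p3", 0.80, 0.97)]
print(select_for_sdd(pats, threshold=0.90))   # -> ['p1', 'p3']
```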

Proceedings ArticleDOI
18 Nov 2013
TL;DR: A formal hybridization that combines Register Transfer Level (RTL) stochastic swarm-intelligence-based test vector generation with the Verilator Verilog-to-C++ source-to-source compiler is presented, to maintain high speed of execution while improving metric performance.
Abstract: Although stochastic search techniques have shown promise in test generation and design validation, they often fail when a specific, random-resistant sequence of vectors is required to exercise a target. In order to combat this, deterministic techniques are added, resulting in a hybrid solution that maintains high speed of execution while improving metric performance. This paper presents a formal hybridization that combines Register Transfer Level (RTL) stochastic swarm-intelligence-based test vector generation with the Verilator Verilog-to-C++ source-to-source compiler. Verilator generates a fast cycle-accurate C++ simulation unit for Verilog descriptions and provides instrumentation for branch and toggle coverage metrics. This RTL model can also be used to generate a bounded model checking (BMC) instance. During the stochastic search, the bounded model checker is launched to expand the unexplored search frontier and aid in the navigation of narrow paths. Additionally, an inductive reachability test is applied in order to eliminate unreachable branches from the search space. These additions have significantly improved branch coverage, reaching 100% in several ITC99 benchmarks. Additionally, a substantial speedup is achieved compared to previous, purely stochastic functional test generation methods.

Proceedings ArticleDOI
01 Sep 2013
TL;DR: Results on industrial designs show that high quality compressed ATPG patterns can be efficiently re-applied in a very low-pin SoC test environment with very low overhead.
Abstract: IP cores that are embedded in SoCs usually include embedded test compression hardware. When multiple cores are embedded in a SoC with limited tester-contacted pins, there is a need for a structured test-access mechanism (TAM) architecture that allows compressed test data stimuli and responses to be efficiently distributed to the embedded cores. This paper presents SmartScan, a TAM architecture that is based on time-domain multiplexing of compressed data. Results on industrial designs show that high quality compressed ATPG patterns can be efficiently re-applied in a very low-pin SoC test environment with very low overhead.

30 May 2013
TL;DR: A procedure to identify circuit sites where a possible Hardware Trojan may be easily inserted and to automatically generate test patterns able to excite these sites is proposed.
Abstract: Hardware Trojans are malicious alterations to a circuit. These modifications can be inserted either during the design phase or during the fabrication process. Due to the diversity of Hardware Trojans (HTs), detecting and/or locating them are challenging tasks. Numerous approaches have been proposed to address this problem. Methods based on logic testing consist of trying to activate potential Hardware Trojans in order to detect erroneous outputs during simulation. However, traditional ATPG testing may not be sufficient to detect Hardware Trojans, which are stealthy in nature, i.e., mostly inactive unless triggered by a rare value. The activation of a Hardware Trojan is therefore a major concern. In this paper, we propose a procedure to identify circuit sites where a possible HT may be easily inserted. The selection of the sites is based on the assumption that the HT is triggered (i) by signals that can take rare values, (ii) in paths that are not critical, and (iii) by combining multiple gates that are close to one another in the circuit's layout and close to available space. This identification is then used to automatically generate test patterns able to excite these sites.
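
Criterion (i), rare-valued trigger signals, is commonly estimated by random simulation of the netlist; a sketch of that step (the probability threshold, sample count, and the `netlist_eval` stand-in are assumptions, not the paper's procedure).

```python
import random

def rare_signals(netlist_eval, n_inputs, threshold=0.05, samples=10_000, seed=0):
    """Estimate, by random simulation, how often each internal signal is 1;
    signals stuck near 0 or 1 are candidate Trojan trigger sites.
    netlist_eval maps an input vector to a dict of internal signal values."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(samples):
        vec = [rng.randint(0, 1) for _ in range(n_inputs)]
        for sig, val in netlist_eval(vec).items():
            counts[sig] = counts.get(sig, 0) + val
    return {s: c / samples for s, c in counts.items()
            if c / samples < threshold or c / samples > 1 - threshold}

# Toy netlist: 'deep_and' is 1 only when all eight inputs are 1 (p = 1/256),
# so it is flagged; the balanced 'xor_mix' is not.
def netlist_eval(v):
    return {"xor_mix": v[0] ^ v[1], "deep_and": int(all(v))}

print(rare_signals(netlist_eval, n_inputs=8))
```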

Proceedings ArticleDOI
18 Nov 2013
TL;DR: This technique models correlations between basic gate types in the standard cell library, paths, and ring oscillators (ROs) under variations, improving the analysis of path delays by accounting for actual silicon variations.
Abstract: Current ATPGs rely on timing analysis tools to identify critical paths for generating path-delay fault (PDF) test patterns. However, the model-based conventional static timing analysis (STA) and statistical static timing analysis (SSTA) tools are not capable of considering the actual silicon variations. In this paper, we present a timing analysis technique that improves the analysis of path delays by considering actual silicon variations. This technique models correlations, under variations, between basic gate types in the standard cell library, paths, and ring oscillators (ROs). Post-silicon measurements on the ROs help predict the actual delay distribution. Paths are then ranked and critical path-delay faults are identified accordingly. This technique is more accurate than STA in the conventional PDF test flow, and faster and more accurate than the SSTA method. The ranking results show that our flow improves on the rankings obtained using STA and SSTA, with ≥15% and ≥52% test cost (PDF pattern count) reductions, respectively.

Journal ArticleDOI
TL;DR: A unified capture scheme is proposed to generate programmable clock signals for the detection of both SDDs and circuit aging and the proposed aging-resistant design method enables the offline test circuit to be reused in online operations.
Abstract: Small delay defects (SDDs) and aging-induced circuit failures are both prominent reliability concerns for nanoscale integrated circuits. Faster-than-at-speed testing is effective for SDD detection in manufacturing testing, and is typically implemented by designing a suite of test signal generation circuits on the chip. Meanwhile, the integration of online aging sensors is becoming attractive for monitoring aging-induced delay degradation at runtime. These design requirements, if implemented separately, will increase the complexity of a reliable design and consume more die area. In this paper, a unified capture scheme is proposed to generate programmable clock signals for the detection of both SDDs and circuit aging. Our motivation arises from the observation that SDD detection and online aging prediction both need to capture the circuit response ahead of the functional clock. The proposed aging-resistant design method enables the offline test circuit to be reused in online operation. The reversed short channel effect is also exploited to make the underlying circuit resilient to process variations. The proposed scheme is validated by intensive HSPICE simulations. Experimental results demonstrate its effectiveness in terms of low area, power, and performance overheads.