Proceedings ArticleDOI

A stochastic pattern generation and optimization framework for variation-tolerant, power-safe scan test

TL;DR: It is argued that false delay test failures can be avoided by generating "safe" patterns that are tolerant to on-chip variations; the proposed framework uses process variation information, power grid topology, and regional constraints on switching activity.
Abstract: Process variation is an increasingly dominant phenomenon affecting both power and performance in sub-100 nm technologies. Cost considerations often do not permit over-designing the power supply infrastructure for test mode, considering the worst-case scenario. Test application must not over-exercise the power supply grids, lest the tests damage the device or lead to false test failures. The problem of debugging a delay test failure can therefore be highly complex. We argue that false delay test failures can be avoided by generating "safe" patterns that are tolerant to on-chip variations. A statistical framework for power-safe pattern generation is proposed, which uses process variation information, power grid topology, and regional constraints on switching activity. Experimental results are provided on benchmark circuits to demonstrate the effectiveness of the framework.
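
Read as a flow, the abstract describes a generate-check-refill loop. The sketch below is a hypothetical Python rendering of that loop, not the paper's implementation; next_pattern, is_power_safe, and refill_dont_cares stand in for the ATPG engine, the statistical power-safety check, and don't-care re-filling.

```python
def generate_power_safe_tests(next_pattern, is_power_safe, refill_dont_cares,
                              max_refills=10):
    """Hypothetical generate-check-refill loop; all three callables are
    assumed interfaces, not the paper's actual API."""
    safe_patterns = []
    pattern = next_pattern()
    while pattern is not None:
        for _ in range(max_refills):
            if is_power_safe(pattern):            # statistical check against the
                safe_patterns.append(pattern)     # regional switching constraints
                break
            pattern = refill_dont_cares(pattern)  # try a different X-fill
        # patterns that never pass the check are dropped (or re-targeted)
        pattern = next_pattern()
    return safe_patterns
```
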
Citations
Proceedings ArticleDOI
24 Nov 2008
TL;DR: CTX effectively reduces launch switching activity, and thus yield-loss risk, even with the small number of don't-care (X) bits available under test compression, without any impact on test data volume, fault coverage, performance, or circuit design.
Abstract: At-speed scan testing is susceptible to yield loss risk due to power supply noise caused by excessive launch switching activity. This paper proposes a novel two-stage scheme, namely CTX (Clock-Gating-Based Test Relaxation and X-Filling), for reducing switching activity when the test stimulus is launched. Test relaxation and X-filling are conducted (1) to make as many FFs as possible inactive by disabling the corresponding clock-control signals of the clock-gating circuitry in Stage 1 (Clock-Disabling), and (2) to make as many of the remaining active FFs as possible have equal input and output values in Stage 2 (FF-Silencing). CTX effectively reduces launch switching activity, and thus yield-loss risk, even with the small number of don't-care (X) bits available under test compression, without any impact on test data volume, fault coverage, performance, or circuit design.
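
As a concrete illustration of the two stages, here is a minimal sketch assuming a simplified netlist view in which each flip-flop exposes its clock-gating enable bit and its next-state bit; the tuple layout and names are hypothetical.

```python
def ctx_fill(flip_flops):
    """Minimal sketch of CTX's two stages, where each FF is a
    (name, enable_bit, next_val, cur_val) tuple and 'X' marks a don't-care.
    Returns a dict of X-bit assignments."""
    fill = {}
    for name, enable_bit, next_val, cur_val in flip_flops:
        if enable_bit == 'X':
            # Stage 1 (Clock-Disabling): gate off the FF so it cannot toggle.
            fill[('enable', name)] = 0
        elif next_val == 'X':
            # Stage 2 (FF-Silencing): give the FF its current value, so input
            # equals output and the launch produces no transition.
            fill[('data', name)] = cur_val
    return fill
```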

39 citations

Proceedings ArticleDOI
25 May 2008
TL;DR: This paper proposes a novel and practical capture-safe test generation scheme, featuring reliable capture-safety checking and effective capture-safety improvement by combining X-bit identification & X-filling with low-launch-switching-activity test generation.
Abstract: Capture-safety, defined as the avoidance of any timing error due to unduly high launch switching activity in capture mode during at-speed scan testing, is critical for avoiding test-induced yield loss. Although point techniques are available for reducing capture IR-drop, there is a lack of complete capture-safe test generation flows. The paper addresses this problem by proposing a novel and practical capture-safe test generation scheme, featuring (1) reliable capture-safety checking and (2) effective capture-safety improvement by combining X-bit identification & X-filling with low-launch-switching-activity test generation. This scheme is compatible with existing ATPG flows, and achieves capture-safety with no changes in the circuit-under-test or the clocking scheme.
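
Read as pseudocode, the two features combine into a check-and-improve iteration. The sketch below is one plausible shape for such a loop; check_capture_safety, identify_x_bits, and low_power_fill are assumed interfaces, not the paper's API.

```python
def make_capture_safe(pattern, check_capture_safety, identify_x_bits,
                      low_power_fill, max_iters=5):
    """Iteratively verify capture-safety; on failure, free up bits that are
    not needed for fault detection and re-fill them to cut launch switching."""
    for _ in range(max_iters):
        if check_capture_safety(pattern):
            return pattern
        x_bits = identify_x_bits(pattern)       # bits not needed for detection
        pattern = low_power_fill(pattern, x_bits)
    return None                                 # could not be made capture-safe
```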

32 citations


Cites background or methods or result from "A stochastic pattern generation and..."

  • ...Its goal is to achieve capture-safety with no fault coverage loss after excluding capture-undetectable faults, less test data inflation if any, and minimal ATPG change....

  • ...On the other hand, launch switching activity can be estimated by toggle constraint metrics, such as global toggle constraint (GTC), global instantaneous toggle constraint (GITC), and regional instantaneous toggle constraint (RITC) [5]....

  • ...It is also referred to as supply-voltage-noise-safety [4] or power-safety [5]....

  • ...LCP (Low-Capture-Power) ATPG: these techniques reduce launch switching activity by carefully determining logic values (0 or 1) for fault detection during test generation, by adding more constraints to conventional ATPG algorithms [4] or by employing new ATPG algorithms [5, 10]....

  • ...Since the cost of directly analyzing path delay impact is prohibitive, realistic capture-safety checking often uses indirect metrics to estimate IR-drop [4] or launch switching activity [5]....

Journal ArticleDOI
TL;DR: Experimental results on ISCAS'89 and ITC'99 benchmark circuits show that an average of 75% of the faults originally detected only by power-risky patterns can be detected by refining power-safe patterns, and that most of the remaining faults can be detected by the low-power test generation process.
Abstract: During an at-speed scan-based test, excessive capture power may cause significant current demand, resulting in the IR-drop problem and unnecessary yield loss. Many methods address this problem by reducing the switching activities of power-risky patterns. These methods may not be efficient when the number of power-risky patterns is large or when some of the patterns require extremely high power. In this paper, we propose discarding all power-risky patterns and starting with power-safe patterns only. Our test generation procedure includes two processes, namely, test pattern refinement and low-power test pattern regeneration. The first process is used to refine the power-safe patterns to detect faults originally detected only by power-risky patterns. If some faults are still undetected after this process, the second process is applied to generate new power-safe patterns to detect these faults. The patterns obtained using the proposed procedure are guaranteed to be power-safe for the given power constraints. To the best of our knowledge, this is the first method that refines only the power-safe patterns to address the capture power problem. Experimental results on ISCAS'89 and ITC'99 benchmark circuits show that an average of 75% of faults originally detected only by power-risky patterns can be detected by refining power-safe patterns and that most of the remaining faults can be detected by the low-power test generation process. Furthermore, the required test data volume can be reduced by 12.76% on average with little or no fault coverage loss.
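
The two-process procedure can be condensed into a few lines. A hedged sketch, assuming per-pattern fault coverage is available, with refine and regenerate_low_power standing in for the paper's two processes:

```python
def power_safe_test_generation(patterns, is_power_safe, faults_of,
                               refine, regenerate_low_power):
    # Discard every power-risky pattern up front; keep only safe ones.
    safe  = [p for p in patterns if is_power_safe(p)]
    risky = [p for p in patterns if not is_power_safe(p)]
    covered = set().union(*(faults_of(p) for p in safe))
    missing = set().union(*(faults_of(p) for p in risky)) - covered
    # Process 1: refine the power-safe patterns to pick up the missing faults.
    refined, still_missing = refine(safe, missing)
    # Process 2: generate brand-new power-safe patterns for the remainder.
    return refined + regenerate_low_power(still_missing)
```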

27 citations

Proceedings ArticleDOI
05 Nov 2012
TL;DR: This paper addresses the issue of vulnerability to IR-drop-induced yield loss in nano-scale designs with a novel per-cell dynamic IR- drop estimation method that achieves both high accuracy and high time-efficiency.
Abstract: As operating frequencies rise and supply voltages shrink in nano-scale designs, vulnerability to IR-drop-induced yield loss becomes increasingly apparent. It is therefore necessary to consider the delay-increase effect of IR-drop during at-speed scan testing. However, precise IR-drop analysis consumes a significant amount of time. This paper addresses this issue with a novel per-cell dynamic IR-drop estimation method. Instead of performing time-consuming IR-drop analysis for each pattern one by one, the proposed method uses a global cycle-average power profile for each pattern and dynamic IR-drop profiles for a few representative patterns, so the total computation time is effectively reduced. Experimental results on benchmark circuits demonstrate that the proposed method achieves both high accuracy and high time-efficiency.
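
A minimal sketch of the scaling idea, assuming the representative profiles are pre-computed with a full IR-drop analysis; the nearest-power selection rule and the linear scaling are illustrative simplifications, and the paper's exact model may differ.

```python
def pick_representative(avg_power, representatives):
    """representatives: list of (rep_avg_power, rep_per_cell_ir_profile);
    choose the one whose cycle-average power is closest to the pattern's."""
    return min(representatives, key=lambda rep: abs(rep[0] - avg_power))

def estimate_per_cell_ir_drop(avg_power, representatives):
    """Scale the representative's simulated per-cell IR-drop profile by the
    ratio of global cycle-average powers, instead of re-running the full
    (slow) dynamic IR-drop analysis for this pattern."""
    rep_power, rep_profile = pick_representative(avg_power, representatives)
    scale = avg_power / rep_power
    return {cell: drop * scale for cell, drop in rep_profile.items()}
```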

27 citations


Cites background or methods or result from "A stochastic pattern generation and..."

  • ...Our experiments seem to be inconsistent with several IR-drop analyses that consider power consumption in some restricted local regions [8, 10-13]....

  • ...Techniques in [10] and [11] consider power consumption for local regions, since locally high power consumption tends to affect IR-drop at a particular region....

  • ...In the technique [11], a circuit layout is divided into multiple regions and a test pattern is evaluated based not only on global metrics such as the global toggle constraint (GTC), which limits the toggle count in the whole circuit throughout the test cycle, and the global instantaneous toggle constraint (GITC), which limits the toggle count in the whole circuit at any time instant, but also on a local metric, the regional instantaneous toggle constraint (RITC), which limits the toggle count in each region at any time instant (see the sketch after this list)....

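To make the three metrics concrete, here is a small sketch. The data layout (a cells-by-time 0/1 toggle matrix plus a per-cell region id) and the limit values are assumptions for illustration, not the paper's data structures.

```python
import numpy as np

def violates_toggle_constraints(toggles, region, gtc, gitc, ritc):
    """toggles: (cells x time-steps) 0/1 matrix of switching events;
    region: per-cell region-id array; gtc/gitc/ritc: design-specific limits."""
    if toggles.sum() > gtc:                    # GTC: whole circuit, whole cycle
        return True
    if toggles.sum(axis=0).max() > gitc:       # GITC: whole circuit, one instant
        return True
    for r in np.unique(region):                # RITC: each region, one instant
        if toggles[region == r].sum(axis=0).max() > ritc:
            return True
    return False
```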

Proceedings ArticleDOI
18 Jan 2010
TL;DR: In this article, a new weight assignment scheme for logic switching activity was proposed, which enhances the IR-drop assessment capability of the existing weighted switching activity (WSA) model by including the power grid network structure information.
Abstract: For two-pattern at-speed scan testing, the excessive power supply noise at the launch cycle may cause the circuit under test to malfunction, leading to yield loss. This paper proposes a new weight assignment scheme for logic switching activity; it enhances the IR-drop assessment capability of the existing weighted switching activity (WSA) model. By including the power grid network structure information, the proposed weight assignment better reflects the regional IR-drop impact of each switching event. For ATPG, such comprehensive information is crucial in determining whether a switching event burdens the IR-drop effect. Simulation results show that, compared with previous weight assignment schemes, the estimated regional IR-drop profiles better correlate with those generated by commercial tools.
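
A rough sketch of the idea, with an invented weight formula for illustration only: the classic fanout-driven WSA weight is augmented by a term that grows with the cell's effective resistance to the supply pads, so switching events in weakly supplied corners of the grid count for more.

```python
def grid_aware_weights(cells, fanout, r_path_to_pad, alpha=1.0, beta=1.0):
    """Hypothetical weight assignment: combine the classic fanout-based WSA
    weight with a power-grid term that grows with the effective resistance
    between the cell's tap point and the supply pads."""
    return {c: alpha * fanout[c] + beta * r_path_to_pad[c] for c in cells}

def regional_wsa(switching_cells, weights, region_of):
    """Per-region weighted switching activity for one launch cycle."""
    wsa = {}
    for cell in switching_cells:               # cells that switch at launch
        r = region_of[cell]
        wsa[r] = wsa.get(r, 0.0) + weights[cell]
    return wsa
```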

21 citations


Cites background or methods from "A stochastic pattern generation and..."

  • ...Since at-speed scan testing is being challenged by the yield loss problem [15, 11], many techniques have been proposed to reduce the launch cycle SA by, for example, power-aware ATPG [12, 1, 2, 5], X-filling [17, 13, 3, 19], or design-for-test (DfT) [14, 16]....

  • ...Techniques in [18, 5, 7] proposed to divide cells in a chip into groups according to the layout....

  • ...The partitioning criterion is to keep the number of cells per group at about 20, as in [5]....

References
Book
01 Jan 1990
TL;DR: The new edition of Breuer and Friedman's Diagnosis and Reliable Design of Digital Systems offers comprehensive and state-of-the-art treatment of both testing and testable design.
Abstract: For many years, Breuer and Friedman's Diagnosis and Reliable Design of Digital Systems was the most widely used textbook in digital system testing and testable design. Now, Computer Science Press makes available a new and greatly expanded edition. Incorporating a significant amount of new material related to recently developed technologies, the new edition offers comprehensive and state-of-the-art treatment of both testing and testable design.

2,758 citations


"A stochastic pattern generation and..." refers background in this paper

  • ...When there are several candidate gates in the D-frontier [22], these gates are evaluated based on the size of the timing windows at their outputs, and paObjective() selects the candidate gate having the least timing window size (see the sketch below)....

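The quoted selection rule is easy to state in code. The sketch below assumes a timing_window(gate) query returning (earliest, latest) transition times and is illustrative only:

```python
def pa_objective(d_frontier, timing_window):
    """Among D-frontier gates, prefer the one whose output has the smallest
    timing window (latest minus earliest possible transition), i.e. the least
    timing uncertainty under process variation."""
    def window_size(gate):
        earliest, latest = timing_window(gate)
        return latest - earliest
    return min(d_frontier, key=window_size)
```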

Journal ArticleDOI
Goel
TL;DR: PODEM (path-oriented decision making) is a new test generation algorithm for combinational logic circuits that uses an implicit enumeration approach analogous to that used for solving 0-1 integer programming problems and is significantly more efficient than DALG over the general spectrum of combinational logic circuits.
Abstract: The D-algorithm (DALG) is shown to be ineffective for the class of combinational logic circuits that is used to implement error correction and translation (ECAT) functions. PODEM (path-oriented decision making) is a new test generation algorithm for combinational logic circuits. PODEM uses an implicit enumeration approach analogous to that used for solving 0-1 integer programming problems. It is shown that PODEM is very efficient for ECAT circuits and is significantly more efficient than DALG over the general spectrum of combinational logic circuits. A distinctive feature of PODEM is its simplicity when compared to the D-algorithm. PODEM is a complete algorithm in that it will generate a test if one exists. Heuristics are used to achieve an efficient implicit search of the space of all possible primary input patterns until either a test is found or the space is exhausted.
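
For orientation, here is a high-level skeleton of PODEM's implicit-enumeration search (a sketch only; the circuit object's methods fault_detected, next_objective, backtrace, imply, and unimply are assumed interfaces, not an existing library):

```python
def podem(fault, circuit):
    """Implicitly enumerate primary-input (PI) assignments: backtrace each
    objective to a PI decision, and backtrack by flipping the most recent
    untried decision when a dead end is reached."""
    stack = []                                   # decisions: (pi, value, flipped)
    while True:
        if circuit.fault_detected(fault):
            return {pi: val for pi, val, _ in stack}     # test found
        objective = circuit.next_objective(fault)        # activate fault or
        if objective is not None:                        # advance the D-frontier
            pi, val = circuit.backtrace(objective)       # objective -> PI decision
            stack.append((pi, val, False))
            circuit.imply(pi, val)
        else:                                    # dead end: backtrack
            while stack:
                pi, val, flipped = stack.pop()
                circuit.unimply(pi)
                if not flipped:                  # try the complementary value
                    stack.append((pi, 1 - val, True))
                    circuit.imply(pi, 1 - val)
                    break
            else:
                return None                      # space exhausted: untestable
```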

1,112 citations


"A stochastic pattern generation and..." refers methods in this paper

  • ...We implemented an LTVA pattern generator based on the well-known PODEM algorithm [20]....

    [...]

Journal ArticleDOI
TL;DR: In this paper, a model describing the maximum clock frequency distribution of a microprocessor is derived and compared with wafer sort data for a recent 0.25-µm microprocessor.
Abstract: A model describing the maximum clock frequency (FMAX) distribution of a microprocessor is derived and compared with wafer sort data for a recent 0.25-µm microprocessor. The model agrees closely with measured data in mean, variance, and shape. Results demonstrate that within-die fluctuations primarily impact the FMAX mean and die-to-die fluctuations determine the majority of the FMAX variance. Employing rigorously derived device and circuit models, the impact of die-to-die and within-die parameter fluctuations on future FMAX distributions is forecast for the 180, 130, 100, 70, and 50-nm technology generations. Model predictions reveal that systematic within-die fluctuations impose the largest performance degradation resulting from parameter fluctuations. Assuming a 3σ channel length deviation of 20%, projections for the 50-nm technology generation indicate that essentially a generation of performance gain can be lost due to systematic within-die fluctuations. Key insights from this work elucidate the recommendations that manufacturing process controls be targeted specifically toward sources of systematic within-die fluctuations, and the development of new circuit design methodologies be aimed at suppressing the effect of within-die parameter fluctuations.
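
The paper's central decomposition (within-die noise mainly shifts the FMAX mean, while die-to-die variation drives the FMAX variance) can be reproduced qualitatively with a toy Monte-Carlo model. A minimal sketch, with illustrative parameters that are not taken from the paper:

```python
import random

def fmax_samples(n_dies, n_paths=1000, t_nom=1.0, s_d2d=0.05, s_wid=0.05):
    """Each die draws one die-to-die delay shift; each critical path adds an
    independent within-die term. FMAX = 1 / slowest path. Taking the max over
    many paths concentrates the independent within-die noise (shifting the
    mean), so the shared die-to-die term dominates the FMAX spread."""
    samples = []
    for _ in range(n_dies):
        d2d = random.gauss(0.0, s_d2d)
        worst = max(t_nom + d2d + random.gauss(0.0, s_wid)
                    for _ in range(n_paths))
        samples.append(1.0 / worst)
    return samples
```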

751 citations


"A stochastic pattern generation and..." refers background in this paper

  • ...In [17], the authors claim that for the 50-nm technology node, the performance gain provided by an entire technology node can be lost due to systematic intra-die variations....

    [...]

Journal ArticleDOI
TL;DR: This paper presents a novel test-data volume-compression methodology called the embedded deterministic test (EDT), which reduces manufacturing test cost by providing one to two orders of magnitude reduction in scan test data volume and scan test time.
Abstract: This paper presents a novel test-data volume-compression methodology called the embedded deterministic test (EDT), which reduces manufacturing test cost by providing one to two orders of magnitude reduction in scan test data volume and scan test time. The presented scheme is widely applicable and easy to deploy because it is based on the standard scan/ATPG methodology and adopts a very simple flow. It is nonintrusive as it does not require any modifications to the core logic such as the insertion of test points or logic bounding unknown states. The EDT scheme consists of logic embedded on a chip and a new deterministic test-pattern generation technique. The main contributions of the paper are test-stimuli compression schemes that allow us to deliver test data to the on-chip continuous-flow decompressor. In particular, it can be done by repeating certain patterns at the rates, which are adjusted to the requirements of the test cubes. Experimental results show that for industrial circuits with test cubes with very low fill rates, ranging from 3% to 0.2%, these schemes result in compression ratios of 30 to 500 times. A comprehensive analysis of the encoding efficiency of the proposed compression schemes is also provided.
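
As the citation note below also spells out, the encoding step reduces to linear algebra over GF(2): each specified (care) bit of a test cube yields one equation relating it to the compressed input bits. A generic Gaussian-elimination sketch of that solve (not the EDT implementation itself):

```python
def solve_gf2(A, b):
    """Solve A x = b over GF(2). A: list of 0/1 rows, one per care bit;
    b: list of 0/1 care-bit values. Returns a solution x (free variables set
    to 0) or None if the cube is not encodable."""
    A = [row[:] for row in A]
    b = b[:]
    n_vars = len(A[0])
    pivot_row = {}                               # column -> row holding its pivot
    row = 0
    for col in range(n_vars):
        pivot = next((r for r in range(row, len(A)) if A[r][col]), None)
        if pivot is None:
            continue                             # free variable
        A[row], A[pivot] = A[pivot], A[row]
        b[row], b[pivot] = b[pivot], b[row]
        for r in range(len(A)):                  # eliminate col in other rows
            if r != row and A[r][col]:
                A[r] = [x ^ y for x, y in zip(A[r], A[row])]
                b[r] ^= b[row]
        pivot_row[col] = row
        row += 1
    if any(br and not any(ar) for ar, br in zip(A, b)):
        return None                              # inconsistent: not encodable
    x = [0] * n_vars
    for col, r in pivot_row.items():
        x[col] = b[r]
    return x
```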

529 citations


"A stochastic pattern generation and..." refers methods in this paper

  • ...The compressed stimuli for the test cube are then obtained by solving a system of linear equations corresponding to the inverse function of the decompressor [23]....

    [...]

Proceedings ArticleDOI
09 Nov 2003
TL;DR: In this paper, a new statistical timing analysis method that accounts for inter-and intra-die process variations and their spatial correlations is presented, where a statistical bound on the probability distribution function of the exact circuit delay is computed with linear run time.
Abstract: Process variations have become a critical issue in performance verification of high-performance designs. We present a new statistical timing analysis method that accounts for inter- and intra-die process variations and their spatial correlations. Since statistical timing analysis has an exponential run-time complexity, we propose a method whereby a statistical bound on the probability distribution function of the exact circuit delay is computed with linear run time. First, we develop a model for representing inter- and intra-die variations and their spatial correlations. Using this model, we then show how gate delays and arrival times can be represented as a sum of components, such that the correlation information between arrival times and gate delays is preserved. We then show how arrival times are propagated and merged in the circuit to obtain an arrival time distribution that is an upper bound on the distribution of the exact circuit delay. We prove the correctness of the bound and also show how the bound can be improved by propagating multiple arrival times. The proposed algorithms were implemented and tested on a set of benchmark circuits under several process variation scenarios. The results were compared with Monte Carlo simulation and show an accuracy of 3.32% on average over all test cases.
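
To see what representing delays as "a sum of components" buys, the sketch below samples delays in that form: one draw per shared variation source is reused across all gates, preserving spatial correlation, and a Monte-Carlo max over paths yields the reference circuit-delay distribution that the paper's linear-time analytic bound is validated against. Names and parameters are illustrative.

```python
import random

def sample_delay(mean, sens, indep, shared):
    """One sample of a delay in 'sum of components' form: mean plus
    sensitivities to shared (spatially correlated) sources plus a private
    independent term."""
    return (mean + sum(s * shared[k] for k, s in sens.items())
                 + indep * random.gauss(0.0, 1.0))

def mc_circuit_delay(paths, n_samples=10000):
    """paths: list of paths; each path is a list of (mean, sens, indep) gate
    delays, where sens maps a shared variation source to a sensitivity."""
    sources = {k for path in paths for (_, sens, _) in path for k in sens}
    samples = []
    for _ in range(n_samples):
        shared = {k: random.gauss(0.0, 1.0) for k in sources}  # one draw per
        samples.append(max(                                    # source, reused
            sum(sample_delay(*gate, shared) for gate in path)  # across gates
            for path in paths))
    return samples
```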

434 citations