Journal ArticleDOI

Input Test Data Compression Based on the Reuse of Parts of Dictionary Entries: Static and Dynamic Approaches

TL;DR: A new test data compression method for intellectual property (IP) cores testing, based on the reuse of parts of dictionary entries, is presented, supported with extensive simulation results and comparisons to already known test data compression methods suitable for IP cores testing.
Abstract: In this paper, we present a new test data compression method for intellectual property (IP) cores testing, based on the reuse of parts of dictionary entries. Two approaches are investigated: the static and the dynamic. In the static approach, the content of the dictionary is constant during the testing of a core, while in the dynamic approach the testing of a core consists of several test sessions and the content of the dictionary is different during each test session. The efficiency of the proposed method is supported with extensive simulation results and comparisons to already known test data compression methods suitable for IP cores testing.
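
A rough sketch of the dictionary-reuse idea described above, assuming a fixed slice width and a made-up codeword layout (the FULL/PART/RAW tags, the compatible and encode_slice helpers, and the reuse_len parameter are illustrative, not the paper's actual encoding): a slice is emitted as a full dictionary reference when possible, as a reference whose leading part is reused plus an explicit tail, or as a raw literal.

```python
# Illustrative sketch only: encode fixed-width test slices against a small
# dictionary, reusing the leading part of an entry when a full match fails.
# The codeword layout and parameters are hypothetical, not the paper's.

def compatible(slice_bits, entry_bits):
    """Don't-care ('X') positions in the slice match any entry bit."""
    return all(s in ('X', e) for s, e in zip(slice_bits, entry_bits))

def encode_slice(slice_bits, dictionary, reuse_len):
    # 1) Full match: emit only a dictionary index.
    for idx, entry in enumerate(dictionary):
        if compatible(slice_bits, entry):
            return ('FULL', idx)
    # 2) Partial reuse: the first reuse_len bits come from an entry,
    #    the remaining bits are transmitted explicitly.
    for idx, entry in enumerate(dictionary):
        if compatible(slice_bits[:reuse_len], entry[:reuse_len]):
            tail = ''.join(b if b != 'X' else '0' for b in slice_bits[reuse_len:])
            return ('PART', idx, tail)
    # 3) No reuse possible: transmit the whole slice as a literal.
    return ('RAW', ''.join(b if b != 'X' else '0' for b in slice_bits))

if __name__ == '__main__':
    dictionary = ['01100110', '11110000']            # static dictionary content
    for s in ['0110011X', '1111X110', '00001111']:
        print(s, '->', encode_slice(s, dictionary, reuse_len=4))
```

In the static approach the dictionary above would stay fixed for the whole core; in the dynamic approach a different dictionary would be loaded for each test session.
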
Citations
Journal ArticleDOI
TL;DR: Experimental results confirm that Star-EDT can act as a valuable form of deterministic BIST and that it elevates compression ratios to values typically unachievable through conventional reseeding-based solutions.
Abstract: This paper presents Star-EDT—a novel deterministic test compression scheme. The proposed solution seamlessly integrates with EDT-based compression and takes advantage of two key observations: 1) there exist clusters of test vectors that can detect many random-resistant faults with a cluster comprising a parent pattern and its derivatives obtained through simple transformations and 2) a significant majority of specified positions of ATPG-produced test cubes are typically clustered within a single or, at most, a few scan chains. The Star-EDT approach elevates compression ratios to values typically unachievable through conventional reseeding-based solutions. Experimental results obtained for large industrial designs, including those with a new class of test points aware of ATPG-induced conflicts, illustrate feasibility of the proposed deterministic test scheme and are reported herein. In particular, they confirm that the Star-EDT can act as a valuable form of deterministic BIST.

9 citations


Additional excerpts

  • ...All presented data were collected for test cubes containing at least 4 specified bits....

    [...]
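
The abstract above does not say which "simple transformations" Star-EDT derives child patterns with, so the sketch below uses a circular shift of the parent pattern purely as a stand-in; expand_cluster and its parameters are hypothetical.

```python
# Hypothetical illustration of the "parent pattern + derivatives" idea:
# one stored parent yields a cluster of applied test vectors, each obtained
# by a simple transformation (a circular shift is used here as a stand-in).

def circular_shift(pattern, k):
    """Rotate the bits of a scan pattern by k positions."""
    k %= len(pattern)
    return pattern[-k:] + pattern[:-k]

def expand_cluster(parent, num_derivatives):
    """Return the parent pattern followed by its shifted derivatives."""
    return [parent] + [circular_shift(parent, k + 1) for k in range(num_derivatives)]

if __name__ == '__main__':
    for vec in expand_cluster('10010110', num_derivatives=3):
        print(vec)
```
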

Journal ArticleDOI
TL;DR: A new test data compression method based on reusing a stored set with tri-state coding (TSC) is presented, which improves the compression ratio and test time on both International Symposium on Circuits and Systems'89 and large International Test Conference'99 benchmark circuits.
Abstract: As technology processes scale up and design complexities grow, system-on-chip integration continues to rise rapidly. According to these trends, increasing test data volume is one of the biggest challenges in the testing industry. In this paper, we present a new test data compression method based on reusing a stored set with tri-state coding (TSC). For improving the compression efficiency, a twisted ring counter is used to reconfigure the twist function. It is useful to reuse previously used data for making the next data by using the feedback function of the ring counter. Moreover, the TSC is used to increase the range of information transmission without additional input ports. Experimental results show that this compression method improves the compression ratio and test time on both International Symposium on Circuits and Systems'89 and large International Test Conference'99 benchmark circuits in most cases compared to the results of the previous work, without a heavy burden on the hardware.

7 citations
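
A twisted ring counter is typically realized as a Johnson counter, whose feedback re-inserts the inverted last stage. The sketch below only illustrates how such a register lets each new test slice be derived from the previously loaded one instead of being reloaded; it does not reproduce the tri-state coding itself, and the register width and function names are assumptions.

```python
# Minimal Johnson-counter sketch: the next slice reuses the previous register
# contents, shifted by one position with the inverted last bit fed back.
# The tri-state coding (TSC) of the cited method is not modeled here.

def twisted_ring_step(state):
    """One clock tick: shift the register, feeding back the inverted last bit."""
    return [1 - state[-1]] + state[:-1]

def generate_slices(seed, count):
    """Successive register states, each derived from (reusing) the previous one."""
    slices, state = [], list(seed)
    for _ in range(count):
        slices.append(''.join(map(str, state)))
        state = twisted_ring_step(state)
    return slices

if __name__ == '__main__':
    for s in generate_slices([0, 0, 0, 0], count=8):
        print(s)
```
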

Journal ArticleDOI
TL;DR: From the analysis of simulation results, it is proved that the proposed LCR code enhances the compression ratio and reduces the test time on the International Symposium on Circuits and Systems'89 benchmark circuits.

6 citations

Proceedings ArticleDOI
05 Feb 2015
TL;DR: This work has combined both temperature reduction and compression into a single problem and solved it, and presents an intermediate approach that performs a trade-off between temperature and compression ratio.
Abstract: In this paper, we have proposed a new thermal-aware test data compression technique using dictionary based coding. Huge test data volume and chip temperature are two major challenges for test engineers. The temperature of a chip can be reduced to a large extent by minimizing the transition count in scan chains using efficient don't-care filling. On the other hand, a high compression ratio can be achieved by filling the don't-cares intelligently to get more similar sub-vectors from test vectors. Although both problems rely on don't-care bit filling, most of the existing works have considered them as separate problems. In our work, we have combined temperature reduction and compression into a single problem and solved it. We present an intermediate approach that performs a trade-off between temperature and compression ratio. Experimental results on ISCAS'89 and ITC'99 benchmarks show the flexibility of the proposed method to achieve a balance between temperature and compression ratio.

6 citations


Cites background or methods from "Input Test Data Compression Based o..."

  • ...Although some other works [13], [14] achieve a better compression ratio with more computational complexity and extra hardware overhead, all are based on the clique partitioning....

    [...]

  • ...Method like dividing the test slices further into smaller sub-slices [13] can also be incorporated with our method to improve compression ratio without hampering the balance between CR and temperature of the chip....

    [...]
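
For context, a generic adjacent-fill step is sketched below: every don't-care copies the previously specified bit, which minimizes scan-in transitions (and hence switching activity and heat). How the cited work trades this filling off against dictionary-based compression is not reproduced; the function names are assumptions.

```python
# Generic adjacent-fill sketch: 'X' bits repeat the preceding specified bit,
# so shifting the filled vector through a scan chain causes few transitions.
# The thermal/compression trade-off of the cited method is not modeled here.

def adjacent_fill(test_cube, default='0'):
    filled, last = [], default
    for bit in test_cube:
        if bit == 'X':
            filled.append(last)          # repeat previous bit -> no transition
        else:
            filled.append(bit)
            last = bit
    return ''.join(filled)

def transitions(vector):
    """Number of bit flips seen while shifting the vector into a scan chain."""
    return sum(a != b for a, b in zip(vector, vector[1:]))

if __name__ == '__main__':
    cube = '1XX0XXX1XX'
    filled = adjacent_fill(cube)
    print(cube, '->', filled, '| transitions =', transitions(filled))
```
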

Journal ArticleDOI
TL;DR: The proposed coding scheme aims to obtain a large compression ratio and, from the analysis of its performance on the ISCAS'89 benchmark circuits, outperforms conventional test data compression methods.
Abstract: Advances in technology and growing integration scales allow millions of transistors to be fabricated on a chip. This demands efficient test circuitry to evaluate the faults present, since a large volume of test data must be applied during manufacturing and fabrication. The challenge therefore lies in methodologies that achieve test data compression. Even though various existing techniques reduce the testing time, further reduction of test data volume is still in demand. The proposed scheme aims to obtain a large compression ratio: using Augmented Recurrence Hopping based Run-Length Coding (ARHRLC), the test data volume can be reduced and the data compressed. The ARHRLC coding technique compares the group-code-based test vector with its duplicate, so the data sequence can be decreased in terms of volume and area overhead. Analysis of performance on the ISCAS'89 benchmark circuits shows that the proposed coding scheme outperforms conventional test data compression methods. The ARHRLC procedure skips repeated sequences in the test vector group, whether they match a single test data segment or several test data fragments. The test results demonstrate the compression ratio achieved and a reduction in computation time.

5 citations
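
The exact ARHRLC rules are not spelled out above, so the sketch below shows only plain run-length coding of a test data stream as the baseline that the recurrence-hopping augmentation builds on; the function names are assumptions.

```python
# Baseline run-length coding sketch: a bit stream becomes (bit, run length)
# pairs.  The augmented recurrence hopping described in the abstract, which
# additionally skips repeated sequences across test vector groups, is not
# reproduced here.

def run_length_encode(bits):
    runs, i = [], 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        runs.append((bits[i], j - i))
        i = j
    return runs

def run_length_decode(runs):
    return ''.join(bit * length for bit, length in runs)

if __name__ == '__main__':
    stream = '0000001111000000001'
    runs = run_length_encode(stream)
    print(runs)
    assert run_length_decode(runs) == stream
```
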

References
Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.

5,221 citations



"Input Test Data Compression Based o..." refers methods in this paper

  • ...In order to minimize the number of bits required to encode the prefixes, a Huffman code [46] is used....

    [...]
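
The excerpt above notes that a Huffman code is used to encode the prefixes. The sketch below is the textbook minimum-redundancy construction from the cited paper, applied to a hypothetical set of prefix frequencies rather than the compression method's actual alphabet.

```python
# Textbook Huffman construction: repeatedly merge the two least frequent
# groups; every merge prepends one bit to the codewords of the merged group.
import heapq
from itertools import count

def huffman_code(frequencies):
    """Return {symbol: codeword} minimizing the average codeword length."""
    tie = count()                        # tie-breaker so heap entries always compare
    heap = [(freq, next(tie), {sym: ''}) for sym, freq in frequencies.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f0, _, c0 = heapq.heappop(heap)  # two least frequent groups
        f1, _, c1 = heapq.heappop(heap)
        merged = {s: '0' + w for s, w in c0.items()}
        merged.update({s: '1' + w for s, w in c1.items()})
        heapq.heappush(heap, (f0 + f1, next(tie), merged))
    return heap[0][2]

if __name__ == '__main__':
    freqs = {'00': 45, '01': 13, '10': 12, '11': 30}   # hypothetical prefix counts
    print(huffman_code(freqs))
```
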

Journal ArticleDOI
TL;DR: This paper presents a novel test-data volume-compression methodology called the embedded deterministic test (EDT), which reduces manufacturing test cost by providing one to two orders of magnitude reduction in scan test data volume and scan test time.
Abstract: This paper presents a novel test-data volume-compression methodology called the embedded deterministic test (EDT), which reduces manufacturing test cost by providing one to two orders of magnitude reduction in scan test data volume and scan test time. The presented scheme is widely applicable and easy to deploy because it is based on the standard scan/ATPG methodology and adopts a very simple flow. It is nonintrusive as it does not require any modifications to the core logic such as the insertion of test points or logic bounding unknown states. The EDT scheme consists of logic embedded on a chip and a new deterministic test-pattern generation technique. The main contributions of the paper are test-stimuli compression schemes that allow us to deliver test data to the on-chip continuous-flow decompressor. In particular, it can be done by repeating certain patterns at the rates, which are adjusted to the requirements of the test cubes. Experimental results show that for industrial circuits with test cubes with very low fill rates, ranging from 3% to 0.2%, these schemes result in compression ratios of 30 to 500 times. A comprehensive analysis of the encoding efficiency of the proposed compression schemes is also provided.

529 citations

Journal ArticleDOI
TL;DR: A new test-data compression method and decompression architecture based on variable-to-variable-length Golomb codes that is especially suitable for encoding precomputed test sets for embedded cores in a system-on-a-chip (SoC).
Abstract: We present a new test-data compression method and decompression architecture based on variable-to-variable-length Golomb codes. The proposed method is especially suitable for encoding precomputed test sets for embedded cores in a system-on-a-chip (SoC). The major advantages of Golomb coding of test data include very high compression, analytically predictable compression results, and a low-cost and scalable on-chip decoder. In addition, the novel interleaving decompression architecture allows multiple cores in an SoC to be tested concurrently using a single automatic test equipment input-output channel. We demonstrate the effectiveness of the proposed approach by applying it to the International Symposium on Circuits and Systems' benchmark circuits and to two industrial production circuits. We also use analytical and experimental means to highlight the superiority of Golomb codes over run-length codes.

379 citations


"Input Test Data Compression Based o..." refers methods in this paper

  • ...Therefore we have four categories: fixed-to-fixed [2], [3], fixed-to-variable [4]–[13], variable-to-fixed [14]–[17], and variable-to-variable [18]–[32]....

    [...]

  • ...The implementation cost of the decoder of the methods [12], [16], [18]–[21], [23]–[27], [31], and [32] ranges from 296 to 769 eq....

    [...]

  • ...The methods of [12], [16], [18]–[21], [23]–[27], [31], and [32] have been proposed for IP cores with single scan chain...

    [...]
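
A minimal sketch of Golomb coding as typically applied to test data: runs of 0s terminated by a 1 are split into a unary quotient and a fixed-width remainder. The group size is restricted to a power of two here (the Golomb-Rice case) and tail handling is simplified, so this illustrates the general scheme rather than the cited implementation.

```python
# Golomb (Rice) coding sketch: each run of 0s ending in a 1 is encoded as a
# unary quotient (q ones and a terminating zero) followed by a fixed-width
# binary remainder.  Group-size selection and end-of-stream handling are
# simplified relative to the cited method.

def golomb_encode(bits, m):
    assert m > 1 and m & (m - 1) == 0, 'sketch assumes m is a power of two'
    rbits = m.bit_length() - 1           # bits needed for the remainder
    out, run = [], 0
    for b in bits:
        if b == '0':
            run += 1
        else:                            # a '1' terminates the current run of 0s
            q, r = divmod(run, m)
            out.append('1' * q + '0' + format(r, f'0{rbits}b'))
            run = 0
    return ''.join(out)

if __name__ == '__main__':
    stream = '0000000100000000000010001'
    encoded = golomb_encode(stream, m=4)
    print(f'{len(stream)} bits -> {len(encoded)} bits: {encoded}')
```
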

Proceedings ArticleDOI
30 Oct 2001
TL;DR: Techniques are presented in this paper that allow for substantial compression of Automatic Test Pattern Generation (ATPG) produced test vectors, allowing for a more than 10-fold reduction in tester scan buffer data volume on ATPG compacted tests.
Abstract: Rapid increases in the wire-able gate counts of ASICs stress existing manufacturing test equipment in terms of test data volume and test capacity. Techniques are presented in this paper that allow for substantial compression of Automatic Test Pattern Generation (ATPG) produced test vectors. We show compression efficiencies allowing a more than 10-fold reduction in tester scan buffer data volume on ATPG compacted tests. In addition, we obtain almost a 2× scan test time reduction. By implementing these techniques for production testing of huge-gate-count ASICs, IBM will continue using existing automated test equipment (ATE), avoiding costly upgrades and replacements.

368 citations


"Input Test Data Compression Based o..." refers methods in this paper

  • ...The test data compression techniques can be classified as code-based [1]–[32], linear-decompression-based [33]–[40] and broadcast-scan-based [41]–[45] schemes....

    [...]