Abstract:
This paper presents a compression/decompression scheme based on selective Huffman coding for reducing the amount of test data that must be stored on a tester and transferred to each core in a system-on-a-chip (SOC) during manufacturing test. The test data bandwidth between the tester and the SOC is a bottleneck that can result in long test times when testing complex SOCs that contain many cores. In the proposed scheme, the test vectors for the SOC are stored in compressed form in the tester memory and transferred to the chip where they are decompressed and applied to the cores. A small amount of on-chip circuitry is used to decompress the test vectors. Given the set of test vectors for a core, a modified Huffman code is carefully selected so that it satisfies certain properties. These properties guarantee that the codewords can be decoded by a simple pipelined decoder (placed at the serial input of the core's scan chain) that requires very small area. Results indicate that the proposed scheme can provide test data compression nearly equal to that of an optimum Huffman code with much less area overhead for the decoder.
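The core idea can be sketched in a few lines: instead of Huffman-coding every fixed-length block of the test set, only the few most frequent blocks receive codewords and all other blocks are passed through uncoded. The one-bit coded/uncoded prefix used below is one common way to keep such a selective code decodable; it is an illustrative assumption, not necessarily the paper's exact codeword construction.

```python
import heapq
from collections import Counter

def huffman_code(freqs):
    """Standard Huffman code for a dict of symbol -> frequency."""
    heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    uid = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, [w1 + w2, uid, merged])
        uid += 1
    return heap[0][2]

def selective_huffman_encode(blocks, n_coded):
    """Huffman-code only the n_coded most frequent blocks; emit every
    other block uncoded.  A one-bit prefix (1 = coded, 0 = raw block)
    keeps the bitstream decodable by a small decoder."""
    freq = Counter(blocks)
    codes = huffman_code(dict(freq.most_common(n_coded)))
    bits = "".join("1" + codes[b] if b in codes else "0" + b for b in blocks)
    return bits, codes

# Toy test set of 4-bit blocks with a skewed block distribution:
blocks = ["0000"] * 8 + ["1111"] * 4 + ["0101"] * 2 + ["0011"]
bits, codes = selective_huffman_encode(blocks, n_coded=2)
# 39 encoded bits versus 60 raw bits for this toy test set
```

Because only a handful of blocks are coded, the on-chip decoder needs far fewer states than a full Huffman decoder, which is the source of the area savings claimed in the abstract.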
TL;DR: This book is a comprehensive guide to new DFT methods that will show the readers how to design a testable and quality product, drive down test cost, improve product quality and yield, and speed up time-to-market and time-to-volume.
TL;DR: This article summarizes and categorizes hardware-based test vector compression techniques for scan architectures, which fall broadly into three categories: code-based schemes use data compression codes to encode test cubes; linear-decompression-based schemes decompress the data using only linear operations; and broadcast-scan-based schemes rely on broadcasting the same values to multiple scan chains.
TL;DR: A new test-data compression technique that uses exactly nine codewords, provides significant reduction in test-data volume and test-application time, and is flexible in utilizing both fixed- and variable-length blocks.
TL;DR: In this paper, the authors show that the already proposed encoding scheme is not optimal and present a new one, proving that it is optimal. Moreover, they compare the two encodings theoretically and derive a set of conditions which show that, in practical cases, the proposed encoding always offers better compression in terms of hardware overhead.
TL;DR: A set of 31 digital sequential circuits described at the gate level that extend the size and complexity of the ISCAS'85 set of combinational circuits and can serve as benchmarks for researchers interested in sequential test generation, scan-based test generation, and mixed sequential/scan-based test generation using partial scan techniques.
TL;DR: This book provides a careful selection of essential topics on all three types of circuits, namely, digital, memory, and mixed-signal, each requiring different test and design for testability methods.
TL;DR: In this paper, two new algorithms, redundant vector elimination (RVE) and essential fault reduction (EFR), were proposed for generating compact test sets for combinational circuits under the single stuck-at fault model.
Q1. What contributions have the authors mentioned in the paper "An efficient test vector compression scheme using selective huffman coding" ?
This paper presents a compression/decompression scheme based on selective Huffman coding for reducing the amount of test data that must be stored on a tester and transferred to each core in a system-on-a-chip (SOC) during manufacturing test. Results indicate that the proposed scheme can provide test data compression nearly equal to that of an optimum Huffman code with much less area overhead for the decoder.
Q2. Why is the serializer loaded in parallel?
The serializer is loaded in parallel by the decoder (allowing the decoder to generate multiple bits of data in a slower tester clock cycle) and serially shifted out into the scan chain at a faster clock rate.
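The timing constraint behind this answer can be made concrete with a back-of-the-envelope model (an illustrative assumption, not a calculation from the paper): while the serializer shifts out one decoded block, the next codeword is arriving from the tester, so the scan clock must be fast enough to drain the block before the shortest codeword completes.

```python
def min_scan_to_tester_ratio(codeword_lengths, block_size):
    """While one codeword's block_size decoded bits are shifted out of
    the serializer, the next codeword (at least min(codeword_lengths)
    tester cycles long) is arriving.  To avoid stalling, the scan clock
    must run at least block_size / shortest-codeword times faster than
    the tester clock."""
    return block_size / min(codeword_lengths)

# e.g. 4-bit blocks whose shortest codeword is a single bit:
ratio = min_scan_to_tester_ratio([1, 2, 3, 3], block_size=4)
```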
Q3. What is the way to use the correlations in a test set?
To fully exploit the correlations in a test set, the number of bits in each scan vector should be a multiple of the fixed-length block size used for the statistical code.
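A minimal sketch of this partitioning step, assuming a front-padded vector (whether padding goes at the scan-in or scan-out end is a design choice not specified here):

```python
def partition_scan_vector(vector, block_size, pad_bit="0"):
    """Split a scan vector into fixed-length blocks for statistical
    coding, padding at the front so the vector length becomes a
    multiple of block_size."""
    rem = len(vector) % block_size
    if rem:
        vector = pad_bit * (block_size - rem) + vector
    return [vector[i:i + block_size] for i in range(0, len(vector), block_size)]

blocks = partition_scan_vector("10110", block_size=4)
# a 5-bit vector becomes two aligned 4-bit blocks
```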
Q4. What is the effect of transformations on the test vector set?
Such transformations increase the amount of compression that can be achieved on the transformed test set using statistical coding.
Q5. What is the limit for the encoding of test vectors?
In this limiting case, each test vector is encoded as a single pattern, which results in a large hardware overhead to regenerate the test vectors from the codewords.
Q6. What other techniques were used for filling the X’s?
For all the other techniques (Golomb, FDR, and VIHC), the zero-fill algorithm was used for filling the X's, as that maximizes the amount of compression using those methods.
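Zero-fill is simple to sketch: every don't-care bit is mapped to 0, which skews the block distribution toward all-zero blocks, and a skewed distribution is exactly what a statistical code exploits. A minimal illustration, assuming test cubes represented as strings over {0, 1, X}:

```python
from collections import Counter

def zero_fill(cube):
    """Map every don't-care bit ('X') in a test cube to 0."""
    return cube.replace("X", "0")

def block_histogram(cube, block_size):
    """Fixed-length block frequencies after zero-fill."""
    filled = zero_fill(cube)
    blocks = [filled[i:i + block_size]
              for i in range(0, len(filled), block_size)]
    return Counter(blocks)

hist = block_histogram("0XXX1XXX0XXX", block_size=4)
# the all-zero block now dominates the histogram
```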
Q7. How can a decoder be able to have the codewords shifted?
Note that if the scan clock rate cannot be made faster than the tester clock rate, an alternative solution (as previously described) is to make the scan clock rate twice as fast as the "effective clock rate" seen by the decoder. This is done by simply having the tester channel feed two scan chains, so that the rate at which the decoder receives data from the tester is half the rate at which data can be shifted into the scan chain.
Q8. How many times faster is the scan chain corresponding to each decoder?
The scan chain corresponding to each decoder is still clocked at the normal tester clock rate and, thus, its clock rate is n times faster than the decoder's.
Q9. What is the scheme for coding scan vectors?
The compression/decompression scheme proposed here involves statistically coding the scan vectors and then placing an on-chip decoder at the serial input of the scan chain to decompress the vectors.
Q10. Why is it possible to clock the scan chain with a faster clock than the tester clock?
If it is not possible to clock the scan chain with a faster clock than the tester clock, then another approach is to have the tester channel rotate between n scan chains (each scan chain has its own decoder).
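The rotation described above amounts to round-robin distribution of the tester channel's bits, which can be sketched as follows (an illustrative model, assuming the bitstream is represented as a string):

```python
def rotate_tester_channel(bitstream, n):
    """Distribute one tester channel's bits round-robin among n
    decoders: each decoder then receives data at 1/n of the tester
    rate, while its scan chain is still shifted at the full tester
    clock rate."""
    chains = [[] for _ in range(n)]
    for i, bit in enumerate(bitstream):
        chains[i % n].append(bit)
    return ["".join(c) for c in chains]

streams = rotate_tester_channel("101100", n=2)
# every other bit goes to each of the two decoders
```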