
VLSI Test Symposium 

About: VLSI Test Symposium is an academic conference. The conference publishes mainly in the areas of Automatic test pattern generation and Fault coverage. Over its lifetime the conference has published 1735 papers, which have received 34483 citations.


Papers
Proceedings ArticleDOI
Yervant Zorian
06 Apr 1993
TL;DR: A BIST scheduling process that takes constraints into consideration is presented, along with a new BIST control methodology that implements the BIST schedule with a highly modular architecture.
Abstract: BIST is a viable approach to test today's digital systems. Constraints such as power, noise, area overhead, and others limit the possibilities of parallel BIST execution in complex VLSI devices. This paper presents a BIST scheduling process that takes such constraints into consideration and introduces a new BIST control methodology that implements the BIST schedule with a highly modular architecture. Due to the uniformity of the interface, the BIST control elements are independent of the BIST scheme used in the embedded blocks of a device. This BIST control architecture can provide block-level diagnostic information.

551 citations
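
The scheduling problem described above is essentially constrained bin packing. As a rough illustration only, not the paper's method, the Python sketch below packs per-block self-tests into parallel BIST sessions with a first-fit-decreasing heuristic so that each session stays within a power budget; the block names and power figures are hypothetical.

```python
# Illustrative first-fit-decreasing scheduler for power-constrained BIST.
# Not the paper's algorithm; block names and power figures are made up.

def schedule_bist(blocks, power_budget):
    """blocks: list of (name, test_power) pairs. Returns sessions whose
    members run their BIST in parallel without exceeding power_budget."""
    sessions, loads = [], []
    for name, power in sorted(blocks, key=lambda b: -b[1]):
        for i in range(len(sessions)):
            if loads[i] + power <= power_budget:  # fits in an open session
                sessions[i].append(name)
                loads[i] += power
                break
        else:  # no session has room: run a new session after the others
            sessions.append([name])
            loads.append(power)
    return sessions

if __name__ == "__main__":
    blocks = [("ram0", 40), ("ram1", 40), ("alu", 25), ("rom", 10), ("io", 15)]
    for n, session in enumerate(schedule_bist(blocks, power_budget=60)):
        print(f"session {n}: run in parallel -> {session}")
```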

Proceedings ArticleDOI
26 Apr 1999
TL;DR: This work uses time redundancy techniques to derive low-cost soft-error-tolerant implementations for logic networks, in response to the increased operating frequencies, geometry shrinking, and power supply reduction that accompany very deep submicron scaling.
Abstract: The increased operating frequencies, geometry shrinking, and power supply reduction that accompany very deep submicron scaling affect the reliable operation of very deep submicron ICs. The effects of various noise sources are becoming a great concern. In particular, it is predicted that single event upsets induced by alpha particles and cosmic radiation will become a cause of unacceptable error rates in future very deep submicron and nanometer technologies. This problem, which in the past mainly concerned parts used in space, will affect future ICs at sea level. This challenging problem has to be solved; otherwise technological progress will soon be blocked. Thus, fault-tolerant design is becoming necessary even for commodity applications. But the economic constraints of commodity applications exclude the use of traditional, high-cost fault-tolerance techniques. This work uses time redundancy techniques to derive low-cost soft-error-tolerant implementations for logic networks.

528 citations
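
To make the time-redundancy idea concrete, here is a toy software model, not the paper's circuit-level technique: the same logic function is evaluated at three separate instants and the results are majority-voted, so a transient upset corrupting any single evaluation is masked. The upset probability and the parity example are illustrative assumptions.

```python
# Toy model of time redundancy against single event upsets (SEUs).
# Assumption: at most one of the three time-shifted evaluations is hit.

import random

def noisy_eval(f, x, upset_prob=0.1):
    """Evaluate f(x), occasionally flipping the 1-bit result to model a
    transient upset (the probability is an illustrative assumption)."""
    result = f(x)
    if random.random() < upset_prob:
        result ^= 1  # single event upset flips the output bit
    return result

def time_redundant_eval(f, x):
    """Evaluate three times (temporally separated in hardware) and vote."""
    votes = [noisy_eval(f, x) for _ in range(3)]
    return int(sum(votes) >= 2)  # majority vote masks one upset

if __name__ == "__main__":
    parity = lambda x: bin(x).count("1") & 1  # example logic network
    x = 0b10110010
    print("golden:", parity(x), "voted:", time_redundant_eval(parity, x))
```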

Proceedings ArticleDOI
06 May 2007
TL;DR: Simulation results using 90nm and 65nm technologies demonstrate that a new sensor design integrated inside a flip-flop enables efficient circuit failure prediction at a low cost and can significantly improve system performance by enabling close to best-case design instead of traditional worst-case design.
Abstract: Circuit failure prediction predicts the occurrence of a circuit failure before errors actually appear in system data and states. This is in contrast to classical error detection where a failure is detected after errors appear in system data and states. Circuit failure prediction is performed during system operation by analyzing the data collected by sensors inserted at various locations inside a chip. We demonstrate this concept of circuit failure prediction for a dominant PMOS aging mechanism induced by negative bias temperature instability (NBTI). NBTI-induced PMOS aging slows down PMOS transistors over time. As a result, the speed of a chip can significantly degrade over time and can result in delay faults. The traditional practice is to incorporate worst-case speed margins to prevent delay faults during system operation due to NBTI aging. A new sensor design integrated inside a flip-flop enables efficient circuit failure prediction at a low cost. Simulation results using 90nm and 65nm technologies demonstrate that this technique can significantly improve system performance by enabling close to best-case design instead of traditional worst-case design.

473 citations
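
A back-of-the-envelope model of why prediction beats detection (an assumption for illustration, not the paper's sensor design): NBTI delay degradation is commonly fitted with a power law, delay(t) = d0 * (1 + a * t^n). A sensor that fires when slack drops below a guardband then trips years before the path actually violates the clock period. All constants below are illustrative.

```python
# Toy power-law model of NBTI-induced delay degradation; the constants
# d0, a, n, the clock period, and the guardband are all assumptions.

def path_delay(t_years, d0=1.0, a=0.05, n=0.25):
    """Aged critical-path delay (normalized units)."""
    return d0 * (1.0 + a * t_years ** n)

CLOCK_PERIOD = 1.08  # timing budget of the path
GUARDBAND = 0.02     # sensor fires this much slack before real failure

for t in range(11):
    d = path_delay(t)
    predicted = d > CLOCK_PERIOD - GUARDBAND  # failure *prediction*
    failed = d > CLOCK_PERIOD                 # actual delay fault
    print(f"year {t:2d}: delay={d:.4f} predicted={predicted} failed={failed}")
```

With these toy numbers the sensor predicts failure around year 3, while the actual delay fault would only appear around year 7; that gap is the window in which the system can adapt or be serviced.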

Proceedings ArticleDOI
30 Apr 2000
TL;DR: It is shown here that by carefully selecting the order in which pairs of test cubes are merged during static compaction, both average power and peak power for the final test set can be greatly reduced.
Abstract: Excessive switching activity during scan testing can cause average power dissipation and peak power during test to be much higher than during normal operation. This can cause problems both with heat dissipation and with current spikes. Compacting scan vectors greatly increases the power dissipation for the vectors (generally the power becomes several times greater). The compacted scan vectors often can exceed the power constraints and hence cannot be used. It is shown here that by carefully selecting the order in which pairs of test cubes are merged during static compaction, both average power and peak power for the final test set can be greatly reduced. A static compaction procedure is presented that can be used to find a minimal set of scan vectors that satisfies constraints on both average power and peak power. The proposed approach is simple yet effective and can be easily implemented in the conventional test vector generation flow used in industry today.

372 citations
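
The core idea, choosing the order in which compatible test cubes are merged so that the result switches less, can be sketched briefly. The sketch below is a simplification under stated assumptions: cubes are strings over 0/1/X, don't-cares are filled with 0, and adjacent-bit toggles in a scan vector stand in for power; the paper's actual cost model and compaction procedure are more refined.

```python
# Greedy, power-aware static compaction of test cubes (simplified sketch).
# Cubes use '0', '1', 'X'; two cubes merge if no bit position conflicts.

def compatible(a, b):
    return all(x == 'X' or y == 'X' or x == y for x, y in zip(a, b))

def merge(a, b):
    return ''.join(y if x == 'X' else x for x, y in zip(a, b))

def transitions(cube):
    """Crude switching-activity proxy: toggles between adjacent scan bits,
    with don't-cares filled as 0 (a common low-power fill assumption)."""
    bits = cube.replace('X', '0')
    return sum(bits[i] != bits[i + 1] for i in range(len(bits) - 1))

def compact(cubes):
    cubes = list(cubes)
    while True:
        best = None
        for i in range(len(cubes)):
            for j in range(i + 1, len(cubes)):
                if compatible(cubes[i], cubes[j]):
                    m = merge(cubes[i], cubes[j])
                    # Prefer the merge whose result toggles least: this
                    # ordering choice is what reduces test power.
                    if best is None or transitions(m) < transitions(best[2]):
                        best = (i, j, m)
        if best is None:
            return cubes
        i, j, m = best
        cubes = [c for k, c in enumerate(cubes) if k not in (i, j)] + [m]

print(compact(["1XX0", "10X0", "XX11", "0X1X"]))  # -> ['10X0', '0X11']
```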

Proceedings ArticleDOI
26 Apr 1999
TL;DR: A compression/decompression scheme based on statistical coding is presented for reducing the amount of test data that must be stored on a tester and transferred to each core in a core-based design.
Abstract: A compression/decompression scheme based on statistical coding is presented for reducing the amount of test data that must be stored on a tester and transferred to each core in a core-based design. The test vectors provided by the core vendor are stored in compressed form in the tester memory and transferred to the chip where they are decompressed and applied to the core. Given the set of test vectors for a core, a statistical code is carefully selected so that it satisfies certain properties. These properties guarantee that it can be decoded by a simple pipelined decoder (placed at the serial input of the core's scan chain) which requires very small area. Results indicate that the proposed scheme can use a simple decoder to provide test data compression near that of an optimal Huffman code. The compression results in a two-fold advantage since both test storage and test time are reduced.

290 citations
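
For intuition about the statistical coding, a minimal sketch follows: the scan stream is cut into fixed-size blocks and Huffman-coded, so frequent blocks (scan data is highly repetitive once don't-cares are filled) receive short codewords. The block size and example stream are assumptions, and this builds a plain Huffman code; the paper instead selects a restricted code whose properties permit a small pipelined on-chip decoder, which is not modeled here.

```python
# Block-based Huffman compression of a scan-test bit stream (sketch).
# The 4-bit block size and the stream contents are illustrative.

import heapq
from collections import Counter

def huffman_code(symbols):
    """Build {symbol: codeword} with the classic heap construction."""
    counts = Counter(symbols)
    if len(counts) == 1:                       # degenerate one-symbol case
        return {next(iter(counts)): "0"}
    heap = [[freq, [[sym, ""]]] for sym, freq in counts.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[1]:
            pair[1] = "0" + pair[1]            # extend codes on the 0-branch
        for pair in hi[1]:
            pair[1] = "1" + pair[1]            # extend codes on the 1-branch
        heapq.heappush(heap, [lo[0] + hi[0], lo[1] + hi[1]])
    return {sym: code for sym, code in heap[0][1]}

if __name__ == "__main__":
    # Repetitive "test data": long runs compress well under block coding.
    stream = "0000" * 12 + "0001" * 3 + "1111" * 2 + "0100"
    blocks = [stream[i:i + 4] for i in range(0, len(stream), 4)]
    code = huffman_code(blocks)
    encoded = "".join(code[b] for b in blocks)
    print(code)
    print(f"raw bits: {len(stream)}, encoded bits: {len(encoded)}")
```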

Performance Metrics
No. of papers from the conference in previous years:

Year    Papers
2021    31
2020    40
2019    47
2018    33
2017    33
2016    55