
Showing papers on "Design for testing published in 2003"


Book
15 May 2003
TL;DR: This book is the most comprehensive introduction available to the range of techniques and tools used in digital testing, including fault simulation, CMOS testing, design for testability, and built-in self test.
Abstract: From the Publisher: As the complexity of modern digital systems increases, so does the need for ever more rigorous testing at all levels, from individual chips up to complete system architectures. This book is the most comprehensive introduction available to the range of techniques and tools used in digital testing. It covers every key topic, including fault simulation, CMOS testing, design for testability, and built-in self test. Aimed at graduate students of electrical and computer engineering, the book is also the most up-to-date reference on the market for practicing engineers.

308 citations


Proceedings ArticleDOI
27 Apr 2003
TL;DR: This paper addresses the problem of compacting test responses in the presence of unknowns at the input of the compactor by exploiting the capabilities of well-known error detection and correction codes via Saluja-Karpovsky Space Compactors.
Abstract: This paper addresses the problem of compacting test responses in the presence of unknowns at the input of the compactor by exploiting the capabilities of well-known error detection and correction codes. The technique, called i-Compact, uses Saluja-Karpovsky Space Compactors, but permits detection and location of errors in the presence of unknown logic (X) values with help from the ATE. The advantages of i-Compact are: 1. Small number of output pins from the compactors for a required error detection capability; 2. Small tester memory for storing expected responses; 3. Flexibility of choosing several different combinations of number of X values and number of bit errors for error detection without altering the hardware compactor; 4. Same hardware capable of identifying the line that produced an error in the presence of unknowns; 5. Use of non-proprietary codes found in the literature of the 1950s; and 6. Independence from the circuit and the test generator.
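The coding-theoretic idea is easy to sketch. The toy below (illustrative only, not the paper's exact construction, and without its X handling) locates a single flipped bit in a 7-bit response slice using the Hamming(7,4) parity-check matrix, a member of the classic code family that i-Compact builds on:

```python
# Illustrative sketch, not the paper's construction: locate a single
# flipped bit in a 7-bit response slice with the Hamming(7,4)
# parity-check matrix (column i is the binary encoding of i).
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(bits):
    # mod-2 sum of each parity-check row against the bits
    return [sum(h & b for h, b in zip(row, bits)) % 2 for row in H]

def locate_error(observed, expected):
    # the syndrome of the difference reads off the flipped position
    diff = [o ^ e for o, e in zip(observed, expected)]
    s = syndrome(diff)
    pos = s[0] + 2 * s[1] + 4 * s[2]
    return pos or None              # None when the slice matches

expected = [1, 0, 1, 1, 0, 0, 1]
observed = expected[:]
observed[4] ^= 1                    # inject an error at position 5
assert locate_error(observed, expected) == 5
assert locate_error(expected, expected) is None
```

In hardware, such parity checks become XOR trees on the scan-out lines, which is what keeps the compactor's output pin count small.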

136 citations


Proceedings ArticleDOI
27 Apr 2003
TL;DR: An efficient technique for test data volume reduction based on the shared scan-in (Illinois Scan) architecture and the scan chain reconfiguration (Dynamic scan) architecture is defined and the results demonstrate the efficiency of the proposed architecture for real-industrial circuits.
Abstract: In this paper, an efficient technique for test data volume reduction based on the shared scan-in (Illinois Scan) architecture and the scan chain reconfiguration (Dynamic Scan) architecture is defined. The composite architecture is created with analysis that relies on the compatibility relation of scan chains. Topological analysis and compatibility analysis are used to maximize gains in test data volume and test application time. The goal of the proposed synthesis procedure is to test all detectable faults in broadcast test mode using minimum scan-chain configurations. As a result, more aggressive sharing of scan inputs can be applied for test data volume and test application time reduction. The experimental results demonstrate the efficiency of the proposed architecture for real-industrial circuits.
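The compatibility relation that the synthesis procedure relies on can be illustrated with a small sketch (hypothetical cube encoding, with 'x' as don't-care): two chains can share a scan-in pin in broadcast mode only if their care bits never disagree at the same shift position in any test cube.

```python
# Hypothetical sketch of scan-chain compatibility for shared scan-in:
# chains may be broadcast the same data only if their specified (care)
# bits never conflict at the same shift position in any test cube.
def compatible(cubes_a, cubes_b):
    for cube_a, cube_b in zip(cubes_a, cubes_b):
        for bit_a, bit_b in zip(cube_a, cube_b):
            if 'x' not in (bit_a, bit_b) and bit_a != bit_b:
                return False
    return True

a = ["1x0x", "xx11"]
b = ["1xx1", "x01x"]
assert compatible(a, b)          # care bits never disagree

c = ["0x0x", "xxxx"]             # conflicts with a at position 0
assert not compatible(a, c)
```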

104 citations


Journal ArticleDOI
TL;DR: An overview of delay defect characteristics and the impact of delay defects on IC quality is presented and practical delay-testing strategy in terms of test pattern generation, test application speed, DFT, and test cost is discussed.
Abstract: Several factors influence production delay testing and corresponding DFT techniques: defect sources, design styles, ability to monitor process characteristics, test generation time, available test time, and tester memory. We present an overview of delay defect characteristics and the impact of delay defects on IC quality. We also discuss practical delay-testing strategy in terms of test pattern generation, test application speed, DFT, and test cost.

92 citations


Proceedings ArticleDOI
30 Sep 2003
TL;DR: A new technique is presented for designing power-optimized scan chains under a given routing constraint, based on clustering and reordering of scan cells in the design, which reduces average power consumption during scan testing.
Abstract: Scan-based architectures, though widely used in modern designs, are expensive in power consumption. In this paper, we present a new technique for designing power-optimized scan chains under a given routing constraint. The proposed technique is a three-phase process based on clustering and reordering of scan cells in the design, and it reduces average power consumption during scan testing. With this technique, short scan connections in the scan chains are guaranteed and congestion problems in the design are avoided.
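Why cell ordering affects shift power can be seen with the weighted transition count, a standard scan-power metric (shown here for illustration; not necessarily this paper's exact cost function): each adjacent-bit transition in the loaded vector ripples through the remainder of the chain as it is shifted in, so it is weighted by its shift depth.

```python
# Weighted transition count (WTC), one common formulation: a transition
# between positions i and i+1 toggles downstream cells for the
# remaining n-i-1 shift cycles, so cell ordering changes shift power.
def wtc(vector):
    n = len(vector)
    return sum(n - i - 1
               for i in range(n - 1)
               if vector[i] != vector[i + 1])

# Same care bits, two orderings: clustering equal values lowers WTC.
assert wtc("0101") == 6
assert wtc("0011") == 2
```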

90 citations


Proceedings ArticleDOI
03 Mar 2003
TL;DR: This paper presents a new approach to automatic test program generation exploiting an evolutionary paradigm that overcomes the main limitations of previous methodologies and provides significantly better results.
Abstract: Microprocessor cores are a major challenge in the test arena: not only is their complexity always increasing, but their specific characteristics also intensify all difficulties. A microprocessor embedded inside a SOC is even harder to test, since its inputs might be harder to control and its behavior harder to observe. Functional testing is an effective solution which consists of forcing the microprocessor to execute a suitable test program. This paper presents a new approach to automatic test program generation exploiting an evolutionary paradigm. It overcomes the main limitations of previous methodologies and provides significantly better results. Human intervention is limited to the enumeration of all assembly instructions, and the internal parameters of the optimizer are self-adapting. Experimental results show the effectiveness of the approach.
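The evolutionary loop can be caricatured in a few lines (a toy sketch, not the paper's generator: the ISA list and the diversity-based fitness below are stand-ins for a real instruction set and real coverage feedback from fault simulation):

```python
# Toy (1+1) evolutionary loop over assembly-like programs. The fitness
# here simply rewards instruction-type diversity; a real tool would
# score each candidate program by the fault coverage it achieves.
import random

ISA = ["add", "sub", "mul", "load", "store", "jmp", "nop"]

def coverage(program):
    return len(set(program))            # placeholder fitness

def mutate(program):
    child = list(program)
    child[random.randrange(len(child))] = random.choice(ISA)
    return child

random.seed(0)
best = [random.choice(ISA) for _ in range(5)]
for _ in range(200):
    child = mutate(best)
    if coverage(child) >= coverage(best):   # never accept a regression
        best = child

assert coverage(best) == 5              # all five slots now distinct
```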

88 citations


Proceedings ArticleDOI
01 Sep 2003
TL;DR: The state and accomplishments of the IEEE 1500 proposal for the test of non-mergeable cores are presented, noting that the test challenges vary with the mergeable or non-mergeable nature of the core.
Abstract: Design reuse has been a key enabler of efficient System-on-Chip creation, by allowing pre-designed functions to be leveraged, thereby reducing development cycles and time to market. The test of these pre-designed blocks, often referred to as cores, is a critical factor in successful design reuse methodologies, and must be anticipated, with varying degrees of challenge depending on the mergeable or non-mergeable nature of the core. This paper presents the state and accomplishments of the IEEE 1500 proposal for the test of non-mergeable cores.

83 citations


Journal ArticleDOI
TL;DR: On-chip compression and decompression techniques provide high fault coverage with low test times and are shown to be effective in deterministic test.
Abstract: You have probably heard that BIST takes too long and its fault coverage is low, and that deterministic test requires too many patterns. This article shows how on-chip compression and decompression techniques provide high fault coverage with low test times.

73 citations


Journal ArticleDOI
TL;DR: A metric that can be used to evaluate the effectiveness of procedures for reducing the scan data volume is proposed that compares the achieved compression to the compression which is intrinsic to the use of multiple scan chains.
Abstract: We consider issues related to the reduction of scan test data in designs with multiple scan chains. We propose a metric that can be used to evaluate the effectiveness of procedures for reducing the scan data volume. The metric compares the achieved compression to the compression which is intrinsic to the use of multiple scan chains. We also propose a procedure for modifying a given test set so as to achieve reductions in test data volume assuming a combinational decompressor circuit.

70 citations


Proceedings ArticleDOI
27 Apr 2003
TL;DR: A methodology for the determination of decompression hardware that guarantees complete fault coverage for a unified compaction/compression scheme is proposed and significant test volume and test application time reductions are delivered through the scheme.
Abstract: A methodology for the determination of decompression hardware that guarantees complete fault coverage for a unified compaction/compression scheme is proposed. Test cube information is utilized to determine near-optimal decompression hardware. The proposed scheme simultaneously attains high compression levels and reduced pattern counts through linear decompression hardware. Significant test volume and test application time reductions are delivered through the scheme we propose, while a highly cost-effective hardware implementation is retained.
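The core mechanism, linear decompression, can be sketched generically (the XOR network below is invented for illustration, not the hardware the paper synthesizes): each scan bit is a mod-2 sum of a few compressed tester bits, so a short stored word expands into a wider scan slice, and care bits are satisfied by solving the corresponding linear system when patterns are generated.

```python
# Generic linear decompressor sketch: scan bit i is the XOR of a fixed
# subset of the compressed input bits, so 3 stored bits expand to 8
# scan bits per shift. The tap network here is purely illustrative.
NETWORK = [          # scan bit i = XOR of these compressed-bit indices
    [0], [1], [2], [0, 1], [1, 2], [0, 2], [0, 1, 2], [0],
]

def decompress(word):
    return [sum(word[j] for j in taps) % 2 for taps in NETWORK]

assert decompress([1, 0, 1]) == [1, 0, 1, 1, 1, 0, 0, 1]
```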

61 citations


Proceedings ArticleDOI
01 Sep 2003
TL;DR: Experimental results show that the new deterministic RTL techniques achieve several orders of magnitude reduction in test generation time without compromising fault coverage when compared to gate-level ATPG tools.
Abstract: We present an efficient register-transfer level automatic test pattern generation (ATPG) algorithm. First, our ATPG generates a series of sequential justification and propagation paths for each RTL primitive via a deterministic branch-and-bound search process, called a test environment. Then the precomputed test vectors for the RTL primitives are plugged into the generated test environments to form gate-level test vectors. We augment a 9-valued algebra to efficiently represent the justification and propagation objectives at the RT level. Our ATPG automatically extracts any finite state machine (FSM) from the circuit, constructs the state transition graph (STG), and uses high-level information to guide the search process. We propose new static methods to identify embedded counter structures, and we use implication-based techniques and static learning to find the FSM traversal sequences sufficient to control the counters. Finally, a simulation-based RTL extension is added to augment the deterministic test set in the few cases where there is additional room for improvement in fault coverage. Experimental results show that our new deterministic RTL techniques achieve several orders of magnitude reduction in test generation time without compromising fault coverage when compared to gate-level ATPG tools. Our ATPG also outperforms a recently reported simulation-based high-level ATPG tool in terms of both fault coverage and CPU time.

Journal ArticleDOI
TL;DR: A test pattern compression scheme for test data volume and application time reduction is proposed, where the increased number of internal scan chains due to an on-chip, fixed-rate decompressor reduces test application time proportionately.
Abstract: A test pattern compression scheme for test data volume and application time reduction is proposed. While compression reduces test data volume, the increased number of internal scan chains due to an on-chip, fixed-rate decompressor reduces test application time proportionately. Through on-chip decompression, both the number of virtual scan chains visible to the ATE and the functionality of the ATE are retained intact. Complete fault coverage is guaranteed by constructing the decompression hardware deterministically through analysis of the test pattern set.

Proceedings ArticleDOI
27 Apr 2003
TL;DR: The testable design and testing of a fully software-controllable lab-on-a-chip, including a fluidic array of FlowFETs, control and interface electronics is presented, which shows the effects of faults in the (combined) fluidic and electrical parts.
Abstract: The testable design and testing of a fully software-controllable lab-on-a-chip, including a fluidic array of FlowFETs, control and interface electronics is presented. Test hardware is included for detecting faults in the DMOS electro-fluidic interface and the digital parts. Multidomain fault modeling and simulation shows the effects of faults in the (combined) fluidic and electrical parts. The fault simulations also reveal important parameters of multi-domain test-stimuli, e.g. fluid velocity, for detecting both electrical and fluidic defects.

Proceedings ArticleDOI
30 Sep 2003
TL;DR: The proposed solution is based on a P1500-compliant wrapper that follows a programmable BIST approach and is able to support both testing and diagnosis and takes into account several constraints existing in an industrial environment.
Abstract: This paper addresses the issue of testing and diagnosing a memory core embedded in a complex SOC. The proposed solution is based on a P1500-compliant wrapper that follows a programmable BIST approach and is able to support both testing and diagnosis. Experimental results are provided that allow evaluating the benefits and limitations of the adopted solution and comparing it with previously proposed ones. The solution takes into account several constraints existing in an industrial environment, such as minimizing the cost of test development, easing the reuse of the available architectures for test and diagnosis of different memory types, and minimizing the cost of the external ATE.
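Programmable memory BIST engines of this kind typically execute March algorithms. As background (illustrative only; the paper's wrapper and BIST program are not shown), the sketch below models March C- over read/write callbacks and shows it catching a stuck-at-0 cell:

```python
# March C- over word-granular read/write callbacks:
# up(w0); up(r0,w1); up(r1,w0); down(r0,w1); down(r1,w0); up(r0)
def march_cm(read, write, n):
    for a in range(n):
        write(a, 0)
    for a in range(n):                  # up(r0, w1)
        if read(a) != 0:
            return False
        write(a, 1)
    for a in range(n):                  # up(r1, w0)
        if read(a) != 1:
            return False
        write(a, 0)
    for a in reversed(range(n)):        # down(r0, w1)
        if read(a) != 0:
            return False
        write(a, 1)
    for a in reversed(range(n)):        # down(r1, w0)
        if read(a) != 1:
            return False
        write(a, 0)
    return all(read(a) == 0 for a in range(n))

mem = [0] * 16
assert march_cm(lambda a: mem[a],
                lambda a, v: mem.__setitem__(a, v), 16)

STUCK = 5                               # model cell 5 stuck at 0
assert not march_cm(lambda a: mem[a],
                    lambda a, v: mem.__setitem__(a, 0 if a == STUCK else v),
                    16)
```

A "programmable" BIST stores the march elements as data rather than hard-wiring one algorithm, which is what eases reuse across memory types.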

Patent
18 Nov 2003
TL;DR: In this paper, a method and apparatus are provided that facilitate analysis of the intended flow of logical signals between key points in a design, and hardware design defects can be detected using a novel intent-driven verification process.
Abstract: A method and apparatus are provided that facilitate analysis of the intended flow of logical signals between key points in a design. According to one aspect of the present invention, hardware design defects can be detected using a novel Intent-Driven Verification process. First, a representation of a hardware design and information regarding the intended flow of logical signals among variables in the representation are received. Then, the existence of potential errors in the hardware design may be inferred based upon the information regarding the intended flow of logical signals by (1) translating the information regarding the intended flow of logical signals into a comprehensive set of checks that must hold true in order for the hardware design to operate in accordance with the intended flow of logical signals, and (2) determining if any of the checks can be violated during operation of circuitry represented by the hardware design.

Proceedings ArticleDOI
15 Dec 2003
TL;DR: ASC (A Stream Compiler) simplifies exploration of hardware accelerators by transforming the hardware design task into a software design process, using only 'gcc' and 'make' to obtain a hardware netlist.
Abstract: We consider speeding up general-purpose applications with hardware accelerators. Traditionally, hardware accelerators are tediously hand-crafted to achieve top performance. ASC (A Stream Compiler) simplifies exploration of hardware accelerators by transforming the hardware design task into a software design process using only 'gcc' and 'make' to obtain a hardware netlist. ASC enables programmers to customize hardware accelerators at three levels of abstraction: the architecture level, the functional block level, and the bit level. All three customizations are based on one uniform representation: a single C++ program with custom types and operators for each level of abstraction. This representation allows ASC users to express and reason about the design space, extract parallelism at each level, and quickly evaluate different design choices. In addition, since the user has full control over each gate-level resource in the entire design, ASC accelerator performance can always be equal to or better than hand-crafted designs, usually with much less effort. We present several ASC benchmarks, including wavelet compression and Kasumi encryption.

Proceedings ArticleDOI
Subhasish Mitra1, Kee Sup Kim1
13 Oct 2003
TL;DR: XMAX is a novel test data compression architecture capable of achieving almost exponential reduction in scan test data volume and test time while allowing use of commercial automatic test pattern generation (ATPG) tools.
Abstract: XMAX is a novel test data compression architecture capable of achieving almost exponential reduction in scan test data volume and test time while allowing the use of commercial automatic test pattern generation (ATPG) tools. It tolerates the presence of sources of unknown logic values (also referred to as X's) without compromising test quality and diagnosis capability for most practical purposes. The XMAX architecture has been implemented in several industrial designs.

Journal ArticleDOI
TL;DR: The authors update the standard PLL architecture to allow simple digital testing, and the all-digital strategy yields catastrophic fault coverage as high as that of the classical functional test, plus it is fast, extremely simple to implement, and requires only standard digital test equipment.
Abstract: Traditional functional testing of mixed-signal ICs is slow and requires costly, dedicated test equipment. The authors update the standard PLL architecture to allow simple digital testing. The all-digital strategy yields catastrophic fault coverage as high as that of the classical functional test, plus it is fast, extremely simple to implement, and requires only standard digital test equipment.

Journal ArticleDOI
TL;DR: This work presents the first report of a design of reconfigurable core wrappers which allow a dynamic change in the width of the TAM executing the core test, and derives an O(N_C²B)-time algorithm which can compute near-optimal SoC test schedules.
Abstract: Testing of embedded core based system-on-chip (SoC) ICs is a well known problem, and the upcoming IEEE P1500 Standard for Embedded Core Test (SECT) proposes DFT solutions to alleviate it. One of the proposals is to provide every core in the SoC with test access wrappers. Previous approaches to the problem of wrapper design have proposed static core wrappers, which are designed for a fixed test access mechanism (TAM) width. We present the first report of a design of reconfigurable core wrappers which allow a dynamic change in the width of the TAM executing the core test. Analysis of the corresponding scheduling problem indicates that good approximate schedules can be achieved without significant computational effort. Specifically, we derive an O(N_C²B)-time algorithm which can compute near-optimal SoC test schedules, where N_C is the number of cores and B is the number of top-level TAMs. Experimental results on benchmark SoCs are presented which improve upon integer programming based methods, not only in the quality of the schedule, but also in significantly reduced computation time.
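The flavor of the scheduling problem can be conveyed with a simple greedy heuristic (longest processing time first; this illustrates the assignment of core tests to top-level TAMs, not the paper's O(N_C²B) reconfigurable-wrapper algorithm, and the core names and times are invented):

```python
# Greedy LPT sketch: give each core's test, longest first, to the
# currently least-loaded top-level TAM; the makespan is the test time.
import heapq

def schedule(test_times, num_tams):
    tams = [(0, i) for i in range(num_tams)]    # (load, TAM index)
    heapq.heapify(tams)
    assignment = {}
    for core in sorted(test_times, key=test_times.get, reverse=True):
        load, i = heapq.heappop(tams)
        assignment[core] = i
        heapq.heappush(tams, (load + test_times[core], i))
    makespan = max(load for load, _ in tams)
    return assignment, makespan

times = {"cpu": 5, "dsp": 4, "mem": 3, "usb": 3}   # hypothetical cores
_, makespan = schedule(times, num_tams=2)
assert makespan == 8            # loads split 5+3 and 4+3
```

Reconfigurable wrappers add a further degree of freedom on top of this: a core's TAM width, and hence its test time, can change mid-schedule.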

Proceedings ArticleDOI
03 Sep 2003
TL;DR: This work first proposes a testability grid relating each pattern to the severity of the testability anti-patterns, and presents a solution, based on a definition of patterns at the metalevel, to automate the instantiation of patterns constrained by testability criteria.
Abstract: We address not only the question of testability measurement of OO designs but also its practicability. While detecting testability weaknesses (called testability anti-patterns) of an OO design is a crucial task, one cannot expect a non-specialist to make the right improvements without guidance or automation. To overcome this limitation, we investigate solutions integrated into the OO process. We focus on design patterns as coherent subsets of the architecture, and we explain how their use can provide a way of limiting the severity of testability weaknesses and of confining their effects to the classes involved in the pattern. Indeed, design patterns appear both as a usual refinement instrument and as a cause of complex interactions in a class diagram, and more specifically of testability anti-patterns. To reach our objective of integrating testability improvement into the design process, we first propose a testability grid relating each pattern to the severity of the testability anti-patterns, and we present our solution, based on a definition of patterns at the metalevel, to automate the instantiation of patterns constrained by testability criteria.

Proceedings ArticleDOI
Mike J Tripp1, Tak M. Mak1, A. Meixner1
01 Sep 2003
TL;DR: This work summarizes the design for test (DFT) circuitry and test methods that enabled Intel to shift away from traditional functional testing of I/Os, and indicates how this can be extended to cover the next generation of high-speed serial interfaces.
Abstract: This work summarizes the design for test (DFT) circuitry and test methods that enabled Intel to shift away from traditional functional testing of I/Os. This shift was one of the key enablers for automatic test equipment (ATE) re-use and the move to lower capability (and cost) structural test platforms. Specific examples include circuit implementations from the Pentium® 4 processor, high volume manufacturing (HVM) data, and evolutionary changes to address key lessons learned. We close with indications of how this can be extended to cover the next generation of high-speed serial interfaces.

Journal ArticleDOI
Yuejian Wu1, P. MacDonald1
TL;DR: A novel design for testability (DFT) technique is proposed to test ASICs with identical embedded cores that significantly reduces test application time, test data volume, and test generation effort.
Abstract: Predesigned cores and reusable modules are popularly used in the design of large and complex application specific integrated circuits (ASICs). As the size and complexity of ASICs increase, the test effort, including test development effort, test data volume, and test application time, has also significantly increased. This paper shows that this test effort increase can be minimized for ASICs that consist of multiple identical cores. A novel design for testability (DFT) technique is proposed to test ASICs with identical embedded cores. The proposed technique significantly reduces test application time, test data volume, and test generation effort.

Journal ArticleDOI
TL;DR: This paper proposes DFT modifications for cellular CLA adders to achieve complete CFM testability with special emphasis on the minimum impact in terms of area and performance, providing a practical solution.
Abstract: Cellular Carry Lookahead (CLA) adders are systematically implemented in arithmetic units due to their regular, well-balanced structure. In terms of testability and with respect to the classical Cell Fault Model (CFM), cellular CLA adders have poor testability by construction. Design-for-testability (DFT) modifications for cellular CLA adders have been proposed in the literature providing complete CFM testability making the adders either level-testable or C-testable. These designs impose significant area and performance overheads. In this paper, we propose DFT modifications for cellular CLA adders to achieve complete CFM testability with special emphasis on the minimum impact in terms of area and performance. Complete CFM testability is achieved without adding any extra inputs to the adder, with very small area and performance overheads, thus providing a practical solution. The proposed DFT scheme requires only 1 extra output and it is not necessary to put the circuit in a special test mode, while the earlier schemes require the addition of 2 extra inputs to set the circuit in test mode. A rigorous proof of the linear-testability of the adder is given and a sufficient linear-sized test set is provided that guarantees 100% CFM fault coverage. Surprisingly, the size of the proposed linear-sized test set is, in most practical cases, comparable or even smaller than a logarithmic-sized test set proposed in the literature.
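As background, the carry-lookahead recurrence that such adders implement can be written out directly (generic CLA, without the paper's DFT modifications): per-bit generate g = a AND b, propagate p = a XOR b, and each carry c[i+1] = g[i] OR (p[i] AND c[i]).

```python
# Generic carry-lookahead addition, bit by bit, to show the cell
# structure the DFT modifications target (not the modified design).
def cla_add(a, b, width=8):
    g = [((a >> i) & 1) & ((b >> i) & 1) for i in range(width)]  # generate
    p = [((a >> i) & 1) ^ ((b >> i) & 1) for i in range(width)]  # propagate
    c = [0]                                  # carry-in
    for i in range(width):
        c.append(g[i] | (p[i] & c[i]))       # lookahead recurrence
    s = sum((p[i] ^ c[i]) << i for i in range(width))
    return s | (c[width] << width)           # append carry-out

assert cla_add(100, 55) == 155
assert cla_add(200, 100) == 300              # carry out of bit 7
```

The Cell Fault Model asks that every input combination of each such g/p/carry cell be exercised and observed, which the regular but reconvergent carry logic makes hard without DFT help.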

Proceedings ArticleDOI
08 Dec 2003
TL;DR: An efficient diagnosis algorithm is proposed to diagnose faulty scan chains with multiple faults per chain and experimental results show that the proposed algorithm achieves good diagnosis resolution in reasonable time.
Abstract: As VLSI design and process enter the stage of ultra deep submicron (UDSM), process variations, signal integrity (SI), and design integrity (DI) issues can no longer be ignored. These factors introduce new problems in VLSI design, test, and diagnosis, which increase time-to-market, time-to-volume, and cost for silicon debug. The intermittent scan chain hold-time fault is one such problem we encountered in practice. The fault sites have to be located to speed up silicon debug and improve yield. A recent study of the problem proposed a statistical algorithm to diagnose faulty scan chains assuming only one fault per chain. Building on that work, in this paper an efficient diagnosis algorithm is proposed to diagnose faulty scan chains with multiple faults per chain. The presented experimental results on industrial designs show that the proposed algorithm achieves good diagnosis resolution in reasonable time.

Proceedings ArticleDOI
24 Mar 2003
TL;DR: It is shown that this technique allows automatic lot-condition adjustment of the evaluation comparators and can provide lot-specific information to automated test equipment, which can be documented in the test results owing to its diagnosis capability.
Abstract: The possibility of using window comparators for the on-chip evaluation of signals in the analogue circuit part has been demonstrated and is briefly summarized. One of the problems is the lot-to-lot variation of the comparator window. An automatic window repositioning technique that compensates for the window shift is detailed. The components for the implementation, comprising a reference comparator and the evaluation comparators, are described along with the implementation of the technique. It is shown that this technique allows automatic lot-condition adjustment of the evaluation comparators. Furthermore, the technique can provide lot-specific information to automated test equipment that can be documented in the test results, owing to its diagnosis capability.

Journal ArticleDOI
TL;DR: A low-cost and comprehensive built-in self-test (BIST) methodology for analog and mixed-signal circuits is described and a theoretical analysis of the oscillation is provided that explains why the amplitude measurement is essential.
Abstract: A low-cost and comprehensive built-in self-test (BIST) methodology for analog and mixed-signal circuits is described. We implement a time-division multiplexing (TDM) comparator to analyze the response of a circuit under test with minimum hardware overhead. The TDM comparator scheme is an effective signature analyzer for on-chip analog response compaction and pass/fail decision. We apply this scheme to an oscillation-test environment and implement a low-cost and comprehensive vectorless BIST methodology for high fault and yield coverage. Our scheme allows a tolerance in the output response, a feature necessary for analog circuits. Both oscillation frequency and oscillation amplitude are measured indirectly to increase the fault coverage. We provide a theoretical analysis of the oscillation that explains why the amplitude measurement is essential. Simulation results demonstrate that the proposed scheme can significantly reduce test time of the oscillation-test while achieving higher fault coverage.

Journal ArticleDOI
Stefan Rusu1, J. Stinson1, Simon M. Tam1, Justin Leung1, Harry Muljono1, B. Cherkauer1 
TL;DR: This paper reviews circuit design and package details, power delivery, the reliability, availability, and serviceability (RAS), design for test (DFT), and design for manufacturability (DFM) features, as well as an overview of the design and verification methodology.
Abstract: This 130-nm Itanium 2 processor implements the explicitly parallel instruction computing (EPIC) architecture and features an on-die 6-MB 24-way set-associative level-3 cache. The 374-mm² die contains 410 M transistors and is implemented in a dual-Vt process with six Cu interconnect layers and FSG dielectric. The processor runs at 1.5 GHz at 1.3 V and dissipates a maximum of 130 W. This paper reviews circuit design and package details, power delivery, the reliability, availability, and serviceability (RAS) features, design for test (DFT), and design for manufacturability (DFM) features, as well as an overview of the design and verification methodology. The fuse-based clock deskew circuit achieves 24-ps skew across the entire die, while the scan-based skew control further reduces it to 7 ps. The 128-bit front-side bus has a bandwidth of 6.4 GB/s and supports up to four processors on a single bus.

Journal ArticleDOI
TL;DR: The nonscan design for testability method based on the conflict measure can reduce many potential backtracks and make many hard-to-detect faults easy to detect; therefore, it can greatly enhance the actual testability of the circuit.
Abstract: A testability measure called conflict, based on conflict analysis in the process of sequential circuit test generation is introduced to guide nonscan design for testability. The testability measure indicates the number of potential conflicts to occur or the number of clock cycles required to detect a fault. A new testability structure is proposed to insert control points by switching the extra inputs to primary inputs, using whichever extra inputs of all control points can be controlled by independent signals. The proposed design for testability approach is economical in delay, area, and pin overheads. The nonscan design for testability method based on the conflict measure can reduce many potential backtracks and make many hard-to-detect faults easy-to-detect; therefore, it can enhance actual testability of the circuit greatly. Extensive experimental results are presented to demonstrate the effectiveness of the method.

Proceedings ArticleDOI
07 May 2003
TL;DR: The proposed dictionary-based test data compression approach is especially suitable for a reduced pin-count and low-cost DFT test environment, where a narrow interface between the tester and the SOC is desirable.
Abstract: We present a dictionary-based test data compression approach for reducing test data volume and testing time in SOCs. The proposed method is based on the use of a small number of ATE channels to deliver compressed test patterns from the tester to the chip and to drive a large number of internal scan chains in the circuit under test. Therefore, it is especially suitable for a reduced pin-count and low-cost DFT test environment, where a narrow interface between the tester and the SOC is desirable. The dictionary-based approach not only reduces testing time but it also eliminates the need for additional synchronization and handshaking between the SOC and the ATE. The dictionary entries are determined during the compression procedure by solving a variant of the well-known clique partitioning problem from graph theory. Experimental results for the ISCAS-89 benchmarks and representative test data from IBM show that the proposed method outperforms a number of recently-proposed test data compression techniques.
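The dictionary construction can be sketched with a greedy stand-in for the clique-partitioning step (hypothetical slice encoding, with 'x' as don't-care): test-data slices whose care bits never conflict form a clique and can share one dictionary entry, which the tester then references by a short index instead of shipping the full slice.

```python
# Greedy sketch of dictionary building: group pairwise-compatible
# slices (care bits never conflict) and merge each group into one
# entry. The paper formulates this grouping as clique partitioning.
def conflict(s, t):
    return any('x' not in (a, b) and a != b for a, b in zip(s, t))

def merge(group):
    # take the first specified value per position; all-'x' -> '0'
    return ''.join(next((c for c in col if c != 'x'), '0')
                   for col in zip(*group))

def build_dictionary(slices):
    groups = []
    for s in slices:
        for grp in groups:
            if not any(conflict(s, t) for t in grp):
                grp.append(s)
                break
        else:
            groups.append([s])
    return [merge(g) for g in groups]

slices = ["1x0x", "1xx1", "0xxx", "xx01"]
assert build_dictionary(slices) == ["1001", "0000"]
```

Fewer dictionary entries mean shorter indices on the narrow ATE interface, which is where the test-data-volume reduction comes from.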

Proceedings ArticleDOI
01 Sep 2003
TL;DR: Experimental results demonstrate that EDT, with no performance impact, little area overhead, and minimal impact to the flow, results in a significant reduction of scan test data volume and scan test time while maintaining the test quality levels.
Abstract: This paper discusses the adoption of Embedded Deterministic Test (EDT) at Infineon Technologies as a means to reduce the cost of manufacturing test without compromising test quality. The System-on-Chip (SoC) design flow and the changes necessary to successfully implement EDT are presented. Experimental results for three SoC designs targeted for automotive, wireless, and data communication applications are provided. These results demonstrate that EDT, with no performance impact, little area overhead, and minimal impact to the flow, results in a significant reduction of scan test data volume and scan test time while maintaining the test quality levels.