
Showing papers on "Design for testing published in 2009"


Proceedings ArticleDOI
18 Dec 2009
TL;DR: This Embedded Tutorial provides an overview of the manufacturing steps of TSV-based 3D chips and their associated test challenges, and discusses the necessary flows for wafer-level and package-level tests, the challenges with respect to test contents and wafer-level probe access, and the on-chip DfT infrastructure required for 3D-SICs.
Abstract: Today's miniaturization and performance requirements result in the usage of high-density integration and packaging technologies, such as 3D Stacked ICs (3D-SICs) based on Through-Silicon Vias (TSVs). Due to their advanced manufacturing processes and physical access limitations, the complexity and cost associated with testing this type of 3D-SICs are considered major challenges. This Embedded Tutorial provides an overview of the manufacturing steps of TSV-based 3D chips and their associated test challenges. It discusses the necessary flows for wafer-level and package-level tests, the challenges with respect to test contents and wafer-level probe access, and the on-chip DfT infrastructure required for 3D-SICs.

324 citations


Book
11 Mar 2009
TL;DR: EDA/VLSI practitioners and researchers in need of fluency in an "adjacent" field will find this an invaluable reference to the basic EDA concepts, principles, data structures, algorithms, and architectures for the design, verification, and test of VLSI circuits.
Abstract: This book provides broad and comprehensive coverage of the entire EDA flow. EDA/VLSI practitioners and researchers in need of fluency in an "adjacent" field will find this an invaluable reference to the basic EDA concepts, principles, data structures, algorithms, and architectures for the design, verification, and test of VLSI circuits. Anyone who needs to learn the concepts, principles, data structures, algorithms, and architectures of the EDA flow will benefit from this book.
- Covers the complete spectrum of the EDA flow, from ESL design modeling to logic/test synthesis, verification, physical design, and test - helps EDA newcomers get "up-and-running" quickly
- Includes comprehensive coverage of EDA concepts, principles, data structures, algorithms, and architectures - helps all readers improve their VLSI design competence
- Contains the latest advancements not yet available in other books, including test compression, ESL design modeling, large-scale floorplanning, placement, routing, and synthesis of clock and power/ground networks - helps readers design/develop testable chips or products
- Includes industry best practices wherever appropriate in most chapters - helps readers avoid costly mistakes
Table of Contents: Chapter 1: Introduction; Chapter 2: Fundamentals of CMOS Design; Chapter 3: Design for Testability; Chapter 4: Fundamentals of Algorithms; Chapter 5: Electronic System-Level Design and High-Level Synthesis; Chapter 6: Logic Synthesis in a Nutshell; Chapter 7: Test Synthesis; Chapter 8: Logic and Circuit Simulation; Chapter 9: Functional Verification; Chapter 10: Floorplanning; Chapter 11: Placement; Chapter 12: Global and Detailed Routing; Chapter 13: Synthesis of Clock and Power/Ground Networks; Chapter 14: Fault Simulation and Test Generation.

200 citations


Journal ArticleDOI
TL;DR: A new on-chip continuous-flow decompressor that integrates seamlessly with the test logic synthesis flow and fits well into various design paradigms, including modular design flows where blocks come with individual decompressors and compactors.
Abstract: This paper presents a new and comprehensive low-power test scheme compatible with a test compression environment. The key contribution of this paper is a flexible test-application framework that achieves significant reductions in switching activity during all phases of scan test: loading, capture, and unloading. In particular, we introduce a new on-chip continuous-flow decompressor. Its synergistic use with a power-aware scan controller allows a significant reduction of toggling rates when feeding scan chains with decompressed test patterns. While the proposed solution requires minimal modifications of the existing design-for-test logic, experiments indicate that its use results in a low switching activity which reduces power consumption to or below the level of the functional mode. It resolves problems related to power dissipation, voltage drop, and increased temperature. Our approach integrates seamlessly with the test logic synthesis flow, and it does not compromise compression ratios. It fits well into various design paradigms, including modular design flows where blocks come with individual decompressors and compactors.

68 citations


Proceedings ArticleDOI
13 May 2009
TL;DR: This work proposes new test methods for designs partitioned at the circuit level, in which the gates and transistors of individual circuits can be split across multiple die layers; the port-split register file studied here represents the most difficult circuit-partitioned design to test pre-bond.
Abstract: 3D integration is an emerging technology that allows for the vertical stacking of multiple silicon die. These stacked die are tightly integrated with through-silicon vias and promise significant power and area reductions by replacing long global wires with short vertical connections. This technology necessitates that neighboring logical blocks exist on different layers in the stack. However, such functional partitions disable intra-chip communication pre-bond and thus disrupt traditional test techniques. Previous work has described a general test architecture that enables pre-bond testability of an architecturally partitioned 3D processor and provided mechanisms for basic layer functionality. This work proposes new test methods for designs partitioned at the circuit level, in which the gates and transistors of individual circuits can be split across multiple die layers. We investigated a bit-partitioned adder unit and a port-split register file; the latter represents the most difficult circuit-partitioned design to test pre-bond but is used widely in many circuits. Two layouts of each circuit, planar and 3D, were produced. Our experiments verify the performance and power results and examine the test coverage achieved.

60 citations


Proceedings ArticleDOI
20 Apr 2009
TL;DR: A test access mechanism for Adaptive Scan that addresses the problem of reducing test data and test application time in a hierarchical and low pin count environment is discussed.
Abstract: Scan compression has emerged as the most successful solution to the problem of rising manufacturing test cost. Compression technology is not hierarchical in nature. Hierarchical implementations need test access mechanisms that maintain isolation between the different tests applied through the different compressors and decompressors. In this paper we discuss a test access mechanism for Adaptive Scan that addresses the problem of reducing test data and test application time in a hierarchical and low pin count environment. An active test access mechanism is used that becomes part of the compression scheme and unifies the test data for multiple CODEC implementations, thus allowing for hierarchical DFT implementations with flat ATPG.

39 citations


Proceedings ArticleDOI
20 Apr 2009
TL;DR: A test response compaction scheme and a corresponding diagnosis algorithm, especially suited for BIST and multi-site testing, are presented; experiments show that test time and response data volume are reduced significantly and that the diagnostic resolution even improves with this scheme.
Abstract: During volume testing, test application time, test data volume and high-performance automatic test equipment (ATE) are the major cost factors. Embedded testing, including built-in self-test (BIST), and multi-site testing are quite effective cost reduction techniques, but they may make diagnosis more complex. This paper presents a test response compaction scheme and a corresponding diagnosis algorithm which are especially suited for BIST and multi-site testing. Experimental results on industrial designs show that test time and response data volume are reduced significantly and that the diagnostic resolution even improves with this scheme. A comparison with X-Compact indicates that simple parity information provides higher diagnostic resolution per response data bit than more complex signatures.
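The parity idea behind such a scheme can be illustrated with a small sketch. The following Python fragment is only an illustration of generic parity-based response compaction, not the authors' actual compactor or diagnosis algorithm: each scan chain's captured response is reduced to one parity bit per pattern, and chains whose parity disagrees with the fault-free value become diagnosis candidates.

```python
# Illustrative sketch only: parity-based compaction of scan-chain responses.
from typing import List

def parity(bits: List[int]) -> int:
    """XOR-reduce a bit sequence to a single parity bit."""
    p = 0
    for b in bits:
        p ^= b
    return p

def compact_responses(chains: List[List[int]]) -> List[int]:
    """Compact one pattern's responses: one parity bit per scan chain."""
    return [parity(chain) for chain in chains]

def flag_suspect_chains(observed: List[List[int]],
                        expected: List[List[int]]) -> List[int]:
    """Indices of scan chains whose parity differs from the fault-free value."""
    obs = compact_responses(observed)
    exp = compact_responses(expected)
    return [i for i, (o, e) in enumerate(zip(obs, exp)) if o != e]

# Example: chain 1 carries an error in its second cell.
expected = [[0, 1, 1, 0], [1, 1, 0, 0], [0, 0, 0, 1]]
observed = [[0, 1, 1, 0], [1, 0, 0, 0], [0, 0, 0, 1]]
print(flag_suspect_chains(observed, expected))  # -> [1]
```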

28 citations


Journal ArticleDOI
P. Emma, E. Kursun
TL;DR: This article presents the system design opportunities offered by 3D integration, and it discusses the design and test challenges for 3D ICs, with various new design-for-manufacture and DFT issues.
Abstract: This article presents the system design opportunities offered by 3D integration, and it discusses the design and test challenges for 3D ICs, with various new design-for-manufacture and DFT issues.

25 citations


Proceedings ArticleDOI
26 Jul 2009
TL;DR: This paper shows that illegal states in the system are mainly caused by multi-fanout nets in the circuit, and develops efficient and effective heuristics to identify them and demonstrates the effectiveness of this proposed systematic solution.
Abstract: The discrepancy between integrated circuits' activities in normal functional mode and that in structural test mode has an increasing adverse impact on the effectiveness of manufacturing test. Pseudo-functional testing tries to resolve this problem by identifying illegal states in functional mode and avoiding them during the test pattern generation process. Existing methods, however, can only extract a small set of illegal states in the system due to various limitations. In this paper, we first show that illegal states in the system are mainly caused by multi-fanout nets in the circuit, and we develop efficient and effective heuristics to identify them. Experimental results on benchmark circuits demonstrate the effectiveness of our proposed systematic solution.

25 citations


Proceedings ArticleDOI
23 Nov 2009
TL;DR: This testing method requires no design-for-test hardware of the kind added to the circuit by some other methods, and is shown to uncover several parametric faults causing deviations as small as 5% from the nominal values.
Abstract: A method of testing for parametric faults of analog circuits, based on a polynomial representation of the fault-free function of the circuit, is presented. The response of the circuit under test (CUT) is estimated as a polynomial in the applied input voltage at relevant frequencies in addition to DC. Classification of the CUT is based on a comparison of the estimated polynomial coefficients with those of the fault-free circuit. This testing method requires no design-for-test hardware of the kind added to the circuit by some other methods. The proposed method is illustrated for a benchmark elliptic filter. It is shown to uncover several parametric faults causing deviations as small as 5% from the nominal values.
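As a rough illustration of the coefficient-comparison step described above, the sketch below fits a polynomial to sampled input/output voltages and flags the CUT when any coefficient deviates beyond a tolerance. The sampling setup, polynomial degree, and tolerance are assumptions for the example, not the paper's exact procedure.

```python
# Illustrative sketch: classify a CUT by comparing fitted polynomial
# coefficients of its response against those of the fault-free (golden) circuit.
import numpy as np

def estimate_coefficients(v_in: np.ndarray, v_out: np.ndarray, degree: int = 3) -> np.ndarray:
    """Least-squares fit of the CUT response as a polynomial in the input voltage."""
    return np.polyfit(v_in, v_out, degree)

def classify(cut_coeffs: np.ndarray, golden_coeffs: np.ndarray, rel_tol: float = 0.05) -> str:
    """Declare the CUT faulty if any coefficient deviates by more than rel_tol."""
    # The 1e-6 floor guards against division by near-zero golden coefficients.
    deviation = np.abs(cut_coeffs - golden_coeffs) / np.maximum(np.abs(golden_coeffs), 1e-6)
    return "faulty" if np.any(deviation > rel_tol) else "fault-free"

# Toy example: a CUT whose gain term deviates by about 7% from nominal.
v_in = np.linspace(-1.0, 1.0, 50)
golden_out = 0.02 + 1.00 * v_in - 0.05 * v_in**2 - 0.10 * v_in**3
cut_out = 0.02 + 0.93 * v_in - 0.05 * v_in**2 - 0.10 * v_in**3

golden = estimate_coefficients(v_in, golden_out)
cut = estimate_coefficients(v_in, cut_out)
print(classify(cut, golden))  # -> "faulty"
```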

25 citations


Journal ArticleDOI
TL;DR: A new concept of test and diagnosis in regular mesh-like network-on-a-chip (NoC) designs is proposed and a set of design-for-testability (DfT) techniques for the application of test patterns from the external boundary of a NoC are presented.
Abstract: The study proposes a new concept of test and diagnosis in regular mesh-like network-on-a-chip (NoC) designs. The method is based on functional fault models and it implements packet-address-driven test configurations. As will be shown, such configurations can be applied to achieve near-100% structural fault coverage for the network switches. Additionally, a concept of functional switch faults, called link faults, is introduced. The approach is scalable (complexity grows linearly with respect to the number of switches) and it is capable of unambiguously pinpointing the faulty links inside the switching network. The current paper also presents a set of design-for-testability (DfT) techniques for the application of test patterns from the external boundary of a NoC. The authors have implemented a parametrisable switching network and developed a set of DfT structures to support testing of network switches using external test configurations. The proposed structures include a resource loopback for testing the crossbar multiplexer of the resource connection, a modification to the control part to force YX routing, and a compact logic built-in self-test (BIST) for the control unit. Experiments show that the proposed structures allow near-100% test coverage at the expense of less than 4% of extra switch area.

25 citations


Proceedings ArticleDOI
18 Dec 2009
TL;DR: This new approach performs test point insertion (TPI) to improve the testability of random-resistant nets, automates the insertion process by integrating it into the physical synthesis flow, and significantly reduces design closure turnaround time.
Abstract: This paper presents a new approach to improve random test coverage during physical synthesis for high-performance design. This new approach performs test point insertion (TPI) to improve the testability of random-resistant nets. Conventional test point insertion approaches add extra registers, either as control points or observation points, to improve controllability or observability. However, adding extra registers has many disadvantages in high-performance design. It might degrade the design performance in terms of power and timing. It might also end up with even worse testability. The new approach does not add any registers; instead it only uses existing signals as test points. The test points are selected from logic paths with the most timing slack and physical placement proximity. This saves silicon area, reduces power consumption, and minimizes the design changes. The new approach also automates the test insertion process by integrating it in the physical synthesis flow. The integration helps reduce design closure turnaround time significantly. Production results show nearly zero performance degradation from this approach and better testability improvement compared to a manual test point insertion approach. Furthermore, this paper proposes a novel idea of exploiting unreachable states for test point insertion. Preliminary results on this idea are also given.

Journal ArticleDOI
TL;DR: The design and implementation of a design-for-test (DfT) architecture is presented, which improves the testability of an asynchronous NoC architecture, and a simple method for generating test patterns for network routers is described.
Abstract: Asynchronous design offers an attractive solution to the problems faced by networks-on-chip (NoC) designers, such as timing constraints. Nevertheless, post-fabrication testing is a major challenge in bringing asynchronous NoCs to market because of a lack of testing methodology and support. This study first presents the design and implementation of a design-for-test (DfT) architecture, which improves the testability of an asynchronous NoC architecture. Then, a simple method for generating test patterns for network routers is described. Test patterns are automatically generated by a custom program, given the network topology and the network size. Finally, we introduce a testing strategy for the whole asynchronous NoC. With the generated test patterns, the testing methodology achieves high fault coverage (99.86%) for single stuck-at fault models.

Journal ArticleDOI
TL;DR: A novel decorrelating design-for-digital-testability (D3T) scheme for Sigma-Delta modulators to enhance the test accuracy of using digital stimuli and has the advantages of achieving high test accuracy, low circuit overhead, high fault observability, and the capability of conducting at-speed tests.
Abstract: This paper presents a novel decorrelating design-for-digital-testability (D3T) scheme for Sigma-Delta modulators to enhance the test accuracy of using digital stimuli. The input switched-capacitor network of the modulator under test is reconfigured as two or more subdigital-to-charge converters in the test mode. By properly designing the digital stimuli, the shaped noise power of the digital stimulus can be effectively attenuated. As a result, the shaped noise correlation as well as the modulator overload issues are alleviated, thus improving the test accuracy. A second-order Sigma-Delta modulator design is used as an example to demonstrate the effectiveness of the proposed scheme. The behavioral simulation results showed that, when the signal level of the stimulus tone is less than -5 dBFS, the signal-to-noise ratios obtained by the digital stimuli are inferior to those obtained by their analog counterparts of no more than 1.8 dB. Circuit-simulation results also demonstrated that the D3T scheme has the potential to test moderate nonlinearity. The proposed D3T scheme has the advantages of achieving high test accuracy, low circuit overhead, high fault observability, and the capability of conducting at-speed tests.

Proceedings ArticleDOI
03 May 2009
TL;DR: The concept of design-for-testability (DFT) for microfluidic biochips is introduced and a DFT method that incorporates a test plan into the fluidic operations of a target bioassay protocol is proposed that ensures a high level of testability.
Abstract: Testing is essential for digital microfluidic biochips that are used for safety-critical applications such as point-of-care health assessment, air-quality monitoring, and food-safety testing. However, the effectiveness of recently proposed test techniques for biochips is limited by the fact that current design methods do not consider testability. We introduce the concept of design-for-testability (DFT) for microfluidic biochips and propose a DFT method that incorporates a test plan into the fluidic operations of a target bioassay protocol. By using the testability-aware bioassay protocol as an input to the biochip design tool, the proposed DFT method ensures a high level of testability, defined as the percentage of the electrodes or functional units on the synthesized chip that can be effectively tested. We evaluate the DFT method using a representative multiplexed bioassay and the polymerase chain reaction.

Proceedings ArticleDOI
07 Oct 2009
TL;DR: A simple and scalable 3D-SoC test thermal model is developed, a 3D test access architecture is constructed for efficient test access routing, and the limited test resources are partitioned to facilitate a thermal-aware test schedule while minimizing the overall test time.
Abstract: The rapid emergence of three-dimensional integration using a "Through-Silicon-Via" (TSV) process calls for research activities on testing and design for testability. Compared to traditional 2D designs, the 3D-SoC poses great challenges in testing, such as three-dimensional placement of cores and test resources, severe chip overheating due to the nonuniform distribution of power density in 3D, and 3D test access routing. In this work, we propose an effective and efficient test access routing and resource partitioning scheme to tackle the 3D-SoC test challenges. We develop a simple and scalable 3D-SoC test thermal model for thermal compatibility analysis. We construct a 3D test access architecture for efficient test access routing, and partition the limited test resources to facilitate a thermal-aware test schedule while minimizing the overall test time. The promising results are demonstrated by extensive simulation on ITC'02 benchmark SoCs.

01 Jan 2009
TL;DR: This work implements a Universal Asynchronous Receiver Transmitter (UART) with BIST capability using different LFSR techniques and compares these techniques for logic utilization in a SPARTAN3 XC3S200-4FT256 FPGA device.
Abstract: The increasing growth of sub-micron technology has made VLSI testing more difficult. Test and design for testability are recognized today as critical to a successful design. Built-In Self-Test (BIST) is becoming an alternative solution to the rising costs of external electrical testing and the increasing complexity of devices; a small increase in system cost yields a large reduction in testing cost. BIST is a design technique that allows a circuit to test itself. A test pattern generator (TPG) using a Linear Feedback Shift Register (LFSR) is proposed, which is well suited to a BIST architecture. We have implemented a Universal Asynchronous Receiver Transmitter (UART) with BIST capability using different LFSR techniques and compared these techniques for logic utilization in a SPARTAN3 XC3S200-4FT256 FPGA device.
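For readers unfamiliar with LFSR-based test pattern generation, the sketch below shows a software model of an external-feedback (Fibonacci-style) LFSR producing pseudo-random BIST patterns. The width, seed, and tap positions are illustrative assumptions; the paper's specific LFSR variants and their FPGA implementations are not reproduced here.

```python
# Illustrative sketch: a software model of an LFSR-based test pattern generator.
def lfsr_patterns(seed: int, taps: tuple, width: int, count: int):
    """Yield `count` pseudo-random test patterns from an LFSR of `width` bits."""
    state = seed & ((1 << width) - 1)
    assert state != 0, "an all-zero seed locks up a XOR-feedback LFSR"
    for _ in range(count):
        yield state
        feedback = 0
        for t in taps:                      # XOR the tapped bits
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & ((1 << width) - 1)

# Example: an 8-bit LFSR; these taps give a maximal-length (255-state) sequence.
for pattern in lfsr_patterns(seed=0xA5, taps=(7, 5, 4, 3), width=8, count=5):
    print(f"{pattern:08b}")
```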

Journal ArticleDOI
TL;DR: A built-in self-test (BIST) circuit to test setup and hold times of I/O registers or buffers for memory interfaces, which uses a delay-locked loop (DLL) to generate delayed clocks and evaluates its performance, together with test time and accuracy.
Abstract: A built-in self-test (BIST) circuit has been designed to test setup and hold times of I/O registers or buffers for memory interfaces. This method enables independent testing of setup and hold times without using an external tester, except to generate the reference clock. The circuit uses a delay-locked loop (DLL) to generate delayed clocks. It has been implemented with a 0.18-μm TSMC process (CM018). The accuracy in delay generation is within 40 ps, for delay measurements ranging from 300 to 700 ps. In order to achieve high accuracy, the BIST circuit requires frequency adjustment during test, combined with averaging over multiple test cycles. To do this in an efficient manner, the DLL in the BIST circuit has been designed for a wide lock range, from 150 to 400 MHz, and achieves lock in less than 0.05 μs. This paper describes the design in detail and evaluates its performance, together with test time and accuracy. It also shows how to use a low-resolution DLL to achieve high accuracy through frequency adjustment and averaging over multiple test cycles.

Journal ArticleDOI
TL;DR: A design-oriented testing approach is proposed to perform a simple and low-cost detection of variations in important design variables of cascaded SigmaDelta modulators to improve fault coverage and bring data for silicon debug.
Abstract: The test of Sigma-Delta modulators is cumbersome due to the high performance that they reach. Moreover, technology scaling trends raise serious doubts on the intradie repeatability of devices. An increase of variability will lead to an increase in parametric faults that are difficult to detect. In this paper, a design-oriented testing approach is proposed to perform a simple and low-cost detection of variations in important design variables of cascaded Sigma-Delta modulators. The digital tests could be integrated in a production test flow to improve fault coverage and bring data for silicon debug. A study is presented to tailor signature generation, with test-time minimization in mind, as a function of the desired measurement precision. The developments are supported by experimental results that validate the proposal.

Journal ArticleDOI
TL;DR: It is demonstrated that threshold testing can enhance yield and that it is practical in terms of test generation effort and test application costs, and opens the way for developing low-cost tools for threshold testing that will provide high threshold coverage for realistic faults and defects.
Abstract: Yields for digital very-large-scale-integration chips have been declining in the recent years, and the decline is accelerating as the technology moves deep into nanoscale. Recently, we have proposed the notion of error tolerance to improve yields for a wide range of high-performance digital applications, including audio, speech, video, graphics, visualization, games, and wireless communication. Error tolerance systematically codifies the fact that chips used in such applications can be acceptable despite having defects that produce erroneous outputs, provided that the errors are guaranteed to be of certain types and have severities within thresholds specified by the application. In this paper, we propose a new testing approach called threshold testing to practically exploit the notion of error tolerance for applications where errors with absolute numerical magnitudes lower than an application-specified threshold are acceptable. We propose a new automatic test pattern generator (ATPG) for threshold testing for single stuck-at faults. This test generator embodies several completely new techniques, including new approaches for directing the search for a test vector, new types of objectives, new types of necessary conditions, and new approaches to identify and exploit these conditions. We demonstrate that threshold testing can enhance yield and that it is practical in terms of test generation effort and test application costs. We also propose threshold fault simulators and ATPG for bridging and transition delay faults. We use these tools to show that the stuck-at-fault model is indeed a suitable model for threshold testing. This opens the way for developing low-cost tools for threshold testing that will provide high threshold coverage for realistic faults and defects and hence help provide higher yields in future nanoscale processes at low costs.
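The acceptability criterion at the heart of threshold testing can be stated very compactly; the sketch below is an illustrative model of that criterion only (the paper's threshold ATPG and fault simulators are far more involved).

```python
# Illustrative sketch: the acceptability check behind threshold testing --
# a faulty output is tolerated if its numerical error magnitude stays below
# an application-specified threshold.

def is_acceptable(golden: int, faulty: int, threshold: int) -> bool:
    """Accept the defective chip's response if |error| is below the threshold."""
    return abs(golden - faulty) < threshold

def chip_passes(golden_outputs, faulty_outputs, threshold) -> bool:
    """A chip is acceptable under error tolerance only if every output error is small enough."""
    return all(is_acceptable(g, f, threshold) for g, f in zip(golden_outputs, faulty_outputs))

# Example: an 8-bit datapath where errors smaller than 4 LSBs are tolerable.
golden = [120, 64, 200, 33]
faulty = [121, 64, 198, 33]   # small numerical deviations only
print(chip_passes(golden, faulty, threshold=4))   # -> True
```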

Journal ArticleDOI
TL;DR: The proposed technique performs test point (TP) insertion using Sandia Controllability/Observability Analysis Program (SCOAP) analysis to enhance the controllability of feedback nets and the observability of fault sites that are flagged unobservable.
Abstract: Due to the absence of a global clock and the presence of more state-holding elements that synchronize the control and data paths, conventional Automatic Test Pattern Generation (ATPG) algorithms fail when applied to asynchronous circuits, leading to poor fault coverage. This paper presents a design for test (DFT) technique for a popular asynchronous design paradigm called NULL Convention Logic (NCL), aimed at making NCL designs testable using existing DFT tools with reasonable gate overhead. The proposed technique performs test point (TP) insertion using Sandia Controllability/Observability Analysis Program (SCOAP) analysis to enhance the controllability of feedback nets and the observability of fault sites that are flagged unobservable. An Automatic DFT Insertion Flow (ADIF) algorithm and a custom ATPG NCL primitive gate library are developed. The developed DFT technique has been verified on several NCL benchmark circuits.

Proceedings ArticleDOI
20 Apr 2009
TL;DR: Compared to existing DfT solutions, the proposed technique offers many advantages: programmability, low area overhead, low test application time, and it does not require any modification of critical parts of the SRAM.
Abstract: Core-cell stability represents the ability of the core-cell to keep the stored data. With the rapid development of semiconductor memories, their test is becoming a major concern in VDSM technologies. It provides information about the SRAM design reliability, and its effectiveness is therefore mandatory for safety applications. Existing core-cell stability Design-for-Test (DfT) techniques consist of controlling the voltage levels of the bit lines to apply a weak write stress to the core-cell under test. If the core-cell is weak, the weak write stress induces a faulty swap of the core-cell. However, these solutions are costly in terms of area and test application time, and generally require modifications of critical parts of the SRAM (the core-cell array and/or the structure generating the internal auto-timing). In this paper, we present a new DfT technique for stability fault detection. It consists of modulating the word line activation in order to apply an adjustable weak write stress to the targeted core-cell for stability fault detection. Compared to existing DfT solutions, the proposed technique offers many advantages: programmability, low area overhead, and low test application time. Moreover, it does not require any modification of critical parts of the SRAM.

Journal ArticleDOI
TL;DR: A design methodology to reduce the time to market by taking core test data from the design environment and automatically generating DfT structures that can easily be integrated into the SoC.
Abstract: A system-on-a-chip (SoC) built with embedded intellectual property (IP) cores offers attractive benefits in design reuse, reconfigurability, and customizability. However, the integration of the design-for-testability (DfT) structures of the IP cores in these complex SoCs presents daunting challenges to designers and ultimately affects the time-to-market goals. In this paper, we introduce a design methodology to reduce the time to market by taking core test data from the design environment and automatically generating DfT structures that can easily be integrated into the SoC. A novel automated synthesis methodology to generate an SoC built-in self-test (BIST) to test the IP and custom logic cores is proposed. The proposed technique, i.e., the NonExclusive Xor Test of 2D linear feedback shift register (NEXT 2D LFSR), is modeled after the principle of configurable 2D LFSR design, which generates a deterministic sequence of test vectors for random-vector-resistant faults and then random test vectors for random-vector-detectable faults. The basis of this method is to explore the design solution space for an optimal 2D LFSR while embedding the test patterns, considering not only XOR gates, as the conventional LFSR exclusively does, but also a simple logic solution for minimal hardware optimization. Moreover, the proposed approach is capable of optimizing the 2D LFSRs with consideration of the don't-care bits in the incompletely specified test patterns.

Proceedings ArticleDOI
28 Apr 2009
TL;DR: A power- and noise-aware scan test method that reduces IR-drop in both the shift and capture modes of scan test, using power-aware DFT and ATPG based on a preliminary power/noise estimation for test.
Abstract: Power consumption and IR-drop during testing have become serious problems. Troubles such as tester failures due to excessive power consumption or IR-drop, and test escapes due to slowed clock cycles, can occur on the test floor. In this paper, we propose a power- and noise-aware scan test method. In the method, power-aware DFT and power-aware ATPG are executed based on a preliminary power/noise estimation for test. Experimental results illustrate the effect of reducing IR-drop in both the shift and capture modes of scan test.

Proceedings ArticleDOI
15 Apr 2009
TL;DR: This paper faces the increasingly common situation of SoCs adopting the IEEE 1149.1 and 1500 standards for the test of the internal cores, and explores the idea of storing the test program on the tester in a compressed form, and decompressing it on-the-fly during test application.
Abstract: Reducing the cost of test (in particular by reducing its duration and the cost of the required ATE) is a common goal which has largely been pursued in the past, mainly by introducing suitable on chip Design for Testability (DfT) circuitry. Today, the increasing popularity of sophisticated DfT architectures and the parallel emergence of new ATE families allow the identification of innovative solutions effectively facing that goal. In this paper we face the increasingly common situation of SoCs adopting the IEEE 1149.1 and 1500 standards for the test of the internal cores, and explore the idea of storing the test program on the tester in a compressed form, and decompressing it on-the-fly during test application.

Journal ArticleDOI
TL;DR: This paper proposes a low-cost design-for-testability (DFT) structure for a classical charge-pump phase-locked loop (CP-PLL) circuit to allow simple digital testing, ensuring that the characteristics of CP- PLL are unaltered and that a suitable on-chip design can be developed using the proposed CP-P LL DFT structure.
Abstract: This paper proposes a low-cost design-for-testability (DFT) structure for a classical charge-pump phase-locked loop (CP-PLL) circuit to allow simple digital testing. The proposed CP-PLL DFT structure uses the existing charge-pump circuit and voltage-controlled oscillator (VCO) as a stimulus generator and a measuring device, respectively. Thus, no extra test stimulus or measured instruments are required during testing. The primary advantage is that the analog blocks of the CP-PLL are unchanged and that the test output is purely digital, ensuring that the characteristics of CP-PLL are unaltered and that a suitable on-chip design can be developed using the proposed CP-PLL DFT structure. Fault simulation results indicate that the proposed CP-PLL DFT structure possesses high fault coverage (97.9%). In addition, the physical chip design is presented to show low area overhead (4.48%) and little degradation in performance.

Patent
18 May 2009
TL;DR: In this article, a design for testability (DFT) algorithm using both gradient descent and linear programming (LP) algorithms to insert test points (TPs) and/or scanned flip-flops (SFFs) into large circuits to make them testable is described.
Abstract: Design for testability (DFT) algorithms, which use both gradient descent and linear programming (LP) algorithms to insert test points (TPs) and/or scanned flip-flops (SFFs) into large circuits to make them testable, are described. Scanning of either all flip-flops or a subset of flip-flops is supported. The algorithms measure testability using probabilities computed from logic simulation, Shannon's entropy measure (from information theory), and spectral analysis of the circuit in the frequency domain. The DFT hardware insertion method uses the toggling rates of the flip-flops (analyzed using digital signal processing (DSP) methods) and Shannon entropy measures of flip-flops to select flip-flops for scan. The optimal insertion of the DFT hardware reduces the amount of DFT hardware, since the gradient descent and linear program optimizations trade off inserting a TP versus inserting an SFF. The linear programs find the optimal solution to the optimization, and the entropy measures are used to maximize information flow through the circuit-under-test (CUT). The methods limit the amount of additional circuit hardware for test points and scan flip-flops.
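As a rough illustration of the testability measures named in this patent abstract, the sketch below computes a Shannon entropy value from simulated signal probabilities and a toggling rate from a waveform. The exact formulation, and how the patent combines these measures with the gradient-descent/LP optimization, are not reproduced here.

```python
# Illustrative sketch: entropy and toggle-rate measures derived from logic
# simulation, of the kind used to rank flip-flops as scan/test-point candidates.
import math

def signal_entropy(ones: int, total_cycles: int) -> float:
    """Entropy H(p) = -p*log2(p) - (1-p)*log2(1-p) of a node's observed 0/1 activity."""
    p = ones / total_cycles
    if p in (0.0, 1.0):
        return 0.0          # a constant node carries no information
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def toggle_rate(waveform) -> float:
    """Fraction of cycles in which the node changes value."""
    toggles = sum(1 for a, b in zip(waveform, waveform[1:]) if a != b)
    return toggles / max(len(waveform) - 1, 1)

# Example: a skewed flip-flop (high 10% of the time) vs. a balanced one.
print(signal_entropy(ones=100, total_cycles=1000))   # ~0.47 bits
print(signal_entropy(ones=500, total_cycles=1000))   # 1.0 bit
print(toggle_rate([0, 1, 1, 0, 1, 0, 0, 1]))         # ~0.71
```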

Proceedings ArticleDOI
20 Apr 2009
TL;DR: This work proposes a generic framework for reducing scan capture power in a test compression environment, using the entropy of the test set to measure the impact of capture-power-aware X-filling on the potential test compression ratio, and shows its efficacy on benchmark circuits.
Abstract: Growing test data volume and overtesting caused by excessive scan capture power are two of the major concerns for the industry when testing large integrated circuits. Various test data compression (TDC) schemes and low-power X-filling techniques have been proposed to address the above problems. These methods, however, exploit the very same "don't-care" bits in the test cubes to achieve different objectives and hence may contradict each other. In this work, we propose a generic framework for reducing scan capture power in a test compression environment. Using the entropy of the test set to measure the impact of capture-power-aware X-filling on the potential test compression ratio, the proposed holistic solution is able to keep capture power under a safe limit with little compression ratio loss for any fixed-length symbol-based TDC method. Experimental results on benchmark circuits demonstrate the efficacy of the proposed approach.
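To make the entropy argument concrete, the sketch below computes the fixed-length-symbol entropy of a test cube after a naive X-filling; the 4-bit symbol length and the 0-fill heuristic are assumptions for illustration, not the paper's actual X-filling or compression method.

```python
# Illustrative sketch: zeroth-order symbol entropy of test data as a proxy for
# how well a fixed-length symbol-based compression scheme could compress it.
import math
from collections import Counter

def symbol_entropy(bits: str, symbol_len: int = 4) -> float:
    """Average bits of information per fixed-length symbol of the test data."""
    symbols = [bits[i:i + symbol_len]
               for i in range(0, len(bits) - symbol_len + 1, symbol_len)]
    counts = Counter(symbols)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def zero_fill(cube: str) -> str:
    """Naive X-filling heuristic used here for illustration: map every X to 0."""
    return cube.replace("X", "0")

# Example test cube with many don't-care ('X') bits.
cube = "1XX0XXXX0X1XXXX0XXXX10XX"
print(symbol_entropy(zero_fill(cube)))          # entropy after 0-fill
print(symbol_entropy(cube.replace("X", "1")))   # entropy after an alternative fill
```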

Proceedings ArticleDOI
30 Oct 2009
TL;DR: All five design issues are explored in the context of a high-resolution memory-on-logic Synthetic Aperture Radar (SAR) processor, chosen specifically as it requires a significant amount of memory bandwidth that is best met with the high I/O bandwidth afforded by a 3D process.
Abstract: When designing 3D ICs, there are five major issues that differ from 2D and must receive special attention: power delivery, thermal density, design for test, clock tree design, and floorplanning. Power delivery in 3D must receive special attention because 3D designs have larger supply currents flowing through the package power delivery pins, along with a longer power delivery path than in a comparable 2D system. Thermal density is an issue because 3D integrated chips have more heat density and less capacity to remove heat than a comparable 2D chip. 3D clock tree distribution is much more difficult than in 2D because the most commonly used methodologies and design tools are geared towards 2D designs, and process variation between the different tiers makes it harder to keep skew, jitter and power consumption down. Design for test is harder in 3D because 3D vias provide another point of failure and post-fabrication repairs such as Focused Ion Beam are more difficult to perform in 3D. Finally, floorplanning is drastically different in 3D than in 2D, and the four aforementioned issues must all be taken into account during 3D floorplanning. In this paper, all five design issues are explored in the context of a high-resolution memory-on-logic Synthetic Aperture Radar (SAR) processor. The SAR processor is chosen specifically because it requires a significant amount of memory bandwidth that is best met with the high I/O bandwidth afforded by a 3D process. The issues are examined in the context of two implementations for two different 3D integration processes. The first implementation was done in MIT Lincoln Laboratory's 3D FDSOI 1.5 V three-tier process and is currently in fabrication. The second design is currently in the design stage, and will be fabricated in two tiers of Chartered Semiconductor's 130 nm process 3D-integrated with two tiers of high-bandwidth DRAM using Tezzaron Semiconductor's vertical interconnection technology.

Proceedings ArticleDOI
18 Dec 2009
TL;DR: This work proposes a new approach that exploits the readily-available test-measurement data in chip-failure log files and uses analysis results from existing tests to reduce the cost of evaluating new test metrics, fault models, DFT techniques, etc.
Abstract: Test metrics and fault models continue to evolve to keep up with defect characteristics associated with ever-changing fabrication processes. Understanding the relative effectiveness of current and proposed metrics and models is therefore important for selecting the best mix of methods for achieving a desired level of quality at reasonable cost. Test-metric and fault model evaluation traditionally relies on large, time-consuming silicon-based test experiments. Specifically, tests generated for some specific metric/model are applied to real chips, and unique chip-fail detections are used as relative measures of effectiveness. To reduce the cost of evaluating new test metrics, fault models, DFT techniques, etc., this work proposes a new approach that exploits the readily-available test-measurement data in chip-failure log files. The new approach does not require the generation and application of new patterns but uses analysis results from existing tests. We demonstrate the method by comparing several metrics and models that include: (i) stuck-at, (ii) N-detect, (iii) PAN-detect (physically-aware N-detect), (iv) bridge fault models, and (v) the input pattern fault model (also more recently referred to as the gate-exhaustive metric).

Proceedings ArticleDOI
05 Jan 2009
TL;DR: Application of the WOR-BIST scheme to large ISCAS and ITC benchmark circuits shows that the scheme is superior in area, power and performance to the conventional multiple serial scan.
Abstract: A complete Built-In Self-Test (BIST) solution based on word oriented Random Access Scan architecture (WOR-BIST), is proposed. Our WOR-BIST scheme reduces the test power consumption significantly due to reduced switching activity during scan operations. We also provide a greedy algorithm to reduce the test data volume and test application time. We performed logic simulation of the test vectors to show its impact on the average and peak power during testing. We implemented the scheme to demonstrate its impact on the chip area and timing performance. Application of our scheme to large ISCAS and ITC benchmark circuits shows that our scheme is superior in area, power and performance to the conventional multiple serial scan.