
Showing papers in "IEEE Design & Test of Computers in 2006"


Journal ArticleDOI
TL;DR: This article summarizes and categorizes hardware-based test vector compression techniques for scan architectures, which fall broadly into three categories: code-based schemes use data compression codes to encode test cubes; linear-decompression-based schemes decompress the data using only linear operations; and broadcast-scan-based schemes rely on broadcasting the same values to multiple scan chains.
Abstract: Test data compression consists of test vector compression on the input side and response compaction on the output side. Test vector compression has been an active area of research, and this article summarizes and categorizes these techniques. The focus is on hardware-based test vector compression techniques for scan architectures. Test vector compression schemes fall broadly into three categories: code-based schemes use data compression codes to encode test cubes; linear-decompression-based schemes decompress the data using only linear operations (that is, LFSRs and XOR networks); and broadcast-scan-based schemes rely on broadcasting the same values to multiple scan chains.
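As a rough sketch of the linear-decompression category only (not the article's implementation), the following Python fragment expands one compressed seed through a hypothetical Fibonacci LFSR and a toy XOR spreading network into several scan chains; the polynomial taps, seed, and chain dimensions are illustrative assumptions.

# Sketch of linear (LFSR + XOR network) test-vector decompression.
# Polynomial taps, seed, and chain dimensions are illustrative assumptions.

def lfsr_stream(seed, taps, nbits, length):
    """Yield 'length' bits from a Fibonacci LFSR with the given tap positions."""
    state = seed
    for _ in range(length):
        yield state & 1
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1
        state = (state >> 1) | (feedback << (nbits - 1))

def decompress(seed, taps, nbits, num_chains, chain_len):
    """Expand one compressed seed into num_chains scan chains of chain_len bits,
    driving each chain bit from two LFSR bits through a toy XOR network."""
    bits = list(lfsr_stream(seed, taps, nbits, num_chains * chain_len))
    return [[bits[c * chain_len + i] ^ bits[(c * chain_len + i + 1) % len(bits)]
             for i in range(chain_len)]
            for c in range(num_chains)]

if __name__ == "__main__":
    chains = decompress(seed=0xACE1, taps=(0, 2, 3, 5), nbits=16,
                        num_chains=8, chain_len=32)
    print(chains[0])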

429 citations


Journal ArticleDOI
TL;DR: A taxonomy for ESL tools and methodologies is presented that combines UC Berkeley's platform-based design terminologies with Dan Gajski's Y-chart work to help stem the tide of confusion in the ESL world.
Abstract: This article presents a taxonomy for ESL tools and methodologies that combines UC Berkeley's platform-based design terminologies with Dan Gajski's Y-chart work. This is timely and necessary because in the ESL world we seem to be building tools without first establishing an appropriate design flow or methodology, thereby creating a lot of confusion. This taxonomy can help stem the tide of confusion.

173 citations


Journal ArticleDOI
TL;DR: The article examines various features of C and their mapping to hardware, and makes a cogent argument that vanilla C is not the right language for hardware description if synthesis is the goal.
Abstract: This article presents one side of an ongoing debate on the appropriateness of C-like languages as hardware description languages. It examines various features of C and their mapping to hardware, and makes a cogent argument that vanilla C is not the right language for hardware description if synthesis is the goal. For system modeling and simulation tasks, however, C-like languages are far more compelling, and one in particular, SystemC, is now widely used, as are many ad hoc variants.

122 citations


Journal ArticleDOI
TL;DR: This article presents a broad vision of a new cohesive architecture, ElastIC, which can provide a pathway to successful design in unpredictable silicon and incorporates several novel concepts in these areas.
Abstract: ElastIC must deal with extremes: a multiple-core processor subjected to huge process variations, transistor degradation at varying rates, and device failures. In this article, we present a broad vision of a new cohesive architecture, ElastIC, which can provide a pathway to successful design in unpredictable silicon. ElastIC is based on aggressive run-time self-diagnosis, adaptivity, and self-healing. It incorporates several novel concepts in these areas and brings together research efforts from the device, circuit, testing, and microarchitecture domains. Architectures like ElastIC will become vital in extremely scaled CMOS technologies (such as 22 nm); ideally, they will target applications such as multimedia, Web services, and transaction processing.

117 citations


Journal ArticleDOI
TL;DR: A set of on-chip testing techniques and their application to integrated wireless RF transceivers are described to reduce final product cost and accelerate time to market by providing means of testing the entire transceiver system as well as its major building blocks without using off-chip analog or RF instrumentation.
Abstract: This article describes a set of on-chip testing techniques and their application to integrated wireless RF transceivers. The objective is to reduce final product cost and accelerate time to market by providing means of testing the entire transceiver system, as well as its major building blocks, without using off-chip analog or RF instrumentation. On-chip test devices, fabricated in a standard CMOS process and experimentally evaluated, support the proposed test strategy.

68 citations


Journal ArticleDOI
TL;DR: A new debugging model enables automated source-level debugging of large VHDL designs at the granularity of statements and expressions.
Abstract: Recent achievements in formal verification techniques allow for fault detection even in large real-world designs. Tool support for localizing the faulty statements is critical, because it reduces development time and overall project costs. Automated source-level debugging and a new and novel debugging model allow for source-level debugging of large VHDL designs at the granularity of statements and expressions. This technique is fully automated and does not require that an engineer be familiar with formal verification techniques.

58 citations


Journal ArticleDOI
TL;DR: The proposed methodology can eliminate an expensive mechanical test for a commercially available accelerometer with little error, and specification guard banding makes it possible to completely eliminate the error for failing parts.
Abstract: In this work, we use binary decision trees (BDTs) for statistical test compaction because they have the following properties. First, decision trees require no assumption about the type of correlation (if any) that exists between T_red and T_kept. This makes it possible to derive a more accurate representation of F_i(T_kept) from the collected test data. Also, deriving a decision tree model for F_i(T_kept) simply involves partitioning the T_kept hyperspace into hypercubes, which is a polynomial-time process of complexity O(n² × k³), where n is the number of tests in T_kept and k is the number of parts in the collected data. Therefore, the computation time required for creating a decision tree can be considerably less than the time required for training a neural network. Our proposed methodology can eliminate an expensive mechanical test for a commercially available accelerometer with little error. Moreover, it's possible to completely eliminate the error (for failing parts) using specification guard banding. But the same result could not be achieved for the equivalent mechanical test executed at an elevated temperature. Techniques such as specification guard banding and drift removal can reduce error, but more research is needed. More importantly, techniques are needed for incorporating this and similar methodologies into a production test flow.
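As a hedged sketch of the idea (not the authors' actual flow or data), the following trains a binary decision tree to predict a hypothetical eliminated test's pass/fail outcome from retained measurements T_kept; the synthetic data, spec limit, and scikit-learn usage are assumptions.

# Sketch: predict an eliminated test's pass/fail outcome from retained tests
# using a binary decision tree. Data, spec limit, and model are illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_parts = 2000

# T_kept: retained parametric measurements for each part (e.g., gain, offset).
T_kept = rng.normal(0.0, 1.0, size=(n_parts, 4))

# Hypothetical "expensive" test T_red, correlated with the kept measurements.
t_red = 0.7 * T_kept[:, 0] - 0.4 * T_kept[:, 2] + rng.normal(0.0, 0.2, n_parts)
fails = (np.abs(t_red) > 1.5).astype(int)          # spec limit on T_red (assumed)

X_tr, X_te, y_tr, y_te = train_test_split(T_kept, fails, test_size=0.3,
                                          random_state=0)
tree = DecisionTreeClassifier(max_depth=5).fit(X_tr, y_tr)
print("predicted-vs-actual accuracy:", tree.score(X_te, y_te))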

54 citations


Journal ArticleDOI
TL;DR: High-voltage stress testing (HVST) is common in IC manufacturing, but publications comparing it with other test and burn-in methods are scarce, and this article shows that the use of HVST can dramatically reduce the amount of required burn-in.
Abstract: To guarantee an industry standard of reliability in ICs, manufacturers incorporate special testing techniques into the circuit manufacturing process. For most electronic devices, the specific reliability required is quite high, often producing a lifespan of several years. Testing such devices for reliability under normal operating conditions would require a very long period of time to gather the data necessary for modeling the device's failure characteristics. Under this scenario, a device might become obsolete by the time the manufacturer could guarantee its reliability. High-voltage stress testing (HVST) is common in IC manufacturing, but publications comparing it with other test and burn-in methods are scarce. This article shows that the use of HVST can dramatically reduce the amount of required burn-in.
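The article's own acceleration data isn't reproduced here, but a minimal sketch of the reasoning, assuming a simple exponential voltage-acceleration model and ignoring thermal acceleration entirely, shows why a short high-voltage stress can substitute for hours of burn-in; gamma, voltages, and times are illustrative assumptions.

# Sketch: exponential voltage-acceleration model (a common choice for oxide
# stress); gamma, voltages, and the burn-in target are illustrative assumptions.
import math

def voltage_acceleration_factor(v_stress, v_use, gamma_per_volt):
    return math.exp(gamma_per_volt * (v_stress - v_use))

v_use, v_stress, gamma = 1.2, 1.8, 10.0           # V, V, 1/V (assumed)
af = voltage_acceleration_factor(v_stress, v_use, gamma)

burn_in_hours = 48.0                               # hypothetical burn-in target
hvst_seconds = burn_in_hours * 3600.0 / af         # thermal effects ignored
print(f"AF ≈ {af:.0f}; {burn_in_hours} h of burn-in ≈ {hvst_seconds:.0f} s of HVST")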

46 citations


Journal ArticleDOI
TL;DR: This methodology advocates testing dies for process variation by monitoring parameter variations across a die and analyzing the data that the monitoring devices provide, and uses ring oscillators (ROs) to map parameter variations into the frequency domain.
Abstract: Ring oscillators are not new, but the authors of this article use them in a novel, unconventional way to monitor process variation at different regions of a die in the frequency domain. Measuring the variation of each design or fabrication parameter is infeasible from a circuit designer's perspective. Therefore, we propose a methodology that approaches PV from a test perspective. This methodology advocates testing dies for process variation by monitoring parameter variations across a die and analyzing the data that the monitoring devices provide. We use ring oscillators (ROs) to map parameter variations into the frequency domain. Our use of ROs is far more rigorous than in standard practices. To keep complexity and overhead low, we neither employ analog channels nor use zero-crossing counters. Instead, we use a frequency-domain analysis because it allows compacting RO signals using digital adders (thereby also reducing the number of wires) and decoupling frequencies to identify high PVs and problematic regions. Our PV test methodology includes defining the PV fault model; deciding on the types, numbers, and positions of a small distributed network of frequency-sensitive sensors (ROs); and designing an efficient, fully digital communication channel with sufficient bandwidth to transfer sensor information to an analysis point. With this methodology, users can trade off cost and accuracy by choosing the number or frequency of sensors and the regions on the die to monitor.
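A hedged sketch of the compaction idea: sum several ring-oscillator square waves as a digital adder would, then recover each RO's fundamental from the spectrum of the single summed signal; the sampling rate, window, and RO frequencies below are assumptions, not the authors' values.

# Sketch: compact several ring-oscillator outputs with a digital adder and
# recover each RO's fundamental from the spectrum. All numbers are illustrative.
import numpy as np

fs = 1e9                                    # observation rate, samples/s (assumed)
t = np.arange(0, 20e-6, 1 / fs)             # 20 us window
ro_freqs = [11e6, 13e6, 17e6]               # nominal RO frequencies per region (assumed)

# Square-wave RO outputs; frequency shifts would flag process variation.
ros = [0.5 * (np.sign(np.sin(2 * np.pi * f * t)) + 1) for f in ro_freqs]
summed = np.sum(ros, axis=0)                # digital adder output (one bus)

spectrum = np.abs(np.fft.rfft(summed))
freqs = np.fft.rfftfreq(len(summed), 1 / fs)
peaks = freqs[np.argsort(spectrum[1:])[-3:] + 1]    # top 3 tones, skipping DC
print("recovered RO fundamentals (MHz):", np.round(np.sort(peaks) / 1e6, 2))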

43 citations


Journal ArticleDOI
TL;DR: This article compares and contrasts the acceleration effects of various extrinsic defects found in 130- and 90-nm CMOS technology products.
Abstract: A difficulty in reliability modeling is how to capture the impact of all of the various reliability defect types. The general approach to optimizing burn-in that we describe in this article addresses a multiple-defect environment. The approach has four main parts: (i) modeling the product's failure rate distribution, (ii) establishing the Pareto distribution of reliability defects, (iii) assessing the kinetic information of each reliability defect, and (iv) estimating the DPPM under product use conditions. This article compares and contrasts the acceleration effects of various extrinsic defects found in 130- and 90-nm CMOS technology products.
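The kinetic data behind the article isn't reproduced here; as a hedged sketch of step (iii), the following computes Arrhenius acceleration factors for two hypothetical defect types with different activation energies, the kind of per-defect kinetic input a multi-defect burn-in optimization needs.

# Sketch: Arrhenius acceleration factors for two hypothetical defect types,
# one ingredient of a multi-defect burn-in optimization. Values are assumed.
import math

K_BOLTZMANN = 8.617e-5          # eV/K

def arrhenius_af(ea_ev, t_use_c, t_stress_c):
    """Acceleration factor of a stress temperature over a use temperature."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp(ea_ev / K_BOLTZMANN * (1 / t_use - 1 / t_stress))

# Hypothetical Pareto of reliability defects with different activation energies.
defects = {"metal void": 0.9, "gate-oxide weak spot": 0.3}
for name, ea in defects.items():
    af = arrhenius_af(ea, t_use_c=55, t_stress_c=125)
    print(f"{name}: Ea = {ea} eV, burn-in AF ≈ {af:.1f}")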

34 citations


Journal ArticleDOI
Shekhar Borkar1
TL;DR: Every discipline needs to cooperate and make the VLSI system reliable in the presence of variability and the resulting inherent unreliability of components.
Abstract: Variability and reliability will be the barriers to future technology scaling. Every discipline, from fabrication to software, needs to cooperate and make the VLSI system reliable in the presence of variability and the resulting inherent unreliability of components.

Journal ArticleDOI
Eric S. Fetzer1
TL;DR: This case study discusses how to use adaptive circuits in a large dual-core microprocessor to combat process variation and to avoid the need for continuous design updates or multiple design optimizations.
Abstract: This case study discusses how to use adaptive circuits in a large dual-core microprocessor to combat process variation. The large die size also makes the design more susceptible to on-die process variation. To avoid continuous design updates or multiple design optimizations, designs incorporate adaptive techniques that achieve the highest performance possible. Although adaptive techniques are not new, having been implemented to some degree for generations (for example, self-calibrating I/O), they have taken on significant new roles in many design aspects. As adaptive designs proliferate, increasing amounts of effort go into testing them. This article presents two types of adaptive systems: the silicon-optimizing active deskew system and the silicon-monitoring power measurement and cache latent-error detection system. However, these adaptive circuits are the tip of a growing iceberg. As variability increasingly affects designs, designers will likely use more adaptive circuits to achieve the highest performance and reliability possible. New scaling issues, such as erratic bits, will make these adaptations even more necessary to the design's fundamental operation. With the increasing use of adaptive circuits, designers will need to develop new test techniques to ensure high part quality and reliability.

Journal ArticleDOI
TL;DR: Test strategies for known good die and known good substrate in the SiP are provided and case studies prove feasibility using the IEEE 1500 test structure.
Abstract: System-in-package integrates multiple dies in a common package. Therefore, testing SiP technology differs from testing system-on-chip technology, which integrates multiple vendor parts on a single die. This article provides test strategies for known good die and known good substrate in the SiP. Case studies prove feasibility using the IEEE 1500 test structure.

Journal ArticleDOI
TL;DR: An XML-based standard is developed for describing electronic intellectual property - that is, blocks of electronic logic suitable for inclusion in complex integrated circuits, commonly known as systems on chips (SoCs).
Abstract: The paper aims to develop an XML-based standard for describing electronic intellectual property - that is, blocks of electronic logic suitable for inclusion in complex integrated circuits, commonly known as systems on chips (SoCs). This work, which is based on the Spirit Consortium's IP-XACT specification, has been transferred to the IEEE for standardization. The IP-XACT specification provides a metadata schema for describing IP, enabling it to be compatible with automated integration techniques, and an API for tool access to this schema. Tools that implement the standard would be able to automatically interpret, configure, integrate, and manipulate IP blocks delivered with metadata that conforms to the proposed IP metadata description, and the IP-XACT APIs provide a standard method for linking multiple tools through a single exchange-metadata format. This automatic integration of tools and IP from multiple vendors creates an IP-XACT-enabled environment.

Journal ArticleDOI
TL;DR: The impact of within-die thermal gradients on clock skew is analyzed, considering temperature's effect on active devices and the interconnect system, and a dual-VDD clocking strategy is proposed that reduces temperature-related clock skew effects during test.
Abstract: In this article, we analyze the impact of within-die thermal gradients on clock skew, considering temperature's effect on active devices and the interconnect system. This effect, along with the fact that the test-induced thermal map can differ from the normal-mode thermal map, motivates the need for a careful consideration of the impact of temperature gradients on delay during test. After our analysis, we propose a dual-VDD clocking strategy that reduces temperature-related clock skew effects during test. Clock network design is a critical task in developing high-performance circuits because circuit performance and functionality depend directly on this subsystem's performance. When distributing the clock signal over the chip, clock edges might reach various circuit registers at different times. The difference in clock arrival time between the first and last registers receiving the signal is called clock skew. With tens of millions of transistors integrated on the chip, distributing the clock signal with near-zero skew introduces important constraints in the clock distribution network's physical implementation and affects overall circuit power and area.
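A minimal first-order sketch (not the article's model) of how a test-induced thermal gradient translates into skew, assuming buffer-chain delay grows roughly linearly with temperature; the temperature coefficient and branch temperatures are assumptions.

# Sketch: first-order effect of a within-die thermal gradient on clock skew.
# The linear temperature coefficient and all numbers are assumptions.

def branch_delay(nominal_ps, temp_c, ref_temp_c=25.0, tc_per_c=0.0015):
    """Buffer-chain delay with a simple linear temperature dependence."""
    return nominal_ps * (1.0 + tc_per_c * (temp_c - ref_temp_c))

nominal = 500.0                      # ps, matched clock branches at 25 C

# Normal mode: both branches sit near 85 C, so the gradient is small.
skew_normal = branch_delay(nominal, 86.0) - branch_delay(nominal, 84.0)

# Test mode: the test-induced thermal map puts the branches at 60 C and 110 C.
skew_test = branch_delay(nominal, 110.0) - branch_delay(nominal, 60.0)

print(f"skew in normal mode ≈ {skew_normal:.1f} ps")
print(f"skew under test-induced gradient ≈ {skew_test:.1f} ps")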

Journal ArticleDOI
TL;DR: A sensor-based BIT scheme involves designing sensors for each module directly into the device under test (DUT) and capturing sensor outputs that are low-frequency DC signals, mitigating issues related to signal integrity and diversity in the test response capture process.
Abstract: In this article, we propose a sensor-based BIT scheme. By using sensors, we mitigate any issues related to signal integrity and diversity in the test response capture process. Also, BIT can provide a test framework to estimate specifications during production testing for various modules in a heterogeneous SoC or SiP. This scheme involves designing sensors for each module directly into the device under test (DUT) and capturing sensor outputs that are low-frequency DC signals. A low-frequency mixed-signal tester can capture these sensor responses, analyze them to infer each specific module's performance, and determine the overall pass-fail decision for the DUT. The embedded sensors perform the necessary signal conditioning of the DUT output signals, thereby significantly reducing the ATE's response capture and analysis overhead. As an example, it's possible to test a digital module for rise time by incorporating an integrator at the output node as a sensor. As the output node voltage increases, the integrator's output capacitance charges to a DC value. The ATE samples the capacitor's DC voltage at a specific time, and the DC voltage would be proportional to the DUT's rise time. In this case, there would be no need to sample the rising waveform, and the ATE's digitizer requirements could be significantly relaxed. This example indicates that during production testing, carefully chosen sensors can effectively simplify the overall test procedure.
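The integrator example lends itself to a small numeric sketch, done here under assumed waveform and sample-time values: integrating a ramp-then-hold output up to a fixed sample time yields a DC value that tracks the rise time monotonically, which is all the ATE needs to sample.

# Numeric sketch of the integrator-as-sensor example; waveform shape, sample
# time, and scaling are hypothetical.
import numpy as np

def sampled_integrator_output(rise_time_ns, sample_time_ns=10.0, vdd=1.0,
                              step_ns=0.001):
    """Integrate a 0-to-VDD ramp-then-hold DUT output up to a fixed sample time."""
    t = np.arange(0.0, sample_time_ns, step_ns)
    v_out = np.clip(vdd * t / rise_time_ns, 0.0, vdd)   # DUT output node
    return float(np.sum(v_out) * step_ns)               # integrator DC value

for tr in (0.5, 1.0, 2.0, 4.0):                          # rise times in ns
    print(f"rise time {tr} ns -> sampled sensor value "
          f"{sampled_integrator_output(tr):.2f}")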

Journal ArticleDOI
P. Rickert1, W. Krenik1
TL;DR: This article presents trade-offs among system-in-package, system-on-chip, and package-on-package integration for mobile phone applications.
Abstract: Engineers must make many cost-effective decisions during a product's design cycle. One challenge is deciding on the best packaging for their products. This article presents trade-offs among system-in-package, system-on-chip, and package-on-package integration for mobile phone applications

Journal ArticleDOI
TL;DR: Gezel consists of a simple but extendable hardware description language (HDL) and an extensible simulation-and-refinement kernel that can be used to create a system by designing, integrating, and programming a set of programmable components.
Abstract: In this article, we present Gezel, a component-based, electronic system-level (ESL) design environment for heterogeneous designs. Gezel consists of a simple but extendable hardware description language (HDL) and an extensible simulation-and-refinement kernel. Our approach is to create a system by designing, integrating, and programming a set of programmable components. These components can be processor models or hardware simulation kernels. Using Gezel, designers can clearly distinguish between component design, platform integration, and platform programming, thus separating the roles of component builder, platform builder, and platform user. Embedded applications have driven the development of this ESL design environment. To demonstrate the broad scope of our component-based approach, we discuss three applications that use our environment; all are from the field of embedded security.

Journal ArticleDOI
TL;DR: A design environment that provides an interface for user-written SystemC modules that model application software to make calls to a real-time operating system (RTOS) kernel and cosimulate with user-written SystemC hardware modules to facilitate successive refinement through three abstraction layers for hardware-software codesign suitable for embedded-system design.
Abstract: This article presents a design environment that provides an interface for user-written SystemC modules that model application software to make calls to a real-time operating system (RTOS) kernel and cosimulate with user-written SystemC hardware modules. The environment also facilitates successive refinement through three abstraction layers for hardware-software codesign suitable for embedded-system design.

Journal ArticleDOI
TL;DR: Test considerations for scaled CMOS circuits in the nanometer regime are explored and possible solutions to many of these challenges are described, including statistical timing and delay test, IDDQ test under exponentially increasing leakage, and power or thermal management architectures.
Abstract: The exponential increase in leakage, the device parameter variations, and the aggressive power management techniques will severely impact IC testing methods. Test technology faces new challenges as faults with increasingly complex behavior become predominant. Design approaches aimed at fixing some of the undesirable effects of nanometric technologies could jeopardize current test approaches. In this article, we explore test considerations for scaled CMOS circuits in the nanometer regime and describe possible solutions to many of these challenges, including statistical timing and delay test, IDDQ test under exponentially increasing leakage, and power or thermal management architectures.

Journal ArticleDOI
TL;DR: This article focuses on the analysis of mismatch in MOS transistors resulting from random fluctuations of the dopant concentration, first studied by Keyes, and recognizes these fluctuations as the main cause of mismatch in bulk CMOS transistors.
Abstract: Digital and analog ICs generally rely on the concept of matched behavior between identically designed devices. Time-independent variations between identically designed transistors, called mismatch, affect the performance of most analog and even digital MOS circuits. This article focuses on the analysis of mismatch in MOS transistors resulting from random fluctuations of the dopant concentration, first studied by Keyes. Today, we recognize these fluctuations as the main cause of mismatch in bulk CMOS transistors.
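The article's derivation isn't reproduced here; as a hedged companion, the widely used Pelgrom-style area scaling of threshold-voltage mismatch, sigma(dVT) = A_VT / sqrt(W*L), is sketched below with an assumed matching coefficient.

# Sketch: Pelgrom-style area scaling of threshold-voltage mismatch, a common
# way to model dopant-fluctuation-induced mismatch. The A_VT value is assumed.
import math

def sigma_delta_vt_mv(width_um, length_um, a_vt_mv_um=3.5):
    """Std deviation of VT mismatch between two identically drawn transistors."""
    return a_vt_mv_um / math.sqrt(width_um * length_um)

for w, l in [(0.2, 0.1), (1.0, 0.5), (4.0, 2.0)]:
    print(f"W/L = {w}/{l} um: sigma(dVT) ≈ {sigma_delta_vt_mv(w, l):.1f} mV")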

Journal ArticleDOI
TL;DR: This approach presents a system that overcomes the obstacle of silicon area overhead by using available wafer sort test results to measure critical-area yield model parameters with no additional silicon area.
Abstract: Defect density and size distributions (DDSDs) are important parameters for characterizing spot defects in a process. This article addresses random spot defects, which affect all processes and currently require a heavy silicon investment to characterize, and proposes a new approach for characterizing them. The approach overcomes the obstacle of silicon area overhead by using available wafer sort test results to measure critical-area yield model parameters with no additional silicon area. Results from chips fabricated in silicon confirm the simulation results, showing that DDSD measurement can characterize a process using only slow, structural test results from ordinary digital product circuits.

Journal ArticleDOI
Dong Gun Kam1, Joungho Kim1, Jiheon Yu2, Ho Choi2, Kicheol Bae2, Choonheung Lee2 
TL;DR: This article addresses problems with wire bonding in high-frequency SiP packages and proposes design methodologies to reduce these discontinuities.
Abstract: System-in-package provides highly integrated packaging with high-speed performance. Many SiP packages contain low-cost 3D stacked chips interconnected by fine wire bonds. In a high-frequency spectrum, these wire bonds can cause discontinuities that degrade signals. This article addresses problems with wire bonding in high-frequency SiP packages and proposes design methodologies to reduce these discontinuities.

Journal ArticleDOI
TL;DR: A novel DFT technique is presented to test sets of ADCs and DACs embedded in a complex SiP to provide fully digital testing on the converters to significantly reduce the cost of testing.
Abstract: Testing mixed-signal circuits remains one of the most difficult challenges within the semiconductor industry. In this article, the authors present a novel DFT technique to test sets of ADCs and DACs embedded in a complex SiP. The technique provides fully digital testing of the converters to significantly reduce the cost of testing.

Journal ArticleDOI
TL;DR: This article presents a modeling methodology and supporting data, demonstrating that yield and reliability defects can be directly linked in a unified model.
Abstract: A key productivity metric in semiconductor manufacturing is wafer test yield - the fraction of dies deemed functional following wafer probe testing. Wafer test yield is directly related to semiconductor manufacturing profitability: The higher the yield, the lower the cost of producing a functional chip, and therefore the greater the potential profit. Because wafer test yield is such a critical variable in a product's profit potential, accurate yield projection models are essential to semiconductor manufacturers' economic success. It is important to understand the correlation between defects causing yield loss and defects causing reliability failures. This article presents a modeling methodology and supporting data demonstrating that yield and reliability defects can be directly linked in a unified model.
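The article's unified model itself isn't reproduced; purely to illustrate the kind of linkage it argues for, the sketch below pairs a standard negative-binomial yield model with an assumed fixed ratio of latent (reliability) defects to killer defects, and every parameter is an assumption.

# Sketch of linking wafer-test yield to reliability fallout: a negative-binomial
# yield model plus an assumed latent-to-killer defect ratio. Not the article's
# actual model; all parameters are illustrative.
import math

def nb_yield(defect_density_per_cm2, area_cm2, alpha=2.0):
    """Negative-binomial (clustered) yield model."""
    return (1.0 + defect_density_per_cm2 * area_cm2 / alpha) ** (-alpha)

def reliability_fallout_ppm(defect_density_per_cm2, area_cm2,
                            latent_to_killer_ratio=0.05):
    """Assume latent defects occur at a fixed fraction of killer defects;
    fallout is the chance a shipped die carries at least one latent defect."""
    lam = latent_to_killer_ratio * defect_density_per_cm2 * area_cm2
    return (1.0 - math.exp(-lam)) * 1e6

for d0 in (0.1, 0.3, 0.6):                      # defects/cm^2 (assumed)
    y = nb_yield(d0, area_cm2=1.0)
    ppm = reliability_fallout_ppm(d0, area_cm2=1.0)
    print(f"D0 = {d0}: yield ≈ {y:.2%}, reliability fallout ≈ {ppm:.0f} ppm")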

Journal ArticleDOI
TL;DR: This article presents a modular approach for testing multigigahertz, multilane digital devices with source-synchronous I/O buses, suitable for integration with existing ATE and can provide more than 100 independent differential-pair signals.
Abstract: This article presents a modular approach for testing multigigahertz, multilane digital devices with source-synchronous I/O buses. This approach is suitable for integration with existing ATE and can provide more than 100 independent differential-pair signals. We describe a specific application with 32 lanes of PCI Express, running at 2.5 gigabits per second (Gbps) per lane, and 32 data channels of HyperTransport, at 1.6 Gbps per channel. The differential source-synchronous nature of these buses presents difficulties for traditional (single-ended, synchronous) ATE. We solve these problems by using true-differential driver and receiver test modules tailored for the specific I/O protocols. We satisfy a further requirement for jitter tolerance testing by incorporating a novel digitally synthesized jitter injection technique in the driver modules. The modular nature of our approach permits customization of the test system hardware and optimization for specific DUT test requirements.
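As a hedged illustration of what jitter-tolerance stimulus looks like (not the authors' hardware technique), the sketch below applies digitally synthesized sinusoidal jitter to the ideal edge times of a 2.5 Gbps lane; the jitter frequency and amplitude are assumptions.

# Sketch: digitally synthesized sinusoidal jitter applied to ideal edge times
# of a 2.5 Gbps lane; jitter frequency and amplitude are assumptions.
import numpy as np

bit_rate = 2.5e9                       # PCI Express lane rate
ui = 1.0 / bit_rate                    # unit interval, 400 ps

ideal_edges = np.arange(1000) * ui
jitter_freq = 5e6                      # Hz, modulation frequency (assumed)
jitter_amp_ui = 0.2                    # peak deviation of 0.2 UI (assumed)
jitter = jitter_amp_ui * ui * np.sin(2 * np.pi * jitter_freq * ideal_edges)

tie_ps = jitter * 1e12                 # time-interval error of each edge, ps
print(f"peak-to-peak injected jitter ≈ {np.ptp(tie_ps):.0f} ps "
      f"({np.ptp(tie_ps) / (ui * 1e12):.2f} UI)")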

Journal ArticleDOI
TL;DR: This article proposes a "statistical testing" framework that combines testing, analysis, and optimization to identify latent-defect signatures and discusses the required characteristics of statistical testing to isolate the embedded-outlier population.
Abstract: The expanded role of test demands a significant change in mind-set of nearly every engineer involved in the screening of semiconductor products. The issues to consider range from DFT and ATE requirements, to the design and optimization of test patterns, to the physical and statistical relationships of different tests, and finally, to the economics of reducing test time and cost. The identification of outliers to isolate latent defects will likely increase the role of statistical testing in present and future technologies. An emerging opportunity is to use statistical analysis of parametric measurements at multiple test corners to improve the effectiveness and efficiency of testing and reliability defect stressing. In this article, we propose a "statistical testing" framework that combines testing, analysis, and optimization to identify latent-defect signatures. We discuss the required characteristics of statistical testing to isolate the embedded-outlier population; test conditions and test application support for the statistical-testing framework; and the data modeling for identifying the outliers.
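One simple stand-in for the outlier-identification step (not the authors' specific model) is a robust z-score on the residual of a corner-to-corner parametric fit; the synthetic data, fit, and threshold below are assumptions.

# Sketch: flag embedded parametric outliers using a robust z-score on the
# residual between two test corners. Data and thresholds are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
idd_25c = rng.normal(10.0, 1.0, n)                        # uA at nominal corner
idd_hot = 1.8 * idd_25c + rng.normal(0.0, 0.3, n)         # correlated hot corner

# Embed a few latent-defect parts that break the corner-to-corner correlation.
defect_idx = rng.choice(n, 8, replace=False)
idd_hot[defect_idx] += rng.uniform(3.0, 6.0, 8)

residual = idd_hot - np.poly1d(np.polyfit(idd_25c, idd_hot, 1))(idd_25c)
mad = np.median(np.abs(residual - np.median(residual)))
robust_z = 0.6745 * (residual - np.median(residual)) / mad

flagged = np.where(np.abs(robust_z) > 6.0)[0]
print("flagged parts:", sorted(flagged))
print("embedded defects:", sorted(defect_idx))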

Journal ArticleDOI
TL;DR: This article analyzes the possibility of extending traditional methods of logic device design-to-reticle flow into the OPC stage and introduces new post-tapeout RET methods for improving printability.
Abstract: The steps that create physical shape data in a typical logic device design-to-reticle flow are cell layout, place and route, tapeout, OPC or RET, data fracture, and reticle build. Here, we define OPC as the transformation of reticle data to compensate for lithographic and process distortions so that the final wafer pattern is as close to the target pattern-the designed layout-as possible. We define RETs as the general class of transformations for reticle data that aim to improve the patterning process window; therefore, OPC is a subset of RET. DFM is traditionally considered to be implemented at the cell layout or routing stages of this flow. Examples include the optimization of a layout based on critical-defect area, the addition of redundant contacts and vias, wire spreading, upsizing of metal landing pads, and the addition of dummy metal tiles to improve the planarity after chemical-mechanical planarization (CMP). We presented a detailed analysis of these techniques in an earlier work. In contrast, this article analyzes the possibility of extending these traditional methods into the OPC stage and introduces new post-tapeout RET methods for improving printability.

Journal ArticleDOI
TL;DR: A fault-injection environment to study the effects of soft errors in CAN networks is devised and an FPGA board is used to emulate the network backbone module, enabling cycle-accurate simulations of the entire network's behavior with very low speed penalties.
Abstract: Many safety-critical applications today rely on computer-based systems in which several computing nodes communicate through a network backbone. As the complexity of the systems under analysis grows, designers must devise fault-injection models that strike a balance between two conflicting requirements: On the one hand, models should be as close as possible to a system's physical implementation to reflect precisely the effects of real faults. On the other hand, abstract, easily manageable models minimize the time required for the fault-injection experiments, letting designers analyze sets of faults wide enough to provide statistically meaningful information. In addressing this issue, we have devised a fault-injection environment to study the effects of soft errors in CAN networks. Our cosimulation environment consists of two modules. The first, a traffic generator module implemented in software, emulates the applications running in each node of the network. The second, a network backbone module implemented in hardware, simulates the activities involved in information exchange between network nodes, in compliance with the CAN protocol specification. To allow evaluation of complex workloads as well as large fault lists, we use an FPGA board to emulate the network backbone module. This enables cycle-accurate simulations of the entire network's behavior with very low speed penalties.
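A hedged, software-only sketch of the campaign structure, with a deliberately simplified frame model (a parity bit standing in for the real CAN 15-bit CRC): generate traffic, flip one or two bits of the serialized frame, and classify the outcome; everything here is illustrative rather than the article's environment.

# Sketch of a fault-injection campaign over a toy serial-frame model; the
# parity check stands in for the real CAN CRC, and all details are simplified.
import random

def make_frame(payload_bits):
    return payload_bits + [sum(payload_bits) % 2]       # payload + parity bit

def receiver_detects_error(frame):
    payload, parity = frame[:-1], frame[-1]
    return (sum(payload) % 2) != parity

random.seed(0)
outcomes = {"detected": 0, "silent corruption": 0}
for _ in range(1000):                                    # fault list size
    payload = [random.randint(0, 1) for _ in range(64)]  # traffic generator
    golden = make_frame(payload)
    faulty = list(golden)
    for pos in random.sample(range(len(golden)), random.choice((1, 2))):
        faulty[pos] ^= 1                                 # injected soft error(s)
    if receiver_detects_error(faulty):
        outcomes["detected"] += 1
    else:
        outcomes["silent corruption"] += 1
print(outcomes)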

Journal ArticleDOI
TL;DR: A seven-die SiP design is demonstrated that implements a chip-and-package codesign platform using available EDA tools to easily combine the two entities.
Abstract: Design engineers are challenged with two separate entities: the chip and package designs. Because system-in-package integrates multiple dies into a package, design engineers should have a tool to easily combine the two entities. This article demonstrates a seven-die SiP design that implements a chip-and-package codesign platform using available EDA tools.