
Showing papers on "Design for testing published in 2011"


Journal ArticleDOI
TL;DR: This paper proposes a scheme of layout-aware as well as coverage-driven ILS design, where the partitioning of the flip-flops into ILS segments is determined by their geometric locations, whereas the set of flip-flops to be placed in parallel is determined by the minimum incompatibility relations among the corresponding bits of a test set.
Abstract: The Illinois Scan Architecture (ILS) consists of several scan path segments and is useful in reducing test application time and test data volume for high-density chips. In this paper, we propose a scheme of layout-aware as well as coverage-driven ILS design. The partitioning of the flip-flops into ILS segments is determined by their geometric locations, whereas the set of flip-flops to be placed in parallel is determined by the minimum incompatibility relations among the corresponding bits of a test set, to enhance fault coverage in broadcast mode. As a result, the number of serial test patterns is also reduced.
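As an illustration of the compatibility idea behind broadcast mode, consider the following sketch (a hypothetical Python reconstruction; the greedy strategy and all names are assumptions, not the authors' algorithm). Two flip-flops may share a broadcast position only if no test cube assigns them conflicting care bits:

def compatible(col_a, col_b):
    # A flip-flop's column holds its required bit in each test cube ('0', '1', 'X').
    return all(a == 'X' or b == 'X' or a == b for a, b in zip(col_a, col_b))

def group_flip_flops(columns):
    # Greedy grouping: a flip-flop joins the first group whose members it is
    # compatible with; one group can share a single broadcast scan-in position.
    groups = []
    for ff, col in enumerate(columns):
        for g in groups:
            if all(compatible(col, columns[other]) for other in g):
                g.append(ff)
                break
        else:
            groups.append([ff])
    return groups

# Toy test set: one string per flip-flop, one character per test cube.
print(group_flip_flops(["01X", "0XX", "1X0", "X10"]))  # [[0, 1, 3], [2]]

A real tool would additionally weigh the geometric locations of the flip-flops, which this toy sketch ignores.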

113 citations


Proceedings ArticleDOI
20 Nov 2011
TL;DR: A framework for defect-oriented testing of hybrid memory based on electrical simulation is presented, and it is shown that in addition to conventional semiconductor memory faults, new unique faults take place, e.g., faults that cause the cell to hold an undefined state.
Abstract: Hybrid CMOS/memristor memory (hybrid memory) technology is one of the emerging memory technologies with the potential to replace conventional non-volatile flash memory. Existing research on such novel circuits focuses mainly on the integration between CMOS and non-CMOS, fabrication techniques, and reliability improvement. However, research on defect analysis for yield and quality improvement is still in its infancy. This paper presents a framework for defect-oriented testing of hybrid memory based on electrical simulation. First, a classification and definition of defects is introduced. Second, a simulation model for defect injection and circuit simulation is proposed. Third, a case study is provided to illustrate how the proposed approach can be used to analyze the defects and translate their electrical faulty behavior into fault models, in order to develop the appropriate tests and design-for-testability schemes. The simulation results show that in addition to conventional semiconductor memory faults, new unique faults take place, e.g., faults that cause the cell to hold an undefined state. These new unique faults require new test approaches (e.g., DfT) in order to detect them.

80 citations


Proceedings ArticleDOI
23 May 2011
TL;DR: This paper shows that even in the presence of response compactors the scan-based attack is still possible and requires only low-complexity computation, and gives some perspectives concerning the techniques that can be used to increase the scan-based attack complexity without affecting the testability of the device.
Abstract: The conflict between security and testability is still a concern of hardware designers. While secure devices must protect confidential information from unauthorized users, quality testing of these devices requires the controllability and observability of a substantial quantity of embedded information, and thus may jeopardize data confidentiality. Several attacks using the test infrastructures (and in particular scan chains) have been described. More recently, it has been shown how test response compaction structures provide a natural counter-measure against this type of attack. However, in this paper, we show that even in the presence of response compactors the scan-based attack is still possible and requires only low-complexity computation. We then give some perspectives concerning the techniques that can be used to increase the scan-based attack complexity without affecting the testability of the device.

58 citations


Journal ArticleDOI
TL;DR: It is demonstrated that compression ratios can be an order of magnitude higher if cube merging continues despite conflicts on certain positions, and that test clusters make it possible to deliver test patterns in a flexible power-aware fashion.
Abstract: The embedded deterministic test-based compression uses cube merging to reduce the pattern count, the amount of test data, and the test time. It gradually expands a test pattern by incorporating compatible test cubes. This paper demonstrates that compression ratios can be an order of magnitude higher if the cube merging continues despite conflicts on certain positions. Our novel solution produces test clusters, each comprising a parent pattern and a number of its derivatives obtained by imposing extra bits on it. In order to load scan chains with patterns that feature the original test cubes, only the data necessary to recreate parent patterns, together with information regarding the locations and values of the corresponding conflicting bits, are required. A test controller can then deliver tests by repeatedly applying the same parent pattern, every time using a different control pattern to decide whether a given scan chain receives data from the parent pattern, or whether another pattern is used instead to recover the content of the original test cube. Compression of incompatible test cubes preserves all benefits of continuous-flow decompression and offers compression ratios on the order of 1000× with encoding efficiency much higher than 1.0. We also demonstrate that test clusters make it possible to deliver test patterns in a flexible power-aware fashion. This framework achieves significant reductions in switching activity during scan loading as well as additional test data volume reductions due to the encoding algorithms employed to compress parent and control vectors.
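The cluster idea can be sketched as follows (hypothetical Python; the data layout and function are illustrative assumptions, not the paper's encoder). A cube is merged into the parent pattern wherever it is compatible, and only the conflicting positions and values are retained as control data for the derivative pattern:

def merge_into_cluster(parent, cube):
    # Absorb compatible care bits into the parent; record conflicts separately.
    parent = list(parent)
    conflicts = []
    for i, bit in enumerate(cube):
        if bit == 'X':
            continue
        if parent[i] in ('X', bit):
            parent[i] = bit                # compatible bit: merge into the parent
        else:
            conflicts.append((i, bit))     # conflicting bit: keep as control data
    return ''.join(parent), conflicts

parent, ctrl = merge_into_cluster("1X0XX1", "10111X")
print(parent, ctrl)  # '100111' [(2, '1')] -- the derivative only overrides bit 2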

45 citations


Proceedings ArticleDOI
01 May 2011
TL;DR: This work presents the design and implementation details of a time-division demultiplexing/multiplexing-based scan architecture using a serializer/deserializer, implemented on NVIDIA's Fermi family GPU (Graphics Processing Unit) chips.
Abstract: We present the design and implementation details of a time-division demultiplexing/multiplexing-based scan architecture using a serializer/deserializer. This is one of the key DFT features implemented on NVIDIA's Fermi family GPU (Graphics Processing Unit) chips. We provide a comprehensive description of the architecture and specifications. We also describe a compact serializer/deserializer module design, test timing considerations, design rules, and test pattern verification. Finally, we show silicon data collected from Fermi GPUs.
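The time-division idea can be modeled in a few lines (a hypothetical Python sketch, not NVIDIA's implementation): a deserializer fans one tester channel, clocked N times faster, out to N internal scan chains, and a serializer interleaves the scan-out responses back onto one channel:

def deserialize(stream, n_chains):
    # Distribute a serialized bit stream round-robin over the internal chains.
    chains = [[] for _ in range(n_chains)]
    for i, bit in enumerate(stream):
        chains[i % n_chains].append(bit)
    return chains

def serialize(chains):
    # Interleave scan-out responses back onto one high-speed channel.
    return [bit for group in zip(*chains) for bit in group]

chains = deserialize([1, 0, 1, 1, 0, 0, 1, 0], n_chains=4)
assert serialize(chains) == [1, 0, 1, 1, 0, 0, 1, 0]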

29 citations


Proceedings ArticleDOI
23 May 2011
TL;DR: A novel flow to determine the functional power to be used as upper and lower test power limits during at-speed delay testing, which also serves to compare the above-mentioned test scheme with the power consumed during the functional operation mode of a given circuit.
Abstract: High power consumption during test may lead to yield loss and premature aging. In particular, excessive peak power during at-speed delay fault testing represents an important issue. In the literature, several techniques have been proposed to reduce peak power consumption during at-speed LOC or LOS delay testing. On the other hand, some experiments have proved that too much test power reduction might lead to test escapes and reliability problems. So, in order to avoid any yield loss and test escape due to power issues during test, test power has to match the power consumed during functional mode. In the literature, some techniques have been proposed to apply test vectors that mimic functional operation from the switching activity point of view. The process consists of shifting in a test vector (at low speed) and then applying several successive at-speed clock cycles before capturing the test response. In this paper, we propose a novel flow to determine the functional power to be used as upper and lower test power limits during at-speed delay testing. This flow is also used to compare the above-mentioned test scheme with the power consumption of the functional operation mode of a given circuit. The proposed methodology has been validated on an Intel MC8051 microcontroller synthesized in a 65nm industrial technology.

27 citations


Proceedings ArticleDOI
14 Mar 2011
TL;DR: A methodology to avoid power droop during scan capture without compromising at-speed test coverage is presented, based on the use of a low area overhead hardware controller to control the clock gates.
Abstract: Excessive power dissipation caused by a large amount of switching activity has been a major issue in scan-based testing. For large designs, the excessive switching activity during the launch cycle can cause severe power droop, which cannot be recovered before the capture cycle, rendering at-speed scan testing more susceptible to power droop. In this paper, we present a methodology to avoid power droop during scan capture without compromising at-speed test coverage. It is based on the use of a low-area-overhead hardware controller to control the clock gates. The methodology is ATPG (Automatic Test Pattern Generation)-independent; hence pattern generation time is not affected and pattern manipulation is not required. The effectiveness of this technique is demonstrated on several industrial designs.

22 citations


Proceedings ArticleDOI
23 May 2011
TL;DR: This is the first study that analyzes recently proposed DFT solutions for testing power switches through SPICE simulations on a number of ISCAS benchmarks synthesized with a 90-nm gate library, and it shows an improvement in discharge time of at least 28 times.
Abstract: Power switches are used as part of the power-gating technique to reduce the leakage power of a design. To the best of our knowledge, this is the first study that analyzes recently proposed DFT solutions for testing power switches through SPICE simulations on a number of ISCAS benchmarks, and it presents the following contributions. It provides evidence of long discharge times when power switches are turned off during testing with available DFT solutions. This may lead either to false test results (false fails or false passes) or to long test times. This problem is addressed through a simple and effective DFT solution that reduces the discharge time. The proposed DFT solution has been validated through SPICE simulation and shows an improvement in discharge time of at least 28 times, based on a number of ISCAS benchmarks synthesized with a 90-nm gate library.

22 citations


Proceedings ArticleDOI
09 Oct 2011
TL;DR: This paper develops an algorithm employing the Best Fit Decreasing and Kernighan-Lin Partitioning heuristics to produce 3D wrappers that minimize test time, maximize reuse of routing resources across test modes, and allow for different TAM bus widths in different test modes.
Abstract: 3D integration is a promising new technology for tightly integrating multiple active silicon layers into a single chip stack. Both the integration of heterogeneous tiers and the partitioning of functional units across tiers lead to significant improvements in functionality, area, performance, and power consumption. Managing the complexity of 3D design is a significant challenge that will require a system-on-chip approach, but the application of SOC design to 3D necessitates extensions to current test methodology. In this paper, we propose extending test wrappers, a popular SOC DFT technique, into the third dimension. We develop an algorithm employing the Best Fit Decreasing and Kernighan-Lin Partitioning heuristics to produce 3D wrappers that minimize test time, maximize reuse of routing resources across test modes, and allow for different TAM bus widths in different test modes. On average, the two variants of our algorithm reuse 93% and 92% of the test wrapper wires while delivering test times of just 0.06% and 0.32% above the minimum.
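As a rough illustration of the Best Fit Decreasing step (a generic sketch under the usual wrapper-design abstraction; the paper's full algorithm also handles routing reuse and multiple test modes), scan chains are sorted by length and each is appended to the currently shortest wrapper chain, since test time tracks the longest chain:

import heapq

def best_fit_decreasing(scan_lengths, n_wrapper_chains):
    # Heap of (accumulated length, wrapper id, assigned scan chains).
    heap = [(0, w, []) for w in range(n_wrapper_chains)]
    heapq.heapify(heap)
    for length in sorted(scan_lengths, reverse=True):
        total, w, members = heapq.heappop(heap)   # shortest wrapper chain so far
        members.append(length)
        heapq.heappush(heap, (total + length, w, members))
    return heap

for total, w, members in sorted(best_fit_decreasing([32, 8, 16, 24, 8, 40], 2)):
    print(f"wrapper chain {w}: length {total}, scan chains {members}")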

20 citations


Proceedings ArticleDOI
03 Oct 2011
TL;DR: Experimental results show that the proposed technique achieves improvements of up to 58% in quantum cost and 99% in garbage outputs on average, compared to previous work.
Abstract: This paper presents a simple technique to convert an ESOP-based reversible circuit into an online testable circuit. The technique does not require redesigning the whole circuit to integrate the testability feature, and no new garbage outputs are produced other than those needed for the ESOP circuit. With a little extra hardware cost, the resultant circuit can detect any single-bit error online. Experimental results show that the proposed technique achieves improvements of up to 58% in quantum cost and 99% in garbage outputs on average, compared to previous work.

19 citations


Journal ArticleDOI
TL;DR: This paper proposes a new approach, MEasuring Test Effectiveness Regionally (METER), which exploits the readily available test-measurement data generated from chip failures and uses analysis results from existing tests, which are shown to be more than sufficient for performing a thorough evaluation of any model or metric of interest.
Abstract: Researchers from both academia and industry continually propose new fault models and test metrics for coping with the ever-changing failure mechanisms exhibited by scaling fabrication processes. Understanding the relative effectiveness of current and proposed metrics and models is vitally important for selecting the best mix of methods for achieving a desired level of quality at reasonable cost. Evaluating metrics and models traditionally relies on actual test experiments, which is time-consuming and expensive. To reduce the cost of evaluating new test metrics, fault models, design-for-test techniques, and others, this paper proposes a new approach, MEasuring Test Effectiveness Regionally (METER). METER exploits the readily available test-measurement data that is generated from chip failures. The approach does not require the generation and application of new patterns but uses analysis results from existing tests, which we show to be more than sufficient for performing a thorough evaluation of any model or metric of interest. METER is demonstrated by comparing several metrics and models that include: 1) stuck-at; 2) N-detect; 3) PAN-detect (physically-aware N-detect); 4) bridge fault models; and 5) the input pattern fault model (also more recently referred to as the gate-exhaustive metric). We also provide in-depth discussion on the advantages and disadvantages of METER, and contrast its effectiveness with that of the traditional approaches involving the test of actual integrated circuits.

Proceedings Article
29 Dec 2011
TL;DR: In this article, the authors present a method of test pattern generation using 2-D LFSR structures that generate a pre-computed test vector followed by random patterns, and a high-level test synthesis algorithm for operation scheduling and data path allocation.
Abstract: Summary form only given. This session contains papers from different areas in ASIC design that focus on design-for-testability issues. The session starts with an invited paper by Christopher A. Ryan, Texas Instruments, presenting an embedded core test strategy and built-in self-test for systems on a chip. With the increasing number of embedded cores on a single chip, the problem of making each core testable also grows. The author presents a core JTAG strategy with BIST and gives examples of its application in an industrial automotive microcontroller. The regular paper part starts with a presentation on built-in self-test. It presents a method of test pattern generation using 2-D LFSR structures that generate a pre-computed test vector followed by random patterns. Another paper deals with integrated scheduling and allocation in high-level test synthesis. The authors present a high-level test synthesis algorithm for operation scheduling and data path allocation. Contrary to other works, the approach integrates scheduling and allocation by performing them simultaneously. The next paper presents a CMOS low-power mixed A/D ASIC. It describes the principle of radiation detection, the circuit architecture, low-power issues, and the built-in test analog subcircuits that have been implemented in the ASIC along with a JTAG module. Finally, test results are presented showing that all the specifications are satisfied. The session ends with a paper on current-testable high-frequency CMOS operational amplifiers. Current-based test stimuli allow detection of some faults which are difficult to detect or are unstable when using conventional voltage-based stimuli. With the presented approach, test stimuli selection is simpler since faulty behaviors are observable in the whole frequency band. The impact of the test circuitry on circuit performance is also addressed.
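The BIST pattern-generation idea mentioned above can be reduced to a toy one-dimensional example (a hypothetical sketch; the paper itself uses 2-D LFSR structures): seed the LFSR so that its first state is a pre-computed test vector, then let it free-run to emit pseudorandom follow-on patterns:

def lfsr_patterns(seed, taps, count):
    # Fibonacci LFSR: the new bit is the XOR of the tapped positions; the
    # tap choice determines the period of the pseudorandom sequence.
    state = list(seed)
    for _ in range(count):
        yield ''.join(map(str, state))
        feedback = 0
        for t in taps:
            feedback ^= state[t]
        state = [feedback] + state[:-1]

# The first pattern is the deterministic seed vector; the rest are pseudorandom.
for p in lfsr_patterns(seed=[1, 0, 0, 1], taps=[0, 3], count=5):
    print(p)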

Proceedings ArticleDOI
20 Nov 2011
TL;DR: A low-power test scheme compatible with both test compression and built-in self-test environments is discussed, showing that a simple power-aware controller may allow significant reductions of toggling rates when feeding scan chains with either decompressed test patterns or pseudorandom vectors.
Abstract: This paper discusses a low-power test scheme compatible with both test compression and built-in self-test environments. The key contribution is a detailed analysis showing that a simple power-aware controller may allow significant reductions of toggling rates when feeding scan chains with either decompressed test patterns or pseudorandom vectors. While the proposed solution requires minimal modifications of existing DFT logic, its use results in low switching activity during all phases of scan test: loading, capture, and unloading. It reduces power consumption to or below the level of the functional mode, thus helping to resolve problems related to power dissipation, voltage drop, and increased temperature.

Journal ArticleDOI
TL;DR: This paper proposes an efficient memory diagnosis and repair scheme based on fail-pattern identification that reduces the amount of data that need to be transmitted from the chip under test to the automatic test equipment (ATE) without losing fault information.
Abstract: With the advent of deep-submicrometer VLSI technology, the capacity and performance of semiconductor memory chips are increasing drastically. This advance also makes it harder to maintain good yield. Diagnostics and redundancy repair methodologies are thus becoming more and more important for memories, including the embedded ones that are popular in system chips. In this paper, we propose an efficient memory diagnosis and repair scheme based on fail-pattern identification. The proposed diagnosis scheme can distinguish among row, column, and word faults, and subsequently apply the Huffman compression method for fault syndrome compression. This approach reduces the amount of data that needs to be transmitted from the chip under test to the automatic test equipment (ATE) without losing fault information. It also simplifies the analysis that has to be performed on the ATE. The proposed redundancy repair scheme is assisted by the fail-pattern identification approach and a flexible redundancy structure. The area overhead of our built-in self-repair (BISR) design is reasonable, and our repair scheme uses less redundancy than other redundancy schemes under the same repair-rate requirement. Experimental results show that the area overhead of the BISR design is only 4.1% for an 8 K × 64 memory and is in inverse proportion to the memory size.
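The syndrome-compression step follows the standard Huffman construction; the sketch below (generic Python, with invented symbols and frequencies) shows how frequently occurring fail-pattern syndromes receive the shortest codewords, which is what reduces chip-to-ATE traffic:

import heapq
from collections import Counter

def huffman_code(syndromes):
    # Build a Huffman code table from observed fault-syndrome frequencies.
    heap = [[freq, [sym, ""]] for sym, freq in Counter(syndromes).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]     # left branch
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]     # right branch
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict(heap[0][1:])

table = huffman_code(["row", "row", "row", "col", "word", "row", "col"])
print(table)  # e.g. {'row': '1', 'col': '01', 'word': '00'}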

Proceedings ArticleDOI
01 May 2011
TL;DR: The scan chain and power delivery network synthesis for pre-bond testing of 3D ICs is studied and the impact of scan-chain Through-Silicon-Vias (TSVs) on power consumption and voltage drop is investigated.
Abstract: Pre-bond testing of 3D ICs improves yield by preventing bad dies and/or wafers from being used in the final 3D stack. However, pre-bond testing is challenging because it requires special scan chains and power delivery mechanisms. Any 3D scan chain that traverses multiple dies will be fragmented in each individual die during pre-bond testing. In this paper we study scan chain and power delivery network synthesis for pre-bond testing of 3D ICs. The testing of individual dies is facilitated by the addition of dedicated probe pads for power delivery and scan I/O as a form of design-for-test. We investigate the impact of scan-chain Through-Silicon Vias (TSVs) on power consumption and voltage drop. We also study the requirements on power probe pads for power delivery during pre-bond structural test.

Proceedings ArticleDOI
20 Nov 2011
TL;DR: A survey of test challenges for 3D ICs is presented, and recent innovations on various aspects of 3D testing and DfT are described, including pre-bond testing (BIST and TSV probing), optimizations for post-bond testing, and cost modeling for 3D integration and associated test flows.
Abstract: Technology scaling for higher performance and lower power consumption is being hampered today by the bottleneck of interconnect lengths. 3D integrated circuits (3D ICs) based on through-silicon vias (TSVs) have emerged as a promising solution for overcoming the interconnect bottleneck. However, testing of 3D ICs remains a significant challenge, and breakthroughs in test technology are needed to make 3D integration commercially viable. This paper presents a survey of test challenges for 3D ICs and describes recent innovations on various aspects of 3D testing and DfT. Topics covered include pre-bond testing (BIST and TSV probing), optimizations for post-bond testing, and cost modeling for 3D integration and associated test flows.

Proceedings ArticleDOI
25 Jan 2011
TL;DR: A self-testing and calibration method for embedded successive approximation register (SAR) analog-to-digital converters (ADCs), comprising a low-cost design-for-test technique and a fully-digital missing-code calibration technique.
Abstract: This paper presents a self-testing and calibration method for the embedded successive approximation register (SAR) analog-to-digital converter (ADC). We first propose a low-cost design-for-test (DfT) technique which tests a SAR ADC by characterizing its digital-to-analog converter (DAC) capacitor array. Utilizing DAC major carrier transition testing, the required analog measurement range is just 4 LSBs; this significantly lowers the test circuitry complexity. Then, we develop a fully-digital missing-code calibration technique that utilizes the proposed testing scheme to collect the required calibration information. Simulation results are presented to validate the proposed technique.
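A hypothetical worked example of why major carrier transitions suffice (a simplified, invented model of a binary-weighted capacitor array, not the paper's measurement circuit): at each carry transition one capacitor switches in while all smaller ones switch out, so the step error isolates that capacitor's mismatch and every measurement stays within a few LSBs of the ideal 1-LSB step:

def carry_transition_errors(caps_lsb):
    # caps_lsb: capacitor weights in LSB units, smallest first
    # (ideal binary weights would be 1, 2, 4, 8, ...).
    errors = []
    for k in range(1, len(caps_lsb)):
        step = caps_lsb[k] - sum(caps_lsb[:k])  # step at the carry transition
        errors.append(step - 1.0)               # deviation from the ideal 1 LSB
    return errors

# 4-bit example with a slightly undersized MSB capacitor (7.8 instead of 8).
print(carry_transition_errors([1.0, 2.0, 4.0, 7.8]))  # [0.0, 0.0, -0.2]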

Proceedings ArticleDOI
20 Nov 2011
TL;DR: This work observes that by varying the connection order of wrapper chain components, e.g., scan chains and I/O cells, the number of TSVs consumed varies significantly; it formulates this problem and proposes a novel heuristic that saves on average 33.2% of the TSVs compared to a prior intuitive method.
Abstract: Three-dimensional (3D) System-on-Chips (SoCs), which typically employ through-silicon vias (TSVs) as vertical interconnects, have emerged as a promising solution to continue Moore's law. However, they also bring challenging problems, one of which is test wrapper chain design and optimization, especially for circuit-partitioned 3D SoCs in which scan chains can cross layers. Test time is the primary goal of wrapper chain design, both for 2D and 3D SoCs. The 3D SoC wrapper chain design problem can be converted into the well-studied 2D one by projecting the wrapper chain components of all layers onto one virtual layer. Thereafter, we can leverage 2D optimization algorithms to determine the composition of wrapper chains and thus guarantee minimal testing time for 3D SoCs. A specific issue for circuit-partitioned 3D SoCs is that TSVs are needed to connect cross-layer wrapper structures into wrapper chains. As TSVs occupy planar chip area and aggravate the routing congestion problem, it is necessary to reduce the TSVs used for test purposes as much as possible. In this work, we observe that by varying the connection order of wrapper chain components, e.g., scan chains and I/O cells, the number of TSVs consumed varies significantly. Based on the above, we formulate this problem and propose a novel heuristic to tackle it. Experimental results show that the proposed solution saves on average 33.2% of the TSVs compared to a prior intuitive method.
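The effect of connection order can be sketched with a simple cost model (hypothetical Python; the paper's heuristic and cost function are more elaborate), in which every hop between neighbouring components on different layers spends TSVs proportional to the layer distance:

def tsv_cost(order, layer_of):
    # TSVs spent: one per layer crossed between consecutive components.
    return sum(abs(layer_of[a] - layer_of[b]) for a, b in zip(order, order[1:]))

def layer_aware_order(components, layer_of):
    # Greedy nearest-layer ordering of wrapper chain components.
    remaining = list(components)
    order = [remaining.pop(0)]
    while remaining:
        nxt = min(remaining, key=lambda c: abs(layer_of[c] - layer_of[order[-1]]))
        remaining.remove(nxt)
        order.append(nxt)
    return order

layer_of = {"sc0": 0, "sc1": 2, "io0": 0, "sc2": 1, "io1": 2}
naive = ["sc0", "sc1", "io0", "sc2", "io1"]
smart = layer_aware_order(naive, layer_of)
print(tsv_cost(naive, layer_of), tsv_cost(smart, layer_of))  # 6 vs 2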

Journal ArticleDOI
TL;DR: This paper presents various examples of built-in measurements that have been demonstrated in wireless transceivers offered by Texas Instruments in recent years, based on the digital-RF processor (DRP™) technology, and highlights the importance of the various types presented.
Abstract: Digital RF solutions have been shown to be advantageous in various design aspects, such as accurate modeling, design reuse, and scaling when migrating to the next CMOS process node. Consequently, the majority of new low-cost and feature cell phones are now based on this approach. However, another equally important aspect of this approach to wireless transceiver SoC design, which is instrumental in allowing fast and low-cost productization, is in creating the inherent capability to assess performance and allow for low-cost built-in calibration and compensation, as well as characterization and final testing. These internal capabilities can often rely solely on the SoC's existing processing resources, representing a zero-cost adder, requiring only the development of the appropriate algorithms. This paper presents various examples of built-in measurements that have been demonstrated in wireless transceivers offered by Texas Instruments in recent years, based on the digital-RF processor (DRP™) technology, and highlights the importance of the various types presented: built-in self-calibration and compensation, built-in self-characterization, and built-in self-testing (BiST). The accompanying statistical approach to the design and productization of such products is also discussed, and fundamental terms related to these, such as 'soft specifications', are defined.

Proceedings ArticleDOI
20 Oct 2011
TL;DR: The difficult challenges and potential solutions for test are discussed; the solutions to meet these challenges must begin at design with the incorporation of on-chip and on-package design-for-test (DFT) and built-in self-test (BIST) infrastructure.
Abstract: The impact of increased transistor count, higher frequency, and greater complexity presents many difficult challenges for test. The current trend toward 3D IC integration, driven in part by the need to increase circuit density as Moore's Law scaling slows, makes testing even more difficult. The use of the third dimension, the incorporation of new structures such as through-silicon vias (TSVs), and the new processes developed for thinning and bonding layers in stacked 3D structures all present new challenges for test technology. Test cost may be the most difficult of these many challenges. The solutions to meet these challenges must begin at design with the incorporation of on-chip and on-package design-for-test (DFT) and built-in self-test (BIST) infrastructure. The difficult challenges and potential solutions for test are discussed.

Proceedings ArticleDOI
05 Jun 2011
TL;DR: This paper proposes a new statistical path selection method based on a generalized path criticality metric whose properties allow efficient pruning; results show that the proposed method achieves 47% better quality of results on average, and up to 361× speedup, compared to statistical path selection followed by test generation.
Abstract: In the face of large-scale process variations, statistical timing methodology has advanced significantly over the last few years, and statistical path selection takes advantage of it in at-speed testing. In deterministic path selection, the separation of path selection and test generation is known to require time-consuming iteration between the two processes. This paper shows that in statistical path selection, this is not only the case, but also that the quality of results can be severely degraded even after the iteration. To deal with this issue, we consider testability in the first place by integrating a SAT solver, and this necessitates a new statistical path selection method. Our proposed method is based on a generalized path criticality metric whose properties allow efficient pruning. Our experimental results show that the proposed method achieves 47% better quality of results on average, and up to 361× speedup, compared to statistical path selection followed by test generation.
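As a hedged illustration of a statistical criticality score (a textbook-style example, not the paper's generalized metric; independent Gaussian segment delays are assumed), a path can be ranked by the probability that its total delay exceeds the test clock period:

from math import erf, sqrt

def criticality(segment_delays, clock_period):
    # segment_delays: list of (mean, sigma) per gate/segment on the path.
    mu = sum(m for m, _ in segment_delays)
    sigma = sqrt(sum(s * s for _, s in segment_delays))
    z = (clock_period - mu) / sigma
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))  # P(path delay > clock period)

path = [(0.30, 0.05), (0.45, 0.08), (0.25, 0.04)]  # (ns, ns), illustrative values
print(criticality(path, clock_period=1.1))          # roughly 0.16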

Proceedings ArticleDOI
25 Apr 2011
TL;DR: This paper proposes ZiMH, a trigger generator that builds a trigger unit facilitating failure localization and root-cause analysis by keeping the trace of the interactions that lead to the failure, and that provides resourceful trace information for root-cause analysis.
Abstract: The post-silicon debugging process is aimed at locating design errors and electrical errors that concealed themselves during the whole process of pre-silicon verification. Although during post-silicon validation engineers can exploit the high speed of a hardware prototype to exercise a huge number of test vectors, the low level of real-time observability and controllability of signals inside the prototype is a major issue for them. Various DFD techniques have been developed to improve the observability of signals and expedite root-cause analysis. Recently, typical practical DFD approaches have been based on Embedded Logic Analysis (ELA). Since ELA is limited in the amount of data it can acquire in a debug experiment, we have to either increase the size of the trace buffer or use a trigger unit that can effectively control when to acquire the debug data. In this paper, we propose ZiMH, a trigger generator that builds the trigger unit. Additionally, it provides resourceful trace information for root-cause analysis. The major advantages of the generated trigger unit over traditional trigger units are: 1) it facilitates failure localization and root-cause analysis by keeping the trace of the interactions that lead to the failure; 2) it can be tuned for a specific location to avoid the huge cost related to interfacing with trace signals; 3) it can be parameterized to generate several trigger units that can be placed inside the limited area.

Proceedings ArticleDOI
20 Nov 2011
TL;DR: This tutorial explores the unique challenges and testing solutions to detect/prevent malicious modifications in integrated circuits, and discusses whether an IC can be certified to be free of malicious, hard-to-detect circuitry.
Abstract: Cryptographic algorithms are routinely used to perform computationally intense operations over increasingly larger volumes of data, and in order to meet the high throughput requirements of the applications, are often implemented by VLSI designs. The high complexity of such implementations raises concern about their reliability. In order to improve the testability of sequential circuits, both at fabrication time and in the field, Design for Testability (DFT) techniques are commonly employed. However, conventional DFT methodologies for digital circuits have been found to compromise the security of cryptographic hardware. In this tutorial we first discuss the challenges and potential attacks on cipher hardware through standard DFT techniques, and then potential solutions against them. Also, as the electronic design industry has grown globally, economic reasons dictate the widespread participation of external agents in the modern design and manufacture of integrated circuits (ICs), which decreases the control that IC design houses used to traditionally have over their own designs. This issue raises the question of ensuring trust in an integrated circuit, and whether the IC can be certified to be free of malicious, hard-to-detect circuitry, commonly referred to as Hardware Trojans. In this tutorial, we explore the unique challenges and testing solutions to detect/prevent such malicious modifications.

Proceedings Article
23 May 2011
TL;DR: The implementation of the OBT method for a second-order notch cell realized with one operational amplifier is described; the results obtained confirm the usefulness of OBT.
Abstract: The Oscillation Based Testing (OBT) method represents an effective and simple solution to the testing problem of discrete continuous-time analog electronic filters. Its implementation, however, still requires knowledge of the behaviour of the circuit under test (CUT) and of algorithms for simulating oscillators in the time domain. In this paper we describe the implementation of the OBT method for a second-order notch cell realized with one operational amplifier. For simulation we used LTspice, with a realistic time-domain model of the operational amplifier. The results obtained confirm the hypothesis of the usefulness of OBT. Single soft and catastrophic faults are considered in more detail, while double soft faults are only exemplified. Ideas for future work are suggested.

Journal ArticleDOI
TL;DR: Design-for-testability (DFT) circuitry that reduces the testing time, and thus the cost, of testing the DC linearity of SAR ADCs, together with measurements that verify its effectiveness.
Abstract: This brief describes design-for-testability (DFT) circuitry that reduces the testing time, and thus the cost, of testing the DC linearity of SAR ADCs. We present the basic concepts, an actual SAR ADC chip design employing the proposed DFT, and measurements that verify its effectiveness. Since the DFT circuit overhead is small, the approach is practical.

Journal ArticleDOI
01 Feb 2011
TL;DR: The paper analyzes the design of the system software and hardware, the economic benefit, and the basic ideas and steps of installing and connecting the system, and discusses the application and prospects of photovoltaic technology in outer space, solar lamps, freeways, and communications.
Abstract: A photovoltaic (PV) system, as described in this paper, comprises solar modules; power electronic equipment including the charge-discharge controller, the inverter, test instrumentation, and computer monitoring; and the storage battery or other energy storage and auxiliary generating plant. PV system design should meet the load supply requirements, keep system cost low, carefully consider both software and hardware design, and in general carry out software design before hardware design. Taking the design of a PV system as an example, the paper analyzes the design of the system software and hardware, the economic benefit, and the basic ideas and steps of installing and connecting the system. It elaborates on information acquisition, the software and hardware design of the system, and the evaluation and optimization of the system. Finally, it analyzes the application and prospects of photovoltaic technology in outer space, solar lamps, freeways, and communications.

Proceedings ArticleDOI
21 Jul 2011
TL;DR: Experimental results obtained from ISCAS'89 circuits are compared with existing techniques, proving that the proposed ATPG methodology is suitable for testing scan-based system-on-chip architectures with reduced test power.
Abstract: This paper proposes a novel approach for reducing shift and capture power through a fan-out-aware modified adjacent X-filling technique. The approach reduces the time complexity and the number of iterations, in addition to reducing test power. Experimental results obtained from ISCAS'89 circuits, compared with existing techniques, prove that the proposed ATPG methodology is suitable for testing scan-based system-on-chip architectures with reduced test power.
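A minimal sketch of plain adjacent X-filling (simplified: it only copies the preceding care bit, whereas the paper's fan-out-aware modified variant is more involved): each unspecified bit repeats its most recent care bit, so the shifted stream carries fewer 0-to-1 and 1-to-0 transitions and therefore less shift power:

def adjacent_fill(cube):
    # Fill 'X' positions with the last care bit seen ('0' before the first one).
    filled, last = [], '0'
    for bit in cube:
        if bit == 'X':
            filled.append(last)
        else:
            filled.append(bit)
            last = bit
    return ''.join(filled)

print(adjacent_fill("XX1XX0X1XXX"))  # '00111001111'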

Proceedings ArticleDOI
01 Nov 2011
TL;DR: This paper presents a DfT technique using a newly designed analog checker circuit to ensure the detection of defects occurring in nano-CMOS analog integrated circuits (ICs); the checker is implemented in full-custom 65nm CMOS technology with a 1 V power supply.
Abstract: In this paper, we focus on safety-critical applications based on the System-on-Chip (SoC) design approach and the nano-CMOS (Complementary Metal Oxide Semiconductor) technology. These systems are present in diverse areas of our lives, from consumer electronics to automotive, aerospace, medical, nuclear, and military applications. These products could cause injury or loss of human life if they fail or encounter errors. In fact, the malfunctioning of such equipment can be very dangerous, which calls for special attention to ensure the functionality, quality, and dependability of the product. Thus, dependability must be considered from the beginning when designing the system, and testing should be considered even earlier, intertwined with the design process. The process of designing for better testability is called design for testability (DfT). The first part of the paper presents a DfT technique using a newly designed analog checker circuit to ensure the detection of defects occurring in nano-CMOS analog integrated circuits (ICs). The checker is implemented in full-custom 65nm CMOS technology with a 1 V power supply. SPICE simulations of the post-layout extracted CMOS checker, which include all parasitics, are used to validate the technique and demonstrate the acceptable electrical behaviour of the checker.

Patent
22 Mar 2011
TL;DR: Systems and methods incorporating these features to test partially completed three-dimensional ICs may result in saved time and effort, and less scrapped material, as a partial device is not built any further once it is detected to be bad.
Abstract: Systems and methods are provided for testing partially completed three-dimensional ICs. Example methods may incorporate one or more of the following features: design for testing (DFT); design for partial wafer test; design for partial probing; partial IC probecards; partial IC test equipment; partial IC quality determinations; partial IC test optimization; and partial test optimization. Other aspects may also be included. Systems and methods incorporating these features to test partially completed three-dimensional ICs may result in saved time and effort, and less scrapped material, as a partial device is not built any further when it is detected to be bad. This results in lower costs and higher yield.

Proceedings ArticleDOI
03 Oct 2011
TL;DR: This paper proposes a hierarchical trigger generator that builds a trigger unit providing resourceful and compact trace information for root-cause analysis, and facilitates the process of failure localization and root-cause analysis of errors.
Abstract: The post-silicon debugging process is aimed at locating errors not detected during the process of pre-silicon verification. Although in post-silicon validation engineers can exploit the high speed of a hardware prototype to exercise a huge number of test vectors, the low level of real-time observability and controllability of signals inside the prototype is a big issue. Various Design for Debug (DFD) techniques aim to improve the observability of signals and expedite the root-cause analysis of errors. Typical practical DFD approaches are based on Embedded Logic Analysis (ELA), using a trigger unit that can effectively control when to acquire the debug data. In this paper, we propose a hierarchical trigger generator that builds a trigger unit. Additionally, it provides resourceful and compact trace information for root-cause analysis. Major advantages over traditional trigger units are: 1) by keeping the trace of interactions that lead to the failure, it facilitates the process of failure localization and root-cause analysis; 2) it can be tuned for a specific location of a design to avoid the huge cost related to interfacing with trace signals; 3) it can be parameterized to generate several units that can be placed inside the limited area in multiple debug rounds using a time-multiplexed fashion.