
Showing papers in "Journal of Electronic Testing in 2007"


Journal ArticleDOI
TL;DR: A theoretical assessment of the representation power of the D-matrix is provided and a surprising result relative to the difficulty of generating optimal diagnostic strategies from D-Matrices is proved.
Abstract: As new approaches and algorithms are developed for system diagnosis, it is important to reflect on existing approaches to determine their strengths and weaknesses. Of concern is identifying potential reasons for false pulls during maintenance. Within the aerospace community, one approach to system diagnosis--based on the D-matrix derived from test dependency modeling--is used widely, yet little has been done to perform any theoretical assessment of the merits of the approach. Past assessments have been limited, largely, to empirical analysis and case studies. In this paper, we provide a theoretical assessment of the representation power of the D-matrix and suggest algorithms and model types for which the D-matrix is appropriate. We also prove a surprising result relative to the difficulty of generating optimal diagnostic strategies from D-matrices. Finally, we relate the processing of the D-matrix with several diagnostic approaches and suggest how to extend the power of the D-matrix to take advantage of the power of those approaches.

173 citations
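To make the D-matrix concrete, here is a minimal sketch (not taken from the paper; the matrix, the row/column convention and the observed outcomes are invented for illustration) of single-fault diagnosis from a dependency matrix: a fault is a candidate when the set of tests its column says should fail matches the tests that actually failed.

    # Toy single-fault diagnosis from a D-matrix.
    # Assumed convention: rows = tests, columns = faults,
    # D[t][f] = 1 means test t fails when fault f is present.

    def diagnose(D, observed_failures):
        """Return candidate single faults whose failure signature exactly
        matches the set of observed failing tests."""
        num_tests, num_faults = len(D), len(D[0])
        failed = set(observed_failures)
        candidates = []
        for f in range(num_faults):
            signature = {t for t in range(num_tests) if D[t][f]}
            if signature == failed:
                candidates.append(f)
        return candidates

    # Example: 4 tests, 3 faults.
    D = [
        [1, 0, 1],
        [0, 1, 1],
        [1, 0, 0],
        [0, 0, 1],
    ]
    print(diagnose(D, {0, 2}))   # -> [0]: only fault 0 explains tests 0 and 2 failing

Real D-matrix reasoners must also cope with masking, multiple faults and imperfect tests, which is where the representational limits discussed in the paper come into play.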


Journal ArticleDOI
TL;DR: A testing and diagnosis methodology to detect catastrophic faults and locate faulty regions is presented and the proposed method is evaluated using a biochip performing real-life multiplexed bioassays.
Abstract: Microfluidics-based biochips are soon expected to revolutionize biosensing, clinical diagnostics and drug discovery. Robust off-line and on-line test techniques are required to ensure system dependability as these biochips are deployed for safety-critical applications. Due to the underlying mixed-technology and mixed-energy domains, biochips exhibit unique failure mechanisms and defects. We first relate some realistic defects to fault models and observable errors. We next set up an experiment to evaluate the manifestations of electrode-short faults. Motivated by the experimental results, we present a testing and diagnosis methodology to detect catastrophic faults and locate faulty regions. The proposed method is evaluated using a biochip performing real-life multiplexed bioassays.

77 citations


Journal ArticleDOI
Tad Hogg, Greg Snider
TL;DR: This work identifies reliability thresholds in the ability of defective crossbars to implement boolean logic, allowing molecular circuit designers to trade-off reliability, circuit area, crossbar geometry and the computational complexity of locating functional components.
Abstract: Crossbar architectures are one approach to molecular electronic circuits for memory and logic applications. However, currently feasible manufacturing technologies for molecular electronics introduce numerous defects so insisting on defect-free crossbars would give unacceptably low yields. Instead, increasing the area of the crossbar provides enough redundancy to implement circuits in spite of the defects. We identify reliability thresholds in the ability of defective crossbars to implement boolean logic. These thresholds vary among different implementations of the same logical formula, allowing molecular circuit designers to trade-off reliability, circuit area, crossbar geometry and the computational complexity of locating functional components. We illustrate these choices for binary adders. For instance, one adder implementation yields functioning circuits 90% of the time with 30% defective crossbar junctions using an area only 1.8 times larger than the minimum required for a defect-free crossbar. We also describe an algorithm for locating a combination of functional junctions that can implement an adder circuit in a defective crossbar.

62 citations
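The redundancy argument can be illustrated with a toy Monte Carlo experiment (not the paper's adder-mapping algorithm; the sizes, defect rate and brute-force search are illustrative assumptions): estimate how often a randomly defective n x n crossbar still contains a fully working k x k sub-crossbar.

    import itertools
    import random

    def has_working_subarray(defective, k):
        """Brute-force search for k rows and k columns whose junctions are all good
        (only feasible for small crossbars; the paper uses a smarter algorithm)."""
        n = len(defective)
        for rows in itertools.combinations(range(n), k):
            for cols in itertools.combinations(range(n), k):
                if all(not defective[r][c] for r in rows for c in cols):
                    return True
        return False

    def estimate_yield(n, k, p_defect, trials=2000, seed=1):
        rng = random.Random(seed)
        ok = 0
        for _ in range(trials):
            defective = [[rng.random() < p_defect for _ in range(n)] for _ in range(n)]
            ok += has_working_subarray(defective, k)
        return ok / trials

    # A 4 x 4 function mapped onto a 6 x 6 crossbar with 30% defective junctions.
    print(estimate_yield(6, 4, 0.30))

Increasing n for a fixed k raises the estimated yield at the cost of area, which is exactly the trade-off the paper quantifies for adder implementations.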


Journal ArticleDOI
TL;DR: Different circuits of Quantum-dot Cellular Automata are proposed for the so-called coplanar crossing to allow a robust crossing of wires on the Cartesian plane and a Bayesian Network based simulator is utilized for evaluation.
Abstract: In this paper, different circuits of Quantum-dot Cellular Automata (QCA) are proposed for the so-called coplanar crossing. Coplanar crossing is one of the most interesting features of QCA because it allows for mono-layered interconnected circuits, whereas CMOS technology needs different levels of metalization. However, the characteristics of the coplanar crossing make it prone to malfunction due to thermal noise or defects. The proposed circuits exploit the majority voting properties of QCA to allow a robust crossing of wires on the Cartesian plane. This is accomplished using enlarged lines and voting. A Bayesian Network (BN) based simulator is utilized for evaluation; results are provided to assess robustness in the presence of cell defects and thermal effects. The BN simulator provides fast and reliable computation of the signal polarization versus normalized temperature. Simulation of the wire crossing circuits at different operating temperatures is provided with respect to defects and a quantitative metric for performance under temperature variations is proposed and assessed.

58 citations


Journal ArticleDOI
TL;DR: A statistical model of a circuit is used for setting test limits under process deviations as a trade-off between test metrics calculated at the design stage, obtained from a Monte Carlo circuit simulation, assuming Gaussian probability density functions (PDFs) for the parameter and performance deviations.
Abstract: The estimation of test metrics such as defect level, test yield or yield loss is important in order to quantify the quality and cost of a test approach. For design-for-test purposes, this is important in order to select the best test measurements, but it must be done at the design stage, before production test data is available. In the analogue domain, previous works have considered the estimation of these metrics for the case of single faults, either catastrophic or parametric. The consideration of single parametric faults is sensible for a production test technique if the design is robust. However, in the case that production test limits are tight, test escapes resulting from multiple parametric deviations may become important. In addition, aging mechanisms result in field failures that are often caused by multiple parametric deviations. In this paper, we consider the estimation of analogue test metrics under the presence of multiple parametric deviations (or process deviations) and under the presence of faults. A statistical model of a circuit is used for setting test limits under process deviations as a trade-off between test metrics calculated at the design stage. This model is obtained from a Monte Carlo circuit simulation, assuming Gaussian probability density functions (PDFs) for the parameter and performance deviations. After setting the test limits considering process deviations, the test metrics are calculated under the presence of catastrophic and parametric single faults for different potential test measurements. We illustrate the technique for the case of a fully differential operational amplifier, proving the validity of the Gaussian PDF assumption in the case of this circuit.

52 citations
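The flavor of the metric estimation can be shown with a small Monte Carlo sketch (a toy with one performance and one test measurement drawn from an assumed bivariate Gaussian; the correlation, limits and metric definitions below are illustrative and are not the paper's circuit or limit-setting procedure):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000
    rho = 0.9                      # assumed correlation between performance S and measurement T

    # Jointly Gaussian performance S and test measurement T (normalized units).
    S, T = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n).T

    spec_ok = np.abs(S) < 3.0      # functional specification limit on the performance
    test_ok = np.abs(T) < 2.5      # production test limit on the measurement

    test_yield   = np.mean(test_ok)                                   # circuits passing the test
    defect_level = np.mean(~spec_ok & test_ok) / np.mean(test_ok)     # bad circuits among those shipped
    yield_loss   = np.mean(spec_ok & ~test_ok) / np.mean(spec_ok)     # good circuits rejected by the test

    print(f"test yield={test_yield:.4f}  defect level={defect_level:.2e}  yield loss={yield_loss:.2e}")

Sweeping the test limit then traces the trade-off curve between defect level and yield loss from which a limit can be chosen at the design stage.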


Journal ArticleDOI
TL;DR: This paper proposes to merge security and testability requirements in a control-oriented design for security scan technique, which induces an adaptation of two main aspects of testability technique design: protection at protocol level and at scan path level.
Abstract: The design of secure ICs requires conformance to many design rules in order to protect access to secret data. On the other hand, designers of secure chips cannot neglect the testability of their chips, since high-quality production testing is essential to a good level of security. However, security requirements may be in conflict with test needs and with testability improvement techniques that increase both observability and controllability. In this paper, we propose to merge security and testability requirements in a control-oriented design for security scan technique. The proposed security scan design methodology induces an adaptation of two main aspects of testability technique design: protection at the protocol level and at the scan path level. Without loss of generality, the proposed solution is evaluated on a simple crypto chip in terms of security and design cost.

41 citations


Journal ArticleDOI
TL;DR: It is shown that novel features of PBW are possible due to the spatial redundancy of the cells in the tiles, which permits the fault-free function to be retained with high probability in the presence of defects.
Abstract: Among emerging technologies, Quantum-dot Cellular Automata (QCA) relies on innovative computational paradigms. For nano-scale implementation, the so-called processing-by-wire (PBW) paradigm in QCA is very effective because processing takes place while signal communication is accomplished. This paper analyzes the defect tolerance properties of PBW for manufacturing tiles by molecular QCA cells. Based on a 3 x 3 grid and various input/output arrangements of QCA cells, different tiles are analyzed and simulated using a coherence vector engine. The functional characterization and polarization level of these tiles for undeposited cell defects are reported and detailed profiles are provided. It is shown that novel features of PBW are possible due to the spatial redundancy of the cells in the tiles, which permits the fault-free function to be retained with high probability in the presence of defects. Moreover, it is shown that QCA tiles are robust and inherently tolerant to cell defects (by logic equivalence, additional cell defects can also be accommodated).

40 citations


Journal ArticleDOI
TL;DR: This paper describes the approach for mapping circuits onto CMOS using principles of probabilistic computation and demonstrates how Markov random field elements may be built in CMOS and used to design combinational circuits running at ultra low supply voltages.
Abstract: As devices and operating voltages are scaled down, future circuits will be plagued by higher soft error rates, reduced noise margins and defective devices. A key challenge for the future is retaining high reliability in the presence of faulty devices and noise. Probabilistic computing offers one possible approach. In this paper we describe our approach for mapping circuits onto CMOS using principles of probabilistic computation. In particular, we demonstrate how Markov random field elements may be built in CMOS and used to design combinational circuits running at ultra low supply voltages. We show that with our new design strategy, circuits can operate in highly noisy conditions and provide superior noise immunity, at reduced power dissipation. If extended to more complex circuits, our approach could lead to a paradigm shift in computing architecture without abandoning the dominant silicon CMOS technology.

36 citations


Journal ArticleDOI
TL;DR: This is the first systematic study on the relationship between overtesting prevention and test compression and results emphasize the severity of overtesting in scan-based delay test.
Abstract: We present an approach to prevent overtesting in scan-based delay test. The test data is transformed with respect to functional constraints while simultaneously keeping as many positions as possible unspecified in order to facilitate test compression. The method is independent of the employed delay fault model, ATPG algorithm and test compression technique, and it is easy to integrate into an existing flow. Experimental results emphasize the severity of overtesting in scan-based delay test. Influence of different functional constraints on the amount of the required test data and the compression efficiency is investigated. To the best of our knowledge, this is the first systematic study on the relationship between overtesting prevention and test compression.

27 citations


Journal ArticleDOI
TL;DR: For the first time, March test algorithms of minimum length are proposed for two-operation single-cell and two-cell dynamic faults and the previously known March test algorithm of length 100N for detection of two- operation two- cell dynamic faults is improved by 30N.
Abstract: The class of dynamic faults has recently been shown to be an important class of faults for the new technologies of Random Access Memories (RAM), with significant impact on defect-per-million (DPM) levels. Very little research has been done on the design of memory test algorithms targeting dynamic faults. Two March test algorithms of complexity 11N and 22N, where N is the number of memory cells, for subclasses of two-operation single-cell and two-cell dynamic faults, respectively, were proposed recently [Benso et al., Proc. ITC 2005], improving on the length of the corresponding tests proposed earlier [Hamdioui et al., Proc. of IEEE VLSI Test Symposium, pp. 395-400, 2002]. Also, a March test of length 100N was proposed [Benso et al., Proc. ETS 2005, Tallinn, pp. 122-127, 2005] for detection of two-cell dynamic faults with two fault-sensitizing operations both applied on the victim or aggressor cells. In this paper, for the first time, March test algorithms of minimum length are proposed for two-operation single-cell and two-cell dynamic faults. In particular, the previously known March test algorithm of length 100N for detection of two-operation two-cell dynamic faults is improved by 30N.

23 citations
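For readers unfamiliar with March notation, the sketch below runs the classic 10N March C- on a small behavioral memory model with one injected (static) fault; it is only meant to show how March elements are expressed and applied, and is not one of the minimum-length dynamic-fault tests proposed in the paper.

    # direction of each March element: ascending, descending, or either
    UP, DOWN, ANY = range(3)

    # Classic March C-: {ANY(w0); UP(r0,w1); UP(r1,w0); DOWN(r0,w1); DOWN(r1,w0); ANY(r0)}
    MARCH_C_MINUS = [
        (ANY,  ["w0"]),
        (UP,   ["r0", "w1"]),
        (UP,   ["r1", "w0"]),
        (DOWN, ["r0", "w1"]),
        (DOWN, ["r1", "w0"]),
        (ANY,  ["r0"]),
    ]

    class FaultyRAM:
        """N-cell behavioral RAM with an optional stuck-at fault at one address."""
        def __init__(self, n, stuck_addr=None, stuck_val=0):
            self.mem = [0] * n
            self.stuck_addr, self.stuck_val = stuck_addr, stuck_val
        def write(self, addr, val):
            self.mem[addr] = self.stuck_val if addr == self.stuck_addr else val
        def read(self, addr):
            return self.mem[addr]

    def run_march(ram, n, algorithm):
        errors = []
        for direction, ops in algorithm:
            addrs = range(n - 1, -1, -1) if direction == DOWN else range(n)
            for a in addrs:
                for op in ops:            # e.g. "r0" = read expecting 0, "w1" = write 1
                    kind, val = op[0], int(op[1])
                    if kind == "w":
                        ram.write(a, val)
                    elif ram.read(a) != val:
                        errors.append((a, op))
        return errors

    ram = FaultyRAM(16, stuck_addr=5, stuck_val=1)    # cell 5 stuck-at-1
    print(run_march(ram, 16, MARCH_C_MINUS))          # read mismatches reported at address 5

Dynamic faults additionally require specific back-to-back operation sequences inside a March element, which is what drives the minimum-length constructions discussed above.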


Journal ArticleDOI
TL;DR: A new neural network-based analog fault diagnosis strategy is introduced and significantly better generalization performance is achieved by the ensemble as compared to any of its individual neural nets.
Abstract: A new neural network-based analog fault diagnosis strategy is introduced. An ensemble of neural networks is constructed and trained for efficient and accurate fault classification of the circuit under test (CUT). In the testing phase, the outputs of the individual ensemble members are combined to isolate the actual CUT fault. Prominent techniques for producing the ensemble are utilized, analyzed and compared. The created ensemble exhibits high classification accuracy even if the CUT has overlapping fault classes which cannot be isolated using a unitary neural network. Each neural classifier of the ensemble focuses on a particular region in the CUT measurement space. As a result, significantly better generalization performance is achieved by the ensemble as compared to any of its individual neural nets. Moreover, the selection of the proper architecture of the neural classifiers is simplified. Experimental results demonstrate the superior performance of the developed approach.
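A minimal sketch of the combination step (the networks, their training and the exact fusion rule used in the paper are not reproduced; the class-probability values below are invented): soft voting averages the members' class probabilities and picks the most likely fault class.

    import numpy as np

    def combine_ensemble(member_probs):
        """Soft-vote combination: average the class-probability vectors produced
        by the ensemble members and select the most likely fault class."""
        avg = np.mean(member_probs, axis=0)        # shape: (num_classes,)
        return int(np.argmax(avg)), avg

    # Hypothetical outputs of three neural classifiers for one CUT measurement,
    # over four fault classes (class 0 = fault-free).
    member_probs = np.array([
        [0.10, 0.70, 0.15, 0.05],
        [0.20, 0.55, 0.20, 0.05],
        [0.05, 0.40, 0.45, 0.10],
    ])
    fault_class, avg = combine_ensemble(member_probs)
    print(fault_class, avg.round(3))               # -> 1: the ensemble's diagnosed fault class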

Journal ArticleDOI
TL;DR: The reliability of a multiplier, a digital FIR filter, and an 8051 microprocessor implemented in SRAM-based FPGAs is evaluated by means of extensive fault-injection experiments, assessing the capability provided by different design techniques of tolerating SEUs within the FPGA configuration memory.
Abstract: The latest SRAM-based FPGA devices are making the development of low-cost, high-performance, re-configurable systems feasible, paving the way for innovative architectures suitable for mission- or safety-critical applications, such as those dominating the space or avionic fields. Unfortunately, SRAM-based FPGAs are extremely sensitive to Single Event Upsets (SEUs) induced by radiation. SEUs may alter the logic value stored in the memory elements the FPGAs embed. A large part of the FPGA memory elements is dedicated to the configuration memory, whose content dictates how the resources inside the FPGA have to be used to implement any given user circuit; SEUs affecting configuration memory cells can therefore be extremely critical. Facing the effects of SEUs through radiation-hardened FPGAs is not cost-effective. Therefore, various fault-tolerant design techniques have been devised for developing dependable solutions, starting from Commercial-Off-The-Shelf (COTS) SRAM-based FPGAs. These techniques present advantages and disadvantages that must be evaluated carefully to exploit them successfully. In this paper we mainly adopted an empirical analysis approach. We evaluated the reliability of a multiplier, a digital FIR filter, and an 8051 microprocessor implemented in SRAM-based FPGAs, by means of extensive fault-injection experiments, assessing the capability provided by different design techniques of tolerating SEUs within the FPGA configuration memory. Experimental results demonstrate that by combining architecture-level solutions (based on redundancy) with layout-level solutions (based on reliability-oriented place and route), designers may implement reliable re-configurable systems, choosing the best solution that minimizes the penalty in terms of area and speed degradation.
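The core of such a fault-injection campaign can be sketched abstractly as follows (a hedged illustration only: run_design, the bitstream list and the golden output are hypothetical placeholders standing in for a device or simulation model, not a real vendor API or the authors' injection platform):

    import random

    def seu_injection_campaign(bitstream, run_design, golden_output,
                               num_injections=1000, seed=0):
        """Toy SEU fault-injection loop over an FPGA configuration memory image.

        bitstream      -- mutable list of configuration bits (0/1), hypothetical
        run_design     -- hypothetical callback: configures the device/model and
                          returns the workload's output
        golden_output  -- fault-free reference output
        """
        rng = random.Random(seed)
        failures = 0
        for _ in range(num_injections):
            bit = rng.randrange(len(bitstream))
            bitstream[bit] ^= 1                     # emulate the SEU: flip one configuration bit
            if run_design(bitstream) != golden_output:
                failures += 1                       # the upset changed the circuit's behavior
            bitstream[bit] ^= 1                     # restore the original configuration
        return failures / num_injections            # estimated fraction of critical bits

    # usage sketch: error_rate = seu_injection_campaign(config_bits, run_on_model, golden)

Comparing this estimated fraction of critical bits for an unhardened design, a redundant version and a reliability-oriented placed-and-routed version reproduces, in miniature, the kind of comparison the paper performs.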

Journal ArticleDOI
TL;DR: An abstract model and a new test pattern generation method for signal integrity problems on interconnects are presented, and it is shown that the proposed signal integrity fault model is more exact and more powerful for long interconnects than previous approaches.
Abstract: Unacceptable loss of signal integrity may cause permanent or intermittent harm to the functionality and performance of SoCs. In this paper, we present an abstract model and a new test pattern generation method for signal integrity problems on interconnects. The approach takes into account the effects of the test inputs and of the parasitic RLC elements of the interconnects. We also develop a framework to deal with arbitrary interconnection topologies. Experimental results show that the proposed signal integrity fault model is more exact and more powerful for long interconnects than previous approaches.

Journal ArticleDOI
TL;DR: This work examines the performance of a semi-infinite QCA shift register as a function of both clock period and temperature and focuses on the issue of robustness in the presence of disorder and thermal fluctuations.
Abstract: The computational paradigm known as quantum-dot cellular automata (QCA) encodes binary information in the charge configuration of Coulomb-coupled quantum-dot cells. Functioning QCA devices made of metal-dot cells have been fabricated and measured. We focus here on the issue of robustness in the presence of disorder and thermal fluctuations. We examine the performance of a semi-infinite QCA shift register as a function of both clock period and temperature. The existence of power gain in QCA cells acts to restore signal levels even in situations where high speed operation and high temperature operation threaten signal stability. Random variations in capacitance values can also be tolerated.

Journal ArticleDOI
TL;DR: In this work a strategy for testing analog networks, known as Transient Response Analysis Method, is applied to test the Configurable Analog Blocks of Field Programmable Analog Arrays (FPAAs), where the transient response of these blocks to known input stimuli is analyzed.
Abstract: In this work a strategy for testing analog networks, known as Transient Response Analysis Method, is applied to test the Configurable Analog Blocks (CABs) of Field Programmable Analog Arrays (FPAAs). In this method the Circuit Under Test (CUT) is programmed to implement first and second order blocks and the transient response of these blocks to known input stimuli is analyzed. Taking advantage of the inherent programmability of the FPAAs, a BIST-based scheme is used in order to obtain an error signal representing the difference between fault-free and faulty CABs. Two FPAAs from different manufacturers and distinct architectures are considered as CUT. For one of the devices there is no detailed information about its structural implementation. For this reason, a functional fault model based on high-level parameters of the transfer function of the programmed blocks is adopted, and then, the relationship between these parameters and CAB component deviations is investigated. The other considered device allows a structural programming in which the designer can directly modify the values of programmable components. This way, faults can be injected by modifying the values of these components in order to emulate a defective behavior. Therefore, it is possible to estimate the fault coverage and test application time of the proposed functional test method when applied to both considered devices.

Journal ArticleDOI
TL;DR: A methodology to test high-speed A/D converters using low-frequency resources is described, based on the alternate testing approach, which does not require the tester resources running at a frequency higher than the device-under-test (DUT).
Abstract: Testing high-speed A/D converters for dynamic specifications needs test equipment running at high frequency. In this paper, a methodology to test high-speed A/D converters using low-frequency resources is described. It is based on the alternate testing approach. In the proposed methodology, models are built to map the signatures of an initial set of devices, obtained on the proposed low-cost test set-up, to the dynamic specifications of the same devices, obtained using high-precision test equipment. During production testing, the devices are tested on the low-cost test set-up. The dynamic specifications of the devices are estimated by capturing their signatures on the low-cost test set-up and processing them with the pre-developed models. As opposed to the conventional method of dynamic specification testing of data converters, the proposed approach does not require tester resources running at a frequency higher than that of the device under test (DUT). The test methodology was verified in simulations as well as in hardware, with a specification estimation error of less than 5%.
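The essence of alternate test is the learned mapping from cheap signatures to specifications. The sketch below uses an ordinary least-squares fit on synthetic data purely as a stand-in; the paper's actual signatures, devices and regression model are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)

    # Training lot: low-frequency signatures (rows) and the corresponding dynamic
    # specification (e.g. SNDR in dB) measured on high-precision equipment.
    # All values here are synthetic.
    signatures = rng.normal(size=(50, 6))
    snr_db = signatures @ rng.normal(size=6) + 60.0 + rng.normal(scale=0.1, size=50)

    # Fit a linear mapping signature -> specification (any regression model could be
    # substituted; a linear one is only the simplest choice).
    X = np.column_stack([signatures, np.ones(len(signatures))])
    coeffs, *_ = np.linalg.lstsq(X, snr_db, rcond=None)

    # Production test: estimate the specification of a new device from its
    # low-cost signature alone.
    new_signature = rng.normal(size=6)
    estimated_snr = np.append(new_signature, 1.0) @ coeffs
    print(f"estimated SNDR = {estimated_snr:.1f} dB")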

Journal ArticleDOI
TL;DR: A novel oscillation ring (OR) test scheme and architecture for testing interconnects in SOCs is proposed and demonstrated; it can also detect delay faults and crosstalk glitches, which are otherwise very difficult to test under traditional test schemes.
Abstract: A novel oscillation ring (OR) test scheme and architecture for testing interconnects in SOCs is proposed and demonstrated. In addition to stuck-at and open faults, this scheme can also detect delay faults and crosstalk glitches, which are otherwise very difficult to test under traditional test schemes. IEEE Std. 1500 wrapper cells are modified to accommodate the test scheme. An efficient algorithm is proposed to construct ORs for SOCs based on a graph model. Experimental results on MCNC benchmark circuits are included to show the effectiveness of the algorithm. In all experiments, the scheme achieves 100% fault coverage with a small number of tests.

Journal ArticleDOI
TL;DR: General conditions for the oscillation-based test of switched-capacitor biquad filter stages are explored, including conditions for achieving oscillation by internal transformation of the filter stage.
Abstract: In this paper, we explore general conditions for the oscillation-based test of switched-capacitor biquad filter stages. Expressions describing the characteristics of a filter stage put into oscillation are derived, and conditions for achieving oscillation by internal transformation of the filter stage are explored. A reconfiguration scheme based on the transformation of the biquad filter stage to a quadratic oscillator is studied. Theoretically, the circuit can be put into oscillation by de-activating a single capacitor. Simulations, however, show that in practice a carefully designed low feed-back loop is required to achieve an acceptable oscillation test mode.
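A quick numerical check of the underlying idea (independent of the paper's switched-capacitor implementation; the centre frequency and Q values are arbitrary): a biquad with denominator s^2 + (w0/Q)s + w0^2 has left-half-plane poles in normal operation, and when the damping term is removed the poles move to +/- j*w0, so the stage oscillates at its own centre frequency.

    import numpy as np

    f0 = 10e3                       # assumed biquad centre frequency: 10 kHz
    w0 = 2 * np.pi * f0

    def poles(q):
        # denominator of a biquad: s^2 + (w0/q)*s + w0^2
        return np.roots([1.0, w0 / q, w0 ** 2])

    print(poles(0.707))             # normal filter: complex poles in the left half-plane
    print(poles(1e9))               # damping removed: poles ~ +/- j*w0 -> sustained oscillation
    print("oscillation frequency ~", abs(poles(1e9)[0].imag) / (2 * np.pi), "Hz")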

Journal ArticleDOI
TL;DR: A design methodology is presented that allows a dramatic reduction of the dependency on process variation, yielding a new version of a built-in current sensor (BICS) that has a peak-to-peak dispersion lower than 10% of its output full-scale range.
Abstract: In this paper we present a design methodology that allows a dramatic reduction of the dependency on process variation, yielding a new version of a built-in current sensor (BICS). Taking advantage of a 130 nm VLSI CMOS technology, the proposed BICS has a peak-to-peak dispersion lower than 10% of its output full-scale range. This makes it more suitable for implementing the test functionality while maintaining the intrinsic performance of the initial BICS. The built-in self-test methodology is illustrated by monitoring the supply current of Low-Noise Amplifiers (LNAs). Measurements confirm the BICS's transparency relative to the circuit-under-test (CUT) and its accuracy.

Journal ArticleDOI
Dana Henry Brown, J. Ferrario, Randy L. Wolf, Jing Li, Jayendra Bhagat, Mustapha Slamani
TL;DR: A more universal test structure utilizing RF building blocks is proposed, and a global positioning system (GPS) device is used as an example to illustrate how to develop an RF test plan with this structure.
Abstract: In this paper, testing of radio frequency (RF) devices with mixed-signal testers is discussed, using general-purpose automatic test equipment (ATE). A more universal test structure utilizing RF building blocks is proposed. A global positioning system (GPS) device is used as an example to illustrate how to develop an RF test plan with this structure. The developed test plan includes fast, cost-effective, dedicated circuitry.

Journal ArticleDOI
TL;DR: The design, layout, and testability analysis of delay-insensitive circuits on cellular arrays for nanocomputing system design and potential physical implementation of cellular arrays and its area overhead are discussed.
Abstract: This paper presents the design, layout, and testability analysis of delay-insensitive circuits on cellular arrays for nanocomputing system design. In delay-insensitive circuits the delay on a signal path does not affect the correctness of circuit behavior. The combination of delay-insensitive circuit style and cellular arrays is a useful step to implement nanocomputing systems. In the approach proposed in this paper the circuit expressions corresponding to a design are first converted into Reed-Muller forms and then implemented using delay-insensitive Reed-Muller cells. The design and layout of the Reed-Muller cell using primitives has been described in detail. The effects of stuck-at faults in both delay-insensitive primitives and gates have been analyzed. Since circuits implemented in Reed-Muller forms constructed by the Reed-Muller cells can be easily tested offline, the proposed approach for delay-insensitive circuit design improves a circuit's testability. Potential physical implementation of cellular arrays and its area overhead are also discussed.
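As background for the Reed-Muller forms used here, the positive-polarity Reed-Muller (AND/XOR) coefficients of a Boolean function can be computed from its truth table with a simple GF(2) butterfly; the helper below is a generic illustration, not the paper's cell-level construction.

    def reed_muller_coeffs(truth_table):
        """Positive-polarity Reed-Muller coefficients of a Boolean function.

        truth_table[m] is f evaluated on the minterm whose variable values are the
        bits of m; the returned list a satisfies
        f = XOR over all subsets S with a[S] = 1 of (AND of the variables in S).
        """
        a = list(truth_table)
        n = len(a).bit_length() - 1
        for i in range(n):                 # GF(2) Moebius transform, one variable at a time
            step = 1 << i
            for j in range(len(a)):
                if j & step:
                    a[j] ^= a[j ^ step]
        return a

    # Example: 2-input OR, truth table ordered (x1 x0) = 00, 01, 10, 11.
    print(reed_muller_coeffs([0, 1, 1, 1]))    # -> [0, 1, 1, 1]  i.e.  x0 XOR x1 XOR x0*x1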

Journal ArticleDOI
TL;DR: This article describes an emulation-based method for locating stuck-at faults in combinational and synchronous sequential circuits based on automatically designing a circuit which implements a closest-match fault location algorithm specialized for the circuit under diagnosis (CUD).
Abstract: This article describes an emulation-based method for locating stuck-at faults in combinational and synchronous sequential circuits. The method is based on automatically designing a circuit which implements a closest-match fault location algorithm specialized for the circuit under diagnosis (CUD). This method allows designers to perform dynamic fault location of stuck-at faults in large circuits, and eliminates the need for the large storage required by a software-based fault dictionary. In fact, the approach is a pure hardware solution to fault diagnosis. We demonstrate the feasibility of the method in terms of hardware resources and diagnosis time by experimenting with ISCAS85 and ISCAS89 circuits. The emulation-based diagnosis method speeds up the diagnosis process by an order of magnitude compared to software-based fault diagnosis. This speed-up is important, especially when the on-line diagnosis of safety-critical systems is of concern.
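The closest-match rule that the emulated hardware implements can be stated compactly in software (the fault names, signatures and observed response below are invented for illustration; the paper realizes this search as a synthesized circuit rather than a program):

    def closest_match(observed, fault_dictionary):
        """Return the modeled fault(s) whose stored response signature is closest,
        in Hamming distance, to the observed faulty response."""
        def hamming(a, b):
            return sum(x != y for x, y in zip(a, b))
        distances = {fault: hamming(observed, sig) for fault, sig in fault_dictionary.items()}
        best = min(distances.values())
        return [fault for fault, d in distances.items() if d == best], best

    # Hypothetical signatures: output bits collected over the applied test set.
    fault_dictionary = {
        "g3/stuck-at-0":  [1, 0, 0, 1, 1],
        "g7/stuck-at-1":  [1, 1, 0, 0, 1],
        "n12/stuck-at-0": [0, 0, 1, 1, 1],
    }
    observed = [1, 1, 0, 1, 1]          # response captured from the failing chip
    print(closest_match(observed, fault_dictionary))   # two equally close candidates at distance 1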

Journal ArticleDOI
TL;DR: It is demonstrated that the test of URWFs is more effective in terms of resistive defect detection than that of URRFs and the necessary test conditions to detect them are listed.
Abstract: In this paper, we present an exhaustive study of the influence of resistive-open defects in the pre-charge circuits of SRAM memories. In SRAM memories, the pre-charge circuits perform the pre-charge and equalization, at a certain voltage level (in general Vdd), of all the bit-line pairs of the memory array. This action is essential in order to ensure correct read operations. We have analyzed the impact of resistive-opens placed in different locations of these circuits. Each defect studied in this paper disturbs the pre-charge circuit in a different way and for different resistive ranges, but the effect produced on normal memory operation is always the perturbation of read operations. This faulty behavior can be modeled by Un-Restored Write Faults (URWFs) and Un-Restored Read Faults (URRFs), because an incorrect pre-charge/equalization of the bit lines after a write or read operation disturbs the following read operation. In the last part of the paper, we demonstrate that testing for URWFs is more effective in terms of resistive defect detection than testing for URRFs, and we list the necessary test conditions to detect them.

Journal ArticleDOI
TL;DR: The proposed BIST procedure is well suited for regular and dense architectures that have high defect densities and shows that a large fraction of defect-free blocks can be recovered using a small number of BIST configurations.
Abstract: We propose a built-in self-test (BIST) procedure for nanofabrics implemented using chemically assembled electronic nanotechnology. Several fault detection configurations are presented to target stuck-at faults, shorts, opens, and connection faults in nanoblocks and switchblocks. The detectability of multiple faults in blocks within the nanofabric is also considered. We present an adaptive recovery procedure through which we can identify defect-free nanoblocks and switchblocks in the nanofabric-under-test. The proposed BIST, recovery, and defect tolerance procedures are based on the reconfiguration of the nanofabric to achieve complete fault coverage for different types of faults. We show that a large fraction of defect-free blocks can be recovered using a small number of BIST configurations. We also present simple bounds on the recovery that can be achieved for a given defect density. Simulation results are presented for various nanofabric sizes, different defect densities, and for random and clustered defects. The proposed BIST procedure is well suited for regular and dense architectures that have high defect densities.

Journal ArticleDOI
TL;DR: This paper improves a recent per-test technique by applying additional diagnosis on the outputs of the circuit by incorporating more evidence to support the true defective sites by using both failing tests and failing outputs information.
Abstract: Per-test fault diagnosis has become an effective methodology for the identification of complex defects. In this paper, we improve a recent per-test technique by applying additional diagnosis on the outputs of the circuit. The new method does not require more information than the existing method, but incorporates more evidence to support the true defective sites by using both failing-test and failing-output information; hence, diagnosis quality can be improved. We present the procedure of the new method and give a theoretical analysis. We show that this method addresses several drawbacks of the previous work very well. The experimental results on benchmark circuits demonstrate that the new method can significantly improve diagnostic quality compared to other recent results.

Journal ArticleDOI
TL;DR: Simulation data confirm that the proposed computational model achieves the goal of providing flexible fault tolerance under a wide range of fault occurrence rates, while at the same time guaranteeing high system performance and efficient utilization of hardware resources.
Abstract: In this paper, we focus on reliability, one of the most fundamental and important challenges, in the nanoelectronics environment. For a processor architecture based on unreliable nanoelectronic devices, fault tolerance schemes are required so as to ensure the basic correctness of any computation. Since any fault tolerance approach demands redundancy either in the form of time or hardware, reliability needs to be considered in conjunction with the performance and hardware tradeoffs. We propose a new computational model for nanoelectronics-based processor architectures that provides flexible fault tolerance to deal with high and time-varying fault rates. The model guarantees the correctness of instruction executions, while dynamically balancing hardware and performance overheads. The correctness of every instruction is confirmed by multiple execution instances through a hybrid hardware-time redundancy approach. To achieve high system performance, multiple unconfirmed computation branches are exploited in a speculative manner. The hardware resource growth that these speculative computations entail is controlled so that the utilization of hardware is balanced between the two competing goals of performance and fault tolerance. In addition, we examine the impact of other nanoelectronic characteristics, such as the necessity for localization of interconnections and the regularity of nanofabric structures, on the proposed computational model. We set up an experimental framework to validate the effectiveness of the proposed scheme as well as to investigate multiple tradeoff points within the proposed approach. Simulation data confirm that the proposed computational model achieves the goal of providing flexible fault tolerance under a wide range of fault occurrence rates, while at the same time guaranteeing high system performance and efficient utilization of hardware resources.

Journal ArticleDOI
TL;DR: A fully-settled linear behavior plus noise (FSLB+N) model for the DfDT Σ-Δ modulator is presented to improve both the accuracy and the speed of the behavioral simulations.
Abstract: Evaluating the digital stimuli used in the design-for-digital-testability (DfDT) Σ-Δ modulator is a time-consuming task due to its oversampling and non-linear nature. Although behavioral simulations can substantially improve the simulation speed, conventional behavioral models fail to provide accurate enough signal-to-noise ratio (SNR) predictions for this particular application. In this paper, a fully-settled linear behavior plus noise (FSLB+N) model for the DfDT Σ-Δ modulator is presented to improve both the accuracy and the speed of the behavioral simulations. The model includes the following parameters: the finite open-loop gains, the offsets, the finite output swings, and the flicker noise of the operational amplifiers (OPAMPs), as well as the thermal noise of the switched capacitors, the OPAMPs, and the reference supplies. With the proposed model, the behavioral simulation results demonstrate a high correlation with the measurement data. On average, the SNR difference between simulation and measurement is -1.1 dB, with a maximum of 0.05 dB and a minimum of -2.2 dB. Compared with circuit-level simulation using HSPICE, the behavioral simulation with the FSLB+N model is 1,190,000 times faster. The proposed model can not only be used for evaluating the digital stimulus candidates, but can also be applied to system-level simulations of a mixed-signal design with an embedded DfDT Σ-Δ modulator.
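For orientation, a behavioral model in this spirit can be very compact; the sketch below is a deliberately simplified first-order loop (not the paper's FSLB+N model of the DfDT modulator) in which finite op-amp gain appears as integrator leakage and thermal noise as an additive term, with all parameter values invented.

    import numpy as np

    def first_order_sigma_delta(u, opamp_gain=1e4, noise_rms=50e-6, seed=0):
        """Toy discrete-time first-order sigma-delta behavioral model:
        finite op-amp DC gain -> leaky integrator, thermal noise -> additive term."""
        rng = np.random.default_rng(seed)
        leak = 1.0 - 1.0 / opamp_gain           # integrator leakage from finite DC gain
        y, bits = 0.0, np.empty(len(u))
        for n, x in enumerate(u):
            v = 1.0 if y >= 0.0 else -1.0       # 1-bit quantizer / DAC feedback
            bits[n] = v
            y = leak * y + (x + rng.normal(scale=noise_rms)) - v   # integrator update
        return bits

    # A -6 dBFS sine, heavily oversampled; the bitstream average tracks the input.
    fs, fin, N = 1.0e6, 1.0e3, 8192
    t = np.arange(N) / fs
    bitstream = first_order_sigma_delta(0.5 * np.sin(2 * np.pi * fin * t))
    print(bitstream[:16])

An SNR estimate would then be obtained from a windowed FFT of the bitstream; the FSLB+N model adds the circuit non-idealities needed to make such predictions match the measured silicon.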

Journal ArticleDOI
TL;DR: This work presents a method to improve the loopback test used in RF transceivers, targeted to the System-On-Chip environment, being able to reuse system resources in order to minimize the test overhead.
Abstract: This work presents a method to improve the loopback test used in RF transceivers. The approach is targeted to the System-On-Chip environment, being able to reuse system resources in order to minimize the test overhead. An RF sampler is used during loopback operation, allowing observation of spectral characteristics of the RF signal. While able to improve the overall observability of the RF signal path, faster diagnosis than conventional loopback tests is achieved thanks to a large reduction in the number of transmitted symbols. Theoretical analysis and practical results for a prototype transceiver operating at 846 MHz are presented. It is shown that a significant test time reduction is achievable considering bit error rate tests for common digital modulation schemes.

Journal ArticleDOI
TL;DR: A module to perform a built-in self-test in CMOS RF receivers is presented, associated with a test strategy consisting of measuring the main performance parameters of the single building blocks individually.
Abstract: A module to perform a built-in self-test in CMOS RF receivers is presented. The module is associated with a test strategy consisting of measuring the main performance parameters of the individual building blocks. Circuitry overhead, however, is kept low by using some blocks as part of the test set-up and reusing them. The test overhead has also been reduced by replacing direct determination of the performance parameters with their estimation. The test methodology has been applied to a mixer in the first down-conversion stage of a GSM receiver, estimating its conversion gain, 1-dB compression point and third-order intercept point. Using the output of the IF amplifier as the only test point, the rms errors in the estimation of the above-mentioned parameters are 1.5, 3.0 and 2.7%, respectively.

Journal ArticleDOI
TL;DR: A measurement principle for calculating component values from measurements conducted under less than optimal conditions, as is the case in the IEEE Std 1149.4 environment is described.
Abstract: This paper describes a measurement principle for calculating component values from measurements conducted under less than optimal conditions, as is the case in the IEEE Std 1149.4 environment. Also presented are equations that take into account the switch resistances on the signal paths, the output resistance of the signal generator, and the loading effect caused by the input impedance of the voltmeter together with the pin capacitances in parallel with the voltmeter. In addition, the paper presents characterization methods to determine values for these impedances. The inaccuracies achieved in the impedance range from kΩ to MΩ are of the order of a few percent.
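A small example in the same spirit (not the paper's equations; the topology and component values are assumed for illustration) shows how an unknown impedance can be solved from a loaded voltage-divider measurement once the generator output resistance, switch resistances and voltmeter input impedance are accounted for.

    def unknown_impedance(v_ratio, r_series, z_meter):
        """Solve for Zx in:  source --R_series--+--(Zx in parallel with Z_meter)-- ground,
        where v_ratio = V(node)/V(source).  Generator output resistance and switch
        resistances are lumped into r_series; voltmeter loading is removed via z_meter."""
        z_parallel = v_ratio * r_series / (1.0 - v_ratio)     # Zx || Z_meter seen at the node
        return 1.0 / (1.0 / z_parallel - 1.0 / z_meter)

    # Purely resistive example (complex impedances work the same way):
    r_series = 600.0 + 2 * 50.0        # generator output resistance + two analog switches
    z_meter  = 10e6                    # voltmeter input resistance
    zx_true  = 47e3
    zp       = zx_true * z_meter / (zx_true + z_meter)
    v_ratio  = zp / (zp + r_series)    # what the voltmeter would actually report
    print(unknown_impedance(v_ratio, r_series, z_meter))   # recovers ~47 kOhm

Neglecting z_meter or the switch resistances biases the result, which is why the characterization methods described above are needed.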