
Showing papers in "Journal of Electronic Testing in 2014"


Journal ArticleDOI
TL;DR: This paper presents all types of counterfeits, the defects present in them, and their detection methods, together with anti-counterfeit measures and the effectiveness and limitations of these anti-counterfeiting techniques.
Abstract: The counterfeiting of electronic components has become a major challenge in the 21st century. The electronic component supply chain has been greatly affected by widespread counterfeit incidents. A specialized service of testing, detection, and avoidance must be created to tackle the worldwide outbreak of counterfeit integrated circuits (ICs). So far, there are standards and programs in place for outlining the testing, documenting, and reporting procedures. However, there is not yet enough research addressing the detection and avoidance of such counterfeit parts. In this paper we will present, in detail, all types of counterfeits, the defects present in them, and their detection methods. We will then describe the challenges to implementing these test methods and to their effectiveness. We will present several anti-counterfeit measures to prevent this widespread counterfeiting, and we also consider the effectiveness and limitations of these anti-counterfeiting techniques.

210 citations


Journal ArticleDOI
TL;DR: A comprehensive framework has been developed to find an optimum set of detection methods considering test time, test cost, and application risks, and an assessment of all the detection methods based on the newly introduced metrics – counterfeit defect coverage, under-covered defects, and not-covered defects.
Abstract: The increasing threat of counterfeit electronic components has created a specialized service of testing, detection, and avoidance of such components. However, various types of counterfeit components (recycled, remarked, overproduced, defective, cloned, forged documentation, and tampered) pose serious threats to the supply chain. Over the past few years, standards and programs have been put in place throughout the supply chain that outline testing, documenting, and reporting procedures. However, there is little uniformity in the test results among the various entities. Currently, there are no metrics for evaluating these counterfeit detection methods. In this paper, we have developed a detailed taxonomy of defects present in counterfeit components. Based on this taxonomy, a comprehensive framework has been developed to find an optimum set of detection methods considering test time, test cost, and application risks. We have also performed an assessment of all the detection methods based on the newly introduced metrics: counterfeit defect coverage, under-covered defects, and not-covered defects.
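
As a rough illustration of how such coverage-style metrics can be computed, the sketch below scores a hypothetical mapping of test methods to the defects they detect. The defect names, method names, and the "detected by only one method" definition of under-coverage are assumptions made for illustration, not the paper's framework.

```python
# Minimal sketch (not the authors' tool) of coverage-style metrics over a defect
# taxonomy: counterfeit defect coverage, under-covered defects, not-covered defects.
def assess(defects, methods):
    """defects: set of defect names; methods: dict method -> set of defects it catches."""
    covered = set().union(*methods.values()) & defects
    coverage = len(covered) / len(defects) if defects else 0.0
    not_covered = defects - covered
    # call a defect "under-covered" if only one selected method detects it (an assumption)
    under_covered = {d for d in covered
                     if sum(d in caught for caught in methods.values()) == 1}
    return coverage, under_covered, not_covered

defects = {"recycled_die", "remarked_package", "cloned_mask", "doc_forgery"}
methods = {"xray": {"recycled_die"}, "decap": {"recycled_die", "cloned_mask"}}
print(assess(defects, methods))   # -> (0.5, {'cloned_mask'}, {'remarked_package', 'doc_forgery'})
```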

71 citations


Journal ArticleDOI
TL;DR: A scalable solution for multi-level access management in reconfigurable scan networks using a sequence filter that allows only a precomputed set of scan-in access sequences and causes no access time penalty.
Abstract: Scan infrastructures based on IEEE Std. 1149.1 (JTAG), 1500 (SECT), and P1687 (IJTAG) provide a cost-effective access mechanism for test, reconfiguration, and debugging purposes. The improved accessibility of on-chip instruments, however, poses a serious threat to system safety and security. While state-of-the-art protection methods for scan architectures compliant with JTAG and SECT are very effective, most of these techniques face scalability issues in reconfigurable scan networks allowed by the upcoming IJTAG standard. This paper describes a scalable solution for multi-level access management in reconfigurable scan networks. The access to protected instruments is restricted locally at the interface to the network. The access restriction is realized by a sequence filter that allows only a precomputed set of scan-in access sequences. This approach does not require any modification of the scan architecture and causes no access time penalty. Therefore, it is well suited for core-based designs with hard macros and 3D integrated circuits. Experimental results for complex reconfigurable scan networks show that the area overhead depends primarily on the number of allowed accesses, and is marginal even if this number exceeds the count of registers in the network.
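
The access-restriction idea can be pictured with a small conceptual sketch: an access proceeds only if the shifted-in bit sequence belongs to a precomputed allow-list. In hardware the filter is of course realized as sequence-matching logic rather than a lookup set; the sequences and names below are purely illustrative.

```python
# Conceptual sketch (illustrative only) of a scan-in sequence filter: access to a
# protected instrument is granted only for precomputed scan-in access sequences.
ALLOWED_SEQUENCES = {"1011001", "1100110"}   # hypothetical precomputed access sequences

def filter_scan_access(scan_in_bits: str) -> str:
    """Grant access only if the shifted-in sequence is on the allow-list."""
    return "grant" if scan_in_bits in ALLOWED_SEQUENCES else "deny"

print(filter_scan_access("1011001"))   # grant
print(filter_scan_access("1111111"))   # deny: the protected instrument stays inaccessible
```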

23 citations


Journal ArticleDOI
TL;DR: In order to estimate the remaining useful performance (RUP) of analog circuits precisely in real time, an analog circuit fault prognostics framework is proposed in the paper.
Abstract: In order to estimate the remaining useful performance (RUP) of analog circuits precisely in real time, an analog circuit fault prognostics framework is proposed in this paper. Output voltages are extracted from circuit responses as features to calculate a cosine distance which reflects the health condition of analog circuits. A relevance vector machine (RVM), improved by the particle swarm optimization (PSO) algorithm, is applied to estimate the RUP. Twelve case studies involving a band-pass filter, a high-pass filter and a nonlinear circuit have validated the prediction performance of the approach. Simulation results demonstrate that the proposed approach has higher prediction precision.
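
The cosine-distance health feature named in the abstract is simple to sketch: compare the output-voltage feature vector of a degraded circuit against a fresh (nominal) response. The RVM + PSO prognostic model itself is not reproduced here, and the sample values are invented.

```python
# Minimal sketch of a cosine-distance health indicator for an analog circuit.
import numpy as np

def cosine_distance(fresh, aged):
    fresh, aged = np.asarray(fresh, float), np.asarray(aged, float)
    cos_sim = fresh @ aged / (np.linalg.norm(fresh) * np.linalg.norm(aged))
    return 1.0 - cos_sim   # 0 when responses match, grows as the circuit degrades

nominal  = [1.00, 0.71, 0.32, 0.10]   # hypothetical sampled output voltages (fresh)
degraded = [0.93, 0.62, 0.41, 0.19]   # hypothetical response after aging
print(cosine_distance(nominal, degraded))
```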

23 citations


Journal ArticleDOI
TL;DR: An electronic control gear circuit for a fluorescent lamp is designed for on-line condition monitoring of target aluminum electrolytic capacitors and a MOSFET under accelerated aging, allowing an operator to remotely monitor the health of life-limiting devices and perform condition-based shutdown of the circuit.
Abstract: This paper presents an on-line technique for condition-based maintenance of power electronic converters. The wear-out condition for high-failure-rate components is obtained based on parametric degradation. As per MIL Handbook 217F, electrolytic capacitors and switching transistors together constitute more than 90 % of the failures of power electronic systems. An electronic control gear circuit for a fluorescent lamp is designed for on-line condition monitoring of target aluminum electrolytic capacitors and a MOSFET under an accelerated aging condition. A low-cost microcontroller board is programmed for data acquisition and test circuit control. Data values are serially communicated to National Instruments LabVIEW software, installed on the host computer, for algorithm implementation and condition-based maintenance of the circuit. Using the web publishing tool, the control of the running-state front panel VI is continuously transferred from the local host to the client as an HTML file that is accessed in standard web browsers. The operator can remotely monitor the health of life-limiting devices and can perform condition-based shutdown of the circuit. Parametric data values of the target devices are also stored on the hard disk of the host computer in an MS Excel file.

21 citations


Journal ArticleDOI
TL;DR: A method to classify faults into permanent, intermittent and transient faults based on some intermediate signatures during embedded test or built-in self-test is presented, applicable to large digital circuits.
Abstract: With increasing transient error rates, distinguishing intermittent and transient faults is especially challenging. In addition to particle strikes, relatively high transient error rates are observed in architectures for opportunistic computing and in technologies under high variations. This paper presents a method to classify faults into permanent, intermittent and transient faults based on intermediate signatures obtained during embedded test or built-in self-test. Permanent faults are easily determined by repeating test sessions. Intermittent and transient faults can in many cases be distinguished by the number of failing test sessions. For the remaining faults, a Bayesian classification technique has been developed which is applicable to large digital circuits. The combination of these methods is able to identify intermittent faults with a probability of more than 98 %.
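
A highly simplified sketch of the session-count heuristic described above: repeat the test session several times and classify by how often it fails. The thresholds are made up, and the paper's Bayesian classifier for the ambiguous cases is not reproduced.

```python
# Simplified fault classification by failing-session count (illustrative thresholds).
def classify(session_results):
    fails = sum(session_results)            # True = session failed
    n = len(session_results)
    if fails == n:
        return "permanent"
    if fails >= 0.3 * n:                    # frequent but not constant failures
        return "intermittent"
    if fails > 0:
        return "transient (or needs Bayesian analysis)"
    return "fault-free"

print(classify([True] * 10))                               # permanent
print(classify([True, False, True, True] + [False] * 6))   # intermittent
print(classify([False] * 9 + [True]))                      # transient
```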

17 citations


Journal ArticleDOI
A. Ceratti, T. Copetti, L. Bolzani, Fabian Vargas, R.D.R. Fagundes
TL;DR: A hardware-based approach able to monitor SRAM aging during the SoC's lifetime, based on the insertion of On-Chip Aging Sensors (OCASs), is presented; the results demonstrate the sensors' capacity to detect early aging states, thereby guaranteeing high SRAM reliability.
Abstract: The increasing need to store more and more information has resulted in Static Random Access Memories (SRAMs) occupying the greatest part of Systems-on-Chip (SoCs). Therefore, SRAM robustness is considered crucial in order to guarantee the reliability of such SoCs over their lifetime. In this context, one of the most important phenomena degrading nano-scale SRAM reliability is Negative-Bias Temperature Instability (NBTI), which causes memory cell aging. The main goal of this paper is to present a hardware-based approach able to monitor SRAM aging during the SoC's lifetime, based on the insertion of On-Chip Aging Sensors (OCASs). In more detail, the proposed strategy connects one OCAS to every SRAM column, each periodically monitoring write operations on the SRAM cells. It is important to note that, in order to prevent the OCAS from aging and to reduce leakage power dissipation, the OCAS circuitry is powered off during its idle periods. The proposed hardware-based approach has been evaluated through SPICE simulations using 65 nm CMOS technology, and the results demonstrate the sensor's capacity to detect early aging states, thereby guaranteeing high SRAM reliability. To conclude, a complete analysis of the sensor's overheads is presented.

16 citations


Journal ArticleDOI
TL;DR: It is shown that for a given test the minimum test application time is achieved when the total energy is dissipated evenly at the rate of the maximum allowable power for the device under test, leading to the test time theorem.
Abstract: Power dissipated during test is a constraint when it comes to test time reduction. In this work, we show that for a given test the minimum test application time is achieved when the total energy is dissipated evenly at the rate of the maximum allowable power for the device under test. This result, the test time theorem, leads to two alternatives for reducing test time. In the first alternative, we scale the supply voltage down to reduce power, which in turn allows us to increase the clock frequency, of course within the limit imposed by the critical path. Thus, optimum voltage and frequency can be found to minimize the test time of a fixed-frequency synchronous test. In the other alternative, which also benefits from the reduced voltage, the clock period is dynamically varied so that each cycle dissipates the maximum allowable power. This test, termed aperiodic clock test, according to the theorem achieves the lower bound on test time. An illustrative example of an ISCAS'89 benchmark circuit shows a test time reduction of 71 %.
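
The bound behind the theorem can be sketched in one line: with the total test energy fixed by the pattern set and the instantaneous power never allowed to exceed the device's limit, the shortest possible test follows directly. This is a reading of the abstract's claim, not the paper's full proof.

```latex
% With total test energy E_test fixed and instantaneous power p(t) <= P_max:
E_{\text{test}} \;=\; \int_0^{T} p(t)\,dt \;\le\; P_{\max}\,T
\quad\Longrightarrow\quad
T \;\ge\; T_{\min} \;=\; \frac{E_{\text{test}}}{P_{\max}},
% with equality exactly when p(t) = P_max throughout the test, i.e. when the energy
% is dissipated evenly at the maximum allowable rate (the aperiodic clock test).
```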

15 citations


Journal ArticleDOI
TL;DR: The proposed address generator is designed and simulated by means of Xilinx ISE tools, and its switching activity is contrasted with that of the conventional LFSR and the BS-LFSR (Bit Swapping Linear Feedback Shift Register).
Abstract: With the ongoing high-speed sophistication of VLSI design, Built-in Self-Test (BIST) is emerging as an essential element of memory, which can be treated as the most essential ingredient of the System on Chip. The market offers diverse algorithms exclusively intended for investigating the memory locations. LFSRs (Linear Feedback Shift Registers) are employed extensively for generating the memory addresses, so that they can be consecutively exercised on the memory cores under test. This paper puts forward an LFSR-based address generator with a significant decrease in switching activity for low-power MBIST (Memory Built-in Self Test). In this technique, the address patterns are produced by a blend of an LFSR, a 2-bit pattern generator (Modified LFSR) and two distinct clock signals. With the efficient use of the adapted architecture, switching activity is considerably cut down. As switching activity is in direct proportion to the power consumed, scaling down the switching of the address generator inevitably reduces the power consumption of the MBIST. In this paper the proposed address generator is designed and simulated by means of Xilinx ISE tools and contrasted with the switching activities of the conventional LFSR and the BS-LFSR (Bit Swapping Linear Feedback Shift Register). The encouraging outcomes illustrate a significant reduction in switching activity, to the tune of more than 90 % of the entire dynamic power, in relation to the traditional LFSR.
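
For readers unfamiliar with LFSR address generation, the sketch below shows a plain Fibonacci LFSR stepping through a maximal-length sequence of memory addresses. The width, tap positions and seed are assumptions for illustration; the paper's Modified-LFSR with the 2-bit pattern generator and dual clocks is not reproduced.

```python
# Illustrative 4-bit Fibonacci LFSR address generator (polynomial x^4 + x^3 + 1).
def lfsr_addresses(width=4, taps=(4, 3), seed=0b1001):
    """Yield one full period of pseudo-random memory addresses."""
    state, start = seed, seed
    while True:
        yield state
        fb = 0
        for t in taps:                                  # XOR the tap bits to form feedback
            fb ^= (state >> (t - 1)) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)
        if state == start:                              # full period reached
            break

print([hex(a) for a in lfsr_addresses()])   # 15 addresses for a maximal 4-bit LFSR
```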

15 citations


Journal ArticleDOI
TL;DR: The proposed design solution for scan power alleviation permits the efficient exploitation of X-filling techniques for capture power reduction or the use of extreme (power independent) compression techniques for test data volume reduction.
Abstract: Power consumption during scan testing operations can be significantly higher than that expected in the normal functional mode of operation in the field. This may affect the reliability of the circuit under test (CUT) and/or invalidate the testing process, increasing yield loss. In this paper, a scan chain partitioning technique and a scan hold mechanism are combined for low-power scan operation. Substantial power reductions can be achieved without any impact on the test application time or the fault coverage and without the need to use scan cell reordering or clock and data gating techniques. Furthermore, the proposed design solution for scan power alleviation permits the efficient exploitation of X-filling techniques for capture power reduction or the use of extreme (power independent) compression techniques for test data volume reduction.

14 citations


Journal ArticleDOI
TL;DR: A novel method for single and multiple soft fault diagnosis of analog circuits, which employs the information contained in the frequency response function (FRF) measurements and focuses on finding models of the circuit under test as exact as possible.
Abstract: This paper provides a novel method for single and multiple soft fault diagnosis of analog circuits. The method is able to locate the faulty elements and evaluate their parameters. It employs the information contained in frequency response function (FRF) measurements and focuses on finding models of the circuit under test (CUT) that are as exact as possible. Consequently, the method is capable of finding different sets of parameters which are consistent with the diagnostic test, rather than only one specific set. To fulfil this purpose, the local polynomial approach is applied and the associated normalized FRF is developed. The proposed method is especially suitable at the pre-production stage, where corrections of the technological design are important and the diagnostic time is not crucial. Two experimental examples are presented to clarify the proposed method and prove its efficiency.

Journal ArticleDOI
TL;DR: This paper proposes a method of analog circuit fault diagnosis using high-order cumulants and information fusion, integrating voltage with current as fault eigenvectors, which are then used with an improved Error Back Propagation neural network for fault diagnosis.
Abstract: This paper proposes a method of analog circuit fault diagnosis using high-order cumulants and information fusion. We extract the original voltage and current signals from the output terminal of the circuit under test, and determine the corresponding kurtosis and skewness as fault eigenvectors, which are then used with an improved Error Back Propagation (BP) neural network for fault diagnosis. These fault eigenvectors retain information that is sometimes ignored by principal component analysis (PCA), which relies only on second-order statistics. By employing information fusion to integrate voltage with current as fault eigenvectors, the eigenvectors express fault information better. Diagnosis examples illustrate that our fault eigenvectors achieve a higher recognition rate and diagnosis accuracy.
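
The fault-eigenvector construction can be sketched directly: kurtosis and skewness of the sampled output voltage and current, fused into one feature vector. The BP neural-network classifier is not reproduced, and the signal records below are synthetic placeholders.

```python
# Sketch of a high-order-statistics fault eigenvector with voltage/current fusion.
import numpy as np
from scipy.stats import kurtosis, skew

def fault_eigenvector(voltage_samples, current_samples):
    v = np.asarray(voltage_samples, float)
    i = np.asarray(current_samples, float)
    # information fusion: concatenate higher-order statistics of both signals
    return np.array([skew(v), kurtosis(v), skew(i), kurtosis(i)])

rng = np.random.default_rng(0)
v = rng.normal(2.5, 0.1, 1000)      # hypothetical output-voltage record
i = rng.normal(0.02, 0.002, 1000)   # hypothetical output-current record
print(fault_eigenvector(v, i))
```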

Journal ArticleDOI
TL;DR: The Count Compatible Pattern Run-Length coding compression method is proposed to further improve the compression ratio and the experimental results show that the average compression ratio achieved is up to 71.73 %.
Abstract: The Count Compatible Pattern Run-Length (CCPRL) coding compression method is proposed to further improve the compression ratio. Firstly, a segment of patterns in the test set is retained. Secondly, don't-care bits are filled so as to make subsequent patterns compatible with the retained pattern as many times as possible, until no further pattern can be made compatible. Thirdly, the compatible patterns are represented by the symbol "0" (equal) and the symbol "1" (contrary) in the codeword. In addition, the number of consecutive compatible patterns is counted and expanded into binary, which indicates where the codeword ends. Finally, the proposed method is verified on the six largest ISCAS'89 benchmark circuits; the experimental results show that the average compression ratio achieved is up to 71.73 %.
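
A simplified sketch of the compatibility test at the heart of such coding: a pattern with don't-cares ('X') is either "equal-compatible" (symbol 0) or "inverse-compatible" (symbol 1) with the retained reference pattern. The run-length counting and the full codeword format are omitted; this is not the authors' code.

```python
# Compatibility check between a retained reference pattern and a subsequent pattern.
def compatible(reference, pattern):
    """Return '0' if pattern matches the reference, '1' if it matches its complement,
    or None if neither (don't-care bits 'X' match anything)."""
    def match(ref, pat, invert):
        for r, p in zip(ref, pat):
            if 'X' in (r, p):
                continue
            if invert:
                r = '1' if r == '0' else '0'
            if r != p:
                return False
        return True
    if match(reference, pattern, invert=False):
        return '0'
    if match(reference, pattern, invert=True):
        return '1'
    return None

print(compatible("10X1", "1011"))   # '0'  (equal-compatible)
print(compatible("10X1", "01X0"))   # '1'  (inverse-compatible)
print(compatible("10X1", "1100"))   # None (a new reference pattern must be retained)
```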

Journal ArticleDOI
TL;DR: An analytical examination of the results using one-way ANOVA acknowledged the statistical significance of difference between the algorithm-designing-techniques in terms of resiliency at 95 % level of confidence.
Abstract: A potential peculiarity of software systems is that a large number of soft errors are inherently derated (masked) at the software level. The rate of error derating may depend on the type of algorithms and data structures used in the software. This paper investigates the effects of the underlying algorithms of programs on the rate of error derating. Eight different benchmark programs were used in the study; each of them was implemented with four different algorithms, i.e. divide-and-conquer, dynamic, backtracking and branch-and-bound. About 10,000 errors were injected into each program in order to quantify and analyze the error-derating capabilities of the different algorithm-designing techniques. The results reveal that about 40.0 % of errors in the dynamic algorithm are derated; the figures for the backtracking, branch-and-bound and divide-and-conquer algorithms are 39.5 %, 38.1 % and 28.8 %, respectively. These results can enable software designers and programmers to select the most efficient algorithms for developing inherently resilient programs. Furthermore, an analytical examination of the results using one-way ANOVA acknowledged the statistical significance of the difference between the algorithm-designing techniques in terms of resiliency at the 95 % level of confidence.
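
The two measurements reported in the abstract are easy to sketch: the per-program derating rate (masked injections divided by total injections) and a one-way ANOVA across the four algorithm-design techniques. The per-benchmark rates below are invented for illustration, not the paper's data.

```python
# Derating-rate computation and one-way ANOVA across algorithm-design techniques.
from scipy.stats import f_oneway

def derating_rate(masked, injected):
    return masked / injected

# hypothetical per-benchmark derating rates (fractions) for each technique
dynamic          = [0.41, 0.39, 0.42, 0.38]
backtracking     = [0.40, 0.38, 0.41, 0.39]
branch_and_bound = [0.39, 0.37, 0.38, 0.38]
divide_conquer   = [0.29, 0.28, 0.30, 0.28]

f_stat, p_value = f_oneway(dynamic, backtracking, branch_and_bound, divide_conquer)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 -> significant at 95 % confidence
```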

Journal ArticleDOI
TL;DR: A framework to automatically scale down the SAT falsification complexity by utilizing the decision ordering based learning from decomposed sub-properties, which combines the advantages of both property decomposition and property clustering to reduce the overall test generation time.
Abstract: SAT-based Bounded Model Checking (BMC) is promising for automated generation of directed tests. Due to the state space explosion problem, SAT-based BMC is unsuitable to handle complex properties with large SAT instances or large bounds. In this paper, we propose a framework to automatically scale down the SAT falsification complexity by utilizing the decision ordering based learning from decomposed sub-properties. Our framework makes three important contributions: i) it proposes learning-oriented decomposition techniques for complex property falsification, ii) it proposes an efficient approach to accelerate the complex property falsification using the learning from decomposed sub-properties, and iii) it combines the advantages of both property decomposition and property clustering to reduce the overall test generation time. The experimental results using both software and hardware benchmarks demonstrate the effectiveness of our framework.

Journal ArticleDOI
TL;DR: This paper proposes a small-chip-area stochastic calibration for TDC linearity and input range and analyzes it with an FPGA, finding that the periods of both the external clock and the ring oscillator should be as short as possible while remaining more than twice the measurement range of the TDC.
Abstract: This paper proposes a small-chip-area stochastic calibration for TDC linearity and input range, and analyzes it with an FPGA. The proposed calibration estimates the absolute values of the buffer delays and the range of measurement statistically. The hardware implementation of the proposed calibration requires a single counter to construct the histogram, so the extra area for the proposed calibration is small. Because the implementation is fully digital, it is easily implemented on digital LSIs such as FPGAs, microprocessors, and SoCs. Experiments with a Xilinx Virtex-5 LX FPGA ML501 reveal that, for fast convergence, the periods of both the external clock and the ring oscillator should be as short as possible while remaining more than twice the measurement range of the TDC, with the oscillation period of the ring oscillator longer than that of the external clock. The time required for the proposed calibration is 0.08 ms, and the hardware resources (LUTs and FFs) required for the FPGA implementation are 24.1 % and 22.2 % of the conventional implementation, respectively.
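
The statistical estimation behind such a calibration follows the classic code-density idea: when the TDC input is uncorrelated with its clock, each bin's hit count is proportional to that buffer's delay. The sketch below models only this estimation step; the single-counter hardware scheme and the range calibration are not reproduced, and the delay values are invented.

```python
# Code-density estimation of per-buffer delays from a histogram of TDC output codes.
import numpy as np

def estimate_bin_delays(codes, n_bins, measurement_range):
    """codes: observed TDC output codes; returns per-buffer delay estimates."""
    hist = np.bincount(codes, minlength=n_bins).astype(float)
    return measurement_range * hist / hist.sum()   # delay_i proportional to hit count of bin i

rng = np.random.default_rng(1)
true_delays = np.array([9.0, 11.0, 10.0, 12.0, 8.0])     # ps, hypothetical buffer delays
edges = np.cumsum(true_delays)
samples = rng.uniform(0, edges[-1], 100_000)              # hit times uncorrelated with the clock
codes = np.searchsorted(edges, samples, side="right")
print(estimate_bin_delays(codes, len(true_delays), edges[-1]))   # approximately true_delays
```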

Journal ArticleDOI
TL;DR: The proposed error correction code uses a triplication error correction scheme as a crosstalk avoidance code (CAC), with a parity bit added to enhance the error correction capability and make the on-chip interconnects robust against errors.
Abstract: As technology scales down, shrinking geometry and layout dimensions, on-chip interconnects are exposed to different noise sources such as crosstalk coupling, supply voltage fluctuation and temperature variation that cause random and burst errors. These errors affect the reliability of the on-chip interconnects. Hence, error correction codes integrated with noise reduction techniques are incorporated to make the on-chip interconnects robust against errors. The proposed error correction code uses a triplication error correction scheme as a crosstalk avoidance code (CAC), and a parity bit is added to it to enhance the error correction capability. The proposed error correction code corrects all one-bit and two-bit error patterns. The proposed code also corrects 7 out of the 10 possible three-bit error patterns and detects burst errors of length three. A Hybrid Automatic Repeat Request (HARQ) system is employed when a burst error of length three occurs. The performance of the proposed codec is evaluated for residual flit error rate, codec area, power, delay, average flit latency and link energy consumption. The proposed codec achieves a residual flit error rate four orders of magnitude lower and a link energy reduction of over 53 % compared to other existing error correction schemes. Besides the low residual flit error rate and link energy reduction, the proposed codec also achieves up to 4.2 % less area and up to 6 % less codec power consumption compared to other error correction codes. The small codec area, low codec power consumption, low link energy and low residual flit error rate make the proposed code appropriate for on-chip interconnection links.
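
A minimal sketch of the coding idea named above: each payload bit is triplicated (which also acts as a crosstalk-avoidance code) and a single parity bit over the payload is appended. Only majority-vote decoding of single flips is shown; the parity-assisted correction of heavier error patterns and the HARQ path are omitted.

```python
# Triplication-plus-parity encoder/decoder with bitwise majority voting.
def encode(bits):
    parity = sum(bits) & 1
    return [b for bit in bits for b in (bit, bit, bit)] + [parity]

def decode(codeword):
    payload, parity = codeword[:-1], codeword[-1]
    triplets = [payload[i:i + 3] for i in range(0, len(payload), 3)]
    data = [1 if sum(t) >= 2 else 0 for t in triplets]   # bitwise majority vote
    parity_ok = (sum(data) & 1) == parity
    return data, parity_ok

cw = encode([1, 0, 1, 1])
cw[4] ^= 1                      # single wire flip inside the second triplet
print(decode(cw))               # ([1, 0, 1, 1], True) -> corrected
```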

Journal ArticleDOI
TL;DR: The results demonstrate an evident reduction of the recovery time due to fast error detection time and selective partial reconfiguration of faulty domains, and the methodology drastically reduces Cross-Domain Errors in Look-Up Tables and routing resources.
Abstract: The rapid adoption of FPGA-based systems in space and avionics demands dependability rules from the design to the layout phases to protect against radiation effects. Triple Modular Redundancy (TMR) is a widely used fault tolerance methodology to protect circuits implemented on SRAM-based FPGAs against radiation-induced Single Event Upsets (SEUs). The accumulation of SEUs in the configuration memory can cause the TMR replicas to fail, requiring a periodic write-back of the configuration bit-stream. The associated system downtime due to scrubbing and the probability of simultaneous failures of two TMR domains increase with growing device densities. We propose a methodology to reduce the recovery time of TMR circuits with increased resilience to Cross-Domain Errors. Our methodology consists of an automated tool-flow for fine-grain error detection, error flag convergence and non-overlapping domain placement. The fine-grain error detection logic identifies the faulty domain using gate-level functions, while the error flag convergence logic reduces the overwhelming number of flag signals. The non-overlapping placement enables selective domain reconfiguration and greatly reduces the number of Cross-Domain Errors. Our results demonstrate an evident reduction of the recovery time due to fast error detection and selective partial reconfiguration of faulty domains. Moreover, the methodology drastically reduces Cross-Domain Errors in Look-Up Tables and routing resources. The improvements in recovery time and fault tolerance are achieved at an area overhead of a single LUT per majority voter in TMR circuits.

Journal ArticleDOI
TL;DR: This paper proposes an alternative solution, based on a functional approach, in which the test is performed by forcing the processor to execute a specially written test program, and checking the resulting behavior of the processor.
Abstract: Superscalar processors have the ability to execute instructions out of order to better exploit the internal hardware and to maximize performance. To maintain in-order instruction commitment and to guarantee the correctness of the final results (as well as precise exception management), the Reorder Buffer (ROB) may be used. From the architectural point of view, the ROB is a memory array of several thousands of bits that must be tested against hardware faults to ensure correct behavior of the processor. Since it is deeply embedded within the microprocessor circuitry, the most straightforward approach to test the ROB is through Built-In Self-Test solutions, which are typically adopted by manufacturers for end-of-production test. However, these solutions may not always be used for test during the operational phase (in-field test), which aims at detecting possible hardware faults arising when the electronic system works in its target environment. In fact, these solutions require the usage of test infrastructures that may not be accessible and/or documented, or simply not usable during the operational phase. This paper proposes an alternative solution, based on a functional approach, in which the test is performed by forcing the processor to execute a specially written test program and checking the resulting behavior of the processor. This approach can be adopted for in-field test, e.g., at power-on, power-off, or during the time slots unused by the system application. The method has been validated using both an architectural and a memory fault simulator.

Journal ArticleDOI
TL;DR: A differential temperature sensor for on-chip signal and DC power monitoring is presented for built-in testing and calibration applications and has a simulated sensitivity that is tunable up to 210 mV/°C with a corresponding dynamic range of 13 °C.
Abstract: A differential temperature sensor for on-chip signal and DC power monitoring is presented for built-in testing and calibration applications. The amplifiers in the sensor are designed with class AB output stages to extend the dynamic range of the temperature/power measurements. Two high-gain amplification stages are used to achieve high sensitivity to temperature differences at points close to devices under test. Designed in 0.18 μm CMOS technology, the sensor has a simulated sensitivity that is tunable up to 210 mV/°C with a corresponding dynamic range of 13 °C. The sensor consumes 2.23 mW from a 1.8 V supply. A low-power version of the sensor was designed that consumes 1.125 mW from a 1.8 V supply and has a peak sensitivity of 185.7 mV/°C over an 8 °C dynamic range.

Journal ArticleDOI
TL;DR: A novel reseeding architecture containing a net-selection logic module and an LFSR with some inversion logic is presented to generate all the required seeds on-chip in real time with no external or internal storage requirement.
Abstract: LFSR reseeding techniques are widely adopted in logic BIST to enhance fault detectability and shorten test-application time for integrated circuits. In order to achieve complete fault coverage, previous reseeding methods often need a prohibitive amount of memory to store all required seeds. In this paper, a new LFSR reseeding technique is presented, which employs the responses of internal nets of the circuit itself as the control signals for changing LFSR states. A novel reseeding architecture containing a net-selection logic module and an LFSR with some inversion logic is presented to generate all the required seeds on-chip in real time with no external or internal storage requirement. Experimental results on ISCAS and large ITC circuits show that the presented technique can achieve 100 % fault coverage with short test time by using only 0.23 % to 2.75 % of internal nets and with 2.35 % to 4.56 % gate area overhead on average for reseeding control, without degrading the original circuit performance.

Journal ArticleDOI
TL;DR: This paper discusses multiple methods of Single-Event Transient measurements on a commercial DC/DC Pulse Width Modulator (PWM) and the correlations between the heavy ion, pulsed laser and proton data are analyzed and presented.
Abstract: This paper discusses multiple methods of Single-Event Transient (SET) measurement on a commercial DC/DC Pulse Width Modulator (PWM). Heavy ions, protons, and a pulsed laser are used in the experiments. The correlations between the heavy ion, pulsed laser and proton data are analyzed and presented. A proton cross-section model is used to derive the proton cross-section from heavy ion test data. The calculated result is close to the real proton data, which means the heavy ion and proton data fit well. The relationship between pulsed laser and proton data is also analyzed, using heavy ion data as a medium.

Journal ArticleDOI
TL;DR: This paper proposes an intra-cell diagnosis method based on the “Effect-Cause” paradigm aiming at locating the root cause of the observed failures inside a logic cell using the Critical Path Tracing here applied at transistor level.
Abstract: Diagnosis is the process of isolating possible sources of observed failures in a defective circuit. Today, manufacturing defects appear not only in the cell interconnections, but also inside the cells themselves (intra-cell defects). State-of-the-art diagnosis approaches can identify the defect location at gate level (i.e., one or more standard cells and/or interconnections can be provided as possible defect locations). Some approaches have been developed to target intra-cell defects. In this paper, we propose an intra-cell diagnosis method based on the "Effect-Cause" paradigm aiming at locating the root cause of the observed failures inside a logic cell. It is based on Critical Path Tracing (CPT), here applied at transistor level. The main characteristic of our approach is that it exploits the analysis of the faulty behavior induced by the actual defect. In other words, we locate the defect by simply analyzing the effect induced by the defect itself. The advantage is that we are defect independent (i.e., we do not have to explicitly consider the type and the size of the defect). Moreover, since the complexity of a single cell in terms of transistor count is low, the proposed intra-cell diagnosis approach requires negligible computational time. The efficiency of the proposed approach has been evaluated by means of experimental results carried out on both simulation-based and industrial silicon data case studies.

Journal ArticleDOI
TL;DR: A new time-domain algorithm is proposed to estimate the three aforementioned mismatch errors, and then a calibration method is put forward to calibrate the mismatch errors.
Abstract: In data acquisition systems, the time-interleaved analog-to-digital converter (TIADC) architecture can efficiently increase the maximum sample rate of the whole system. However, inevitable offset mismatch, gain mismatch, and timing error between the time-interleaved channels degrade the sampling performance. To address these mismatches, this paper first proposes a new time-domain algorithm to estimate the three aforementioned mismatch errors, and then puts forward a calibration method to correct them. Finally, numerical simulations are presented to verify the proposed estimation and calibration algorithm.
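
The simplest part of such a calibration can be sketched in a few lines: per-channel offset and gain mismatch are estimated in the time domain from the interleaved samples and the channels are equalized to common reference statistics. The timing-skew term and the authors' actual estimator are not reproduced, and the signal and mismatch values are invented.

```python
# Blind offset/gain equalization of a two-channel time-interleaved record.
import numpy as np

def equalize_channels(samples, n_channels):
    x = np.asarray(samples, float)
    subs = [x[ch::n_channels] for ch in range(n_channels)]
    ref_mean = np.mean([s.mean() for s in subs])       # common reference statistics
    ref_std = np.mean([s.std() for s in subs])
    out = x.copy()
    for ch, s in enumerate(subs):
        out[ch::n_channels] = (s - s.mean()) / s.std() * ref_std + ref_mean
    return out

t = np.arange(4096)
measured = np.sin(2 * np.pi * 0.0137 * t)
measured[0::2] = 1.05 * measured[0::2] + 0.02          # channel 0: gain and offset error

for name, y in (("before", measured), ("after", equalize_channels(measured, 2))):
    print(name, round(y[0::2].mean() - y[1::2].mean(), 4),   # inter-channel offset mismatch
          round(y[0::2].std() - y[1::2].std(), 4))           # inter-channel gain mismatch
```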

Journal ArticleDOI
TL;DR: In this paper, a low-cost concurrent error detection (CED) scheme for 7 authenticated encryption (AE) architectures is presented; the proposed technique exploits idle cycles of the AE mode architectures, and the performance overhead can be lower than 100 % for all architectures depending on the workload.
Abstract: In many applications, encryption alone does not provide enough security. To enhance security, dedicated authenticated encryption (AE) modes have been devised. Galois/Counter Mode (GCM) and Counter with CBC-MAC mode (CCM) are the AE modes recommended by the National Institute of Standards and Technology. To support high data rates, AE modes are usually implemented in hardware. However, natural faults reduce the hardware's reliability and may undermine both its encryption and authentication capability. We present a low-cost concurrent error detection (CED) scheme for 7 AE architectures. The proposed technique exploits idle cycles of the AE mode architectures. Experimental results show that the performance overhead can be lower than 100 % for all architectures depending on the workload. FPGA implementation results show that the hardware overhead is in the 0.1 % to 23.3 % range and the power overhead is in the 0.2 % to 23.2 % range. ASIC implementation results show that the hardware overhead is in the 0.1 % to 22.8 % range and the power overhead is in the 0.3 % to 12.6 % range. The underlying block cipher and hash module need not have CED built in. Thus, the scheme allows system designers to integrate block cipher and hash function intellectual property from different vendors.

Journal ArticleDOI
TL;DR: A new hybrid fault-tolerant architecture for robustness improvement of digital CMOS circuits and systems that combines different types of redundancies: information redundancy for error detection, temporal redundancy for soft error correction and hardware redundancy for hard error correction is presented.
Abstract: This paper presents a new hybrid fault-tolerant architecture for robustness improvement of digital CMOS circuits and systems. It targets all kinds of errors in the combinational part of logic circuits and thus can be combined with advanced SEU protection techniques for sequential elements while reducing the power consumption. The proposed architecture combines different types of redundancies: information redundancy for error detection, temporal redundancy for soft error correction and hardware redundancy for hard error correction. Moreover, it uses a pseudo-dynamic comparator for SET and timing error detection. The proposed method also aims to reduce the power consumption of fault-tolerant architectures while keeping a comparable area overhead with respect to existing solutions. Results on the largest ISCAS'85 and ITC'99 benchmark circuits show that our approach has an area cost of about 3 % to 6 % with a power consumption saving of about 33 % compared to TMR architectures.

Journal ArticleDOI
TL;DR: To improve the DFT efficiency, a DFT process based on test point allocation is proposed, in which the set of optimal test points will be automatically allocated according to the signal reachability under the constraints of testability criteria.
Abstract: Traditional design for testability (DFT) is arduous and time-consuming because of the iterative process of testability assessment and design modification. To improve DFT efficiency, a DFT process based on test point allocation is proposed. In this process, the set of optimal test points is automatically allocated according to signal reachability under the constraints of testability criteria. Thus, the iterative DFT process is completed by computer and the test engineers are freed to concentrate on the system design rather than the repetitive modification process. To perform test point allocation, the dependency matrix of signals to potential test points (SP-matrix) is defined based on the multi-signal flow graph. Then, a genetic algorithm (GA) is adopted to search for the optimal test point allocation based on the SP-matrix. Finally, an experiment is carried out to evaluate the effectiveness of the algorithm.
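
To make the SP-matrix idea concrete, the sketch below scores candidate test-point allocations against a tiny signal-to-test-point dependency matrix and picks the smallest allocation that makes every signal observable. The matrix values are invented, and brute force stands in for the genetic-algorithm search described in the abstract.

```python
# Scoring test-point allocations against a signal-to-test-point dependency matrix.
from itertools import combinations

# rows: signals, columns: candidate test points; 1 = signal reachable at that point
SP = [
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 1],
]

def detected(selection):
    """A signal is detectable if at least one selected test point reaches it."""
    return sum(any(row[tp] for tp in selection) for row in SP)

best = min(
    (sel for r in range(1, 5) for sel in combinations(range(4), r)
     if detected(sel) == len(SP)),
    key=len,
)
print(best)   # smallest set of test points that makes every signal observable
```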

Journal ArticleDOI
TL;DR: A novel manipulation scheme of wafer named n-sector symmetry and cut (SSCn) is proposed, in which wafers with rotational symmetry are cut into n identical sectors, where n is a suitably chosen integer, to improve the yield.
Abstract: Three-dimensional ICs (3D ICs) exhibit various advantages over traditional two-dimensional ICs (2D ICs), including heterogeneous integration, reduced delay and power dissipation, compact device dimensions, etc. Wafer-on-wafer stacking offers practical advantages in 3D IC fabrication, but it suffers from low compound yield. To improve the yield, a novel wafer manipulation scheme named n-sector symmetry and cut (SSCn) is proposed. In this method, wafers with rotational symmetry are cut into n identical sectors, where n is a suitably chosen integer. The sectors are then used to replenish repositories. The SSCn method is combined with a best-pair matching algorithm for compound yield evaluation. Simulation of wafers with nine different defect distributions shows that the previously known plain rotation of wafers offers only trivial benefits in yield. A cut number of four is optimal for most of the defect models. SSC4 provides significantly higher yield, and the advantage becomes more obvious as the repository size and the number of stacked layers increase. The cost model of SSCn is analyzed and the cost-effectiveness of SSC4 is established. The observations made are: 1) the cost benefits of SSC4 become larger as the manufacturing overhead of SSC4 becomes smaller, 2) the cost improvement of SSC4 over the conventional basic method increases as the number of stacked layers increases, and 3) for most defect models, SSC4 largely reduces the cost even when the manufacturing overhead of SSC4 is considered to be very large.
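
A toy illustration of the compound-yield problem that wafer matching targets: each wafer is a bit-map of good (1) and bad (0) die sites, and stacking two wafers yields only the sites good on both. Best-pair matching is done here by brute force over a tiny two-wafer repository; the sector-cut manipulation itself is not modeled, and all wafer maps are invented.

```python
# Best-pair matching of wafer maps vs. naive in-order pairing for wafer-on-wafer stacking.
from itertools import permutations

repo_a = [0b1101, 0b1011, 0b0111]      # wafer maps in repository A (4 die sites each)
repo_b = [0b1110, 0b1011, 0b1101]      # wafer maps in repository B

def compound_yield(pairing):
    good = sum(bin(a & b).count("1") for a, b in pairing)   # dies good on both wafers
    return good / (4 * len(pairing))

best = max((list(zip(repo_a, p)) for p in permutations(repo_b)), key=compound_yield)
print(compound_yield(best))                       # matched pairing
print(compound_yield(list(zip(repo_a, repo_b))))  # naive in-order pairing (lower yield)
```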

Journal ArticleDOI
TL;DR: A scalable current-based dynamic method is presented to estimate both the IR and L di/dt drop caused by simultaneous switching activity and to predict the resulting increase in path delay; the technique can be integrated with existing ATPG tools.
Abstract: Power-supply noise is one of the major contributing factors to yield loss in sub-micron designs. Excessive switching in test mode causes the supply voltage to droop more than in functional mode, leading to failures in delay tests that would not occur otherwise under normal operation. Thus, there exists a need to accurately estimate on-chip supply noise early in the design phase to meet power requirements in normal mode and during test, to prevent overstimulation during the test cycle and avoid false failures. Simultaneous switching activity (SSA) of several logic components is one of the main sources of power-supply noise (PSN), which results in reduction of the supply voltages at the power supplies of the logic gates. Most existing techniques and tools predict static IR drop, which accounts for only part of the total voltage drop on the power grid. To our knowledge, inductive drop is not included in current noise analysis, for simplification. The power delivery networks in today's very deep-submicron chips are susceptible to slight variations that cause sudden large current spikes, leading to higher L di/dt drop than resistive drop and making it necessary to account for this component. Power-supply noise also impacts circuit operation, incurring a significant increase in path delays. However, it is infeasible to carry out full-chip SPICE-level simulations on a design to validate the ATPG-generated test patterns. Accurate and efficient techniques are required to quantify supply noise and its impact on path delays to ensure reliable operation both in mission mode and during test. We present a scalable current-based dynamic method to estimate both the IR and L di/dt drop caused by simultaneous switching activity and use the technique to predict the increase in path delay. Our technique uses simulations of individual extracted paths instead of time-consuming full-chip simulations, and thus it can be integrated with existing ATPG tools. The method uses these path simulations and a convolution-based technique to estimate power-supply noise and path delays. Simulation results for combinational and sequential benchmark circuits are presented, demonstrating the effectiveness of our techniques.
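
The two supply-noise components the abstract separates can be illustrated numerically: the resistive drop R*i(t) and the inductive drop L*di/dt produced by a switching-current spike. The lumped R and L values and the triangular current waveform below are invented, and the paper's convolution with an extracted grid response is not reproduced.

```python
# Tiny numeric illustration of IR drop versus L di/dt drop from a current spike.
import numpy as np

dt = 1e-12                                   # 1 ps time step
R, L = 0.05, 5e-12                           # hypothetical lumped grid resistance and inductance

i = np.zeros(2000)                           # switching-current waveform (amperes)
i[100:200] = np.linspace(0.0, 0.4, 100)      # fast ramp-up from simultaneous switching
i[200:400] = np.linspace(0.4, 0.0, 200)      # slower decay

v_ir = R * i                                 # resistive (IR) drop
v_ldi = L * np.gradient(i, dt)               # inductive (L di/dt) drop
print(f"peak IR drop     : {v_ir.max() * 1e3:.1f} mV")
print(f"peak L di/dt drop: {v_ldi.max() * 1e3:.1f} mV")
```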

Journal ArticleDOI
TL;DR: Three techniques for hardening dynamic logic are presented and evaluated: layout manipulation using charge sharing, addition of a feedback capacitor across the static inverter, and dual-rail domino logic with differential keepers.
Abstract: Dynamic logic families are commonly used in high-speed applications, but they are susceptible to single event errors. This paper presents and evaluates three techniques for hardening dynamic logic: layout manipulation using charge sharing, addition of a feedback capacitor across the static inverter, and dual-rail domino logic with differential keepers. The layout-based design has better single event tolerance by sharing charge between the NFET devices of the dynamic and static inverters; the design with a feedback capacitor makes the keeper more effective in recovering the hit node because of the increased propagation delay; the differential-keeper structure shows superior SET performance because the hit node can recover through the restoring path in the case of charge loss. These proposed designs, along with the reference traditional keeper-based design, were fabricated in a 130 nm technology node as shift register chains and then irradiated with heavy ion particles. Experimental results verified the mechanisms and effectiveness of the proposed designs.