
Showing papers in "Journal of Electronic Testing in 1999"


Journal ArticleDOI
TL;DR: New techniques for speeding up deterministic test pattern generation for VLSI circuits by reducing the number of backtracks at low computational cost are presented and incorporated into an advanced ATPG system for combinational circuits called ATOM.
Abstract: This paper presents new techniques for speeding up deterministic test pattern generation for VLSI circuits. These techniques improve the PODEM algorithm by reducing the number of backtracks at a low computational cost. This is achieved by finding more necessary signal line assignments, by detecting conflicts earlier, and by avoiding unnecessary work during test generation. We have incorporated these techniques into an advanced ATPG system for combinational circuits, called ATOM. Performance results for the ISCAS85 and full-scan versions of the ISCAS89 benchmark circuits demonstrate the effectiveness of these techniques on test generation performance. ATOM detected all the testable faults and proved all the redundant faults to be redundant with a small number of backtracks in a short amount of time.
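The search strategy the abstract alludes to can be pictured with a toy example. The sketch below is not ATOM or PODEM itself, but a much-simplified backtracking justification on a hypothetical three-gate netlist: primary inputs are assigned one at a time, the circuit is forward-simulated with unknown values, and the search backtracks as soon as the objective becomes unreachable, which is the kind of early conflict detection that keeps the backtrack count low.

```python
# Hypothetical gate-level netlist (names and structure invented for illustration).
GATES = {                      # output: (gate type, input lines), in topological order
    "g1": ("AND", ["a", "b"]),
    "g2": ("OR",  ["g1", "c"]),
    "g3": ("AND", ["g2", "d"]),
}
PIS = ["a", "b", "c", "d"]

def evaluate(assign):
    """Forward-simulate; unassigned lines propagate as None (unknown)."""
    vals = dict(assign)
    for out, (typ, ins) in GATES.items():
        ivs = [vals.get(i) for i in ins]
        if typ == "AND":
            vals[out] = 0 if 0 in ivs else (1 if all(v == 1 for v in ivs) else None)
        elif typ == "OR":
            vals[out] = 1 if 1 in ivs else (0 if all(v == 0 for v in ivs) else None)
    return vals

backtracks = 0

def justify(target, value, assign, remaining):
    """Assign PIs one at a time; backtrack as soon as the objective conflicts."""
    global backtracks
    vals = evaluate(assign)
    if vals.get(target) == value:
        return assign                      # objective met
    if vals.get(target) == 1 - value or not remaining:
        backtracks += 1                    # conflict: undo the last decision
        return None
    pi, rest = remaining[0], remaining[1:]
    for v in (1, 0):                       # try both values for the next PI
        result = justify(target, value, {**assign, pi: v}, rest)
        if result is not None:
            return result
    return None

test = justify("g2", 0, {}, PIS)           # justify a 0 on line g2
print("test vector:", test, "backtracks:", backtracks)
```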

131 citations


Journal ArticleDOI
TL;DR: It is shown that statistical encoding of precomputed test sequences can be combined with low-cost pattern decoding to provide deterministic BIST with practical levels of overhead and higher fault coverage than pseudorandom testing.
Abstract: We present a new pattern generation approach for deterministic built-in self testing (BIST) of sequential circuits. Our approach is based on precomputed test sequences, and is especially suited to sequential circuits that contain a large number of flip-flops but relatively few controllable primary inputs. Such circuits, often encountered as embedded cores and as filters for digital signal processing, are difficult to test and require long test sequences. We show that statistical encoding of precomputed test sequences can be combined with low-cost pattern decoding to provide deterministic BIST with practical levels of overhead. Optimal Huffman codes and near-optimal Comma codes are especially useful for test set encoding. This approach exploits recent advances in automatic test pattern generation for sequential circuits and, unlike other BIST schemes, does not require access to a gate-level model of the circuit under test. It can be easily automated and integrated with design automation tools. Experimental results for the ISCAS 89 benchmark circuits show that the proposed method provides higher fault coverage than pseudorandom testing with shorter test application time and low to moderate hardware overhead.
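As a rough illustration of statistical test-set encoding (not the paper's exact decoder architecture), the sketch below Huffman-codes the symbols of a hypothetical precomputed scan-in sequence and reports the compression obtained; all data in it are made up.

```python
import heapq
from collections import Counter

# Hypothetical 4-bit slices of a precomputed scan-in sequence.
symbols = ["0001"] * 7 + ["0011"] * 2 + ["0000"]
freq = Counter(symbols)

# Build the Huffman tree: repeatedly merge the two least frequent nodes.
heap = [(n, i, sym) for i, (sym, n) in enumerate(freq.items())]
heapq.heapify(heap)
uid = len(heap)
while len(heap) > 1:
    n1, _, left = heapq.heappop(heap)
    n2, _, right = heapq.heappop(heap)
    heapq.heappush(heap, (n1 + n2, uid, (left, right)))
    uid += 1

def assign_codes(node, prefix="", table=None):
    """Walk the tree and assign a binary codeword to each symbol."""
    table = {} if table is None else table
    if isinstance(node, str):
        table[node] = prefix or "0"        # degenerate single-symbol case
    else:
        assign_codes(node[0], prefix + "0", table)
        assign_codes(node[1], prefix + "1", table)
    return table

codes = assign_codes(heap[0][2])
encoded_bits = sum(len(codes[s]) for s in symbols)
print(codes)
print("raw bits:", 4 * len(symbols), "encoded bits:", encoded_bits)
```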

52 citations


Journal ArticleDOI
TL;DR: This paper addresses the problem of testing the RAM mode of the LUT/RAM modules of configurable SRAM-based Field Programmable Gate Arrays (FPGAs) using a minimum number of test configurations and proposes a unique test configuration called ‘pseudo shift register’ for an m × m array of modules.
Abstract: This paper addresses the problem of testing the RAM mode of the LUT/RAM modules of configurable SRAM-based Field Programmable Gate Arrays (FPGAs) using a minimum number of test configurations. A model of the architecture of the LUT/RAM module with N inputs and 2^N memory cells is proposed, taking into account the LUT and RAM modes. Targeting the RAM mode, we demonstrate that a single test configuration is required for a single module. The problem is shown to be equivalent to testing a classical SRAM circuit, which allows existing algorithms such as March tests to be used. We also propose a unique test configuration called ‘pseudo shift register’ for an m × m array of modules. In the proposed configuration, the circuit operates as a shift register, and an adapted version of the MATS++ algorithm called ‘shifted MATS++’ is described.
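For reference, the sketch below runs the classical MATS++ March test against a small simulated RAM with one injected stuck-at cell. It is a generic illustration of the algorithm mentioned above, not the paper's ‘shifted MATS++’ or its pseudo-shift-register configuration.

```python
class FaultyRAM:
    """Simple RAM model with an optional stuck-at cell injected for demonstration."""
    def __init__(self, size, stuck_addr=None, stuck_val=None):
        self.mem = [0] * size
        self.stuck_addr, self.stuck_val = stuck_addr, stuck_val
    def write(self, addr, val):
        self.mem[addr] = self.stuck_val if addr == self.stuck_addr else val
    def read(self, addr):
        return self.mem[addr]

def mats_pp(ram, size):
    """MATS++: up(w0); up(r0, w1); down(r1, w0, r0). Returns failing addresses."""
    fails = set()
    for a in range(size):                 # up(w0)
        ram.write(a, 0)
    for a in range(size):                 # up(r0, w1)
        if ram.read(a) != 0: fails.add(a)
        ram.write(a, 1)
    for a in reversed(range(size)):       # down(r1, w0, r0)
        if ram.read(a) != 1: fails.add(a)
        ram.write(a, 0)
        if ram.read(a) != 0: fails.add(a)
    return fails

ram = FaultyRAM(16, stuck_addr=5, stuck_val=0)   # cell 5 stuck-at-0
print("failing addresses:", mats_pp(ram, 16))
```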

43 citations


Journal ArticleDOI
TL;DR: A global design-for-test methodology for testing a core-based system in its entirety is developed by introducing a “bypass” mode for each core, by which data can be transferred from a core input port to the output port without interfering with the core circuitry itself.
Abstract: The purpose of this paper is to develop a global design-for-test methodology for testing a core-based system in its entirety. This is achieved by introducing a “bypass” mode for each core, by which data can be transferred from a core input port to the output port without interfering with the core circuitry itself. The interconnections are thoroughly tested because they are used to propagate test data (patterns or signatures) through the system. The system is modeled as a directed weighted graph in which the accessibility (of the core input and output ports) is solved as a shortest path problem. Finally, a pipelined test schedule is made to overlap accessing input ports (to send test patterns) and output ports (to observe the signatures). The experimental results show higher fault coverage and shorter test time.
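The accessibility computation described above reduces to single-source shortest paths on the directed weighted graph of core ports. The sketch below shows that reduction on a hypothetical three-core system (all port names and edge weights are invented); it is the standard Dijkstra formulation, not the paper's tool.

```python
import heapq

# Edge weight ~ cost of transporting test data over that connection or bypass path.
graph = {
    "chip_in":  [("coreA_in", 1)],
    "coreA_in": [("coreA_out", 2)],                     # bypass mode through core A
    "coreA_out": [("coreB_in", 1), ("coreC_in", 3)],
    "coreB_in": [("coreB_out", 2)],
    "coreB_out": [("chip_out", 1)],
    "coreC_in": [("coreC_out", 4)],
    "coreC_out": [("chip_out", 1)],
    "chip_out": [],
}

def dijkstra(src):
    """Cheapest access cost from src to every reachable port."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

print(dijkstra("chip_in"))   # access cost from the chip inputs to every port
```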

40 citations


Journal ArticleDOI
TL;DR: It is shown that a test sequence generated to target stuck-at faults can reasonably guarantee short-defect detection up to a limit given by the Analog Detectability Intervals.
Abstract: This paper analyzes the possibilities and limitations of defect detection using fault-model-oriented test sequences. The analysis is conducted through the example of a short defect, considering the static voltage test technique. Firstly, the problem of defect excitation and effect propagation is studied. It is shown that the effect can be either a defective effect or a defect-free effect depending on the value of unpredictable parameters. The concept of ‘Analog Detectability Interval’ (ADI) is used to represent the range of the unpredictable parameters that creates a defective effect. It is demonstrated that the ADIs are pattern dependent. New concepts (‘Global ADI’, ‘Covered ADI’) are then proposed to optimize defect detection taking the unpredictable parameters into account. Finally, the ability of a fault-oriented test sequence to detect defects is discussed. In particular, it is shown that a test sequence generated to target stuck-at faults can reasonably guarantee short-defect detection up to a limit given by the Analog Detectability Intervals.

31 citations


Journal ArticleDOI
TL;DR: A new reverse simulation approach to analog and mixed-signal circuit test generation that parallels digital test generation and defines the necessary tolerances on circuit structural components, in order to keep the output circuit signal within the envelope specified by the designer.
Abstract: We describe a new reverse simulation approach to analog and mixed-signal circuit test generation that parallels digital test generation. We invert the analog circuit signal flow graph, reverse simulate it with good and bad machine outputs, and obtain test waveforms and component tolerances, given circuit output tolerances specified by the functional test needs of the designer. The inverted graph allows backtracing to justify analog outputs with analog input sinusoids. Mixed-signal circuits can be tested using this approach, and we present test generation results for two mixed-signal circuits and four analog circuits, one being a multiple-input, multiple-output circuit. This analog backtrace method can generate tests for second-order analog circuits and certain non-linear circuits. These cannot be handled by existing methods, which lack a fault model and a backtrace method. Our proposed method also defines the necessary tolerances on circuit structural components, in order to keep the output circuit signal within the envelope specified by the designer. This avoids the problem of overspecifying analog circuit component tolerances, and reduces cost. We prove that our parametric fault tests also detect all catastrophic faults. Unlike prior methods, ours is a structural, rather than functional, analog test generation method.

26 citations


Journal ArticleDOI
TL;DR: A new approach that addresses both design validation and hardware testing from the early stages of the design flow is proposed and shown to be efficient on a set of representative circuits.
Abstract: In this paper we propose a new approach that addresses both design validation and hardware testing from the early stages of the design flow. The approach consists in adapting mutation testing, a software testing method, to circuits described in VHDL. At the functional level, the approach behaves as a design validation method, and at the hardware level as a classical ATPG. Standard software test metrics are used for assessing the quality of the design validation process, and the hardware fault coverage for assessing the test quality at the hardware level. An enhancement process that allows design validation to be efficiently reused for hardware testing is detailed. The approach is shown to be efficient on a set of representative circuits.

20 citations


Journal ArticleDOI
TL;DR: A new neural network-based fault classification strategy for hard multiple faults in analog circuits is proposed, which reveals very high classification accuracy in both training and testing stages.
Abstract: A new neural network-based fault classification strategy for hard multiple faults in analog circuits is proposed. The magnitudes of the harmonics of the Fourier components of the circuit response at different test nodes under a sinusoidal input signal are first measured or simulated. A selection criterion for determining the best components that describe the circuit behaviour under fault-free (nominal) and faulty conditions is presented. An algorithm that estimates the overlap between different faults in the measurement space is also introduced. The learning vector quantization neural network is then effectively trained to classify circuit faults. Performance measures reveal very high classification accuracy in both the training and testing stages. Two different examples, which demonstrate the proposed strategy, are described.
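The classifier named above is a learning vector quantization (LVQ) network. The sketch below is a minimal LVQ1 training loop on synthetic data standing in for harmonic-magnitude feature vectors; the class layout, learning rate, and data are assumptions for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: 3 fault classes, 2-dimensional feature vectors
# (e.g. magnitudes of two selected harmonics at a test node).
centers = np.array([[1.0, 0.2], [0.3, 1.1], [0.8, 0.9]])
X = np.vstack([c + 0.05 * rng.standard_normal((40, 2)) for c in centers])
y = np.repeat(np.arange(3), 40)

# One prototype per class, initialised from a sample of that class.
protos = np.array([X[y == k][0] for k in range(3)], dtype=float)
labels = np.arange(3)

alpha = 0.05
for epoch in range(30):
    for xi, yi in zip(X, y):
        k = np.argmin(np.linalg.norm(protos - xi, axis=1))  # winning prototype
        if labels[k] == yi:
            protos[k] += alpha * (xi - protos[k])            # attract
        else:
            protos[k] -= alpha * (xi - protos[k])            # repel

pred = np.array([labels[np.argmin(np.linalg.norm(protos - xi, axis=1))] for xi in X])
print("training accuracy:", (pred == y).mean())
```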

20 citations


Journal ArticleDOI
TL;DR: Criteria and metrics for quality assessment are proposed and used to assist the design team in selecting a ‘best-fitted’ architecture that satisfies not only functional requirements, but also test requirements.
Abstract: The purpose of this paper is to present a novel methodology for assessing the quality of architecture solutions of hw/sw systems, with particular emphasis on testability. Criteria and metrics for quality assessment are proposed and used to assist the design team in selecting a ‘best-fitted’ architecture that satisfies not only functional requirements, but also test requirements. The methodology makes use of object-oriented modeling techniques. Near-optimum clustering of methods and attributes into objects is carried out in such a way that objects with moderate complexity, low coupling, and high functional autonomy result. The main features of the methodology are ascertained through a case study.

19 citations


Journal ArticleDOI
TL;DR: An electrical level model of the defective circuit is proposed extending previous models used effectively in the digital domain and is used to analyze the feasibility of testing a simple analog cell with the floating gate defects through the observation of the quiescent current consumption and the dynamic behavior.
Abstract: A unified approach to the characterization of the floating gate defect in analog and mixed-signal circuits is introduced. An electrical-level model of the defective circuit is proposed, extending previous models used effectively in the digital domain. The poly-bulk, poly-well, poly-power-rail and metal-poly capacitances are significant parameters in determining the behavior of the floating gate transistor. The model is used to analyze the feasibility of testing a simple analog cell with floating gate defects through observation of the quiescent current consumption and the dynamic behavior.

18 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that 2-level AND/EXOR realizations of arbitrary Boolean functions can be tested as easy (or hard) as 3-level and 4-level OR realizations.
Abstract: It is often stated that AND/EXOR circuits are much easier to test than AND/OR circuits. This statement, however, only holds true for circuits derived from restricted classes of AND/EXOR expressions, like positive polarity Reed-Muller and fixed polarity Reed-Muller expressions. For these two classes of expressions, circuits with good deterministic testability properties are known. In this paper we show that these circuits also have good random pattern testability attributes. An input probability distribution is given that yields a short expected test length for biased random patterns. This is the first time theoretical results on random pattern testability are presented for 2-level AND/EXOR circuit realizations of arbitrary Boolean functions. It turns out that analogous results cannot be expected for less restricted classes of 2-level AND/EXOR circuits. We present experiments demonstrating that generally minimized 2-level AND/OR circuits can be tested as easy (or hard) as minimized 2-level AND/EXOR circuits.
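The notion of a biased input probability distribution can be made concrete with a small sketch: each primary input is driven to 1 with its own probability, and the expected test length for a given fault follows from the per-pattern detection probability. The per-input probabilities and the detection predicate below are invented for illustration, not taken from the paper.

```python
import random

random.seed(1)
bias = {"x1": 0.8, "x2": 0.8, "x3": 0.2}   # hypothetical per-input 1-probabilities

def biased_pattern(bias):
    """Draw one biased random pattern."""
    return {pi: int(random.random() < p) for pi, p in bias.items()}

def detects(pattern):
    """Stand-in for fault simulation of a single target fault."""
    return pattern["x1"] and pattern["x2"] and not pattern["x3"]

# Estimate the expected test length (patterns until detection) by repetition.
lengths = []
for _ in range(1000):
    n = 1
    while not detects(biased_pattern(bias)):
        n += 1
    lengths.append(n)
print("mean test length with bias:", sum(lengths) / len(lengths))
```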

Journal ArticleDOI
TL;DR: In this paper, the behavior of a CMOS SRAM memory in the presence of open defects is analyzed and a technique to test open defects producing data retention faults is proposed, where an initial condition is forced during the writing phase.
Abstract: The behavior of a CMOS SRAM memory in the presence of open defects is analyzed. It has been found that destructive read-out depends on the level of the precharge. A technique to test open defects producing data retention faults is proposed. An initial condition is forced during the writing phase. In this way, intermediate voltages appear during the memorizing phase. Hence, the quiescent current consumption (IDDQ) increases and the fault can be detected by sensing the IDDQ. The testability regions for the defective memory cell were determined using state diagrams. Conditions to obtain the optimum vector have been stated. A DFT circuit is proposed. The cost of the proposed approach in terms of area, test time, and performance degradation is analyzed.

Journal ArticleDOI
TL;DR: In this article, a concept is proposed to combine a bus-transfer based test approach (AMBA) with the well-known scan-test technique, which combines the advantages of modularity and core reuse with the benefits of high fault coverages and short time-to-market cycles (scan).
Abstract: In this paper a concept is proposed that combines a bus-transfer-based test approach (AMBA) with the well-known scan-test technique. This novel approach combines the advantages of modularity and core reuse (AMBA) with the benefits of high fault coverage and short time-to-market cycles (scan). The consequences with respect to test hardware implementation and tool flow are discussed.

Journal ArticleDOI
TL;DR: The Accumulator-Based Two-pattern generation (ABT) algorithm is presented, which generates an exhaustive n-bit two-pattern test within exactly 2^n × (2^n − 1) + 1 clock cycles, i.e. within the theoretically minimum time.
Abstract: Two-pattern tests target the detection of the most common failure mechanisms in CMOS VLSI circuits, which are modeled as stuck-open or delay faults. In this paper the Accumulator-Based Two-pattern generation (ABT) algorithm is presented, which generates an exhaustive n-bit two-pattern test within exactly 2^n × (2^n − 1) + 1 clock cycles, i.e. within the theoretically minimum time. The ABT algorithm is implemented in hardware utilizing an accumulator whose inputs are driven by either a binary counter (counter-based implementation) or a Linear Feedback Shift Register (LFSR-based implementation). With the counter-based implementation, different modules having different numbers of inputs can be efficiently tested using the same generator. For circuits that do not contain counters, the LFSR-based implementation can be used, since the registers that typically drive the accumulator inputs in datapath cores can easily be modified into LFSRs with a small increase in hardware overhead. The great advantage of the presented scheme is that it can be implemented by augmenting existing datapath components, rather than building a new pattern generation structure.
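The lower bound quoted above can be checked independently: an exhaustive n-bit two-pattern test must apply every ordered pair of distinct patterns as consecutive vectors, which is exactly an Eulerian circuit of the complete directed graph on 2^n vertices and therefore has 2^n × (2^n − 1) + 1 vectors. The sketch below constructs such a sequence with Hierholzer's algorithm; it only illustrates the theoretical minimum and is not the ABT accumulator scheme itself.

```python
def eulerian_two_pattern_sequence(n):
    """Return a vector sequence in which every ordered pair of distinct n-bit
    patterns appears exactly once as a consecutive pair."""
    size = 1 << n
    # Remaining out-edges for every vertex (complete digraph, no self-loops).
    out = {v: [u for u in range(size) if u != v] for v in range(size)}
    stack, circuit = [0], []
    while stack:                       # Hierholzer's algorithm
        v = stack[-1]
        if out[v]:
            stack.append(out[v].pop())
        else:
            circuit.append(stack.pop())
    return circuit[::-1]

n = 3
seq = eulerian_two_pattern_sequence(n)
pairs = {(seq[i], seq[i + 1]) for i in range(len(seq) - 1)}
print("sequence length:", len(seq))            # 2^n * (2^n - 1) + 1 = 57 for n = 3
print("all ordered pairs covered:", len(pairs) == (1 << n) * ((1 << n) - 1))
```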

Journal ArticleDOI
TL;DR: In this paper, a new off-chip diagnosis method, which systematically separates the jitter-induced errors from the errors caused by these other factors, is described, and deterministic errors are removed via a subtraction technique.
Abstract: Gaussian aperture jitter leads to a reduction in the signal-to-noise ratio of A/D converters. Other noise sources, faults and nonlinearities also affect the digital output signal. A new off-chip diagnosis method, which systematically separates the jitter-induced errors from the errors caused by these other factors, is described. Deterministic errors are removed via a subtraction technique. High-level ADC simulations have been carried out to determine the relations between the size of the jitter or decision-level noise and the remaining random errors. By carrying out two tests at two different input frequencies and using the simulation results, errors induced by decision-level noise can be removed.

Journal ArticleDOI
TL;DR: A theoretical analysis of the mismatch of timing models between the behavioral and gate levels leads to the definition of a novel concept, that of dominated patterns, which captures the needed link between the levels.
Abstract: This paper aims at broadening the scope of hierarchical ATPG to the behavioral level. The main problem is identified, namely the mismatch of timing models between the behavioral and gate levels. As a main contribution of this paper, a theoretical analysis of this problem leads to the definition of a novel concept, that of dominated patterns, which captures the needed link between the levels. Some metrics, taken from the software realm, are defined that allow generation of test patterns at the behavioral level. To validate the correctness of the concept, different ATPG systems are presented, and experimental results show an improvement in test quality thanks to the exploitation of behavioral-level information.

Journal ArticleDOI
TL;DR: Results of experiments are presented, which prove that valid tests in a noise-free environment are invalid when tester noise is considered, and recommend more noise-robust tests.
Abstract: The effect of test environment noise (tester noise) on test waveforms is considered. We show that tests generated ignoring the tester noise characteristics are prone to failure when actually applied to the circuit-under-test (CUT). The failure may result in a good circuit being declared faulty or a faulty circuit being declared good. This failure is independent of the fault model and the nature of the test, i.e., AC or DC, time domain or frequency domain. We characterize the total noise at the primary outputs (POs) of the circuit using second-order statistics. We use the noise power spectrum and root mean square (RMS) values to make decisions about the test waveforms and to recommend more noise-robust tests. For non-linear circuits we use the Central Limit Theorem of statistics to approximate narrow-band noise at a primary input (PI) by a sum of sinusoidal distributions, and we use Monte-Carlo simulations to determine the noise at the POs in the time domain. Results of experiments on an instrumentation amplifier, a biquadratic filter, and a Gilbert multiplier are presented, which prove that tests valid in a noise-free environment are invalid when tester noise is considered.
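The Monte-Carlo step mentioned above can be illustrated with a minimal sketch: input-referred narrow-band noise is modelled as a sum of sinusoids with random frequencies and phases, pushed through a stand-in non-linear circuit, and summarised by the output-referred RMS value. Every waveform, bandwidth, and the tanh "circuit" below are assumptions for illustration, not the paper's test cases.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, T = 100e3, 10e-3
t = np.arange(0, T, 1 / fs)

def narrowband_noise(f0=10e3, bw=500.0, n_tones=50, amp=1e-3):
    """Narrow-band noise approximated by a sum of random-phase sinusoids."""
    freqs = rng.uniform(f0 - bw / 2, f0 + bw / 2, n_tones)
    phases = rng.uniform(0, 2 * np.pi, n_tones)
    return sum(amp / np.sqrt(n_tones) * np.sin(2 * np.pi * f * t + p)
               for f, p in zip(freqs, phases))

def cut(x):
    """Stand-in non-linear circuit under test."""
    return np.tanh(5 * x)

signal = 0.1 * np.sin(2 * np.pi * 1e3 * t)
runs = [cut(signal + narrowband_noise()) - cut(signal) for _ in range(200)]
noise_rms = np.sqrt(np.mean(np.concatenate(runs) ** 2))
print("output-referred noise RMS:", noise_rms)
```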

Journal ArticleDOI
TL;DR: In this short note, the possibilities and the limitations for the application of self-dual circuits with alternating inputs are experimentally investigated and the original circuit is assumed to be given as a netlist of gates.
Abstract: In this short note, the possibilities and the limitations for the application of self-dual circuits with alternating inputs are experimentally investigated. The original circuit is assumed to be given as a netlist of gates. The necessary area overhead, the fault coverage for single stuck-at faults in test mode and the error detection probability in on-line mode due to internal stuck-at faults and stuck-at faults at the input lines are determined for MCNC benchmark circuits.

Journal ArticleDOI
TL;DR: A fault simulator for synchronous sequential circuits that combines the efficiency of three-valued logic simulation with the exactness of a symbolic approach is presented and it is shown that the preciseness of the fault coverage computation can be improved even for the largest benchmark functions.
Abstract: We present a fault simulator for synchronous sequential circuits that combines the efficiency of three-valued logic simulation with the exactness of a symbolic approach. The simulator is hybrid in the sense that three different modes of operation (three-valued, symbolic and mixed) are supported. We demonstrate how an automatic switching between the modes, depending on the computational resources and the properties of the circuit under test, can be realized, thus trading off time/space for accuracy of the computation. Furthermore, besides the usual Single Observation Time Test Strategy (SOT) for the evaluation of the fault coverage, the simulator supports evaluation according to the more general Multiple Observation Time Test Strategy (MOT). Numerous experiments are given to demonstrate the feasibility and efficiency of our approach. In particular, it is shown that, at the expense of a reasonable time penalty, the exactness of the fault coverage computation can be improved even for the largest benchmark functions.
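A minimal three-valued (0, 1, X) gate evaluation, the kind of logic simulation the hybrid simulator falls back to when the symbolic mode becomes too expensive, is sketched below. The example circuit is invented; it simply shows the pessimism that motivates switching to the exact symbolic mode.

```python
X = "X"   # the "unknown" value of three-valued simulation

def and3(a, b):
    if a == 0 or b == 0: return 0
    if a == 1 and b == 1: return 1
    return X

def or3(a, b):
    if a == 1 or b == 1: return 1
    if a == 0 and b == 0: return 0
    return X

def not3(a):
    return X if a == X else 1 - a

# With an unknown state q, the output stays unknown even though an exact
# (symbolic) analysis would prove it is constant 1: q OR (NOT q), structurally.
for q in (0, 1, X):
    out = or3(and3(q, 1), and3(not3(q), 1))
    print(f"q={q!r:>3} -> out={out!r}")
```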

Journal ArticleDOI
TL;DR: The feasibility of using internal thermal sensors to detect heat sources provoked by structural defects is considered and evaluated.
Abstract: Testing techniques based on the functional behaviour, the propagation delay and the levels of quiescent current have been used with great success for the technologies of the last two decades. However, the efficiency of such techniques is dubious for future technologies, characterised by huge mixed-mode complex circuits and very low supply voltage levels. In this paper the feasibility of using internal thermal sensors to detect heat sources provoked by structural defects is considered and evaluated.

Journal ArticleDOI
TL;DR: A novel approach to the delay fault testing problem in scan-based sequential circuits is proposed, based on the combination of a BIST structure with a scan- based design to apply delay test pairs to the circuit under test.
Abstract: Delay testing that requires the application of consecutive two-pattern tests is not an easy task in a scan-based environment. This paper proposes a novel approach to the delay fault testing problem in scan-based sequential circuits. This solution is based on the combination of a BIST structure with a scan-based design to apply delay test pairs to the circuit under test.

Journal ArticleDOI
TL;DR: Wobbling aims at removing the effect of the rounding operation that takes place in an ADC, so that the measured harmonic distortion and noise amplitude can be truly ascribed to the intrinsic non-linearity and noise of the ADC.
Abstract: In this paper we propose a new technique, wobbling, for the stabilization of spectral ADC-test parameters with respect to offset and amplitude deviations of the sinusoidal stimulus. Wobbling aims at removing the effect of the rounding operation that takes place in an ADC, so that the measured harmonic distortion and noise amplitude can be truly ascribed to the intrinsic non-linearity and noise of the ADC. We compare the wobbling technique with subtractive and non-subtractive noise dithering, both from a performance and an implementation point-of-view. We present results of simulations and measurements validating the wobbling technique for use in a production environment.

Journal ArticleDOI
TL;DR: The proposed stratified sampling methodology applies to non-equally probable DO faults, exhibiting a wide range of probabilities of occurrence, and leads to confidence intervals similar to the ones obtained with equally probable faults.
Abstract: The purpose of this paper is to present a novel methodology for Defect-Oriented (DO) fault sampling, and its implementation in a new extraction tool, LObs (Layout Observer). The methodology is based on statistical theory, and on the application of the concepts of estimation of totals over subpopulations and stratified sampling to the fault sampling problem. The proposed stratified sampling methodology applies to non-equally probable DO faults, exhibiting a wide range of probabilities of occurrence, and leads to confidence intervals similar to the ones obtained with equally probable faults. ISCAS benchmark circuits are laid out and LObs is used to ascertain the results, for circuits of up to 100,000 MOS transistors and extracted DO fault lists of 300,000 faults.
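The stratified estimator the abstract refers to can be sketched in a few lines: faults are grouped into strata by probability of occurrence, each stratum is sampled separately, and the per-stratum estimates are combined with weights given by the stratum probabilities. All the stratum sizes, probabilities, and "true" coverages below are made-up numbers used only to synthesise sampling outcomes.

```python
import random

random.seed(0)

# Hypothetical strata: (number of faults, occurrence probability per fault,
# true detection rate used only to synthesise "fault simulation" outcomes).
strata = [(100000, 1e-6, 0.95), (10000, 1e-4, 0.80), (500, 1e-2, 0.60)]
sample_per_stratum = 200

est_detected_weight, total_weight = 0.0, 0.0
for n_faults, p_occ, true_cov in strata:
    total_weight += n_faults * p_occ
    sample = [random.random() < true_cov for _ in range(sample_per_stratum)]
    coverage_hat = sum(sample) / sample_per_stratum       # per-stratum estimate
    est_detected_weight += coverage_hat * n_faults * p_occ

print("weighted defect coverage estimate:", est_detected_weight / total_weight)
```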

Journal ArticleDOI
TL;DR: A way of modeling the differences between the calculated delays and the real delays is introduced, and an efficient path selection method for path delay testing based on the model is proposed.
Abstract: In this paper, we introduce a way of modeling the differences between calculated delays and real delays, and propose an efficient path selection method for path delay testing based on this model. Path selection is done by judging which of two paths has the larger real delay, taking into account the ambiguity of the calculated delay caused by imprecise delay modeling as well as by process disturbances. In order to make a precise judgment under this ambiguity, the delays of only the unshared segments of the two paths are evaluated, because the shared segments are presumed to have the same real delays on both paths. The experiments used the delays of gates and interconnects, calculated from the layout data of the ISCAS85 benchmark circuits using a real cell library. Experimental results show that the method selects only about one percent of the paths selected by the most popular method.
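The comparison rule described above can be made concrete with a small sketch: each segment carries an ambiguity interval around its calculated delay, and two paths are judged only on their unshared segments. The gate names and intervals below are invented, and the decision rule is a simplified stand-in for the paper's method.

```python
# Each path maps segment name -> (min delay, max delay) ambiguity interval.
pathA = {"g1": (2.0, 2.6), "g2": (1.0, 1.3), "g5": (3.0, 3.8)}
pathB = {"g1": (2.0, 2.6), "g3": (1.8, 2.3), "g4": (2.1, 2.7)}

def compare(a, b):
    """Judge which path surely has the larger real delay, using unshared segments only."""
    shared = a.keys() & b.keys()              # identical real delay on both paths
    a_only = [d for g, d in a.items() if g not in shared]
    b_only = [d for g, d in b.items() if g not in shared]
    a_min, a_max = sum(lo for lo, _ in a_only), sum(hi for _, hi in a_only)
    b_min, b_max = sum(lo for lo, _ in b_only), sum(hi for _, hi in b_only)
    if a_min > b_max: return "A surely longer"
    if b_min > a_max: return "B surely longer"
    return "ambiguous: keep both paths for testing"

print(compare(pathA, pathB))
```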

Journal ArticleDOI
TL;DR: The algorithm is parameterized by user-defined tuning factors allowing tradeoffs between fault coverage, area overhead and test application time, and is generic in the sense that it handles and mixes heterogeneous test pattern generators and compactors.
Abstract: In this paper, we present a fast and efficient algorithm for BISTing datapaths described at the Register Transfer (RT) level. The algorithm is parameterized by user-defined tuning factors allowing tradeoffs between fault coverage, area overhead and test application time. It is generic in the sense that it handles and mixes heterogeneous test pattern generators and compactors.

Journal ArticleDOI
TL;DR: Three new algorithms to compute tests for faults in the interconnects of random access memories (RAM) using only read and write operations to diagnose more faults than would otherwise be possible are given.
Abstract: We give three new algorithms to compute tests for faults in the interconnects of random access memories (RAM) using only read and write operations. Diagnosis of address line faults is the most difficult step. Our Adaptive Diagnosis Algorithm (ADA) considers each address line separately. Single line faults are easy to diagnose, so the objective is to ascertain which address lines are free of faults, thereby pruning impossible multi-line faults. Our Consecutive Diagnosis Algorithm (CDA) uses a more complicated and lengthier test sequence. However, with CDA, interpreting the test results is easier and the diagnostic resolution is superior. Our third algorithm, Adaptive Diagnosis Algorithm with Repair (ADAR), relies upon additional testing after repair in order to diagnose more faults than would otherwise be possible. ADAR has three test stages with two repair stages between them.
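As a flavour of why address-line faults are the hard case, the sketch below diagnoses a single stuck address line on a simulated RAM by writing distinct markers to address 0 and to each one-hot address and checking for aliasing. It is a purely illustrative toy, not the paper's ADA, CDA or ADAR procedures.

```python
N = 4   # number of address lines in the toy RAM model

class RAMWithAddrFault:
    """RAM model with an optional address line stuck-at-0 injected for demonstration."""
    def __init__(self, stuck_line=None):
        self.mem = [0] * (1 << N)
        self.stuck_line = stuck_line
    def _map(self, addr):
        return addr & ~(1 << self.stuck_line) if self.stuck_line is not None else addr
    def write(self, addr, val): self.mem[self._map(addr)] = val
    def read(self, addr): return self.mem[self._map(addr)]

def diagnose(ram):
    """Report address lines whose one-hot address aliases onto address 0."""
    faulty = []
    for line in range(N):
        ram.write(0, 0xAA)
        ram.write(1 << line, 0x55)       # should land in a different cell
        if ram.read(0) != 0xAA:          # aliased back onto address 0
            faulty.append(line)
    return faulty

print("stuck address lines:", diagnose(RAMWithAddrFault(stuck_line=2)))
```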

Journal ArticleDOI
Andreas Rusznyak1
TL;DR: To guarantee the correct functioning of oscillators, the amplifier gain and the series resistance of the resonator constituting this module have to be tested; a method is proposed whereby the oscillator configuration remains as in the application.
Abstract: To guarantee the correct functioning of oscillators, the amplifier gain and the series resistance of the resonator constituting this module have to be tested. A method is proposed whereby the oscillator configuration remains as in the application.

Journal ArticleDOI
TL;DR: A novel built-in current sensor that uses two additional power supply voltages besides the system power supply voltage, and that is constructed by using a current mirror circuit to pick up an abnormal IDDQ is presented, suitable for testing low-voltage integrated circuits.
Abstract: This paper presents a novel built-in current sensor that uses two additional power supply voltages besides the system power supply voltage, and that is constructed using a current mirror circuit to pick up an abnormal IDDQ. It is activated only by an abnormal quiescent power supply current and minimizes the voltage drop at the terminal of the circuit under test. Simulation results showed that it could detect a 16-μA IDDQ with only a 0.03-V voltage drop at a 3.3-V VDD, and that it reduced performance degradation in the circuit under test. It is therefore suitable for testing low-voltage integrated circuits. Moreover, we verified the behavior of the sensor circuit by implementing it on a board using discrete devices. Experimental results showed that the real sensor circuit functioned properly.

Journal ArticleDOI
TL;DR: A deterministic BIST scheme for circuits with multiple scan paths is presented and a procedure is described for synthesizing a pattern generator which stimulates all scan chains simultaneously and guarantees complete fault coverage.
Abstract: A deterministic BIST scheme for circuits with multiple scan paths is presented. A procedure is described for synthesizing a pattern generator which stimulates all scan chains simultaneously and guarantees complete fault coverage. The new scheme may require less chip area than a classical LFSR-based approach while better or even complete fault coverage is obtained at the same time.

Journal ArticleDOI
TL;DR: The quality of the incremental testability analysis is similar to that of conventional explicit testability re-calculation methods, and the technique can be used efficiently to improve the testability of a design during the high-level test synthesis and partial scan selection processes.
Abstract: This paper presents an efficient estimation method for incremental testability analysis, based partially on explicit testability re-calculation and partially on gradient techniques. The analysis results have been used successfully to guide design transformations and partial scan selection. Experimental results on a variety of benchmarks show that the quality of our incremental testability analysis is similar to that of conventional explicit testability re-calculation methods, and that the technique can be used efficiently to improve the testability of a design during the high-level test synthesis and partial scan selection processes.