
Showing papers in "Journal of Electronic Testing in 2002"


Journal ArticleDOI
TL;DR: IEEE P1500 is briefly described, and through a simplified example its scalable wrapper architecture, its test information transfer model described in a standardized Core Test Language, and its two compliance levels are illustrated.
Abstract: The increased usage of embedded pre-designed reusable cores necessitates a core-based test strategy, in which cores are tested as separate entities. IEEE P1500 Standard for Embedded Core Test (SECT) is a standard-under-development that aims at improving ease of reuse and facilitating interoperability with respect to the test of core-based system chips, especially if they contain cores from different sources. This paper briefly describes IEEE P1500, and illustrates through a simplified example its scalable wrapper architecture, its test information transfer model described in a standardized Core Test Language, and its two compliance levels. The standard is still under development, and this paper only reflects the view of six active participants of the standardization committee on its current status.

149 citations


Journal ArticleDOI
TL;DR: An integrated framework for the design of SOC test solutions is proposed, which includes a set of algorithms for early design space exploration as well as extensive optimization for the final solution.
Abstract: We propose an integrated framework for the design of SOC test solutions, which includes a set of algorithms for early design space exploration as well as extensive optimization of the final solution. The framework deals with test scheduling, test access mechanism design, test set selection, and test resource placement. Our approach minimizes the test application time and the cost of the test access mechanism while considering constraints on tests and power consumption. The main feature of our approach is that it provides an integrated design environment that treats several tasks, traditionally dealt with as separate problems, at the same time. We have implemented the proposed heuristic for early design space exploration, as well as a Simulated Annealing-based implementation for the extensive optimization. Experiments on several benchmarks and industrial designs show the usefulness and efficiency of our approach.
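
As a rough illustration of the extensive-optimization step, the sketch below runs a minimal simulated-annealing loop over the ordering of core tests under an assumed peak-power cap. The test list, the greedy cost model, and the cooling schedule are invented for illustration and do not reflect the paper's actual framework or constraint handling.

```python
import math
import random

# Illustrative only: each test is (name, time, power); the cost model and
# moves below are assumptions, not the paper's actual formulation.
TESTS = [("t1", 100, 3), ("t2", 80, 2), ("t3", 120, 4), ("t4", 60, 1)]
POWER_LIMIT = 6  # assumed peak-power constraint (no single test exceeds it)

def makespan(order):
    """Greedy serialized schedule of an ordered test list under a power cap."""
    running = []  # (finish_time, power)
    clock = 0
    finish = 0
    for _, t, p in order:
        # advance the clock until the test fits under the power limit
        while sum(pw for ft, pw in running if ft > clock) + p > POWER_LIMIT:
            clock = min(ft for ft, pw in running if ft > clock)
        running.append((clock + t, p))
        finish = max(finish, clock + t)
    return finish

def anneal(tests, temp=100.0, cooling=0.95, iters=2000):
    current = tests[:]
    best = current[:]
    for _ in range(iters):
        i, j = random.sample(range(len(tests)), 2)
        cand = current[:]
        cand[i], cand[j] = cand[j], cand[i]
        delta = makespan(cand) - makespan(current)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = cand
            if makespan(current) < makespan(best):
                best = current[:]
        temp = max(temp * cooling, 1e-6)
    return best, makespan(best)

if __name__ == "__main__":
    random.seed(0)
    order, t = anneal(TESTS)
    print([n for n, _, _ in order], "makespan =", t)
```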

111 citations


Journal ArticleDOI
TL;DR: An FPGA-based approach to speed up fault injection campaigns for evaluating the fault tolerance of VLSI circuits is proposed, with techniques that allow the effects of faults to be emulated and faulty behavior to be observed.
Abstract: In this paper we describe an FPGA-based approach to speed up fault injection campaigns for the evaluation of the fault tolerance of VLSI circuits. Suitable techniques are proposed that allow the effects of faults to be emulated and faulty behavior to be observed. The proposed approach combines the efficiency of hardware-based techniques with the flexibility of simulation-based techniques. Experimental results are provided showing that significant speed-up figures can be achieved with respect to state-of-the-art simulation-based fault injection techniques.

106 citations


Journal ArticleDOI
TL;DR: This paper describes and formalizes the notion of test protocols and the algorithms for test protocol expansion and scheduling, and elaborates on the industrial usage of the concepts described.
Abstract: Modular testing is an attractive approach to testing large system ICs, especially if they are built from pre-designed reusable embedded cores. This paper describes an automated modular test development approach. The basis of this approach is that a core or module test is dissected into a test protocol and a test pattern list. A test protocol describes in detail how to apply one test pattern to the core, while abstracting from the specific test pattern stimulus and response values. Subsequent automation tasks, such as the expansion from core-level tests to system-chip-level tests and test scheduling, all work on test protocols, thereby greatly reducing the amount of compute time and data involved. Finally, an SOC-level test is assembled from the expanded and scheduled test protocols and the (so far untouched) test patterns. This paper describes and formalizes the notion of test protocols and the algorithms for test protocol expansion and scheduling. A running example is featured throughout the paper. We also elaborate on the industrial usage of the concepts described.

91 citations


Journal ArticleDOI
TL;DR: The first design of a reconfigurable core wrapper that allows a dynamic change in the width of the test access mechanism (TAM) executing a core test is presented, together with efficient algorithms to compute the test schedule.
Abstract: In this paper a mathematical formulation and an efficient solution of the embedded core-based system-on-chip (SOC) test scheduling problem (ECTSP) are presented. The ECTSP can be stated as follows: given a chip with N_C cores, each having a test T_i, where T_i takes time F_T(T_i, w_j) to execute on a test access mechanism (TAM) of width w_j, and a constraint W on the number of top-level test pins, calculate the TAM assignment vector π and the schedule Σ for each test T_i such that the completion time of the full chip test is minimized. All existing approaches have solved the ECTSP by solving the TAM partition and scheduling problems sequentially. In this paper we present a unified approach to solve the ECTSP. We present the first report of a design of a reconfigurable core wrapper, which allows a dynamic change in the width of the TAM executing a core test. An automatic procedure for the creation of the DfT hardware required for reconfiguration, using a graph-theoretic representation of core wrappers, is also presented. For the case of reconfigurable wrappers, efficient algorithms to compute the schedule are presented, based upon recent results in the field of malleable task scheduling. Cases in which the degree of reconfigurability is constrained are considered: when only a single core can have a reconfigurable wrapper, a schedule with zero TAM idle time can be found in time O(N_C(N_C + W) lg W), and when only two different wrapper configurations are allowed, the problem can be solved in time O(N_C^3). Comparison with existing results on benchmark SOCs shows that our algorithms outperform state-of-the-art ILP formulations not only in schedule makespan, but also significantly reduce computation time.
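
For readers who prefer the problem statement in conventional notation, the ECTSP described above can be restated as follows (a paraphrase of the abstract, not the authors' exact model):

```latex
% Illustrative restatement of the ECTSP; notation follows the abstract.
\begin{align*}
\text{Given: } & N_C \text{ cores with tests } T_1,\dots,T_{N_C},\;
                 \text{test time } \mathcal{F}_T(T_i, w_j)
                 \text{ on a TAM of width } w_j,\\
               & \text{and a top-level TAM width budget } W.\\
\text{Find: }  & \text{a width assignment } \pi(T_i) = w_i
                 \text{ and start times } \Sigma(T_i) = s_i\\
\text{minimizing } & \max_i \bigl(s_i + \mathcal{F}_T(T_i, w_i)\bigr)\\
\text{subject to } & \sum_{i:\, s_i \le t < s_i + \mathcal{F}_T(T_i, w_i)} w_i \;\le\; W
                     \quad \text{for all times } t.
\end{align*}
```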

51 citations


Journal ArticleDOI
TL;DR: A novel architecture for scan-based mixed mode BIST is presented, which relies on a two-dimensional compression scheme, which combines the advantages of known vertical and horizontal compression techniques.
Abstract: In this paper a novel architecture for scan-based mixed mode BIST is presented. To reduce the storage requirements for the deterministic patterns it relies on a two-dimensional compression scheme, which combines the advantages of known vertical and horizontal compression techniques. To reduce both the number of patterns to be stored and the number of bits to be stored for each pattern, deterministic test cubes are encoded as seeds of an LFSR (horizontal compression), and the seeds are again compressed into seeds of a folding counter sequence (vertical compression). The proposed BIST architecture is fully compatible with standard scan design, simple and flexible, so that sharing between several logic cores is possible. Experimental results show that the proposed scheme requires less test data storage than previously published approaches providing the same flexibility and scan compatibility.
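
The horizontal compression step above rests on the classic reseeding idea: a test cube with many don't-care bits can be stored as an LFSR seed whose output stream reproduces the cube's specified bits. The sketch below illustrates that idea only; the 8-bit LFSR, its feedback taps, the example cube, and the brute-force seed search are assumptions for illustration (practical encoders solve a system of linear equations over GF(2), and the paper's folding-counter vertical compression is not modeled).

```python
# Sketch of the "encode a test cube as an LFSR seed" idea behind horizontal
# compression. The feedback polynomial, cube and search-by-enumeration are
# illustrative assumptions; real encoders solve linear equations instead.
TAPS = [7, 5, 4, 3]          # assumed feedback taps of an 8-bit LFSR
WIDTH = 8

def lfsr_stream(seed, n):
    """Generate n output bits from an 8-bit Fibonacci LFSR."""
    state = seed
    out = []
    for _ in range(n):
        out.append(state & 1)
        fb = 0
        for t in TAPS:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (WIDTH - 1))
    return out

def matches(cube, bits):
    """A cube is a list of 0, 1 or None (don't-care)."""
    return all(c is None or c == b for c, b in zip(cube, bits))

def find_seed(cube):
    """Brute-force seed search; practical schemes solve GF(2) equations."""
    for seed in range(1, 1 << WIDTH):
        if matches(cube, lfsr_stream(seed, len(cube))):
            return seed
    return None

if __name__ == "__main__":
    cube = [1, None, None, 0, None, 1, None, None, None, 0]  # 4 specified bits
    seed = find_seed(cube)
    print("seed:", seed, "expands to", lfsr_stream(seed, len(cube)))
```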

47 citations


Journal ArticleDOI
TL;DR: An efficient method to select a minimal set of testable paths in scan designs, such that every line in the circuit is covered by at least one of the longest testable paths that contain it (if there are any).
Abstract: We propose an efficient method to select a minimal set of testable paths in scan designs, such that every line in the circuit is covered by at least one of the longest testable paths that contain it (if there are any). The proposed path selection approach is based on a stepwise path expansion procedure that uses delay information and compact information about untestable paths to select longest paths while avoiding untestable paths. Techniques called delay analysis and delay-constrained path expansion are used to speed up the selection of paths to test. Compared to earlier approaches, the proposed approach is fast and is guaranteed to find testable paths. Experimental results for ISCAS89 benchmark circuits using standard scan and broadside testing are presented to demonstrate the effectiveness of the proposed method.

44 citations


Journal ArticleDOI
TL;DR: A method for testing a system-on-a-chip by using a compressed representation of the patterns on an external tester, which only requires a few additional gates in the wrapper, while the mission logic is untouched.
Abstract: The paper presents a method for testing a system-on-a-chip by using a compressed representation of the patterns on an external tester. The patterns for a certain core under test are decompressed by reusing scan chains of cores idle during that time. The method only requires a few additional gates in the wrapper, while the mission logic is untouched. Storage and bandwidth requirements for the ATE are reduced significantly.

40 citations


Journal ArticleDOI
TL;DR: The resource allocation and test scheduling problems are formulated together as a well-known 2-dimensional bin-packing problem, and a best-fit heuristic algorithm is adopted to solve it.
Abstract: In this paper, a method to solve the resource allocation and test scheduling problems together, in order to achieve concurrent test for core-based System-On-Chip (SOC) designs, is presented. The primary objective for concurrent SOC test is to reduce test application time under the constraints of SOC pins and peak power consumption. The methodology used in this paper is not limited to any specific Test Access Mechanism (TAM). Additionally, it can also be applied to SOC budgeting at the design phase to predict a tradeoff between test application time and the number of SOC pins needed. The contribution of this paper is the formulation of the problem as a well-known 2-dimensional bin-packing problem. A best-fit heuristic algorithm is adopted to achieve an optimal solution.
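
To make the bin-packing view concrete, the sketch below treats each core test as a rectangle (required TAM pins x test time) and packs it into a strip whose width is the top-level pin budget, using a simple shelf-based best-fit rule. The test data, pin budget, and shelf heuristic are illustrative assumptions, not the algorithm evaluated in the paper.

```python
# Sketch of viewing concurrent SOC test scheduling as 2-D bin packing:
# each core test is a rectangle (pin width x test time) packed into a strip
# whose width is the number of available test pins. The shelf-based best-fit
# below is illustrative; the paper's heuristic and constraints differ.
from dataclasses import dataclass

@dataclass
class CoreTest:
    name: str
    pins: int   # TAM width required
    time: int   # test application time

PIN_BUDGET = 16  # assumed number of top-level test pins

def best_fit_schedule(tests, pin_budget):
    """Pack tests onto 'shelves'; a shelf is a time slice with shared pins."""
    shelves = []  # each shelf: {"start", "height", "used_pins", "tests"}
    clock = 0
    for test in sorted(tests, key=lambda t: t.time, reverse=True):
        # best fit: shelf with the least remaining pin width that still fits
        candidates = [s for s in shelves if pin_budget - s["used_pins"] >= test.pins]
        if candidates:
            shelf = min(candidates, key=lambda s: pin_budget - s["used_pins"])
        else:
            shelf = {"start": clock, "height": test.time, "used_pins": 0, "tests": []}
            clock += test.time
            shelves.append(shelf)
        shelf["used_pins"] += test.pins
        shelf["tests"].append(test.name)
    return shelves, clock  # clock == total test application time

if __name__ == "__main__":
    tests = [CoreTest("c1", 8, 500), CoreTest("c2", 4, 300),
             CoreTest("c3", 8, 450), CoreTest("c4", 4, 200),
             CoreTest("c5", 12, 100)]
    shelves, total = best_fit_schedule(tests, PIN_BUDGET)
    for s in shelves:
        print(f"t={s['start']:4d} dur={s['height']:4d} pins={s['used_pins']:2d} -> {s['tests']}")
    print("total test time:", total)
```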

34 citations


Journal ArticleDOI
TL;DR: A mixed-signal test generator, called XGEN, that incorporates classical static values as well as dynamic signals such as transitions and pulses, and timing information such as signal arrival times, rise/fall times, and gate delay is developed.
Abstract: Due to technology scaling and increasing clock frequency, problems due to noise effects lead to an increase in design/debugging efforts and a decrease in circuit performance. This paper addresses the problem of efficiently and accurately generating two-vector tests for crosstalk induced effects, such as pulses, signal speedup and slowdown, in digital combinational circuits. These noise effects can propagate through a circuit and create a logic error in a latch or at a primary output. We have developed a mixed-signal test generator, called XGEN, that incorporates classical static values as well as dynamic signals such as transitions and pulses, and timing information such as signal arrival times, rise/fall times, and gate delay. In this paper we first discuss the general framework of the test generation algorithm followed by computational results. Comparison of results with SPICE simulations confirms the accuracy of this approach.

31 citations


Journal ArticleDOI
TL;DR: The possibility of using window comparators for on-chip and potentially also on-line response evaluation of analogue circuits is investigated, and results show that 100% of all assumed layout-realistic faults could be detected.
Abstract: The possibility of using window comparators for on-chip, and potentially also on-line, response evaluation of analogue circuits is investigated. No additional analogue test inputs are required. The additional circuitry can be realised either with standard digital gates taken from an available library or with full-custom designed gates. With only a few gates, an observation window can be realised, tailored to the application needs. With this approach, the test overhead can be kept extremely low. Due to the low gate capacitance, the load on the observed nodes is also very low. Simulation results for some examples show that 100% of all assumed layout-realistic faults could be detected.

Journal ArticleDOI
TL;DR: A novel approach for using an embedded processor to aid in deterministic testing of the other components of a system-on-a-chip (SOC) is presented; a significant amount of compression can be achieved, resulting in less data that must be stored on the tester and less time to transfer the test data from the tester to the chip.
Abstract: A novel approach for using an embedded processor to aid in deterministic testing of the other components of a system-on-a-chip (SOC) is presented. The tester loads a program along with compressed test data into the processor's on-chip memory. The processor executes the program, which decompresses the test data and applies it to scan chains in the other components of the SOC to test them. The program itself is very simple and compact, and the decompression is done very rapidly, hence this approach reduces both the amount of data that must be stored on the tester and the test time. Moreover, it enables at-speed scan shifting even with a slow tester (i.e., a tester whose maximum clock rate is slower than the SOC's normal operating clock rate). A procedure is described for converting a set of test cubes (i.e., test vectors where the unspecified inputs are left as X's) into a compressed form. A program that can be run on an embedded processor is then given for decompressing the test cubes and applying them to scan chains on the chip. Experimental results indicate that a significant amount of compression can be achieved, resulting in less data that must be stored on the tester (i.e., a smaller tester memory requirement) and less time to transfer the test data from the tester to the chip.
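
The sketch below illustrates the division of labor described above: the tester-side compressor fills don't-cares and run-length encodes the scan data, while a small routine of the kind the embedded processor would run expands it back before scan shifting. The encoding itself is an assumption chosen for brevity; the paper's actual compression scheme and on-chip program are not reproduced here.

```python
# Sketch of the compress-on-tester / decompress-on-processor idea.
# The encoding below (fill X's with the previous specified value, then
# run-length encode) is an assumption for illustration only; the paper's
# actual encoding and the on-chip program differ.

def fill_xs(cube):
    """Replace don't-cares ('X') by repeating the last specified bit."""
    filled, last = [], "0"
    for c in cube:
        last = c if c in "01" else last
        filled.append(last)
    return "".join(filled)

def compress(cubes):
    """Run-length encode the concatenated, X-filled scan data."""
    data = "".join(fill_xs(c) for c in cubes)
    runs, i = [], 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        runs.append((data[i], j - i))
        i = j
    return runs

def decompress(runs):
    """What the embedded processor's program would do before scan shifting."""
    return "".join(bit * length for bit, length in runs)

if __name__ == "__main__":
    cubes = ["1XXX0XX1", "XX00XXXX", "1XXXXXX0"]
    runs = compress(cubes)
    print("runs:", runs)
    print("scan stream:", decompress(runs))
```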

Journal ArticleDOI
TL;DR: An enhanced hardware/software co-design framework allowing the designer to introduce hardware fault detection properties in the system under consideration is introduced, providing a complete overview of the reliability co- design project.
Abstract: This paper introduces an enhanced hardware/software co-design framework allowing the designer to introduce hardware fault detection properties in the system under consideration. By considering reliability requirements at system level, within a hw/sw co-design flow, it is possible to evaluate overheads and benefits of different solutions. System specification, hardware and software concurrent fault detection design methodologies, and hw/sw partitioning are the three key factors taken into account. The paper discusses these aspects, providing a complete overview of the reliability co-design project.

Journal ArticleDOI
TL;DR: A BIST-based test methodology is presented that includes two special cells to detect and measure noise and skew occurring on the interconnects of the gigahertz system-on-chips and the integrity information accumulated by these special cells can be scanned out for final test and reliability analysis.
Abstract: As we approach 100 nm technology the interconnect issues are becoming one of the main concerns in the testing of gigahertz system-on-chips. Voltage distortion (noise) and delay violations (skew) contribute to the signal integrity loss and ultimately functional error, performance degradation and reliability problems. In this paper, we first define a model for integrity faults on the high-speed interconnects. Then, we present a BIST-based test methodology that includes two special cells to detect and measure noise and skew occurring on the interconnects of the gigahertz system-on-chips. Using an inexpensive test architecture the integrity information accumulated by these special cells can be scanned out for final test and reliability analysis.

Journal ArticleDOI
TL;DR: Enhanced reduced pin-count test (E-RPCT) is presented as an extension of traditional RPCT for circuits in which a large number of digital IC pins is multiplexed for scan.
Abstract: This paper presents enhanced reduced pin-count test (E-RPCT) for low-cost test. E-RPCT is an extension of traditional RPCT for circuits in which a large number of digital IC pins is multiplexed for scan. The basic concept of E-RPCT is to provide access to the internal scan chains via an IEEE 1149.1 compatible boundary-scan architecture, instead of direct access via the IC pins. The boundary-scan chain performs serial/parallel conversion of test data. E-RPCT also provides I/O wrap to test non-contacted pins. The paper presents E-RPCT for full-scan design, as well as for full-scan core-based design.

Journal ArticleDOI
TL;DR: Two data compression techniques that can be used to speed up the transmission of diagnostic data from the embedded RAM built-in self-test (BIST) circuit that has diagnostic support to the external tester are presented.
Abstract: A system-on-chip (SOC) usually consists of many memory cores with different sizes and functionality, and they typically represent a significant portion of the SOC and therefore dominate its yield. Diagnostics for yield enhancement of the memory cores thus is a very important issue. In this paper we present two data compression techniques that can be used to speed up the transmission of diagnostic data from the embedded RAM built-in self-test (BIST) circuit that has diagnostic support to the external tester. The proposed syndrome-accumulation approach compresses the faulty-cell address and March syndrome to about 28% of the original size on average under the March-17N diagnostic test algorithm. The key component of the compressor is a novel syndrome-accumulation circuit, which can be realized by a content-addressable memory. Experimental results show that the area overhead is about 0.9% for a 1Mb SRAM with 164 faults. A tree-based compression technique for word-oriented memories is also presented. By using a simplified Huffman coding scheme and partitioning each 256-bit Hamming syndrome into fixed-size symbols, the average compression ratio (size of original data to that of compressed data) is about 10, assuming 16-bit symbols. Also, the additional hardware to implement the tree-based compressor is very small. The proposed compression techniques effectively reduce the memory diagnosis time as well as the tester storage requirement.
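
As a rough illustration of the tree-based technique for word-oriented memories, the sketch below partitions 256-bit syndromes into 16-bit symbols and Huffman-codes them; the synthetic syndrome data, and the use of a full Huffman code rather than the paper's simplified scheme, are assumptions for illustration.

```python
# Sketch of the tree-based compression idea for word-oriented memory
# diagnosis: split a 256-bit Hamming syndrome into 16-bit symbols and
# Huffman-code the symbols. Syndrome content and statistics are made up;
# the simplified coding scheme in the paper is not reproduced exactly.
import heapq
import random
from collections import Counter
from itertools import count

def huffman_code(freqs):
    """Build a Huffman code (symbol -> bitstring) from symbol frequencies."""
    tie = count()  # tie-breaker so heapq never compares dicts
    heap = [(f, next(tie), {sym: ""}) for sym, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tie), merged))
    return heap[0][2]

def make_syndrome(bits=256, faulty_bits=3):
    """A mostly-zero syndrome with a few error positions set (assumption)."""
    s = [0] * bits
    for pos in random.sample(range(bits), faulty_bits):
        s[pos] = 1
    return s

if __name__ == "__main__":
    random.seed(1)
    syndromes = [make_syndrome() for _ in range(100)]
    # partition every 256-bit syndrome into 16-bit symbols
    symbols = [tuple(s[i:i + 16]) for s in syndromes for i in range(0, 256, 16)]
    code = huffman_code(Counter(symbols))
    coded_bits = sum(len(code[sym]) for sym in symbols)
    raw_bits = 16 * len(symbols)
    print(f"raw {raw_bits} bits -> coded {coded_bits} bits, "
          f"ratio {raw_bits / coded_bits:.1f}")
```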

Journal ArticleDOI
TL;DR: This paper introduces a new concept of testability called consecutive testability and proposes a design-for-testability method, based on integer linear programming, for making a given SoC consecutively testable.
Abstract: This paper introduces a new concept of testability called consecutive testability and proposes a design-for-testability method, based on integer linear programming, for making a given SoC consecutively testable. For a consecutively testable SoC, testing can be performed as follows. Test patterns of a core are propagated to the core inputs from test pattern sources (implemented either off-chip or on-chip) consecutively at the speed of the system clock. Similarly, the test responses are propagated from the core outputs to test response sinks (implemented either off-chip or on-chip) consecutively at the speed of the system clock. The propagation of test patterns and responses is achieved by using interconnects and the consecutive transparency properties of surrounding cores. All interconnects can be tested in a similar fashion. Since a consecutively testable SoC can achieve consecutive application of any test sequence at the speed of the system clock, it is possible to test not only logic faults but also timing faults that require consecutive application of test patterns at speed.

Journal ArticleDOI
TL;DR: A Test Access Mechanism (TAM) named CAS-BUS that solves some of the new problems the test industry has to deal with and is scalable, flexible and dynamically reconfigurable.
Abstract: As System on a Chip (SoC) testing faces new challenges, new test architectures must be developed. This paper describes a Test Access Mechanism (TAM) named CAS-BUS that solves some of the new problems the test industry has to deal with. This TAM is scalable, flexible and dynamically reconfigurable. The CAS-BUS architecture is compatible with the IEEE P1500 standard proposal in its current state of development, and is controlled by Boundary Scan features. The basic CAS-BUS architecture has been extended with two independent variants. The first extension has been designed to manage SoCs made up of both wrapped cores and non-wrapped cores with Boundary Scan features. The second deals with a test pin expansion method intended to solve the I/O bandwidth problem. The proposed solution is based on a new compression/decompression mechanism which provides significant results in the case of non-correlated test pattern processing. This solution avoids TAM performance degradation. These test architectures are based on the CAS-BUS TAM and allow trade-offs to optimize both test time and area overhead. A tool-box environment is provided in order to automatically generate the components needed to build the chosen SoC test architecture.

Journal ArticleDOI
TL;DR: A new reseeding technique for test-per-clock test pattern generation, suitable for at-speed testing of circuits with random-pattern resistant faults, is presented and shown to compare favorably to other known techniques with respect to test length and hardware implementation cost.
Abstract: In this paper we present a new reseeding technique for test-per-clock test pattern generation suitable for at-speed testing of circuits with random-pattern resistant faults. Our technique eliminates the need for a ROM for storing the seeds, since the reseeding is performed on-the-fly by inverting the logic value of some of the bits of the next state of the Test Pattern Generator (TPG). The proposed reseeding technique is generic and can be applied to TPGs based on both Linear Feedback Shift Registers (LFSRs) and accumulators. An efficient algorithm for selecting reseeding points is also presented, which targets complete fault coverage and allows the trade-off between hardware overhead and test length to be exploited effectively. Experimental results show that the proposed method compares favorably to other known techniques with respect to test length and hardware implementation cost.
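
The sketch below models the ROM-less reseeding mechanism described above: at selected clock cycles, chosen bits of the TPG's next state are inverted, steering the generator into a new subsequence. The LFSR polynomial, the reseeding cycles, and the inversion masks are made-up values; selecting them for complete fault coverage is the task of the paper's algorithm and is not shown.

```python
# Sketch of ROM-less reseeding: at selected clock cycles, invert chosen bits
# of the LFSR's next state instead of loading a stored seed. The polynomial,
# reseeding cycles and inversion masks below are illustrative assumptions.
WIDTH = 8
TAPS = [7, 5, 4, 3]

# (cycle at which to reseed) -> XOR mask applied to the next state
RESEED_POINTS = {40: 0b0010_0110, 90: 0b1000_0001}

def lfsr_next(state):
    fb = 0
    for t in TAPS:
        fb ^= (state >> t) & 1
    return ((state >> 1) | (fb << (WIDTH - 1))) & ((1 << WIDTH) - 1)

def tpg_sequence(seed, length):
    """Test-per-clock pattern stream with on-the-fly reseeding."""
    state, patterns = seed, []
    for cycle in range(length):
        patterns.append(state)
        state = lfsr_next(state)
        if cycle in RESEED_POINTS:          # inverting selected next-state bits
            state ^= RESEED_POINTS[cycle]   # steers the TPG into a new subsequence
    return patterns

if __name__ == "__main__":
    pats = tpg_sequence(seed=0b1010_1100, length=128)
    print("patterns 39..42:", [format(p, "08b") for p in pats[39:43]])
```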

Journal ArticleDOI
TL;DR: New acceptance criteria for the DUT are suggested which solve some ambiguity problems arising if the classification of the D UT as good or bad is based on a few samples of the cross-correlation function.
Abstract: Pseudo-random testing techniques for mixed-signal circuits offer several advantages compared to explicit time-domain and frequency-domain test methods, especially in a BIST structure. To fully exploit these advantages, a suitable choice of the pseudo-random input parameters should be made, and the accuracy of the circuit response samples needed to reduce the risk of misclassification should be investigated. Here these issues are addressed for a testing scheme based on estimating the impulse response of the device under test (DUT) by means of input-output cross-correlation. Moreover, new acceptance criteria for the DUT are suggested which solve some ambiguity problems that arise if the classification of the DUT as good or bad is based on a few samples of the cross-correlation function. Examples of application of the proposed techniques to real cases are also shown in order to assess the impact of measurement system inaccuracies on the reliability of the test.
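
The estimation principle behind this scheme can be sketched in a few lines: drive an assumed device under test with a zero-mean pseudo-random stimulus and recover its impulse response from the input-output cross-correlation. The first-order discrete-time "DUT", the number of samples, and the pass threshold below are illustrative assumptions, not the mixed-signal circuits or acceptance criteria studied in the paper.

```python
# Sketch of impulse-response estimation by input-output cross-correlation,
# the principle behind the pseudo-random test scheme discussed above.
# The first-order discrete-time "DUT" and the threshold are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def dut(x, a=0.8):
    """Assumed device under test: y[n] = a*y[n-1] + (1-a)*x[n]."""
    y = np.zeros_like(x)
    for n in range(1, len(x)):
        y[n] = a * y[n - 1] + (1 - a) * x[n]
    return y

N = 4096
x = rng.choice([-1.0, 1.0], size=N)       # pseudo-random, zero-mean stimulus
y = dut(x)

# For white +/-1 input, R_xy(k) ~ h(k); estimate the first taps of h.
lags = 16
h_est = np.array([np.dot(x[:N - k], y[k:]) / (N - k) for k in range(lags)])

# Nominal impulse response of the assumed DUT for comparison.
h_ref = np.array([(1 - 0.8) * 0.8 ** k for k in range(lags)])

print("estimated:", np.round(h_est[:6], 3))
print("reference:", np.round(h_ref[:6], 3))
print("pass" if np.max(np.abs(h_est - h_ref)) < 0.05 else "fail")
```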

Journal ArticleDOI
TL;DR: A methodology to reduce specifications during specification testing of analog circuits is proposed and demonstrated; the results show that the achievable specification reduction depends on the testing confidence.
Abstract: Specification reduction can reduce test time and, consequently, test cost. In this paper, a methodology to reduce specifications during specification testing of analog circuits is proposed and demonstrated. It starts by deriving relationships between specifications and parameter variations of the circuit-under-test (CUT), and then reduces specifications by considering bounds on the parameter variations. A statistical approach that takes circuit fabrication process fluctuations into account is also employed, and the results show that the specification reduction depends on the testing confidence. The methodology is applied to a continuous-time state-variable benchmark filter circuit to demonstrate its effectiveness.

Journal ArticleDOI
TL;DR: A compact and efficient BIST circuit with diagnosis support and an automatic diagnostic system for embedded SRAM cores is proposed, which provides programmability for custom March algorithms with lower hardware cost.
Abstract: In this paper we propose a novel built-in self-test (BIST) design for embedded SRAM cores. Our contribution includes a compact and efficient BIST circuit with diagnosis support and an automatic diagnostic system. The diagnosis module of our BIST circuit can capture the error syndromes as well as fault locations for the purposes of repair and fault/failure analysis. In addition, our design provides programmability for custom March algorithms with lower hardware cost. The combination of the on-line programming mode and diagnostic system dramatically reduces the effort in design debugging and yield enhancement. We have designed and implemented test chips with our BIST design. Experimental results show that the area overhead of the proposed BIST design is only 2.4% for a 128 KB SRAM, and 0.65% for a 2 MB one.

Journal ArticleDOI
TL;DR: The thermal mapping of the silicon surface is used as a test observable for CMOS digital ICs, and two temperature-sensing strategies are presented.
Abstract: This paper treats the test of CMOS digital ICs using the thermal mapping of the silicon surface as a test observable. Two different temperature-sensing strategies are presented. The novel sensors developed are an on-chip CMOS Differential Temperature (DT) sensor and a Proportional to Absolute Temperature (PTAT) sensor. The sensors have been implemented in a standard 0.18 μm CMOS technology.

Journal ArticleDOI
TL;DR: A new software-based self- test methodology for system-on-chips (SoC) based on embedded processors that enables an on-chip embedded processor core to test for crosstalk in system-level interconnects by executing a self-test program in the normal operational mode of the SoC, thereby allowing at-speed testing of interconnect crosStalk defects, while eliminating the need for test overhead and the possibility of over-testing.
Abstract: In deep-submicron technologies, long interconnects play an ever-important role in determining the performance and reliability of core-based system-on-chips (SoCs). Crosstalk effects degrade the integrity of signals traveling on long interconnects and must be addressed during manufacturing testing. External testing for crosstalk is expensive due to the need for high-speed testers. Built-in self-test, while eliminating the need for a high-speed tester, may lead to excessive test overhead as well as overly aggressive testing. To address this problem, we propose a new software-based self-test methodology for system-on-chips (SoC) based on embedded processors. It enables an on-chip embedded processor core to test for crosstalk in system-level interconnects by executing a self-test program in the normal operational mode of the SoC, thereby allowing at-speed testing of interconnect crosstalk defects, while eliminating the need for test overhead and the possibility of over-testing. We have demonstrated the feasibility of this method by applying it to test the interconnects of a processor-memory system. The defect coverage was evaluated using a system-level crosstalk defect simulation method.
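
The kind of two-vector interconnect patterns such a self-test program would apply can be illustrated with the widely used maximal aggressor fault model, in which all aggressor lines switch opposite to the victim. This model is an assumption here; the paper's own self-test program, defect model, and coverage evaluation are not reproduced.

```python
# Sketch of two-vector crosstalk test patterns for an n-bit bus under the
# maximal aggressor fault model (an assumption; the paper's self-test
# program and defect model are not reproduced here). For each victim line,
# all aggressors switch in the opposite direction of the victim.
def ma_patterns(width):
    """Yield (v1, v2, desc): every aggressor switches opposite to the victim."""
    for victim in range(width):
        for rising in (True, False):
            aggr = ("1", "0") if rising else ("0", "1")   # aggressor transition
            vict = ("0", "1") if rising else ("1", "0")   # opposite victim transition
            v1 = [aggr[0]] * width
            v2 = [aggr[1]] * width
            v1[victim], v2[victim] = vict
            desc = (f"victim {victim} rising, aggressors falling" if rising
                    else f"victim {victim} falling, aggressors rising")
            yield "".join(v1), "".join(v2), desc
    # glitch tests, where the victim is held constant, would be added similarly

if __name__ == "__main__":
    for v1, v2, desc in ma_patterns(4):
        print(v1, "->", v2, " #", desc)
```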

Journal ArticleDOI
TL;DR: Experimental results presented in this paper demonstrate that the proposed method achieves the above objectives while also achieving higher fault coverages for most of the benchmark circuits considered.
Abstract: This paper presents a methodology to insert scan paths in a functional Register Transfer Level (RTL) specification of a design that can exploit existing functional paths between sequential elements in the original circuit for establishing scan chains. The primary objective for RTL scan insertion is to reduce the time taken for DFT, and thus reduce the time to market. Additionally, building scan chains at the functional RT-Level is expected to reduce the total area overhead introduced by full scan without compromising the fault coverage achieved. In addition, it often eliminates the delay associated with the additional multiplexer as a part of a conventional scan-cell in high performance designs. Experimental results presented in this paper demonstrate that the proposed method achieves the above objectives while also achieving higher fault coverages for most of the benchmark circuits considered.

Journal ArticleDOI
TL;DR: A new strategy for testing embedded cores using Test Access Mechanism (TAM) switches is introduced, along with a scheme for testing the interconnections between cores in parallel; results show significant optimization of area overhead as well as test time.
Abstract: The present paper introduces a new strategy for testing embedded cores using Test Access Mechanism (TAM) switches. An algorithm has been proposed for testing the cores using the TAM switch architecture. In addition, a scheme for testing the interconnections between cores in parallel is also presented. Experiments have been carried out on several synthetic SOC benchmarks. Results show significant optimization of area overhead as well as test time.

Journal ArticleDOI
TL;DR: This paper explains the criteria that a hardware random single-input-change (RSIC) generator for built-in self-test (BIST) must satisfy, and illustrates with an example how a bad result may be obtained if one of these criteria is not satisfied.
Abstract: The combination of higher quality requirements and the sensitivity of high-performance circuits to delay defects has led to an increasing emphasis on delay testing of VLSI circuits. As delay testing using external testers requires expensive ATE, built-in self test (BIST) is an alternative technique that can significantly reduce the test cost. It has been proven that Single Input Change (SIC) test sequences are more effective than classical Multiple Input Change (MIC) test sequences when high robust delay fault coverage is targeted. It has also been shown that random SIC (RSIC) test sequences achieve a higher fault coverage than random MIC (RMIC) test sequences when both robust and non-robust tests are under consideration; those experimental results were based on software generation of RSIC sequences, which are easily generated. Obviously, a hardware RSIC generator providing similar results can be built. However, this hardware generator must be carefully designed. In this paper, the criteria that must be satisfied for this purpose are explained. A solution is proposed and illustrated with an example. It is then shown that a bad result may be obtained if one of these criteria is not satisfied.
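
A software model of an RSIC sequence, of the kind the cited experiments were based on, is easy to state: each vector differs from its predecessor in exactly one randomly chosen input. The sketch below captures only that property; it does not model the hardware generator or the design criteria the paper establishes for it.

```python
# Software model of a random single-input-change (RSIC) sequence: each new
# vector differs from its predecessor in exactly one randomly chosen input.
# This mirrors the sequences discussed above; it does not model the hardware
# generator or the design criteria the paper establishes for it.
import random

def rsic_sequence(width, length, seed=0):
    rng = random.Random(seed)
    v = [rng.randint(0, 1) for _ in range(width)]
    seq = ["".join(map(str, v))]
    for _ in range(length - 1):
        flip = rng.randrange(width)   # exactly one input changes per vector
        v[flip] ^= 1
        seq.append("".join(map(str, v)))
    return seq

if __name__ == "__main__":
    for vec in rsic_sequence(width=6, length=8):
        print(vec)
```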

Journal ArticleDOI
TL;DR: An on-chip detector for the on-line testing of faults affecting clock signals and making them change with incorrect duty-cycle is proposed and features self-checking ability with respect to its possible internal faults belonging to a realistic set including stuck-ats, transistor stuck-ons, stuck-opens and resistive bridgings.
Abstract: This paper proposes an on-chip detector for the on-line testing of faults that affect clock signals and make them switch with an incorrect duty-cycle. Our scheme is particularly suitable for integration within Systems-On-a-Chip (SOCs), in order to avoid their possible incorrect operation because of faults affecting clock signals, thus addressing the extreme criticality of clock fault testing. In particular, our detector can be applied to clock signals within each SOC digital core, to the clock signals at the interface between the diverse cores, as well as to those driving the DFT and BIST structures used to perform the SOC test. Our scheme features self-checking ability with respect to its possible internal faults belonging to a realistic set including stuck-ats, transistor stuck-ons, stuck-opens and resistive bridgings.

Journal ArticleDOI
TL;DR: The results show that post-injection analysis is a promising approach for reducing the cost of coverage estimation for concurrent error detection mechanisms in microprocessors.
Abstract: We present an analytical technique that uses fault injection data for estimating the coverage of concurrent error detection mechanisms in microprocessors. A major problem in such estimations is that the coverage depends on the program executed by the microprocessor as well as the input sequence to the program. We propose a method that predicts the error coverage for a specified input sequence based on fault injection data obtained for another input sequence. Our results show that post-injection analysis is a promising approach for reducing the cost of coverage estimation.

Journal ArticleDOI
TL;DR: This work presents and empirically evaluate a technique to generate anti-random vectors that is computationally feasible for large input vectors and long sequences of tests and evaluates effectiveness of applying anti- random vectors for behavioral model verification using branch coverage as the testing criterion.
Abstract: Anti-random testing has proved useful in a series of empirical evaluations. The basic premise of anti-random testing is to choose new test vectors that are as far away from existing test inputs as possible. The distance measure is either Hamming distance or Cartesian distance. Unfortunately, this method essentially requires enumeration of the input space and computation of each input vector when used on an arbitrary set of existing test data. This prevents scale-up to large test sets and/or long input vectors. We present and empirically evaluate a technique to generate anti-random vectors that is computationally feasible for large input vectors and long sequences of tests. We also show how this fast anti-random test generation (FAR) can consider retained state (i.e., effects of subsequent inputs on each other). We evaluate the effectiveness of applying anti-random vectors for behavioral model verification using branch coverage as the testing criterion.
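
The core idea can be sketched compactly: pick the next vector so that its total Hamming distance to all previously applied vectors is maximal, which reduces to taking the minority bit value in each position. The sketch below is that simplification only; it is not the FAR technique itself, which additionally handles Cartesian distance, scalability, and retained state.

```python
# Sketch of the anti-random idea: choose each new vector to maximize its
# total Hamming distance to all previously applied vectors. Picking the
# minority bit value in every position does exactly that. This is a
# simplification for illustration; it is not the FAR algorithm itself.
def next_anti_random(existing):
    """Vector maximizing the summed Hamming distance to 'existing'."""
    width = len(existing[0])
    bits = []
    for i in range(width):
        ones = sum(v[i] == "1" for v in existing)
        bits.append("0" if ones > len(existing) - ones else "1")  # minority value
    return "".join(bits)

def total_hamming(vec, existing):
    return sum(a != b for v in existing for a, b in zip(vec, v))

if __name__ == "__main__":
    tests = ["0000", "1010", "0110"]
    nxt = next_anti_random(tests)
    print("next anti-random vector:", nxt,
          "total Hamming distance:", total_hamming(nxt, tests))
```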