
Showing papers on "Automatic test pattern generation published in 1994"


Journal ArticleDOI
TL;DR: A new approach to the unit testing of object-oriented programs, a set of tools based on this approach, and two case studies are described. The approach allows for substantial automation of many aspects of testing, including test case generation, test driver generation, test execution, and test checking.
Abstract: This article describes a new approach to the unit testing of object-oriented programs, a set of tools based on this approach, and two case studies. In this approach, each test case consists of a tuple of sequences of messages, along with tags indicating whether these sequences should put objects of the class under test into equivalent states and/or return objects that are in equivalent states. Tests are executed by sending the sequences to objects of the class under test, then invoking a user-supplied equivalence-checking mechanism. This approach allows for substantial automation of many aspects of testing, including test case generation, test driver generation, test execution, and test checking. Experimental prototypes of tools for test generation and test execution are described. The test generation tool requires the availability of an algebraic specification of the abstract data type being tested, but the test execution tool can be used when no formal specification is available. Using the test execution tools, case studies were performed involving the execution of tens of thousands of test cases with various sequence lengths, parameters, and combinations of operations. The relationships among the likelihood of detecting an error, sequence length, range of parameters, and relative frequency of various operations were investigated for priority queue and sorted-list implementations having subtle errors. In each case, long sequences tended to be more likely to detect the error, provided that the range of parameters was sufficiently large, and the likelihood of detecting an error tended to increase up to a threshold value as the parameter range increased.

345 citations
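The tuple-of-sequences idea above can be sketched in a few lines. This is a hedged illustration only, not the authors' actual tool: `run_test_case`, `SortedList`, and the equivalence lambda are invented names, and the real system derives such cases from an algebraic specification.

```python
# Illustrative sketch: a test case is a pair of message sequences plus a tag
# saying whether the resulting objects should end up in equivalent states.
# A user-supplied equivalence checker decides the verdict.

def run_test_case(cls, seq_a, seq_b, expect_equivalent, equivalent):
    """Apply two message sequences to fresh objects and compare final states."""
    a, b = cls(), cls()
    for method, args in seq_a:
        getattr(a, method)(*args)
    for method, args in seq_b:
        getattr(b, method)(*args)
    return equivalent(a, b) == expect_equivalent

class SortedList:
    def __init__(self):
        self.items = []
    def insert(self, x):
        self.items.append(x)
        self.items.sort()

# Inserting 2 then 1 should leave a state equivalent to inserting 1 then 2.
passed = run_test_case(
    SortedList,
    [("insert", (2,)), ("insert", (1,))],
    [("insert", (1,)), ("insert", (2,))],
    expect_equivalent=True,
    equivalent=lambda a, b: a.items == b.items,
)
```

A buggy implementation (say, one that forgets to re-sort) would fail such equivalence checks on long, randomized sequence pairs, which is what the case studies exploit.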


Journal ArticleDOI
Jacob Savir1, S. Patil1
TL;DR: It is shown that the broad-side method is inferior to the skewed-load method, another form of scan-based transition test; there is, however, merit in combining the skewed-load method with the broad-side method to achieve higher transition fault coverage.
Abstract: A broad-side delay test is a form of a scan-based delay test, where the first vector of the pair is scanned into the chain and the second vector of the pair is the combinational circuit's response to this first vector. This delay test form is called "broad-side" since the second vector of the delay test pair is provided in a broad-side fashion, namely through the logic. This paper concentrates on several issues concerning broad-side delay test. It analyzes the effectiveness of broad-side delay test; shows how to compute broad-side delay test vectors; shows how to generate broad-side delay test vectors using existing tools that were aimed at stuck-at faults; shows how to compute the detection probability of a transition fault using broad-side pseudo-random patterns; shows the results of experiments conducted on the ISCAS sequential benchmarks; and discusses some concerns of the broad-side delay test strategy. It is shown that the broad-side method is inferior to the skewed-load method, which is another form of scan-based transition test. There is, however, a merit in combining the skewed-load method with the broad-side method. This combined method will achieve a higher transition fault coverage than each individual method alone.

296 citations


Proceedings ArticleDOI
Jacob Savir1, S. Patil1
25 Apr 1994
TL;DR: This paper concentrates on generation of broad-side delay test vectors; shows the results of experiments conducted on the ISCAS sequential benchmarks, and discusses some concerns of the broad-side delay test strategy.
Abstract: A broad-side delay test is a form of a scan-based delay test, where the first vector of the pair is scanned into the chain, and the second vector of the pair is the combinational circuit's response to this first vector. This delay test form is called "broad-side" since the second vector of the delay test pair is provided in a broad-side fashion, namely through the logic. This paper concentrates on generation of broad-side delay test vectors; shows the results of experiments conducted on the ISCAS sequential benchmarks, and discusses some concerns of the broad-side delay test strategy.

174 citations


Proceedings ArticleDOI
06 Jun 1994
TL;DR: A genetic algorithm (GA) framework for sequential circuit test generation that evolves candidate test vectors and sequences, using a fault simulator to compute the fitness of each candidate test.
Abstract: Test generation using deterministic fault-oriented algorithms is highly complex and time-consuming. New approaches are needed to augment the existing techniques, both to reduce execution time and to improve fault coverage. In this work, we describe a genetic algorithm (GA) framework for sequential circuit test generation. The GA evolves candidate test vectors and sequences, using a fault simulator to compute the fitness of each candidate test. Various GA parameters are studied, including alphabet size, fitness function, generation gap, population size, and mutation rate, as well as selection and crossover schemes. High fault coverages were obtained for most of the ISCAS89 sequential benchmark circuits, and execution times were significantly lower than in a deterministic test generator in most cases.

162 citations
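As a rough sketch of such a GA loop, here is a minimal, hedged illustration. Assumptions: the fault simulator is mocked by a toy `fault_simulate` in which bit i of the vector "detects" fault i, and the truncation selection with one-point crossover shown is just one of the many parameter choices the paper studies.

```python
import random

random.seed(1)

def fault_simulate(vector, undetected):
    """Toy stand-in for a fault simulator: bit i of the vector detects fault i."""
    return {f for f in undetected if vector[f] == 1}

def evolve_vector(n_inputs, undetected, pop_size=8, generations=10, p_mut=0.05):
    """Evolve one test vector; fitness = number of undetected faults it covers."""
    fitness = lambda v: len(fault_simulate(v, undetected))
    pop = [[random.randint(0, 1) for _ in range(n_inputs)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]              # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_inputs)     # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([1 - g if random.random() < p_mut else g for g in child])
        pop = parents + children
    return max(pop, key=fitness)

best = evolve_vector(n_inputs=8, undetected=set(range(8)))
```

In a real ATPG setting the fitness call is the expensive step, which is why the choice of population size and generation gap matters so much for run time.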


Proceedings ArticleDOI
02 Oct 1994
TL;DR: A new ATPG algorithm has been proposed that reduces average heat dissipation (between successive test vectors) during test application to permit safe and inexpensive testing of low power circuits and bare dies that would otherwise require expensive heat removal equipment for testing at high speeds.
Abstract: A new ATPG algorithm has been proposed that reduces average heat dissipation (between successive test vectors) during test application. The objective is to permit safe and inexpensive testing of low power circuits and bare dies that would otherwise require expensive heat removal equipment for testing at high speeds. Three new functions, namely transition controllability, observability and test generation costs, have been defined. It has been shown that the transition test generation cost is the minimum number of transitions required to test the corresponding stuck-at fault in fanout free circuits. This cost function is used for target fault selection while the other two functions are used to guide the backtrace and objective selection procedures of PODEM. The tests generated by the proposed ATPG decrease heat dissipation during test application by a factor of 2-23 for benchmark circuits.

158 citations
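To make the "heat between successive test vectors" objective concrete: switching activity grows with the number of input transitions, which for fully specified vectors is the Hamming distance between consecutive patterns. The sketch below only shows this quantity and a simple greedy reordering that reduces it; the paper's actual contribution goes further, building transition controllability/observability costs into PODEM itself, and the helper names here are invented.

```python
def hamming(a, b):
    """Number of bit positions in which two equal-length vectors differ."""
    return sum(x != y for x, y in zip(a, b))

def total_transitions(vectors):
    """Sum of input transitions between successive test vectors."""
    return sum(hamming(u, v) for u, v in zip(vectors, vectors[1:]))

def greedy_reorder(vectors):
    """Nearest-neighbour ordering: repeatedly append the closest remaining vector."""
    remaining = list(vectors)
    order = [remaining.pop(0)]
    while remaining:
        nxt = min(remaining, key=lambda v: hamming(order[-1], v))
        remaining.remove(nxt)
        order.append(nxt)
    return order

tests = ["0000", "1111", "0001", "1110"]
reordered = greedy_reorder(tests)  # ["0000", "0001", "1111", "1110"]
# total transitions drop from 11 (original order) to 5 (reordered)
```

Reordering alone cannot change which faults are detected, which is why generating inherently low-transition tests (as the paper does) is the stronger approach.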


Proceedings ArticleDOI
06 Nov 1994
TL;DR: It is shown that Recursive Learning can derive “good” Boolean divisors justifying the effort to attempt a Boolean division, and for 9 out of 10 ISCAS-85 benchmark circuits, the tool HANNIBAL obtains smaller circuits than the well-known synthesis system SIS.
Abstract: This paper proposes a new approach to multi-level logic optimization based on ATPG (Automatic Test Pattern Generation). Previous ATPG-based methods for logic minimization suffered from the limitation that they were quite restricted in the set of possible circuit transformations. We show that the ATPG-based method presented here allows (in principle) the transformation of a given combinational network C into an arbitrary, structurally different but functionally equivalent combinational network C'. Furthermore, powerful heuristics are presented in order to decide what network manipulations are promising for minimizing the circuit. By identifying indirect implications between signals in the circuit, transformations can be derived which are “good” candidates for the minimization of the circuit. In particular, it is shown that Recursive Learning can derive “good” Boolean divisors justifying the effort to attempt a Boolean division. For 9 out of 10 ISCAS-85 benchmark circuits our tool HANNIBAL obtains smaller circuits than the well-known synthesis system SIS.

115 citations


Proceedings ArticleDOI
06 Nov 1994
TL;DR: This paper proposes several new ways in which one or more redundant gates or wires can be added to a network, and addresses the problem of efficient redundancy computation, which makes it possible to eliminate many unnecessary redundancy tests.
Abstract: In this paper, we discuss the problem of optimizing a multi-level combinational Boolean network. Our techniques apply a sequence of local perturbations and modifications of the network, guided by automatic test pattern generation (ATPG) based reasoning. In particular, we propose several new ways in which one or more redundant gates or wires can be added to a network. We show how to identify gates which are good candidates for local functionality change. Furthermore, we discuss the problem of adding and removing two wires, neither of which alone is redundant, but which, when jointly added/removed, do not affect the functionality of the network. We also address the problem of efficient redundancy computation, which makes it possible to eliminate many unnecessary redundancy tests. We have performed experiments on MCNC benchmarks and compared the results to those of misII and RAMBO. Experimental results are very encouraging.

95 citations


Journal ArticleDOI
TL;DR: A logic-level characterization and fault model for crosstalk faults is presented; it is shown how a fault list of such faults can be generated from layout data, and an automatic test pattern generation procedure for them is given.
Abstract: The continuous reduction of the device size in integrated circuits and the increase in the switching rate cause parasitic capacitances between conducting layers to become dominant and cause logic errors in the circuits. Therefore, capacitive couplings can be considered as potential logic faults. Classical fault models do not cover this class of faults. This paper presents a logic level characterization and fault model for crosstalk faults. The authors also show how a fault list of such faults can be generated from the layout data, and give an automatic test pattern generation procedure for them.

94 citations


Proceedings ArticleDOI
02 Oct 1994
TL;DR: An approach based on Genetic Algorithms, suitable for even the largest benchmark circuits, is described together with a prototype system named GATTO, and its effectiveness (in terms of result quality and CPU time requirements) on previously unmanageable circuits is illustrated.
Abstract: This paper is concerned with the question of automated test pattern generation for large synchronous sequential circuits and describes an approach based on Genetic Algorithms suitable for even the largest benchmark circuits, together with a prototype system named GATTO. Its effectiveness (in terms of result quality and CPU time requirements) for circuits previously unmanageable is illustrated. The flexibility of the new approach enables users to easily trade off fault coverage and CPU time to suit their needs.

92 citations


Proceedings ArticleDOI
06 Nov 1994
TL;DR: Using the technique presented here an efficient static test set for analog and mixed-signal ICs can be constructed, reducing both the test time and the packaging cost.
Abstract: Static tests are key in reducing the current high cost of testing analog and mixed-signal ICs. A new DC test generation technique for detecting catastrophic failures in this class of circuits is presented. To include the effect of parameter tolerances during testing, the test generation problem is formulated as a minimax optimization problem and solved iteratively as successive linear programming problems. An analytical fault modeling technique, based on manufacturing defect statistics, is used to derive the fault list for test generation. Using the technique presented here, an efficient static test set for analog and mixed-signal ICs can be constructed, reducing both the test time and the packaging cost.

86 citations


Proceedings ArticleDOI
02 Oct 1994
TL;DR: This paper describes the design of an efficient weighted random pattern system and various heuristics that affect the performance of the system are discussed and an experimental evaluation is provided.
Abstract: This paper describes the design of an efficient weighted random pattern system. The performance of the system is measured by the number of weight sets and the number of weighted random patterns required for high fault coverage. Various heuristics that affect the performance of the system are discussed and an experimental evaluation is provided.

Journal ArticleDOI
TL;DR: A separation of the test generation process into two phases, path analysis and value analysis, is proposed to satisfy the internal test goals; results show that the approach is very effective in achieving complete automation for high-level test generation.
Abstract: Hierarchically designed microprocessor-like VLSI circuits have complex data paths and embedded control machines to execute instructions. When a test pattern has to be applied to the input of an embedded module, determining a sequence of instructions which will apply this pattern and propagate the fault effects is extremely difficult. After the instruction sequence is derived, assigning values to all interior lines without conflicts is also very difficult. In this paper, we propose a separation of the test generation process into two phases: path analysis and value analysis. In the path analysis phase, a new methodology for automatic assembly of a sequence of instructions is proposed to satisfy the internal test goals. In the value analysis phase, an equation-solving algorithm is used to compute an exact value solution for all interior lines. This new ATPG methodology, containing techniques for both path and value analysis, forms a complete solution for a variety of microprocessor-like circuits. The approach has been implemented and evaluated on six high-level circuits. The results show that it is very effective in achieving complete automation for high-level test generation.

Proceedings ArticleDOI
02 Oct 1994
TL;DR: The testability of analog circuits in the frequency domain is studied by introducing the analog fault observability concept; the number of measured output parameters necessary for testing is reduced to one or two without a loss in fault coverage.
Abstract: A technique for multifrequency test vector generation using testability analysis and output response detection by adding a translation built-in self test (T-BIST) is presented. We study the testability of analog circuits in the frequency domain by introducing the analog fault observability concept. This testability evaluation will be helpful in generating the test vectors and for selecting test nodes for the various types of faults. In the proposed approach test vector generation and test point selection allow a significant reduction in the number of measured output parameters necessary for testing (to one or two parameters) without a loss in fault coverage. The T-BIST approach consists of verifying whether or not the tested parameters for the given test vector are within the acceptance range. This technique is based on the conversion of each detected parameter to a DC voltage. Results are presented for different practical filters for which a complete test solution was achieved.

Patent
04 May 1994
TL;DR: In this article, a test vector generation and fault simulation (TGFS) comparator is implemented in the PLD or FPGA consisting of a partitioned sub-circuit configuration, and a multiplicity of copies of the same configuration each with a single and different fault introduced in it.
Abstract: An electronic circuit test vector generation and fault simulation apparatus is constructed with programmable logic devices (PLD) or field programmable gate array (FPGA) devices and messaging buses carrying data and function calls. A test generation and fault simulation (TGFS) comparator is implemented in the PLD or FPGA, consisting of a partitioned sub-circuit configuration and a multiplicity of copies of the same configuration, each with a single and different fault introduced in it. The method for test vector generation involves determining test vectors that flag each of the faults, as determined by a comparison of the outputs of the good and single-fault configurations. Further, the method handles both combinational and sequential circuits, which require generating a multiplicity of test vectors for each fault. The successful test vectors are then propagated to the inputs and outputs of the electronic circuit, through driver and receiver sub-circuits modeled via their corresponding TGFS comparators, by means of input/output/function messaging buses. A method of fault simulation utilizing the TGFS comparators under a fault-specific approach determines the fault coverage of the test vectors.

Proceedings ArticleDOI
28 Feb 1994
TL;DR: This work uses simple GAs to generate populations of candidate test vectors and to select the best vector to apply in each time frame, using a sequential circuit fault simulator to evaluate the fitness of each candidate vector.
Abstract: In this work we investigate the effectiveness of genetic algorithms (GAs) in the test generation process. We use simple GAs to generate populations of candidate test vectors and to select the best vector to apply in each time frame. A sequential circuit fault simulator is used to evaluate the fitness of each candidate vector, allowing the test generator to be used for both combinational and sequential circuits. We experimented with various GA parameters, namely population size, number of generations, mutation rate, and selection and crossover schemes. For the ISCAS85 combinational benchmark circuits, 100% of testable faults were detected in six of the ten circuits used, and very compact test sets were generated. Good results were obtained for many of the ISCAS89 sequential benchmark circuits, and execution times were significantly lower than in a deterministic test generator in most cases.

Journal ArticleDOI
Sandip Kundu1
TL;DR: A diagnosis system is described that can diagnose faults in a scan chain, so that the manufacturing process or physical design can be fixed to improve yield.
Abstract: Testing screens for good chips. However, when test fallout is high (low yield), it becomes necessary to diagnose faults so that the manufacturing process or physical design can be fixed to improve yield. Several scan-based diagnostic schemes are used in industry. They work when the scan chain itself is fault free. In this paper, we describe a diagnosis system that can diagnose faults in a scan chain.

Journal ArticleDOI
TL;DR: Test statement insertion (TSI), an alternative to test point insertion and partial scan, is used to modify the circuit based on the selected test points; the major advantage of TSI is low pin count and short test application time compared to test point insertion and partial scan.
Abstract: In this paper, a behavioral synthesis for testability system is presented. In this system, a testability modifier is connected to an existing behavioral level synthesis program, which accepts a circuit's behavioral description in C or VHDL as input. The outline of the system is as follows: (1) a testability analyzer is first applied to identify the hard-to-test areas in the circuit from the behavioral description; (2) a selection process is then applied to select test points or partial scan flip-flops. Selection is based on behavioral information rather than low-level structural description. This allows test point insertion or partial scan usage on circuits described as an interconnection of high level modules; (3) test statement insertion (TSI), an alternative to test point insertion and partial scan, is used to modify the circuit based on the selected test points. The major advantage of using TSI is a low pin count and a short test application time compared to test point insertion and partial scan. In addition, TSI can be applied at the early design phase. This approach was implemented in a computer program, and applied to several sample circuits generated by a synthesis tool. The results are also presented.

Proceedings ArticleDOI
02 Oct 1994
TL;DR: A testability analysis procedure for complex analogue circuits is presented, based on layout-dependent fault models extracted from process defect statistics; it is concluded that the fault coverage of the existing production test can be improved by a supplementary test based on power supply variations.
Abstract: A testability analysis procedure for complex analogue circuits is presented based on layout dependent fault models extracted from process defect statistics. The technique has been applied to a mixed-signal phase locked loop circuit and a number of test methodologies have been evaluated including the existing production test. It is concluded that the fault coverage achieved by this test can be improved by the use of a supplementary test based on power supply variations.

Journal ArticleDOI
TL;DR: The issue of test selection with respect to a general distributed test architecture containing distributed interfaces is investigated, along with the fact that the methods given in the literature cannot ensure the same fault coverage as the corresponding original testing methods.

Journal ArticleDOI
25 Apr 1994
TL;DR: This work analyzes the probability that an arbitrary pseudo-random test sequence of short length detects all faults, and chooses the shortest subsequence that includes test patterns for all the faults of interest, hence resulting in 100% fault coverage.
Abstract: When using Built-In Self Test (BIST) for testing VLSI circuits, a major concern is the generation of proper test patterns that detect the faults of interest. Usually a linear feedback shift register (LFSR) is used to generate test patterns. We first analyze the probability that an arbitrary pseudo-random test sequence of short length detects all faults. The term short is relative to the probability of detecting the fault having the fewest test patterns. We then show how to guide the search for an initial state (seed) for an LFSR with a given primitive feedback polynomial so that all the faults of interest are detected by a minimum-length test sequence. Our algorithm is based on finding the location of test patterns in the sequence generated by this LFSR. This is accomplished using the theory of discrete logarithms. We then select the shortest subsequence that includes test patterns for all the faults of interest, hence resulting in 100% fault coverage.
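As background for the LFSR mechanics, here is a minimal sketch under the assumption of a 4-bit Fibonacci LFSR with primitive feedback polynomial x^4 + x + 1; the paper's actual contribution, locating test patterns in the cycle via discrete logarithms, is not reproduced here, and the function names are invented.

```python
def lfsr_states(seed, taps=(3, 0), width=4):
    """Yield successive states of a Fibonacci LFSR until the sequence repeats."""
    state, seen = seed, set()
    while state not in seen:
        seen.add(state)
        yield state
        fb = 0
        for t in taps:                      # feedback bit = XOR of tapped bits
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)

# With a primitive feedback polynomial the sequence is maximal-length:
# every nonzero 4-bit state appears exactly once before the cycle repeats.
patterns = list(lfsr_states(0b0001))  # 15 distinct nonzero patterns
```

Seed selection then amounts to choosing where in this fixed cycle the test sequence starts, which is why knowing each target pattern's position (its discrete logarithm) lets one pick the seed minimizing the sequence length.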

Proceedings ArticleDOI
25 Apr 1994
TL;DR: It was found that several multilevel, synthesized, robust path delay testable circuits require impractically long pseudo-random test sequences; weighted random testing techniques developed for robust path delay testing are therefore applied.
Abstract: The importance of delay testing is growing, especially for high-speed circuits. Delay testing using automatic test equipment is expensive. Built-in self-test can significantly reduce the cost of comprehensive delay testing by replacing the test equipment. It was found that several multilevel, synthesized, robust path delay testable circuits require impractically long pseudo-random test sequences. Weighted random testing techniques have been developed for robust path delay testing. The proposed technique is successfully applied to these circuits, and 100% robust path delay fault coverage is obtained using only 1-2 sets of weights.

Proceedings ArticleDOI
25 Apr 1994
TL;DR: The proposed algorithm indicates the set of adequate test frequencies and test nodes to increase fault observability and analyzes the case of single fault and of double and multiple faults.
Abstract: Testability analysis in analog circuits is an important task and a desirable approach for producing testable complex systems. In past years, most of the testability evaluation methods presented were based on measures of the degree of solvability of the fault diagnosis equations. In this paper, we study the testability of analog circuits in the frequency domain by introducing the analog fault observability concept. The proposed algorithm indicates the set of adequate test frequencies and test nodes to increase fault observability. We analyze the cases of single, double, and multiple faults. Concepts such as fault masking, fault dominance, fault equivalence, and non-observable faults in analog circuits are defined and then used to evaluate testability. Finally, some experimental results are provided.

Journal ArticleDOI
TL;DR: The authors present ScanBist, a low-overhead, scan-based built-in self-test method, along with its performance in several designs, and a novel clock synchronization scheme that allows at-speed testing of circuits.
Abstract: The authors present ScanBist, a low-overhead, scan-based built-in self-test method, along with its performance in several designs. A novel clock synchronization scheme allows at-speed testing of circuits. This design allows the testing of circuits operating at more than one frequency while retaining the combinational character of the circuit to be analyzed. We can therefore apply scan patterns that will exercise the circuit under test at the system speed, potentially providing a better coverage of delay faults when compared to other self-test methods. Modifications to an existing transition fault simulator account for cases where inputs originating from scan registers clocked at different frequencies drive a gate. We claim to detect transition faults only if the transition originates from the inputs driven by the highest frequency clock. ScanBist is useful at all levels of system packaging assuming that a standard TAP provides the control and boundary scan isolates the circuit from primary inputs and outputs during BIST mode.


Proceedings ArticleDOI
06 Jun 1994
TL;DR: Experimental results show that designs generated by this approach are testable in a highly concurrent manner.
Abstract: The testability of a VLSI design is strongly affected by its register-transfer level (RTL) structure. Since the high-level synthesis process determines the RTL structure, it is necessary to consider testability during high-level synthesis. A synthesis system composed of scheduling and binding components minimizes the number of hardware sharing conflicts between tests in the test schedule. Novel test conflict estimates are used to direct the synthesis process. The test conflict estimation is based on examination of the interconnect structure of the partial design state during synthesis. Test conflict estimates enable our synthesis system to select design options which increase test concurrency, thereby decreasing test time. Experimental results show that designs generated by this approach are testable in a highly concurrent manner.

Proceedings ArticleDOI
06 Nov 1994
TL;DR: This paper proposes a correction technique for simulation-based ATPG based on identifying the diverging state and on computing a fault cluster (faults close to each other) which has been used to generate tests with very high fault coverage.
Abstract: Simulation-based test vector generators require much less computer time than deterministic ATPG, but they generate longer test sequences and sometimes achieve lower fault coverage. This is due to divergence in the search process. In this paper, we propose a correction technique for simulation-based ATPG. The technique is based on identifying the diverging state and on computing a fault cluster (faults close to each other). A set of candidate faults from the cluster is targeted with a deterministic ATPG, and the resulting test sequence is used to restart the search process of the simulation-based technique. This process is repeated until all faults are detected or proven to be redundant/untestable. The program implementing this approach has been used to generate tests with very high fault coverage, and runs about 10 times faster than traditional deterministic techniques with very good test quality in terms of test length and fault coverage.

Proceedings ArticleDOI
23 Sep 1994
TL;DR: This paper presents a testability improvement method for digital systems described in VHDL behavioral specification based on testability analysis at register-transfer (RT) level which reflects test pattern generation costs, fault coverage and test application time.
Abstract: This paper presents a testability improvement method for digital systems described in VHDL behavioral specification. The method is based on testability analysis at register-transfer (RT) level, which reflects test pattern generation costs, fault coverage and test application time. The testability is measured by controllability and observability, and determined by the structure of a design, the depth from I/O ports and the functional units used. In our approach, hard-to-test parts are detected by a testability analysis algorithm and transformed by some known DFT techniques. Our experimental results show that testability improvement transformations guided by the RT level testability analysis have a strong correlation to ATPG results at gate level.

Proceedings ArticleDOI
05 Jan 1994
TL;DR: A novel fault independent algorithm for redundancy identification in combinational circuits based on a simple concept that a fault which requires an illegal combination of values as a necessary condition for its detection is undetectable and hence redundant is presented.
Abstract: This paper presents a novel fault independent algorithm for redundancy identification (FIRE) in combinational circuits. The algorithm is based on a simple concept that a fault which requires an illegal combination of values as a necessary condition for its detection is undetectable and hence redundant. It uses implications to find a subset of such faults whose detection requires conflicts on certain lines in the circuit. Our results on benchmark circuits indicate that we find a large number of redundancies, much faster when compared to a test-generation-based approach for redundancy identification.
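The conflict idea can be seen on a toy circuit. This is only a brute-force illustration of the principle: FIRE itself uses implications rather than enumeration, and the circuit below is invented for the example.

```python
# Toy circuit: y = a AND (NOT a). Detecting y stuck-at-0 requires driving
# y to 1, which would need the illegal value combination a = 1 and a = 0
# simultaneously -- so by the FIRE principle the fault is undetectable,
# i.e. redundant. Brute force stands in for the paper's implication analysis.

def y(a):
    return a & (1 - a)

def sa0_detectable(circuit, input_space):
    """y stuck-at-0 is detectable iff some input vector drives y to 1."""
    return any(circuit(a) for a in input_space)

redundant = not sa0_detectable(y, (0, 1))
```

FIRE's gain comes from never enumerating inputs: it propagates the value conflict (a line needed at both 0 and 1) directly through implications, so whole sets of such redundant faults are identified at once.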

Proceedings ArticleDOI
02 Oct 1994
TL;DR: This paper introduces a novel technique to transform behavioral specifications, such that an existing behavioral test synthesis system can generate area-efficient, testable designs with significantly lower partial scan overhead.
Abstract: Recently, several high level synthesis approaches have been proposed to synthesize testable data paths from behavioral specifications. This paper introduces a novel technique to transform behavioral specifications, such that an existing behavioral test synthesis system can generate area-efficient, testable designs with significantly lower partial scan overhead. Experimental results demonstrate the significant savings in partial scan overhead when the transformation is applied before using the behavioral test synthesis system to synthesize 100% test-efficient designs.

Journal ArticleDOI
Jacob Savir1, S. Patil1
TL;DR: This paper concentrates on generation of broad-side delay test vectors; shows the results of experiments conducted on the ISCAS sequential benchmarks, and discusses some concerns of the broad-side delay test strategy.
Abstract: A broad-side delay test is a form of a scan-based delay test, where the first vector of the pair is scanned into the chain, and the second vector of the pair is the combinational circuit's response to this first vector. This delay test form is called "broad-side" since the second vector of the delay test pair is provided in a broad-side fashion, namely through the logic. This paper concentrates on generation of broad-side delay test vectors; shows the results of experiments conducted on the ISCAS sequential benchmarks, and discusses some concerns of the broad-side delay test strategy.