
Showing papers on "Fault coverage published in 1991"


Proceedings ArticleDOI
26 Oct 1991
TL;DR: Heuristics to aid the derivation of small test sets that detect single stuck-at faults in combinational logic circuits are proposed and can be added to existing test pattern generators without compromising fault coverage.
Abstract: Heuristics to aid the derivation of small test sets that detect single stuck-at faults in combinational logic circuits are proposed. The heuristics can be added to existing test pattern generators without compromising fault coverage. Experimental results obtained by adding the proposed heuristics to a simple PODEM procedure and applying it to the ISCAS-85 and fully-scanned ISCAS-89 benchmark circuits are presented to substantiate the effectiveness of the proposed heuristics.
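For orientation, the kind of baseline such heuristics complement can be sketched in a few lines. Below is a minimal reverse-order compaction pass, a standard post-processing step rather than the paper's own heuristics; `detects` is a hypothetical fault-simulator hook returning the set of faults a vector detects.

```python
def reverse_order_compaction(vectors, detects):
    """Keep only vectors that add coverage when replayed in reverse order."""
    kept, covered = [], set()
    for v in reversed(vectors):
        new_faults = detects(v) - covered
        if new_faults:              # vector contributes new detections
            kept.append(v)
            covered |= new_faults
    return kept                     # same fault coverage, fewer vectors
```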

237 citations


Proceedings ArticleDOI
26 Oct 1991
TL;DR: It is shown that reasonable predictions of quality level are possible for the functional tests, but that scan tests produce significantly worse quality levels than predicted; apparent clustering of defects resulted in very good quality levels for fault coverages less than 99%.
Abstract: This paper discusses the use of stuck-at fault coverage as a means of determining quality levels. Data from a part tested with both functional and scan tests is analyzed and compared to three existing theories. It is shown that reasonable predictions of quality level are possible for the functional tests, but that scan tests produce significantly worse quality levels than predicted. Apparent clustering of defects resulted in very good quality levels for fault coverages less than 99%.
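A widely cited way to turn stuck-at fault coverage into a quality-level prediction is the Williams-Brown model DL = 1 - Y^(1-T); whether it is among the three theories compared in the paper is not stated here, so treat the following sketch (and its 50% yield) as purely illustrative.

```python
def defect_level(yield_fraction: float, fault_coverage: float) -> float:
    """Williams-Brown model: DL = 1 - Y**(1 - T).

    yield_fraction: process yield Y (0..1)
    fault_coverage: stuck-at fault coverage T (0..1)
    Returns the expected fraction of defective parts among those
    that pass the test (the defect level).
    """
    return 1.0 - yield_fraction ** (1.0 - fault_coverage)

# Example with an assumed 50% yield: raising coverage from 0.95 to 0.99
# cuts the predicted defect level from ~3.4% to ~0.7%.
print(defect_level(0.5, 0.95))  # ~0.0341
print(defect_level(0.5, 0.99))  # ~0.0069
```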

182 citations


Proceedings ArticleDOI
26 Oct 1991
TL;DR: This paper simulates complete single stuck-at test sets against a low-level model of bridge defects, showing that an unacceptably high percentage of such defects are not detected by the complete stuck-at test sets.
Abstract: Two approaches have been used to balance the cost of generating effective tests for ICs and the need to increase the quality level of shipped ICs. The first approach favors using high-level fault models to reduce test generation costs, and the second approach favors the use of low-level, technology-specific fault models that lead to high test generation costs, but increased defect coverage in the tested circuits. In this paper we simulate complete single stuck-at test sets against a low-level model of bridge defects, showing that an unacceptably high percentage of such defects are not detected by the complete stuck-at test sets. Next, we show how low-level bridge fault models can be incorporated into high-level test generation. Finally, we describe our system for generating effective tests for bridge faults and report on its performance.
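As an illustration of the simulation step described above, the sketch below decides whether a test set detects a two-node wired-AND bridge. `eval_circuit` is a hypothetical gate-level simulator that returns net values and primary outputs and accepts forced net values; feedback and resistive bridges are ignored for brevity.

```python
def detects_wired_and_bridge(tests, eval_circuit, net_a, net_b):
    for vector in tests:
        good_nets, good_out = eval_circuit(vector)
        forced = good_nets[net_a] & good_nets[net_b]   # wired-AND value
        _, bad_out = eval_circuit(vector,
                                  force={net_a: forced, net_b: forced})
        if bad_out != good_out:
            return True          # some primary output deviates
    return False                 # bridge escapes this test set
```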

154 citations


Journal ArticleDOI
TL;DR: The concept of sensitivity is discussed, and a fault/failure model that accounts for fault location is presented, and the relationship of the approach to testability is considered.
Abstract: Sensitivity analysis, which estimates the probability that a program location can hide a failure-causing fault, is addressed. The concept of sensitivity is discussed, and a fault/failure model that accounts for fault location is presented. Sensitivity analysis requires that every location be analyzed for three properties: the probability of execution occurring, the probability of infection occurring, and the probability of propagation occurring. One type of analysis is required to handle each part of the fault/failure model. Each of these analyses is examined, and the interpretation of the resulting three sets of probability estimates for each location is discussed. The relationship of the approach to testability is considered.
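A rough Monte Carlo reading of the three analyses might look like the sketch below. The hooks `run`, `mutate_at`, and `perturb_at` are hypothetical stand-ins for instrumented execution, syntactic mutation at the location, and data-state perturbation, respectively; the real analyses are considerably more careful.

```python
import random

def sensitivity_estimates(location, inputs, run, mutate_at, perturb_at, n=1000):
    executed = infected = propagated = 0
    for _ in range(n):
        x = random.choice(inputs)
        trace = run(x)
        if location not in trace.visited:
            continue
        executed += 1
        if mutate_at(location, x) != trace.state_after(location):
            infected += 1                  # mutant changed the data state
        if perturb_at(location, x) != trace.output:
            propagated += 1                # perturbation reached an output
    e = executed / n
    i = infected / executed if executed else 0.0
    p = propagated / executed if executed else 0.0
    return e, i, p                         # execution, infection, propagation
```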

129 citations


Proceedings ArticleDOI
11 Nov 1991
TL;DR: The difficult problem of identifying the equivalence of two faults, analogous to the problem of redundancy identification in ATPG, has been solved, and the efficiency of the algorithm is demonstrated by experimental results for a set of benchmark circuits.
Abstract: The authors present an efficient algorithm for the generation of diagnostic test patterns which distinguish between two arbitrary single stuck-at faults. The algorithm is able to extend a given set of test patterns which is generated from the viewpoint of fault detection to a diagnostic test pattern set with a diagnostic resolution down to a fault equivalence class. The difficult problem of identifying the equivalence of two faults, analogous to the problem of redundancy identification in ATPG, has been solved. The efficiency of the algorithm is demonstrated by experimental results for a set of benchmark circuits. DIATEST, the implementation of the algorithm, either generates diagnostic test patterns for all distinguishable pairs of faults or identifies pairs of faults as being equivalent for each of the benchmark circuits.
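The core notion is easy to state: a vector distinguishes two faults exactly when the circuit's responses under the two faults differ. A minimal sketch, with `simulate(vector, fault)` a hypothetical single-fault simulator returning the output response:

```python
def distinguishing_pairs(tests, faults, simulate):
    undistinguished = {(f, g) for i, f in enumerate(faults)
                              for g in faults[i + 1:]}
    for v in tests:
        resp = {f: simulate(v, f) for f in faults}
        undistinguished = {(f, g) for (f, g) in undistinguished
                           if resp[f] == resp[g]}
    # Pairs left over either need extra diagnostic vectors or are
    # equivalent (no input can ever separate them).
    return undistinguished
```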

129 citations


Proceedings ArticleDOI
11 Nov 1991
TL;DR: The authors propose a fault oriented partial scan design methodology to be performed as a sequel to test generation by analytically selecting only 10-20% of the flip-flops which are most likely to improve the quality of test generation.
Abstract: The authors propose a fault oriented partial scan design methodology to be performed as a sequel to test generation. Given the cost of converting each flip-flop to a scanned flip-flop and an overall bound on the cost of the scan design, the program OPUS-2 selects a set of flip-flops which are most likely to improve the quality of test generation. The expected improvement in testability is modeled by profit functions quantifying the reduction in weighted cycles, or the reduction in SCOAP values at hard-to-detect fault sites. Experiments performed on ISCAS89 sequential benchmark circuits show that, by analytically selecting only 10-20% of the flip-flops, the circuits can be tested to the same level of quality as a fully scanned circuit. The advantage of the proposed method is that the highest possible fault coverage can be achieved while limiting the cost of scan to a user-specified bound.
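The selection step can be pictured as a knapsack-style greedy loop; the sketch below uses a simple profit-per-cost ordering, which is an illustrative policy rather than necessarily OPUS-2's exact procedure.

```python
def select_scan_flip_flops(flip_flops, profit, cost, budget):
    chosen, spent = [], 0.0
    for ff in sorted(flip_flops,
                     key=lambda f: profit[f] / cost[f], reverse=True):
        if spent + cost[ff] <= budget:   # respect the overall scan cost bound
            chosen.append(ff)
            spent += cost[ff]
    return chosen
```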

120 citations


Proceedings ArticleDOI
11 Nov 1991
TL;DR: The authors introduce an efficient method for generating the functional forms of path analysis problems that holds promise for both static and dynamic hazard analysis and for test generation using all other delay-fault models, tau-irredundant fault models, and stuck-open fault models.
Abstract: The authors introduce an efficient method for generating the functional forms of path analysis problems. They demonstrate that the resulting function is linear in the size of the circuit. The functions are then tested for satisfiability either using a Boolean network satisfiability algorithm suggested by T. Larrabee (1989) or through the construction of BDDs. The effectiveness of the proposed approach is shown for timing analysis and robust path delay-fault test generation. This method also holds promise for both static and dynamic hazard analysis, and for test generation using all other delay-fault models, tau-irredundant fault models, and stuck-open fault models.
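A toy version of the satisfiability step: the generated function is satisfiable exactly when the path condition can be met. The sketch checks it by exhaustive enumeration, where the paper would use Larrabee-style Boolean satisfiability or BDDs.

```python
from itertools import product

def satisfiable(f, n_vars):
    return any(f(x) for x in product((0, 1), repeat=n_vars))

# Illustrative condition: both side inputs at 1, off-path input at 0.
path_condition = lambda x: x[0] and x[1] and not x[2]
print(satisfiable(path_condition, 3))   # True
```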

109 citations


Proceedings ArticleDOI
26 Oct 1991
TL;DR: A new algorithm for selecting the best flip-flops to scan for achieving maximum fault coverage in a partial-scan circuit, called PASCAL (PArtial Scan AnaLysis), ranks the flip-flops based on their contribution to the fault coverage.
Abstract: This paper presents a new algorithm for selecting the best flip-flops to scan for achieving maximum fault coverage in a partial-scan circuit. The algorithm, called PASCAL (PArtial Scan AnaLysis), ranks the flip-flops based on their contribution to the fault coverage. The results of PASCAL provide a global view of the entire partial-scan design spectrum (from no scan to full scan), and allow the designer to estimate the fault coverage achievable with any number of scanned flip-flops and to select the minimal subset of flip-flops to scan for obtaining a desired fault coverage. The number of scanned flip-flops can be reduced by taking into account faults detected by functional tests.

92 citations


ReportDOI
01 Apr 1991
TL;DR: In this paper, a method is developed to test delay-insensitive circuits, using the single stuck-at fault model, where the circuits are synthesized from a high-level specification.
Abstract: A method is developed to test delay-insensitive circuits, using the single stuck-at fault model. These circuits are synthesized from a high-level specification. Since the circuits are hazard-free by construction, there is no test for hazards in the circuit. Most faults cause the circuit to halt during test, since they cause an acknowledgement not to occur when it should. There are stuck-at faults that do not cause the circuit to halt under any condition. These are stimulating faults; they cause a premature firing of a production rule. For such a stimulating fault to be testable, the premature firing has to be propagated to a primary output. If this is not guaranteed to occur, then one or more test points have to be added to the circuit. Any stuck-at fault is testable, with the possible addition of test points. For combinational delay-insensitive circuits, finding test vectors is reduced to the same problem as for synchronous combinational logic. For sequential circuits, the synthesis method is used to find a test for each fault efficiently, to find the location of the test points, and to find a test that detects all faults in a circuit. The number of test points needed to fully test the circuit is very low, and the size of the additional testing circuitry is small. A test derived with a simple transformation of the handshaking expansion yields high fault coverage. Adding tests for the remaining faults results in a small complete test for the circuit.

91 citations


Proceedings ArticleDOI
26 Oct 1991
TL;DR: The proposed system combines a simple single fault model for test generation with a more realistic multiple defect model for diagnosis, and the associated hardware is sufficiently simple that on-board implementation is possible.
Abstract: Recently there has been renewed interest in fault detection in static CMOS circuits through current monitoring ("Iddq testing"). It is shown that accurate defect diagnosis may be performed with a combination of current and voltage observations. The proposed system combines a simple single fault model for test generation with a more realistic multiple defect model for diagnosis. The associated hardware is sufficiently simple that on-board implementation is possible.
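The pass/fail side of Iddq testing reduces to a per-vector current comparison, as in the sketch below; `apply_vector`, `measure_iddq`, and the 100 uA threshold are assumptions, and real thresholds come from process characterization.

```python
# After each vector settles, the quiescent supply current of a
# defect-free static CMOS circuit should be near zero; an elevated
# reading flags a current-observable defect.

IDDQ_THRESHOLD = 100e-6  # amperes; assumed example threshold

def iddq_failures(tests, apply_vector, measure_iddq):
    failing = []
    for v in tests:
        apply_vector(v)                 # drive inputs, wait for settling
        if measure_iddq() > IDDQ_THRESHOLD:
            failing.append(v)           # current-observable defect
    return failing
```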

85 citations


Journal ArticleDOI
TL;DR: New high-level behavior fault models are introduced that are associated with high-level hardware descriptions of digital designs and are based on the failure modes of the language constructs of the high-level hardware description language.
Abstract: A critical aspect of digital electronics is the testing of the manufactured designs for correct functionality. The testing process consists of first generating a set of test vectors, then applying them as stimuli to the manufactured designs, and finally comparing the output response with the desired response. A design is considered acceptable when the output response matches the desired response and is rejected otherwise. Fundamental to the process of test vector generation is the assumption of an underlying fault model that is a model of the failures introduced during manufacture. The choice of the fault model influences the accuracy of testing and the computer CPU time required to generate test vectors for a given design. The most popular fault model in the industry today is the single stuck-at fault at the gate level, which requires exorbitantly large CPU times for moderately complex digital designs. This article introduces new high-level behavior fault models that are associated with high-level hardware descriptions of digital designs. The derivation of these faults is based on the failure modes of the language constructs of the high-level hardware description language. Behavior faults include multiple input stuck-at faults, and this article also reasons about the nature of test vectors for such faults. The potential advantages of behavior fault modeling include early estimates of fault coverage in the design process prior to the synthesis of the gate-level representation of the design, faster fault simulation, and results that may be more comprehensible to the high-level architects. The behavior-fault-modeling approach is evaluated through a study of the correlation of the results of behavior fault simulation of several representative digital designs with the results of gate-level single stuck-at fault simulation of equivalent gate-level representations.
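To make the idea concrete, here is a toy behavior-level fault simulation in which faults are tied to a language construct (the operator of a behavioral statement) rather than to gate-level lines; the 8-bit adder model and fault list are purely illustrative.

```python
import operator

def behavioral_model(a, b, op=operator.add):
    return op(a, b) & 0xFF                 # 8-bit datapath behavior

def coverage(tests, fault_list):
    detected = sum(
        any(behavioral_model(a, b) != behavioral_model(a, b, f)
            for a, b in tests)
        for f in fault_list)
    return detected / len(fault_list)      # behavior-level fault coverage

# Illustrative faults: 'add' behaving as 'or', 'and', or 'sub'.
faults = [operator.or_, operator.and_, operator.sub]
print(coverage([(3, 5), (255, 1)], faults))   # 1.0 for this tiny example
```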

Proceedings ArticleDOI
26 Oct 1991
TL;DR: Results presented for the ISCAS'85 benchmark circuits indicate that this test pattern generator is a practical solution to a problem that must be solved in order to detect the failures that occur in modern VLSI circuits.
Abstract: Test pattern generation for bridging faults has been considered impractical. This paper presents an accurate bridging fault test pattern generator that requires only a gate-level implementation of the circuit. No transistor-level simulations are required during test pattern generation. Results presented for the ISCAS'85 benchmark circuits indicate that this test pattern generator is a practical solution to a problem that must be solved in order to detect the failures that occur in modern VLSI circuits.

Proceedings ArticleDOI
26 Oct 1991
TL;DR: A two-stage procedure for locating VLSI faults is presented, and an industrial implementation is reported in which faults were injected and diagnosed in a VLSI chip and the performance of two-stage fault location was measured.
Abstract: A two-stage procedure for locating VLSI faults is presented. The approach utilizes dynamic fault dictionaries, test set partitioning, and reduced fault lists to achieve a reduction in size and complexity over classic static fault dictionaries. An industrial implementation is reported in which faults were injected and diagnosed in a VLSI chip and the performance of two-stage fault location was measured.
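The two-stage flow can be sketched compactly: a small static dictionary over a partition of the test set narrows the candidates, and only those candidates are re-simulated against the full response. `simulate_signature` is a hypothetical fault-simulator hook.

```python
def diagnose(observed, stage1_dict, stage1_tests, all_tests, simulate_signature):
    # Stage 1: match the observed pass/fail pattern on a test subset
    # against a small static dictionary to get candidate faults.
    key = tuple(observed[t] for t in stage1_tests)
    candidates = stage1_dict.get(key, [])
    # Stage 2 (dynamic): build full signatures only for the candidates.
    full = tuple(observed[t] for t in all_tests)
    return [f for f in candidates if simulate_signature(f, all_tests) == full]
```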

Journal ArticleDOI
TL;DR: The authors compared two major approaches to the improvement of software, fault elimination and fault tolerance, by examining the fault detection (and tolerance) of five techniques: run-time assertions, multiversion voting, functional testing augmented by structural testing, code reading by stepwise abstraction, and static data-flow analysis.
Abstract: The authors compared two major approaches to the improvement of software, fault elimination and fault tolerance, by examining the fault detection (and tolerance, where applicable) of five techniques: run-time assertions, multiversion voting, functional testing augmented by structural testing, code reading by stepwise abstraction, and static data-flow analysis. The focus was on characterizing the sets of faults detected by the techniques and on characterizing the relationships between these sets of faults. Two categories of questions were investigated: (1) comparisons between fault elimination and fault tolerance techniques and (2) comparisons among various testing techniques. The results provide information useful for making decisions about the allocation of project resources, show strengths and weaknesses of the techniques studied, and indicate directions for future research.

Proceedings ArticleDOI
26 Oct 1991
TL;DR: It is shown that for a class of circuits with a high fault compatibility well-known test set compaction methods do not effectively minimize the test set, and an algorithm based on finding a maximal clique in a graph to estimate the size of a minimum test set is presented.
Abstract: Generating minimal test sets for combinational circuits is an NP-hard problem. In this paper it will be shown that for a class of circuits with a high fault compatibility, well-known test set compaction methods such as dynamic compaction and reverse order fault simulation do not effectively minimize the test set. Furthermore it will be shown for a number of benchmark circuits that it is possible to generate test sets that are significantly smaller than test sets generated by conventional test set compaction methods. This paper will also present an algorithm based on finding a maximal clique in a graph to estimate the size of a minimum test set.
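The lower-bound idea: call two faults incompatible when no single vector detects both; any set of pairwise-incompatible faults needs one vector each, so the size of a clique in the incompatibility graph bounds the minimum test set from below. A greedy sketch (the paper's construction may differ):

```python
def test_set_lower_bound(faults, incompatible):
    clique = []
    for f in faults:
        # Extend the clique only with faults incompatible with all members.
        if all(incompatible(f, g) for g in clique):
            clique.append(f)
    return len(clique)   # every clique member needs its own test vector
```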


Journal ArticleDOI
TL;DR: A method for the derivation of fault signatures for the detection of faults in single-output combinational networks is described, which uses the arithmetic spectrum instead of the Rademacher-Walsh spectrum as a form of data compression to reduce the volume of response data at test time.
Abstract: A method for the derivation of fault signatures for the detection of faults in single-output combinational networks is described. The approach uses the arithmetic spectrum instead of the Rademacher-Walsh spectrum. It is a form of data compression that serves to reduce the volume of the response data at test time. The price which is paid for the reduction in the storage requirements is that some of the knowledge of exact fault location is lost. The derived signatures are short and easily tested using very simple test equipment. The test circuitry could be included on the chip since the overhead involved is comparatively small. The test procedure requires a high-speed counter cycling at maximum speed through selected subsets of all input combinations. Hence, the network under test is exercised at speed, and a number of dynamic errors that are not testable by means of conventional test-set approaches will be detected.
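The compression itself is just arithmetic accumulation of the single output over counter-generated inputs; a sketch of zeroth- and first-order arithmetic coefficients (the exact coefficient subset used for signatures is a detail of the method):

```python
def arithmetic_signatures(circuit, n_inputs):
    sigs = [0] * (n_inputs + 1)
    for x in range(2 ** n_inputs):          # high-speed counter sweep
        y = circuit(x)                       # single-output network, y in {0,1}
        sigs[0] += y                         # zeroth-order coefficient
        for i in range(n_inputs):
            if (x >> i) & 1:
                sigs[i + 1] += y             # sum over half-space x_i = 1
    return sigs                              # compare against stored values
```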

Proceedings ArticleDOI
11 Jun 1991
TL;DR: A combination of techniques for efficiently inserting test points is proposed, which leads to the lowest reported number of test points while significantly reducing the number of random patterns which are required to achieve very close to 100% fault coverage.
Abstract: A combination of techniques for efficiently inserting test points is proposed. These techniques refer to three complementary abstraction levels: algorithms, circuits, and layout. The authors deal with pseudo-random testing and present a method of condensing test points based on the notion of fault sector. Based on a set of proposed heuristics, a tool for automatically inserting test points was developed. Experimental results obtained with the tool are presented to indicate that excellent pseudo-random testability can be achieved with few test points. This technique leads to the lowest reported number of test points while significantly reducing the number of random patterns which are required to achieve very close to 100% fault coverage.
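Stripped of the fault-sector machinery, detectability-driven test-point selection follows a simple pattern, sketched below with `detect_prob` standing in for a COP-style estimate of random-pattern detection probability; real tools re-estimate after each insertion.

```python
def pick_test_points(nets, detect_prob, budget, target=0.001):
    # Take up to `budget` nets whose estimated random-pattern
    # detectability falls below the target, worst first.
    candidates = sorted(nets, key=detect_prob)
    return [n for n in candidates if detect_prob(n) < target][:budget]
```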

Journal ArticleDOI
TL;DR: A load balancing method which uses static partitioning initially and then uses dynamic allocation of work for processors which become idle is proposed to partition faults for parallel test generation with minimization of both the overall run time and test length.
Abstract: Heuristics are proposed to partition faults for parallel test generation with minimization of both the overall run time and test length as an objective. For efficient utilization of available processors, the work load has to be balanced at all times. Since it is very difficult to predict how difficult it will be to generate a test for a particular fault, the authors propose a load balancing method which uses static partitioning initially and then uses dynamic allocation of work for processors which become idle. A theoretical model is presented to predict the performance of the parallel test generation/fault simulation process. Experimental results based on an implementation on the Intel iPSC/2 hypercube multiprocessor using the ISCAS combinational benchmark circuits are presented.
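The hybrid policy is easy to prototype: a static round-robin split first, then a shared queue for idle processors. In the sketch below threads stand in for processors and `atpg` is a hypothetical per-fault test generation call of unpredictable cost.

```python
from queue import Queue, Empty
from threading import Thread

def parallel_atpg(faults, n_workers, atpg, static_fraction=0.5):
    split = int(len(faults) * static_fraction)
    shared = Queue()
    for f in faults[split:]:
        shared.put(f)                            # dynamically allocated pool
    results = []

    def worker(rank):
        for f in faults[rank:split:n_workers]:   # static share first
            results.append(atpg(f))
        while True:                              # then pull leftover work
            try:
                results.append(atpg(shared.get_nowait()))
            except Empty:
                return

    workers = [Thread(target=worker, args=(r,)) for r in range(n_workers)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return results
```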

Proceedings ArticleDOI
25 Jun 1991
TL;DR: The test generation problem for synchronous sequential circuits is considered in the case where hardware reset is not available, and the use of multiple fault-free responses as well as multiple time units for fault detection is suggested.
Abstract: The test generation problem for synchronous sequential circuits is considered in the case where hardware reset is not available. The observations which form the motivation for the work are given. On the basis of the observations, the use of multiple fault-free responses as well as multiple time units for fault detection is suggested. Application to gate-level synchronous sequential circuits is then considered. Experimental results are given to support the claim that a small number of observation times is required, and that a small number of fault-free responses need be stored for every fault. 100% fault efficiency is achieved.

Proceedings ArticleDOI
26 Oct 1991
TL;DR: Two fault injection techniques for experimental validation of fault handling mechanisms in computer systems are investigated and compared and it is shown that both methods generate many control flow errors, while pure data errors are infrequent.
Abstract: Two fault injection techniques for experimental validation of fault handling mechanisms in computer systems are investigated and compared. One technique is based on irradiation of ICs with heavy-ion radiation from a 252Cf source. The other technique uses voltage sags injected in the power supply rails to ICs. Both techniques have been used for fault injection experiments with the MC6809E microprocessor. Most errors generated by the 252Cf method were seen first in the address bus, while the power supply disturbances most frequently affected the control signals. An error classification shows that both methods generate many control flow errors, while pure data errors are infrequent. Results from a simulation experiment show that the low number of data errors in the 252Cf experiments can be explained by the fact that many errors in data registers are overwritten during normal program execution.

Proceedings Article
26 Oct 1991
TL;DR: Experimental results show that cube-contained random patterns can achieve 100% fault coverage of synthesized circuits using orders of magnitude fewer patterns than when equiprobable random patterns are used.
Abstract: The novel concept of cube-contained random patterns represents an alternative to weighted random pattern testing. Reductions in random pattern test lengths are achieved by the successive assignment of temporarily fixed values to selected inputs during the random pattern generation process. Experimental results show that cube-contained random patterns can achieve 100% fault coverage of synthesized circuits using orders of magnitude fewer patterns than when equiprobable random patterns are used.
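Generating such patterns is straightforward once a cube is chosen; the sketch below fixes the inputs listed in `cube` and draws the rest equiprobably. How the cubes themselves are selected and scheduled is the substance of the paper and is not shown.

```python
import random

def cube_contained_patterns(n_inputs, cube, count):
    for _ in range(count):
        bits = [random.randint(0, 1) for _ in range(n_inputs)]
        for position, value in cube.items():   # temporarily fixed inputs
            bits[position] = value
        yield bits

# Example: fix inputs 0 and 5 to 1 for a burst of 1000 patterns.
patterns = list(cube_contained_patterns(16, {0: 1, 5: 1}, 1000))
```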

Proceedings ArticleDOI
01 Jun 1991
TL;DR: The single transition fault model is augmented by carefully selected multiple transition faults which potentially increase the coverage of single stuck-at faults, and experimental results for stuck-at faults are presented.
Abstract: A complete method is presented for generating tests for sequential machines. The transition fault model is employed, and the machine is assumed to be described by a state table. The test generation algorithm described is polynomial in the size of the state table, and is complete and accurate in the following sense. For every given transition fault, the algorithm provides either a test, or a proof that the fault is undetectable. The relationship between transition faults and stuck-at faults is investigated. The single transition fault model is augmented by carefully selected multiple transition faults which potentially increase the coverage of single stuck-at faults. A method to achieve 100% fault efficiency for stuck-at faults is then proposed, and experimental results for stuck-at faults are presented.

Patent
09 Apr 1991
TL;DR: In this article, a system for testing circuits of digital data telecommunications networks provides selected physical and protocol testing on an integrated basis, and test methods provide for analysis of test results to provide diagnosis of probable cause of actual or apparent faults related to data transmission.
Abstract: A system for testing circuits of digital data telecommunications networks provides selected physical and protocol testing on an integrated basis. Systems and test methods provide for analysis of test results to provide diagnosis of probable cause of actual or apparent faults related to data transmission and may also provide automatic implementation of additional diagnosis, followed by a second level of fault diagnosis using the additional test results. Display screens provide the results of fault analysis, provide comparative viewing of fault-free benchmark data and provide suggestions as to probable cause of faults. A central test unit portion of the system may be installed at a carrier's offices and testing may be controlled on a dial-up basis over telephone lines from a remote customer location using a personal computer as the test selection entry and information screen viewing device.

15 Oct 1991
TL;DR: A new test selection method is presented which applies in the case of nondeterministic specifications and implementations and can be adapted to (subsets of) languages based on labelled transition systems.
Abstract: The selection of appropriate test cases is an important issue in the development of communication protocols. Various test case selection methods have been developed for the case that the protocol specification is given in the form of a deterministic finite state machine (FSM). This paper presents a new method which applies in the case of nondeterministic specifications and implementations. The testing process is more complex if the specification, or even the implementation, is nondeterministic. Nevertheless, under appropriate assumptions, the described test case selection method leads to a finite set of finite test cases for a given specification which guarantees that any deviation of the implementation from the specification will be detected. The paper presents the new test selection method in a framework for testing nondeterministic systems and demonstrates its use with small examples.

Journal ArticleDOI
TL;DR: A built-off test strategy is presented which moves the additional hardware to a programmable extra chip which contains a hardware structure that can produce weighted random patterns corresponding to multiple programmable distributions.
Abstract: In self-testable circuits, additional hardware is incorporated for generating test patterns and evaluating test responses. A built-off test strategy is presented which moves the additional hardware to a programmable extra chip. This is a low-cost test strategy in three ways: (1) the use of random patterns eliminates the expensive test-pattern computation; (2) a microcomputer and an ASIC (application-specific IC) replace the expensive automatic test equipment; and (3) the design for testability overhead is minimized. The presented ASIC generates random patterns, applies them to a circuit under test, and evaluates the test responses by signature analysis. It contains a hardware structure that can produce weighted random patterns corresponding to multiple programmable distributions. These patterns give a high fault coverage and allow short test lengths. A wide range of circuits can be tested as the only requirement is a scan path and no other test structures have to be built in.
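The response-evaluation half of such a chip is typically a signature register; below is a sketch of a 16-bit serial LFSR signature, with an assumed CRC-16-CCITT-style polynomial rather than the polynomial actually used in the ASIC.

```python
POLY = 0x1021  # x^16 + x^12 + x^5 + 1 (assumed example taps)

def signature(bits, poly=POLY, width=16):
    reg = 0
    for b in bits:                     # fold the response stream bit by bit
        msb = (reg >> (width - 1)) & 1
        reg = ((reg << 1) | b) & ((1 << width) - 1)
        if msb:
            reg ^= poly
    return reg

# Pass/fail: signature(observed_stream) == signature(golden_stream)
```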

Proceedings ArticleDOI
25 Jun 1991
TL;DR: It is shown that fault-tolerant protocol testing by deterministic fault injection achieves better coverage than random fault injection.
Abstract: A deterministic test strategy consisting of deterministic fault injection at the message level is investigated. Messages sent by faulty units are replaced by wrong messages chosen to cause all program parts of the faultless protocol units to be executed. Since this well-aimed fault injection poses complex problems, a heuristic based on the program flow of previous injections of wrong messages is dynamically applied. The program parts to be tested are selected with increasing granularity until either a design error is found or sufficient structural coverage is reached, which reflects the portion of tested program parts. Using a simplified program model, an algebraic analysis is carried out of the structural coverage and of the design error coverage, which is the probability of revealing an existing design error. It is shown that fault-tolerant protocol testing by deterministic fault injection achieves better coverage than random fault injection.

Proceedings ArticleDOI
11 Nov 1991
TL;DR: Methods are investigated for reducing events in sequential circuit fault simulation by reducing the number of faults simulated for each test vector, and the Star-algorithm is extended to handle sequential circuits, providing global information about inactive faults based on the fault-free circuit state.
Abstract: Methods are investigated for reducing events in sequential circuit fault simulation by reducing the number of faults simulated for each test vector. Inactive faults, which are guaranteed to have no effect on the output or the next state, are identified using local information from the fault-free circuit in one technique. In a second technique, the Star-algorithm is extended to handle sequential circuits and provides global information about inactive faults, based on the fault-free circuit state. Both techniques are integrated into the PROOFS synchronous sequential circuit fault simulator. An average 28% reduction in faulty circuit gate evaluations is obtained for the 19 ISCAS-89 benchmark circuits studied using the first technique, and 33% reduction for the two techniques combined. Execution times decrease by an average of 17% when the first technique is used. For the largest circuits, further improvements in execution time are made when the Star-algorithm is included.

Proceedings Article
01 Jan 1991
TL;DR: In this article, a new fault model that takes into account complex couplings resulting from simultaneous access of memory cells is used in order to ensure a very high fault coverage; the resulting test algorithm achieves O(n) complexity thanks to the use of some topological restrictions.
Abstract: In this paper we present a new approach to the test of multi-port RAMs. A new fault model that takes into account complex couplings resulting from simultaneous access of memory cells is used in order to ensure a very high fault coverage. A new algorithm for the test of dual-port memories is detailed here. This algorithm achieves O(n) complexity thanks to the use of some topological restrictions. We also present a new BIST scheme, based on programmable schematic generators, that allows great flexibility for ASIC design.
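For scale, a classic O(n) march for a single-port RAM model (MATS+) is sketched below; the paper's contribution is extending this style of linear-time march to couplings excited by simultaneous accesses through multiple ports, which a single-port march cannot exercise.

```python
def mats_plus(read, write, n):
    """MATS+ march: up(w0); up(r0, w1); down(r1, w0). Returns failing addresses."""
    errors = []
    for a in range(n):                 # up: write 0 everywhere
        write(a, 0)
    for a in range(n):                 # up: read 0, then write 1
        if read(a) != 0:
            errors.append(a)
        write(a, 1)
    for a in reversed(range(n)):       # down: read 1, then write 0
        if read(a) != 1:
            errors.append(a)
        write(a, 0)
    return errors
```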

Journal ArticleDOI
TL;DR: A preliminary fault classification is proposed, which uncovers the types of realistic faults in MOS digital ICs that are hard to detect, paving the way to derive layout rules for hard-fault avoidance.
Abstract: In order to make possible the production of cost-effective electronic systems, integrated circuits (ICs) need to be designed for testability. The purpose of this article is to present a methodology for testability enhancement at the lower levels of the design (i.e., at circuit and layout levels). The proposed strategy uses both hardware refinement and software improvement. The main areas of low-cost software improvement are test generation based on a logic description closely related to the physical design, test-vector sequencing, and the introduction of circuit knowledge in fault simulation. The strategy for hardware improvement is based on realistic fault list generation, fault classification (according to fault impact on circuit behavior), and layout-level DFT (design for testability) rules derivation. A preliminary fault classification is proposed, which uncovers the types of realistic faults in MOS digital ICs that are hard to detect, paving the way to derive layout rules for hard-fault avoidance. Simulation examples are presented ascertaining that specific subsets of line-open and bridging faults (according to their topological characteristics) are hard to detect by logic testing using test patterns derived for line stuck-at fault detection.