
Showing papers on "Design for testing published in 1985"


Book
Hideo Fujiwara1
31 Jul 1985
TL;DR: A textbook covering logic testing (test generation, fault simulation, and the complexity of testing) and design for testability (scan design, compact testing, built-in testing, and testable system design).

Abstract: This book treats the two central problems of digital testing. The first half addresses logic testing: test generation, fault simulation, and the complexity of testing. The second half addresses design for testability: design techniques that reduce test generation and test application cost, scan design for sequential logic circuits, compact testing, built-in testing, and design techniques for testable systems.

367 citations


Journal ArticleDOI
TL;DR: This article describes efforts to build a knowledge-based expert system for designing testable VLSI chips and introduces a framework for a methodology incorporating structural, behavioral, qualitative, and quantitative aspects of known DFT techniques.
Abstract: The complexity of VLSI circuits has increased the need for design for testability (DFT). Numerous techniques for designing more easily tested circuits have evolved over the years, with particular emphasis on built-in testing approaches. What has not evolved is a design methodology for evaluating and making choices among the numerous existing approaches. This article describes efforts to build a knowledge-based expert system for designing testable VLSI chips. A framework for a methodology incorporating structural, behavioral, qualitative, and quantitative aspects of known DFT techniques is introduced. This methodology provides a designer with a systematic DFT synthesis approach. The process of partitioning a design into subcircuits for individual processing is described, and a new concept, the I-path, is used to transfer data from one place in the circuit to another. Rules for applying testable design methodologies to circuit partitions and for evaluating the various solutions obtained are also presented. Finally, a case study using a prototype system is described.

238 citations


Book
01 Jan 1985
TL;DR: Design for testability techniques offer one approach toward alleviating this situation by adding enough extra circuitry to a circuit or chip to reduce the complexity of testing.
Abstract: Today's computers must perform with increasing reliability, which in turn depends on the problem of determining whether a circuit has been manufactured properly or behaves correctly. However, the greater circuit density of VLSI circuits and systems has made testing more difficult and costly. This book notes that one solution is to develop faster and more efficient algorithms to generate test patterns or use design techniques to enhance testability - that is, "design for testability." Design for testability techniques offer one approach toward alleviating this situation by adding enough extra circuitry to a circuit or chip to reduce the complexity of testing. Because the cost of hardware is decreasing as the cost of testing rises, there is now a growing interest in these techniques for VLSI circuits.

The first half of the book focuses on the problem of testing: test generation, fault simulation, and complexity of testing. The second half takes up the problem of design for testability: design techniques to minimize test application and/or test generation cost, scan design for sequential logic circuits, compact testing, built-in testing, and various design techniques for testable systems.

Hideo Fujiwara is an associate professor in the Department of Electronics and Communication, Meiji University. Logic Testing and Design for Testability is included in the Computer Systems Series, edited by Herb Schwetman.
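Scan design, one of the structured DFT techniques the book covers, can be illustrated with a minimal sketch (my own illustration, not taken from the book): in scan mode the flip-flops form a shift register, so arbitrary internal state can be loaded (controllability) and read back (observability).

```python
# Minimal scan-chain sketch (illustrative only, not the book's notation).

class ScanChain:
    def __init__(self, length):
        self.ffs = [0] * length  # flip-flop contents

    def shift_in(self, bits):
        """Scan mode: shift a test state in, returning the bits shifted out."""
        out = []
        for b in bits:
            out.append(self.ffs[-1])
            self.ffs = [b] + self.ffs[:-1]
        return out

    def capture(self, logic):
        """Functional mode: load the response of the logic under test."""
        self.ffs = logic(self.ffs)

chain = ScanChain(4)
chain.shift_in([1, 0, 1, 1])                      # set internal state directly
chain.capture(lambda state: [b ^ 1 for b in state])  # toy "logic": invert bits
response = chain.shift_in([0, 0, 0, 0])           # unload the captured response
print(response)
```

With scan, test generation reduces to the combinational problem between the flip-flops, which is the main economic argument the book develops.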

127 citations


Journal ArticleDOI
TL;DR: A theory of the algorithm design process based on observations of human design is described and a framework for automatic design is outlined; the adaptation helps us understand human design better, and the implementation process helps validate the framework.
Abstract: Algorithm design is a challenging intellectual activity that provides a rich source of observation and a test domain for a theory of problem-solving behavior. This paper describes a theory of the algorithm design process based on observations of human design and also outlines a framework for automatic design. The adaptation of the theory of human design to a framework for automation in the DESIGNER system helps us understand human design better, and the implementation process helps validate the framework. Issues discussed in this paper include the problem spaces used for design, the loci of knowledge and problem-solving power, and the relationship to other methods of algorithm design and to automatic programming as a whole.

90 citations


Journal Article
Maruyama1
TL;DR: In this paper, the authors present two implementations of exclusive-OR, which can be successfully compared with the specification "exclusive-OR" by reductio ad absurdum.
Abstract: Verification need not be costly or time-consuming. With the powerful features of Prolog and the use of temporal logic, verification can be cut to several minutes on a mainframe. As more gates are being squeezed into single LSI chips, the accuracy of design is becoming increasingly significant. A chip design error may result in the repetition of a costly manufacturing process to make a new chip. To avoid such expenses, reliable methodologies must be developed to check the total design process. With a complete hardware synthesis system, we would not have to worry about checking designs, yet not one system is available for practical application. Simulation is the most widely used technique for checking hardware designs. In the early stages of design, simulation enables the designer to find and fix errors. In the final stages, however, simulation is not as effective, and some errors can remain hidden. The most serious problem is that simulation does not definitely ensure the conformance of design to specifications. This handicap is the reason we need formal verification. In formal verification, logic is used to precisely describe a logic circuit. Once specifications are described in logic, a theorem prover does the rest of the work. The first step is to compare a combinational circuit with its specifications, a process that is easily translated to a logical expression. Figure 1 shows two implementations of exclusive OR, which can be successfully compared with the specification, "exclusive-OR." T. J. Wagner took a further step, reporting on hardware verification by the FOL proof-checker developed at Stanford University.1 His proof of an eight-bit multiplier with 260 steps is excellent, but the designer must still construct a verification with the proof checker, in the same way a proof is done in mathematics.
Our goal is automated verification, which up to now has been limited to several special circuits (adder, shifter).2 The idea of automated verification leads us to automated proof by reductio ad absurdum. Suppose a certain condition is represented by proposition P. If we want to verify that this condition always holds for the design, we must prove that no counterexample ever occurs; that is,

¬P = false.   (1)

If we can infer condition (1) directly through analysis of logical formulas, verification is successful. If we cannot, another technique is necessary. Tracing causality is a key concept in the DDL Verifier.3-6 Starting from the negation of a proposition, …
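Over a finite input domain, the refutation strategy the article describes amounts to assuming a counterexample exists and exhausting all cases. A minimal Python analogue (the gate structures are my own illustration, not the article's Figure 1, and this stands in for its Prolog machinery):

```python
from itertools import product

# Two hypothetical implementations of exclusive-OR (my own examples).
def xor_impl_a(a, b):                   # AND/OR/NOT form
    return (a and not b) or (not a and b)

def xor_impl_b(a, b):                   # NAND-only form
    nand = lambda x, y: not (x and y)
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

# Reductio ad absurdum over a finite domain: assume a counterexample
# exists, then refute the assumption by exhausting every input case.
def counterexample(f, g, n_inputs=2):
    for vec in product([False, True], repeat=n_inputs):
        if f(*vec) != g(*vec):
            return vec                  # the assumed counterexample exists
    return None                         # no counterexample: designs conform

print(counterexample(xor_impl_a, xor_impl_b))  # None
```

For sequential circuits and temporal properties the search space is no longer a simple product of input values, which is where the DDL Verifier's causality tracing comes in.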

65 citations


Journal ArticleDOI
TL;DR: It is argued that structured design techniques offer the most promising prospects for solving the design and test problems resulting from this increase in complexity in the Motorola microprocessor family.
Abstract: The first built-in self-test feature in a Motorola microprocessor was considered a "wart" until a RAM test application recast it as a "feature." Though the BIST approach, an idea conceived as a way to reduce production costs for the MC6805 family, did not meet its major design objective, the experience provided impetus for the development of BIST techniques for the MC6804P2, which met most of the objectives intended for the MC6805P2. The Motorola microprocessor family has come to incorporate a growing number of testability features; current devices typically employ a combination of BIST and other techniques. If present trends continue, transistor counts for microprocessor-related parts should approach 10 million within 10 years. The authors argue that structured design techniques offer the most promising prospects for solving the design and test problems resulting from this increase in complexity.

57 citations


Journal ArticleDOI
TL;DR: This paper introduces the global r-modification problem, which deals with making r (integer) transformations to a circuit in order to improve its testability, and presents a technique for the automatic design for testability of digital circuits based upon the analysis of controllability and observability measures.
Abstract: In this paper we present a technique for the automatic design for testability of digital circuits based upon the analysis of controllability and observability measures. The new concept of sensitivity is introduced, which is a measure for the degree to which the testability of a circuit improves as increased controllability and observability is achieved over a set of nodes in a circuit. In order to improve the testability of a circuit, three simple transformations are used, namely, the addition of a new primary input and possibly an AND (OR) gate so that a logic 0 (1) can be injected into the interior of the circuit, and test points so that internal signal values can be observed. We then introduce the global r-modification problem, which deals with making r (integer) transformations to a circuit in order to improve its testability. This resynthesis problem has been formulated as a mixed integer linear programming problem. A program called Testability Improvement Program (TIP) has been developed for implementing this approach, and experimental results are presented. The work presented is applicable to problems of test generation, the design of fixtures for ATE, and determining the location of test pads on integrated circuit chips when employing electron beam testing.
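The injection transformation described above can be sketched as follows. The controllability measure here is a crude one of my own (fraction of input patterns driving a node to 1), not TIP's formulation, but it shows why ORing a hard-to-set node with a new primary input helps:

```python
from itertools import product

# Toy 1-controllability: fraction of input patterns that drive a node to 1.
def one_controllability(node_fn, n_inputs):
    pats = list(product([0, 1], repeat=n_inputs))
    return sum(node_fn(*p) for p in pats) / len(pats)

# Hard-to-control node: a wide AND is 1 only when all inputs are 1.
deep_and = lambda a, b, c, d: a & b & c & d
print(one_controllability(deep_and, 4))            # 0.0625

# Transformation: OR the node with a new primary input t, so a logic 1
# can be injected directly (holding t = 0 preserves normal operation).
modified = lambda a, b, c, d, t: (a & b & c & d) | t
print(one_controllability(modified, 5))            # 0.53125
```

Choosing which r nodes get such transformations, subject to cost, is what the paper casts as a mixed integer linear program.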

48 citations


Proceedings ArticleDOI
01 Jun 1985
TL;DR: A prototype knowledge based system has been developed which simulates a human expert on design of testable PLAs and is able to negotiate with the user so as to lead the user through the design space to find a satisfactory solution.
Abstract: Testability is a very important aspect of VLSI circuits. Numerous design for testability (DFT) methods exist. Often designers face the complex problem of selecting the best DFT techniques for a particular chip under a set of design constraints and goals. In order to aid in designing testable circuits, a prototype knowledge based system has been developed which simulates a human expert on design of testable PLAs. The system, described in this paper, has knowledge about testable PLA design methodologies and is able to negotiate with the user so as to lead the user through the design space to find a satisfactory solution. A new search strategy, called reason analysis, is introduced.

24 citations



Journal ArticleDOI
TL;DR: Since the problem of test generation is NP-hard, a set of heuristics is introduced to keep the amount of computation reasonable; several implementation issues are finally investigated.
Abstract: The increasing complexity of VLSI systems demands structured approaches to reduce both design time and test generation effort. PLA's and scan paths have both been widely reported to be efficient in this sense. This correspondence presents an easily testable structure and its related testing strategies. The circuits are assumed to be based on the interconnection of combinatorial macros, mostly implemented by PLA's; tests are generated locally, considering the involved macro as an isolated item, and then are expressed in terms of primary inputs and outputs using a topological approach as general strategy and algebraic techniques for the propagation of signals through macros. Propagation is dealt with by new algorithms. Since the problem of test generation is NP-hard, a set of heuristics is introduced to keep the amount of computation reasonable; several implementation issues are finally investigated.

21 citations


Journal ArticleDOI
A. Jesse Wilkinson1
TL;DR: MIND is an expert system for VLSI test system diagnosis that integrates the principles of test systems' hierarchical design and experts' heuristics to achieve a practical approach to reducing test system downtime.
Abstract: Because of its intended purpose, VLSI test system hardware is very complicated to diagnose when faults occur. When this problem is considered in the hardware design phase, it is apparent that VLSI test systems need to have built-in self-test features. For a self-test to be of any value, the circuit check program should minimize the hardware involved in each test. MIND is an expert system for VLSI test system diagnosis that integrates the principles of test systems' hierarchical design and experts' heuristics to achieve a practical approach to reducing test system downtime.

Journal ArticleDOI
K.A.E. Totton1
01 Mar 1985
TL;DR: The paper presents a review of current and proposed test methodologies for semicustom gate arrays, and predicts the future direction of test strategy development in the context of increasing integration density and the convergence of 'semicustom' and 'full-custom' design styles.
Abstract: The paper presents a review of current and proposed test methodologies for semicustom gate arrays. The necessity of high quality testing is emphasised by considering some of the hazards and penalties associated with poor testability. The usefulness and limitations of testability analysis programs are then considered. A built-in test is introduced as an attractive alternative to conventional approaches based on automatic test pattern generation for highly structured circuits. This test technique is shown to offer significant benefits, including reduced test data volume, improved test quality, and easier maintenance testing. The advantages and disadvantages of three built-in test implementations for gate arrays are discussed. First, an architecture which combines an ad hoc design for testability with a comprehensive on-chip maintenance system is reviewed. This is followed by a presentation of an LSSD-based pseudorandom self-test and the associated test problems. Finally, an exhaustive test based on a similar architecture achieves a high quality test with guaranteed fault coverage. In conclusion, the future direction of test strategy development is predicted, in the context of increasing integration density and the convergence of 'semicustom' and 'full-custom' design styles.
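The pseudorandom self-test reviewed above rests on two registers: an LFSR supplying patterns and a signature register compacting responses. A rough sketch (the polynomials, widths, and circuit under test are arbitrary choices of mine, not from the paper):

```python
# Sketch of pseudorandom built-in self-test (illustrative only): an LFSR
# supplies patterns; a single-input signature register compacts responses
# into a value compared against a known-good signature.

def lfsr_stream(seed, taps, width, count):
    """Fibonacci LFSR: yields `count` pseudorandom `width`-bit states."""
    state = seed
    for _ in range(count):
        yield state
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)

def signature(responses, width=4, taps=(3, 0)):
    """Compact a bit stream into a signature register value."""
    sig = 0
    for bit in responses:
        fb = bit
        for t in taps:
            fb ^= (sig >> t) & 1
        sig = ((sig << 1) | fb) & ((1 << width) - 1)
    return sig

circuit = lambda v: (v ^ (v >> 1)) & 1       # toy circuit under test
patterns = list(lfsr_stream(seed=1, taps=(3, 0), width=4, count=15))
good = signature(circuit(p) for p in patterns)

# A fault changes the response stream, hence (usually) the signature.
faulty = signature(1 for p in patterns)      # output stuck-at-1
print(good != faulty)                        # True
```

The usual caveat applies: response compaction can alias, so a small fraction of faulty streams map to the good signature, which is one source of the "associated test problems" the paper discusses.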

Journal ArticleDOI
01 Jun 1985
TL;DR: Software simulations of faults in simple NMOS logic circuits are described, showing that not all fault effects in NMOS circuits are modellable as 'stuck' nodes; an improved fault model that would better reflect MOS fault effects has yet to be defined.
Abstract: VLSI circuits currently being designed are so complex that it is now extremely difficult to test them adequately to determine whether or not they have been processed correctly. Design for testability (DFT) techniques are often used in an attempt to ease this problem by identifying and redesigning potentially ‘difficult-to-test’ parts of the circuits. The ‘testability’ of the circuit is usually evaluated in terms of the stuck-at fault model. However, there have been growing doubts over the ability of this model to cover certain common faults that can occur in MOS processing (at present, the dominant VLSI technology). The paper describes software simulations of faults in simple NMOS logic circuits showing that not all fault effects in NMOS circuits are modellable as ‘stuck’ nodes. An improved fault model which would better reflect MOS fault effects has yet to be defined. Until such an improved model is available, DFT rules for MOS circuits are best regarded as provisional. We therefore conclude with a discussion of ad hoc ‘physical design for testability’ techniques that exploit current understanding of the relation between MOS faults and their fault effects.
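For contrast with the MOS-level defects the paper simulates, the baseline stuck-at approach it questions can be sketched as follows (the netlist and fault list are my own toy example; real tools work at far larger scale):

```python
from itertools import product

# Stuck-at fault simulation on a tiny gate-level netlist: each fault
# fixes one gate output at 0 or 1, and a fault is "detected" by a
# pattern whose faulty response differs from the fault-free one.

GATES = [                     # (output, function, inputs)
    ("n1", "AND", ("a", "b")),
    ("n2", "OR",  ("n1", "c")),
    ("y",  "NOT", ("n2",)),
]

def evaluate(inputs, fault=None):
    vals = dict(inputs)
    for out, op, ins in GATES:
        v = {"AND": lambda x: all(x),
             "OR":  lambda x: any(x),
             "NOT": lambda x: not x[0]}[op]([vals[i] for i in ins])
        if fault and fault[0] == out:
            v = fault[1]      # override: node stuck at the fault value
        vals[out] = int(v)
    return vals["y"]

faults = [(n, v) for n, _, _ in GATES for v in (0, 1)]
patterns = list(product([0, 1], repeat=3))
detected = {f for f in faults for p in patterns
            if evaluate(dict(zip("abc", p))) != evaluate(dict(zip("abc", p)), f)}
print(f"{len(detected)}/{len(faults)} stuck-at faults detected")
```

The paper's point is precisely that some MOS defects (e.g. ones that turn a gate into a sequential element) fall outside this node-stuck framework, so 100 percent stuck-at coverage can still miss real faults.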

Proceedings ArticleDOI
01 Jun 1985
TL;DR: The proposed rule-based modular design for testability methodology utilizes both BIST and scan path techniques for full custom VLSI designs and problems involved with integrating the approach with an emerging silicon compilation system are discussed.
Abstract: This paper discusses design for testability automation within a silicon compiler environment under development at GTE Laboratories Inc. The proposed rule-based modular design for testability methodology utilizes both BIST and scan path techniques for full custom VLSI designs. An on-chip test controller may be used. Testability evaluation is performed using both controllability/observability and information theoretic methods. A testability "expert" is required which can manage the analysis as it evolves during the synthesis process and which can make the ultimate testability decisions. Problems involved with integrating the approach with an emerging silicon compilation system are discussed.

Journal ArticleDOI
TL;DR: A new approach to the self-testing and testability analysis of the types of logic structure encountered in the data flow paths of computers, which allows a hybrid test technique to be adopted, based on both random and pseudoexhaustive test styles, and gives fault coverage figures in excess of 99.5 percent.
Abstract: The article describes a new approach to the self-testing and testability analysis of the types of logic structure encountered in the data flow paths of computers. The main purpose of the new methodology is to avoid the costs associated with manual or automatic test pattern generation. Instead of relying upon an automated process of scanning through a gatelevel description of the logic, this is an analytical approach applied to a block-level functional description of the logic structure. This approach allows a hybrid test technique to be adopted, based on both random and pseudoexhaustive test styles, and gives fault coverage figures in excess of 99.5 percent. The methodology is suitable only for highly structured and well-partitioned logic designs.

Proceedings Article
01 Jan 1985
Abstract: This paper describes the integration of a new tool for testability measurement and improvement into a design system for integrated circuits. The design system involved, CADDY (Carlsruhe Digital Design System), uses a functional description of a circuit written in a PASCAL-like language and synthesizes a list of nets and real logical components. In the resulting structure, all storage elements are automatically configured as a scan path, so testability analysis and test generation may be restricted to purely combinational networks. This is done by the software tool PROTEST (Probabilistic Testability Analysis). PROTEST determines the testability of a combinational circuit under random patterns, computes the test length necessary to reach a given fault coverage with a given confidence, and proposes modifications of the random pattern sets which lead to shorter test lengths.
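The test-length computation can be approximated with the standard escape-probability bound (a common textbook derivation; PROTEST's exact method may differ): a fault detected by a random pattern with probability p escapes N independent patterns with probability (1 - p)**N, and requiring that this not exceed 1 - c gives the bound below.

```python
import math

# Random test length needed to catch a fault of detection probability
# p_detect with the given confidence (standard bound, not PROTEST's code).
def required_test_length(p_detect, confidence):
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_detect))

# Hardest fault detected by 1 in 1000 random patterns, 99% confidence:
print(required_test_length(p_detect=0.001, confidence=0.99))  # 4603
```

The bound makes clear why random-pattern testability hinges on the hardest fault: test length grows roughly as 1/p for small detection probabilities, which is what motivates PROTEST's proposed modifications to the pattern sets.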

Proceedings ArticleDOI
01 Jun 1985
TL;DR: This paper describes a family of multiprocessor-based accelerated CAD systems applied to electronic computer aided design and elaborates on the market and technology factors that influenced the product's development.
Abstract: This paper describes a family of multiprocessor-based accelerated CAD systems applied to electronic computer aided design. It discusses the product requirements, hardware architecture, system software, and principal applications running on the system. It also elaborates on the market and technology factors that influenced the product's development.

Journal ArticleDOI
Kofi E. Torku1, Dave A. Kiesling1
TL;DR: Noise resulting from the switching of high-current, off-product drivers in VLSI circuits causes substantial problems during manufacturing test and may lead to zero yield, but can be mitigated or eliminated either during the design phase or at manufacturing test.
Abstract: Noise resulting from the switching of high-current, off-product drivers in VLSI circuits causes substantial problems during manufacturing test and may lead to zero yield. These problems, seldom addressed during product design, can be mitigated or eliminated either during the design phase or at manufacturing test. In this article, we present some of our experiences with noise during manufacturing test, and discuss several solutions to the problem. These remedies include such manufacturing test options as relaxing the test specification, or "guardbanding," which requires both substantial study of the impact on quality and continual tracking of the problem level; modifying the test program to eliminate the need for a guardband; or having the logic designer create a new set of functional test patterns specifically designed to avoid simultaneously switching drivers. We also examine alternative design methods for use during both product and tester design.


Journal ArticleDOI
01 Jun 1985
TL;DR: Automatic test pattern generation methods can be used to determine the test stimuli (or test vectors) required to achieve or approximate an exhaustive test.
Abstract: Testing of circuits with a few hundred logic functions can, in general, be performed by the use of selected logic stimuli (Mueldorf and Savkav (1)). Exhaustive testing of circuits demands that all possible logic states in which a circuit can exist must be considered. Automatic test pattern generation (ATPG) methods (Williams and Parker (2), Papaionnou (3), Schnurmann et al (4)) can be used to determine the test stimuli (or test vectors) required to achieve or approximate such an exhaustive test. For combinational circuits, where the present states of the output variables are a function only of the present states of the input variables, exhaustive testing requires derivation of a test sequence to create all of the possible input combinations and check the outputs for correct responses. These input stimuli can be applied from automatic test equipment (ATE) systems and the responses can subsequently be sensed by the same equipment.
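The exhaustive procedure described above amounts to applying all 2**n input vectors and comparing responses against a known-good reference. A minimal sketch (the 2-bit adder is my own toy circuit; in practice the ATE applies the vectors):

```python
from itertools import product

# Known-good reference: 2-bit adder returning (carry, sum1, sum0).
def adder2(a1, a0, b1, b0):
    s = (a1 * 2 + a0) + (b1 * 2 + b0)
    return (s >> 2) & 1, (s >> 1) & 1, s & 1

# Exhaustive test: every input combination, response checked against
# the reference; the returned list holds the failing vectors.
def exhaustive_test(dut, reference, n_inputs):
    return [v for v in product([0, 1], repeat=n_inputs)
            if dut(*v) != reference(*v)]

# A faulty copy whose carry output is stuck at 0:
faulty = lambda a1, a0, b1, b0: (0,) + adder2(a1, a0, b1, b0)[1:]
print(len(exhaustive_test(adder2, adder2, 4)),
      len(exhaustive_test(faulty, adder2, 4)))   # prints: 0 6
```

The 2**n growth in vector count is exactly why the ATPG methods cited above aim to approximate, rather than literally perform, an exhaustive test for wide circuits.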

Journal ArticleDOI
TL;DR: An overview of the issues involved in testing VLSI chips is provided; two aspects of testing that are closely related to design, namely testability analysis and design for testability, are reviewed.
Abstract: This paper provides an overview of the issues involved in testing of VLSI chips. The discussion is centered-around large digital logic chips. Memory chips are discussed only briefly to point out some of the differences involved in their testing. Topics of fault modelling, test generation, and test evaluation are introduced. Currently used procedures of testing chips on automatic test equipment (ATE) are outlined. Finally, two aspects of testing that are closely related to design, namely testability analysis and design for testability, are reviewed.

Journal ArticleDOI
TL;DR: In this article, an easily testable structure and related strategies are presented, where tests are generated locally and then expressed in terms of primary inputs and outputs using a topological approach as general strategy and algebraic techniques for propagation of signals through macros.