
Showing papers on "Design for testing published in 1999"


Proceedings ArticleDOI
15 Jun 1999
TL;DR: In this article, the authors propose a Parallel Serial Full Scan (PSFS) technique for reducing the test application time for full-scan embedded cores, which divides the scan chain into multiple partitions and shifts the same vector into each partition through a single scan-in input.
Abstract: We propose a new design for testability technique, Parallel Serial Full Scan (PSFS), for reducing the test application time for full scan embedded cores. Test application time reduction is achieved by dividing the scan chain into multiple partitions and shifting the same vector into each scan chain through a single scan-in input. The experimental results for the ISCAS'89 circuits show that the PSFS technique significantly reduces both the test application time and the amount of test data for full scan embedded cores.
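The time saving from this kind of partitioning is easy to quantify: with the chain split into k sub-chains that are all loaded in parallel from one scan-in pin, the shift phase per vector shrinks from n cycles to roughly n/k. A minimal sketch (function name and chain lengths are illustrative, not taken from the paper):

```python
import math

def psfs_shift_cycles(chain_length: int, partitions: int) -> int:
    """Scan-shift cycles per vector when a full-scan chain of
    `chain_length` flip-flops is split into `partitions` sub-chains
    that are all loaded in parallel from a single scan-in input."""
    # Each sub-chain holds at most ceil(n/k) flops; they fill together.
    return math.ceil(chain_length / partitions)

# A 1200-flop chain split four ways loads in 300 cycles instead of 1200.
assert psfs_shift_cycles(1200, 1) == 1200
assert psfs_shift_cycles(1200, 4) == 300
```

Because every partition receives the same bits, the stored test data shrinks by the same factor, which matches the paper's claim of reducing both test time and test data volume.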

334 citations


Proceedings ArticleDOI
28 Sep 1999
TL;DR: The experimental results demonstrate that with automation of the proposed solutions, logic BIST can achieve test quality approaching that of ATPG with minimal area overhead and few changes to the design flow.
Abstract: This paper discusses practical issues involved in applying logic built-in self-test (BIST) to four large industrial designs. These multi-clock designs, ranging in size from 200 K to 800 K gates, pose significant challenges to logic BIST methodology, flow, and tools. The paper presents the process of generating a BIST-compliant core along with the logic BIST controller for at-speed testing. Comparative data on fault grades and area overhead between automatic test pattern generation (ATPG) and logic BIST are reported. The experimental results demonstrate that with automation of the proposed solutions, logic BIST can achieve test quality approaching that of ATPG with minimal area overhead and few changes to the design flow.

324 citations



Proceedings ArticleDOI
28 Sep 1999
TL;DR: This article presents a methodology for debugging multiple clock domain systems-on-a-chip with a set of design-for-debug modules designed into an IC to make it debuggable.
Abstract: For today's multi-million transistor designs, existing design verification techniques cannot guarantee that first silicon is designed error free. Therefore, techniques are necessary to efficiently debug first-silicon. In this article, we present a methodology for debugging multiple clock domain systems-on-a-chip. In addition to scan chains, a set of design-for-debug modules is designed into an IC to make it debuggable. Debugger tool software interacts with the on-chip DfD to make the debug features available from a workstation.

149 citations


Journal ArticleDOI
TL;DR: The design for testability (DFT) of active analog filters based on oscillation-test methodology is described and the DFT techniques investigated are very suitable for automatic testable filter synthesis and can be easily integrated in the tools dedicated to automatic filter design.
Abstract: The oscillation-test strategy is a low-cost and robust test method for mixed-signal integrated circuits. Being a vectorless test method, it allows one to eliminate the analog test vector generator. Furthermore, as the oscillation frequency is considered to be digital, it can be precisely analyzed using pure digital circuitry and can be easily interfaced to test techniques dedicated to the digital part of the circuit under test (CUT). This paper describes the design for testability (DFT) of active analog filters based on the oscillation-test methodology. Active filters are transformed to oscillators using very simple techniques. The tolerance band of the oscillation frequency is determined by a Monte Carlo analysis taking into account the nominal tolerance of all CUT components. Discrete practical realizations and extensive simulations based on 1.2 µm CMOS technology parameters affirm that the test technique presented for active analog filters ensures high fault coverage and requires a negligible area overhead. Finally, the DFT techniques investigated are very suitable for automatic testable filter synthesis and can be easily integrated in the tools dedicated to automatic filter design.
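The Monte Carlo step can be pictured with a generic first-order oscillator whose frequency is f = 1/(2πRC); both the formula and the component values below are illustrative assumptions, not the paper's circuits:

```python
import math
import random

def oscillation_band(r_nom, c_nom, tol, n=10000, seed=0):
    """Monte Carlo estimate of the oscillation-frequency tolerance band
    for a filter-turned-oscillator, assuming f = 1/(2*pi*R*C).
    R and C are drawn uniformly within +/- tol of their nominals."""
    rng = random.Random(seed)
    freqs = []
    for _ in range(n):
        r = r_nom * (1 + rng.uniform(-tol, tol))
        c = c_nom * (1 + rng.uniform(-tol, tol))
        freqs.append(1.0 / (2 * math.pi * r * c))
    return min(freqs), max(freqs)

lo, hi = oscillation_band(10e3, 10e-9, 0.05)  # 10 kOhm, 10 nF, 5% parts
```

A measured oscillation frequency outside [lo, hi] classifies the filter as faulty; widening the band trades fault coverage for yield loss, which is why the paper derives it from the components' nominal tolerances.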

141 citations


Proceedings ArticleDOI
28 Sep 1999
TL;DR: An innovative self-test and self-repair technique that generates and analyzes the required failure bitmap information on the fly during self- test and then automatically repairs and verifies the repaired RAM arrays.
Abstract: An innovative self-test and self-repair technique supports built-in self-test and built-in self-repair of large embedded RAM arrays with spare rows and columns. The technique generates and analyzes the required failure bitmap information on the fly during self-test and then automatically repairs and verifies the repaired RAM arrays.
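The repair-allocation step behind such a scheme can be sketched as a greedy cover of the failure bitmap with spare rows and columns. The optimal allocation problem is NP-complete, so on-chip implementations use heuristics; the function below is an illustrative sketch, not the paper's circuit:

```python
from collections import Counter

def allocate_spares(fail_bitmap, spare_rows, spare_cols):
    """Greedily repair a RAM failure bitmap: repeatedly spend a spare on
    the row or column covering the most unrepaired faulty cells.
    Returns (rows_used, cols_used), or None if the spares run out."""
    faults = set(fail_bitmap)            # {(row, col), ...}
    rows_used, cols_used = [], []
    while faults:
        r, nr = Counter(r for r, _ in faults).most_common(1)[0]
        c, nc = Counter(c for _, c in faults).most_common(1)[0]
        if nr >= nc and len(rows_used) < spare_rows:
            rows_used.append(r)
            faults = {f for f in faults if f[0] != r}
        elif len(cols_used) < spare_cols:
            cols_used.append(c)
            faults = {f for f in faults if f[1] != c}
        elif len(rows_used) < spare_rows:
            rows_used.append(r)
            faults = {f for f in faults if f[0] != r}
        else:
            return None                  # unrepairable with these spares
    return rows_used, cols_used

# A clustered row defect plus one stray cell: one spare row + one column.
assert allocate_spares([(0, 0), (0, 1), (0, 2), (3, 5)], 1, 1) == ([0], [5])
```

Doing this "on the fly", as the paper does, avoids exporting the full bitmap to the tester, which is the main cost saving of built-in self-repair.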

127 citations


Patent
03 Jun 1999
TL;DR: In this article, a system and method for architecting design-for-test circuitry (e.g., scan architecting) within an integrated circuit design having sub-designs is presented.
Abstract: A system and method for architecting design for test circuitry (e.g., scan architecting) within an integrated circuit design having subdesigns (e.g., modules). The novel system contains a default operational mode (no user specification) and an operational mode based on user specifications; within either mode, the system recognizes and allows definition of subdesign scan chains which can be linked together alone or with other scan elements to architect complex scan chains (e.g., top-level scan chains). The system includes specification, analysis, synthesis, and reporting processes which can be used in an IC design having a hierarchical structure including modules. The specification process accesses a design database and a script file and allows a user to define global scan properties (scan style, number of chains, etc.), properties of a particular scan chain (membership, name, etc.), test signals (scan-in, scan-out, scan-enable, etc.), complex elements used as part of a scan chain without requiring scan replacement, and wires and latches forming connections between scan elements; this information is associated with the selected design database. Analysis reads the design database and performs architecting of scan chains based on inferred scan elements of the design and defined (e.g., specified) scan elements. During analysis, the logic within the design database is not altered and a script is generated for user modification/verification. Specification and analysis can be executed iteratively until the desired scan structures are planned. Synthesis then implements the desired DFT circuitry by altering the design database based on the scan chains planned by analysis.

110 citations


Proceedings ArticleDOI
01 Jun 1999
TL;DR: The novel feature of the approach is the use of an embedded microprocessor/memory pair to test the remaining components of the SOC, achieving at-speed testing and great flexibility since most of the testing process is based on software.
Abstract: The purpose of this paper is to develop a flexible design for test methodology for testing a core-based system on chip (SOC). The novel feature of the approach is the use of an embedded microprocessor/memory pair to test the remaining components of the SOC. Test data is downloaded using DMA techniques directly into memory, while the microprocessor uses the test data to test the core. The test results are transferred to a MISR for evaluation. The approach has several important advantages over conventional ATPG, such as achieving at-speed testing, not limiting the chip speed to the tester speed during test, and achieving great flexibility since most of the testing process is based on software. Experimental results on an example system are discussed.
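The MISR mentioned above compacts many test-response words into a single signature that is compared against a precomputed golden value. A sketch of the idea (register width and tap positions are illustrative, not the paper's):

```python
def misr_signature(responses, width=8, taps=(0, 2, 3, 4)):
    """Compact a stream of `width`-bit test responses with a
    multiple-input signature register (MISR): each cycle the register
    shifts with LFSR-style feedback, then XORs in the next response."""
    mask = (1 << width) - 1
    state = 0
    for word in responses:
        fb = 0
        for t in taps:                  # feedback = XOR of tap bits
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & mask
        state ^= word & mask            # fold in the response word
    return state

good = misr_signature([0x3A, 0x7F, 0x01, 0xC0])
bad  = misr_signature([0x3A, 0x7F, 0x00, 0xC0])  # single-bit error
assert good != bad  # mismatch against the golden signature flags a fault
```

Compaction is what lets the on-chip processor evaluate arbitrarily long response streams without shipping them off-chip, at the cost of a small aliasing probability.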

102 citations


Journal ArticleDOI
TL;DR: A novel test methodology that not only substantially reduces the total test pattern number for multiple circuits but also allows a single input data line to support multiple scan chains and provides a low-cost and high-performance method to integrate the boundary scan and scan architectures.
Abstract: Scan designs can alleviate test difficulties of sequential circuits by replacing the memory elements with scannable registers. However, scan operations usually result in long test application time. Most classical methods for solving this problem either perform test compaction to obtain fewer test vectors or use multiple scan chain designs to reduce the scan time. For a large system, test vector compaction is a time-consuming process, while multiple scan chains either require extra pin overhead or need the sharing of normal I/O and scan I/O pins. In this paper, we present a novel test methodology that not only substantially reduces the total test pattern count for multiple circuits but also allows a single input data line to support multiple scan chains. Our main idea is to exploit the "sharing" property of test patterns among all circuits under test (CUTs). By appropriately connecting the inputs of all CUTs during the automatic test-pattern generation process such that the generated test patterns can be broadcast to all scan chains when the actual testing operation is executed, the above-mentioned problems can be solved effectively. Our method also provides a low-cost and high-performance way to integrate the boundary scan and scan architectures. Experimental results show that 157 test patterns are enough to detect all detectable faults in the ten ISCAS'85 combinational circuits, while 280 are enough for the ten largest ISCAS'89 scan-based sequential circuits.

74 citations


Journal ArticleDOI
TL;DR: The differences between traditional and core-based test development are described, the future challenges regarding standardization, tool development, and academic and industrial research are listed, and an overview of current industrial approaches are presented.
Abstract: Advances in semiconductor design and manufacturing technology enable the design of complete systems on one IC. To develop these system ICs in a timely manner, traditional IC design in which everything is designed from scratch, is replaced by a design style based on embedding large reusable modules, the so-called cores. Effectively, the design of a core-based IC is partitioned over the core provider(s) and the system-chip integrator. The development of tests should follow the same partitioning. We describe the differences between traditional and core-based test development, and present an overview of current industrial approaches. We list the future challenges regarding standardization, tool development, and academic and industrial research.

73 citations


Proceedings ArticleDOI
24 Sep 1999
TL;DR: A fast simulation-based method to compute an efficient seed (initial state) of a given primitive-polynomial LFSR TPG that can handle large combinational circuits with many primary inputs.
Abstract: Linear Feedback Shift Registers (LFSRs) are commonly used as pseudo-random test pattern generators (TPGs) in BIST schemes. This paper presents a fast simulation-based method to compute an efficient seed (initial state) of a given primitive-polynomial LFSR TPG. The size of the LFSR, the primitive feedback polynomial, and the length of the generated test sequence are known a priori. The method uses a deterministic test cube compression technique and produces a one-seed LFSR test sequence of a predefined test length that achieves high fault coverage. This technique can be applied either in pseudo-random testing for BISTed circuits containing few random-resistant faults, or in pseudo-deterministic BIST, where it allows the hardware generator area overhead to be reduced. Compared with existing methods, the proposed technique is able to deal with large combinational circuits with many primary inputs. Experimental results demonstrate the effectiveness of our method.
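The TPG being seeded is an ordinary LFSR. The sketch below uses a tiny 4-bit maximal-length example to show why the seed matters: every nonzero seed selects a different window of the same maximal-length sequence, and seed selection amounts to picking the window that covers the needed test cubes (the paper of course works with much larger LFSRs):

```python
def lfsr_patterns(seed, taps, width, count):
    """Generate `count` pseudo-random patterns from a Fibonacci LFSR
    that shifts left and feeds back the XOR of the bits in `taps`."""
    assert seed != 0, "the all-zero state locks up an LFSR"
    mask = (1 << width) - 1
    state, out = seed, []
    for _ in range(count):
        out.append(state)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & mask
    return out

# Taps (3, 2) make this 4-bit LFSR maximal-length: it visits all 15
# nonzero states before repeating, whatever the (nonzero) seed.
pats = lfsr_patterns(seed=0b1001, taps=(3, 2), width=4, count=15)
assert len(set(pats)) == 15
```

A seed-selection method like the paper's simulates candidate windows of such a sequence against the fault list (or compressed test cubes) and keeps the single seed whose window reaches the target fault coverage.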

Proceedings ArticleDOI
Michinobu Nakao1, Seiji Kobayashi1, Kazumi Hatayama1, K. Iijima1, S. Terada1 
28 Sep 1999
TL;DR: Efficient test point selection algorithms, suitable for overhead-reduction approaches such as restricted cell replacement and test-point flip-flop sharing, are proposed to meet these requirements.
Abstract: This paper presents a practical test point insertion method for scan-based BIST. To apply test point insertion in actual LSIs, especially high-performance LSIs, it is important to reduce the delay penalty and the area overhead of the inserted test points. Efficient test point selection algorithms, which are suitable for overhead-reduction approaches such as restricted cell replacement and test-point flip-flop sharing, are proposed to meet these requirements. The effectiveness of the algorithms is demonstrated by experiments.

Journal ArticleDOI
TL;DR: This paper presents a design for testability technique for testing such core-based systems and shows that the proposed scheme has significantly lower area overhead, delay overhead, and test application time compared to FScan-BScan and F Scan-TBus, without any compromise in the system fault coverage.
Abstract: In a fundamental paradigm shift in system design, entire systems are being built on a single chip, using multiple embedded cores. Though the newest system design methodology has several advantages in terms of time-to-market and system cost, testing such core-based systems is difficult, mainly due to the problem of justifying test sequences at the inputs of a core embedded deep in the circuit and propagating test responses from the core outputs. In this paper, we first present a design for testability technique for testing such core-based systems. In this scheme, untestable cores are first made testable using hierarchical testability analysis techniques. If necessary, additional testability hardware is added to the cores to make them transparent so that they can propagate test data without information loss. This testability and transparency technique is currently applicable to cores of the following types: application-specific integrated circuits, application-specific programmable processors, and application-specific instruction processors. Other core types can be made testable and transparent using traditional techniques. The testable and transparent cores can then be integrated together with some system-level testability hardware to ensure justification of precomputed test sequences of each core from system primary inputs to the core inputs and propagation of test responses from core outputs to system primary outputs. Justification and propagation of test sequences are done at the system level by extending and suitably modifying the symbolic hierarchical testability analysis method that has been successfully applied to register-transfer level circuits. Since the testability analysis method is symbolic, the system test generation method is independent of the bit-width of the cores. The system-level test set is obtained as a byproduct of the testability analysis and insertion method without further search.
The test methodology was applied to six example systems. Besides the proposed test method, the two methods that are currently used in the industry were also evaluated: (1) FScan-BScan, where each core is full-scanned, and system test is performed using boundary scan and (2) FScan-TBus, where each core is full-scanned, and system test is performed using a test bus. The experiments show that the proposed scheme has significantly lower area overhead, delay overhead, and test application time compared to FScan-BScan and FScan-TBus, without any compromise in the system fault coverage.

Journal ArticleDOI
TL;DR: A global design for test methodology for testing a core-based system in its entirety is developed by introducing a "bypass" mode for each core, by which data can be transferred from a core's input port to its output port without interfering with the core circuitry itself.
Abstract: The purpose of this paper is to develop a global design for test methodology for testing a core-based system in its entirety. This is achieved by introducing a "bypass" mode for each core, by which data can be transferred from a core's input port to its output port without interfering with the core circuitry itself. The interconnections are thoroughly tested because they are used to propagate test data (patterns or signatures) in the system. The system is modeled as a directed weighted graph in which the accessibility (of the core input and output ports) is solved as a shortest path problem. Finally, a pipelined test schedule is made to overlap accessing input ports (to send test patterns) and output ports (to observe the signatures). The experimental results show higher fault coverage and shorter test time.
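The shortest-path formulation can be sketched directly: model cores and primary I/O as graph nodes, bypass transfers as weighted edges (e.g., cycle costs), and run Dijkstra. The node names and weights below are invented for illustration:

```python
import heapq

def cheapest_access(graph, src, dst):
    """Dijkstra over a weighted digraph of cores, where an edge weight
    is the cost of moving test data through a core's bypass mode.
    Returns (total_cost, path)."""
    dist, prev = {src: 0}, {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:                      # reconstruct the access path
            path = [u]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return d, path[::-1]
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return float("inf"), []

soc = {"PI": [("coreA", 2), ("coreB", 5)],
       "coreA": [("coreB", 1), ("PO", 7)],
       "coreB": [("PO", 1)]}
cost, path = cheapest_access(soc, "PI", "coreB")
# routing through coreA's bypass costs 3, cheaper than the direct 5-cost hop
```

The pipelined schedule in the paper then overlaps these justification paths (primary inputs to core inputs) with the propagation paths carrying responses out to primary outputs.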

Proceedings ArticleDOI
24 Sep 1999
TL;DR: It is argued that the fact that expansion and scheduling take place on test protocols rather than on complete tests is important to reduce the computational complexity of the associated software tools.
Abstract: A core-based design style introduces new test challenges, which, if not dealt with properly, might defeat the entire purpose of using pre-designed cores. Macro Test is a liberal test approach for core-based designs, i.e., it supports all kinds of test access mechanisms to the embedded cores. The separation of tests into test protocols and test patterns plays a crucial role in Macro Test. Tasks such as expansion of core-level tests to chip level, scheduling of tests, and test assembly are carried out on test protocols by software tools. This paper addresses the role of test protocols and features an example of a small scan-testable core. We argue that the fact that expansion and scheduling take place on test protocols rather than on complete tests is important to reduce the computational complexity of the associated software tools.

Proceedings ArticleDOI
16 Nov 1999
TL;DR: A new design for testing SRAM-based field programmable gate arrays (FPGAs), based on slightly modifying the original SRAM part of the FPGA so that the configuration data can be looped on-chip, which makes the test easier.
Abstract: This paper presents a new design for testing SRAM-based field programmable gate arrays (FPGAs). The proposed method is able to test both the configurable logic blocks (CLBs) and the interconnection networks. The design is based on slightly modifying the original SRAM part of the FPGA so that the configuration data can be looped on-chip, which makes the test easier. This method requires a very short test time compared to previous works. Moreover, the off-chip memory used to store the configuration data is considerably reduced. Application of this method to the XC4000 family and ORCA shows that, relative to previous works, the test time can be reduced by 87.2% and the required off-chip memory by 88.6%.

Journal ArticleDOI
TL;DR: The method utilizes the register-transfer level (RTL) circuit description of an ASPP or ASIP to come up with a set of test microcode patterns which can be written into the instruction read-only memory (ROM) of the processor.
Abstract: In this paper, we present design for testability (DFT) and hierarchical test generation techniques for facilitating the testing of application-specific programmable processors (ASPPs) and application-specific instruction processors (ASIPs). The method utilizes the register-transfer level (RTL) circuit description of an ASPP or ASIP to come up with a set of test microcode patterns which can be written into the instruction read-only memory (ROM) of the processor. These lines of microcode dictate a new control/data flow in the circuit and can be used to test modules which are not easily testable. The new control/data flow is used to justify precomputed test sets of a module from the system primary inputs to the module inputs and propagate output responses from the module output to the system primary outputs. The testability analysis, which is based on the relevant control/data flow extracted from the RTL circuit, is symbolic. Thus, it is independent of the bit-width of the data path and is extremely fast. The test microcode patterns are a by-product of this analysis. If the derived test microcode cannot test all untested modules in the circuit, then test multiplexers are added (usually to the off-critical paths of the data path) to test these modules. This is done to guarantee the testability of all modules in the circuit. If the control microcode memory of the processor is erasable, then the test microcode lines can be erased once the testing of the chip is over. In that case, the DFT scheme has very little overhead (typically less than 1%). Otherwise, the test microcode lines remain as an overhead in the control memory. The method requires the addition of only one external test pin. Application of this technique to several examples has resulted in a very high fault coverage (above 99.6%) for all of them. The test generation time is about three orders of magnitude smaller compared to an efficient gate-level sequential test generator. 
The average area overhead (without assuming an erasable ROM) is 3.1% while the delay overheads are negligible. This method does not require any scan in the controller or data path. It is also amenable to at-speed testing.

Proceedings ArticleDOI
16 Nov 1999
TL;DR: If a failure occurs in the scan chain, irregular IDDQ current flow will occur, identifying the defective chain; the actual location of the failure inside the chain can then be ascertained.
Abstract: For functional failure analysis, use of scan design for effective testing of sequential circuits is very popular and can be considered the norm in the LSI industry. However, in order to take advantage of the features offered by scan designs, it is imperative that the scan chain operates properly. In this paper, I introduce a new technique for the efficient diagnosis of the scan chain. The basis of this paper is that if a failure occurs in the scan chain, irregular IDDQ current flow will occur and identify the defective chain. Moreover, the actual location of the failure inside the chain can also be ascertained. Examples then demonstrate the effectiveness of this method.

Journal ArticleDOI
TL;DR: Current ATPG techniques and efforts to adapt ATPG technology to handle deep-submicron faults and to identify design errors and timing problems during design verification are described.
Abstract: Test development automation tools, which automate dozens of tasks essential for developing adequate tests, generally fall into four categories: design for testability (DFT), test pattern generation, pattern-grading, and test program development and debugging. The focus in the article is on automatic test-pattern-generation tools. Researchers have looked primarily at issues such as scalability, ability to handle various fault models, and how to extend the algorithms beyond Boolean domains to handle different abstraction levels. Their aims were to speed up test generation, reduce test sequence length, and minimize power consumption. As design trends move toward nanometer technology however, new ATPG problems are emerging. Current modeling and vector generation techniques must give way to new techniques that consider timing information during test generation, scale to larger designs, and can capture extreme design conditions. The authors describe current ATPG techniques and efforts to adapt ATPG technology to handle deep-submicron faults and to identify design errors and timing problems during design verification.

Proceedings ArticleDOI
28 Sep 1999
TL;DR: The design-for-test and design-for-debug features of the AMD-K7™ microprocessor are described, including considerations for debug support from wafer-level test through system-level test.
Abstract: Time-to-volume and production manufacturing requirements drove the test methodology chosen for the seventh-generation AMD x86-compatible processor. The use of embedded hardware to meet the test objectives includes considerations for debug support from wafer-level test through system-level test. The use of ATPG to generate high stuck-at fault coverage tests was central to top-level design considerations of the AMD-K7™ processor. Partitioning choices were also made to ensure ATPG-produced tests could be enhanced with additional test sets targeting other fault models. This paper describes the design-for-test and design-for-debug features of the AMD-K7™ microprocessor.

Journal ArticleDOI
Marly Roncken1
01 Feb 1999
TL;DR: The impact of self-timing on the effectiveness of IDDQ-based test methods for bridging faults is quantified, and a Design-for-Test (DfT) approach is proposed to develop a low-cost DfT solution.
Abstract: For a CMOS manufacturing process, asynchronous ICs are similar to synchronous ICs. The defect density distributions are similar, and hence, so are the fault models and fault-detection methods. So, what makes us think that asynchronous circuits are much harder to test than synchronous circuits? Is it because the effectiveness of the best-known test methods for synchronous circuits drops when applied to asynchronous circuits? That may very well be a temporary hurdle. Many test methods have already been reevaluated and successfully adapted from the synchronous to the asynchronous test domain. The paper addresses one of the final hurdles: IDDQ testing. This type of test method, based on measuring the quiescent power supply current, is very effective for detecting (resistive) bridging faults in CMOS circuits. Detection of bridging faults is crucial, because they model the majority of today's manufacturing defects. IDDQ fault effects are sensitized in a particular state or set of states and can only be detected if we stop the circuit operation right there. This is a problem for asynchronous circuits, because their operation is self-timed. In the paper, we quantify the impact of self-timing on the effectiveness of IDDQ-based test methods for bridging faults, and propose a Design-for-Test (DfT) approach to develop a low-cost DfT solution. For comparison, we do the same for logic voltage testing and stuck-at faults. The approach is illustrated on circuits from Tangram, the asynchronous design style employed at Philips Research, but it is applicable to asynchronous circuits in general.

Proceedings ArticleDOI
21 Mar 1999
TL;DR: A method to clock the domino pipeline at the maximum rate by using soft synchronizers between pipeline stages and thus allowing "time borrowing" i.e., allowing input signals to arrive at a pipe stage after the clock tick is described.
Abstract: We describe a method to clock the domino pipeline at the maximum rate by using soft synchronizers between pipeline stages and thus allowing "time borrowing" i.e., allowing input signals to arrive at a pipe stage after the clock tick. We show a robust way of placing "roadblocks" (equivalent to slave latches) in each pipe stage to maintain the optimal clock rate. As explicit latches are not required at the pipe stage boundaries, the latch overhead is eliminated. We use the self-resetting scheme to circumvent often performance-limiting precharge timing requirements. We also address several issues regarding the testability of self-resetting domino circuits including scan register design and multiple stuck fault testing.

Journal ArticleDOI
TL;DR: A novel cut-based functional debugging paradigm that leverages the advantages of both emulation and simulation is introduced that enables the user to run long test sequences in emulation, and upon error detection, roll-back to an arbitrary instance in execution time and transparently switch over to simulation-based debugging.
Abstract: Growing design complexity has made functional debugging of application-specific integrated circuits crucial to their development. Two widely used debugging techniques are simulation and emulation. Design simulation provides good controllability and observability of the variables in a design, but is two to ten orders of magnitude slower than the fabricated design. Design emulation and fabrication provide high execution speed, but significantly restrict design observability and controllability. To facilitate debugging, and in particular error diagnosis, we introduce a novel cut-based functional debugging paradigm that leverages the advantages of both emulation and simulation. The approach enables the user to run long test sequences in emulation, and upon error detection, roll-back to an arbitrary instance in execution time, and transparently switch over to simulation-based debugging for full design visibility and controllability. The new debugging approach introduces several optimization problems. We formulate the optimization tasks, establish their complexity, and develop most-constrained least-constraining heuristics to solve them. The effectiveness of the new approach and accompanying algorithms is demonstrated on a set of benchmark designs where combined emulation and simulation is enabled with low hardware overhead.

Journal ArticleDOI
TL;DR: An out-of-order, three-way superscalar x86 microprocessor with a 15-stage pipeline, organized to allow 600 MHz operation, can fetch, decode, and retire up to three x86 instructions per cycle to independent integer and floating-point schedulers.
Abstract: An out-of-order, three-way superscalar x86 microprocessor with a 15-stage pipeline, organized to allow 600 MHz operation, can fetch, decode, and retire up to three x86 instructions per cycle to independent integer and floating-point schedulers. The schedulers can simultaneously dispatch up to nine operations to seven integer and three floating-point execution resources. A sophisticated, cell-based design technique and judicious application of custom circuitry permit the development of a processor with an aggressive architecture and high clock frequency with a rapid design cycle. Design-for-test techniques such as scan and clock bypassing permit straightforward testing and debugging of the part.

Journal ArticleDOI
TL;DR: A new testability analysis and test-point insertion method at the register transfer level (RTL), assuming a full scan and a pseudorandom built-in self-test design environment that allows full application of RTL synthesis optimization on both the functional and the test logic concurrently within the designer constraints such as area and delay.
Abstract: This paper proposes a new testability analysis and test-point insertion method at the register transfer level (RTL), assuming a full scan and a pseudorandom built-in self-test design environment. The method is based on analyzing the RTL synchronous specification in synthesizable very high-speed integrated circuit hardware description language (VHDL). A VHDL intermediate form representation is first obtained from the VHDL specification and then converted to a directed acyclic graph (DAG) that represents all data dependencies and flow of control in the VHDL specification. Testability measures (TMs) are computed on this graph. The considered TMs are controllability and observability for each bit of each signal/variable that is declared or may be implied in the VHDL specification. Internal signals of functional modules (FMs) such as adders and comparators are also analyzed to compute their controllability and observability values. The internal signals are obtained by decomposing large FMs at the RTL into smaller ones. The calculation of TMs is carried out at a functional level rather than the gate level, to reduce or eliminate errors introduced by ignoring reconvergent fanouts in the gate network, and to reduce the complexity of the DAG construction. Based on the controllability/observability values, test-point insertion is performed to improve the testability for each bit of each signal/variable. This insertion is carried out in the original VHDL specification and thus becomes a part of it, unlike in other existing methods. This allows full application of RTL synthesis optimization on both the functional and the test logic concurrently within the designer constraints such as area and delay. A number of benchmark circuits were used to show the applicability and the effectiveness of our method in terms of the resulting testability, area, and delay.
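The paper computes its testability measures functionally on an RTL DAG; the classical gate-level analogue (COP-style signal probabilities) conveys the controllability idea in a few lines. This sketch is a generic illustration, not the paper's RTL algorithm:

```python
def cop_controllability(netlist, primary_inputs):
    """COP-style 1-controllability: the probability that each net is 1
    under uniformly random primary inputs, ignoring reconvergent fanout.
    `netlist` maps net -> (gate, [fanin nets]) in topological order."""
    c1 = {pi: 0.5 for pi in primary_inputs}
    for net, (gate, fanins) in netlist.items():
        ps = [c1[f] for f in fanins]
        if gate == "AND":                # output is 1 iff all inputs are 1
            p = 1.0
            for x in ps:
                p *= x
        elif gate == "OR":               # output is 0 iff all inputs are 0
            q = 1.0
            for x in ps:
                q *= 1.0 - x
            p = 1.0 - q
        elif gate == "NOT":
            p = 1.0 - ps[0]
        else:
            raise ValueError(f"unsupported gate {gate}")
        c1[net] = p
    return c1

c = cop_controllability({"n1": ("AND", ["a", "b"]),
                         "n2": ("OR", ["n1", "c"])},
                        primary_inputs=["a", "b", "c"])
# n1 is 1 only a quarter of the time; a control test point there would
# pull its 1-controllability back toward 0.5.
```

Nets whose controllability strays far from 0.5 (or whose observability is low) are the candidates for test-point insertion; the paper makes the same decision, but on bits of VHDL signals and variables instead of gate outputs.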

Journal ArticleDOI
TL;DR: This article presents an electronic design automation technology analysis and forecast for 1999, which covers IC design moving to a higher level of abstraction; IC design reuse; and hardware/software integration.
Abstract: This article presents an electronic design automation technology analysis and forecast for 1999. The subjects covered include: IC design moving to a higher level of abstraction; IC design reuse; and hardware/software integration.

Patent
01 Jun 1999
TL;DR: In this paper, a graphical user interface (GUI) provides a design engineer with the capability of automatically inserting scan logic and test logic into a design, and can serve as a front end for a design framework.
Abstract: A graphical user interface (GUI) provides a design engineer with the capability of automatically inserting scan logic and test logic into a design. The GUI includes a scan insertion option that lets the design engineer invoke a scan insertion tool to check the design for testability, and it also permits the engineer to invoke a test generation tool, such as an automatic test pattern generator (ATPG), to check the design for fault coverage. The interface, which can serve as a front end for a design framework, enables a design engineer to efficiently increase testability while still in the design phase.

Proceedings ArticleDOI
Carol Pyron, M. Alexander, J. Golab, G. Joos, B. Long, R. Molyneaux, R. Raina, N. Tendolkar
28 Sep 1999
TL;DR: Design-for-manufacturability enhancements provide better tracking of initial silicon and fuse-based memory repair capabilities for improved yield and time-to-market, while methodology and modeling improvements increase LSSD stuck-at fault test coverage.
Abstract: Several advances have been made in the design for testability of the MPC7400, the first fourth-generation PowerPC microprocessor. The memory array built-in self-test algorithms now support detecting write-recovery defects and more comprehensive diagnostics. Delay defects can be tested with scan patterns, with the phase-locked loop providing the at-speed launch-capture events. Several methodology and modeling improvements increased LSSD stuck-at fault test coverage. Design-for-manufacturability enhancements provide better tracking of initial silicon and fuse-based memory repair capabilities for improved yield and time-to-market.

Journal ArticleDOI
01 Mar 1999
TL;DR: This work considers four test sequencing problems that frequently arise in test planning and design for testability (DFT) processes and presents solution approaches for each.
Abstract: We consider four test sequencing problems that frequently arise in test planning and design for testability (DFT) processes. Specifically, we consider the following problems: (1) how to determine a test sequence that does not depend on the failure probability distribution; (2) how to determine a test sequence that minimizes expected testing cost while not exceeding a given testing time; (3) how to determine a test sequence that uses no more than a given number of tests while minimizing the average ambiguity group size; and (4) how to determine a test sequence that minimizes the storage cost of tests in the diagnostic strategy. We present solution approaches for each problem and illustrate the usefulness of the proposed algorithms.
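A common building block for test sequencing of this kind is a one-step information heuristic: at each step, pick the test whose pass/fail outcome most reduces the entropy of the fault hypothesis set per unit cost. The sketch below is an illustrative assumption, not the paper's algorithms; the fault priors, test signatures, and costs are made-up data.

```python
# Hedged sketch: greedy information-gain-per-cost test selection,
# a standard heuristic for test sequencing. Data are illustrative.
import math

def entropy(probs):
    """Shannon entropy of a (renormalized) probability list."""
    total = sum(probs)
    return -sum((p / total) * math.log2(p / total) for p in probs if p > 0)

def next_test(states, tests, costs):
    """Pick the test with the highest information gain per unit cost.
    states: {fault: prior prob}; tests: {test: set of faults it detects}."""
    mass = sum(states.values())
    h0 = entropy(states.values())
    best, best_score = None, -1.0
    for t, detected in tests.items():
        fail_probs = [p for f, p in states.items() if f in detected]
        pass_probs = [p for f, p in states.items() if f not in detected]
        h = 0.0
        if fail_probs:
            h += (sum(fail_probs) / mass) * entropy(fail_probs)
        if pass_probs:
            h += (sum(pass_probs) / mass) * entropy(pass_probs)
        score = (h0 - h) / costs[t]
        if score > best_score:
            best, best_score = t, score
    return best

# t_a isolates the most likely fault f1 outright, so it wins here.
choice = next_test(
    {"f1": 0.5, "f2": 0.25, "f3": 0.25},
    {"t_a": {"f1"}, "t_b": {"f1", "f2"}},
    {"t_a": 1.0, "t_b": 1.0},
)
print(choice)  # t_a
```

Applied repeatedly down each pass/fail branch, this builds a diagnostic tree; the paper's variants then add the respective constraints (testing-time budget, test-count limit, storage cost) on top of such a search.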

Proceedings ArticleDOI
28 Sep 1999
TL;DR: A testability enhancement technique for delay faults in standard scan circuits that requires no modification of the scan chain; extra logic placed on next-state variables can be resynthesized with the circuit to minimize hardware and performance overheads, and complete coverage of detectable delay faults is achieved.
Abstract: We propose a testability enhancement technique for delay faults in standard scan circuits that does not involve modifications to the scan chain. Extra logic is placed on next-state variables and, if necessary, on primary inputs; this logic can be resynthesized with the circuit to minimize its hardware and performance overheads. The proposed technique allows us to achieve complete coverage of detectable delay faults. A simple test generation procedure that guarantees complete coverage when used with the proposed technique is also described.
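The general idea of adding extra logic on a next-state variable can be sketched as a netlist transformation. The representation below (nested tuples, a `test_mode` signal, a mux-based control point) is an illustrative assumption and not the paper's actual construction, which is resynthesized with the circuit rather than inserted literally.

```python
# Hedged sketch: inserting a mux-based control point on a next-state
# line of a toy netlist. Signal names and the netlist encoding are
# illustrative assumptions, not the paper's technique.

def insert_test_point(netlist, line, ctrl):
    """Redrive `line` through a 2-to-1 mux: in test mode the line is
    driven by control input `ctrl`, otherwise by its original driver."""
    original = netlist[line]
    netlist[line] = ("mux", ["test_mode", ctrl, original])
    return netlist

# Next-state variable ns0 originally driven by (s0 AND x).
netlist = {"ns0": ("and", ["s0", "x"])}
insert_test_point(netlist, "ns0", "tp0")
print(netlist["ns0"])  # ('mux', ['test_mode', 'tp0', ('and', ['s0', 'x'])])
```

Because the extra logic is expressed in the same netlist as the functional circuit, a synthesis tool can optimize both together, which is the property the abstract highlights.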