
Showing papers on "Design for testing published in 1998"


Proceedings ArticleDOI
18 Oct 1998
TL;DR: An overview of current industrial practices as well as academic research in core-based IC design is provided and the challenges for future research are described.
Abstract: Advances in semiconductor process and design technology enable the design of complex system chips. Traditional IC design, in which every circuit is designed from scratch and reuse is limited to standard-cell libraries, is increasingly being replaced by a design style based on embedding large reusable modules, the so-called cores. This core-based design poses a series of new challenges, especially in the domains of manufacturing test and design validation and debug. This paper provides an overview of current industrial practices as well as academic research in these areas. We also discuss industry-wide efforts by VSIA and IEEE P1500 and describe the challenges for future research.

513 citations


Proceedings ArticleDOI
18 Oct 1998
TL;DR: This paper presents the concept of a structured test access mechanism for embedded cores: test data access from chip pins to TESTSHELL and vice versa is provided by the TESTRAIL, while the operation of the TESTSHELL is controlled by a dedicated test control mechanism (TCM).
Abstract: The main objective of core-based IC design is improvement of design efficiency and time-to-market. In order to prevent test development from becoming the bottleneck in the entire development trajectory, reuse of pre-computed tests for the reusable pre-designed cores is mandatory. The core user is responsible for translating the test at core level into a test at chip level. A standardized test access mechanism eases this task, thereby contributing to the plug-n-play character of core-based design. This paper presents the concept of a structured test access mechanism for embedded cores. Reusable IP modules are wrapped in a TESTSHELL. Test data access from chip pins to the TESTSHELL and vice versa is provided by the TESTRAIL, while the operation of the TESTSHELL is controlled by a dedicated test control mechanism (TCM). Both the TESTRAIL and the TCM are standardized, but open to extensions.

338 citations


Proceedings ArticleDOI
18 Oct 1998
TL;DR: A novel test vector compression/decompression technique is proposed for reducing the amount of test data that must be stored on a tester and transferred to each core when testing a core-based design.
Abstract: A novel test vector compression/decompression technique is proposed for reducing the amount of test data that must be stored on a tester and transferred to each core when testing a core-based design. A small amount of on-chip circuitry is used to reduce both the test storage and test time required for testing a core-based design. The fully specified test vectors provided by the core vendor are stored in compressed form in the tester memory and transferred to the chip, where they are decompressed and applied to the core (the compression is lossless). Instead of having to transfer each entire test vector from the tester to the core, a smaller amount of compressed data is transferred. This reduces the amount of test data that must be stored on the tester and hence reduces the total test time required for transferring the data with a given test data bandwidth.
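
The abstract does not specify the coding scheme, so the following is a minimal sketch of the general idea, assuming a simple run-length code (the function names and the example vector are invented for illustration): the tester stores (bit, run-length) pairs, and on-chip logic expands them back into the exact original vector.

```python
# Minimal sketch of lossless test-vector compression, assuming a simple
# run-length code (NOT the paper's actual scheme).

def compress(vector: str) -> list[tuple[str, int]]:
    """Encode a fully specified 0/1 test vector as (bit, run-length) pairs."""
    runs: list[tuple[str, int]] = []
    for bit in vector:
        if runs and runs[-1][0] == bit:
            runs[-1] = (bit, runs[-1][1] + 1)
        else:
            runs.append((bit, 1))
    return runs

def decompress(runs: list[tuple[str, int]]) -> str:
    """On-chip-style expansion: reconstructs the original vector exactly."""
    return "".join(bit * length for bit, length in runs)

vector = "0000001111111100000011"
runs = compress(vector)
assert decompress(runs) == vector          # the compression is lossless
print(f"{len(vector)} bits stored as {len(runs)} runs")
```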

310 citations


Proceedings ArticleDOI
18 Oct 1998
TL;DR: This paper describes a structured test re-use methodology and infrastructure for core-based system chips that addresses the test access, isolation, interconnect and shadow logic test problems without requiring modifications to the components, even for cores with more ports than chip pins.
Abstract: This paper describes a structured test re-use methodology and infrastructure for core-based system chips. The methodology is based on the use of a structured test bus framework that provides access to virtual components in a system chip, allowing the test methodologies and test vectors for these components to be re-used. It addresses the test access, isolation, interconnect and shadow logic test problems without requiring modifications to the components, even for cores with more ports than chip pins. The test area overhead required to implement this methodology, including test bus routing, can be less than 1%.

298 citations


Journal ArticleDOI
TL;DR: This survey attempts to outline some of this recent work on analog testing, ranging from tools for simulation-based test set development and optimization to built-in self-test (BIST) circuitry.
Abstract: Traditionally, work on analog testing has focused on diagnosing faults in board designs. Recently, with increasing levels of integration, not just diagnosing faults, but distinguishing between faulty and good circuits has become a problem. Analog blocks embedded in digital systems may not easily be separately testable. Consequently, many papers have recently been written proposing techniques to reduce the burden of testing analog and mixed-signal circuits. This survey attempts to outline some of this recent work, ranging from tools for simulation-based test set development and optimization to built-in self-test (BIST) circuitry.

282 citations


Journal ArticleDOI
TL;DR: This paper surveys representative contributions to power modeling, estimation, synthesis, and optimization techniques that account for power dissipation during the early stages of the design flow that have appeared in the recent literature.
Abstract: Silicon area, performance, and testability have been, so far, the major design constraints to be met during the development of digital very-large-scale-integration (VLSI) systems. In recent years, however, things have changed; increasingly, power has been given weight comparable to the other design parameters. This is primarily due to the remarkable success of personal computing devices and wireless communication systems, which demand high-speed computation with low power consumption. In addition, there is strong pressure on manufacturers of high-end products to keep power under control, due to the increased costs of packaging and cooling this type of device. Lastly, the need to ensure high circuit reliability has become more stringent. The availability of tools for the automatic design of low-power VLSI systems has thus become necessary. More specifically, following a natural trend, the interest of researchers has lately shifted to the investigation of power modeling, estimation, synthesis, and optimization techniques that account for power dissipation during the early stages of the design flow. This paper surveys representative contributions to this area that have appeared in the recent literature.

232 citations


Proceedings ArticleDOI
18 Oct 1998
TL;DR: A novel methodology that extends the BIST concept to diagnosis and repair utilizing redundant components and allows for the autonomous repair of defective circuitry without external stimulus is described.
Abstract: As the density of embedded memory increases, manufacturing yields of integrated circuits can reach unacceptable limits. Normal memory testing operations require BIST to effectively deal with problems such as limited access and "at speed" testing. In this paper we describe a novel methodology that extends the BIST concept to diagnosis and repair utilizing redundant components. We describe an application using redundant columns and accompanying algorithms. It allows for the autonomous repair of defective circuitry without external stimulus (e.g. laser repair). The method has been implemented with negligible timing penalties and reasonable area overhead.
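
As a concrete illustration of the repair step, here is a deliberately simplified sketch (the fault-log format, function name, and one-spare-per-column policy are assumptions, not the paper's implementation): BIST collects failing addresses, and the repair succeeds when the distinct failing columns fit within the available spares.

```python
# Simplified model of column-redundancy allocation (assumed policy):
# repair succeeds iff the distinct failing columns fit in the spares.

def allocate_spare_columns(fail_log, num_spares):
    """Map each failing column to a spare column; None if unrepairable."""
    failing_cols = sorted({col for _row, col in fail_log})
    if len(failing_cols) > num_spares:
        return None
    return {col: spare for spare, col in enumerate(failing_cols)}

fails = [(3, 17), (9, 17), (4, 42)]        # BIST-reported (row, col) fails
print(allocate_spare_columns(fails, num_spares=2))   # {17: 0, 42: 1}
print(allocate_spare_columns(fails, num_spares=1))   # None: cannot repair
```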

222 citations


Proceedings ArticleDOI
18 Oct 1998
TL;DR: The design of scan chains as a transport mechanism for test patterns from IC pins to embedded cores and vice versa, and the test time consequences of reusing cores with fixed internal scan chains in multiple ICs with varying design parameters, are analyzed.
Abstract: The size of the test vector set forms a significant factor in the overall production costs of ICs, as it defines the test application time and the required pin memory size of the test equipment. Large core-based ICs often require a very large test vector set for high test coverage. This paper deals with the design of scan chains as a transport mechanism for test patterns from IC pins to embedded cores and vice versa. The number of pins available to accommodate scan test is given, as well as the number of scan test patterns and scannable flip-flops of each core. We present and analyze three scan chain architectures for core-based ICs, which aim at a minimum test vector set size. We give experimental results for the three architectures on an industrial IC. Furthermore, we analyze the test time consequences of reusing cores with fixed internal scan chains in multiple ICs with varying design parameters.
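
To make the cost trade-off concrete, here is a back-of-the-envelope sketch using the standard scan-time approximation (the flip-flop counts and pattern count below are invented, and the paper's architectures are more nuanced than a balanced partition): with p patterns and a longest chain of L flip-flops, scan-in of each pattern overlaps scan-out of the previous one, giving roughly p·(L + 1) + L cycles.

```python
# Standard scan test time approximation (not the paper's exact model).

def scan_test_cycles(num_patterns: int, longest_chain: int) -> int:
    # L shift cycles + 1 capture per pattern; responses shift out while
    # the next pattern shifts in, plus a final scan-out of L cycles.
    return num_patterns * (longest_chain + 1) + longest_chain

core_ffs = [1200, 800, 500]       # scannable flip-flops per core (assumed)
patterns = 400                    # scan test patterns (assumed)

for num_chains in (1, 2, 4):      # scan-in pins spent on chains
    longest = -(-sum(core_ffs) // num_chains)     # balanced partition
    print(f"{num_chains} chain(s): "
          f"{scan_test_cycles(patterns, longest):,} cycles")
```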

216 citations


Proceedings ArticleDOI
01 Nov 1998
TL;DR: By appropriately connecting the inputs of all circuits under test during the ATPG process such that the generated test patterns can be broadcast to all scan chains when actual testing is executed, it is shown that 177 and 280 test patterns are enough to detect all detectable faults in all 10 ISCAS'85 combinational circuits and the 10 largest ISCAS'89 sequential circuits.
Abstract: Single scan chain architectures suffer from long test application time, while multiple scan chain architectures require large pin overhead and are not supported by boundary scan. We present a novel method to allow a single input line to support multiple scan chains. By appropriately connecting the inputs of all circuits under test during the ATPG process such that the generated test patterns can be broadcast to all scan chains when actual testing is executed, we show that 177 and 280 test patterns are enough to detect all detectable faults in all 10 ISCAS'85 combinational circuits and the 10 largest ISCAS'89 sequential circuits, respectively.
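
A minimal sketch of the broadcast idea (the helper name and equal chain lengths are assumptions for illustration): one tester channel feeds the identical bit stream to every chain, so the ATPG constraint is that one stream must be simultaneously valid for all chains sharing the input.

```python
# Broadcast scan sketch: a single input line drives several scan chains.

def broadcast_scan_in(stream: str, num_chains: int) -> list[str]:
    """Every chain receives an identical copy of the one input stream."""
    return [stream for _ in range(num_chains)]

chains = broadcast_scan_in("1011001", num_chains=4)
assert all(chain == "1011001" for chain in chains)
# Versus one long chain of 4 * 7 flip-flops, scan-in time drops by 4x,
# and unlike a 4-channel scheme, only one input pin is needed.
```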

199 citations


Proceedings ArticleDOI
18 Oct 1998
TL;DR: This work presents a versatile automatic functional test generation methodology for microprocessors that can be applied to both design validation and manufacturing test, especially in high speed "native" mode.
Abstract: New methodologies based on functional testing and built-in self-test can narrow the gap between necessary solutions and existing techniques for processor validation and testing. We present a versatile automatic functional test generation methodology for microprocessors. The generated assembly instruction sequences can be applied to both design validation and manufacturing test, especially in high speed "native" mode. All the functional capabilities of complex processors can be exercised, leading to high quality validation sequences and manufacturing tests with high fault coverage. The tests can also be applied in a built-in self-test fashion. Experimental results on two microprocessors show that this method is very effective in generating high quality manufacturing tests as well as in functional design validation.

163 citations


Journal ArticleDOI
TL;DR: An efficient scheme to compress and decompress in parallel deterministic test patterns for circuits with multiple scan chains while achieving a complete fault coverage for any fault model for which test cubes are obtainable is presented.
Abstract: The paper presents an efficient scheme to compress and decompress in parallel deterministic test patterns for circuits with multiple scan chains. It employs a boundary-scan-based environment for high quality testing with flexible trade-offs between test data volume and test application time, while achieving complete fault coverage for any fault model for which test cubes are obtainable. It also reduces bandwidth requirements, as all test cube transfers involve compressed data. The test patterns are generated by the reseeding of a two-dimensional hardware structure which comprises a linear feedback shift register (LFSR), a network of exclusive-or (XOR) gates used to scramble the bits of test vectors, and extra feedbacks which allow including internal scan flip-flops in the decompressor structure to minimize the area overhead. The test data decompressor operates in two modes: pseudorandom and deterministic. In the first mode, the pseudorandom pattern generator (PRPG) is used purely as a generator of test vectors. In the latter case, variable-length seeds are serially scanned through the boundary-scan interface into the PRPG and parts of internal scan chains and, subsequently, a decompression is performed in parallel by means of the PRPG and selected scan flip-flops interconnected to form the decompression device. Extensive experiments with the largest ISCAS'89 benchmarks show that the proposed technique greatly reduces the amount of test data in a cost-effective manner.
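
The reseeding principle can be shown with a bare-bones LFSR model (illustrative only; the actual decompressor also scrambles bits through the XOR network and folds internal scan flip-flops into the structure): only a short seed lives in tester memory, and the on-chip generator expands it deterministically.

```python
# Fibonacci LFSR sketch of the reseeding idea: a short seed regenerates
# a much longer pseudorandom pattern on chip.

def lfsr_expand(seed: int, taps: int, nbits: int, length: int) -> list[int]:
    """'taps' is a bitmask of the feedback cell positions."""
    state, out = seed, []
    for _ in range(length):
        out.append(state & 1)
        feedback = bin(state & taps).count("1") & 1   # XOR of tapped cells
        state = (state >> 1) | (feedback << (nbits - 1))
    return out

# x^4 + x + 1 is primitive, so any nonzero 4-bit seed yields a
# maximal-length sequence; only the 4 seed bits need tester storage.
print(lfsr_expand(seed=0b1011, taps=0b0011, nbits=4, length=15))
```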

Book
01 Jan 1998
TL;DR: This book surveys analog and mixed-signal test, covering defect-oriented testing, fault simulation (including the integration of behavioral modeling), automatic test generation, design for test, spectrum-based BIST, implementation of the 1149.4 Standard Mixed-Signal Test Bus, and test techniques for switched-current circuits.
Abstract: List of Figures. List of Tables. Preface. Contributors. 1. Introduction. Motivation. History. Current Research. Influence of Digital Test. Analog Test Issues. Test Paradigms. Organization. Conclusion. 2. Defect-Oriented Testing. Introduction. Previous Work. Estimation Method. Topological Method. Taxonomical Method. Defect-Based Realistic Fault Dictionary. Implementation. A Case Study. Fault Matrix Generation. Stimuli Matrix. Simulation Results. Silicon Results. Observations and Analysis. IFA-based Fault Grading and DFT for Analog Circuits. A/D Converter Testing. Description of the Experiment. Fault Simulation Issues. Fault Simulation Results. Analysis. DFT Measures. High-Level Analog Fault Models. Discussion: Strengths and Weaknesses of IFA-Based Tests. 3. Fault Simulation. Introduction. Why Analog Fault Simulation? Analog Fault Models and What-if Analysis. Focus and Organization. Fault Simulation of Linear Analog Circuits. Householder's Formula. Discrete Z-domain Mapping. Fault Bands and Band Faults. Interval-Mathematics Approach. Summary. Fault Simulation of Nonlinear Analog Circuits. The Complementary Pivot Method. Fault Simulation via One-Step Relaxation. Simulation by Fault Ordering. Handling Statistical Variations. Summary. Fault Co-Simulation with Multiple Levels of Abstraction. Mixed-Signal Simulators. Incorporating Behavioral Models in Fault Simulation. Fault Macromodeling and Induced Behavioral Fault Modeling. Statistical Behavioral Modeling. Remarks on Hardware Description Languages. Concluding Remarks. 4. Automatic Test Generation Algorithms. Introduction. Fundamental Issues in Analog ATPG. Structural Test Versus Functional Test. Path Sensitization. Measurement Impact on Test Generation. Simulation Impact on Test Generation. Test Generation Algorithms and Results. Functional Test Generation Algorithms. Structural Test Generation Algorithms. ATPG Based on Automatic Test Selection Algorithms. DFT-based Analog ATPG Algorithms. Conclusions. 5. Design for Test. Preliminaries. Analog Characteristics. Common Characteristics. Generic Test Techniques. Increased Controllability/Observability. A/D Boundary Control. System-Specific Test Techniques. Analog Scan. Boundary Scan. Macro-Based DFT. Operational Amplifiers. Data Converters. Filters. Quality Analysis. Preliminaries. Analysis. Conclusion. 6. Spectrum-Based Built-in Self-Test. Introduction. Some Early BIST Schemes. On-Chip Signal Generation. Digital Frequency Synthesis. Delta-Sigma Oscillators. Fixed-Length Periodic Bit Stream. Parameter Analysis. Fast Fourier Transform. Sinewave Correlation. Bandpass Filters. Application: MADBIST. Baseband MADBIST. Baseband MADBIST Experiments. MADBIST for Transceiver Circuits. Conclusions and Future Directions. 7. Implementing the 1149.4 Standard Mixed-Signal Test Bus. Overview of 1149.1 and 1149.4. Test Functions Needed to Implement 1149.4. Test Capabilities That This Standard Facilitates. Resistance, Capacitance, and Inductance Measurement. Measuring DC Parameters of Inputs and Outputs. Differential Measurements. Bandwidth. Delay Measurement. Potential Benefits of Using This Standard. Costs of Implementing This Standard on an IC. Practical Circuits Compliant with the Standard (Draft 18). Achieving Measurement Accuracy. DC Measurement Errors. AC Measurement Errors. Noise. Lessons from Test ICs. IMP (International Microelectronics Products) IC. Matsushita IC. Conclusions. 8. Test Techniques for CMOS Switched-Current Circuits. Introduction. Current Copiers: Basic Building Blocks of SI Circuits. Structure and Operation. Testing Current Copiers. Testing of Switched-Current Algorithmic A/D Converters. Structure and Operation. Concurrent Error Detection (CED). Test Generation. BIST Design. Scan Structures: Design for Testability. Conclusion. Index.

Journal ArticleDOI
TL;DR: In this paper, a low-cost vectorless test solution, known as oscillation test, is investigated to test the operational amplifier (op amp), which is one of the most commonly encountered analog building blocks.
Abstract: The operational amplifier (op amp) is one of the most commonly encountered analog building blocks. In this paper, the problem of testing an integrated op amp is treated. A new low-cost vectorless test solution, known as oscillation test, is investigated to test the op amp. During the test mode, the op amps are converted to a circuit that oscillates, and the oscillation frequency is evaluated to monitor faults. The tolerance band of the oscillation frequency is determined using a Monte Carlo analysis taking into account the nominal tolerances of all important technology and design parameters. Faults in the op amps under test which cause the oscillation frequency to exit the tolerance band can therefore be detected. Some Design for Testability (DfT) rules to rearrange op amps to form oscillators are presented, and the related practical problems and limitations are discussed. The oscillation frequency can be easily and precisely evaluated using purely digital circuitry. The simulation and practical implementation results confirm that the presented techniques ensure high fault coverage with low area overhead.
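
The tolerance-band computation can be sketched as follows (the oscillator relation is a generic RC stand-in and the tolerance figures are invented; the paper derives the band from the actual reconfigured op-amp circuit): sample the parameters over their tolerances, take the extreme frequencies as the band, and flag any device whose measured frequency falls outside it.

```python
import math
import random

def osc_freq(r: float, c: float) -> float:
    # Assumed RC-oscillator-style relation, purely illustrative.
    return 1.0 / (2 * math.pi * r * c)

random.seed(0)
samples = [osc_freq(r=1e4 * random.uniform(0.95, 1.05),    # ±5% resistor
                    c=1e-9 * random.uniform(0.90, 1.10))   # ±10% capacitor
           for _ in range(10_000)]
f_min, f_max = min(samples), max(samples)
print(f"tolerance band: {f_min:.0f} Hz .. {f_max:.0f} Hz")

measured = 24_000.0    # hypothetical frequency measured on one device
print("fault detected" if not (f_min <= measured <= f_max) else "pass")
```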

Proceedings ArticleDOI
18 Oct 1998
TL;DR: A novel test methodology based on BIST and ATPG is proposed to decrease testing time for core-based system LSIs; the testing-time minimization is formulated as a combinatorial optimization problem that selects the optimal set of test vectors for each core.
Abstract: In this paper, we propose a novel test methodology for core-based system LSIs. Our test methodology aims to decrease testing time for core-based system LSIs. Considering testing time reduction, our test methodology is based on BIST and ATPG. The main contributions of this paper are summarized as follows: (i) BIST is efficiently combined with external testing to relax the limitation on the external primary inputs and outputs; (ii) external testing for one of the cores and BIST for the others are performed in parallel to reduce the total testing time; (iii) the testing time minimization problem for core-based system LSIs is formulated as a combinatorial optimization problem that selects the optimal set of test vectors from the given sets of test vectors for each core.

Journal ArticleDOI
A. Carbine1, D. Feltham
TL;DR: The need to quickly ramp a complex, high-performance microprocessor into high-volume manufacturing with low defect rates led this design team to a custom, low-area DFT approach and a manually written test methodology that targeted several fault models.
Abstract: The need to quickly ramp a complex, high-performance microprocessor into high-volume manufacturing with low defect rates led this design team to a custom, low-area DFT approach and a manually written test methodology that targeted several fault models. Their approach effectively balanced testability needs with other design constraints, while enabling excellent time to market and test quality.

Proceedings ArticleDOI
01 May 1998
TL;DR: This paper proposes a new methodology for testing a core-based system-on-chip (SOC), targeting the simultaneous reduction of test area overhead and test application time, and demonstrates the ability to design highly testable SOCs with minimized test area overhead, minimized test application time, or a desired trade-off between the two.
Abstract: This paper proposes a new methodology for testing a core-based system-on-chip (SOC), targeting the simultaneous reduction of test area overhead and test application time. Testing of embedded cores is achieved using the transparency properties of surrounding cores. At the core level, testability and transparency can be achieved by reusing existing logic inside the core, and by providing different versions of the core having different area overheads and transparency latencies. At the chip level, the technique analyzes the topology of the SOC to select the core versions that best meet the user's desired test area overhead and test application time objectives. Application of the method to example SOCs demonstrates the ability to design highly testable SOCs with minimized test area overhead, minimized test application time, or a desired trade-off between the two. Significant reduction in area overhead and test application time compared to an existing SOC testing technique is also demonstrated.

Proceedings ArticleDOI
10 Feb 1998
TL;DR: This paper presents high-level estimation techniques for hardware effort and hardware/software communication time, and presents a cost function for the purpose of hardware/software partitioning that offers a dynamic weighting of its components.
Abstract: High-level estimation techniques are of paramount importance for design decisions like hardware/software partitioning or design space exploration. In both cases, an appropriate compromise between accuracy and computation time determines the feasibility of those estimation techniques. In this paper, we present high-level estimation techniques for hardware effort and hardware/software communication time. Our techniques deliver fast results at sufficient accuracy. Furthermore, we show how these techniques are applied in order to cope with contradictory design goals like performance constraints and hardware effort constraints. As a solution, we present a cost function for the purpose of hardware/software partitioning that offers a dynamic weighting of its components. The conducted experiments show that the use of our estimation techniques, in conjunction with their efficient combination, leads to reasonable hardware/software implementations, as opposed to approaches that consider single constraints only.
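
The abstract does not give the exact weighting rule, so the sketch below is one plausible reading (the quadratic penalty and the function signature are assumptions): each component's weight grows once its constraint is violated, dynamically steering the partitioner toward feasible solutions.

```python
# Dynamically weighted cost function for HW/SW partitioning (assumed
# weighting rule; the paper's exact scheme is not reproduced here).

def cost(hw_area: float, exec_time: float,
         area_budget: float, deadline: float) -> float:
    area_ratio = hw_area / area_budget
    time_ratio = exec_time / deadline
    # A violated constraint gets a quadratically growing weight.
    w_area = area_ratio ** 2 if area_ratio > 1 else 1.0
    w_time = time_ratio ** 2 if time_ratio > 1 else 1.0
    return w_area * area_ratio + w_time * time_ratio

# A partition that misses its deadline is penalized disproportionately:
print(cost(hw_area=80, exec_time=90, area_budget=100, deadline=100))
print(cost(hw_area=80, exec_time=130, area_budget=100, deadline=100))
```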

Journal ArticleDOI
TL;DR: This scheme identifies a suitable control and data flow from the register-transfer level circuit, and uses it to test each embedded element in the circuit by symbolically justifying its precomputed test set from the system primary inputs to the element inputs and symbolically propagating the output response to the system primary outputs.
Abstract: In this paper, we present a technique for extracting functional (control/data flow) information from register-transfer level controller/data path circuits, and illustrate its use in design for hierarchical testability of these circuits. This scheme does not require any additional behavioral information. It identifies a suitable control and data flow from the register-transfer level circuit, and uses it to test each embedded element in the circuit by symbolically justifying its precomputed test set from the system primary inputs to the element inputs and symbolically propagating the output response to the system primary outputs. When symbolic justification and propagation become difficult, it inserts test multiplexers at suitable points to increase the symbolic controllability and observability of the circuit. These test multiplexers are mostly restricted to off-critical paths. Testability analysis and insertion are completely based on the register-transfer level circuit and the functional information automatically extracted from it, and are independent of the data path bit width owing to their symbolic nature. Furthermore, the data path test set is obtained as a byproduct of this analysis without any further search. Unlike many other design-for-testability techniques, this scheme makes the combined controller-data path very highly testable. It is general enough to handle control-flow-intensive register-transfer level circuits like protocol handlers as well as data-flow-intensive circuits like digital filters. It results in low area/delay/power overheads, high fault coverage, and very low test generation times (because it is symbolic and independent of bit width). Also, a large part of our system-level test sets can be applied at speed. Experimental results on many benchmarks show the average area, delay, and power overheads for testability to be 3.1, 1.0, and 4.2%, respectively. Over 99% fault coverage is obtained in most cases, with a two to four orders of magnitude test generation time advantage over an efficient gate-level sequential test pattern generator and a one to three orders of magnitude advantage over an efficient gate-level combinational test pattern generator (that assumes full scan). In addition, the test application times obtained for our method are comparable with those of gate-level sequential test pattern generators, and up to two orders of magnitude smaller than designs using full scan.

Proceedings ArticleDOI
18 Oct 1998
TL;DR: A novel design-for-test (DFT) concept for I/O delay testing while contacting very few pads, using boundary scan and new test-generation software; in production testing of the IBM System/390 Generation 3™ and several ASIC chips, these patterns uncovered unique manufacturing defects.
Abstract: This paper describes a novel design-for-test (DFT) concept for I/O delay testing while contacting very few pads, using boundary scan and new test-generation software. In production testing of the IBM System/390 Generation 3™ and several ASIC chips, these patterns uncovered unique manufacturing defects.

Proceedings ArticleDOI
V. Boppana1, M. Fujita
18 Oct 1998
TL;DR: A technique for capturing the effects of all possible faulty behaviors that can be generated from specific sets of nodes (called X-lists) in the circuit is presented, which provides a way for drawing powerful diagnostic inferences about the presence of faults when analyzing the observed faulty responses.
Abstract: In this paper, we provide techniques for fault and error diagnosis based on capturing unmodeled faulty behavior. We present a technique for capturing the effects of all possible faulty behaviors that can be generated from specific sets of nodes (called X-lists) in the circuit. Since all possible erroneous behaviors are captured, this provides a way of drawing powerful diagnostic inferences about the presence of faults at these sets of nodes when analyzing the observed faulty responses. We also present an efficient diagnosis algorithm that exploits the modeling of all possible behaviors and can be built within a framework of conventional test and simulation tools. Results from numerous diagnosis experiments are then used to demonstrate that the techniques developed can indeed achieve significant improvements in the accuracy of diagnosis.

Proceedings ArticleDOI
M. Hirech1, J. Beausang, Xinli Gu
18 Oct 1998
TL;DR: This paper proposes integrating scan chain reordering based on physical design information into synthesis-based design reoptimization; it describes the benefits of such an approach and the design synthesis context, presents new ordering concepts, and concludes with results on real designs.
Abstract: Scan chain reordering based on physical design information helps in reducing routing bottlenecks and in minimizing design constraint violations. This paper proposes integrating this capability into synthesis-based design reoptimization. It describes the benefits of such an approach and the design synthesis context, presents new ordering concepts, and concludes with results on real designs.

Proceedings ArticleDOI
26 Apr 1998
TL;DR: A design-for-test method which serializes parallel circuit inputs and de-serializes circuit outputs to achieve 1 GHz operation on test equipment operating at frequencies below 100 MHz has been used to successfully characterize the operation of a 1 GHz microprocessor chip.
Abstract: As microprocessor speeds approach 1 GHz and beyond, the difficulties of at-speed testing continue to increase. In particular, automated test equipment which operates at these frequencies is very limited. This paper discusses a design-for-test method which serializes parallel circuit inputs and de-serializes circuit outputs to achieve 1 GHz operation on test equipment operating at frequencies below 100 MHz. This method has been used to successfully characterize the operation of a 1 GHz microprocessor chip.
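
The arithmetic behind the approach follows directly from the two clock rates in the abstract (the required mux ratio is a lower bound; the exact serializer width is a design choice):

```python
import math

f_core = 1_000_000_000     # 1 GHz on-chip operation
f_tester = 100_000_000     # tester limited to 100 MHz

ratio = math.ceil(f_core / f_tester)
print(f"each 1 GHz serial pin needs >= {ratio} tester-rate parallel bits")
# i.e., a 10:1 on-chip serializer lets a 100 MHz tester drive 1 GHz logic.
```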

Proceedings ArticleDOI
26 Apr 1998
TL;DR: It is shown that it is possible to synthesize very large and fast phase shifters for BIST applications with guaranteed phase shifts between scan chains and a very small number of gates per channel.
Abstract: The paper presents novel systematic design techniques for the automated synthesis of phase shifter circuits used to remove the effects of structural dependencies featured by test generators driving parallel scan chains. As shown in the paper, it is possible to synthesize very large and fast phase shifters for BIST applications with guaranteed phase shifts between scan chains and a very small number of gates per channel.
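
The principle the synthesis relies on can be demonstrated in a few lines (illustrative, not the paper's synthesis algorithm): by the shift-and-add property of maximal-length sequences, XOR-ing cells of an LFSR yields the same m-sequence at a different, fixed phase, so each scan channel gets a guaranteed shift from a few XOR gates.

```python
# Phase-shifter principle: an XOR of LFSR cells equals the reference
# m-sequence rotated by a fixed, guaranteed offset.

def lfsr_states(nbits: int = 4, taps: int = 0b0011, seed: int = 0b0001):
    """One full period of a Fibonacci LFSR for x^4 + x + 1 (primitive)."""
    state = seed
    for _ in range((1 << nbits) - 1):
        yield state
        feedback = bin(state & taps).count("1") & 1
        state = (state >> 1) | (feedback << (nbits - 1))

states = list(lfsr_states())
ref = [s & 1 for s in states]                          # channel 0: cell 0
xor_channel = [((s >> 1) ^ (s >> 3)) & 1 for s in states]  # cells 1 XOR 3

offsets = [k for k in range(len(ref)) if xor_channel == ref[k:] + ref[:k]]
print("guaranteed phase shift (clocks):", offsets)     # a single offset
```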

Proceedings ArticleDOI
18 Oct 1998
TL;DR: Memory design based on error-detecting and -correcting codes, self-checking design, VLSI-level retry architectures, perturbation-hardened design, tools for evaluating soft error rates, and other on-line testing techniques are becoming mandatory for achieving increasing levels of soft-error robustness while aggressively pushing the limits of technological scaling.
Abstract: Memory design based on error-detecting and -correcting codes, self-checking design, VLSI-level retry architectures, perturbation-hardened design, tools for evaluating soft error rates, and other on-line testing techniques are becoming mandatory in order to achieve increasing levels of soft-error robustness and to push the limits of technological scaling aggressively. In the next few years, considerable effort must be concentrated on the development of such techniques and the related CAD tools.

Proceedings ArticleDOI
02 Dec 1998
TL;DR: An approach to test generation using time expansion models that can reduce hardware overhead and test length compared with full scan while preserving almost 100% fault efficiency is presented.
Abstract: We present an approach to test generation using time expansion models. The tests for acyclic sequential circuits can be generated by applying combinational ATPG to our time expansion models. We performed experiments on application to partial scan designed register-transfer circuits. The results show that our approach can reduce hardware overhead and test length compared with full scan while preserving almost 100% fault efficiency.

Journal ArticleDOI
TL;DR: A new bit-stuffing technique, which simultaneously solves both the carry-over and source-termination problems efficiently, is proposed and designed in an NU, and a simplified parallel multiplier requires approximately half of the area of a standard parallel multiplier while maintaining a good compression ratio.
Abstract: In this paper, we present a very large scale integration (VLSI) design of adaptive binary arithmetic coding for lossless data compression and decompression. Its main modules consist of an adaptive probability estimation modeler (APEM), an arithmetic operation unit (AOU), and a normalization unit (NU). A new bit-stuffing technique, which simultaneously solves both the carry-over and source-termination problems efficiently, is proposed and designed into the NU. The APEM estimates the conditional probabilities of input symbols efficiently using a table lookup approach with 1.28 kbytes of memory. A new formula which efficiently reflects changes in the symbols' occurrence probabilities is proposed, and a complete binary tree is used to set up the values in the probability table of the APEM. In the AOU, a simplified parallel multiplier, which requires approximately half the area of a standard parallel multiplier while maintaining a good compression ratio, is proposed. Owing to these novel designs, the chip can compress any type of data with an efficient compression ratio. An asynchronous interface circuit with an 8-b first-in first-out (FIFO) buffer for input/output (I/O) communication of the chip is also designed. Thus, both I/O and compression operations in the chip can be done simultaneously. Moreover, the concept of design for testability is used and a scan path is implemented in the chip. A prototype 0.8-µm chip has been designed and fabricated in a reasonable die size. This chip can yield a processing rate of 3 Mb/s with a clock rate of 25 MHz.
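
To illustrate what table-lookup probability estimation means here, the following is a small sketch in the spirit of the APEM (the context scheme, table layout, and counting update are assumptions, not the chip's actual design):

```python
# Context-indexed probability table with a simple counting update.

class AdaptiveEstimator:
    def __init__(self, context_bits: int = 3):
        # One (count0, count1) pair per context value; starts uniform.
        self.table = [[1, 1] for _ in range(1 << context_bits)]
        self.mask = (1 << context_bits) - 1
        self.context = 0

    def p_one(self) -> float:
        """Estimated P(next bit = 1) given the current context."""
        c0, c1 = self.table[self.context]
        return c1 / (c0 + c1)

    def update(self, bit: int) -> None:
        self.table[self.context][bit] += 1
        self.context = ((self.context << 1) | bit) & self.mask

est = AdaptiveEstimator()
for bit in [1, 1, 0, 1, 1, 0, 1, 1]:      # the model adapts as bits arrive
    est.update(bit)
print(f"P(next bit = 1 | recent context): {est.p_one():.2f}")
```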

Journal ArticleDOI
TL;DR: A method for test-point insertion in large combinational circuits to increase their path delay fault testability, with experimental results demonstrating the effectiveness of the proposed methods on large benchmark circuits.
Abstract: We present a method for test-point insertion in large combinational circuits, to increase their path delay fault testability. Using an appropriate test application scheme with multiple clock periods, a test point on a line g divides the set of paths through g for testing purposes into a subset of paths from the primary inputs up to g, and a subset of paths from g to the primary outputs. Each one of these subsets can be tested separately. The number of paths that need to be tested directly is thus reduced. In addition, by breaking an untestable path into two or more testable subpaths, it is possible to obtain a fully testable circuit. Test-point insertion is done to reduce the number of paths, using a time-efficient procedure. Indirectly, it also reduces the number of tests and renders untestable paths testable. When the number of paths is sufficiently small, and if the test generation procedure to be used for the circuit is known, a procedure is given to perform test-point insertion directly targeting the path delay faults that are still untestable. Experimental results are presented to demonstrate the effectiveness of the proposed methods in increasing the testability of large benchmark circuits, and to demonstrate the overheads involved.
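
The path-count argument reduces to one line of arithmetic (the counts below are invented for illustration): a test point on line g splits the P_in(g) × P_out(g) paths through g into two separately testable sets of sizes P_in(g) and P_out(g).

```python
# Worked arithmetic for the path-count reduction at a test point on g.

p_in, p_out = 1000, 800       # paths: inputs -> g, and g -> outputs

paths_without_tp = p_in * p_out    # each input-side path pairs with each
                                   # output-side path: 800,000 paths
paths_with_tp = p_in + p_out       # two separately tested subsets: 1,800

print(f"reduction factor: {paths_without_tp / paths_with_tp:.0f}x")
```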

Proceedings ArticleDOI
18 Oct 1998
TL;DR: TAO exploits the algebra of regular expressions to provide a unified framework for handling a wide variety of circuits including application-specific integrated circuits, application- specific programmable processors, application -specific instruction processors, digital signal processors and microprocessors.
Abstract: In this paper, we present TAO, a novel methodology for high-level testability analysis and optimization of register-transfer level controller/data path circuits. Unlike existing high-level testing techniques that cater restrictively to certain classes of circuits or design styles, TAO exploits the algebra of regular expressions to provide a unified framework for handling a wide variety of circuits including application-specific integrated circuits, application-specific programmable processors, application-specific instruction processors, digital signal processors and microprocessors. We also augment TAO with a design-for-test framework that can provide a low-cost testability solution by examining the trade-offs in choosing from a diverse array of testability modifications like partial scan or test multiplexer insertion in different parts of the circuit. Test generation is symbolic and, hence, independent of bit-width. Experimental results on benchmark circuits show that TAO is very efficient, in addition to being comprehensive. The fault coverage obtained is above 99% in all cases. The average area and delay overheads for incorporating testability into the benchmarks are only 3.3% and 1.1%, respectively. The test application time is comparable to that associated with gate-level sequential test generators.

Proceedings ArticleDOI
23 Feb 1998
TL;DR: A Computer-Aided Testing (CAT) tool is proposed that brings a systematic way of dealing with testing problems in emerging microsystems; some of the many open problems to be addressed in the near future as extensions to this work are also discussed.
Abstract: In this work, a Computer-Aided Testing (CAT) tool is proposed that brings a systematic way of dealing with testing problems in emerging microsystems. Experiments with case studies illustrate the techniques and tools embedded in the CAT environment. Some of the many open problems that shall be addressed in the near future as extensions to this work are also discussed.

Proceedings ArticleDOI
21 Jun 1998
TL;DR: This paper presents the application of discrete event system (DES) techniques to delay fault modeling and analysis, and develops models and algorithms that provide design testability evaluation and robust delay fault test generation.
Abstract: This paper presents the application of discrete event system (DES) techniques to delay fault modeling and analysis. A DES is a dynamical system that evolves according to the asynchronous occurrence of certain discrete changes, called events. An integrated circuit (chip) may be considered a discrete event system. DES modeling techniques are used for delay fault analysis of a chip design. This formal analysis technique may help avoid some of the large cost of simulation. DES delay gate models and circuit path delay models are developed, as well as algorithms that provide design testability evaluation and robust delay fault test generation.