
Showing papers by "Kewal K. Saluja published in 2005"


Proceedings Article•DOI•
01 May 2005
TL;DR: Experimental results show the effectiveness of the novel low-capture-power X-filling method in reducing capture power dissipation without any impact on area, timing, or fault coverage.
Abstract: Research on low-power scan testing has been focused on the shift mode, with little or no consideration given to the capture mode power. However, high switching activity when capturing a test response can cause excessive IR drop, resulting in significant yield loss. This paper addresses this problem with a novel low-capture-power X-filling method by assigning 0's and 1's to unspecified (X) bits in a test cube to reduce the switching activity in capture mode. This method can be easily incorporated into any test generation flow, where test cubes are obtained during ATPG or by X-bit identification. Experimental results show the effectiveness of this method in reducing capture power dissipation without any impact on area, timing, or fault coverage.
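
To make the idea concrete, below is a minimal sketch of greedy 0/1 X-filling driven by a capture-transition count. The netlist encoding, the capture-power proxy (scan cells whose captured response differs from the stimulus bit they held), and the fill order are illustrative assumptions, not the paper's actual algorithm.

```python
def simulate(netlist, values):
    """Evaluate a combinational netlist given input/scan-cell values.
    netlist: list of (name, op, fanins) in topological order."""
    vals = dict(values)
    for name, op, fanins in netlist:
        ins = [vals[f] for f in fanins]
        if op == "AND":
            vals[name] = int(all(ins))
        elif op == "OR":
            vals[name] = int(any(ins))
        elif op == "NOT":
            vals[name] = 1 - ins[0]
    return vals

def capture_cost(netlist, cube, feedback):
    """Capture-power proxy: number of scan cells whose captured response
    differs from the stimulus they held. feedback: cell -> output node."""
    vals = simulate(netlist, cube)
    return sum(vals[out] != cube[cell] for cell, out in feedback.items())

def x_fill(netlist, cube, x_bits, feedback):
    """Greedily assign each X bit the value giving the lower capture cost."""
    cube = dict(cube)
    for cell in x_bits:
        costs = {}
        for v in (0, 1):
            cube[cell] = v
            costs[v] = capture_cost(netlist, cube, feedback)
        cube[cell] = min(costs, key=costs.get)
    return cube

# Toy case: scan cells a and b feed g = AND(a, b); g is captured back into a.
netlist = [("g", "AND", ["a", "b"])]
cube = {"a": 1, "b": 0}                           # b is an X bit, seeded with 0
print(x_fill(netlist, cube, ["b"], {"a": "g"}))   # -> {'a': 1, 'b': 1}
```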

183 citations


Proceedings Article•DOI•
08 Nov 2005
TL;DR: A novel low-capture-power X-filling method of assigning 0's and 1's to unspecified (X) bits in a test cube obtained during ATPG to improve the applicability of scan-based at-speed testing by reducing the risk of test yield loss.
Abstract: Scan-based at-speed testing is a key technology to guarantee timing-related test quality in the deep submicron era. However, its applicability is being severely challenged since significant yield loss may occur from circuit malfunction due to excessive IR drop caused by high power dissipation when a test response is captured. This paper addresses this critical problem with a novel low-capture-power X-filling method of assigning 0's and 1's to unspecified (X) bits in a test cube obtained during ATPG. This method reduces the circuit switching activity in capture mode and can be easily incorporated into any test generation flow to achieve capture power reduction without any area, timing, or fault coverage impact. Test vectors generated with this practical method greatly improve the applicability of scan-based at-speed testing by reducing the risk of test yield loss.

144 citations


Proceedings Article•DOI•
08 Nov 2005
TL;DR: The progressive random access scan is rejuvenated as a design for testability method that simultaneously addresses three limitations of the traditional serial scan, namely test data volume, test application time, and test power.
Abstract: Traditional research on testing VLSI circuits has been confined to the use of the serial scan test architecture, whose origin lies in keeping the hardware overhead low. However, there has been a paradigm shift in the cost factor - the transistor cost has been dropping exponentially whereas the test cost is starting to increase. We believe that adding marginally more hardware is acceptable provided the test cost can be reduced considerably. This paper takes such a view of testing and rejuvenates the random access scan as a design for testability method that simultaneously addresses three limitations of the traditional serial scan, namely test data volume, test application time, and test power. The novelty of the progressive random access scan approach proposed in this paper lies in developing the test architecture and formulating the test application time and test data volume reduction problems. We provide a traveling salesman formulation of these problems in our test architecture setting. Experimental results show the practicality of our approach: the hardware cost components, consisting of routing and transistor count, increase only marginally compared to the serial scan approach, whereas there is a dramatic decrease in test power consumption (nearly a 1000-fold decrease in average test power), and the test data volume and test application time are halved.
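
The traveling-salesman view can be illustrated with a small sketch: in random access scan, only the cells that differ between consecutive test states must be written, so ordering vectors to minimize total Hamming distance reduces both test time and data volume. The nearest-neighbor heuristic below is an illustrative stand-in for the paper's actual formulation.

```python
def hamming(a, b):
    """Number of scan cells that must be rewritten between two vectors."""
    return sum(x != y for x, y in zip(a, b))

def order_tests(vectors):
    """Greedy nearest-neighbor tour over the test vectors."""
    remaining = list(vectors)
    tour = [remaining.pop(0)]              # start from an arbitrary vector
    while remaining:
        nxt = min(remaining, key=lambda v: hamming(tour[-1], v))
        remaining.remove(nxt)
        tour.append(nxt)
    return tour

tests = ["0000", "1111", "0011", "1100", "0001"]
tour = order_tests(tests)
writes = sum(hamming(a, b) for a, b in zip(tour, tour[1:]))
print(tour, "total cell writes:", writes)
# -> ['0000', '0001', '0011', '1111', '1100'] total cell writes: 6
```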

55 citations


Proceedings Article•DOI•
12 Dec 2005
TL;DR: This paper formally defines and evaluates exposure in mobile sensor networks in the presence of obstacles and noise and develops algorithms to calculate the upper and lower bounds on exposure.
Abstract: Sensor networks possess the inherent potential to detect the presence of a target in a monitored region. Although a stationary sensor network is often adequate to meet application requirements, it is not suited to many situations, for example, when a huge number of nodes would be required to monitor a large region. In such situations, mobile sensor networks can be used to resolve the communication and sensing coverage problems. This paper addresses the problem of detecting a target using mobile sensor networks. One of the fundamental issues in target detection problems is exposure, which measures how well the region is covered by the sensor network. While traditional studies focus on stationary sensor networks, this paper formally defines and evaluates exposure in mobile sensor networks in the presence of obstacles and noise. To conform with practical situations, detection is conducted without presuming the target's activities and moving directions. As there is no fixed layout of node positions, a time expansion technique is developed to evaluate exposure. Since determining exposure can be computationally expensive, algorithms to calculate the upper and lower bounds on exposure are developed. Simulation results are also presented to illustrate the effectiveness of the algorithms.
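
As an illustration of the time-expansion idea, the sketch below discretizes time, re-samples the mobile sensors' positions at each step, and accumulates sensing intensity along the target's trajectory. The 1/d^2 sensing model and the straight-line motions are assumptions for the example, not the paper's model.

```python
def intensity(sensor, point):
    """Illustrative sensing model: intensity decays as 1/d^2."""
    d2 = (sensor[0] - point[0]) ** 2 + (sensor[1] - point[1]) ** 2
    return 1.0 / max(d2, 1e-6)                # guard against division by zero

def exposure(target_path, sensor_paths, steps=10, dt=1.0):
    """target_path, sensor_paths: callables mapping time -> (x, y).
    Time expansion: re-sample all positions at each discrete step."""
    total = 0.0
    for k in range(steps + 1):
        t = k * dt
        p = target_path(t)
        total += sum(intensity(s(t), p) for s in sensor_paths) * dt
    return total

# Target crosses left to right; one mobile sensor moves up the y-axis.
target = lambda t: (t, 0.0)
sensor = lambda t: (5.0, t - 5.0)
print(f"exposure = {exposure(target, [sensor]):.3f}")
```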

47 citations


Journal Article•DOI•
TL;DR: The method classifies the pairs of aggressor and victim lines, using topological and timing information, to deduce a set of target crosstalk faults, and identifies the false crosstalk faults that need not (and/or cannot) be tested in synchronous sequential circuits.
Abstract: We describe a method of identifying a set of target crosstalk faults which may need to be tested in synchronous sequential circuits. Our method classifies the pairs of aggressor and victim lines, using topological and timing information, to deduce a set of target crosstalk faults. In this process, our method also identifies the false crosstalk faults that need not (and/or cannot) be tested in synchronous sequential circuits. Experimental results for ISCAS'89 and ITC'99 benchmark circuits show that the proposed method is CPU-time efficient in obtaining the reduced lists of target crosstalk faults. Also, the lists of target crosstalk faults obtained by our method are substantially smaller than the sets of all possible combinations of faults.
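
A minimal sketch of the screening idea: a coupled aggressor/victim pair is kept as a target fault only if the two lines' switching windows overlap. The (earliest, latest) arrival-time windows and the overlap rule below are illustrative stand-ins for the paper's classification rules.

```python
def overlaps(win_a, win_b):
    """Windows are (earliest, latest) possible switching times."""
    return win_a[0] <= win_b[1] and win_b[0] <= win_a[1]

def target_faults(coupled_pairs, windows):
    """coupled_pairs: iterable of (aggressor, victim) line names.
    windows: line name -> (min_arrival, max_arrival)."""
    targets, false_faults = [], []
    for agg, vic in coupled_pairs:
        (targets if overlaps(windows[agg], windows[vic])
         else false_faults).append((agg, vic))
    return targets, false_faults

windows = {"a1": (2.0, 5.0), "a2": (9.0, 11.0), "v": (4.0, 7.0)}
print(target_faults([("a1", "v"), ("a2", "v")], windows))
# -> ([('a1', 'v')], [('a2', 'v')])   a2 switches too late to matter
```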

18 citations


Journal Article•DOI•
TL;DR: It is shown that defect-based testing can be used to optimize the cost of program disturb tests of NVM by establishing the relationship between defect location and fault manifestation using electrical simulation.
Abstract: Nonvolatile memories (NVMs) are susceptible to a special type of faults known as program disturb faults. These faults are described using logical fault models, and often functional tests are used to detect the different faults that occur under such models. The use of functional fault models and tests simplifies the testing process, although such tests can be very long. In this paper, we present a defect-based model that can be used to model different disturb faults in NVM. The relationship between defect location and fault manifestation is first established using electrical simulation. Next, we discuss stress tests and margin-read schemes and how they are used to detect disturb faults. Using electrical simulation results, we show that defect-based testing can be used to optimize the cost of program disturb tests of NVM.
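
The margin-read idea can be sketched in a few lines: a disturbed cell whose threshold voltage has drifted toward the read reference passes a normal read but fails once the reference is tightened by a guard band. All voltage values and the guard band below are illustrative assumptions.

```python
def read(cell_vt, ref_v, margin=0.0):
    """Return True ('programmed') if the cell threshold is above the
    (possibly margined) read reference voltage."""
    return cell_vt > ref_v + margin

REF = 2.0                      # nominal read reference (V), illustrative
healthy, disturbed = 3.0, 2.1  # Vt of a healthy vs. a disturbed cell

for vt in (healthy, disturbed):
    normal = read(vt, REF)
    margined = read(vt, REF, margin=0.3)
    print(f"Vt={vt}: normal read {'pass' if normal else 'fail'}, "
          f"margin read {'pass' if margined else 'fail'}")
# The disturbed cell passes the normal read but fails the margin read.
```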

13 citations


Journal Article•DOI•
TL;DR: This article proposes clock skew scheduling as a tool to address causes of performance-related circuit yield loss, and is an interesting example of how managing circuit-level parameters can have a direct impact on yield metrics, and therefore a clear example of the direction of DFM research.
Abstract: Semiconductor technology advances have enabled designers to integrate more functionality in a single chip. As design complexity increases, many new design techniques are developed to optimize chip area and power consumption, as well as performance. Traditionally, yield improvement has been achieved through process improvement. However, in deep-submicron technologies, process variations are difficult to control. As a result, many design decisions significantly affect yield. Therefore, designers should consider yield-related issues during the design phase. This article proposes clock skew scheduling as a tool to address causes of performance-related circuit yield loss. It is an interesting example of how managing circuit-level parameters can have a direct impact on yield metrics, and is therefore a clear example of the direction of DFM research.
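
Clock skew scheduling reduces to a system of difference constraints: for a target slack s, each register-to-register path contributes a setup and a hold constraint on the two clock arrival times, and s is achievable iff the constraint graph has no negative cycle. The sketch below binary-searches the largest achievable uniform slack; the delays, period, and zero hold time are illustrative assumptions, not the article's formulation.

```python
def feasible(n, constraints):
    """Bellman-Ford feasibility check for a difference-constraint system.
    constraints: list of (i, j, w) meaning skew_j - skew_i <= w."""
    dist = [0.0] * n                       # implicit source to every node
    for _ in range(n + 1):
        changed = False
        for i, j, w in constraints:
            if dist[i] + w < dist[j] - 1e-9:
                dist[j] = dist[i] + w
                changed = True
        if not changed:
            return True                    # converged: system is feasible
    return False                           # negative cycle: infeasible

def max_slack(n, paths, period):
    """Binary-search the largest uniform slack the skews can realize.
    paths: list of (i, j, d_min, d_max) register-to-register paths."""
    lo, hi = 0.0, period
    for _ in range(40):
        s = (lo + hi) / 2
        # Setup: skew_i + d_max + s <= skew_j + period; hold: assumes 0.
        cons = [(j, i, period - dmax - s) for i, j, _, dmax in paths]
        cons += [(i, j, dmin) for i, j, dmin, _ in paths]
        lo, hi = (s, hi) if feasible(n, cons) else (lo, s)
    return lo

paths = [(0, 1, 7.0, 9.0), (1, 0, 2.0, 4.0)]   # 0->1 slow, 1->0 fast
print(f"max achievable slack: {max_slack(2, paths, period=8.0):.2f}")  # 1.50
```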

11 citations


Proceedings Article•DOI•
23 May 2005
TL;DR: A graph theoretic model of pipelined processors is proposed and a systematic approach for delay fault testing of such processor cores using the processor instruction set is developed.
Abstract: Although nearly all modern processors use a pipelined architecture, no method has yet been proposed in the literature to model these for the purpose of test generation. The paper proposes a graph theoretic model of pipelined processors and develops a systematic approach for delay fault testing of such processor cores using the processor instruction set. Our methodology consists of using a graph model of the pipelined processor, extraction of architectural constraints, classification of paths, and generation of tests using a constrained ATPG. These tests are then converted to a test program, a sequence of instructions, for testing the processor. Thus, the tests generated by our method can be applied in a functional mode of operation and can also be used for self-test. We applied our method to two example processors, namely a 16-bit five-stage VPRO pipelined processor and a 32-bit pipelined DLX processor, to demonstrate the effectiveness of our methodology.
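
A toy version of the graph model: pipeline stages are nodes, stage-to-stage paths are edges, and each edge is labeled with the instructions that exercise it, so a structural path is a functional test target only if some instruction activates all of its edges. The stage names and instruction labels below are illustrative, not taken from the VPRO or DLX case studies.

```python
# Edge -> set of instructions that activate it (illustrative five-stage pipe).
activating = {
    ("IF", "ID"): {"ADD", "LW", "BEQ"},
    ("ID", "EX"): {"ADD", "LW", "BEQ"},
    ("EX", "MEM"): {"LW"},                 # only memory ops use this edge
    ("EX", "WB"): {"ADD"},                 # ALU ops bypass MEM
    ("MEM", "WB"): {"LW"},
}

def classify(path):
    """Return the instructions that activate every edge of the path,
    i.e. the functional constraints a test program must satisfy."""
    insns = None
    for edge in zip(path, path[1:]):
        labels = activating.get(edge, set())
        insns = labels if insns is None else insns & labels
    return insns or set()

for path in (["IF", "ID", "EX", "MEM", "WB"], ["IF", "ID", "EX", "WB"]):
    insns = classify(path)
    status = f"testable by {sorted(insns)}" if insns else "untestable"
    print(" -> ".join(path), ":", status)
# A path mixing memory-only and ALU-only edges would come out untestable.
```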

11 citations


Proceedings Article•DOI•
03 Jan 2005
TL;DR: The proposed algorithm extends existing fundamental principles of logic event-driven simulation to crosstalk delay fault excitation, injection, and verification and is capable of handling multiple-aggressor/single-victim faults in an efficient manner.
Abstract: A conventional approach to the simulation of crosstalk-induced delay faults is commonly centered around an electrical-level circuit simulation. While yielding high accuracy, the process is time-consuming and may no longer be feasible for modern, high-density VLSI circuits. To address this issue, we propose and develop a novel approach for gate-level simulation of crosstalk delay faults caused by coupling between aggressor and victim signal lines. Our algorithm extends existing fundamental principles of logic event-driven simulation to crosstalk delay fault excitation, injection, and verification. In addition, the simulator is capable of handling multiple-aggressor/single-victim faults in an efficient manner.
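
The sketch below illustrates the event-driven extension: when a victim transition falls inside the coupling window of an opposite-direction aggressor edge, the victim's event is re-injected with an extra delay. The window width, delta delay, and the two-line "netlist" are illustrative assumptions, not the simulator's actual parameters.

```python
import heapq

COUPLING = {"victim": "aggressor"}   # victim line -> its aggressor line
WINDOW, DELTA = 2.0, 1.5             # coupling window width, extra delay

def simulate(events):
    """events: (time, line, new_value) transitions; returns settled edges."""
    queue = [(t, line, v, False) for t, line, v in events]
    heapq.heapify(queue)
    last = {}                        # line -> (time, rising?) of last edge
    settled = []
    while queue:
        t, line, val, delayed = heapq.heappop(queue)
        agg = COUPLING.get(line)
        if agg in last and not delayed:
            t_a, rising_a = last[agg]
            if abs(t - t_a) <= WINDOW and rising_a != bool(val):
                # Opposing aggressor edge inside the coupling window:
                # re-inject the victim edge with the crosstalk delay.
                heapq.heappush(queue, (t + DELTA, line, val, True))
                continue
        last[line] = (t, bool(val))
        settled.append((t, line, val))
    return settled

# Aggressor falls at t=5.0; victim tries to rise at t=6.0 -> delayed.
print(simulate([(5.0, "aggressor", 0), (6.0, "victim", 1)]))
# -> [(5.0, 'aggressor', 0), (7.5, 'victim', 1)]
```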

8 citations


Journal Article•DOI•
TL;DR: The proposed approach uses a graph theoretic model (represented as an Instruction Execution Graph) of the datapath and a finite state machine model of the controller to eliminate functionally untestable paths at an early stage, without examining circuit details, and to extract constraints for the paths that can be tested.
Abstract: This paper proposes an efficient methodology for delay fault testing of processor cores using their instruction sets. The resulting test vectors can be applied in the functional mode of operation; hence, self-testing of the processor core becomes possible for path delay fault testing. The proposed approach uses a graph theoretic model (represented as an Instruction Execution Graph) of the datapath and a finite state machine model of the controller to eliminate functionally untestable paths at an early stage, without examining circuit details, and to extract constraints for the paths that can potentially be tested. Parwan and DLX processors are used to demonstrate the effectiveness of our method.

7 citations



Proceedings Article•DOI•
18 Dec 2005
TL;DR: This paper studies the simultaneous solution of all three problems of serial scan by making use of the progressive random access scan test architecture and develops a test generation methodology which reduces test application time by nearly 75%, test data volume by 50%, and test power by nearly 99% compared to serial scan.
Abstract: Three issues that are dominating test research today are test application time, test data volume and test power. Researchers have focused on these issues mostly considering the popular serial scan architecture for its relatively low hardware overhead, while ignoring the fact that the exponential drop in hardware cost offers opportunities for implementing a test architecture that previously may have been unacceptable. This paper takes such a paradigm shift into account and studies the simultaneous solution of all three problems of serial scan by making use of the progressive random access scan test architecture. This architecture increases the hardware cost only marginally while providing marked improvements on the three issues. This paper explains the test architecture and then develops a test generation methodology which reduces the test application time by nearly 75% and the test data volume by 50% for the benchmark circuits. Above all, the architecture is inherently so efficient that it reduces test power consumption by nearly 99% or more compared to serial scan.

Journal Article•DOI•
TL;DR: This work creates a functionally equivalent "balanced" ATPG model of the circuit in which all reconverging paths have the same sequential depth and presents a generalized method to model any given multiple stuck-at fault as a single stuck- at fault.
Abstract: It is known that the complexity of automatic test pattern generation (ATPG) for acyclic sequential circuits is similar to that of combinational ATPG. The general problem, however, requires time-frame expansion and multiple-fault detection and hence does not allow the use of available combinational ATPG programs. The first contribution of this work is a combinational single-fault ATPG method for the most general class of acyclic sequential circuits. Without inserting any real hardware, we create a functionally equivalent "balanced" ATPG model of the circuit in which all reconverging paths have the same sequential depth. Some primary inputs and gates are duplicated in this model, which is converted into a combinational circuit by shorting all flip-flops. A test vector obtained by a combinational ATPG program for a fault in this combinational circuit is transformed into a test sequence to detect a corresponding fault in the original sequential circuit. A combinational ATPG program finds tests for all but a small set of faults that must be explicitly detected as multiple faults. Those are modeled for ATPG using the second contribution of this work, which is a generalized method to model any given multiple stuck-at fault as a single stuck-at fault. The procedure requires insertion of at most n+3 modeling gates for a fault of multiplicity n. We show that the modeled circuit is functionally equivalent to the original circuit and the targeted multiple fault is equivalent to the modeled single stuck-at fault. Benchmark results show at least an order of magnitude saving in the ATPG CPU time by the new combinational method over sequential ATPG.
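
One step of the flow can be sketched directly: in the balanced model each (possibly duplicated) primary input sits at a known sequential depth d, so a combinational test vector is expanded into a test sequence by applying each bit d clock cycles before the observation cycle. The naming convention for duplicated inputs below is an illustrative assumption.

```python
def to_sequence(vector, depth):
    """vector: model input name -> bit; depth: model input -> sequential depth.
    Returns a per-cycle list of {original input: bit} assignments."""
    max_d = max(depth.values())
    frames = [dict() for _ in range(max_d + 1)]
    for name, bit in vector.items():
        base = name.split("@")[0]        # 'a@1' is a duplicated copy of 'a'
        frames[max_d - depth[name]][base] = bit
    return frames

# Input 'a' feeds logic at depths 0 and 1, so the model duplicated it.
vector = {"a@0": 1, "a@1": 0, "b@0": 1}
depth = {"a@0": 0, "a@1": 1, "b@0": 0}
for t, frame in enumerate(to_sequence(vector, depth)):
    print(f"cycle {t}: {frame}")
# cycle 0 applies a=0 (the depth-1 copy); cycle 1 applies a=1, b=1.
```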

Journal Article•DOI•
TL;DR: Fault diagnosis based on the X-fault model can improve the accuracy of failure analysis for a wide range of physical defects in complex and deep submicron integrated circuits.
Abstract: A new fault model, called the X-fault model, is proposed for fault diagnosis of physical defects with unknown behaviors by using X symbols. An efficient X-fault simulation method and an efficient X-fault diagnostic reasoning method are presented. Fault diagnosis based on the X-fault model can improve the accuracy of failure analysis for a wide range of physical defects in complex and deep submicron integrated circuits.
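
The underlying idea can be sketched with three-valued simulation: the behavior of a defect at a candidate site is left unknown (X), the X is propagated, and the candidate remains plausible only if the X can reach every output that failed on the tester. The netlist encoding and the matching rule below are illustrative, not the paper's diagnostic reasoning procedure.

```python
X = "X"

def sim3(netlist, values):
    """Three-valued (0/1/X) simulation of a topologically ordered netlist."""
    vals = dict(values)
    for name, op, fanins in netlist:
        ins = [vals[f] for f in fanins]
        if op == "AND":
            vals[name] = 0 if 0 in ins else (X if X in ins else 1)
        elif op == "OR":
            vals[name] = 1 if 1 in ins else (X if X in ins else 0)
        elif op == "NOT":
            vals[name] = X if ins[0] == X else 1 - ins[0]
    return vals

def plausible(netlist, inputs, site, failing):
    """Inject X at the candidate fault site (its driving gate is bypassed)
    and check that the X can reach every output that failed on the tester."""
    net = [(n, op, f) for n, op, f in netlist if n != site]
    vals = sim3(net, {**inputs, site: X})
    return all(vals[o] == X for o in failing)

# c = AND(a, b); d = OR(c, b). Output d failed on the tester.
netlist = [("c", "AND", ["a", "b"]), ("d", "OR", ["c", "b"])]
print(plausible(netlist, {"a": 1, "b": 0}, "c", ["d"]))   # True: X reaches d
print(plausible(netlist, {"a": 1, "b": 1}, "c", ["d"]))   # False: d = 1 anyway
```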

Proceedings Article•DOI•
18 Dec 2005
TL;DR: A design of space compactors that can be used in pass/fail mode as well as in diagnostic mode with enhanced performance by trading off compaction ratio for diagnostic ability is introduced.
Abstract: Testing of VLSI circuits is challenged by the increasing volume of test data, which adds constraints on tester memory and substantially impacts test application time. Space compactors are commonly used to reduce the test volume by one or two orders of magnitude. However, such a level of compaction reduces the quality of fault diagnosis because it is difficult to identify the locations of errors in the compacted response. In this paper, we introduce a design of space compactors that can be used in pass/fail mode as well as in a diagnostic mode with enhanced performance by trading off compaction ratio for diagnostic ability. We analyze the properties of the compactors and evaluate their performance through simulations.
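
The compaction-versus-diagnosis trade-off can be illustrated with a generic parity compactor (not the design proposed in the paper): pass/fail mode XORs all scan-out bits into a single signature bit, while diagnostic mode emits one parity bit per group of chains so an error can be localized to a group.

```python
from functools import reduce

def compact(scan_bits, groups=1):
    """scan_bits: per-chain response bits; groups: number of parity outputs.
    More groups = less compaction but finer error localization."""
    size = -(-len(scan_bits) // groups)            # ceiling division
    return [reduce(lambda a, b: a ^ b, scan_bits[i:i + size], 0)
            for i in range(0, len(scan_bits), size)]

good = [1, 0, 1, 1, 0, 0, 1, 0]
bad = [1, 0, 1, 0, 0, 0, 1, 0]        # single error in chain 3

print(compact(good), compact(bad))                 # pass/fail: 1 bit each
print(compact(good, 4), compact(bad, 4))           # diagnostic: 4 bits
# The mismatching parity group narrows the error down to chains 2-3.
```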

Proceedings Article•DOI•
03 Jan 2005
TL;DR: A new design flow is proposed that combines false-path-aware gate sizing and statistical-timing-driven clock scheduling algorithms to maximize timing yield and achieves significant timing yield improvements.
Abstract: Timing margin (slack) needs to be carefully managed to ensure a satisfactory timing yield. We propose a new design flow that combines a false-path-aware gate sizing algorithm and a statistical-timing-driven clock scheduling algorithm to maximize timing yield. Our gate sizing algorithm preserves the true path lengths that may otherwise be altered by traditional gate sizing algorithms due to the presence of false paths. The slack is then distributed to each path according to its path delay uncertainty to maximize the timing yield. Experimental results show that our flow achieves significant timing yield improvements (> 20%) over a traditional flow for a subset of the benchmark circuits with little or negligible area penalty.
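
The slack-distribution step can be illustrated under an independent-Gaussian delay model: given per-path delay mean and sigma, the estimated yield, i.e. the probability that every path meets its deadline, steers how a fixed slack budget is split. The sweep below is an illustrative stand-in for the paper's uncertainty-driven allocation rule, and the path statistics are assumptions.

```python
import math

def timing_yield(means, sigmas, slacks, deadline):
    """P(every path meets deadline + its slack share), Gaussian delays."""
    phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return math.prod(phi((deadline + s - m) / sg)
                     for m, sg, s in zip(means, sigmas, slacks))

means, sigmas = [8.0, 8.0], [0.2, 0.8]    # equal mean, unequal uncertainty
deadline, budget = 8.0, 1.0               # shared deadline, slack budget

best = max(((budget * a / 100, budget * (100 - a) / 100) for a in range(101)),
           key=lambda sl: timing_yield(means, sigmas, sl, deadline))
print(f"best split {best[0]:.2f}/{best[1]:.2f}: "
      f"yield {timing_yield(means, sigmas, best, deadline):.4f}")
print(f"equal split 0.50/0.50: "
      f"yield {timing_yield(means, sigmas, (0.5, 0.5), deadline):.4f}")
# The high-variance path receives the larger share of the slack budget.
```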

Journal Article•DOI•
TL;DR: A novel approach to improving the IDDQ-based diagnosability of a CMOS circuit is presented by dividing the circuit into independent partitions and using a separate power supply for each partition.
Abstract: This paper presents a novel approach to improving the IDDQ-based diagnosability of a CMOS circuit by dividing the circuit into independent partitions and using a separate power supply for each partition. This technique makes it possible to implement multiple IDDQ measurement points, resulting in improved IDDQ-based diagnosability. The paper formalizes the problem of partitioning a circuit for this purpose and proposes optimal and heuristic-based solutions. The effectiveness of the proposed approach is demonstrated through experimental results.
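
The diagnosability gain is easy to sketch: with one supply, an elevated IDDQ only says that some gate is defective, whereas with a supply per partition the defect is narrowed to the partition whose quiescent current is elevated. The partition sizes and current values below are illustrative assumptions, not the paper's partitioning algorithm.

```python
def diagnose(currents, threshold=1.0):
    """currents: partition name -> measured IDDQ (uA, illustrative units)."""
    return [p for p, i in currents.items() if i > threshold]

gates = [f"g{i}" for i in range(12)]
partitions = {f"P{k}": gates[k * 4:(k + 1) * 4] for k in range(3)}

# A defect in g6 raises only partition P1's quiescent current.
measured = {"P0": 0.2, "P1": 8.7, "P2": 0.3}
suspects = diagnose(measured)
print("elevated partitions:", suspects)
print("candidate gates:", [g for p in suspects for g in partitions[p]])
# One global supply would implicate all 12 gates; three supplies narrow
# the candidates to the 4 gates of P1.
```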
