
Showing papers by "Kewal K. Saluja published in 2008"


Proceedings ArticleDOI
04 Jan 2008
TL;DR: The proposed logic-level simulation methodology is validated by showing that it is accurate within 1% of the reference model, and the overall delay degradation of large digital circuits due to NBTI is relatively small.
Abstract: Negative bias temperature instability (NBTI) has been identified as a major and critical reliability issue for PMOS devices in nano-scale designs. It manifests as a negative threshold voltage shift, thereby degrading the performance of the PMOS devices over the lifetime of a circuit. In order to determine the quantitative impact of this phenomenon, an accurate and tractable model is needed. In this paper we explore a novel and practical methodology for modeling NBTI degradation at the logic level for digital circuits. Its major contributions include i) a SPICE-level simulation to identify stress on PMOS devices under varying input conditions for various gate types and ii) a gate-level simulation methodology that is scalable and accurate for determining stress on large circuits. We validate the proposed logic-level simulation methodology by showing that it is accurate within 1% of the reference model. Contrary to many other papers in this area, our experimental results show that the overall delay degradation of large digital circuits due to NBTI is relatively small.
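To make the two-step flow concrete, the sketch below shows one way such a gate-level estimate could be assembled: per-gate stress probabilities (the fraction of time the PMOS devices are under negative bias, obtained from logic simulation) feed a power-law threshold-shift model, which in turn scales each gate delay. The coefficient, exponent, and alpha-power delay sensitivity are illustrative placeholders, not the paper's fitted parameters.

```python
# Hypothetical sketch of gate-level NBTI delay estimation. stress_prob is
# the fraction of time a gate's PMOS devices see negative bias (from logic
# simulation); k, n, and the alpha-power exponent are illustrative only.

def delta_vth(stress_prob, years, k=1.2e-3, n=1.0 / 6.0):
    """Long-term NBTI threshold-voltage shift (V) under partial stress."""
    seconds = years * 365 * 24 * 3600
    return k * (stress_prob * seconds) ** n

def degraded_delay(nominal_delay, stress_prob, years, vdd=1.0, vth0=0.3, a=1.5):
    """Scale a gate delay by an alpha-power-law sensitivity to Vth."""
    dv = delta_vth(stress_prob, years)
    return nominal_delay * ((vdd - vth0) / (vdd - vth0 - dv)) ** a

# Critical-path delay after 10 years = sum of degraded gate delays.
path = [(12.0, 0.7), (9.0, 0.4), (15.0, 0.9)]   # (delay in ps, stress_prob)
total = sum(degraded_delay(d, s, years=10) for d, s in path)
print(f"10-year path delay: {total:.1f} ps vs nominal {sum(d for d, _ in path):.1f} ps")
```

With these placeholder parameters the path slows by only a few percent over ten years, consistent with the paper's observation that the overall degradation is relatively small.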

34 citations


Proceedings ArticleDOI
25 May 2008
TL;DR: This paper proposes a novel and practical capture-safe test generation scheme, featuring reliable capture-safety checking and effective capture-safety improvement by combining X-bit identification & X-filling with low launch-switching-activity test generation.
Abstract: Capture-safety, defined as the avoidance of any timing error due to unduly high launch switching activity in capture mode during at-speed scan testing, is critical for avoiding test-induced yield loss. Although point techniques are available for reducing capture IR-drop, there is a lack of complete capture-safe test generation flows. The paper addresses this problem by proposing a novel and practical capture-safe test generation scheme, featuring (1) reliable capture-safety checking and (2) effective capture-safety improvement by combining X-bit identification & X-filling with low launch-switching-activity test generation. This scheme is compatible with existing ATPG flows, and achieves capture-safety with no changes in the circuit-under-test or the clocking scheme.
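The core X-filling idea can be illustrated with a toy: bits left as don't-cares (X) in a test cube are assigned so that they cause no launch transitions, and the resulting vector is checked against a switching-activity threshold. This is only a stylized sketch; a real flow identifies X-bits inside ATPG and re-simulates the capture response after filling.

```python
# Illustrative sketch (not the paper's exact algorithm): fill X-bits in a
# test cube to suppress launch transitions, then apply a capture-safety
# check based on a toggle-count threshold.

def fill_x_lowpower(cube, ref_state):
    """Fill each 'X' with the reference bit so it causes no transition."""
    return "".join(r if c == "X" else c for c, r in zip(cube, ref_state))

def switching_activity(before, after):
    """Number of flip-flops that toggle between two circuit states."""
    return sum(b != a for b, a in zip(before, after))

def capture_safe(before, after, threshold):
    return switching_activity(before, after) <= threshold

cube = "1XX0X1"
ref = "110011"                    # e.g. the expected post-launch state
vec = fill_x_lowpower(cube, ref)
print(vec, switching_activity(vec, ref), capture_safe(vec, ref, threshold=2))
```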

32 citations


Proceedings ArticleDOI
24 Jun 2008
TL;DR: This work presents a heuristic to selectively apply temporal redundancy to flip-flops within a pipelined logic unit, achieving significant reductions in soft-error-induced failures with minimal overhead.
Abstract: The combination of continued technology scaling and increased on-chip transistor densities has made vulnerability to radiation-induced soft errors a significant design concern. In particular, the effects of these errors on logic nodes are predicted to play an increasingly large role in determining the overall failure rate of future VLSI chips. While a myriad of techniques have been proposed to mitigate the effects of soft errors, system designers must ensure that the application of these solutions does not come at the expense of other design goals. This work presents a heuristic to selectively apply temporal redundancy to flip-flops within a pipelined logic unit, achieving significant reductions in soft-error-induced failures with minimal overhead.
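A selective-hardening heuristic of this kind typically ranks flip-flops by how much failure-rate reduction each buys per unit of overhead and protects the best candidates until a budget is exhausted. The greedy sketch below illustrates that idea, assuming per-flip-flop soft-error-rate (SER) contributions and hardening costs are known; the numbers and the ratio-based ranking are hypothetical, not the paper's heuristic.

```python
# Greedy selective hardening: protect flip-flops in order of SER
# reduction per unit cost until the overhead budget is spent.
# SER shares and costs below are invented for illustration.

def select_flipflops(flops, budget):
    chosen, spent = [], 0.0
    for name, ser, cost in sorted(flops, key=lambda f: f[1] / f[2], reverse=True):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen, spent

flops = [("ff_a", 0.40, 1.0), ("ff_b", 0.35, 2.0),
         ("ff_c", 0.15, 0.5), ("ff_d", 0.10, 0.5)]   # (name, SER share, cost)
protected, cost = select_flipflops(flops, budget=2.0)
print(protected, cost)            # ['ff_a', 'ff_c', 'ff_d'] at cost 2.0
```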

16 citations


Proceedings ArticleDOI
19 May 2008
TL;DR: A blind calibration scheme tailored for sensor networks with mobile nodes; it exploits the fact that the sensor devices move in the same region and hence the signal statistics they observe over time are almost the same.
Abstract: In-field calibration of sensor devices is known to be a challenging problem because there is often no access to a controlled signal field and/or a pre-calibrated device to measure the existing signal field. In this paper, we describe a blind calibration scheme that is tailored for sensor networks with mobile nodes. The scheme proposed in this paper exploits the fact that sensor devices are moving in the same region and hence the signal statistics they observe over time are almost the same. Analysis and simulation results are included to demonstrate the effectiveness of the proposed scheme.
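For a linear sensor response, the shared-statistics assumption leads directly to a simple moment-matching correction: rescale each node's readings so its sample mean and standard deviation agree with a common reference. The sketch below uses the fleet averages as that reference; the linear model and all names are illustrative assumptions, not the paper's derivation.

```python
# Minimal sketch of statistics matching for blind calibration, assuming
# each node has a linear response y = gain * x + offset and all nodes
# observe (nearly) the same signal statistics over time.
import statistics

def blind_calibrate(readings_by_node):
    """Return, per node, a function mapping raw readings to the fleet scale."""
    means = {n: statistics.mean(r) for n, r in readings_by_node.items()}
    stds = {n: statistics.stdev(r) for n, r in readings_by_node.items()}
    ref_mean = statistics.mean(means.values())
    ref_std = statistics.mean(stds.values())
    return {n: (lambda y, m=means[n], s=stds[n]:
                (y - m) * ref_std / s + ref_mean)
            for n in readings_by_node}

# n2's raw readings are 2 * x + 0.5 of the same field n1 reads as x.
nodes = {"n1": [1.0, 2.0, 3.0, 4.0], "n2": [2.5, 4.5, 6.5, 8.5]}
cal = blind_calibrate(nodes)
print(cal["n1"](2.0), cal["n2"](4.5))   # same physical value -> same output
```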

15 citations


Proceedings ArticleDOI
08 Nov 2008
TL;DR: A novel technique called a duplication cache is proposed to reduce the overhead of memory duplication in CMP-based high availability systems and shows that for a range of benchmarks memory duplication can be reduced by 60-90% with performance degradation ranging from 1-12%.
Abstract: High availability systems typically rely on redundant components and functionality to achieve fault detection, isolation and failover. In the future, increases in error rates will make high availability important even in the commodity and volume market. Systems will be built out of chip multiprocessors (CMPs) with multiple identical components that can be configured to provide redundancy for high availability. However, the 100% overhead of making all components redundant is going to be unacceptable for the commodity market, especially when not all applications require high availability. In particular, duplicating the entire memory as current high availability systems (e.g. NonStop and Stratus) do is particularly problematic given that system costs are going to be dominated by the cost of memory. In this paper, we propose a novel technique called a duplication cache to reduce the overhead of memory duplication in CMP-based high availability systems. A duplication cache is a reserved area of main memory that holds copies of pages belonging to the current write working set (set of actively modified pages) of running processes. All other pages are marked as read-only and are kept only as a single, shared copy. The size of the duplication cache can be configured dynamically at runtime and allows system designers to trade off the cost of memory duplication with minor performance overhead. We extensively analyze the effectiveness of our duplication cache technique and show that for a range of benchmarks memory duplication can be reduced by 60-90% with performance degradation ranging from 1-12%. On average, a duplication cache can reduce memory duplication by 60% for a performance overhead of 4% and by 90% for a performance overhead of 5%.
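The bookkeeping behind such a scheme resembles an LRU cache keyed by page: a write fault on a shared read-only page allocates a duplicate, evicting (and re-sharing) the least recently written page when the reserved area is full. The sketch below is conceptual only; page contents, comparison/writeback on eviction, and the actual MMU interaction are elided, and the names are illustrative.

```python
# Conceptual sketch of duplication-cache bookkeeping with a simple LRU
# policy over the write working set. Not the paper's implementation.
from collections import OrderedDict

class DuplicationCache:
    def __init__(self, capacity_pages):
        self.capacity = capacity_pages
        self.dup = OrderedDict()           # page -> duplicate copy (LRU order)

    def on_write_fault(self, page):
        """Called when a process writes a page held as a single shared copy."""
        if page in self.dup:
            self.dup.move_to_end(page)     # already duplicated; refresh LRU
            return
        if len(self.dup) >= self.capacity:
            evicted, _ = self.dup.popitem(last=False)
            self.mark_read_only(evicted)   # drops back to one shared copy
        self.dup[page] = f"copy-of-{page}" # allocate the redundant copy

    def mark_read_only(self, page):
        print(f"page {page}: merged and re-marked read-only")

dc = DuplicationCache(capacity_pages=2)
for p in ["A", "B", "A", "C"]:             # C evicts B (least recently written)
    dc.on_write_fault(p)
print(list(dc.dup))                        # ['A', 'C']
```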

13 citations


Proceedings ArticleDOI
24 Nov 2008
TL;DR: This paper presents test generation methods for stuck-open faults using stuck-at test vectors and stuck-at test generation tools, and considers two types of test application mechanisms, namely launch-on-capture test and enhanced scan test.
Abstract: Defects in modern LSIs manufactured with deep-submicron technologies are known to cause complex faulty behavior, and testing that targets only stuck-at or bridging faults is no longer sufficient; increasing defect coverage is ever more important. The stuck-open fault model considers transistor-level defects, many of which are not covered by the stuck-at fault model. Further, test vectors for stuck-open faults can also detect the defects modeled by delay faults. This paper presents test generation methods for stuck-open faults using stuck-at test vectors and stuck-at test generation tools. The resultant test vectors achieve high coverage of stuck-open faults while maintaining the original stuck-at fault coverage, thus offering potentially better defect coverage. We consider two types of test application mechanisms, namely launch-on-capture test and enhanced scan test. The effectiveness of the proposed methods is established by experimental results for benchmark circuits.
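The property being exploited is that a stuck-open fault needs a two-pattern test, an initialization vector followed by a test vector, so the coverage extracted from a fixed stuck-at test set depends on the order in which its vectors are applied. The toy below makes that dependence visible; `detects` is a stand-in for a transistor-level fault simulator, and the exhaustive search is workable only at this scale.

```python
# Toy: stuck-open coverage of a stuck-at test set depends on vector order,
# because each fault needs a specific (initialization, test) vector pair.
from itertools import permutations

def detects(v1, v2, fault):
    init, test = fault                 # hypothetical (init, test) pair
    return v1 == init and v2 == test

def coverage(order, faults):
    pairs = list(zip(order, order[1:]))
    return sum(any(detects(a, b, f) for a, b in pairs) for f in faults)

def best_order(vectors, faults):
    return max(permutations(vectors), key=lambda o: coverage(o, faults))

vecs = ["00", "01", "11"]
faults = [("01", "11"), ("11", "01")]  # two faults with conflicting pair needs
order = best_order(vecs, faults)
print(order, coverage(order, faults)) # only 1 of 2 detectable in one pass
```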

11 citations


Proceedings ArticleDOI
16 Jun 2008
TL;DR: This paper describes two blind calibration schemes for nonlinear mobile sensor nodes, nullspace based calibration (NBC) and moments based calibration (MBC), and results show that significant error reduction can be achieved when nonlinearity is considered.
Abstract: In-field calibration of sensor devices is known to be a challenging problem because there is often no access to a controlled signal field and/or a pre-calibrated device to provide the ground truth. Nonlinear characteristics of sensor devices make the calibration problem even harder. In this paper, we describe two blind calibration schemes for nonlinear mobile sensor nodes: nullspace based calibration (NBC) and moments based calibration (MBC). Simulation results are included to demonstrate the effectiveness of the proposed schemes. The MBC scheme is also used to calibrate light sensors on MICA2 motes in a light field generated by a light bulb. Results show that significant error reduction can be achieved when nonlinearity is considered.
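In the moments-based spirit, a node with a nonlinear response can be corrected by choosing inverse-map coefficients so that the corrected readings reproduce reference moments of the signal. The sketch below uses a quadratic inverse model and a coarse brute-force fit over three moments; the paper's MBC derives its estimates analytically, so the model order, grid, and coefficients here are all illustrative assumptions.

```python
# Hedged sketch of moment matching: fit an inverse map so corrected
# readings reproduce the (assumed-known) reference moments of the field.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 2000)            # signal common to all nodes
y = 0.8 * x + 0.3 * x**2 + 0.05            # one node's nonlinear response

ref = [x.mean(), (x**2).mean(), (x**3).mean()]

def moment_error(c):
    c0, c1, c2 = c
    xh = c0 + c1 * y + c2 * y**2           # candidate inverse map
    return sum((m - r) ** 2 for m, r in
               zip([xh.mean(), (xh**2).mean(), (xh**3).mean()], ref))

grid = np.linspace(-1.0, 2.0, 21)          # coarse 3-D grid search
best = min(((a, b, c) for a in grid for b in grid for c in grid),
           key=moment_error)
print(best, moment_error(best))            # coefficients minimizing the gap
```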

10 citations


Proceedings ArticleDOI
01 Nov 2008
TL;DR: A cost-efficient camera sensor deployment strategy based on random deployment of sensors to monitor and/or surveil a region of interest is investigated, and it is demonstrated that this strategy leads to near-optimal numbers of sensors and deployments and hence a near-optimal overall cost of deployment.
Abstract: Availability of low cost low power camera sensors is likely to make possible applications that may otherwise have been unimaginable. In this paper we investigate a cost-efficient camera sensor deployment strategy based on random deployment of sensors to monitor and/or surveil a region of interest. We assume that there are costs associated with the sensors as well as with the deployments, and our goal is to minimize the total cost while satisfying the desired coverage requirement. We develop analytical methods to derive the expected coverage of a single sensor as well as the joint coverage for a given number of homogeneous and heterogeneous camera sensors. Following this we propose an adaptive sensor deployment strategy based on our analytical method. We then evaluate the expected cost of our deployment strategy by deriving expressions for the number of deployments and the number of sensors deployed during each deployment as a function of the probability distributions of joint coverage by sensors. Finally, we carry out simulation studies to validate the analytical results and to demonstrate that our strategy leads to near-optimal numbers of sensors and deployments and hence the overall cost of deployment.
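As a back-of-the-envelope illustration of the sensors-versus-deployments trade-off: if each randomly placed camera independently covers a fraction p of the region, then n sensors cover 1 - (1 - p)^n in expectation, and one can deploy in rounds until a coverage target is met. The batch-sizing rule and the per-sensor/per-deployment costs below are invented for illustration and are not the paper's expressions.

```python
# Adaptive random deployment under the independent-coverage assumption:
# n sensors leave an expected uncovered fraction of (1 - p)^n.
import math

def sensors_needed(p, current_uncovered, target_uncovered):
    """Expected extra sensors to shrink the uncovered fraction."""
    return math.ceil(math.log(target_uncovered / current_uncovered)
                     / math.log(1.0 - p))

def adaptive_deploy(p, target_coverage, batch_shrink=0.5,
                    sensor_cost=1.0, deploy_cost=20.0):
    uncovered, rounds, sensors = 1.0, 0, 0
    while uncovered > 1.0 - target_coverage:
        goal = max(uncovered * batch_shrink, 1.0 - target_coverage)
        n = sensors_needed(p, uncovered, goal)
        uncovered *= (1.0 - p) ** n        # expected residual after the batch
        rounds, sensors = rounds + 1, sensors + n
    return sensors, rounds, sensors * sensor_cost + rounds * deploy_cost

print(adaptive_deploy(p=0.05, target_coverage=0.95))  # (sensors, rounds, cost)
```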

9 citations


Journal ArticleDOI
TL;DR: A novel X-filling method, called double-capture (DC) X-filling, is proposed for generating test vectors with low and balanced capture switching activity for two captures; applicable to dynamic & static compaction in any ATPG system, it can reduce IR-drop, and thus yield loss, without any circuit/clock modification, timing/circuit overhead, fault coverage loss, or additional design effort.
Abstract: At-speed scan testing, based on ATPG and ATE, is indispensable to guarantee timing-related test quality in the DSM era. However, at-speed scan testing may incur yield loss due to excessive IR-drop caused by high test (shift & capture) switching activity. This paper discusses the mechanism of circuit malfunction due to IR-drop and summarizes general approaches to reducing switching activity, which highlights the problem with current solutions: they reduce switching activity for only one capture, while the widely used launch-off-capture scheme for at-speed scan testing uses two captures. This paper then proposes a novel X-filling method, called double-capture (DC) X-filling, for generating test vectors with low and balanced capture switching activity across the two captures. Applicable to dynamic & static compaction in any ATPG system, DC X-filling can reduce IR-drop, and thus yield loss, without any circuit/clock modification, timing/circuit overhead, fault coverage loss, or additional design effort.
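A stylized way to see the "balanced across two captures" objective: score each candidate X-assignment by the larger of the two capture toggle counts and keep the best. The stub `simulate` below (a one-bit rotation) merely stands in for real logic simulation of the capture responses, and the exhaustive enumeration is for illustration only.

```python
# Stylized double-capture X-filling: minimize the worse of the two
# capture toggle counts over all fills of the X-bits in a test cube.
from itertools import product

def toggles(a, b):
    return sum(x != y for x, y in zip(a, b))

def dc_xfill(cube, simulate):
    xs = [i for i, c in enumerate(cube) if c == "X"]
    best, best_score = None, None
    for bits in product("01", repeat=len(xs)):
        v = list(cube)
        for i, b in zip(xs, bits):
            v[i] = b
        v = "".join(v)
        s1 = simulate(v)                   # state after first capture
        s2 = simulate(s1)                  # state after second capture
        score = max(toggles(v, s1), toggles(s1, s2))
        if best_score is None or score < best_score:
            best, best_score = v, score
    return best, best_score

rotate = lambda s: s[-1] + s[:-1]          # stub circuit: rotate right by one
print(dc_xfill("1XX0", rotate))            # ('1000', 2)
```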

9 citations


Proceedings ArticleDOI
04 Jan 2008
TL;DR: This paper studies various defects in the insulating layers of a 1T flash cell and analyzes their impact on cell performance, and presents a test methodology and test algorithms that enable the detection of tunnel oxide defects in an efficient manner.
Abstract: Testing non-volatile memories for tunnel oxide defects is one of the most important aspects of guaranteeing cell reliability. A defective tunnel oxide layer in core memory cells can result in various disturb faults. In this paper, we study various defects in the insulating layers of a 1T flash cell and analyze their impact on cell performance. Further, we present a test methodology and test algorithms that enable the detection of tunnel oxide defects in an efficient manner.
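Disturb-fault testing generally follows a write/stress/verify pattern: program a background value, apply repeated stress (such as program or read pulses on neighboring cells), and check that no cell has drifted. The toy below only illustrates that loop shape; the cell model with a `leaky` flag is a hypothetical stand-in for a tunnel-oxide defect and is not the paper's algorithm.

```python
# Generic write/stress/verify loop for disturb testing (illustrative).

class FlashCell:
    def __init__(self, leaky=False):
        self.value, self.leaky = 0, leaky

    def program(self, v):
        self.value = v

    def stress(self):
        if self.leaky:                     # defective oxide: charge leaks
            self.value = 0

def disturb_test(cells, stress_cycles=1000):
    for c in cells:
        c.program(1)                       # background pattern
    for _ in range(stress_cycles):
        for c in cells:
            c.stress()                     # e.g. neighboring program pulses
    return [i for i, c in enumerate(cells) if c.value != 1]

cells = [FlashCell(), FlashCell(leaky=True), FlashCell()]
print("failing cells:", disturb_test(cells))   # -> [1]
```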

6 citations


Journal ArticleDOI
TL;DR: Two methods for maximizing stuck-open fault coverage using stuck-at test vectors are presented and the effectiveness of the proposed methods is established by experimental results for benchmark circuits.
Abstract: Physical defects that are not covered by the stuck-at or bridging fault models are increasing in LSI circuits designed and manufactured in modern Deep Sub-Micron (DSM) technologies. Therefore, it is necessary to target non-stuck-at and non-bridging faults. The stuck-open fault model is one such model that captures transistor-level defects. This paper presents two methods for maximizing stuck-open fault coverage using stuck-at test vectors. We assume that a test set to detect stuck-at faults is given, and we consider two formulations for maximizing stuck-open coverage using the given test set. The first problem is to form a test sequence by using each test vector multiple times, if needed, as long as the stuck-open coverage is increased; here the target is to make the resultant test sequence as short as possible under the constraint that the maximum stuck-open coverage achievable with the given test set is reached. The second problem is to form a test sequence by using each test vector exactly once, so the length of the test sequence equals the number of given test vectors. In both formulations the stuck-at fault coverage does not change. The effectiveness of the proposed methods is established by experimental results for benchmark circuits.
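For the first formulation, a natural greedy sketch is to repeatedly append whichever given vector, paired with the current last vector, detects the most still-undetected stuck-open faults. As before, `detects` is a placeholder for transistor-level fault simulation, and this greedy rule is an illustration rather than the paper's method; note how a vector may legitimately appear twice.

```python
# Greedy sketch for the repeat-allowed formulation: grow the sequence by
# the vector whose pairing with the current last vector detects the most
# new stuck-open faults. `detects` stands in for fault simulation.

def build_sequence(vectors, faults, detects):
    remaining = set(faults)
    seq = [vectors[0]]
    while remaining:
        gains = {v: {f for f in remaining if detects(seq[-1], v, f)}
                 for v in vectors}
        v, hit = max(gains.items(), key=lambda kv: len(kv[1]))
        if not hit:                        # no pair detects anything new
            break
        seq.append(v)
        remaining -= hit
    return seq, remaining

faults = [("01", "11"), ("11", "01")]      # (init, test) pairs
detects = lambda a, b, f: (a, b) == f
print(build_sequence(["01", "11", "00"], faults, detects))
# -> (['01', '11', '01'], set()): reusing '01' covers both faults
```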

Journal ArticleDOI
TL;DR: This paper determines the impact of various defects on cell performance, develops a methodology based on the channel erase technique to detect these defects, and proposes a very low-cost design-for-testability approach.

Journal ArticleDOI
TL;DR: This paper presents methods for detecting transistor short faults using logic-level fault simulation and test generation, and defines and assesses fault coverage and fault efficiency in three different ways: optimistic, pessimistic, and probabilistic.
Abstract: This paper presents methods for detecting transistor short faults using logic-level fault simulation and test generation. The paper considers two types of transistor-level faults, namely strong shorts and weak shorts, which were introduced in our previous research. These faults are defined based on the values of the outputs of faulty gates. The proposed fault simulation and test generation are performed using gate-level tools designed to deal with stuck-at faults, and no transistor-level tools are required. In the test generation process, a circuit is modified by inserting inverters, and a stuck-at test generator is used. This modification is not a design-for-testability technique, as the modified circuit is used only during the test generation process. Further, the generated test patterns are compacted by fault simulation. Also, since the weak short model involves uncertainty in its behavior, we define fault coverage and fault efficiency in three different ways, namely optimistic, pessimistic, and probabilistic, and assess them. Finally, experimental results for ISCAS benchmark circuits are used to demonstrate the effectiveness of the proposed methods.
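The three coverage notions can be made concrete by attaching to each fault the probability that the applied test set detects it: an optimistic count treats any nonzero probability as detected, a pessimistic count requires guaranteed detection, and a probabilistic measure sums the probabilities. The sketch below assumes such per-fault probabilities are available (strong shorts would sit at 1); the values are invented.

```python
# Three coverage measures over per-fault detection probabilities.

def coverages(fault_probs):
    """fault_probs: detection probability per fault after the whole
    test set is applied (0 = never detected, 1 = guaranteed)."""
    n = len(fault_probs)
    optimistic = sum(p > 0 for p in fault_probs) / n
    pessimistic = sum(p == 1 for p in fault_probs) / n
    probabilistic = sum(fault_probs) / n
    return optimistic, pessimistic, probabilistic

probs = [1.0, 1.0, 0.5, 0.0]               # two strong, one weak, one missed
print(coverages(probs))                    # (0.75, 0.5, 0.625)
```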