Journal ArticleDOI

Online VLSI Testing

01 Oct 1998-IEEE Design & Test of Computers (IEEE Computer Society Press)-Vol. 15, Iss: 4, pp 12-16
About: This article was published in IEEE Design & Test of Computers on 1998-10-01 and has received 11 citations to date. The article focuses on the topics: Redundancy (engineering) & Very-large-scale integration.
Citations
Journal ArticleDOI
TL;DR: An overview of the latest research in the domain of the self-replication of processing elements within a programmable logic substrate, a key prerequisite for achieving system-level fault tolerance in the Embryonics project's bio-inspired approach.
Abstract: The multicellular structure of biological organisms and the interpretation in each of their cells of a chemical program (the DNA string or genome) is the source of inspiration for the Embryonics (embryonic electronics) project, whose final objective is the design of highly robust integrated circuits, endowed with properties usually associated with the living world: self-repair and self-replication. In this article, we provide an overview of our latest research in the domain of the self-replication of processing elements within a programmable logic substrate, a key prerequisite for achieving system-level fault tolerance in our bio-inspired approach.

24 citations

Dissertation
15 May 2008
TL;DR: This dissertation proposes a soft error rate (SER) estimation method that predicts the effects of cosmic radiation and high-energy particle strikes in integrated circuit chips by building SER models.
Abstract: Nanometer CMOS VLSI circuits are highly sensitive to soft errors due to environmental causes such as cosmic radiation and high-energy particles. These errors are random and not related to permanent hardware faults; they arise in microelectronic circuits when high-energy particles strike sensitive regions of the silicon devices. Soft error rate (SER) estimation analytically predicts the effects of cosmic radiation and high-energy particle strikes in integrated circuit chips by building SER models. An accurate analysis requires simulation using the circuit netlist, device characteristics, manufacturing process and technology parameters, and measurement data on environmental radiation. Experimental SER testing is expensive, so analytical approaches are beneficial. We model neutron-induced soft errors using two parameters, namely, occurrence rate and intensity. Our new SER estimation analysis propagates occurrence rate and intensity as the width of single event transient (SET) pulses, expressed as a probability and a probability density function, respectively, through the circuit. We consider the entire linear energy transfer (LET) range of the background radiation, which is available from measurement data specific to the environment and device material. Soft error rates are calculated for ISCAS85 benchmark circuits in the standard unit, failure in time (FIT, i.e., failures in 10⁹ hours).
In comparison to SER analysis results reported in the literature, our method considers several more relevant factors, including sensitive regions and circuit technology, which may influence the SER. Our simulation results for ISCAS85 benchmark circuits show a similar trend to other reported work. For example, our soft error rate results for C432 and C499 in a ground-level environment are 1.18×10³ FIT and 1.41×10³ FIT, respectively. Although no measured data are available for logic circuits, SER for 0.25µm and 0.13µm 1M-bit SRAMs has been reported in the range 10⁴ to 10⁵ FIT, and for a 0.25µm 1G-bit SRAM around 4.2×10³ FIT. We also discuss the factors that may cause several orders of magnitude difference between our results and certain other logic analysis methods. The CPU …
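As a unit check on the FIT figures quoted above (failure in time: expected failures per 10⁹ device-hours), a small sketch of the conversion to mean time between soft errors:

```python
# FIT (failure in time) = expected failures per 10**9 device-hours.
def fit_to_mtbf_hours(fit):
    """Mean time between soft errors, in hours, for a given FIT rate."""
    return 1e9 / fit

# Figures quoted above for ground-level ISCAS85 results:
for name, fit in {"C432": 1.18e3, "C499": 1.41e3}.items():
    years = fit_to_mtbf_hours(fit) / (24 * 365)
    print(f"{name}: {fit:.0f} FIT -> one soft error every {years:.0f} years")
# C432 ≈ 97 years, C499 ≈ 81 years per device at these rates
```

The per-device interval looks long, but a system with thousands of such chips divides it proportionally, which is why FIT rates in the 10³-10⁵ range matter in practice.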

20 citations


Cites background from "Online VLSI Testing"

  • ...These include fault tolerant computing, Error Correcting Code (ECC) and parity, online-testing [66, 97, 99, 101, 137, 138] and redundancy [151, 163]....

    [...]

Proceedings ArticleDOI
01 Sep 2006
TL;DR: A non-concurrent on-line testing technique via scan chains that is characterized by high error coverage, moderate hardware overhead, and negligible time redundancy is presented.
Abstract: With operational faults becoming the dominant cause of failure modes in modern VLSI, widespread deployment of on-line test technology has become crucial. In this paper, we present a non-concurrent on-line testing technique via scan chains. We discuss the modifications needed in the design so that it can be tested on-line using our technique. We demonstrate our technique on a case study of a pipelined 8 x 8 multiply and accumulate unit. The case study shows that our technique is characterized by high error coverage, moderate hardware overhead, and negligible time redundancy.

10 citations


Cites background from "Online VLSI Testing"

  • ...Key benefits of on-line testing include low-latency fault detection and correction, fault effect localization, and fault tolerance [1]....

    [...]

Journal ArticleDOI
TL;DR: Two novel architectures of embedded self-testing checkers for low-cost and cyclic arithmetic codes are presented: one based on code word generators and adders, the other based on code word accumulators.
Abstract: Code checkers that monitor the outputs of a system can detect both permanent and transient faults. We present two novel architectures of embedded self-testing checkers for low-cost and cyclic arithmetic codes, one based on code word generators and adders, the other based on code word accumulators. In these schemes, the code checker receives all possible code words but one, irrespective of the number of different code words that are produced by the circuit under check (CUC). So any code checker can be employed that is self-testing for all or a particular subset of code words, and the structure of the code checker need not be tailored to the set of code words produced by the CUC. The proposed code word generators and accumulators are built from simple standard hardware structures, counters and end-around-carry adders. They can also be utilized in an off-line BIST environment as pattern generators and test response compactors.
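The end-around-carry adders mentioned in the abstract compute sums modulo 2ⁿ−1, which is the check base of low-cost arithmetic (residue) codes. A minimal sketch of that property under assumed parameters, as an illustration only and not the paper's checker design:

```python
# Illustrative only (assumed word width, not the paper's exact checker):
# an end-around-carry adder sums n-bit words modulo 2**n - 1, the check
# base of low-cost arithmetic codes.
N = 8
MOD = (1 << N) - 1  # 255

def eac_add(a, b, n=N):
    """One's-complement (end-around-carry) addition of two n-bit words."""
    s = a + b
    if s >> n:                          # carry out of the top bit...
        s = (s & ((1 << n) - 1)) + 1    # ...wraps around into bit 0
    return s

def residue(x, n=N):
    """Residue of x modulo 2**n - 1, accumulated chunk by chunk."""
    mask, r = (1 << n) - 1, 0
    while x:
        r = eac_add(r, x & mask, n)
        x >>= n
    return r

# Property a checker exploits: residues are preserved under addition.
a, b = 0x3A7F, 0x1C05
assert residue(a + b) % MOD == (residue(a) + residue(b)) % MOD
```

Because the residue of a sum equals the sum of the residues, a checker built from such adders can accumulate code words and compare against an expected residue with very little hardware.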

9 citations

Dissertation
01 Oct 2011
TL;DR: The aim of this research is to explore whether an alternative hardware architecture inspired by the biological world, and entirely different from traditional processing, may be better suited for implementing intelligent behaviour while also exhibiting robustness.
Abstract: The evolution of Artificial Intelligence has passed through many phases over the years, going from rigorous mathematical grounding to more intuitive bio-inspired approaches. However, to date, it has failed to pass the Turing test. A popular school of thought is that stagnation in the 1970s and 1980s was primarily due to insufficient hardware resources. However, if this had been the only reason, recent history should have seen AI advancing in leaps and bounds – something that is conspicuously absent. Despite the abundance of AI algorithms and machine learning techniques, the state of the art still fails to capture the rich analytical properties of biological beings or their robustness. Moreover, recent research in neuroscience points to a radically different approach to cognition, with distributed divergent connections rather than convergent ones. This leads one to question the entire approach that is prevalent in the discipline of AI today, suggesting that a re-evaluation of the basic fabric of computation may be in order. In practice, the traditional solution for solving difficult AI problems has always been to throw more hardware at them. Today, that means more parallel cores. Although there are a few parallel hardware architectures that are novel, most parallel architectures – and especially the successful ones – simply combine Von Neumann style processors to make a multi-processor environment. The drawbacks of the Von Neumann architecture are widely published in the literature. Regardless, even though the novel architectures may not implement Von-Neumann-style cores, computation is still based on arithmetic and logic units (ALUs). The aim of this research is to explore whether an alternative hardware architecture inspired by the biological world, and entirely different from traditional processing, may be better suited for implementing intelligent behaviour while also exhibiting robustness.

7 citations

References
Journal ArticleDOI
TL;DR: An overview of a comprehensive collection of on-line testing techniques for VLSI, including self-checking design, signature monitoring, on-line parameter monitoring, BIST-based approaches, fail-safe techniques that avoid complex fail-safe interfaces using discrete components, and radiation-hardened designs that avoid expensive fabrication processes such as SOI.
Abstract: This paper presents an overview of a comprehensive collection of on-line testing techniques for VLSI. Such techniques include, for instance: self-checking design, allowing high quality concurrent checking by means of hardware cost drastically lower than duplication; signature monitoring, allowing low cost concurrent error detection for FSMs; on-line monitoring of reliability relevant parameters such as current, temperature, abnormal delay, signal activity during steady state, radiation dose, clock waveforms, etc.; exploitation of standard BIST, or implementation of BIST techniques specific to on-line testing (Transparent BIST, Built-In Concurrent Self-Test,...); exploitation of scan paths to transfer internal states for performing various tasks for on-line testing or fault tolerance; fail-safe techniques for VLSI, avoiding complex fail-safe interfaces using discrete components; radiation hardened designs, avoiding expensive fabrication processes such as SOI, etc.

234 citations

Proceedings ArticleDOI
C. Hennebert1, G. Guiho
22 Jun 1993
TL;DR: An overview of the SACEM system, which controls train movements on RER A in Paris, transporting one million passengers daily, including the techniques aimed at ensuring safety (online error detection, software validation).
Abstract: The authors give an overview of the SACEM system, which controls train movements on RER A in Paris, transporting one million passengers daily. The various aspects of the dependability of the system are described, including the techniques aimed at ensuring safety (online error detection, software validation). Fault tolerance of the onboard-ground compound system is emphasized.

43 citations

Proceedings ArticleDOI
18 Oct 1998
TL;DR: Error detecting and correcting code based memory design, self-checking design, VLSI-level retry architectures, perturbation hardened design, tools for evaluation of soft error rates, and other on-line testing techniques are becoming mandatory in order to achieve increasing levels of soft-error robustness and aggressively push the limits of technological scaling.
Abstract: Error detecting and correcting code based memory design, self-checking design, VLSI-level retry architectures, perturbation hardened design, tools for evaluation of soft error rates, and other on-line testing techniques are becoming mandatory in order to achieve increasing levels of soft-error robustness and aggressively push the limits of technological scaling. In the next few years, considerable efforts have to be concentrated on the development of such techniques and the related CAD tools.

40 citations


"Online VLSI Testing" refers background or methods in this paper

  • ...The (4,2)-code coupled with (4,2)-redundancy localizes an error to the module in which it occurred....

    [...]

  • ...Each module in the (4,2)-redundant system consists of a processor, an encoder, a decoder, and memory, all operating synchronously and deterministically....

    [...]

  • ...Examples of online VLSI testing practice in communication systems include the Philips implementation of (4,2)-redundancy in its SOPHO S-2500 communication switches and H1 broadband switches....

    [...]

  • ...In a (4,2)-redundant system, the processing logic is quadrupled, while the memory size is doubled....

    [...]

  • ...This (4,2)-code ensures that any two code symbols are sufficient to derive the original information....

    [...]
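The "any two code symbols suffice" claim in the excerpts above is the defining property of a [4,2] MDS erasure code. A hypothetical sketch over GF(4) — the generator matrix below is an assumed example for illustration, not the Philips SOPHO design:

```python
# Hypothetical illustration of (4,2)-redundancy's erasure property using a
# [4,2] MDS code over GF(4); the generator matrix is an assumed example.
from itertools import combinations

# GF(4) = {0,1,2,3}: addition is XOR; multiplication table with 2 = w, 3 = w+1.
MUL = [[0, 0, 0, 0],
       [0, 1, 2, 3],
       [0, 2, 3, 1],
       [0, 3, 1, 2]]
INV = [None, 1, 3, 2]          # multiplicative inverses

G = [[1, 0, 1, 1],             # generator matrix: column j defines symbol s_j
     [0, 1, 1, 2]]

def encode(u1, u2):
    """Four code symbols from two information symbols."""
    return [MUL[G[0][j]][u1] ^ MUL[G[1][j]][u2] for j in range(4)]

def decode(i, si, j, sj):
    """Recover (u1, u2) from any two surviving symbols s_i, s_j."""
    a, b, c, d = G[0][i], G[1][i], G[0][j], G[1][j]
    det = MUL[a][d] ^ MUL[b][c]        # nonzero for every pair: MDS property
    t = INV[det]
    u1 = MUL[t][MUL[d][si] ^ MUL[b][sj]]
    u2 = MUL[t][MUL[a][sj] ^ MUL[c][si]]
    return u1, u2

# Any two of the four symbols reconstruct the information exactly:
for u1 in range(4):
    for u2 in range(4):
        s = encode(u1, u2)
        for i, j in combinations(range(4), 2):
            assert decode(i, s[i], j, s[j]) == (u1, u2)
```

Because every 2×2 submatrix of G is invertible over GF(4), losing any two of the four modules still leaves enough symbols to recover the information, matching the quadrupled-logic, doubled-memory trade-off described above.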

Journal ArticleDOI
TL;DR: This paper discusses the state of the art and future trends of on-line testing techniques for VLSI and describes on-line testing techniques that could provide adequate solutions to emerging requirements and problems.

40 citations

Journal ArticleDOI
TL;DR: Fault-tolerant and error detection features in HaL's memory management unit (MMU) allow recovery from transient errors in the MMU, and low overhead linear polynomial codes have been chosen to minimize both the hardware and software instrumentation impact.
Abstract: This paper describes fault-tolerant and error detection features in HaL's memory management unit (MMU). The proposed fault-tolerant features allow recovery from transient errors in the MMU. It is shown that these features were natural choices considering the architectural and implementation constraints in the MMU's design environment. Three concurrent error detection and correction methods employed in address translation and coherence tables in the MMU are described. Virtually-indexed and virtually-tagged cache architecture is exploited to provide an almost fault-secure hardware coherence mechanism in the MMU, with very small performance overhead (less than 0.01% in the instruction throughput). Low overhead linear polynomial codes have been chosen in these designs to minimize both the hardware and software instrumentation impact.
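The low-overhead linear polynomial codes mentioned here work like CRCs: a polynomial residue stored with each table entry exposes transient bit flips on readback. A hedged sketch with an assumed CRC-8 polynomial, not HaL's actual code or table layout:

```python
# Hedged sketch of a linear polynomial (CRC-style) check; the polynomial
# and entry format are assumed examples, not HaL's actual design.
POLY = 0x07   # x^8 + x^2 + x + 1

def crc8(data: bytes, poly: int = POLY) -> int:
    """Bitwise CRC-8 over the data bytes."""
    r = 0
    for byte in data:
        r ^= byte
        for _ in range(8):
            r = ((r << 1) ^ poly) & 0xFF if r & 0x80 else (r << 1) & 0xFF
    return r

entry = b"\x12\x34\x56\x78"    # e.g. a translation-table entry
check = crc8(entry)            # residue stored alongside the entry

# A single-bit transient flip is always caught on readback, because the
# polynomial has more than one term:
flipped = bytes([entry[0] ^ 0x01]) + entry[1:]
assert crc8(flipped) != check
```

One check byte per entry is the kind of low hardware and software instrumentation cost the abstract refers to, compared with full duplication of the tables.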

26 citations