
Showing papers on "Fault model published in 1998"


Journal ArticleDOI
TL;DR: In this paper, the authors show that when the size of the slipping patch is much smaller than the dimensions of the fault plane, and strength recovery is geologically instantaneous, the displacement profile decreases approximately linearly towards the fault tip, similar to natural examples.

216 citations


Journal ArticleDOI
TL;DR: This paper presents a modeling procedure and diagnostic algorithm, based on least squares estimation, for fixture-related faults in panel assembly; the algorithm can detect and classify multiple fixture faults.
Abstract: This paper presents a modeling procedure and diagnostic algorithm for fixture-related faults in panel assembly. From geometric information about the panel and fixture, a fixture fault model can be constructed off-line. Combining the fault model with in-line panel dimensional measurements, the algorithm is capable of detecting and classifying multiple fixture faults. The algorithm, which relies heavily on the fault model, is based on least squares estimation. Consequently, the test is of relatively simple form and is easily implemented and analyzed. Experimental results of applying the algorithm to an autobody assembly process are provided.
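
As a rough illustration of the least-squares step (the matrix, sensor layout, and threshold below are invented for the example, not taken from the paper), the in-line measurements can be modeled as y = A f + noise, with one column of the off-line constructed fault model A per candidate fixture fault:

```python
import numpy as np

def diagnose_fixture_faults(A, y, threshold=0.5):
    """Estimate fault magnitudes by least squares; flag the large ones."""
    f_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
    detected = [j for j, mag in enumerate(f_hat) if abs(mag) > threshold]
    return f_hat, detected

A = np.array([[1.0, 0.0],   # 4 measurement points on the panel,
              [0.5, 0.2],   # 2 candidate fixture faults (columns)
              [0.0, 1.0],
              [0.1, 0.8]])
rng = np.random.default_rng(1)
y = A @ np.array([0.0, 2.0]) + 0.01 * rng.standard_normal(4)  # fault 1 active
f_hat, detected = diagnose_fixture_faults(A, y)
print(detected)   # -> [1]
```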

139 citations


Proceedings ArticleDOI
23 Feb 1998
TL;DR: A new approach for testing word-oriented memories is presented, distinguishing between inter-word and intra-word faults and allowing for a systematic way of converting tests for bit-oriented memories to tests for word-oriented memories.
Abstract: Most memory test algorithms are optimized tests for a particular memory technology, and a particular set of fault models, under the assumption that the memory is bit-oriented; i.e., read and write operations affect only a single bit in the memory. Traditionally, word-oriented memories have been tested by repeated application of a test for bit-oriented memories whereby a different data background (which depends on the intra-word fault model used) is used during each iteration. This results in time inefficiencies and limited fault coverage. A new approach for testing word-oriented memories is presented, distinguishing between inter-word and intra-word faults and allowing for a systematic way of converting tests for bit-oriented memories to tests for word-oriented memories. The conversion consists of concatenating the bit-oriented test for inter-word faults with a test for intra-word faults. This approach results in more efficient tests with complete coverage of the targeted faults. Because most memories have an external data path which is wider than one bit, word-oriented memory tests are very important.
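
A minimal sketch of the conversion idea, assuming the standard set of log2(w)+1 data backgrounds and a deliberately simplified March-like pass (a real March test interleaves reads and writes in prescribed address orders):

```python
def data_backgrounds(w):
    """Standard backgrounds for w-bit words: all-zeros plus alternating
    patterns of period 2, 4, ..., w (log2(w) + 1 backgrounds in total)."""
    bgs, period = [0], 2
    while period <= w:
        bits = ''.join('1' if i % period >= period // 2 else '0'
                       for i in range(w))
        bgs.append(int(bits, 2))
        period *= 2
    return bgs

def word_oriented_test(memory, w):
    """Concatenation sketch: a bit-oriented-style pass with a solid
    background targets inter-word faults; a short pass writing each
    background and its complement targets intra-word coupling faults."""
    mask, errors = (1 << w) - 1, []
    for bg in (0, mask):                      # inter-word part
        for addr in range(len(memory)):
            memory[addr] = bg
        for addr in range(len(memory)):
            if memory[addr] != bg:
                errors.append((addr, bg))
    for bg in data_backgrounds(w)[1:]:        # intra-word part
        for addr in range(len(memory)):
            for pattern in (bg, bg ^ mask):
                memory[addr] = pattern
                if memory[addr] != pattern:
                    errors.append((addr, pattern))
    return errors

print(word_oriented_test([0] * 16, w=8))      # fault-free model -> []
```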

93 citations


Proceedings ArticleDOI
16 May 1998
TL;DR: This paper introduces a method to detect and identify faults in wheeled mobile robots that uses adaptive estimation to predict (in parallel) the outcome of several faults.
Abstract: This paper introduces a method to detect and identify faults in wheeled mobile robots. The idea behind the method is to use adaptive estimation to predict (in parallel) the outcome of several faults. Models of the system behavior under each type of fault are embedded in the various parallel estimators (each of which is a Kalman filter). Each filter is thus tuned to a particular fault. Using its embedded model, each filter predicts values for the sensor readings. The residual (the difference between the predicted and actual sensor reading) is an indicator of how well the filter is performing. A fault detection and identification module is responsible for processing the residuals to decide which fault has occurred. As an example, the method is implemented successfully on a Pioneer I robot. The paper concludes with a discussion of future work.
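
A toy sketch of the filter bank (the wheel model, gains, and noise levels are made-up stand-ins, not the robot model from the paper): each hypothesis filter predicts the sensor reading under its fault model, and the hypothesis whose filter has the smallest running residual is selected:

```python
import numpy as np

class HypothesisFilter:
    """One scalar Kalman filter tuned to a fault hypothesis; the (made-up)
    wheel model is: measured wheel speed z ~ gain * commanded speed u."""
    def __init__(self, gain, q=1e-4, r=1e-2):
        self.gain, self.q, self.r = gain, q, r
        self.x, self.p = 0.0, 1.0        # state estimate and covariance
        self.score = 0.0                 # smoothed squared residual

    def step(self, u, z):
        self.x, self.p = self.gain * u, self.p + self.q   # predict reading
        residual = z - self.x                             # innovation
        k = self.p / (self.p + self.r)                    # Kalman gain
        self.x, self.p = self.x + k * residual, self.p * (1.0 - k)
        self.score = 0.9 * self.score + 0.1 * residual ** 2
        return self.score

# One filter per fault hypothesis; the gains are invented examples.
bank = {"nominal": HypothesisFilter(1.0),
        "flat tire": HypothesisFilter(0.8),
        "stuck wheel": HypothesisFilter(0.0)}
rng = np.random.default_rng(0)
for _ in range(100):
    u = 1.0
    z = 0.8 * u + 0.01 * rng.standard_normal()   # robot actually has a flat tire
    scores = {name: f.step(u, z) for name, f in bank.items()}
print(min(scores, key=scores.get))               # -> flat tire
```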

92 citations


Proceedings ArticleDOI
18 Oct 1998
TL;DR: The paper experimentally shows that the test patterns generated at the behavioral level provide very high stuck-at fault coverage when applied to different gate-level implementations of the given VHDL behavioral specification.
Abstract: This paper proposes a behavioral-level test pattern generation algorithm for behavioral VHDL descriptions. The proposed approach is based on the comparison between the implicit description of the fault-free behavior and the faulty behavior, obtained through a new behavioral fault model. The paper experimentally shows that the test patterns generated at the behavioral level provide very high stuck-at fault coverage when applied to different gate-level implementations of the given VHDL behavioral specification. Gate-level ATPGs applied to these same circuits obtain lower fault coverage, in particular when considering circuits with hard-to-detect faults.

91 citations


Journal ArticleDOI
TL;DR: In this article, two numerical algorithms for fault location and distance protection using data from one end of a transmission line are presented; they are relatively simple and easy to implement in on-line applications.
Abstract: Two numerical algorithms for fault location and distance protection which use data from one end of a transmission line are presented. Both algorithms require only current signals as input data. Voltage signals are unnecessary for determining the unknown distance to the fault. The solution for the most frequent fault, the phase-to-ground fault, is presented. The algorithms are relatively simple and easy to implement in on-line applications. The algorithms allow accurate calculation of the fault location irrespective of the fault resistance and load. To illustrate the features of the new algorithms, steady-state and dynamic tests are presented.

87 citations


Proceedings ArticleDOI
23 Jun 1998
TL;DR: TFT, the work presented in this paper, provides transparent fault tolerance at a higher interface than prior solutions; to do so, it must enforce a deterministic computation above the system call interface.
Abstract: An important objective of software fault tolerant systems should be to provide a fault-tolerance infrastructure in a manner that minimizes the effort required by the application developer. In the limit, the objective is to provide fault tolerance transparently to the application. TFT, the work presented in this paper, provides transparent fault-tolerance at a higher interface than prior solutions. TFT coordinates replicas at the system call interface, interposing a supervisor agent between the application and the operating system. Moving the replica coordination to this interface allows uncorrelated faults within the operating system and below to be tolerated and also admits the possibility of online operating system and hardware upgrades. To accomplish its task, TFT must enforce a deterministic computation above the system call interface. The potential sources of non-determinism addressed include non-deterministic system calls, delivery of asynchronous events, and the representation of operating system abstractions that differ between replicas.
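
A toy sketch of the interposition idea (not TFT's actual implementation): the supervisor agent lets the primary execute a non-deterministic call and forwards the result so a backup replays the same value, keeping both replicas deterministic above the system call interface:

```python
import time
from queue import Queue

class SupervisorAgent:
    """Toy replica coordination at the 'system call' layer: the primary
    executes the non-deterministic call and forwards its result; a backup
    replays the forwarded value instead of asking the OS, so both replicas
    compute on identical inputs."""
    def __init__(self, is_primary, channel):
        self.is_primary = is_primary
        self.channel = channel            # assumed reliable, ordered channel

    def sys_gettime(self):
        if self.is_primary:
            result = time.time()                  # real non-deterministic call
            self.channel.put(("gettime", result))
            return result
        tag, result = self.channel.get()          # backup consumes primary's value
        assert tag == "gettime"
        return result

chan = Queue()
primary, backup = SupervisorAgent(True, chan), SupervisorAgent(False, chan)
assert primary.sys_gettime() == backup.sys_gettime()   # identical observed time
```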

84 citations


Journal ArticleDOI
TL;DR: The bounds of the minimum vertex cut set for m-ary n-dimensional hypercubes are studied by requiring each node to have at least k healthy neighbors; this forbidden-faulty-set model reflects fault patterns in a real system better than existing ones.
Abstract: In this paper, we study fault tolerance measures for m-ary n-dimensional hypercubes based on the concept of forbidden faulty sets. In a forbidden faulty set, certain nodes cannot be faulty at the same time and this model can better reflect fault patterns in a real system than the existing ones. Specifically, we study the bounds of the minimum vertex cut set for m-ary n-dimensional hypercubes by requiring each node to have at least k healthy neighbors. Our result enhances and generalizes a result by Latifi et al. for binary hypercubes. Our study also shows that the corresponding result based on the traditional fault model (where k is zero) tends to underestimate network resilience of large networks such as m-ary n-dimensional hypercubes.
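
The effect is easy to verify by brute force on a small binary cube (a stand-in for the m-ary case; the code below enumerates fault sets directly and is only practical for tiny networks):

```python
from itertools import combinations

def adjacency(n):
    """Adjacency sets of the binary n-cube."""
    return {u: {u ^ (1 << b) for b in range(n)} for u in range(2 ** n)}

def connected(nodes, adj):
    nodes = set(nodes)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend((adj[u] & nodes) - seen)
    return seen == nodes

def min_conditional_cut(n, k):
    """Smallest vertex cut among fault sets allowed by the model, i.e. those
    leaving every surviving node with at least k healthy neighbors."""
    adj, V = adjacency(n), set(range(2 ** n))
    for size in range(1, 2 ** n):
        for fault_set in combinations(V, size):
            rest = V - set(fault_set)
            if (all(len(adj[u] & rest) >= k for u in rest)
                    and not connected(rest, adj)):
                return size

print(min_conditional_cut(3, 0))   # 3 = classical connectivity of Q3
print(min_conditional_cut(3, 1))   # 4 > 3: the traditional model underestimates resilience
```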

72 citations


Journal ArticleDOI
TL;DR: A fast fault simulation approach based on ordinary logic emulation is proposed, together with hybrid techniques that reduce the number of faults actually emulated by screening off faults not activated or with short propagation distances before emulation, and by collapsing non-stem faults into their equivalent stem faults.
Abstract: A fast fault simulation approach based on ordinary logic emulation is proposed. The circuit configured into our system that emulates the faulty circuit's behaviour is synthesized from the good circuit and the given fault list in a novel way. Fault injection is made easy by shifting the content of a fault injection scan chain or by selecting the output of a parallel fault injection selector, with which we get rid of the time-consuming bit-stream regeneration process. Experimental results for ISCAS-89 benchmark circuits show that our serial fault emulator is about 20 times faster than HOPE. Our analysis shows that the speedup grows with the circuit size. Two hybrid fault emulation approaches are also proposed. The first reduces the number of faults actually emulated by screening off faults not activated or with short propagation distances before emulation, and by collapsing non-stem faults into their equivalent stem faults. The second reduces the hardware requirement of the fault emulator by incorporating an ordinary fault simulator.
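
A schematic sketch of mux-based fault injection (signal names and the evaluation style are simplified; a real emulator synthesizes these muxes into the FPGA netlist alongside the scan chain):

```python
def emulate_with_injection(good_values, select_chain, stuck_values):
    """Sketch of mux-based fault injection: alongside every signal the
    emulated netlist carries a 2-to-1 mux; a scan-chain bit selects the
    good value or a stuck value. Switching to the next fault is a shift
    of the chain, so no bit-stream regeneration is needed."""
    return [stuck if sel else good
            for good, sel, stuck in zip(good_values, select_chain, stuck_values)]

good = [0, 1, 1, 0]       # fault-free signal values this cycle
chain = [0, 0, 1, 0]      # inject only on signal 2
print(emulate_with_injection(good, chain, stuck_values=[0, 0, 0, 0]))
# -> [0, 1, 0, 0]: signal 2 forced to stuck-at-0
```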

71 citations


Book
01 Jan 1998
TL;DR: In this book, the authors present random testing and built-in self-test for digital circuits, covering fault models, test generation methods, and the random test length required for combinational circuits, sequential circuits, RAMs, and microprocessors.
Abstract: Random testing and built-in self-test: models for digital circuits and fault models; basic concepts and test generation methods; performance measurements for a test sequence; basic principles of random testing; random test length for combinational circuits; random test length for sequential circuits; random test length for RAMs; random test length for microprocessors; generation of random test sequences; experimental results; signature analysis; design for random testability. Appendices: A - random pattern sources; B - calculation of the probability of complete fault coverage; C - finite Markov chains; D - black-box fault model; E - exact calculation of activities; F - comparing asynchronous and synchronous test; G - proofs of properties 7.1, 7.2 and 12.3; H - the Motorola 6800 microprocessor; I - pseudorandom testing; J - random testing of delay faults; K - subsequences of required lengths; L - diagnosis from random testing; M - a conjecture about multiple faults. Exercises and solutions to exercises.

67 citations


Proceedings ArticleDOI
23 Jun 1998
TL;DR: Formal risk analysis, an approach for automatically generating a fault tree from finite state machine-based descriptions of a system, is presented and is the basis for subsequent improvements of the system design and quantitative analysis of safety and liveness requirements in the presence of failures.
Abstract: Usually, fault tree analyses are performed manually. They are based on documents that describe the system. Considerable knowledge, system insight, and overview are necessary to consider the many failure modes and the dependencies between system components and their functionality at once. Often, the behavior is too complicated to fully comprehend all possible failure consequences. Manual fault tree analysis is error-prone, costly and not necessarily complete. Formal risk analysis, an approach for automatically generating a fault tree from finite state machine-based descriptions of a system, is presented. The generated fault tree is complete with respect to all failures assumed possible. It is the basis for subsequent improvements of the system design and quantitative analysis of safety and liveness requirements in the presence of failures. A case study of formal risk analysis, the automatic generation of a fault tree for all sensor failures of a production cell's elevating rotary table, is discussed.

Proceedings ArticleDOI
19 Jan 1998
TL;DR: A set of collapsing rules based on analysis of the assembly code and of the behavior of a fault-free run of the system is proposed; the rules reduce the fault list length and the fault injection time without decreasing the accuracy of the results.
Abstract: Fault injection has become a popular approach to evaluate, and possibly improve, the dependability of computer-based systems. One of the main issues to be solved when setting up a fault injection experiment is the generation of a list of faults to be injected that is truly representative of the whole set of possible faults. This paper proposes a set of collapsing rules based on the analysis of the assembly code and of the behavior of a fault-free run of the system, useful for reducing the fault list length and the fault injection time without decreasing the accuracy of the results. The approach is suitable for adaptation to microprocessor-based systems and is independent of the method used to generate the fault list to be collapsed.
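
One collapsing rule of this flavor can be sketched as follows (the trace format and the specific rule are illustrative, assuming a fault-free run recorded as timed read/write accesses):

```python
def collapse_fault_list(fault_list, trace):
    """Sketch of one trace-driven collapsing rule: a bit-flip injected into
    a location is dropped if the location's first access after the injection
    instant is a write, since the corrupted value is overwritten before it
    can be read (so the outcome is known without injecting).
    fault_list: (location, bit, time); trace: (time, location, 'read'|'write')."""
    kept = []
    for loc, bit, t_inject in fault_list:
        accesses = sorted((t, kind) for t, l, kind in trace
                          if l == loc and t >= t_inject)
        if accesses and accesses[0][1] == "read":
            kept.append((loc, bit, t_inject))   # could still matter: keep it
    return kept

trace = [(1, "r0", "write"), (2, "r0", "read"), (5, "r0", "write")]
faults = [("r0", 3, 1.5), ("r0", 3, 4.0)]
print(collapse_fault_list(faults, trace))   # the second fault is collapsed away
```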

Journal ArticleDOI
TL;DR: To relax the limitations of a fully connected network and a single fault type, the problem is reconsidered in a general network; the proposed protocol uses the minimum number of message exchanges and can tolerate the maximum number of allowable faulty components to make each fault-free processor reach an agreement.
Abstract: In its early stages, the Byzantine agreement (BA) problem was studied with single faults on processors in either a fully connected network or a non-fully connected network. Subsequently, the single fault assumption was extended to mixed faults (also referred to as the hybrid fault model) on processors. For the case of both processor and link failures, the problem has been examined in a fully connected network with a single fault type, namely an arbitrary fault. To relax the limitations of a fully connected network and a single fault type, the problem is reconsidered in a general network. The processors and links in such a network can both be subjected to different types of fault simultaneously. The proposed protocol uses the minimum number of message exchanges and can tolerate the maximum number of allowable faulty components to make each fault-free processor reach an agreement.

Journal ArticleDOI
TL;DR: A structure dependent method for the systematic design of a self-checking circuit which is well adapted to the fault model of single gate faults and which can be used in test mode is proposed.
Abstract: In this paper we propose a structure dependent method for the systematic design of a self-checking circuit which is well adapted to the fault model of single gate faults and which can be used in test mode.

Journal ArticleDOI
TL;DR: In this article, an accurate algorithm for locating double phase-to-earth faults on transmission lines of non-directly grounded neutral systems is presented; it employs the faulted-phase network and zero-sequence network as the fault location model and effectively eliminates the effect of load flow and fault resistance on the accuracy of fault location.
Abstract: An accurate algorithm for locating double phase-to-earth faults on transmission lines of non-directly grounded neutral systems is presented. The algorithm employs the faulted-phase network and zero-sequence network as the fault location model. It effectively eliminates the effect of load flow and fault resistance on the accuracy of fault location. The technique achieves accurate location using measurements from one local end only. The algorithm is used in a procedure that determines the faulted line and phase automatically, rather than requiring an engineer to specify them. Simulation results have shown the effectiveness of the algorithm under earth fault conditions.

Proceedings ArticleDOI
18 Oct 1998
TL;DR: An approach to fault diagnosis that is robust, comprehensive, extendable, and practical is outlined, designed to incorporate disparate diagnostic algorithms, different sets of data, and a mixture of fault models into a single diagnostic result.
Abstract: Previously proposed strategies for VLSI fault diagnosis have suffered from a variety of self-imposed limitations. Some techniques are limited to a specific fault model, and many will fail in the face of any unmodeled behavior or unexpected data. Others apply ad hoc or arbitrary scoring mechanisms to fault candidates, making the results difficult to interpret or to compare with the results from other algorithms. This paper outlines an approach to fault diagnosis that is robust, comprehensive, extendable, and practical. By introducing a probabilistic framework for diagnostic prediction, it is designed to incorporate disparate diagnostic algorithms, different sets of data, and a mixture of fault models into a single diagnostic result. Results from diagnosis experiments on a Hewlett-Packard ASIC with FIB-inserted defects are presented.
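
A sketch of the probabilistic combination (candidate names, priors, and scores are invented; the paper's actual scoring is more elaborate): each algorithm contributes a likelihood over a shared candidate space, so evidence from different fault models composes multiplicatively:

```python
def combine_diagnoses(prior, likelihoods_per_algorithm):
    """Bayesian sketch: each diagnosis algorithm contributes a likelihood
    P(observed tester data | candidate); candidates from different fault
    models live in one space, so the results are directly comparable."""
    posterior = dict(prior)
    for likelihoods in likelihoods_per_algorithm:
        for cand in posterior:
            posterior[cand] *= likelihoods.get(cand, 1e-6)  # floor for unmodeled
    z = sum(posterior.values())
    return {c: p / z for c, p in posterior.items()}

prior = {"stuck-at net12": 0.5, "bridge net3/net7": 0.3, "open net9": 0.2}
stuck_at_scores = {"stuck-at net12": 0.1, "bridge net3/net7": 0.6, "open net9": 0.2}
bridging_scores = {"bridge net3/net7": 0.7, "stuck-at net12": 0.05, "open net9": 0.1}
print(combine_diagnoses(prior, [stuck_at_scores, bridging_scores]))
```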

Proceedings ArticleDOI
16 Dec 1998
TL;DR: In this paper, a fault-tolerant control method for additive or multiplicative actuator and component faults is developed; the main advantage of this approach is that it is very easy to implement and does not require a fault diagnosis module, which reduces the computation time and avoids the problems caused by false alarms and delays in fault detection and isolation.
Abstract: In this paper, a fault-tolerant control method is developed. This method makes possible the compensation of additive or multiplicative actuator and component faults. Its principle is based on the online estimation of a quantity which is equal to zero in the fault-free case and equal to the fault magnitude when a fault occurs on the system. Then, a new control law is added to the nominal one in order to compensate for the fault effect on the system. The main advantage of this approach is that it is very easy to implement and does not require a fault diagnosis module, which reduces the computation time and avoids the problems caused by false alarms and delays in fault detection and isolation. The performance of this method is tested in simulation on a pilot plant.
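
A toy discrete-time sketch of the principle (plant parameters and the adaptation gain are made up; the paper's estimator differs in detail): the residual against a fault-free model is zero until a fault occurs, its integral estimates the fault magnitude, and the estimate is subtracted from the nominal control:

```python
# Toy first-order plant  x(k+1) = a*x(k) + b*(u(k) + f)  with an additive
# actuator fault f; all numbers are invented for the illustration.
a, b, gamma = 0.9, 0.5, 0.1     # gamma: assumed adaptation gain
x = x_nom = f_hat = 0.0
for k in range(400):
    f = 2.0 if k > 100 else 0.0          # fault appears at step 100
    u_nom = 1.0                          # nominal control law
    u = u_nom - f_hat                    # added compensation term
    x = a * x + b * (u + f)              # real (faulty) plant
    x_nom = a * x_nom + b * u_nom        # fault-free reference model
    residual = x - x_nom                 # zero in the fault-free case
    f_hat += gamma * residual            # online estimate of the fault magnitude
print(round(f_hat, 2))                   # -> about 2.0: fault compensated
```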

Proceedings ArticleDOI
16 Feb 1998
TL;DR: A variant of the communicating finite state machine model is introduced for the specification of networks in order to determine correct or faulty behavior using passive testing, and approaches for fault detection and fault location are discussed.
Abstract: We introduce a variant of the communicating finite state machine model for the specification of networks in order to determine correct or faulty behavior using passive testing. The appropriateness of the model is first argued, followed by an initial study of how the passive testing procedures developed for finite state machines could be applied. Approaches for, and limitations of, fault detection and fault location using this approach are discussed.
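
For plain finite state machines (the baseline that the paper's communicating-FSM variant extends), passive fault detection can be sketched as tracking the set of specification states consistent with the observed trace:

```python
def passive_check(spec, states, trace):
    """Passive-testing sketch for an FSM: without controlling inputs, track
    every state consistent with the observed I/O trace; if the set becomes
    empty, the behavior matches no state of the specification -> fault.
    spec maps (state, input) -> (expected_output, next_state)."""
    candidates = set(states)
    for inp, out in trace:
        candidates = {spec[s, inp][1] for s in candidates
                      if (s, inp) in spec and spec[s, inp][0] == out}
        if not candidates:
            return False          # no consistent state: fault detected
    return True                   # consistent so far (not a proof of correctness)

spec = {("s0", "a"): ("x", "s1"), ("s1", "a"): ("y", "s0")}
print(passive_check(spec, {"s0", "s1"}, [("a", "x"), ("a", "y")]))  # True
print(passive_check(spec, {"s0", "s1"}, [("a", "x"), ("a", "x")]))  # False
```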

01 Jan 1998
TL;DR: A survey of software fault-tolerant clock synchronization algorithms is proposed: deterministic, probabilistic, and statistical; internal and external; and resilient to failures ranging from crash to Byzantine.
Abstract: Clock synchronization algorithms ensure that physically dispersed processors have a common knowledge of time. This report proposes a survey of software fault-tolerant clock synchronization algorithms: deterministic, probabilistic, and statistical; internal and external; and resilient to failures ranging from crash to Byzantine. Our survey is based on a classification of clock synchronization algorithms (according to their internal structure and to three orthogonal and independent basic building blocks we have identified), and on a performance evaluation of algorithms constructed from these building blocks. The performance evaluation is achieved through the simulation of a panel of fault-tolerant clock synchronization algorithms (LL88, ST87, PB95, GZ89). The algorithms' behavior is analyzed in the presence of various kinds of failures (crash, omission, timing, performance, Byzantine), both when the number and type of failures respect the fault assumptions made by the algorithm and when the fault assumptions are exceeded. Our survey will help the designer in choosing the most appropriate algorithm structure and the building blocks best suited to the hardware architecture, failure model, quality of synchronized clocks, and message cost at hand. Moreover, our classification uses a uniform notation that allows existing clock synchronization algorithms to be compared with respect to their fault model, the building blocks they use, the properties they ensure, and their cost in terms of message exchanges.
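
One classic building block of such algorithms is the fault-tolerant averaging convergence function; a minimal sketch (assuming more than 2f readings are gathered):

```python
def fault_tolerant_average(readings, f):
    """Fault-tolerant averaging convergence function: sort the clock readings
    gathered from the other processors, discard the f smallest and f largest
    (a Byzantine value is either discarded or bracketed by correct readings),
    and average the rest. Requires len(readings) > 2 * f."""
    assert len(readings) > 2 * f
    trimmed = sorted(readings)[f:len(readings) - f]
    return sum(trimmed) / len(trimmed)

# Five clocks, one Byzantine outlier; with f = 1 the outlier cannot pull
# the correction outside the range of the correct clocks.
print(fault_tolerant_average([10.01, 9.98, 10.02, 10.00, 73.5], f=1))
```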

Proceedings ArticleDOI
18 Oct 1998
TL;DR: By staying away from physical details, the stuck-at fault model remains effective across changing technologies and design styles; this abstraction is also the reason it will be the model of choice in the next millennium.
Abstract: One of the common misconceptions about the stuck-at fault model is that it does not model a physical defect accurately and therefore is not adequate for testing defects in advancing technologies. The stuck-at fault model can be described as anything but a physical defect model. You can call it an abstract, logical, Boolean, symbolic, functional or behavioral model, but don't call it a physical defect model! But that is not a weakness of the stuck-at fault model. On the contrary, abstraction is the main strength of this model and the reason for its longevity. Abstraction is also the reason that it will be the model of choice in the next millennium. By staying away from physical details, the stuck-at fault model remains effective with changing technologies and design styles. The stuck-at fault model operates in the logic domain, while most physical-level models operate in the analog domain or even in the electromagnetic domain. With the rapidly growing number of transistors on a chip, abstraction is a necessity to manage complexity. So, if anything, the trend for the next millennium is likely to be a higher level of abstraction. Does the abstraction come at a cost in loss of defect detection? To answer this, let us consider two major classes of defects: defects internal to a gate and defects external to a gate.

Proceedings ArticleDOI
14 Dec 1998
TL;DR: A new conceptual model, the XBW-model, describes the time behavior and distribution properties of a system in such a way that static scheduling and systematic fault tolerance can be applied; it was developed within the European Brite-EuRam III project X-By-Wire.
Abstract: This paper presents a new conceptual model, the XBW-model. Distributed computing is becoming a cost-effective way to implement safety-critical control systems. To support the development of such systems, the XBW conceptual model was developed. The model describes the time behavior and distribution properties of a system in such a way that static scheduling and systematic fault tolerance can be applied. The conceptual model also enables the definition of an appropriate fault model. This fault model, along with the XBW-model, allows efficient and systematic use of well-known software-based error detection methods. A distributed steer-by-wire control system, developed according to the model, is described. The XBW-model was developed within the European Brite-EuRam III project X-By-Wire.

Proceedings ArticleDOI
26 Apr 1998
TL;DR: An extension of the n-detection model is proposed that alleviates the problem of the same set of faults being detected by several different tests, by considering m-tuples of faults and requiring that different tests detect different m-tuples.
Abstract: N-detection stuck-at test sets were shown to be effective in achieving high defect coverage for benchmark circuits. However, the definition of n-detection test sets allows the same set of faults to be detected by several different tests, thus potentially detecting the same defects. We propose an extension of the n-detection model that alleviates this problem by considering m-tuples of faults and requiring that different tests detect different m-tuples. We present experimental results to support this model.
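
A greedy sketch of the extended selection criterion (test names and fault sets are invented; m = 2 here):

```python
from itertools import combinations

def select_tests(test_detects, m):
    """Greedy sketch of the extended model: a test is kept only if it covers
    at least one fault m-tuple that no previously kept test covers, so tests
    that detect identical fault sets (and likely the same defects) are not
    accumulated. test_detects maps test id -> set of detected faults."""
    covered, kept = set(), []
    for test, faults in test_detects.items():
        new = set(combinations(sorted(faults), m)) - covered
        if new:
            kept.append(test)
            covered |= new
    return kept

tests = {"t1": {"f1", "f2"},
         "t2": {"f1", "f2"},      # same pair as t1: contributes nothing
         "t3": {"f1", "f3"}}
print(select_tests(tests, m=2))   # -> ['t1', 't3']
```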

Proceedings ArticleDOI
18 Oct 1998
TL;DR: Practical experiences in applying a bridging fault based diagnosis technique to a TI ASIC design are presented for units into which known bridging defects have been introduced via a focused ion beam (FIB) machine.
Abstract: Automated fault diagnosis based on the stuck-at fault model is not always effective. This paper presents practical experiences in applying a bridging fault based diagnosis technique to a TI ASIC design. Results are presented for units into which known bridging defects have been introduced via a focused ion beam (FIB) machine.

Journal ArticleDOI
TL;DR: The authors describe EXFI, a prototypical system implementing the approach, and provide data about some sample benchmark applications, showing that the main advantages of EXFI are its low cost, good portability, and high efficiency.
Abstract: Evaluating the faulty behavior of low-cost embedded microprocessor-based boards is an increasingly important issue, due to their adoption in many safety-critical systems. To address this issue, the paper describes a software-implemented Fault Injection approach based on the Trace Exception Mode available in most microprocessors. The architecture of a complete Fault Injection environment is proposed, integrating a module for generating a collapsed list of faults, and another for performing their injection and gathering the results. The authors describe EXFI, a prototypical system implementing the approach, and provide data about some sample benchmark applications. The main advantages of EXFI are the low cost, the good portability, and the high efficiency.
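
The mechanism can be sketched on a toy register machine (instructions and registers are invented; EXFI itself single-steps real machine code via the microprocessor's trace exception):

```python
import random

def run(program, registers, inject_at=None, inject_reg=None, seed=0):
    """Tiny register-machine sketch of trace-mode fault injection: after each
    instruction the 'trace exception handler' runs and may flip one bit in a
    register, then execution resumes (EXFI does this at machine level, with
    no extra hardware)."""
    rng = random.Random(seed)
    for step, (op, dst, src) in enumerate(program):
        if op == "mov":
            registers[dst] = registers[src]
        elif op == "add":
            registers[dst] += registers[src]
        if step == inject_at:                     # trace handler fires here
            registers[inject_reg] ^= 1 << rng.randrange(8)
    return registers

prog = [("mov", "r1", "r0"), ("add", "r1", "r0"), ("add", "r1", "r1")]
golden = run(prog, {"r0": 3, "r1": 0})
faulty = run(prog, {"r0": 3, "r1": 0}, inject_at=1, inject_reg="r1")
print(golden["r1"], faulty["r1"])   # compare runs to classify the fault effect
```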

Proceedings ArticleDOI
04 Jan 1998
TL;DR: A comparison between delay fault models, namely, gate delay, transition, path delay, line delay and segment delay faults, shows their benefits and limitations.
Abstract: Failures that cause logic circuits to malfunction at the desired clock rate and thus violate timing specifications are currently receiving much attention. Such failures are modeled as delay faults. They facilitate delay testing. The use of delay fault models in VLSI test generation is very likely to gain industry acceptance in the near future. In this paper, we review delay fault models, discuss their classifications and examine fault coverage metrics that have been proposed in the recent literature. A comparison between delay fault models, namely, gate delay, transition, path delay, line delay and segment delay faults, shows their benefits and limitations. Various classifications of the path delay fault model, that have received the most attention in recent years, are reviewed. We believe an understanding of delay fault models is essential in today's VLSI design and test environment.

Journal ArticleDOI
TL;DR: Test results for a sample electrical distribution network have shown that the developed mathematical model for the fault diagnosis problem is correct, and the adopted GA based method is efficient.

Proceedings ArticleDOI
02 Nov 1998
TL;DR: This paper presents accurate fault models, an accurate fault simulation technique, and a new fault coverage metric for resistive bridging faults in gate level combinational circuits at nominal and reduced power supply voltages.
Abstract: This paper presents accurate fault models, an accurate fault simulation technique, and a new fault coverage metric for resistive bridging faults in gate-level combinational circuits at nominal and reduced power supply voltages. We show that some faults have unusual behavior, which has been observed in practice. On the ISCAS85 benchmark circuits we show that a zero-ohm bridge fault model can be quite optimistic in terms of coverage of voltage-testable bridging faults.
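
A back-of-the-envelope voltage-divider sketch of why the bridge resistance matters (the resistances, supply voltage, and drive strengths are illustrative numbers only):

```python
def bridged_voltages(vdd, r_up, r_down, r_bridge):
    """Voltage-divider sketch of a resistive bridge between a node driven
    high (through r_up) and a node driven low (through r_down): current
    flows VDD -> r_up -> bridge -> r_down -> GND."""
    i = vdd / (r_up + r_bridge + r_down)
    v_high_node = vdd - i * r_up     # dragged below VDD by the bridge
    v_low_node = i * r_down          # lifted above GND by the bridge
    return v_high_node, v_low_node

# Downstream gates interpret these analog values against their own logic
# thresholds: a 0-ohm bridge pulls both nodes to mid-rail (likely detected),
# while a high-resistance bridge leaves both near nominal (likely escapes),
# which is why the zero-ohm model is optimistic about coverage.
for r_b in (0, 500, 2000, 10000):
    vh, vl = bridged_voltages(2.5, 400, 400, r_b)
    print(r_b, round(vh, 2), round(vl, 2))
```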

Proceedings ArticleDOI
23 Jun 1998
TL;DR: This work studies the behavior of algorithm-based fault tolerance techniques under faults injected according to a quite general fault model, and proposes the robust ABFT technique, whose effectiveness is shown by fault injection experiments on a realistic control application.
Abstract: We study the behavior of algorithm-based fault tolerance (ABFT) techniques under faults injected according to a quite general fault model. Besides the problem of roundoff error in floating point arithmetic, we identify two further weak points, namely the lack of protection of data during input and output, and the incorrect execution of the correctness checks. We propose the robust ABFT technique to handle those weak points. We then generalize it to programs that use assertions, where similar problems arise, leading to the technique of robust assertions, whose effectiveness is shown by fault injection experiments on a realistic control application. With this technique a system follows a new failure model, which we call fail-bounded: with high probability all results produced are either correct or, if wrong, within a certain bound of the correct value, whose exact value depends on the output assertions used. We claim that this failure model is very useful for describing the behavior of many low-redundancy systems.
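
For reference, the baseline ABFT checksum scheme that the paper hardens can be sketched as follows (the roundoff tolerance in the comparison is exactly the kind of weak point the paper analyzes):

```python
import numpy as np

def abft_matmul(A, B):
    """Baseline ABFT sketch: multiply a column-checksum extension of A with
    a row-checksum extension of B; a single erroneous element of the product
    violates one row and one column checksum, which both detects and locates
    it."""
    Ac = np.vstack([A, A.sum(axis=0)])                  # extra checksum row
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])   # extra checksum column
    C = Ac @ Br
    data = C[:-1, :-1]                                  # the actual product
    ok = (np.allclose(C[:-1, -1], data.sum(axis=1)) and # roundoff tolerance!
          np.allclose(C[-1, :-1], data.sum(axis=0)))
    return data, ok

A, B = np.arange(4.0).reshape(2, 2), np.ones((2, 2))
C, ok = abft_matmul(A, B)
print(ok)          # True on a fault-free run
C[0, 0] += 1.0     # simulate a fault in the stored result...
print(np.allclose(C.sum(axis=1)[0], C[0].sum()))  # ...checks over C now expose it
```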

Proceedings ArticleDOI
04 Nov 1998
TL;DR: A fault surrogate is developed based on changes in relative complexity, a synthetic measure which has been successfully used as a fault surrogate in previous work; changes in relative complexity can be used to estimate the rates at which faults are inserted into a system between successive revisions.
Abstract: In developing a software system, we would like to be able to estimate the way in which the fault content changes during its development, as well as determining the locations having the highest concentration of faults. In the phases prior to test, however, there may be very little direct information regarding the number and location of faults. This lack of direct information requires the development of a fault surrogate from which the number of faults and their location can be estimated. We develop a fault surrogate based on changes in relative complexity, a synthetic measure which has been successfully used as a fault surrogate in previous work. We show that changes in the relative complexity can be used to estimate the rates at which faults are inserted into a system between successive revisions. These rates can be used to continuously monitor the total number of faults inserted into a system, the residual fault content, and identify those portions of a system requiring the application of additional fault detection and removal resources.
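
A minimal sketch of the surrogate arithmetic (the calibration constant and the complexity numbers are invented):

```python
def estimate_inserted_faults(complexity_by_build, faults_per_unit_churn):
    """Sketch of the fault-surrogate idea: faults inserted between successive
    revisions are taken as proportional to the change ('churn') in relative
    complexity; the proportionality constant would be calibrated from
    historical fault data."""
    total = 0.0
    for prev, curr in zip(complexity_by_build, complexity_by_build[1:]):
        total += faults_per_unit_churn * abs(curr - prev)
    return total

builds = [100.0, 112.5, 118.0, 117.2]   # relative complexity per build
print(estimate_inserted_faults(builds, faults_per_unit_churn=0.3))
# churn = 12.5 + 5.5 + 0.8 = 18.8 -> about 5.6 estimated inserted faults
```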

Book ChapterDOI
24 Sep 1998
TL;DR: The amount of memory needed by the fault detectors for some specific tasks, and the number of views that a processor has to maintain to ensure quick detection, are studied to give the implementation designer hints concerning the techniques and resources required for implementing a task.
Abstract: In this paper we present failure detectors that detect transient failures, i.e., corruption of the system state without corruption of the processors' program. We distinguish the task, which is the problem to solve, from the implementation, which is the algorithm that solves the problem. A task is specified as a desired output of the distributed system. The mechanism used to produce this output is not a concern of the task but a concern of the implementation.