
Showing papers on "Fault detection and isolation published in 1998"


Book
01 Jan 1998
TL;DR: In this book, a fault detection and diagnosis framework for discrete linear systems is presented, built on analytical redundancy and parity equations and covering the design of structured and directional residual generators, robustness, statistical testing of residuals, and the diagnosis of additive faults via model identification and of multiplicative faults via parameter estimation.
Abstract: Introduction to fault detection and diagnosis; discrete linear systems; random variables; parameter estimation fundamentals; analytical redundancy concepts; parity equation implementation of residual generators; design for structured residuals; design for directional residuals; residual generation for parametric faults; robustness in residual generation; statistical testing of residuals; model identification for the diagnosis of additive faults; diagnosing multiplicative faults by parameter estimation

2,188 citations


Proceedings ArticleDOI
31 Aug 1998
TL;DR: In this article, the authors present a tutorial overview of induction motor signature analysis as a medium for fault detection, and introduce the fundamental theory, main results, and practical applications of motor signature analysis for the detection and localization of abnormal electrical and mechanical conditions that indicate, or may lead to, a failure of induction motors.
Abstract: This paper is intended as a tutorial overview of induction motor signature analysis as a medium for fault detection. The purpose is to introduce in a concise manner the fundamental theory, main results, and practical applications of motor signature analysis for the detection and localization of abnormal electrical and mechanical conditions that indicate, or may lead to, a failure of induction motors. The paper is focused on the so-called motor current signature analysis (MCSA), which utilizes the results of spectral analysis of the stator current. The paper is purposefully written without "state of the art" terminology for the benefit of practicing engineers in facilities today who may not be familiar with signal processing.
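The core of MCSA is inspecting the stator-current spectrum for fault-related components. As a rough illustration (not taken from the paper), the sketch below computes the spectrum with an FFT and compares the amplitude at the classic broken-rotor-bar sidebands (1 ± 2s)·f_supply against the supply-frequency component; the function name, supply frequency, and slip value are illustrative assumptions.

```python
import numpy as np

def broken_bar_indicator(current, fs, f_supply=50.0, slip=0.03):
    """Crude MCSA check: compare stator-current spectral amplitude at the
    classic broken-rotor-bar sidebands (1 +/- 2*slip)*f_supply against the
    supply-frequency component. All parameter values are illustrative."""
    n = len(current)
    window = np.hanning(n)
    spectrum = np.abs(np.fft.rfft(current * window))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    def amp_at(f):
        return spectrum[np.argmin(np.abs(freqs - f))]

    fundamental = amp_at(f_supply)
    sidebands = amp_at((1 - 2 * slip) * f_supply) + amp_at((1 + 2 * slip) * f_supply)
    # Ratio of sideband amplitude to the fundamental; larger values suggest a fault.
    return sidebands / fundamental
```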

612 citations


Proceedings ArticleDOI
01 Nov 1998
TL;DR: In this paper, two new algorithms, redundant vector elimination (RVE) and essential fault reduction (EFR), are proposed for generating compact test sets for combinational circuits under the single stuck-at fault model.
Abstract: This paper presents two new algorithms, Redundant Vector Elimination (RVE) and Essential Fault Reduction (EFR), for generating compact test sets for combinational circuits under the single stuck-at fault model, and a new heuristic for estimating the minimum single stuck-at fault test set size. These algorithms, together with the dynamic compaction algorithm, are incorporated into an advanced ATPG system for combinational circuits, called MinTest. MinTest found better lower bounds and generated smaller test sets than the previously published results for the ISCAS85 and the full-scan versions of the ISCAS89 benchmark circuits.

451 citations


Journal ArticleDOI
TL;DR: In this paper, the fundamental issues of detectability, reconstructability, and isolatability for multidimensional faults are studied using principal component analysis (PCA) and partial least squares.
Abstract: Fault detection and process monitoring using principal-component analysis (PCA) and partial least squares have been studied intensively and applied to industrial processes. The fundamental issues of detectability, reconstructability, and isolatability for multidimensional faults are studied. PCA is used to partition the measurement space into two orthogonal subspaces: a principal-component subspace and a residual subspace. Each multidimensional fault is also described by a subspace on which the fault displacement occurs. Fault reconstruction leads to fault identification and consists of finding a new vector in the fault subspace with minimum distance to the principal-component subspace. The unreconstructed variance is proposed to measure the reliability of the reconstruction procedure and to determine the PCA model for best reconstruction. Based on the fault subspace, fault magnitude, and the squared prediction error, necessary and sufficient conditions are provided to determine whether faults are detectable, reconstructable, and isolatable.
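A minimal numpy sketch of the PCA partition and the squared prediction error (SPE) used as a detection statistic may help make the subspace picture concrete. The empirical percentile threshold below is a simplification (the literature typically uses an analytical control limit), and all names are illustrative.

```python
import numpy as np

def fit_pca(X, n_pc):
    """Fit a PCA model on normal operating data X (samples x variables)."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # Right singular vectors give the principal directions.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_pc].T            # loadings spanning the principal-component subspace
    return mu, P

def spe(X, mu, P):
    """Squared prediction error: squared norm of the projection onto the residual subspace."""
    Xc = X - mu
    residual = Xc - Xc @ P @ P.T
    return np.sum(residual ** 2, axis=1)

# Usage sketch: threshold from normal data, then flag faulty samples.
# X_normal, X_test = ..., ...                       # illustrative placeholders
# mu, P = fit_pca(X_normal, n_pc=3)
# limit = np.percentile(spe(X_normal, mu, P), 99)   # empirical control limit
# faulty = spe(X_test, mu, P) > limit
```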

381 citations


Proceedings ArticleDOI
16 Mar 1998
TL;DR: An experiment in which the costs and benefits of minimizing test suites of various sizes for several programs are compared reveals that the fault detection capabilities of test suites can be severely compromised by minimization.
Abstract: Test suite minimization techniques attempt to reduce the cost of saving and reusing tests during software maintenance, by eliminating redundant tests from test suites. A potential drawback of these techniques is that in minimizing a test suite, they might reduce the ability of that test suite to reveal faults in the software. A study showed that minimization can reduce test suite size without significantly reducing the fault detection capabilities of test suites. To further investigate this issue, we performed an experiment in which we compared the costs and benefits of minimizing test suites of various sizes for several programs. In contrast to the previous study, our results reveal that the fault detection capabilities of test suites can be severely compromised by minimization.
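The specific minimization heuristics evaluated in the experiment are not reproduced here; the sketch below is a generic greedy, coverage-preserving reduction that illustrates the kind of procedure whose effect on fault detection such studies measure. Function and variable names are illustrative.

```python
def minimize_suite(coverage):
    """Greedy coverage-preserving reduction (illustrative, not the paper's heuristic).

    coverage: dict mapping test id -> set of covered requirements
              (statements, branches, ...).
    Returns a subset of test ids covering the same requirements.
    """
    uncovered = set().union(*coverage.values())
    selected = []
    while uncovered:
        # Pick the test covering the most still-uncovered requirements.
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        if not coverage[best] & uncovered:
            break
        selected.append(best)
        uncovered -= coverage[best]
    return selected

# Example: tests t1..t3 covering branches; t2 becomes redundant.
suite = {"t1": {1, 2, 3}, "t2": {2, 3}, "t3": {4}}
print(minimize_suite(suite))   # e.g. ['t1', 't3']
```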

312 citations


Journal ArticleDOI
TL;DR: The proposed approach is based on the interacting multiple-model (IMM) estimation algorithm, which is one of the most cost-effective adaptive estimation techniques for systems involving structural as well as parametric changes.
Abstract: An approach to detection and diagnosis of multiple failures in a dynamic system is proposed. It is based on the interacting multiple-model (IMM) estimation algorithm, which is one of the most cost-effective adaptive estimation techniques for systems involving structural as well as parametric changes. The proposed approach provides an integrated framework for fault detection, diagnosis, and state estimation. It is able to detect and isolate multiple faults substantially more quickly and more reliably than many existing approaches. Its superiority is illustrated in two aircraft examples for single and double faults of both sensors and actuators, in the forms of "total", "partial", and simultaneous failures. Both deterministic and random fault scenarios are designed and used for testing and comparing the performance fairly. Some new performance indices are presented. The robustness of the proposed approach to the design of model transition probabilities, fault modeling errors, and the uncertainties of noise statistics is also evaluated.
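Only the mode-probability bookkeeping at the heart of the IMM approach is sketched below: a Markov mixing of the prior model probabilities followed by a Bayesian update with each filter's innovation likelihood. The full IMM also mixes and re-propagates the individual filter states, which is omitted here; the transition matrix, variances, and decision threshold are illustrative.

```python
import numpy as np

def imm_mode_update(mu_prev, Pi, innovations, S):
    """One IMM-style mode-probability update.

    mu_prev     : prior model probabilities, shape (M,)
    Pi          : Markov model-transition matrix, shape (M, M)
    innovations : each filter's scalar innovation (residual) at this step, shape (M,)
    S           : each filter's innovation variance, shape (M,)
    """
    mu_pred = Pi.T @ mu_prev                                  # mixing / prediction step
    lik = np.exp(-0.5 * innovations**2 / S) / np.sqrt(2 * np.pi * S)
    mu = mu_pred * lik                                        # Bayesian update
    return mu / mu.sum()                                      # posterior model probabilities

# Illustrative: model 0 = no fault, model 1 = sensor fault.
Pi = np.array([[0.98, 0.02],
               [0.02, 0.98]])
mu = np.array([0.95, 0.05])
mu = imm_mode_update(mu, Pi, innovations=np.array([2.5, 0.1]), S=np.array([1.0, 1.0]))
fault_declared = mu[1] > 0.9   # simple decision rule on the fault-model probability
```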

291 citations


Journal ArticleDOI
TL;DR: Both the key principles and real application examples of a unified theory which allows us to perform the on-board incipient fault detection and isolation tasks involved in monitoring for condition-based maintenance are described.

260 citations


Journal ArticleDOI
TL;DR: A general framework for model-based fault detection and diagnosis of a class of incipient faults is developed and an automated fault diagnosis architecture using nonlinear online approximators with an adaptation scheme is designed and analyzed.
Abstract: Detection of incipient (slowly developing) faults is crucial in automated maintenance problems where early detection of worn equipment is required. In this paper, a general framework for model-based fault detection and diagnosis of a class of incipient faults is developed. The changes in the system dynamics due to the fault are modeled as nonlinear functions of the state and input variables, while the time profile of the failure is assumed to be exponentially developing. An automated fault diagnosis architecture using nonlinear online approximators with an adaptation scheme is designed and analyzed. A simulation example of a simple nonlinear mass-spring system is used to illustrate the results.
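A tiny simulation can illustrate the exponential fault time profile assumed in the paper: a scalar plant acquires a state-dependent fault term switched on through the profile 1 - exp(-theta*(t - t_fault)), and a simple nominal observer produces a residual that grows as the fault develops. The paper's online approximator and adaptation law are not reproduced; all constants below are illustrative.

```python
import numpy as np

dt, T = 0.01, 20.0
t = np.arange(0.0, T, dt)
t_fault, theta = 8.0, 0.5                # fault onset time and evolution rate (illustrative)

x, x_hat = 0.0, 0.0
residual = np.zeros_like(t)
for k, tk in enumerate(t):
    u = np.sin(0.5 * tk)
    # Incipient fault: nonlinear term phi(x) switched on with an exponential time profile.
    beta = (1.0 - np.exp(-theta * (tk - t_fault))) if tk >= t_fault else 0.0
    phi = 0.8 * x**2                     # fault function of the state (illustrative)
    x += dt * (-x + u + beta * phi)      # true plant
    x_hat += dt * (-x_hat + u + 5.0 * (x - x_hat))   # nominal observer
    residual[k] = abs(x - x_hat)

detected = t[residual > 0.05]            # crude fixed-threshold detection
print("fault detected at t =", detected[0] if detected.size else None)
```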

240 citations


Journal ArticleDOI
TL;DR: In this article, a unified approach to process and sensor fault detection, identification, and reconstruction via principal component analysis is presented, which partitions the measurement space into a principal component subspace where normal variation occurs, and a residual subspace that faults may occupy.

235 citations


Proceedings ArticleDOI
28 Jul 1998
TL;DR: A fault detection service is proposed that can be incorporated, in a modular fashion, into distributed computing systems, tools, or applications; it uses well-known techniques based on unreliable fault detectors to detect and report component failure, while allowing the user to trade off timeliness of reporting against false-positive rates.
Abstract: The potential for faults in distributed computing systems is a significant complicating factor for application developers. While a variety of techniques exist for detecting and correcting faults, the implementation of these techniques in a particular context can be difficult. Hence, we propose a fault detection service designed to be incorporated, in a modular fashion, into distributed computing systems, tools, or applications. This service uses well-known techniques based on unreliable fault detectors to detect and report component failure, while allowing the user to trade off timeliness of reporting against false positive rates. We describe the architecture of this service, report on experimental results that quantify its cost and accuracy, and describe its use in two applications: monitoring the status of system components of the GUSTO computational grid testbed, and supporting the NetSolve network-enabled numerical solver.
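The service builds on unreliable failure detectors; a minimal heartbeat-timeout sketch shows how the timeout parameter trades timeliness of reporting against false positives. The class and method names below are illustrative, not the paper's API.

```python
import time

class HeartbeatDetector:
    """Minimal unreliable failure detector: a component is suspected when no
    heartbeat has been seen for `timeout` seconds. A shorter timeout reports
    failures sooner but raises the false-positive rate (illustrative sketch)."""

    def __init__(self, timeout):
        self.timeout = timeout
        self.last_seen = {}

    def heartbeat(self, component):
        self.last_seen[component] = time.monotonic()

    def suspects(self):
        now = time.monotonic()
        return [c for c, t in self.last_seen.items() if now - t > self.timeout]

# Usage sketch:
# detector = HeartbeatDetector(timeout=5.0)
# detector.heartbeat("worker-1")        # called whenever a heartbeat arrives
# failed = detector.suspects()          # polled by the monitoring layer
```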

214 citations


Journal ArticleDOI
TL;DR: As the size of a test set is reduced, while the code coverage is kept constant, there is little or no reduction in the fault detection effectiveness of the new test set so generated.
Abstract: Given a test set T to test a program P, there are at least two attributes of T that determine its fault detection effectiveness. One attribute is the size of T measured as the number of test cases in T. Another attribute is the code coverage measured when P is executed on all elements of T. The fault detection effectiveness of T is the ratio of the number of faults guaranteed to result in program failure when P is executed on T to the total number of faults present in P. An empirical study was conducted to determine the relative importance of the size and coverage attributes in affecting the fault detection effectiveness of a randomly selected test set for some program P. Results from this study indicate that as the size of a test set is reduced, while the code coverage is kept constant, there is little or no reduction in the fault detection effectiveness of the new test set so generated. For the study reported, of the two attributes mentioned above, the code coverage attribute of a test set is more important than its size attribute. © 1998 John Wiley & Sons, Ltd.
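The effectiveness measure defined above is simply a ratio, which a few lines make concrete; the kill matrix below is a hypothetical example, not data from the study.

```python
def effectiveness(kill_matrix, total_faults):
    """Fault detection effectiveness of a test set, as defined in the abstract:
    (# faults guaranteed to cause a failure on some test) / (total faults in P)."""
    detected = set()
    for detected_by_test in kill_matrix.values():
        detected |= detected_by_test
    return len(detected) / total_faults

# Hypothetical example: 3 tests, program with 10 known faults.
kill_matrix = {"t1": {0, 1, 2}, "t2": {2, 3}, "t3": {7}}
print(effectiveness(kill_matrix, total_faults=10))   # 0.5
```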

Journal ArticleDOI
TL;DR: Simulation results on a microbial growth process are provided, which illustrate the relevance of the proposed FDI method.

Journal ArticleDOI
TL;DR: A fuzzy-neuro approach to real-time fault detection and classification in power transmission systems is presented; it can be used as an effective tool for high-speed digital relaying, as correct detection is achieved in less than 10 ms.
Abstract: This paper presents a new approach to real-time fault detection and classification in power transmission systems by using fuzzy-neuro techniques. Integration with neural network technology enhances the learning capabilities of fuzzy logic systems. The symmetrical components, in combination with the three line currents, are utilized to detect fault types such as single line-to-ground, line-to-line, double line-to-ground and three line-to-ground, and then to identify the faulty line. Computer simulation results are shown in this paper and indicate that this approach can be used as an effective tool for high-speed digital relaying, as correct detection is achieved in less than 10 ms.
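The symmetrical-component transformation mentioned above is a standard phasor computation; a minimal version is shown below with a crude unbalance indicator. The paper's actual fuzzy-neuro classifier is not reproduced, and the example values are illustrative.

```python
import numpy as np

def symmetrical_components(Ia, Ib, Ic):
    """Zero-, positive-, and negative-sequence components of three phase
    current phasors (complex numbers), using the standard transformation."""
    a = np.exp(2j * np.pi / 3)
    I0 = (Ia + Ib + Ic) / 3
    I1 = (Ia + a * Ib + a**2 * Ic) / 3
    I2 = (Ia + a**2 * Ib + a * Ic) / 3
    return I0, I1, I2

# Balanced currents: only the positive-sequence component is nonzero.
a = np.exp(2j * np.pi / 3)
I0, I1, I2 = symmetrical_components(1.0, a**2, a)
# An unbalanced fault (e.g. single line-to-ground) produces significant
# zero- and negative-sequence components relative to I1.
unbalance = (abs(I0) + abs(I2)) / abs(I1) if abs(I1) > 0 else float("inf")
```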

Journal ArticleDOI
TL;DR: In this article, important faults and their performance impacts for rooftop air conditioners were identified, and their impacts on several performance indices were quantified through transient testing for a range of conditions and fault levels.
Abstract: This paper identifies important faults and their performance impacts for rooftop air conditioners. The frequencies of occurrence and the relative costs of service for different faults were estimated through analysis of service records. Several of the important and difficult to diagnose refrigeration cycle faults were simulated in the laboratory. Also, the impacts on several performance indices were quantified through transient testing for a range of conditions and fault levels. The transient test results indicated that fault detection and diagnostics could be performed using methods that incorporate steady-state assumptions and models. Furthermore, the fault testing led to a set of generic rules for the impacts of faults on measurements that could be used for fault diagnoses. The average impacts of the faults on cooling capacity and coefficient of performance (COP) were also evaluated. Based upon the results, all of the faults are significant at the levels introduced, and should be detected and diagnosed ...

Journal ArticleDOI
TL;DR: In this article, a fault detection and diagnostics (FDD) and fault tolerant control (FTC) strategy for nonlinear stochastic systems in closed loops based on a continuous stirred tank reactor (CSTR) is presented.
Abstract: A novel simultaneous fault detection and diagnostics (FDD) and fault tolerant control (FTC) strategy for nonlinear stochastic systems in closed loops, based on a continuous stirred tank reactor (CSTR), is presented. The purpose of control is to track the reactant concentration setpoint. Instead of output feedback, we propose here to use proportional-integral-derivative (PID) state feedback, which is shown to be essential to achieve FTC against sensor faults. A new concept of "equivalent bias" is proposed to model the sensor faults. Both the states and the equivalent bias are estimated on-line by a pseudo separate-bias estimation algorithm. The estimated equivalent bias is then evaluated via a modified Bayes' classification based algorithm to detect and diagnose the sensor faults. Many kinds of sensor faults are tested by Monte Carlo simulations, which demonstrate that the proposed strategy has definite fault tolerant ability against sensor faults; moreover, the sensor faults can be detected, isolated, and estimated on-line simultaneously.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a monitoring methodology for a process with multiple operation modes based on hierarchical clustering and a super PCA model, which shows better performance than both a single PCA model for all operation modes and local PCA models developed for each operation mode.

Proceedings ArticleDOI
13 Oct 1998
TL;DR: A system architecture is presented for the general problem of failure detection and identification in mobile robots and the MMAE algorithm is demonstrated on a Pioneer I robot in the case of three different sensor failures.
Abstract: Multiple model adaptive estimation (MMAE) is used to detect and identify sensor failures in a mobile robot. Each estimator is a Kalman filter with a specific embedded failure model. The filter bank also contains one filter which has the nominal model embedded within it. The filter residuals are postprocessed to produce a probabilistic interpretation of the operation of the system. The output of the system at any given time is the confidence in the correctness of the various embedded models. As an additional feature the standard assumption that the measurements are available at a constant, common frequency, is relaxed. Measurements are assumed to be asynchronous and of varying frequency. The particularly difficult case of 'soft' sensor failure is also handled successfully. A system architecture is presented for the general problem of failure detection and identification in mobile robots. As an example, the MMAE algorithm is demonstrated on a Pioneer I robot in the case of three different sensor failures.

Journal ArticleDOI
Z.Q. Bo
TL;DR: In this article, a multi-channel filter unit is applied to the captured signals to extract desired bands of high frequency signals, and a comparison between the spectral energies of different bands of the filter outputs determines whether a fault is internal or external to the protected zone.
Abstract: This paper proposes a new noncommunication protection technique for transmission line protection. The technique relies, first, on the detection of fault-generated high-frequency current transient signals. A specially designed multi-channel filter unit is then applied to the captured signals to extract desired bands of high-frequency signals. Comparison between the spectral energies of different bands of the filter outputs determines whether a fault is internal or external to the protected zone. In addition to the cost saving from eliminating the need for a communication link, the technique also retains many advantages of 'transient based protection' technology, such as insensitivity to fault type, fault position, fault path resistance and fault inception angle. It is also not affected by CT saturation, the power frequency short-circuit level at the terminating busbar or the precise configuration of the source side networks.
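A simplified two-band version of the energy comparison can be sketched with Butterworth band-pass filters; the band edges, filter order, and decision ratio below are placeholders, not the paper's settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def band_energy(x, fs, lo, hi, order=4):
    """Energy of signal x in the [lo, hi] Hz band (zero-phase Butterworth filter).
    Assumes fs is well above 2*hi."""
    b, a = butter(order, [lo, hi], btype="bandpass", fs=fs)
    y = filtfilt(b, a, x)
    return float(np.sum(y ** 2))

def looks_internal(current, fs):
    """Crude internal/external discrimination by comparing the energies of two
    high-frequency bands of the fault transient. Band edges and the threshold
    ratio are illustrative placeholders only."""
    e_low_band = band_energy(current, fs, 20e3, 60e3)
    e_high_band = band_energy(current, fs, 60e3, 100e3)
    return e_high_band / (e_low_band + 1e-12) > 1.0
```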

Proceedings ArticleDOI
26 May 1998
TL;DR: Two primitive components, namely detectors and correctors, provide a basis for achieving the different types of fault tolerance properties required in computing systems and it is argued that they sometimes offer the potential for better designs than those obtained from extant methods.
Abstract: Two primitive components, namely detectors and correctors, provide a basis for achieving the different types of fault tolerance properties required in computing systems. We develop the theory of these primitive tolerance components, characterizing precisely their role in achieving the different types of fault tolerance. Also, we illustrate how they can be used to formulate extant design methods and argue that they sometimes offer the potential for better designs than those obtained from extant methods.
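As a toy rendering of the abstraction (not the paper's formal framework), a detector can be read as a predicate witnessing that a safety condition holds and a corrector as an action that re-establishes it; the sketch below composes both with an intolerant program step. All names and the example invariant are illustrative.

```python
def run_with_tolerance(step, detector, corrector, state, n_steps):
    """Toy composition of a program with a detector/corrector pair.

    step      : state -> state   (the fault-intolerant program, possibly faulty)
    detector  : state -> bool    (witnesses whether the invariant holds)
    corrector : state -> state   (re-establishes the invariant)
    """
    for _ in range(n_steps):
        state = step(state)
        if not detector(state):       # detection component
            state = corrector(state)  # correction component
    return state

# Illustrative use: invariant "counter stays non-negative" despite a faulty step.
final = run_with_tolerance(
    step=lambda s: -1 if s == 4 else s + 1,   # occasionally violates the invariant
    detector=lambda s: s >= 0,
    corrector=lambda s: 0,
    state=0,
    n_steps=10,
)
```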

Proceedings ArticleDOI
11 Oct 1998
TL;DR: A new methodology is presented for developing a diagnostic system using waveform signals with limited or with no prior fault information, and the system diagnostic performance is continuously improved as the knowledge of process faults is automatically accumulated during production.
Abstract: In this paper, a new methodology is presented for developing a diagnostic system using waveform signals with limited or with no prior fault information. The key issues studied in this paper are automatic fault detection, optimal feature extraction, optimal feature subset selection, and diagnostic performance assessment. By using this methodology, the system diagnostic performance is continuously improved as the knowledge of process faults is automatically accumulated during production. As a real example, the tonnage signal analysis for stamping process monitoring is provided to demonstrate the implementation of this methodology.

Proceedings ArticleDOI
16 Dec 1998
TL;DR: In this article, the authors describe an approach to sensor/actuator failure detection and identification and fault tolerant control based on the interacting multiple model (IMM) Kalman filter approach.
Abstract: We describe a novel approach to sensor/actuator failure detection and identification and fault tolerant control based on the interacting multiple model (IMM) Kalman filter approach. Failures are mapped into different (and unique) state-space model representations. The IMM algorithm computes (online) the posterior probability of each failure model, that can be interpreted as a failure indicator. The fault tolerant control approach presented is based on a multiple model control law, where an optimal controller is designed for each actuator failure model, and the control action is a combination of the individual outputs of each controller weighted by the posterior probability associated with that model. The new FDI-FTC approach was tested on a linear simulation of Bell Helicopter's Eagle-Eye unmanned air vehicle. All single sensor and actuator failures were detected and properly identified, as well as some simultaneous failures.

Journal ArticleDOI
TL;DR: Compared to other techniques for fault tolerance in FPGAs, these methods are shown to provide significantly greater yield improvement, and a 35 percent non-FT chip yield for a 16×16 FPGA is more than doubled.
Abstract: The very high levels of integration and submicron device sizes used in current and emerging VLSI technologies for FPGAs lead to higher occurrences of defects and operational faults. Thus, there is a critical need for fault tolerance and reconfiguration techniques for FPGAs to increase chip yields (with factory reconfiguration) and/or system reliability (with field reconfiguration). We first propose techniques utilizing the principle of node-covering to tolerate logic or cell faults in SRAM-based FPGAs. A routing discipline is developed that allows each cell to cover (that is, to be able to replace) its neighbor in a row. Techniques are also proposed for tolerating wiring faults by means of replacement with spare portions. The replaceable portions can be individual segments, or else sets of segments, called "grids". Fault detection in the FPGAs is accomplished by separate testing, either at the factory or by the user. If reconfiguration around faulty cells and wiring is performed at the factory (with laser-burned fuses, for example), it is completely transparent to the user. In other words, user configuration data loaded into the SRAM remains the same, independent of whether the chip is defect-free or whether it has been reconfigured around defective cells or wiring, a major advantage for hardware vendors who design and sell FPGA-based logic (e.g., glue logic in microcontrollers, video cards, DSP cards) in production-scale quantities. Compared to other techniques for fault tolerance in FPGAs, our methods are shown to provide significantly greater yield improvement, and a 35 percent non-FT chip yield for a 16×16 FPGA is more than doubled.

Journal ArticleDOI
TL;DR: A theory of detectors, correctors, and their interference free composition with intolerant programs is developed, which enables stepwise addition of components to provide tolerance to a new fault class while preserving the tolerances to the previously added fault classes.
Abstract: The concept of multitolerance abstracts problems in system dependability and provides a basis for improved design of dependable systems. In the abstraction, each source of undependability in the system is represented as a class of faults, and the corresponding ability of the system to deal with that undependability source is represented as a type of tolerance. Multitolerance thus refers to the ability of the system to tolerate multiple fault classes, each in a possibly different way. We present a component based method for designing multitolerance. Two types of components are employed by the method, namely detectors and correctors. A theory of detectors, correctors, and their interference free composition with intolerant programs is developed, which enables stepwise addition of components to provide tolerance to a new fault class while preserving the tolerances to the previously added fault classes. We illustrate the method by designing a fully distributed multitolerant program for a token ring.

Book
01 Jan 1998
TL;DR: The approach not only generalizes previous work in the literature on asymptotically optimal detection-isolation far beyond the relatively simple models treated but also suggests alternative performance criteria which are more tractable and more appropriate for general stochastic systems.
Abstract: This paper develops information-theoretic bounds for sequential multihypothesis testing and fault detection in stochastic systems. Making use of these bounds and likelihood methods, it provides a new unified approach to efficient detection of abrupt changes in stochastic systems and isolation of the source of the change upon its detection. The approach not only generalizes previous work in the literature on asymptotically optimal detection-isolation far beyond the relatively simple models treated but also suggests alternative performance criteria which are more tractable and more appropriate for general stochastic systems.
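The simplest member of the family of likelihood-based sequential rules that the paper generalizes is the one-sided CUSUM for a mean shift in i.i.d. Gaussian data; a minimal version is sketched below to give a feel for the detection statistic involved. The change magnitude, threshold, and simulated data are illustrative.

```python
import numpy as np

def cusum_detect(x, mu0=0.0, mu1=1.0, sigma=1.0, h=5.0):
    """One-sided CUSUM for a mean shift mu0 -> mu1 in i.i.d. Gaussian data.
    Returns the index at which the statistic first exceeds the threshold h,
    or None if no change is declared. Parameter values are illustrative."""
    llr = (mu1 - mu0) / sigma**2 * (x - (mu0 + mu1) / 2)  # per-sample log-likelihood ratio
    g = 0.0
    for k, inc in enumerate(llr):
        g = max(0.0, g + inc)          # recursive CUSUM statistic
        if g > h:
            return k
    return None

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 200), rng.normal(1, 1, 200)])  # change at k = 200
print(cusum_detect(data))
```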

Journal ArticleDOI
01 Apr 1998
TL;DR: The stability and performance properties of the proposed fault detection scheme in the presence of system failure are rigorously established, and simulation examples are presented to illustrate the ability of the neural network based fault diagnosis methodology described in this paper to detect and accommodate faults in a simple two-link robotic system.
Abstract: Fault detection, diagnosis, and accommodation play a key role in the operation of autonomous and intelligent robotic systems. System faults, which typically result in changes in critical system parameters or even system dynamics, may lead to degradation in performance and unsafe operating conditions. This paper investigates the problem of fault diagnosis in rigid-link robotic manipulators. A learning architecture, with neural networks as online approximators of the off-nominal system behaviour, is used for monitoring the robotic system for faults. The approximation (by the neural network) of the off-nominal behaviour provides a model of the fault characteristics which can be used for detection and isolation of faults. The stability and performance properties of the proposed fault detection scheme in the presence of system failure are rigorously established, and simulation examples are presented to illustrate the ability of the neural network based fault diagnosis methodology described in this paper to detect and accommodate faults in a simple two-link robotic system.

Patent
27 Feb 1998
TL;DR: In this paper, a distributed technique for isolating faults in a communication network is described, which can be used in a variety of network restoration paradigms, including but not limited to automatic protection switching and loopback protection, and provides proper network operation, reduced (or in some cases no) data loss, and bounded delay time regardless of the location of the attack or the physical span of the network.
Abstract: A technique for isolating faults in a communication network is described. The technique can be utilized in high speed communications networks such as all-optical networks (AONs). The technique is distributed, requires only local network node information and can localize attacks for a variety of network applications. The technique is particularly well suited to the problem of attack propagation which arises in AONs. The technique finds application in a variety of network restoration paradigms, including but not limited to automatic protection switching and loopback protection, and provides proper network operation with reduced (or, in some cases, no) data loss and bounded delay time regardless of the location of the attack or the physical span of the network. Since the technique is distributed, its associated delays do not depend on the number of nodes in the network; it thus avoids the computational complexity inherent to centralized approaches. It is therefore scalable and relatively rapid. Furthermore, the delays in attack isolation do not depend on the transmission delays in the network. A network management system can therefore offer hard upper bounds on the loss of data due to failures or attacks. Fault localization with centralized algorithms depends on transmission delays, which are proportional to the distance traversed by the data. Since the described technique for fault localization does not depend on centralized computations, it is equally applicable to local area networks, metropolitan area networks, or wide area networks.

Journal ArticleDOI
TL;DR: In this article, a two-stage adaptive line enhancer (ALE), which exploits two adaptive filter structures in series to obtain simultaneous spectral and temporal information, was proposed to aid the measurement and characterization of impulsive sound and vibration signals in machinery.
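The core of an adaptive line enhancer is an adaptive FIR predictor fed by a delayed copy of the input: its output tracks the narrow-band (periodic) content while the prediction error retains the broadband, impulsive content. A single-stage LMS version is sketched below; the paper's two-stage structure and parameter choices are not reproduced, and the tap count, delay, and step size are illustrative.

```python
import numpy as np

def ale_lms(x, n_taps=32, delay=1, mu=0.01):
    """Single-stage adaptive line enhancer using the LMS algorithm.
    Returns (periodic_estimate, residual); the residual emphasizes impulsive
    components. Tap count, delay, and step size are illustrative."""
    w = np.zeros(n_taps)
    y = np.zeros_like(x)
    e = np.zeros_like(x)
    for n in range(n_taps + delay, len(x)):
        u = x[n - delay - n_taps:n - delay][::-1]  # delayed reference vector
        y[n] = w @ u                               # prediction of the periodic part
        e[n] = x[n] - y[n]                         # residual (impulsive part)
        w += 2 * mu * e[n] * u                     # LMS weight update
    return y, e

# Usage sketch: sine buried in noise with occasional impulses.
# t = np.arange(0, 1, 1/8000)
# x = np.sin(2*np.pi*120*t) + 0.3*np.random.randn(t.size)
# periodic, impulsive = ale_lms(x)
```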

Proceedings ArticleDOI
18 May 1998
TL;DR: In this paper, the state of the art of residual generation techniques adopted in instrument fault detection and isolation is presented; both traditional and innovative methods are described, highlighting their advantages and limitations, and improving the performance of analytical redundancy techniques to better deal with high-dynamic systems and/or on-line applications is identified as the most pressing need on which to focus research efforts.
Abstract: The paper presents the state of the art of residual generation techniques adopted in instrument fault detection and isolation. Both traditional and innovative methods are described, highlighting their advantages and their limits. Improving the performance of analytical redundancy techniques to better deal with high-dynamic systems and/or on-line applications is pointed out as the most pressing need on which to focus research efforts.

Journal ArticleDOI
TL;DR: In this paper, a general model of faulty rolling element bearing vibration signals is established, and the envelope-autocorrelation function of the model is derived for the case of very low shaft speed in order to assess the efficacy of the envelope-autocorrelation technique for condition monitoring of such bearings.
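A minimal version of the envelope-autocorrelation computation (Hilbert-transform envelope followed by its autocorrelation, whose peaks sit at the repetition period of fault-induced impacts) is sketched below; the paper's bearing signal model and low-shaft-speed analysis are not reproduced.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_autocorrelation(x):
    """Envelope of x via the analytic signal, then its normalized one-sided
    autocorrelation. Peaks at non-zero lags indicate the repetition period of
    fault-induced impacts."""
    env = np.abs(hilbert(x))
    env = env - env.mean()
    ac = np.correlate(env, env, mode="full")[env.size - 1:]
    return ac / ac[0]

# Usage sketch (commented): look for the dominant peak of `ac` beyond the
# smallest lags; its lag (in samples / fs) estimates the impact repetition period.
# ac = envelope_autocorrelation(vibration)
```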

01 Jan 1998
TL;DR: This volume collects plenary, invited, and contributed papers on fault detection and diagnosis, with topics ranging from gas turbine and automotive system diagnostics to robust FDI, statistical and parameter-estimation techniques, fuzzy logic and neural networks, nonlinear observers, qualitative reasoning, and fault tolerant and reconfigurable control.
Abstract: Selected chapter headings: Plenary Paper I. Invited Papers - Fault Detection and Diagnosis for Gas Turbine Applications. Quantitative Model-based Approaches to FDI. Invited Papers - Robust FDI I. Statistical Techniques for FDI. Invited Papers - FDI for Induction Motors. Parameter Estimation for FDI: Theory and Applications. Fuzzy Logic and Neural Network Techniques in FDI I. Industrial Applications I. Nonlinear Observers for FDI. Qualitative Reasoning I. Integration of Qualitative and Quantitative Approaches. Invited Papers - Automotive System Diagnostics. Parity Equation Approaches. Power Systems Fault Diagnosis. Vibration Analysis in FDI. Fault Tolerant Control. Multiple-Model Approaches to FDI. Plant Management and Maintenance. Fault Tolerant and Reconfigurable Control. Human Factors in Fault Diagnosis and Supervision.