Showing papers on "Fault detection and isolation published in 1982"
••
TL;DR: It is shown that for most practical ALU implementations, including the carry-lookahead adders, the RESO technique will detect all errors caused by faults in a bit-slice or a specific subcircuit of the bit slice.
Abstract: A new method of concurrent error detection in Arithmetic and Logic Units (ALUs) is proposed. This method, called "Recomputing with Shifted Operands" (RESO), can detect errors in both arithmetic and logic operations. RESO uses the principle of time redundancy in detecting errors and achieves its error detection capability through the use of already existing replicated hardware in the form of identical bit slices. It is shown that for most practical ALU implementations, including carry-lookahead adders, the RESO technique will detect all errors caused by faults in a bit slice or a specific subcircuit of the bit slice. The fault model used is more general than the commonly assumed stuck-at fault model. Our fault model assumes that the faults are confined to a small area of the circuit and that the precise nature of the faults is not known. This model is very appropriate for VLSI circuits.
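The RESO idea can be sketched in a few lines: perform the operation once, recompute with both operands shifted left, shift the result back, and compare. The toy ALU and the stuck-bit fault injection below are illustrative inventions, not the paper's hardware model:

```python
def alu_add(a, b, fault_bit=None):
    """Toy ALU addition; optionally force result bit `fault_bit` to 1
    (a hypothetical stuck-at-1 fault in one bit slice)."""
    r = a + b
    if fault_bit is not None:
        r |= 1 << fault_bit
    return r

def reso_check(a, b, fault_bit=None):
    """Compute once, then recompute with operands shifted left by one bit
    and shift the result back. A fault confined to one bit slice corrupts
    different result bits in the two passes, so the results disagree."""
    first = alu_add(a, b, fault_bit)
    second = alu_add(a << 1, b << 1, fault_bit) >> 1
    return first == second, first

ok, result = reso_check(13, 29)                 # fault-free pass: results agree
detected, _ = reso_check(13, 29, fault_bit=3)   # faulty bit slice: mismatch
```

The time-redundancy trade is visible here: no duplicated ALU, just a second pass through the same (shifted) hardware.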
344 citations
••
TL;DR: The use of process computers and microcomputers permits the application of methods which detect process faults earlier than conventional limit and trend checks.
316 citations
••
TL;DR: In this article, the authors demonstrate how to detect instrument faults in non-linear time-varying processes that include uncertainties such as modelling error, parameter ambiguity, and input and output noise.
Abstract: We demonstrate how to detect instrument faults in non-linear time-varying processes that include uncertainties such as modelling error, parameter ambiguity, and input and output noise. The design of state estimation filters with minimum sensitivity to the uncertainties and maximum sensitivity to the instrument faults is described together with existence conditions for such filters. Simulations based on a non-linear chemical reactor with heat exchange and feedback control illustrate the validity of the proposed method.
235 citations
••
TL;DR: In this article, a microcomputer-based prototype relay was constructed and installed on a typical utility feeder to detect most staged faults while not indicating a false trip during a three month demonstration.
Abstract: This paper describes work performed by Texas A&M University on the detection of high impedance faults on distribution primary conductors. Some grounded distribution primary conductors may exhibit a very low fault current such that they may not be cleared by over-current protection. These faults may persist indefinitely, possibly causing a fire hazard or a hazard to humans by contact with an energized line. The paper begins with an examination of the high impedance fault problem from the perspective of system protection. The fault detection theory is presented next. The system utilizes a fault-generated increase in the 2-10 kHz component of feeder current for fault detection. EPRI funding enabled the verification and demonstration of this fault detection concept. Measurements were made on several faulted and unfaulted feeders to develop a representative data base of signals to which a relay would be subject. Analysis of these data provided a time and frequency domain signature of these faults. A microcomputer-based prototype relay was constructed and installed on a typical utility feeder. It successfully detected most staged faults while not indicating a false trip during a three month demonstration.
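The detection principle, an increase of energy in the 2-10 kHz component of feeder current, can be illustrated with a crude DFT band-energy measure. The sampling rate, signal amplitudes, and threshold below are invented for illustration and are not taken from the EPRI work:

```python
import math

def band_energy(samples, fs, f_lo=2000.0, f_hi=10000.0):
    """Sum of squared DFT magnitudes over bins between f_lo and f_hi Hz,
    a crude stand-in for the relay's 2-10 kHz current measurement."""
    n = len(samples)
    energy = 0.0
    for k in range(n // 2 + 1):
        if not f_lo <= k * fs / n <= f_hi:
            continue
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        energy += (re * re + im * im) / n
    return energy

fs = 20000                                 # assumed sampling rate; 1000 samples = 3 cycles of 60 Hz
t = [i / fs for i in range(1000)]
normal = [math.sin(2 * math.pi * 60 * x) for x in t]                       # load current only
arcing = [s + 0.2 * math.sin(2 * math.pi * 4000 * x) for s, x in zip(normal, t)]
e_normal, e_arcing = band_energy(normal, fs), band_energy(arcing, fs)
```

A relay would compare the in-band energy against a threshold learned from unfaulted feeder recordings, much as the paper's measurement campaign suggests.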
153 citations
••
TL;DR: This correspondence analyzes the computational complexity of fault detection problems for combinational circuits and proposes an approach to design for testability, and shows that for k-level (k ≥ 3) monotone/unate circuits these problems are still NP-complete, but that these are solvable in polynomial time for 2-level monot one/ unate circuits.
Abstract: In this correspondence we analyze the computational complexity of fault detection problems for combinational circuits and propose an approach to design for testability. Although major fault detection problems have been known to be in general NP-complete, they were proven for rather complex circuits. In this correspondence we show that these are still NP-complete even for monotone circuits, and thus for unate circuits. We show that for k-level (k ≥ 3) monotone/unate circuits these problems are still NP-complete, but that these are solvable in polynomial time for 2-level monotone/unate circuits. A class of circuits for which these fault detection problems are solvable in polynomial time is presented. Ripple-carry adders, decoder circuits, linear circuits, etc., belong to this class. A design approach is also presented in which an arbitrary given circuit is changed to such an easily testable circuit by inserting a few additional test-points.
151 citations
••
TL;DR: In this article, the authors describe the design and test results of a Kalman filtering based digital distance protection scheme, which is tested for all types of faults at different locations using simulated data.
Abstract: This paper describes the design and test results of a Kalman filtering based digital distance protection scheme. A brief review of voltage and current Kalman filters is followed by sensitivity analysis due to incorrect model parameters. The complete scheme for fault detection, classification, zone computation, and fault location calculation is tested for all types of faults at different locations using simulated data. The major parts of the scheme were tested using the available recorded data. The test results indicated that the proposed scheme has a mean operating time of a quarter of a cycle, is insensitive to incorrect model parameters, and imposes a low computational burden.
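A minimal scalar Kalman filter conveys the predict/update cycle underlying the relay's voltage and current filters; the random-walk state model and noise levels here are illustrative assumptions, not the paper's phasor models:

```python
import random

def kalman_scalar(z_seq, q=1e-4, r=0.5, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter: random-walk state, noisy measurement.
    The relay described above runs vector filters on voltage and current
    samples; this is only an illustrative reduction."""
    x, p = x0, p0
    for z in z_seq:
        p = p + q                # predict: state variance grows by q
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # correct with the innovation
        p = (1 - k) * p
    return x

random.seed(1)
estimate = kalman_scalar(10.0 + random.gauss(0, 0.7) for _ in range(500))
```

The same innovation term (z - x) that corrects the estimate is what a fault detection test monitors: an innovation that is persistently too large signals a fault.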
94 citations
••
TL;DR: In this article, an analysis of several protection coordination problems that may result from the integration of small wind turbines (less than 100 kVA) into the electric distribution system is presented, including the characteristic contributions of fault current, fault detection ability, effects of increased shortcircuit capacities, interaction with line reclosers, and islanding of dispersed generators.
Abstract: This paper is an analysis of several protection coordination problems that may result from the integration of small wind turbines (less than 100 kVA) into the electric distribution system. Such problems include the characteristic contributions of fault current, fault detection ability, effects of increased short-circuit capacities, interaction with line reclosers, and islanding of dispersed generators. Examples are shown using actual utility line and equipment data. The wind turbines considered include small synchronous and induction generators as well as generation sources utilizing line-commutated or force-commutated inverter interfaces.
74 citations
•
16 Jun 1982
TL;DR: In this article, the authors present a fault detection and isolation approach for flight control computers by cross channel comparing sensor data, command signals, model/surface position information and incorporating channel monitors for detecting generic or common mode flight control computer failures.
Abstract: Each of three redundant sensor sets (16, 18, 20) provides flight status data to a corresponding one of three flight control computers (22, 24, 26). Each of the computers shares sensor data as well as computed control surface command signals with all other system computers. The command signal outputs from two computers are transduced (34, 36) to mechanical commands which are combined and applied to the appropriate aircraft control surface. The remaining computer output is a mathematical model of the mechanical outputs of the other channels. Fault detection and isolation is accomplished by cross channel comparing sensor data, command signals, model/surface position information and by incorporating channel monitors for detecting generic or common mode flight control computer failures.
72 citations
••
TL;DR: In this paper, a failure detection scheme consists of a set of five Kalman filters and a logical means for combining estimated state variables with instrument signals to produce decision functions, which identify faults, as they occur, in each of five instruments.
Abstract: A technique of functional redundancy (as opposed to hardware redundancy) for detecting incipient failures in process instruments is applied to a simulation of the loss-of-fluid test pressurizer. The failure detection scheme consists of a set of five Kalman filters and a logical means for combining estimated state variables with instrument signals to produce decision functions, which identify faults, as they occur, in each of five instruments. Test data from the simulated plant show that prompt detection of both bias faults and high noise faults is possible during small transient fluctuations in the pressurizer from its nominal operating state.
57 citations
••
TL;DR: In this article, the authors monitor the control of a multiprocessor machine control system, in particular for software crashes, to prevent machine malfunctions, and monitor key operations such as the number of tasks to be completed by the control.
54 citations
••
IBM
TL;DR: The model developed in this paper allows the system designer to project the dynamic error-detection and fault-isolation coverages of the system as a function of the failure rates of components and the types and placement of error checkers, which has resulted in significant improvements to both detection and isolation in the IBM 3081 Processor Unit.
Abstract: As computer technologies advance to achieve higher performance and density, intermittent failures become more dominant than solid failures, with the result that the effectiveness of any diagnostic procedure which relies on reproducing failures is greatly reduced. This problem is solved at the system level by a new strategy of dynamic error detection and fault isolation based on error checking and analysis of captured information. The model developed in this paper allows the system designer to project the dynamic error-detection and fault-isolation coverages of the system as a function of the failure rates of components and the types and placement of error checkers, which has resulted in significant improvements to both detection and isolation in the IBM 3081 Processor Unit. The model has also resulted in new probabilistic isolation strategies based on the likelihood of failures. Our experiences with this model on several IBM products, including the 3081, show good correlation between the model and practical experiments.
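The flavour of such a coverage model can be shown with a failure-rate-weighted sum: each component contributes its checkers' coverage in proportion to its failure rate. The component data below are invented for illustration and are unrelated to the 3081:

```python
def detection_coverage(components):
    """Failure-rate-weighted error-detection coverage:
    coverage = sum(lambda_i * c_i) / sum(lambda_i)."""
    total = sum(lam for lam, _ in components)
    return sum(lam * cov for lam, cov in components) / total

# (failure rate in FITs, fraction of that component's errors its checkers catch)
# -- invented numbers for illustration
parts = [(1200, 0.98), (400, 0.90), (150, 0.60)]
system_coverage = detection_coverage(parts)
```

A designer can re-evaluate this figure after moving or adding checkers, which is how such a model guides checker placement.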
••
01 Jan 1982
TL;DR: It is shown that such a test set can be derived from the product term specification of the PLA using a set of simple, easily implementable algorithms.
Abstract: The problem of fault detection and test generation for programmable logic arrays (PLAs) is investigated. The effect of actual physical failures is viewed in terms of the logical changes of the product terms (growth, shrinkage, appearance and disappearance) constituting the PLA. Methods to generate a minimal single fault detection test set (T_S) from the product term specification of the PLA are presented. It is shown that such a test set can be derived using a set of simple, easily implementable algorithms. Methods to augment T_S in order to obtain a multiple fault detection test set (T_M) are also presented.
••
IBM
TL;DR: The concepts of automated diagnostics that were developed for and that are implemented in the IBM 3081 Processor Complex are presented and very good correlation between projected and measured effectiveness is found.
Abstract: The concepts of automated diagnostics that were developed for and that are implemented in the IBM 3081 Processor Complex are presented in this paper. Significant features of the 3081 diagnostics methodology are the capability to isolate intermittent as well as solid hardware failures, and the automatic isolation of a failure to the failing field-replaceable unit (FRU) in a high percentage of the cases. These features, which permit a considerable reduction in the time to repair a failure as compared to previous systems, are achieved by designing a machine which has a very high level of error-detection capability as well as special functions to facilitate fault isolation using Level-Sensitive Scan Design (LSSD), and which includes a Processor Controller to implement diagnostic microprograms. Intermittent failures are isolated by analyzing data captured at the detection of the error, and the analysis is concurrent with customer operations if the error is recoverable. A further improvement in the degree of isolation is achieved for solid failures by using automatically generated validation tests which detect and isolate stuck faults in the logic. The diagnostic package was designed to meet a specified value of isolation effectiveness, stated as the average number of FRUs replaced per failure. The technique used to estimate the isolation effectiveness of the diagnostic package and to evaluate proposals for improving isolation is described. Testing of the diagnostic package by hardware bugging indicates very good correlation between projected and measured effectiveness.
•
01 Mar 1982
TL;DR: Using the rectangular transform technique, a fast and accurate algorithm has been developed for transformer protection using the Fourier coefficients by addition and subtraction routines only, which is quite suitable for microprocessor-based protection of power systems.
Abstract: A simple algorithm for fast computation is an important requirement for the efficient application of microprocessors in power-system relaying. Using the rectangular transform technique, a fast and accurate algorithm has been developed for transformer protection. The algorithm generates the Fourier coefficients by addition and subtraction routines only. The method provides good discrimination between the nonzero differential current produced by transformer energisation and that produced by internal faults. The algorithm provides fast fault detection and yields a large blocking signal, using second or higher harmonic components. The digital simulation and test results on a variety of fault and inrush conditions, including the periodic inrush ones, clearly demonstrate the efficacy of this algorithm for the protection of the power transformer. Furthermore, owing to the relative simplicity and absence of time-consuming multiplication and division calculations, the algorithm is quite suitable for microprocessor-based protection of power systems.
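The rectangular-transform idea, correlating samples against ±1 square waves so that only additions and subtractions are needed, and the second-harmonic restraint can be sketched as follows. The waveform shapes and the 0.2 restraint ratio are illustrative assumptions, not the paper's exact algorithm:

```python
import math

def rect_coeffs(samples, harmonic):
    """Correlate one cycle of samples with +/-1 square waves at the given
    harmonic: additions and subtractions only, in the spirit of the
    rectangular transform (a sketch, not the paper's exact routines)."""
    n = len(samples)
    s = c = 0.0
    for i, x in enumerate(samples):
        phase = (harmonic * i) % n
        s += x if phase < n // 2 else -x                          # square "sine"
        c += x if phase < n // 4 or phase >= 3 * n // 4 else -x   # square "cosine"
    return abs(s) + abs(c)  # crude magnitude proxy

def inrush_blocked(cycle, ratio=0.2):
    """Block tripping when 2nd-harmonic content is large relative to the
    fundamental, the classic magnetising-inrush signature."""
    return rect_coeffs(cycle, 2) > ratio * rect_coeffs(cycle, 1)

# One 32-sample cycle each: pure fundamental (internal fault) vs.
# fundamental plus strong 2nd harmonic (energisation inrush).
internal_fault = [math.sin(2 * math.pi * i / 32) for i in range(32)]
inrush = [math.sin(2 * math.pi * i / 32) + 0.7 * math.sin(4 * math.pi * i / 32)
          for i in range(32)]
```

Because the reference waveforms take only the values ±1, each coefficient costs one add or subtract per sample, which is the multiplication-free property the abstract emphasises.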
••
TL;DR: The main tool of the approach is the Deduction Algorithm, which deduces internal values in the circuit under test based upon the test results that are used for fault diagnosis, which encompasses both fault detection and location.
Abstract: In this paper we present a new approach to fault diagnosis in sequential circuits based on an effect–cause analysis. This represents an extension of our previous work dealing with combinational circuits [1]. The main tool of our approach is the Deduction Algorithm, which deduces internal values in the circuit under test based upon the test results. The deduced values are used for fault diagnosis, which encompasses both fault detection and location.
••
TL;DR: NBS has investigated the feasibility of using reflectometry techniques in extruded polyethylene cables to detect and locate incipient fault sites in underground transmission lines.
Abstract: The location and repair of faults in underground transmission lines is a difficult and time-consuming operation. The Department of Energy has sponsored research in the development of instrumentation to detect and locate incipient fault sites. Some of these methods rely on reflectometry techniques in either the time or frequency domain. NBS has investigated the feasibility of using such methods in extruded polyethylene cables.
••
[...]
TL;DR: A new approach to test pattern generation which is particularly suitable for self-test is described, and all irredundant multiple as well as single stuck faults are detected.
Abstract: A new approach to test pattern generation which is particularly suitable for self-test is described. Required computation time is much less than for present-day automatic test pattern generation (ATPG) programs. Fault simulation is not required. More patterns may be obtained than from standard ATPG programs. However, fault coverage is much higher - all irredundant multiple as well as single stuck faults are detected. Test length is easily controlled. The test patterns are easily generated algorithmically either by program or hardware.
•
03 May 1982
TL;DR: In this paper, each redundant sensor signal is compensated with respect to a mid-value selected signal in equalizer circuitry, and a midvalue selector selects the midvalue one of the compensated input signals.
Abstract: Each of several redundant sensor signals is compensated with respect to a midvalue selected signal in equalizer circuitry. A midvalue selector selects the midvalue one of the compensated input signals. Fault detection circuitry identifies as a failure any compensated input signal which deviates from the midvalue selected signal by a specified limit and, in response thereto, isolates the identified signal and substitutes therefor as an input to the midvalue selector the selected midvalue signal.
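The midvalue selection and deviation-limit fault test (leaving aside the equalizer stage) can be sketched as follows; the limit value and signal values are illustrative:

```python
import statistics

def midvalue_monitor(channels, limit=0.5):
    """Select the midvalue of redundant channels; flag any channel whose
    deviation from it exceeds `limit` as failed, and substitute the
    selected midvalue for it (equalisation stage omitted)."""
    mid = statistics.median(channels)
    failed = [i for i, s in enumerate(channels) if abs(s - mid) > limit]
    substituted = [mid if i in failed else s for i, s in enumerate(channels)]
    return mid, failed, substituted

mid, failed, outputs = midvalue_monitor([10.1, 10.0, 13.7])  # channel 2 hard-over
```

Midvalue selection tolerates one arbitrary failure among three channels, since the failed signal can never be the one selected.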
••
TL;DR: The principles developed are applied to a simulation of the pitch axis autopilot of the A7 jet aircraft and indicate where some hardware redundancy can be introduced into the system to improve the fault detection capability of the DOS.
Abstract: Instrument failure detection using the dedicated observer scheme (DOS) depends on partial state observability through each instrument which is monitored. For instrument fault detection by the DOS technique, a quantitative measure of partial state observability is established for each instrument and used to determine a necessary condition on the output structure of the system. This measure, called the internal redundancy of the instrument, indicates the complexity of the logic required for failure detection, and it also indicates where some hardware redundancy can be introduced into the system to improve the fault detection capability of the DOS. The principles developed are applied to a simulation of the pitch axis autopilot of the A7 jet aircraft.
•
02 Feb 1982
TL;DR: In this article, the authors propose a scheme to improve the reliability of a radio system by switching the transmitter of a party station with a discrimination signal from the reception side when it is found, on the basis of outputs of a preceding-station fault detection means and an output and input circuit state detecting means, that the preceding-station fault is caused by the party-station transmitter.
Abstract: PURPOSE: To improve the reliability of a radio system by switching the transmitter of a party station with a discrimination signal from the reception side when it is found, on the basis of outputs of a preceding-station fault detection means and an output and input circuit state detecting means, that the preceding-station fault is caused by the party-station transmitter. CONSTITUTION: A transmission device S operates the 1st transmitter 1 as the in-use transmitter; if a fault occurs which cannot be detected by fault detection part 1-0, the fault detection parts 15-0 and 16-0 of the 1st and 2nd receivers 15 and 16 of a reception device R detect the error. Their detection signals are input to a fault discrimination part 17, which discriminates the fault of transmitter 1 on the basis of the input signal level from the preceding station. The result of the discrimination is sent to the reception device R' of the preceding station via the transmission device S' of this side station. This signal is transferred to the switching control part 3 of device S, and under the control of control part 3 a relay driving circuit 4 is driven to achieve transmission by switching from transmitter 1 to the 2nd, standby transmitter 2 through a switch 4, thus improving the reliability of the radio system.
01 Jul 1982
TL;DR: This RFC describes the portion of fault isolation and recovery which is the responsibility of the host.
Abstract: This RFC describes the portion of fault isolation and recovery which is the responsibility of the host.
•
13 Dec 1982
TL;DR: In this paper, a fault detection and clearance system for paralleled, electrically similar power transmission lines is proposed, wherein at each end of the line matched separate directional relays, respectively connected to trip associated fault clearance circuit breakers, have polarized windings and operate windings.
Abstract: A fault detection and clearance system for paralleled electrically similar power transmission lines wherein at each end of the line matched separate directional relays respectively connected to trip associated fault clearance circuit breakers have polarized windings and operate windings. The polarized windings are energized by an alternating current proportional to the summation while the operate windings are energized by an alternating current proportional to the difference (phase considered) of fault currents going into the breakers in both lines at the adjacent end in the event of a fault.
••
22 Nov 1982
TL;DR: In this paper, a three dimensional representation of a part is reconstructed from multiple camera views; measurements are then collected from this three dimensional data and can be used to detect faults in the manufacturing process.
Abstract: A three dimensional representation of a part is reconstructed from multiple camera views. Measurements are then collected from this three dimensional data and can be used to detect faults in the manufacturing process. The manufacturing faults are detected as visual abnormalities in the final parts. These abnormalities correspond to error conditions in earlier phases of manufacturing and could represent equipment failure, equipment wear or the use of a faulty control algorithm. A gage station which collects visual information is discussed. The algorithm which converts the visual information into a three dimensional representation of the part is presented and compared to other similar reconstruction strategies. Once the data have been collected and reconstructed, measurements are taken and correlated with possible error conditions. New correlations between the part measurements and manufacturing errors can be added to the control system as problems occur. For example, hammer wear in an open-die forge can be discovered by measuring the length of a work piece after it was struck. Along with each causal relationship there is a suggested course of action which is intended to be an immediate remedy for the error condition. In the forge example, a simple corrective action would be to move the hammers closer together to account for their wear. This makes it possible for the overall system to approach immunity to catastrophic errors while minimizing the number of defective parts.
••
TL;DR: In this article, the authors present the results of a study of non-invasive fault location techniques in gas insulated substations, which can be applied to existing substation without retrofitting.
Abstract: This paper presents the results of a study of non-invasive fault location techniques in gas insulated substations. The applications of an infrared thermal imaging system and temperature sensitive paint to locate in-service faults with power follow-through were demonstrated. The infrared fault location system can be applied to existing substations without retrofitting. Detection sensitivities and system parameters are presented.
•
17 May 1982
TL;DR: In this paper, a fault signal indicating system is described which contains a bistable memory device that changes from a first to a second stable state in response to a fault signal.
Abstract: A fault signal indicating system which contains a bistable memory device which changes from a first to a second stable state in response to a fault signal. Simple logic circuitry is provided for disabling the memory device while it is in the second state so that it will not be affected by secondary fault signals. In systems where a plurality of bistable memory devices are used, each being associated with a respective one of plural fault signal outputs, logic circuitry is employed to disable predetermined ones of the memories in response to any one of them being in the second state. Voltage regulation and power interruption circuitry is provided to disable the memory devices when power supply voltages fall below predetermined levels. Circuitry is provided for resetting all of the memories simultaneously to the first stable state.
••
14 Jun 1982
Abstract: Computer-aided diagnostic techniques are successfully applied to on-line signal validation in an operating nuclear reactor. To avoid installation of additional redundant sensors for the sole purpose of fault isolation, a real-time model of nuclear instrumentation and the thermal-hydraulic process in the primary coolant loop is developed and experimentally validated. The model provides analytically redundant information sufficient for isolation of failed sensors as well as for detection of abnormal plant operation and component malfunctioning.
••
TL;DR: In this paper, a transformation has been introduced allowing an estimation of the fault coverage of a particular test strategy to be made without assumptions about the characteristics of likely faults, which has been used for the design of fault experiments capable of detecting intermittent faults having specified transition probabilities.
Abstract: Fault-detection experiments for electronic components and systems are usually designed for the detection of permanent or data-dependent faults. Treatment of intermittent faults in the literature has been restricted to the use of Markov models for the design of fault experiments capable of detecting intermittent faults having specified transition probabilities. In the letter a transformation has been introduced allowing an estimation of the fault coverage of a particular test strategy to be made without assumptions about the characteristics of likely faults.
••
TL;DR: In this paper, the authors consider that a fault is actually a change in relationship between some variables, and they use on-line identification and thus observe model parameters to detect faults.
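One common way to "observe model parameters" on-line is recursive least squares with a forgetting factor: a step change in the estimated parameter reveals a change in the input-output relationship. The sketch below assumes a simple static gain model and is not the authors' specific identifier:

```python
def rls_gain(u_seq, y_seq, lam=0.9):
    """Recursive least squares with forgetting factor lam, estimating the
    gain k in y ~ k*u. A shift in the estimate flags a change in the
    u -> y relationship (an illustrative identifier only)."""
    k_hat, p = 0.0, 100.0
    history = []
    for u, y in zip(u_seq, y_seq):
        g = p * u / (lam + p * u * u)   # update gain
        k_hat += g * (y - k_hat * u)    # parameter estimate
        p = (p - g * u * p) / lam       # forgetting keeps p, hence g, alive
        history.append(k_hat)
    return history

u = [1.0] * 40
y = [2.0] * 20 + [1.2] * 20   # process gain drops at sample 20: the "fault"
est = rls_gain(u, y)
```

A monitor would then threshold the parameter trajectory, declaring a fault when the estimate leaves a band around its nominal value.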
•
08 Jan 1982
TL;DR: In this paper, the fault detection rate of the module where a fault occurred first is measured with counters so that, even if faults occur in two modules, a normal module can be selected with high probability in a majority circuit.
Abstract: PURPOSE: To select a normal module with high probability by measuring the fault detection rate of the module where a fault occurred first, even if faults occur in two modules. CONSTITUTION: If a fault occurs in, for example, module 3 of modules 1-3 having the same function, the output of majority circuit 7 coincides with the outputs of modules 1 and 2; the discordance is therefore detected by exclusive-OR gate 10, and the fact that module 3 is faulty is stored in FF 16. A counter 20 then counts the frequency of subsequent fault detections. If a fault later occurs in either of the remaining two modules and their output values disagree, the output value of the already-faulty module 3 is referred to. At this time, the values of counter 20 and counter 21 are compared by comparator 24; if the value of counter 20 is half or less of the value of counter 21, the output value of module 3 is used for the comparison as it is, because the fault detection rate of module 3 is less than 1/2.
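The counting scheme above can be sketched as a majority voter that tallies, per module, how often its output disagrees with the vote; the class below is an illustrative reduction, not the patent's circuit:

```python
class MajorityMonitor:
    """Vote over three redundant module outputs and count, per module, how
    often it disagrees with the majority; the module with the highest
    count is the one to distrust when a second fault appears."""
    def __init__(self):
        self.disagreements = [0, 0, 0]

    def vote(self, outputs):
        majority = max(set(outputs), key=outputs.count)
        for i, out in enumerate(outputs):
            if out != majority:
                self.disagreements[i] += 1
        return majority

    def suspect(self):
        return max(range(3), key=lambda i: self.disagreements[i])

m = MajorityMonitor()
for step in range(10):
    good = step % 2
    bad = 1 - good if step % 3 == 0 else good   # module 3 glitches intermittently
    m.vote([good, good, bad])
```

The per-module disagreement counts play the role of counters 20 and 21 in the abstract: comparing them estimates which module's fault detection rate is higher.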