
Showing papers on "Stuck-at fault" published in 2001


Journal ArticleDOI
TL;DR: A differential geometric approach to the problem of fault detection and isolation for nonlinear systems, in which a necessary solvability condition is derived in terms of an unobservability distribution that is computable by means of suitable algorithms.
Abstract: We present a differential geometric approach to the problem of fault detection and isolation for nonlinear systems. A necessary condition for the problem to be solvable is derived in terms of an unobservability distribution, which is computable by means of suitable algorithms. The existence and regularity of such a distribution implies the existence of changes of coordinates in the state and in the output space which induce an "observable" quotient subsystem unaffected by all fault signals but one. For this subsystem, a fault detection filter is designed.
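As a rough sketch of the setting (notation mine, not quoted from the paper), the systems considered in this line of work are affine in the control inputs, in the fault to be detected, and in the remaining disturbances and faults:

$$
\dot{x} = g_0(x) + \sum_{i=1}^{k} g_i(x)\,u_i + \ell(x)\,m(t) + p(x)\,w(t), \qquad y = h(x),
$$

where m(t) is the fault of interest and w(t) collects disturbances and all other faults. The unobservability distribution mentioned above must contain the directions p(x) but not \ell(x); when it exists and is regular, the quotient coordinates give an observable subsystem driven by m alone, on which the fault detection filter is built.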

802 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a fault detection approach based on the vectors of movement of a fault in both the model space and the residual space, which are then compared to the corresponding vector directions of known faults in the fault library.

359 citations


Journal ArticleDOI
TL;DR: A fault tolerant controller is established which guarantees the stability of the closed loop system and the proposed algorithm is applied to a combined pH and consistency control system of a pilot paper machine, to show the effectiveness of the proposed approach.
Abstract: This paper presents a set of algorithms for fault diagnosis and a fault tolerant control strategy for affine nonlinear systems subjected to an unknown time-varying fault vector. First, the design of the fault diagnosis filter is performed using nonlinear observer techniques, where the system is decoupled through a nonlinear transformation and an observer is used to generate the required residual signal. By introducing an extra input to the observer, a direct estimate of the time-varying fault is obtained when the residual is controlled, by this extra input, to zero. The stability of this observer is proved and some relevant sufficient conditions are obtained. Using the estimated fault vector, a fault tolerant controller is established which guarantees the stability of the closed loop system. The proposed algorithm is applied to a combined pH and consistency control system of a pilot paper machine, where simulations are performed to show the effectiveness of the proposed approach.
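A schematic way to read the estimation step (notation mine, assuming the fault enters affinely after the decoupling transformation): run an observer with an extra correction input v,

$$
\dot{\hat z} = \hat f(\hat z, u) + K\,(y - \hat y) + E\,v, \qquad r = y - \hat y,
$$

and adjust v (for example by integral action on r) until the residual r is driven to zero; at that point the term E v reproduces the effect of the unknown fault term E\,\theta(t), so v itself serves as the direct estimate of the time-varying fault.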

167 citations


Journal ArticleDOI
TL;DR: In this article, a fault location algorithm based on phasor measurement units (PMUs) for series compensated lines is proposed, which does not utilize the series device model or knowledge of the operation mode of the series devices to compute the voltage drop during the fault period.
Abstract: This work presents a new fault location algorithm based on phasor measurement units (PMUs) for series compensated lines. Traditionally, the voltage drop of a series device is computed by the device model in the fault locator of series compensated lines, but with this approach errors are induced by the inaccuracy of the series device model or the uncertain operation mode of the series device. The proposed algorithm does not utilize the series device model or knowledge of the operation mode of the series device to compute the voltage drop during the fault period. Instead, the proposed algorithm uses a two-step algorithm, consisting of a prelocation step and a correction step, to calculate the voltage drop and fault location. The proposed technique can be easily applied to any series FACTS compensated line. EMTP-generated data using a 300 km, 345 kV transmission line has been used to test the accuracy of the proposed algorithm. The tested cases include various fault types, fault locations, fault resistances, fault inception angles, etc. The study also considers the effect of various operation modes of the compensated device during the fault period. Simulation results indicate that the proposed algorithm can achieve up to 99.95% accuracy for most tested cases.
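For context, a hedged reminder of the classical two-terminal relation for an uncompensated line with lumped series impedance Z_L and synchronized phasors V_S, I_S and V_R, I_R at the two ends (this is the kind of relation a prelocation step can build on; it is not the paper's formulation, which must also account for the series device). Equating the fault-point voltage computed from either end for a fault at per-unit distance m gives

$$
V_S - m\,Z_L I_S = V_R - (1-m)\,Z_L I_R
\quad\Longrightarrow\quad
m = \frac{V_S - V_R + Z_L I_R}{Z_L\,(I_S + I_R)} .
$$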

161 citations


Proceedings ArticleDOI
30 Oct 2001
TL;DR: The proposed technique handles both stuck-at and timing failures (transition faults and hold time faults) and improves the diagnostic resolution by ranking the suspect scan cells inside a range of scan cells.
Abstract: In this paper, we present a scan chain fault diagnosis procedure. The diagnosis for a single scan chain fault is performed in three steps. The first step uses special chain test patterns to determine both the faulty chain and the fault type in the faulty chain. The second step uses a novel procedure to generate special test patterns to identify the suspect scan cell within a range of scan cells. Unlike previously proposed methods that restrict the location of the faulty scan cell only from the scan chain output side, our method restricts the location of the faulty scan cell from both the scan chain output side and the scan chain input side. Hence the number of suspect scan cells is reduced significantly in this step. The final step further improves the diagnostic resolution by ranking the suspect scan cells inside this range. The proposed technique handles both stuck-at and timing failures (transition faults and hold time faults). The extension of the procedure to diagnose multiple faults is discussed. The experimental results show the effectiveness of the proposed method.
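A hypothetical sketch (names and decision rules mine, not the paper's code) of the kind of decision the first step makes from a flush-type chain test, in which every observed bit has passed through the faulty cell:

```python
def classify_chain_fault(expected_out, observed_out):
    """Hypothetical sketch: guess the fault type in a faulty scan chain from a
    flush chain-test response; both arguments are bit strings."""
    if observed_out == expected_out:
        return "chain appears fault-free"
    if set(observed_out) == {"0"}:
        return "stuck-at-0-type chain fault"   # every shifted bit forced to 0
    if set(observed_out) == {"1"}:
        return "stuck-at-1-type chain fault"   # every shifted bit forced to 1
    # The pattern gets through but is corrupted around transitions, which
    # points at a timing failure (transition fault or hold-time fault).
    return "timing-type chain fault"
```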

147 citations


Journal ArticleDOI
TL;DR: The Poirot tool isolates and diagnoses defects through fault modeling and simulation, and functional and sequential test pattern applications show success with circuits having a high degree of observability.
Abstract: The Poirot tool isolates and diagnoses defects through fault modeling and simulation. Along with a carefully selected partitioning strategy, functional and sequential test pattern applications show success with circuits having a high degree of observability.

100 citations


Proceedings ArticleDOI
10 Nov 2001
TL;DR: A source-to-source compiler supporting a software-implemented hardware fault tolerance approach is proposed, based on a set of source code transformation rules, which hardens a program against transient memory errors by introducing software redundancy.
Abstract: Over the last years, an increasing number of safety-critical tasks have been demanded of computer systems. In particular, safety-critical computer-based applications are hitting market areas where cost is a major issue, and thus solutions are required which combine fault tolerance with low cost. A source-to-source compiler supporting a software-implemented hardware fault tolerance approach is proposed, based on a set of source code transformation rules. The proposed approach hardens a program against transient memory errors by introducing software redundancy: every computation is performed twice and the results are compared, and control flow invariants are checked explicitly. By exploiting the tool's capabilities, several benchmark applications have been hardened against transient errors. Fault injection campaigns have been performed to evaluate the fault detection capability of the hardened applications. In addition, we analyzed the proposed approach in terms of space and time overheads.
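As a hedged illustration of the duplication-and-comparison rules (a hand-written Python analogue, not the tool's actual C-level transformations): every variable is duplicated, every operation is executed on both copies, and the copies are compared before a result is used.

```python
def fault_detected():
    # In a hardened program this would branch to an error handler.
    raise RuntimeError("mismatch between duplicated computations")

def hardened_sum(data):
    acc_a, acc_b = 0, 0                    # duplicated accumulator
    for x in data:
        x_a, x_b = x, x                    # duplicated read of the operand
        acc_a += x_a
        acc_b += x_b
        if x_a != x_b or acc_a != acc_b:   # consistency check after each step
            fault_detected()
    return acc_a
```

The control-flow rules mentioned in the abstract work in a similar spirit: each basic block typically gets a signature that is checked explicitly on entry, so an illegal jump caused by a transient error is caught as well.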

97 citations


Proceedings ArticleDOI
20 May 2001
TL;DR: This work introduces the concept of fail-stutter fault tolerance, a realistic and yet tractable fault model that accounts for both absolute failure and a new range of performance failures common in modern components.
Abstract: Traditional fault models present system designers with two extremes: the Byzantine fault model, which is general and therefore difficult to apply, and the fail-stop fault model, which is easier to employ but does not accurately capture modern device behavior. To address this gap, we introduce the concept of fail-stutter fault tolerance, a realistic and yet tractable fault model that accounts for both absolute failure and a new range of performance failures common in modern components. Systems built under the fail-stutter model will likely perform well, be highly reliable and available, and be easier to manage when deployed.

83 citations


Proceedings ArticleDOI
04 Nov 2001
TL;DR: A method for identifying X inputs of test vectors in a given test set by using fault simulation and procedures similar to implication and justification of ATPG algorithms is proposed.
Abstract: Given a test set for stuck-at faults, some of the primary input values may be changed to the opposite logic values without losing fault coverage. We can regard such input values as don't cares (X). In this paper, we propose a method for identifying the X inputs of the test vectors in a given test set. Although there are generally many combinations of X inputs in a test set, the proposed method finds one that includes as many X inputs as possible, using fault simulation and procedures similar to the implication and justification steps of ATPG algorithms. Experimental results for the ISCAS benchmark circuits show that approximately 66% of the inputs of uncompacted test sets could be X on average. Even for compacted test sets, the method found that approximately 47% of the inputs are X. Finally, we discuss how logic values can be reassigned to the identified X inputs in several applications to make the test vectors more desirable.
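A brute-force sketch of the underlying test (hypothetical; fault_coverage stands in for a fault simulator, and the actual method uses fault simulation together with implication/justification rather than exhaustive re-simulation):

```python
def identify_x_inputs(tests, fault_coverage):
    """tests: list of input vectors as bit strings; fault_coverage: callable
    returning the coverage of a test set (stand-in for a fault simulator)."""
    baseline = fault_coverage(tests)
    x_bits = []
    for ti, vector in enumerate(tests):
        for bi, bit in enumerate(vector):
            flipped = vector[:bi] + ("1" if bit == "0" else "0") + vector[bi + 1:]
            trial = tests[:ti] + [flipped] + tests[ti + 1:]
            if fault_coverage(trial) >= baseline:
                x_bits.append((ti, bi))    # this value can be treated as X
    return x_bits
```

Bits identified one at a time this way are not automatically compatible when several of them are freed simultaneously, which is one reason the particular combination of X inputs that gets reported matters.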

80 citations


Proceedings ArticleDOI
24 Oct 2001
TL;DR: Different VHDL-based fault injection techniques (simulator commands, saboteurs and mutants) are compared for the validation of fault tolerant systems, and preliminary results show that coverages for transient faults can be obtained quite accurately with any of the three techniques.

Abstract: This paper compares different VHDL-based fault injection techniques (simulator commands, saboteurs and mutants) for the validation of fault tolerant systems. Some extensions and implementation designs of these techniques have been introduced. In addition, a wide set of less common fault models has been implemented. As an application, a fault tolerant microcomputer system has been validated. Faults have been injected using an injection tool developed by the GSTF. We have injected both transient and permanent faults into the system model, using two different workloads. We have studied the pathology of the propagated errors, measured their latencies, and calculated both detection and recovery coverages. Preliminary results show that coverages for transient faults can be obtained quite accurately with any of the three techniques. This enables the use of different abstraction level models for the same system. We have also verified significant differences in implementation and simulation cost between the studied injection techniques.

78 citations


Proceedings ArticleDOI
13 Mar 2001
TL;DR: The analysis shows the existence of previously defined memory fault models and establishes new ones; it also investigates the concept of dynamic faulty behavior and establishes its importance for memory devices.
Abstract: Fault analysis of memory devices using defect injection and simulation is becoming increasingly important as the complexity of memory faulty behavior increases. In this paper this approach is used to study the effects of opens and shorts on the faulty behavior of embedded DRAM (eDRAM) devices produced by Infineon Technologies. The analysis shows the existence of previously defined memory fault models, and establishes new ones. The paper also investigates the concept of dynamic faulty behavior and establishes its importance for memory devices. Conditions to test the newly established fault models are also given.

Journal ArticleDOI
TL;DR: The proposed improvement allows us to drop tests without simulating them based on the fact that the faults they detect will be detected by tests that will be simulated later, hence the name of the improved procedure: forward-looking fault simulation.
Abstract: Fault simulation of a test set in an order different from the order of generation (e.g., reverse- or random-order fault simulation) is used as a fast and effective method to drop unnecessary tests from a test set in order to reduce its size. We propose an improvement to this type of fault simulation process that makes it even more effective in reducing the test-set size. The proposed improvement allows us to drop tests without simulating them based on the fact that the faults they detect will be detected by tests that will be simulated later, hence the name of the improved procedure: forward-looking fault simulation. We present experimental results to demonstrate the effectiveness of the proposed improvement.
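For orientation, a minimal sketch of the baseline being improved, reverse-order fault simulation with fault dropping (my code; simulate stands in for a fault simulator). The forward-looking improvement then drops some tests without even calling the simulator, because the faults they detect are known to be detected by tests that will be simulated later.

```python
def reverse_order_compaction(tests, all_faults, simulate):
    """simulate(test, faults) returns the subset of `faults` detected by `test`."""
    undetected = set(all_faults)
    kept = []
    for test in reversed(tests):      # reverse order of test generation
        newly = simulate(test, undetected)
        if newly:                     # keep the test only if it adds coverage
            kept.append(test)
            undetected -= newly       # fault dropping
    kept.reverse()
    return kept
```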

Proceedings ArticleDOI
15 May 2001
TL;DR: In this paper, a ground-fault relay is used to detect the fault, identify the faulted phase, and measure the electrical distance away from the substation, where the remote fault indicators are used to visually indicate where the fault is located.
Abstract: One of the most common and difficult problems to solve in industrial power systems is the location and elimination of the ground fault. Ground faults that occur in ungrounded and high-resistance grounded systems do not draw enough current to trigger circuit breaker or fuse operation, making them difficult to localize. Techniques currently used to track down faults are time consuming and cumbersome. A new approach developed for ground-fault localization on ungrounded and high-resistance grounded low-voltage systems is described. The system consists of a novel ground-fault relay that operates in conjunction with low-cost fault indicators permanently mounted in the circuit. The ground-fault relay employs digital signal processing techniques to detect the fault, identify the faulted phase, and measure the electrical distance away from the substation. The remote fault indicators are used to visually indicate where the fault is located. The resulting system provides a fast, easy, economical, and safe detection system for ground-fault localization.

Proceedings ArticleDOI
30 Oct 2001
TL;DR: The study focuses on the location and distribution of probable bridging defects and attempts to explain the findings in the context of the characteristics of the design and its implementation.
Abstract: Presents an experimental study of bridging fault locations on the Intel Pentium (TM) 4 CPU as determined by an inductive fault analysis tool. The study focuses on the location and distribution of probable bridging defects and attempts to explain the findings in the context of the characteristics of the design and its implementation. The coverage obtained against these faults by manually generated functional patterns is compared against that achieved by ATPG vectors.

Proceedings ArticleDOI
30 Sep 2001
TL;DR: In this article, the authors developed the foundations of a technique for detection and categorization of dynamic/static eccentricities and bar/end-ring connector breakages in squirrel-cage induction motors.
Abstract: This paper develops the foundations of a technique for detection and categorization of dynamic/static eccentricities and bar/end-ring connector breakages in squirrel-cage induction motors that is not based on the traditional Fourier transform frequency-domain spectral analysis concepts. Hence, this approach can distinguish between the "fault signatures" of each of the following faults: eccentricities, broken bars, and broken end-ring connectors in such induction motors. Furthermore, the techniques presented here can extensively and economically predict and characterize faults from the induction machine adjustable-speed drive design data without the need to have had actual fault data from field experience. This is done through the development of dual-track studies of fault simulations and, hence, simulated fault signature data. These studies are performed using our proven time-stepping coupled finite-element-state-space method to generate fault case performance data, which contain phase current waveforms and time-domain torque profiles. Then, from this data, the fault cases are classified by their inherent characteristics, so-called "signatures" or "fingerprints." These fault signatures are extracted or "mined" here from the fault case data using our novel time-series data mining technique. The dual track of generating fault data and mining fault signatures was tested here on dynamic and static eccentricities of 10% and 30% of air-gap height as well as cases of one, three, six, and nine broken bars and three, six, and nine broken end-ring connectors. These cases were studied for proof of principle in a 208 V 60 Hz four-pole 1.2 hp squirrel-cage three-phase induction motor. The paper presents faulty and healthy performance characteristics and their corresponding so-called phase space diagnoses that show distinct fault signatures of each of the above-mentioned motor faults.

Journal ArticleDOI
TL;DR: In this article, a robust fault isolation scheme for a class of non-linear systems with unstructured modelling uncertainty and partial state measurement is presented, which consists of a fault detection and approximation estimator and a bank of isolation estimators.
Abstract: The design and analysis of fault diagnosis methodologies for non-linear systems have received significant attention recently. This paper presents a robust fault isolation scheme for a class of non-linear systems with unstructured modelling uncertainty and partial state measurement. The proposed fault diagnosis architecture consists of a fault detection and approximation estimator and a bank of isolation estimators. Each isolation estimator corresponds to a particular type of fault in the fault class. A fault isolation decision scheme is presented with guaranteed performance. If at least one component of the output estimation error of a particular fault isolation estimator exceeds the corresponding adaptive threshold at some finite time, then the occurrence of that type of fault can be excluded. Fault isolation is achieved if this is valid for all but one isolation estimator. Based on the class of non-linear systems under consideration, fault isolability conditions are rigorously investigated, characterizing ...
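In symbols (notation mine), the decision rule described above excludes fault type k as soon as

$$
\exists\, j,\ \exists\, t:\quad \big|\,\epsilon^{k}_{j}(t)\,\big| > \mu^{k}_{j}(t),
$$

where \epsilon^{k} is the output estimation error of the k-th isolation estimator and \mu^{k} its adaptive threshold; fault type s is isolated once every other candidate k \neq s has been excluded in this way.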

Proceedings ArticleDOI
30 Oct 2001
TL;DR: A static compaction procedure to reduce test set size for scan designs and a procedure to order test patterns in order to steepen the fault coverage curve are presented.
Abstract: A static compaction procedure to reduce test set size for scan designs and a procedure to order test patterns in order to steepen the fault coverage curve are presented. The computational effort for both procedures is linearly proportional to the computational effort required for standard fault simulation with fault dropping. Experimental results on large industrial circuits demonstrate both the efficiency and effectiveness of the proposed procedures.
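One simple way to steepen a fault-coverage curve (a generic greedy sketch under my own assumptions, not necessarily the paper's procedure, and without its linear-effort property) is to reorder tests by the number of faults each one newly detects:

```python
def order_by_new_detections(tests, detects):
    """detects: dict mapping each test to the set of faults it detects."""
    remaining = list(tests)
    covered = set()
    ordered = []
    while remaining:
        best = max(remaining, key=lambda t: len(detects[t] - covered))
        ordered.append(best)
        covered |= detects[best]
        remaining.remove(best)
    return ordered
```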

Journal ArticleDOI
TL;DR: A defective-part-level model combined with a method for choosing test patterns that use site observation can predict defect levels in submicron ICs more accurately than simple stuck-at fault analysis.
Abstract: After an integrated circuit (IC) design is complete, but before first silicon arrives from the manufacturing facility, the design team prepares a set of test patterns to isolate defective parts. Applying this test pattern set to every manufactured part reduces the fraction of defective parts erroneously sold to customers as defect-free parts. This fraction is referred to as the defect level (DL). However, many IC manufacturers quote defective part level, which is obtained by multiplying the defect level by one million to give the number of defective parts per million. Ideally, we could accurately estimate the defective part level by analyzing the circuit structure, the applied test-pattern set, and the manufacturing yield. If the expected defective part level exceeded some specified value, then either the test pattern set or (in extreme cases) the design could be modified to achieve adequate quality. Although the IC industry widely accepts stuck-at fault detection as a key test-quality figure of merit, it is nevertheless necessary to detect other defect types seen in real manufacturing environments. A defective-part-level model combined with a method for choosing test patterns that use site observation can predict defect levels in submicron ICs more accurately than simple stuck-at fault analysis.
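For reference, the classic Williams-Brown relation that defective-part-level models of this kind refine ties the defect level to the process yield Y and the fault coverage T of the applied test set:

$$
DL = 1 - Y^{\,1-T},
$$

so, for example, a 90% yield tested with 99% stuck-at coverage still ships roughly 1 - 0.9^0.01 ≈ 0.001, i.e. about 0.1% or on the order of 1000 defective parts per million; site-observation-based models aim to predict this number more accurately than stuck-at coverage alone.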

Journal ArticleDOI
TL;DR: In this article, fault detection, isolation and fault tolerant control are investigated for a spark ignition engine, where the integrated design of control and diagnostics is achieved by combining the integral sliding mode control methodology and observers with hypothesis testing.
Abstract: Fault detection, isolation and fault tolerant control are investigated for a spark ignition engine. Fault tolerant control refers to a strategy in which the desired stability and robustness of the control system are guaranteed in the presence of faults. In an attempt to realize fault tolerant control, a methodology for integrated design of control and fault diagnostics is proposed. Specifically, the integrated design of control and diagnostics is achieved by combining the integral sliding mode control methodology and observers with hypothesis testing. Information obtained from integral sliding mode control and from observers with hypothesis testing is utilized so that a fault can be detected, isolated and compensated. As an application example, the air and fuel dynamics of an IC engine are considered. A mean value engine model is developed and implemented in Simulink®. The air and fuel dynamics of the engine are identified using experimental data. The proposed algorithm for integration of control and diagnostics is then validated using the identified engine model. Copyright © 2001 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: An integrated method for the detection and isolation of incipient faults in common field devices, such as sensors and actuators, using plant operational data using principal component analysis (PCA), a multivariate data-driven technique, is presented.
Abstract: An integrated method for the detection and isolation of incipient faults in common field devices, such as sensors and actuators, using plant operational data is presented. The approach is based on the premise that data for normal operation lie on a surface and abnormal situations lead to deviations from the surface in a particular way. Statistically significant deviations from the surface result in the detection of faults, and the characteristic directions of deviations are used for isolation of one or more faults from the set of typical faults. Principal component analysis (PCA), a multivariate data-driven technique, is used to capture the relationships in the data and fit a hyperplane to the data. The fault direction for each of the scenarios is obtained using the singular value decomposition on the state and control function prediction errors, and fault isolation is then accomplished from projections on the fault directions. This approach is demonstrated for a simulated pressurized water reactor steam generator system and for a laboratory process control system under single device fault conditions. Enhanced fault isolation capability is also illustrated by incorporating realistic nonlinear terms in the PCA data matrix.
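A minimal numeric sketch of the detection half (my code, using the standard squared-prediction-error test; the paper's isolation step additionally projects the errors onto fault directions obtained by singular value decomposition):

```python
import numpy as np

def fit_pca_model(X_normal, n_components):
    """Fit a PCA model (mean and loadings) on normal-operation data."""
    mu = X_normal.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_normal - mu, full_matrices=False)
    return mu, Vt[:n_components].T              # loadings P, shape (n_vars, k)

def spe(x, mu, P):
    """Squared prediction error (Q statistic): distance of x from the PCA plane."""
    e = (x - mu) - P @ (P.T @ (x - mu))
    return float(e @ e)

def is_faulty(x, mu, P, threshold):
    return spe(x, mu, P) > threshold            # statistically significant deviation
```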

Proceedings ArticleDOI
09 Jul 2001
TL;DR: New techniques for exploiting FPGAs to speed up fault injection in VLSI circuits are presented; they allow performing fault injection campaigns that are comparable to those performed with hardware-based techniques in terms of speed, but show a much higher flexibility in terms of supported fault models.
Abstract: The widespread adoption of VLSI devices for safety-critical applications calls for effective tools for the evaluation and validation of their reliability. Fault injection is commonly adopted for this task, and the effectiveness of the adopted techniques is therefore a key factor for the reliability of the final products. In this paper we present new techniques for exploiting FPGAs to speed up fault injection in VLSI circuits. Thanks to suitable circuitry added to the original circuit, transient faults affecting memory elements in the circuit can be considered. The proposed approach allows performing fault injection campaigns that are comparable to those performed with hardware-based techniques in terms of speed, but shows a much higher flexibility in terms of supported fault models.

Proceedings ArticleDOI
01 Jan 2001
TL;DR: In this article, fault location estimation is formulated as an optimization problem in which the fault location and fault resistances are unknown variables, and an efficient genetic algorithm-based searching scheme is developed for obtaining a globally optimal solution.
Abstract: Prompt and accurate location of faults in a large-scale transmission system is critical when system reliability is considered, and is usually the first step in system restoration. The accuracy of fault location estimation essentially depends on the information available. In this paper, fault location estimation is mathematically formulated as an optimization problem in which the fault location and fault resistances are unknown variables. An efficient genetic algorithm-based searching scheme is developed for obtaining a solution that is globally optimal.

Journal ArticleDOI
TL;DR: The results of the work presented here clearly do not support the claim that this architecture is not suitable for analog fault diagnosis and indicate this architecture can provide a robust fault diagnostic system.
Abstract: A neural-network based analog fault diagnostic system is developed for nonlinear circuits. This system uses wavelet and Fourier transforms, normalization and principal component analysis as preprocessors to extract an optimal number of features from the circuit node voltages. These features are then used to train a neural network to diagnose soft and hard faulty components in nonlinear circuits. Our neural network architecture has as many outputs as there are fault classes where these outputs estimate the probabilities that input features belong to different fault classes. Application of this system to two sample circuits using SPICE simulations shows its capability to correctly classify soft and hard faulty components in 95% of the test data. The accuracy of our proposed system on test data to diagnose a circuit as faulty or fault-free, without identifying the fault classes, is 99%. Because of poor diagnostic accuracy of backpropagation neural networks reported in the literature (Yu et al., Electron. Lett., Vol. 30, 1994), it has been suggested that such an architecture is not suitable for analog fault diagnosis (Yang et al., IEEE Trans. on CAD, Vol. 19, 2000). The results of the work presented here clearly do not support this claim and indicate this architecture can provide a robust fault diagnostic system.

Proceedings ArticleDOI
17 Jun 2001
TL;DR: In this paper, the authors propose a fault-tolerant induction motor drive system, where the fault mode considered is a misfiring in one of the power switches and fault tolerance is obtained by reconfiguration of the inverter topology.
Abstract: This paper investigates the possibility of developing fault diagnosis and remedial operating strategies that enable a fault tolerant induction motor drive system. The fault mode investigated is the case in which a misfiring occurs in one of the power switches. The fault diagnosis is achieved by using a strategy that permits both identification and isolation of the faulty components. The fault tolerance is obtained by reconfiguration of the inverter topology. This allows for continuous operation of the drive even with complete loss of one of the legs of the inverter. Experimental results demonstrate the validity of the system proposed.

Proceedings ArticleDOI
12 Jul 2001
TL;DR: The results show that not only does the evolutionary process produce useful redundancy, it is also possible to reconfigure the system in real-time on the Virtex device.
Abstract: Redundancy is a critical component of the design of fault tolerant systems, both hardware and software. This paper explores the possibilities of using evolutionary techniques to first produce a processing system that will perform a required function, and then considers their applicability for producing useful redundancy that can be exploited in the presence of faults, i.e. is the system fault tolerant? Results obtained using evolutionary strategies to automatically create redundancy as part of the "design" process are given. The experiments are undertaken on a Virtex FPGA with intrinsic evolution taking place. The results show that not only does the evolutionary process produce useful redundancy, it is also possible to reconfigure the system in real-time on the Virtex device.

Patent
16 Oct 2001
TL;DR: In this article, the fault identification system includes a first logic circuit which is responsive to conventional protective elements which recognize the presence of low resistance single line-to-ground faults for the A, B and C phases on a power transmission line.
Abstract: The fault identification system includes a first logic circuit which is responsive to conventional protective elements which recognize the presence of low resistance single line-to-ground faults for the A, B and C phases on a power transmission line. The first logic circuit includes a portion thereof for recognizing and providing an output indication of single line-to-ground faults, faults involving two phases and three-phase faults, in response to the occurrence of different combinations of outputs from the protective elements. A calculation circuit, when enabled, is used to determine the angular difference between the total zero sequence current and the total negative sequence current for high resistance faults when the protective elements themselves cannot identify fault conditions. The angular difference falls in one of three pre-selected angular sectors. An angular difference in the first sector indicates an A phase-to-ground fault or a BC phase-to-phase-to-ground fault; an angular difference in the second sector indicates a B phase-to-ground fault or a CA phase-to-phase-to-ground fault; and a signal in the third sector indicates a C phase-to-ground fault or an AB phase-to-phase-to-ground fault. A processor is used to determine which of the two possible fault types for each angle determination is the actual fault type. An output indication of the actual fault type is then provided.
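A small sketch of the quantity the calculation circuit works with (my code; the sector boundaries themselves are defined in the patent and are not reproduced here): the angular difference between the total zero-sequence and negative-sequence currents, computed from the three phase-current phasors.

```python
import cmath
import math

A_OP = cmath.exp(2j * math.pi / 3)      # the 120-degree rotation operator "a"

def zero_negative_sequence_angle(ia, ib, ic):
    """Angle in degrees (0-360) between the zero- and negative-sequence
    currents, given complex phasors of the A, B and C phase currents."""
    i0 = (ia + ib + ic) / 3.0                        # zero-sequence current
    i2 = (ia + A_OP * A_OP * ib + A_OP * ic) / 3.0   # negative-sequence current
    return math.degrees(cmath.phase(i0) - cmath.phase(i2)) % 360.0
```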

Proceedings ArticleDOI
04 Dec 2001
TL;DR: In this article, the design of detection filters for fault detection and isolation in linear systems by means of dynamic inversion is addressed, based on the left inverse and arrives at detector architectures whose outputs are the fault signals while the inputs are the measured system inputs and outputs and possible their time derivatives.
Abstract: In this paper the design of detection filters for fault detection and isolation in linear systems by means of dynamic inversion is addressed. This approach is based on the left inverse and arrives at detector architectures whose outputs are the fault signals while the inputs are the measured system inputs and outputs and possibly their time derivatives. This will make not only the detection and isolation but also the estimation of the fault signals possible.

Journal ArticleDOI
TL;DR: In this paper, it is shown that the binary decision diagram (BDD) method can overcome some of the difficulties in the analysis of non-coherent fault trees, and what potential benefits can be derived from the incorporation of NOT logic.
Abstract: Risk and safety assessments carried out on potentially hazardous industrial systems commonly employ fault tree analysis to predict the probability or frequency of system failure. Causes of the system failure mode are developed in an inverted tree structure where the events are linked using logic gates. The type of logic is usually restricted to AND and OR gates which makes the fault tree structure coherent. The use, directly or indirectly, of the NOT logic gate is generally discouraged as this can result in a non-coherent structure. Non-coherent structures mean that components' working states contribute to the failure of the system. The qualitative and quantitative analysis of such fault trees can present additional difficulties when compared to the coherent versions. This paper examines some of the difficulties that can occur, and what potential benefits can be derived from the incorporation of NOT logic. It is shown that the binary decision diagram (BDD) method can overcome some of the difficulties in the analysis of non-coherent fault trees. Copyright © 2001 John Wiley & Sons, Ltd.
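One way to see why BDDs absorb NOT logic so cheaply (a standard property of BDDs rather than a result of this paper): every BDD node is a Shannon expansion about its variable,

$$
f = x \cdot f|_{x=1} + \bar{x} \cdot f|_{x=0},
$$

and the complement \bar{f} is obtained simply by exchanging the terminal vertices 1 and 0 (or by using complement edges), so a component's working state is represented as directly as its failed state.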

Proceedings ArticleDOI
01 Jan 2001
TL;DR: An artificial neural network (ANN) approach to simulate a complete scheme for distance protection of a transmission line, subdivided into different neural network modules for fault detection, fault classification as well as fault location in different protection zones is presented.
Abstract: This work presents an artificial neural network (ANN) approach to simulate a complete scheme for distance protection of a transmission line. In order to perform this simulation, the distance protection task was subdivided into different neural network modules for fault detection, fault classification as well as fault location in different protection zones. A complete integration amongst these different modules is then essential for the correct behaviour of the proposed technique. The three-phase voltages and currents sampled at 1 kHz, in pre and post-fault conditions, were utilised as inputs for the proposed scheme. The Alternative Transients Program (ATP) software was used to generate data for a 400 kV transmission line in a faulted condition. The NeuralWorks software was used to set up the ANN topology, train it and obtain the weights as an output. The NeuralWorks software provides a flexible environment for research and the application of techniques involving ANNs. Moreover, the supervised backpropagation algorithm was utilised during the training process.

Patent
08 Jun 2001
TL;DR: In this article, a fault circuit interrupter with functionality for reset can include a relay that trips a first circuit when a ground fault or other error is detected in the first circuit.
Abstract: A fault circuit interrupter with functionality for reset can include a relay that trips a first circuit when a ground fault or other error is detected in the first circuit. The relay can be a bistable type of relay that is caused to change state by the detection of a ground fault (or other error) in the first circuit. To reset the fault circuit interrupter after it has tripped, a reset mechanism can include means for simulating a ground fault (or other error). A signal can be sent to the relay when a simulated ground fault (or other simulated fault) is output, such that the signal causes the relay to change state to re-close the first circuit after the trip. Accordingly, the interrupter is automatically tested for functionality when it is reset. Moreover, the fault circuit interrupter cannot be reset if the circuitry of the fault circuit interrupter is not operational.