
Showing papers on "Fault coverage" published in 2015


Journal ArticleDOI
TL;DR: The three-part survey paper aims to give a comprehensive review of real-time fault diagnosis and fault-tolerant control, with particular attention on the results reported in the last decade.
Abstract: With the continuous increase in complexity and expense of industrial systems, there is less tolerance for performance degradation, productivity decrease, and safety hazards, which makes it essential to detect and identify any potential abnormalities and faults as early as possible and to implement real-time fault-tolerant operation that minimizes performance degradation and avoids dangerous situations. During the last four decades, fruitful results have been reported on fault diagnosis and fault-tolerant control methods and their applications in a variety of engineering systems. The three-part survey paper aims to give a comprehensive review of real-time fault diagnosis and fault-tolerant control, with particular attention to the results reported in the last decade. In this paper, fault diagnosis approaches and their applications are comprehensively reviewed from model-based and signal-based perspectives, respectively.

2,026 citations


Proceedings ArticleDOI
09 Nov 2015
TL;DR: Three state-of-the-art unit test generation tools for Java (Randoop, EvoSuite, and Agitar) are applied to the 357 real faults in the Defects4J dataset and investigated how well the generated test suites perform at detecting these faults.
Abstract: Rather than tediously writing unit tests manually, tools can be used to generate them automatically, sometimes even resulting in higher code coverage than manual testing. But how good are these tests at actually finding faults? To answer this question, we applied three state-of-the-art unit test generation tools for Java (Randoop, EvoSuite, and Agitar) to the 357 real faults in the Defects4J dataset and investigated how well the generated test suites perform at detecting these faults. Although the automatically generated test suites detected 55.7% of the faults overall, only 19.9% of all the individual test suites detected a fault. By studying the effectiveness and problems of the individual tools and the tests they generate, we derive insights to support the development of automated unit test generators that achieve a higher fault detection rate. These insights include 1) improving the obtained code coverage so that faulty statements are executed in the first instance, 2) improving the propagation of faulty program states to an observable output, coupled with the generation of more sensitive assertions, and 3) improving the simulation of the execution environment to detect faults that are dependent on external factors such as date and time.

193 citations


Journal ArticleDOI
01 Feb 2015
TL;DR: The simulation results show that the detection accuracy, false alarm rate, and false positive rate of the DSFD algorithm are much better in adverse environments where the traditional methods fail to detect the fault.
Abstract: Distributed self-diagnosis is an important problem in wireless sensor networks (WSNs), where each sensor node needs to learn its own fault status. The classical methods for fault finding based on the mean, median, majority voting, and hypothesis testing are not suitable for large-scale WSNs due to the large deviations in the inaccurate data transmitted by different faulty sensor nodes. In this paper, a modified three sigma edit test based self fault diagnosis algorithm is proposed which diagnoses both hard and soft faulty sensor nodes. The proposed distributed self fault diagnosis (DSFD) algorithm is simulated in NS3 and its performance is compared with existing distributed fault detection algorithms. The simulation results show that the detection accuracy, false alarm rate, and false positive rate of the DSFD algorithm are much better in adverse environments where the traditional methods fail to detect the fault. Other parameters such as detection latency, energy consumption, and network lifetime are also determined.
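The abstract names the three sigma edit test but does not spell out the paper's modification. The baseline test is standard, however: in its robust form, the mean and standard deviation are replaced by the median and the median absolute deviation (MAD), so faulty readings cannot skew the statistics they are judged against. A minimal sketch of that baseline (the function name and the 3.0 threshold are illustrative):

```python
import numpy as np

def robust_fault_flags(readings, threshold=3.0):
    """Flag readings as faulty via the three sigma edit test in its robust
    form: median and MAD replace the mean and standard deviation."""
    x = np.asarray(readings, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))      # median absolute deviation
    if mad == 0:
        return np.zeros(len(x), dtype=bool)
    # 0.6745 rescales MAD so the score is comparable to a z-score
    z = 0.6745 * (x - med) / mad
    return np.abs(z) > threshold          # True => suspected faulty reading

# A node compares its own and its neighbours' readings; the outlier is flagged.
print(robust_fault_flags([20.1, 19.8, 20.3, 19.9, 35.7, 20.0]))
```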

99 citations


Journal ArticleDOI
TL;DR: The fault analysis reveals that unique faults occur in addition to some conventional memory faults, and the detection of such unique faults cannot be guaranteed with just the application of traditional march tests, so a new Design-for-Testability (DfT) concept is presented to facilitate the detection of the unique faults.
Abstract: Memristor-based memory technology, also referred to as resistive RAM (RRAM), is one of the emerging memory technologies with the potential to replace conventional semiconductor memories such as SRAM, DRAM, and flash. Existing research on such novel circuits focuses mainly on the integration between CMOS and non-CMOS, fabrication techniques, and reliability improvement. However, research on (manufacturing) test for yield and quality improvement is still in its infancy. This paper presents fault analysis and modeling for open defects based on electrical simulation, introduces fault models, and proposes test approaches for RRAMs. The fault analysis reveals that unique faults occur in addition to some conventional memory faults, and the detection of such unique faults cannot be guaranteed with just the application of traditional march tests. The paper also presents a new Design-for-Testability (DfT) concept to facilitate the detection of the unique faults. Two DfT schemes are developed by exploiting the access time duration and supply voltage level of the RRAM cells, and their simulation results show that the fault coverage can be increased with minor circuit modification. As the fault behavior may vary due to process variations, the DfT schemes are extended to be programmable to track the changes and further improve the fault/defect coverage.
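For context, the traditional march tests that the paper finds insufficient for the unique RRAM faults apply a fixed sequence of read/write elements over all addresses in ascending and descending order. A sketch of March C- against a toy memory model (the FaultyMemory class and the stuck-at defect are illustrative assumptions; the paper's DfT schemes target faults such a test misses):

```python
class FaultyMemory:
    """Tiny memory model; one cell can be made stuck-at a fixed value."""
    def __init__(self, size, stuck_addr=None, stuck_val=0):
        self.cells = [0] * size
        self.stuck_addr, self.stuck_val = stuck_addr, stuck_val
    def write(self, a, v):
        self.cells[a] = self.stuck_val if a == self.stuck_addr else v
    def read(self, a):
        return self.cells[a]

def march_c_minus(mem, n):
    """March C-: up(w0); up(r0,w1); up(r1,w0); down(r0,w1); down(r1,w0); down(r0).
    Returns True as soon as a read mismatches its expected value."""
    for a in range(n):
        mem.write(a, 0)
    for expect, order in ((0, range(n)), (1, range(n)),
                          (0, reversed(range(n))), (1, reversed(range(n)))):
        for a in order:
            if mem.read(a) != expect:
                return True
            mem.write(a, 1 - expect)
    return any(mem.read(a) != 0 for a in reversed(range(n)))

print(march_c_minus(FaultyMemory(16, stuck_addr=5, stuck_val=1), 16))  # True
```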

97 citations


Journal ArticleDOI
TL;DR: This work evaluates the effectiveness of test suites generated to satisfy four coverage criteria through counterexample-based test generation and a random generation approach (where tests are randomly generated until coverage is achieved), contrasted against purely random test suites of equal size.
Abstract: A number of structural coverage criteria have been proposed to measure the adequacy of testing efforts. In the avionics and other critical systems domains, test suites satisfying structural coverage criteria are mandated by standards. With the advent of powerful automated test generation tools, it is tempting to simply generate test inputs to satisfy these structural coverage criteria. However, while techniques to produce coverage-providing tests are well established, the effectiveness of such approaches in terms of fault detection ability has not been adequately studied. In this work, we evaluate the effectiveness of test suites generated to satisfy four coverage criteria through counterexample-based test generation and a random generation approach—where tests are randomly generated until coverage is achieved—contrasted against purely random test suites of equal size. Our results yield three key conclusions. First, coverage criteria satisfaction alone can be a poor indication of fault finding effectiveness, with inconsistent results between the seven case examples (and random test suites of equal size often providing similar—or even higher—levels of fault finding). Second, the use of structural coverage as a supplement—rather than a target—for test generation can have a positive impact, with random test suites reduced to a coverage-providing subset detecting up to 13.5 percent more faults than test suites generated specifically to achieve coverage. Finally, Observable MC/DC, a criterion designed to account for program structure and the selection of the test oracle, can—in part—address the failings of traditional structural coverage criteria, allowing for the generation of test suites achieving higher levels of fault detection than random test suites of equal size. These observations point to risks inherent in the increase in test automation in critical systems, and the need for more research in how coverage criteria, test generation approaches, the test oracle used, and system structure jointly influence test effectiveness.
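The reduction of a random suite to a coverage-providing subset can be pictured as a simple greedy pass; this is a sketch of the general idea, not the paper's exact procedure, and the toy coverage map is illustrative:

```python
def coverage_subset(suite, coverage_of):
    """Greedily reduce a test suite to a subset achieving the same coverage."""
    goal = set().union(*(coverage_of(t) for t in suite))
    subset, covered = [], set()
    while covered != goal:
        best = max(suite, key=lambda t: len(coverage_of(t) - covered))
        if not coverage_of(best) - covered:
            break                         # no remaining test adds coverage
        subset.append(best)
        covered |= coverage_of(best)
    return subset

# Toy example: each test maps to the coverage obligations it satisfies.
cov = {"t1": {1, 2}, "t2": {2, 3}, "t3": {1, 2, 3, 4}, "t4": {4, 5}}
print(coverage_subset(list(cov), cov.__getitem__))   # ['t3', 't4']
```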

93 citations


Journal ArticleDOI
TL;DR: In this paper, three separate fuzzy inference systems are designed for a complete transmission line protection scheme, which is able to accurately detect faults (both forward and reverse), locate them, and identify the faulty phase(s) involved in all ten types of shunt faults that may occur on a transmission line, under different fault inception angles, fault resistances, and fault locations.
Abstract: This study aims to improve the performance of transmission line directional relaying, fault classification, and fault location schemes using a fuzzy system. Three separate fuzzy inference systems are designed for a complete transmission line protection scheme. The proposed technique is able to accurately detect faults (both forward and reverse), locate them, and identify the faulty phase(s) involved in all ten types of shunt faults that may occur on a transmission line, under different fault inception angles, fault resistances, and fault locations. The proposed method needs only the current and voltage measurements available at the relay location and can perform fault detection and classification in about half a cycle. The proposed fuzzy logic based relay has lower computational complexity than other AI-based methods, such as artificial neural networks, support vector machines, and decision trees (DT), which require training. The percentage error in fault location is within 1 km for most cases. The fault location scheme has been validated using a χ2 test at the 5% level of significance. The proposed scheme is a setting-free method suitable for a wide range of parameters; fault detection time is less than half a cycle and the relay does not show any reach mal-operation, so it is reliable, accurate, and secure.

92 citations


Journal ArticleDOI
TL;DR: This work provides a systematic study of DFA of AES and shows that an attacker can inject biased faults to improve the success rate of the attacks and proposes fault entropy (FE) and fault differential entropy (FDE) to evaluate CEDs.
Abstract: Differential fault analysis (DFA) poses a significant threat to the advanced encryption standard (AES). Only a single faulty ciphertext is required to extract the secret key. Concurrent error detection (CED) is widely used to protect AES against DFA. Traditionally, these CEDs are evaluated with uniformly distributed faults, and the resulting fault coverage is taken to indicate the security of the CEDs against DFA. However, DFA-exploitable faults, which form a small subspace of the entire fault space, are not uniformly distributed. Therefore, fault coverage does not accurately measure the security of the CEDs against DFA. We provide a systematic study of DFA of AES and show that an attacker can inject biased faults to improve the success rate of the attacks. We propose fault entropy (FE) and fault differential entropy (FDE) to evaluate CEDs. We show that most CEDs with high fault coverage are not secure when evaluated with FE and FDE. This work challenges the traditional use of fault coverage for uniformly distributed faults as a metric for evaluating the security of CEDs against DFA.
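The abstract does not define FE and FDE precisely, but the intuition is information-theoretic: a uniform fault distribution maximizes entropy, while a biased, attacker-steered distribution concentrates probability mass and lowers it. A sketch of plain Shannon entropy over an empirical fault distribution (illustrative of the idea, not the paper's exact metric):

```python
import math
from collections import Counter

def fault_entropy(observed_faults):
    """Shannon entropy (bits) of an empirical fault distribution.
    Low entropy signals biased, attacker-steered fault injection."""
    counts = Counter(observed_faults)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

uniform = list(range(256))               # every byte-fault value once
biased = [0x01] * 200 + [0x02] * 56      # attacker concentrates faults
print(fault_entropy(uniform), fault_entropy(biased))   # 8.0 vs. ~0.76
```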

85 citations


Journal ArticleDOI
TL;DR: In this article, a fast and robust wide-area backup protection scheme to detect the faulty condition and to identify the faulted line in a large power network is presented. The proposed methodology uses positive-sequence synchrophasor data captured by either digital relays with synchronization capability or phasor measurement units dispersed over the network.
Abstract: This paper presents a fast and robust wide-area backup protection scheme to detect the faulty condition and to identify the faulted line in a large power network. The proposed methodology uses positive-sequence synchrophasor data captured by either digital relays with synchronization capability or phasor measurement units dispersed over the network. The basic idea behind the new protection scheme is the comparison of bus voltage values calculated through dissimilar paths. Upon occurrence of a fault, the faulty condition is first detected and the bus(es) connected to the faulted line is(are) determined. Among the transmission lines connected to the suspected bus(es), the faulted one is thereafter identified. In addition to two-terminal transmission lines, multiterminal lines are also incorporated. The performance of the proposed method is validated on the IEEE 57-bus test system in different fault conditions (fault type, fault location, and fault resistance). Discrimination of faulty and normal conditions is simulated by examining various stressed conditions, for example, load encroachment, generator outage, and power swing. The data requirement of the proposed technique is analyzed as well. To do so, a mathematical model for the optimal placement of measurement devices is developed and applied to different IEEE standard test systems.

80 citations


Journal ArticleDOI
TL;DR: In this article, a fault location algorithm is presented that does not need to classify the fault type before location estimation and can locate all types of shunt faults, including cross-country and evolving faults.

69 citations


Journal ArticleDOI
TL;DR: A synergistic technique framework is proposed that integrates both the ECC and FM techniques to address simultaneously the permanent and transient faults of STT-MRAM, and shows good performance in terms of repair rate and hardware overhead.
Abstract: The emerging spin transfer torque magnetic random access memory (STT-MRAM) promises many attractive features, such as nonvolatility, high speed, and low power, which make it a promising candidate for next-generation logic and memory circuits. However, with the continuous scaling of the technology process, the chip yield and reliability of STT-MRAM face severe challenges due to increasing permanent and transient faults. Owing to the intrinsic fault features and the targeted application requirements of STT-MRAM, traditional fault tolerant design solutions, such as error correction code (ECC), redundancy repair (RR), and fault masking (FM) techniques, cannot be employed straightforwardly for STT-MRAM. In this paper, we propose a synergistic technique framework, named sECC, that integrates both the ECC and FM techniques to address permanent and transient faults simultaneously. With this approach, permanent faults are masked while transient faults are corrected with the same codeword. Moreover, taking into consideration the fact that most permanent faults are sparse [about 60%–70% are single isolated faults (SIFs)], we propose further integrating RR and sECC (named iRRsECC) to optimize system performance. In this scenario, all the SIFs are masked and the transient faults are corrected with the proposed sECC, while other permanent fault types (e.g., faulty rows or columns) are repaired with redundant rows or columns. A simulation tool is developed to evaluate the proposed techniques, and the evaluation results show their good performance in terms of repair rate and hardware overhead.

65 citations


Proceedings ArticleDOI
22 Jun 2015
TL;DR: This paper analyzes the process of evaluating programs, which are hardened by software-based hardware fault-tolerance mechanisms, under a uniformly distributed soft-error model, and demonstrates that the fault coverage metric must be abolished for comparing programs.
Abstract: Since the first identification of physical causes for soft errors in memory circuits, fault injection (FI) has grown into a standard methodology to assess the fault resilience of computer systems. A variety of FI techniques trying to mimic these physical causes has been developed to measure and compare program susceptibility to soft errors. In this paper, we analyze the process of evaluating programs, which are hardened by software-based hardware fault-tolerance mechanisms, under a uniformly distributed soft-error model. We identify three pitfalls in FI result interpretation widespread in the literature, even published in renowned conference proceedings. Using a simple machine model and transient single-bit faults in memory, we find counterexamples that reveal the unfitness of common practices in the field, and substantiate our findings with real-world examples. In particular, we demonstrate that the fault coverage metric must be abolished for comparing programs. Instead, we propose to use extrapolated absolute failure counts as a valid comparison metric.
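A sketch of what an extrapolated absolute failure count can look like under a uniform single-bit-flip-in-memory model, where the fault space scales with memory bits times exposure cycles (all numbers below are illustrative): a hardened variant with a lower failure fraction can still fail more often in absolute terms if hardening enlarges its fault space.

```python
def extrapolated_failures(fi_failures, fi_injections, mem_bits, runtime_cycles):
    """Scale a sampled fault-injection failure fraction up to an absolute
    failure count over the full fault space (bits x cycles)."""
    failure_fraction = fi_failures / fi_injections
    fault_space = mem_bits * runtime_cycles
    return failure_fraction * fault_space

# Baseline vs. hardened variant: the hardened one has a lower failure
# fraction but uses more memory and runs longer.
base = extrapolated_failures(120, 10_000, mem_bits=8_192, runtime_cycles=50_000)
hard = extrapolated_failures(30, 10_000, mem_bits=12_288, runtime_cycles=150_000)
print(base, hard)   # ~4.9e6 vs. ~5.5e6: "hardened" fails more in absolute terms
```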

Journal ArticleDOI
TL;DR: A flexible and robust numerical algorithm for fault location on transmission lines is presented, based on emerging synchronized measurement technology using synchronized data sampling at both line terminals; the “SynchroCheck” procedure is also proposed as a means of accurately locating faults when data sampling synchronization has been lost.
Abstract: This paper presents a flexible and robust numerical algorithm for fault location on transmission lines. The algorithm does not require line parameters to locate the fault, which is an advance over fault locators that do require such information. Line parameters are only approximately constant; they vary with different loading and weather conditions, which affects the accuracy of existing fault location algorithms. Thus, an approach which does not require line parameters would be more robust, accurate, and flexible. Accurately locating faults on transmission lines is vital for expediting their repair, so the proposed solution could lead to an improvement in the security and quality of the energy supply. Development of the proposed algorithm was facilitated by new smart grid technologies in the field of wide-area monitoring, protection, and control. The proposed algorithm is based on emerging synchronized measurement technology, using synchronized data sampling at both line terminals; however, the “SynchroCheck” procedure is also proposed as a means of accurately locating faults when data sampling synchronization has been lost. This paper presents the algorithm derivation and the results of thorough testing using alternative transients program-electromagnetic transients program (ATP-EMTP) fault simulations.

Journal ArticleDOI
TL;DR: This work considers an output of a fault localization tool to be effective if the root cause appears in the top 10 most suspicious program elements, and, building upon advances in machine learning, learns a discriminative model that is able to predict the effectiveness of a fault localization tool output.
Abstract: Debugging is a crucial yet expensive activity to improve the reliability of software systems. To reduce debugging cost, various fault localization tools have been proposed. A spectrum-based fault localization tool often outputs an ordered list of program elements sorted based on their likelihood to be the root cause of a set of failures (i.e., their suspiciousness scores). Despite the many studies on fault localization, unfortunately, however, for many bugs, the root causes are often low in the ordered list. This potentially causes developers to distrust fault localization tools. Recently, Parnin and Orso highlight in their user study that many debuggers do not find fault localization useful if they do not find the root cause early in the list. To alleviate the above issue, we build an oracle that could predict whether the output of a fault localization tool can be trusted or not. If the output is not likely to be trusted, developers do not need to spend time going through the list of most suspicious program elements one by one. Rather, other conventional means of debugging could be performed. To construct the oracle, we extract the values of a number of features that are potentially related to the effectiveness of fault localization. Building upon advances in machine learning, we process these feature values to learn a discriminative model that is able to predict the effectiveness of a fault localization tool output. In this work, we consider an output of a fault localization tool to be effective if the root cause appears in the top 10 most suspicious program elements. We have evaluated our proposed oracle on 200 faulty versions of Space, NanoXML, XML-Security, and the 7 programs in Siemens test suite. Our experiments demonstrate that we could predict the effectiveness of 9 fault localization tools with a precision, recall, and F-measure (harmonic mean of precision and recall) of up to 74.38 %, 90.00 % and 81.45 %, respectively. The numbers indicate that many ineffective fault localization instances are identified correctly, while only few effective ones are identified wrongly.
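For concreteness, spectrum-based tools of the kind studied here rank program elements with formulas such as Ochiai, one common suspiciousness metric (the paper evaluates tool outputs in general, not this particular formula). A minimal sketch:

```python
import math

def ochiai(failed_cov, passed_cov, element):
    """Ochiai suspiciousness from coverage spectra; failed_cov/passed_cov
    are lists of sets, one set of covered elements per test run."""
    ef = sum(element in c for c in failed_cov)   # failing runs covering it
    ep = sum(element in c for c in passed_cov)   # passing runs covering it
    denom = math.sqrt(len(failed_cov) * (ef + ep))
    return ef / denom if denom else 0.0

failed = [{"s1", "s3"}, {"s3"}]
passed = [{"s1", "s2"}, {"s2", "s3"}]
ranked = sorted({"s1", "s2", "s3"}, key=lambda e: -ochiai(failed, passed, e))
print(ranked)   # 's3' ranks most suspicious here
```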

Journal ArticleDOI
TL;DR: This paper presents multiple empirical experiments that investigate the impact of fault quantity and fault type on statistical, coverage-based fault localization techniques and fault-localization interference, and suggests that fault-localization interference is prevalent and exerts a meaningful influence that may cause a fault’s localizability to vary greatly.
Abstract: This paper presents multiple empirical experiments that investigate the impact of fault quantity and fault type on statistical, coverage-based fault localization techniques and fault-localization interference. Fault-localization interference is a phenomenon revealed in earlier studies of coverage-based fault localization that causes faults to obstruct, or interfere with, other faults' ability to be localized. Previously, it had been asserted that a fault-localization technique's effectiveness was negatively correlated with the quantity of faults in the program. To investigate these beliefs, we conducted an experiment on six programs comprising more than 72,000 multiple-fault versions. Our data suggests that the presence of multiple faults exerts a significant, but slight, influence on fault-localization effectiveness. In addition, we categorized faults according to four existing fault taxonomies and found no correlation between fault type and fault-localization interference. In general, even in the presence of many faults, at least one fault was found by fault localization with similar effectiveness. Additionally, our data shows that fault-localization interference is prevalent and exerts a meaningful influence that may cause a fault's localizability to vary greatly. Because almost all real-world software contains multiple faults, these results affect the practical use and understanding of statistical fault-localization techniques.

Proceedings ArticleDOI
03 Aug 2015
TL;DR: This paper studied several existing and standard control and data flow coverage criteria on a set of developer-written fault-revealing test cases from several releases of five open source projects and found that a) basic criteria such as statement coverage is very weak (detecting only 10% of the faults), and b) combining several control-flow coverage together is better than the strongest criterion alone.
Abstract: Code coverage is one of the main metrics used to measure the adequacy of a test case/suite. It has been studied extensively in academia and used even more in industry. However, a test case may cover a piece of code (no matter what coverage metric is being used) but miss its faults. In this paper, we studied several existing and standard control- and data-flow coverage criteria on a set of developer-written fault-revealing test cases from several releases of five open source projects. We found that a) basic criteria such as statement coverage are very weak (detecting only 10% of the faults), b) combining several control-flow coverage criteria is better than the strongest criterion alone (28% vs. 19%), c) a basic data-flow coverage can detect many otherwise undetected faults (79% of the faults undetected by control-flow coverage can be detected by a basic def/use pair coverage), and d) on average 15% of the faults may not be detected by any of the standard control- and data-flow coverage criteria. Classification of the undetected faults showed that they mostly have to do with specification (missing logic).

Journal ArticleDOI
TL;DR: In this article, a fault location technique for power distribution systems considering distributed generation (DG) and uncertainties associated with the fault resistance, load magnitude and model, fault type, and faulted node is presented.
Abstract: A fault location technique for power distribution systems is proposed that considers distributed generation (DG) and the uncertainties associated with the fault resistance, load magnitude and model, fault type, and faulted node. The approach presented here uses only single-end measurements at the main power substation and at the DG source. To demonstrate the good performance of the proposed approach, tests on the 24.9 kV IEEE 34-node test feeder, which includes three-phase and single-phase laterals, are performed. According to the results, high performance is obtained in the fault distance estimation, considering the shunt faults and the uncertainties in the proposed test scenarios. Finally, it is demonstrated that the proposed approach helps to maintain high continuity indexes by speeding up the fault location task in modern power distribution utilities.

Journal ArticleDOI
TL;DR: An imperfect software debugging model is proposed that considers a log-logistic distribution fault content function, which can capture the increasing and decreasing characteristics of the fault introduction rate per fault.

Journal ArticleDOI
TL;DR: In this paper, a modular artificial neural network (ANN)-based technique is introduced to identify and locate all types of shunt faults (120) in a six-phase transmission line.
Abstract: In this paper, a modular artificial neural network (ANN)-based technique is introduced to identify and locate all types of shunt faults (120) in a six-phase transmission line. The proposed algorithm is composed of two stages. In the first stage, an ANN-based algorithm detects and classifies all possible types of shunt faults within one cycle. The second stage then determines the location of the shunt faults. A total of 11 modular ANNs have been developed across the two stages, based on the fundamental components of the voltage and current signals at the sending end of the transmission line only. For validation of the proposed scheme, simulation studies have been carried out on a six-phase transmission system. The test results of the ANN-based fault detector/classifier and locator indicate that the proposed algorithm correctly detects/classifies and locates the shunt faults. The results demonstrate the high speed, reliability, and suitability of the proposed technique and its adaptability to changing system conditions, viz. fault type, fault inception angle, fault location, high fault resistance, short circuit capacity of the source, and its X/R ratio. Even cases of slight variations in system frequency, generated voltage, and initial power flow angle are taken into account.

Journal ArticleDOI
TL;DR: This work proposes the use of only approximate logic modules to compose the TMR in order to reduce the area overhead to near-minimal values, and uses a Boolean factorization based method to compute approximate functions and to select the best composition of approximate logic.

Journal ArticleDOI
TL;DR: In this paper, the authors reviewed the current literature on advanced application of fault diagnosis in power systems and introduced a novel unsupervised technique of quarter-sphere support vector machine for power system fault detection and classification.
Abstract: Power systems frequently experience variations in their operation, which are mostly manifested as transmission line faults. Over the past decade, various fault diagnosis techniques have been developed to ensure reliable and stable operation of power systems. This paper reviews the current literature on advanced applications of fault diagnosis in power systems. The application of different fault diagnosis schemes is presented, with emphasis on reliable fault detection and classification of power system faults. The motivation behind applications of emerging process history, or pattern recognition, techniques in power system fault diagnosis is reviewed. An extensive review of advanced mathematical techniques in pattern recognition methods, involving wavelet transforms, artificial neural networks, and support vector machines, is presented. The paper also introduces a novel unsupervised technique, the quarter-sphere support vector machine, for power system fault detection and classification and reviews its application as future research in the developing area of fault diagnosis.
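The quarter-sphere SVM is a specialized one-class formulation; as a rough stand-in, the closely related one-class SVM in scikit-learn illustrates the unsupervised setting: train on normal operating data only, then flag deviations as faults. The feature choices and parameters below are illustrative assumptions, not the paper's method:

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Train only on normal operating data (e.g., per-unit voltage and frequency).
rng = np.random.default_rng(0)
normal = rng.normal(loc=[1.0, 50.0], scale=[0.02, 0.5], size=(500, 2))
detector = OneClassSVM(nu=0.01, kernel="rbf", gamma="scale").fit(normal)

samples = np.array([[1.0, 50.1],    # normal operation
                    [0.4, 48.0]])   # fault-like excursion
print(detector.predict(samples))    # +1 = normal, -1 = fault
```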

Journal ArticleDOI
TL;DR: SETA is a new control-flow software-only technique that uses assertions to detect errors affecting the program flow; it is combined with previously proposed data-flow techniques that aim at reducing performance and memory overheads.
Abstract: Software-based techniques offer several advantages to increase the reliability of processor-based systems at very low cost, but they cause performance degradation and an increase of the code size. To meet constraints in performance and memory, we propose SETA, a new control-flow software-only technique that uses assertions to detect errors affecting the program flow. SETA is an independent technique, but it was conceived to work together with previously proposed data-flow techniques that aim at reducing performance and memory overheads. Thus, SETA is combined with such data-flow techniques and submitted to a fault injection campaign. Simulation and neutron induced SEE tests show high fault coverage at performance and memory overheads inferior to the state-of-the-art.
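SETA's actual assertions are not given in the abstract, but control-flow checking techniques in this family commonly assign compile-time signatures to basic blocks and verify at run time that every transition is legal. A toy sketch of that general signature-monitoring idea (the block names, signatures, and transition table are illustrative, not SETA's design):

```python
# Compile-time artifacts: a signature per basic block and the legal successors.
SIGS = {"entry": 0x1, "loop": 0x2, "body": 0x3, "exit": 0x4}
LEGAL_NEXT = {0x1: {0x2}, 0x2: {0x3, 0x4}, 0x3: {0x2}}

class ControlFlowError(Exception):
    pass

class CFMonitor:
    """Runtime monitor: updates a signature on block entry and asserts
    that the transition is allowed by the control-flow graph."""
    def __init__(self):
        self.sig = None
    def enter(self, block):
        new = SIGS[block]
        if self.sig is not None and new not in LEGAL_NEXT.get(self.sig, set()):
            raise ControlFlowError(f"illegal transition {self.sig:#x} -> {new:#x}")
        self.sig = new

m = CFMonitor()
for block in ["entry", "loop", "body", "loop", "exit"]:   # legal path: no error
    m.enter(block)

m2 = CFMonitor()
m2.enter("entry")
try:
    m2.enter("body")    # a fault that skips the loop header is caught
except ControlFlowError as e:
    print("detected:", e)
```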

Journal ArticleDOI
TL;DR: In this article, a fault location method using the low-voltage-side smart metering unit recorded data is proposed to locate and isolate the fault sections without interrupting the healthy system operations.
Abstract: Ground fault location and isolation are challenging in compensated distribution systems due to the fact that the fault current is too small to be measured. Considering that massive intelligent electronic devices and “smart meter” are installed in the distribution networks with the development of the “smart grid,” this paper proposes a fault location method using the low-voltage-side smart metering unit recorded data. The developed method uses the faulted negative-sequence voltage and locates the faulted sections by applying the relationship between fault distance and the clustered measurement groups. The method is tested using multibranched distribution systems, and results show that it is suitable for systems with both passive loads and distributed generations and not influenced by the system harmonic distortions, measurement noises, and loss of communication from one or even more of the metering units. The proposed method uses the wide-area measured data but does not require any synchronization. Accompanied with the low-cost contactors, the method can locate and isolate the fault sections without interrupting the healthy system operations, and this makes it feasible for industry application.
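The faulted negative-sequence voltage the method keys on comes from the standard symmetrical-component transform of the three measured phase phasors, V2 = (Va + a^2*Vb + a*Vc)/3, with a the 120-degree rotation operator; the clustering and distance-ranking logic is the paper's contribution. A sketch of just the transform:

```python
import cmath, math

A = cmath.exp(2j * math.pi / 3)   # the 'a' operator: rotation by 120 degrees

def negative_sequence(va, vb, vc):
    """Negative-sequence voltage phasor: V2 = (Va + a^2*Vb + a*Vc) / 3."""
    return (va + A**2 * vb + A * vc) / 3

# A balanced system gives V2 ~ 0; an unbalanced (faulted) condition raises it.
balanced = negative_sequence(230, 230 * A**2, 230 * A)
faulted = negative_sequence(120, 230 * A**2, 230 * A)   # voltage sag on phase A
print(abs(balanced), abs(faulted))                      # ~0.0 vs. ~36.7
```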

Journal ArticleDOI
TL;DR: In this article, a two-step incipient fault detection strategy is proposed for monitoring complex industrial processes, where the first step aims at significant fault detection using traditional multivariate statistical process monitoring methods, and the residual generation is optimized based on the robustness and sensitivity index, which can be realized directly using test data.
Abstract: Process variables can be classified into three stages: normal operation, incipient fault, and significant fault. A two-step incipient fault detection strategy is proposed for monitoring complex industrial processes. The first step aims at significant fault detection using traditional multivariate statistical process monitoring methods. Then a method combining wavelet analysis with residual evaluation is carried out for monitoring incipient faults. Wavelet analysis aims at extracting the incipient fault features from process noise. The residual generation is optimized based on the robustness and sensitivity index, which can be realized directly using the test data. An improved kernel density estimation based on signal-to-noise ratio is proposed to adaptively determine the detection threshold. The proposed incipient fault detection scheme is tested on a numerical example and the Tennessee Eastman process. Compared to other traditional fault detection methods, good monitoring ...
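The paper's contribution is a signal-to-noise-aware improvement; the baseline it improves on is ordinary KDE threshold selection: fit a density to fault-free residuals and place the alarm threshold at a chosen false-alarm quantile. A sketch of that baseline (the residual distribution and the 1% target are illustrative):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
normal_residuals = rng.normal(0.0, 0.1, size=2000)   # fault-free residuals

# Fit a KDE to fault-free residuals; the threshold is the value whose
# estimated tail probability equals the false-alarm target alpha.
kde = gaussian_kde(normal_residuals)
grid = np.linspace(normal_residuals.min(), normal_residuals.max() + 1.0, 4000)
cdf = np.cumsum(kde(grid))
cdf /= cdf[-1]
alpha = 0.01
threshold = grid[np.searchsorted(cdf, 1 - alpha)]

test_residual = 0.35                                 # e.g., an incipient drift
print(threshold, test_residual > threshold)          # alarm if exceeded
```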

Journal ArticleDOI
TL;DR: The system fault diagnosis based on the improved dependency model is formulated as an optimization problem with binary logic operations where all the fault hypotheses are tested and the optimal solution is obtained.

Proceedings ArticleDOI
23 Nov 2015
TL;DR: In this article, a fault location identification method based on the measurements available from the smart grid devices such as Advanced Metering Infrastructure (AMI), Reclosers, Distributed Generators (DG) and other IEDs is proposed that can accurately identify the fault location.
Abstract: In this paper, based on the measurements available from smart grid devices such as Advanced Metering Infrastructure (AMI), reclosers, distributed generators (DG), and other IEDs, a fault location identification method is proposed that can accurately identify the fault location. The algorithm is suitable for distribution networks with DG and a smart measurement infrastructure that can transmit event-driven data, such as pre- and post-fault voltages or currents, from a small number of meters. A MATLAB-based state estimation (SE) technique is applied to identify the fault location, and the open source tool OpenDSS is used to perform offline validation by creating faults in simulation. The IEEE 37 node test feeder was modified to add DGs at different locations and was used for the validation of the proposed algorithm. Multiple faults (both symmetrical and unsymmetrical) were created for exhaustive validation of the technique. In more than 90% of the cases the location was identified on the first guess; for the remaining 10% a second guess was needed, and the faulted node was indeed near the first guess.

Journal ArticleDOI
Bing Xia, Yang Wang, E. Vazquez, Wilsun Xu, Daniel Wong, Michael Tong
TL;DR: In this article, the authors proposed an offline method for estimating the fault resistance of transmission lines using the COMTRADE fault files available from digital distance relays, but synchronization of the data sets is not required.
Abstract: Information about typical fault resistances of transmission lines will help improve the settings of distance relays. This paper proposes an offline method for estimating the fault resistance of transmission lines using the COMTRADE fault files available from digital distance relays. The proposed method uses the fault records at both ends of a line, but synchronization of the data sets is not required. Simulation and lab experimental results show that the estimated fault resistance has good accuracy with 2% average error. Finally, the method was applied to estimate the fault resistances of more than 50 phase-to-ground faults of a utility company. The ranges of fault resistance values are determined for faults on the company's 240 kV and 138 kV lines.

Journal ArticleDOI
TL;DR: An LP test compression method that allows shaping the test power envelope in a fully predictable, accurate, and flexible fashion by adapting the PRESTO-based logic BIST (LBIST) infrastructure is proposed.
Abstract: This paper describes a low-power (LP) programmable generator capable of producing pseudorandom test patterns with desired toggling levels and enhanced fault coverage gradient compared with the best-to-date built-in self-test (BIST)-based pseudorandom test pattern generators. It comprises a linear finite state machine (a linear feedback shift register or a ring generator) driving an appropriate phase shifter, and it comes with a number of features allowing this device to produce binary sequences with preselected toggling (PRESTO) activity. We introduce a method to automatically select several controls of the generator, offering easy and precise tuning. The same technique is subsequently employed to deterministically guide the generator toward test sequences with improved fault-coverage-to-pattern-count ratios. Furthermore, this paper proposes an LP test compression method that allows shaping the test power envelope in a fully predictable, accurate, and flexible fashion by adapting the PRESTO-based logic BIST (LBIST) infrastructure. The proposed hybrid scheme efficiently combines test compression with LBIST, where both techniques can work synergistically to deliver high quality tests. Experimental results obtained for industrial designs illustrate the feasibility of the proposed test schemes and are reported herein.
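PRESTO's programmable control logic is the paper's design; its building blocks are familiar, though: a maximal-length LFSR feeding scan channels through hold gates, where the probability of holding the previous bit sets the toggling level. A toy sketch of that idea (the taps, channel count, and hold probability are illustrative):

```python
import random

def lfsr16_step(state):
    """One step of a 16-bit maximal-length Fibonacci LFSR (taps 16,14,13,11)."""
    bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
    return (state >> 1) | (bit << 15)

def low_toggle_patterns(n, channels=8, hold_prob=0.75, seed=0xACE1):
    """Pseudorandom scan patterns with reduced switching: each channel keeps
    its previous bit with probability hold_prob instead of taking a fresh
    LFSR bit, so the average toggling level is tunable."""
    random.seed(42)                     # deterministic demo
    state, prev = seed, [0] * channels
    for _ in range(n):
        state = lfsr16_step(state)
        fresh = [(state >> i) & 1 for i in range(channels)]
        prev = [p if random.random() < hold_prob else f
                for p, f in zip(prev, fresh)]
        yield list(prev)

patterns = list(low_toggle_patterns(1000))
toggles = sum(sum(a != b for a, b in zip(p, q))
              for p, q in zip(patterns, patterns[1:]))
print("toggles:", toggles)              # falls as hold_prob rises
```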

Journal ArticleDOI
Avagaddi Prasad, J. Belwin Edwar, C. Shashank Roy, G. Divyansh, Abhay Kumar
TL;DR: A new approach to distinctly identify and classify ground and phase faults using two separate fuzzy classifiers is described; results indicate that the accuracy of fault classification increases because two fuzzy classifiers are used for fault analysis.
Abstract: Safeguarding transmission lines against faults is the most critical task in power system protection. The purpose of protective relaying is to identify abnormal signals representing faults on a power transmission system, so fault classification is necessary for reliable and high-speed protective relaying. This paper uses a fuzzy logic technique for fault classification and describes a new approach to distinctly identify and classify ground and phase faults using two separate fuzzy classifiers. Samples of post-fault currents from all three phases at one end of the transmission system are used to classify the nature of the faults. To demonstrate the effectiveness of this method, simulations considering various operating conditions have been performed in MATLAB. The simulation studies of the proposed technique indicate that the accuracy of fault classification increases because two fuzzy classifiers are used for fault analysis.
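The membership functions are not given in the abstract, but the classic discriminator behind a ground/phase split is the zero-sequence current, which is near zero for phase faults and large when ground is involved. A toy sketch with fuzzy memberships (the ramp shapes and all thresholds are assumptions, not the paper's design):

```python
import cmath, math

def ramp(x, a, b):
    """Piecewise-linear 'high' membership: 0 below a, 1 above b."""
    return 0.0 if x <= a else 1.0 if x >= b else (x - a) / (b - a)

def classify(ia, ib, ic, i_nom=1.0):
    """Toy two-classifier split: zero-sequence current magnitude drives the
    ground classifier, per-phase overcurrent drives the phase classifier."""
    i0 = abs(ia + ib + ic) / 3                       # zero-sequence current
    mu_ground = ramp(i0 / i_nom, 0.1, 0.5)
    faulty = [p for p, i in zip("ABC", (ia, ib, ic))
              if ramp(abs(i) / i_nom, 2.0, 4.0) > 0.5]
    return faulty, ("ground" if mu_ground > 0.5 else "phase")

a = cmath.exp(2j * math.pi / 3)                      # 120-degree rotation
print(classify(6.0, 1.0 * a**2, 1.0 * a))            # (['A'], 'ground'): A-g
print(classify(5.5 + 0j, -5.5 + 0j, 0.3 * a))        # (['A', 'B'], 'phase'): A-B
```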

Proceedings ArticleDOI
22 Jun 2015
TL;DR: This paper presents a virtual fault injection framework that simulates safety-standard aligned fault models and supports OTS software components as well as widely-used embedded processors such as ARM cores and shows how to integrate the framework into various software development stages.
Abstract: Ever more dependable embedded systems are built with commercial off-the-shelf hardware components that are not intended for highly reliable applications. Consequently, software-based fault tolerance techniques have to maintain a safe operation despite underlying hardware faults. In order to efficiently develop fault tolerant software, fault injection is needed in early development stages. However, common fault injection approaches require manufactured products or detailed hardware models. Thus, these techniques are typically not applicable if software and hardware providers are separate vendors. Additionally, the rise of third-party OTS software components limits the means to inject faults. In this paper, we present a virtual fault injection framework that simulates safety-standard aligned fault models and supports OTS software components as well as widely-used embedded processors such as ARM cores. Additionally, we show how to integrate the framework into various software development stages. Finally, we illustrate the practicability of the approach by exemplifying the integration of the framework in the development of an industrial safety-critical system.

Journal ArticleDOI
TL;DR: This paper presents a distributed fault diagnosis scheme able to deal with process and sensor faults in an integrated way for a class of interconnected input–output nonlinear uncertain discrete-time systems.
Abstract: This paper presents a distributed fault diagnosis scheme able to deal with process and sensor faults in an integrated way for a class of interconnected input–output nonlinear uncertain discrete-time systems. A robust distributed fault detection scheme is designed, where each interconnected subsystem is monitored by its respective fault detection agent, and, according to the decisions of these agents, further information regarding the type of the fault can be deduced. As is shown, a process fault occurring in one subsystem can only be detected by its corresponding detection agent, whereas a sensor fault in a subsystem can be detected by either its corresponding detection agent or the detection agent of another subsystem that is affected by the subsystem where the sensor fault occurred. This discriminating factor is exploited for the derivation of a high-level isolation scheme. Moreover, process and sensor fault detectability conditions characterising quantitatively the class of detectable faults are derived.