
Showing papers on "Fault coverage published in 2011"


Book ChapterDOI
01 Jun 2011
TL;DR: In this paper, it is shown that when a single random byte fault is induced at the input of the eighth round, the AES key can be deduced using a two-stage algorithm, with a statistical expectation of reducing the possible key hypotheses to 2^32 after the first stage and to a mere 2^8 after the second.
Abstract: In this paper we present a differential fault attack that can be applied to the AES using a single fault. We demonstrate that when a single random byte fault is induced at the input of the eighth round, the AES key can be deduced using a two stage algorithm. The first step has a statistical expectation of reducing the possible key hypotheses to 2^32, and the second step to a mere 2^8.

274 citations


Journal ArticleDOI
TL;DR: The efficacy of the proposed approach is illustrated with data acquired from bearings typically found on aircraft and monitored via a properly instrumented test rig; the scheme provides the probability of an abnormal condition, and the presence of a fault is confirmed at a given confidence level.
Abstract: This paper introduces a method to detect a fault associated with critical components/subsystems of an engineered system. It is required, in this case, to detect the fault condition as early as possible, with a specified degree of confidence and a prescribed false alarm rate. Innovative features of the enabling technologies include a Bayesian estimation algorithm called particle filtering, which employs features or condition indicators derived from sensor data in combination with simple models of the system's degrading state to detect a deviation or discrepancy between a baseline (no-fault) distribution and its current counterpart. The scheme requires a fault progression model describing the degrading state of the system in operation. A generic model based on fatigue analysis is provided and its parameter adaptation is discussed in detail. The scheme provides the probability of an abnormal condition, and the presence of a fault is confirmed at a given confidence level. The efficacy of the proposed approach is illustrated with data acquired from bearings typically found on aircraft and monitored via a properly instrumented test rig.
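A minimal sketch of the particle-filtering idea described above, assuming a generic one-dimensional degradation state, a toy growth model, and Gaussian feature noise; the model structure, parameters, and the 3-sigma "abnormal" margin are illustrative assumptions, not the authors' fatigue-based model.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_fault_probability(features, n_particles=1000, growth=0.02,
                                      process_std=0.01, meas_std=0.05, baseline=0.0):
    """Track a degrading condition indicator and return P(abnormal) over time."""
    particles = rng.normal(baseline, meas_std, n_particles)
    p_fault = []
    for z in features:
        # Propagate through a toy fault-progression model: slow growth plus noise.
        particles = particles * (1.0 + growth) + rng.normal(0.0, process_std, n_particles)
        # Weight particles by the likelihood of the observed feature value.
        weights = np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
        weights /= weights.sum() + 1e-300
        # Probability mass lying clearly outside the baseline (no-fault) band.
        p_fault.append(weights[particles > baseline + 3.0 * meas_std].sum())
        # Systematic resampling to avoid particle degeneracy.
        cum = np.cumsum(weights)
        cum[-1] = 1.0  # guard against floating-point round-off
        idx = np.searchsorted(cum, (rng.random() + np.arange(n_particles)) / n_particles)
        particles = particles[idx]
    return p_fault

# Example feature: flat near the baseline, then a slow upward drift (a "fault").
healthy = rng.normal(0.0, 0.05, 50)
degrading = 0.004 * np.arange(50) + rng.normal(0.0, 0.05, 50)
p = particle_filter_fault_probability(np.concatenate([healthy, degrading]))
print(round(p[0], 3), round(p[-1], 3))   # low at first, high once the drift sets in
```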

246 citations


Patent
Kirk H. Drees1
31 Mar 2011
TL;DR: In this paper, a controller for a building management system is configured to analyze faults in the building management system using a system of rules; the controller determines a conditional probability for each of a plurality of possible fault causes given the detected fault.
Abstract: A controller for a building management system is configured to analyze faults in the building management system. The controller detects a fault in the building management system by evaluating data of the building management system using a system of rules. The controller determines a conditional probability for each of a plurality of possible fault causes given the detected fault. The controller determines the most likely fault cause by comparing the determined probabilities and electronically reports the most likely fault cause.
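A hedged illustration of the Bayes step the patent describes: given a detected fault, each candidate cause is ranked by its conditional probability, with P(cause | fault) proportional to P(fault | cause) * P(cause). The cause names, priors, and likelihoods below are invented for the example.

```python
def rank_fault_causes(prior, likelihood, detected_fault):
    """Return (cause, posterior) pairs sorted by P(cause | detected_fault)."""
    # Unnormalized posterior: P(detected_fault | cause) * P(cause).
    post = {c: likelihood[c].get(detected_fault, 0.0) * p for c, p in prior.items()}
    total = sum(post.values())
    if total == 0.0:
        return []
    return sorted(((c, p / total) for c, p in post.items()),
                  key=lambda cp: cp[1], reverse=True)

# Invented numbers for illustration only.
prior = {"stuck damper": 0.05, "fouled coil": 0.10, "sensor drift": 0.15}
likelihood = {
    "stuck damper": {"low supply airflow": 0.8},
    "fouled coil":  {"low supply airflow": 0.3},
    "sensor drift": {"low supply airflow": 0.1},
}
ranked = rank_fault_causes(prior, likelihood, "low supply airflow")
print(ranked[0])   # the most likely cause is reported first ("stuck damper" here)
```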

183 citations


Proceedings ArticleDOI
29 Sep 2011
TL;DR: This work thoroughly analyse how clock glitches affect a commercial low-cost processor by performing a large number of experiments on five devices, and explains how typical fault attacks can be mounted on this device, and describes a new attack for which the fault injection is easy and the cryptanalysis trivial.
Abstract: The literature about fault analysis typically describes fault injection mechanisms, e.g. glitches and lasers, and cryptanalytic techniques to exploit faults based on some assumed fault model. Our work narrows the gap between both topics. We thoroughly analyse how clock glitches affect a commercial low-cost processor by performing a large number of experiments on five devices. We observe that the effects of fault injection on two-stage pipeline devices are more complex than commonly reported in the literature. While injecting a fault is relatively easy, injecting an exploitable fault is hard. We further observe that the easiest-to-inject and most reliable fault is instruction replacement, and that random faults do not occur. Finally we explain how typical fault attacks can be mounted on this device, and describe a new attack for which the fault injection is easy and the cryptanalysis trivial.

161 citations


Journal ArticleDOI
TL;DR: In this paper, the discrete hidden Markov model (HMM) is applied to detect and diagnose mechanical faults in machining processes and rotating machinery; the approach is tested and validated successfully using two scenarios: tool wear/fracture and bearing faults.

147 citations


Journal ArticleDOI
TL;DR: This paper proposes a scheme for layout-aware as well as coverage-driven ILS design, where the partitioning of the flip-flops into ILS segments is determined by their geometric locations, whereas the set of flip-flops to be placed in parallel is determined by the minimum incompatibility relations among the corresponding bits of a test set.
Abstract: The Illinois Scan Architecture (ILS) consists of several scan path segments and is useful in reducing test application time and test data volume for high-density chips. In this paper, we propose a scheme for layout-aware as well as coverage-driven ILS design. The partitioning of the flip-flops into ILS segments is determined by their geometric locations, whereas the set of flip-flops to be placed in parallel is determined by the minimum incompatibility relations among the corresponding bits of a test set, to enhance fault coverage in broadcast mode. As a result, the number of serial test patterns is also reduced.

113 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose a fault detection method, based on the three-phase current and voltage waveforms measured when fault events occur in the power transmission-line network, that is able to rapidly detect and locate a fault on power transmission lines.
Abstract: Bridging the gap between the theoretical modeling and the practical implementation is always essential for fault detection, classification, and location methods in a power transmission-line network. In this paper, a novel hybrid framework that is able to rapidly detect and locate a fault on power transmission lines is presented. The proposed algorithm presents a fault discrimination method based on the three-phase current and voltage waveforms measured when fault events occur in the power transmission-line network. Negative-sequence components of the three-phase current and voltage quantities are applied to achieve fast online fault detection. Subsequently, the fault detection method triggers the fault classification and fault-location methods to become active. A variety of methods, including multilevel wavelet transform, principal component analysis, support vector machines, and adaptive structure neural networks, are incorporated into the framework to identify fault type and location at the same time. This paper lays out the fundamental concept of the proposed framework and introduces the methodology of the analytical techniques, a pattern-recognition approach via neural networks and a joint decision-making mechanism. Using a well-trained framework, the tasks of fault detection, classification, and location are accomplished in 1.28 cycles, significantly shorter than the critical fault clearing time.
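The fast detection step above is built on negative-sequence quantities; a short sketch of how the negative-sequence phasor is obtained from three-phase phasors via the symmetrical-component (Fortescue) transform. The example values and the idea of thresholding the negative-sequence magnitude are illustrative.

```python
import numpy as np

A = np.exp(2j * np.pi / 3)   # the 'a' operator: 1 at an angle of 120 degrees

def negative_sequence(ph_a, ph_b, ph_c):
    """Negative-sequence component of a set of three phase phasors."""
    return (ph_a + A**2 * ph_b + A * ph_c) / 3.0

# Balanced (positive-sequence) set: negative-sequence magnitude is ~0.
print(abs(negative_sequence(1.0 + 0j, A**2, A)))    # ~0, no fault indication

# Unbalanced set, e.g. a depressed phase-A quantity during a single-phase fault.
print(abs(negative_sequence(0.3 + 0j, A**2, A)))    # clearly nonzero

# A simple online detector flags a disturbance when this magnitude exceeds a
# small threshold, which is what triggers the classification/location stages.
```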

111 citations


Journal ArticleDOI
TL;DR: In this article, a wide-area backup protection algorithm based on the fault component voltage distribution is proposed to overcome the problems of complex setting and maloperation under flow transfer of conventional backup protection.
Abstract: A new wide-area backup protection algorithm based on the fault component voltage distribution is proposed in this paper. It is helpful to overcome the problems of complex setting and maloperation under flow transfer of conventional backup protection. The measured values of fault component voltage and current at one terminal of the transmission line are applied to estimate the fault component voltage at the other terminal. Then, the fault element can be identified by the ratio between the measured values and the estimated values. In addition, the speed of fault element identification can be accelerated by a faulted area detection scheme. The proposed method has the advantage of easy setting and low requirement for synchronized wide-area data. The studies performed on the IEEE 39-bus system validate the proposed algorithm under various faults and flow transfer.
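A rough sketch of the estimation idea described above, under simplifying assumptions: the line is modelled as a lumped series impedance, the superimposed (fault-component) current is taken as flowing from terminal M toward terminal N, and a simple ratio tolerance stands in for the paper's identification criterion. Values are illustrative.

```python
def estimate_remote_fault_voltage(delta_v_m, delta_i_m, z_line):
    """Fault-component voltage expected at terminal N, computed from the
    fault-component voltage and current measured at terminal M, assuming the
    fault is NOT on line M-N (line modelled as a lumped series impedance)."""
    return delta_v_m - z_line * delta_i_m

def line_is_suspect(measured_dv_n, estimated_dv_n, tol=0.2):
    """Flag line M-N when measured and estimated remote-end values disagree."""
    ratio = abs(measured_dv_n) / max(abs(estimated_dv_n), 1e-9)
    return abs(ratio - 1.0) > tol

# Illustrative per-unit values.
z_line = 0.01 + 0.10j
dv_m, di_m = 0.05 + 0.02j, 0.40 - 0.10j
dv_n_est = estimate_remote_fault_voltage(dv_m, di_m, z_line)
print(line_is_suspect(0.30 + 0.10j, dv_n_est))   # large mismatch -> suspect element
```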

105 citations


Journal ArticleDOI
TL;DR: It is reported that the reference frame theory approach can successfully be applied to real-time fault diagnosis of electric machinery systems as a powerful toolbox to find the magnitude and phase quantities of fault signatures with good precision as well.
Abstract: The reference frame theory constitutes an essential aspect of electric machine analysis and control. In this study, apart from the conventional applications, it is reported that the reference frame theory approach can successfully be applied to real-time fault diagnosis of electric machinery systems, as a powerful toolbox for finding the magnitude and phase of fault signatures with good precision. The basic idea is to convert the associated fault signature to a dc quantity, followed by the computation of the signal's average in the fault reference frame to filter out the rest of the signal harmonics, i.e., its ac components. As a natural consequence, neither a notch filter nor a low-pass filter is required to eliminate the fundamental component or noise content. Since incipient fault mechanisms have been studied for a long time, the motor fault signature frequencies and fault models are well known. Therefore, ignoring all other components, the proposed method focuses only on certain fault signatures in the current spectrum, depending on the examined motor fault. Broken rotor bar and eccentricity faults are experimentally tested online using a TMS320F2812 digital signal processor (DSP) to prove the effectiveness of the proposed method. In this application, only the readily available drive hardware is used, without employing additional components such as analog filters, a signal conditioning board, or external sensors. As the motor drive processing unit, the DSP is utilized both for motor control and for fault detection, providing instantaneous fault information. The proposed algorithm processes the measured data in real time, avoiding the buffering and large memory that would otherwise be needed, which enhances the practicability of the method. Due to the short-time convergence capability of the algorithm, the fault status is updated every second. The immunity of the algorithm against non-ideal cases such as measurement offset errors and phase unbalance is theoretically and experimentally verified. Being a model-independent fault analyzer, the method can be applied to all multiphase and single-phase motors.
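A small sketch of the core idea above: rotating the measured current into a frame spinning at the known fault-signature frequency turns that signature into a DC quantity, so a plain average recovers its magnitude and phase while the fundamental and other harmonics average out. The sampling rate, signal amplitudes, and the 42 Hz sideband below are illustrative.

```python
import numpy as np

def fault_signature_phasor(current, fs, fault_freq):
    """Average the signal in a frame rotating at fault_freq; every other
    frequency component stays AC in that frame and averages toward zero."""
    t = np.arange(len(current)) / fs
    rotated = current * np.exp(-2j * np.pi * fault_freq * t)
    return 2.0 * rotated.mean()              # complex phasor of the signature

# Synthetic stator current: 50 Hz fundamental plus a small 42 Hz sideband of
# the kind associated with broken rotor bars (numbers chosen for illustration).
fs, duration = 5000.0, 2.0
t = np.arange(int(fs * duration)) / fs
i_a = 10.0 * np.cos(2 * np.pi * 50.0 * t) + 0.2 * np.cos(2 * np.pi * 42.0 * t + 0.5)

sig = fault_signature_phasor(i_a, fs, 42.0)
print(abs(sig), np.angle(sig))   # ~0.2 and ~0.5 rad, with no explicit filtering
```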

105 citations


Journal ArticleDOI
TL;DR: On testing 28,800 fault cases with varying fault resistance, fault inception angle, fault distance, load angle, percentage compensation level and source impedance, the performance of the proposed WT-ELM technique is found to be quite promising, and the results indicate that the proposed method is robust to wide variations in system and operating conditions.

78 citations


Journal ArticleDOI
TL;DR: A novel wide-area backup protection algorithm to identify the fault branch based on the fault steady-state component is proposed, and the simulation results for the 10-generator 39-bus system verify that this method is able to easily identify the fault branch with limited measurement points.
Abstract: A novel wide-area backup protection algorithm to identify the fault branch based on the fault steady-state component is proposed. Under normal conditions of the power system, subsets of buses called protection correlation regions (PCRs) are formed on the basis of the network topology and phasor measurement unit (PMU) placement. After a fault occurs, by analyzing the fault steady-state component of the differential current in each PCR, the fault correlation region is confirmed, and then a fault correlation factor (FCF) is calculated in real time to locate the fault branch. The simulation results for the 10-generator 39-bus system verify that this method is able to easily identify the fault branch with limited measurement points.

Proceedings ArticleDOI
24 Jul 2011
TL;DR: In this article, the authors investigated the fault behavior of inverter-interfaced distributed generators in stand-alone networks and showed that the rapid transient response of the inverter control system allows its fault behaviour to be characterised by quasi steady-state equivalent fault models.
Abstract: This paper investigates the fault behaviour of inverter-interfaced distributed generators in stand-alone networks. It is shown that the rapid transient response of the inverter control system allows its fault behaviour to be characterised by quasi steady-state equivalent fault models. The choice of inverter control strategy, control reference frame and the method of active current limiting dominate the fault response, especially in case of unbalanced faults. The proposed fault models can be directly incorporated in conventional fault analysis methods of which an example is given for a faulty islanded microgrid. Model validation is carried out by comparing experimental measurements with results of analytical fault analysis using the developed fault models and PSCAD time domain simulations.

Journal ArticleDOI
TL;DR: In this paper, symbolic dynamic filtering is proposed to mask the effects of sensor noise level variation and magnify the system fault signatures for fault detection in aircraft gas turbine engines, which is tested and validated on the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) test-bed developed by NASA for noisy (i.e., increased variance) sensor signals.
Abstract: An inherent difficulty in sensor-data-driven fault detection is that the detection performance could be drastically reduced under sensor degradation (e.g., drift and noise). Complementary to traditional model-based techniques for fault detection, this paper proposes symbolic dynamic filtering by optimally partitioning the time series data of sensor observation. The objective here is to mask the effects of sensor noise level variation and magnify the system fault signatures. In this regard, the concepts of feature extraction and pattern classification are used for fault detection in aircraft gas turbine engines. The proposed methodology of data-driven fault detection is tested and validated on the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) test-bed developed by NASA for noisy (i.e., increased variance) sensor signals.
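A brief sketch of the symbolization step at the heart of symbolic dynamic filtering: the nominal signal defines a partition of the signal range (equal-frequency bins here, one common choice), incoming data are mapped to symbols, and a distance between symbol statistics serves as the anomaly measure. The alphabet size, the histogram-based statistic, and the distance are simplifications of the full method.

```python
import numpy as np

def learn_partition(nominal, alphabet_size=8):
    """Bin edges giving (approximately) equal-frequency cells on nominal data."""
    qs = np.linspace(0.0, 1.0, alphabet_size + 1)[1:-1]
    return np.quantile(nominal, qs)

def symbol_distribution(signal, edges, alphabet_size=8):
    symbols = np.digitize(signal, edges)                 # map samples to symbols
    hist = np.bincount(symbols, minlength=alphabet_size).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(1)
base = np.sin(np.linspace(0.0, 60.0, 3000))
nominal = base + 0.10 * rng.standard_normal(3000)
faulty = base + 0.10 * rng.standard_normal(3000) + 0.30   # injected bias "fault"

edges = learn_partition(nominal)
p_nom = symbol_distribution(nominal, edges)
p_tst = symbol_distribution(faulty, edges)

# Anomaly measure: total-variation distance between symbol distributions,
# which grows with the fault signature but is insensitive to mild noise.
print(np.abs(p_nom - p_tst).sum() / 2.0)
```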

Journal ArticleDOI
TL;DR: In this article, a method to resolve fault localization problems in power systems was developed based on real-time measurements from phasor measurement units; it mainly uses pattern classification technology and the linear discrimination principle of pattern recognition theory to search for patterns of marked changes in electrical quantities.

Journal ArticleDOI
TL;DR: In this paper, the authors present the application of calculated non-linear voltage sag profiles and voltage sag measurement at primary substation to locate a fault in distribution networks, and the proposed method has been tested under different fault scenarios that include various fault resistance, loading variation and data measurement errors.

Proceedings ArticleDOI
06 Nov 2011
TL;DR: This work presents RAPTOR, a test prioritization algorithm for fault localization, based on reducing the similarity between statement execution patterns as the testing progresses, which is much less complex than previous diagnostic prioritization algorithms.
Abstract: In practically all development processes, regression tests are used to detect the presence of faults after a modification. If faults are detected, a fault localization algorithm can be used to reduce the manual inspection cost. However, while using test case prioritization to enhance the rate of fault detection of the test suite (e.g., statement coverage), the diagnostic information gain per test is not optimal, which results in needless inspection cost during diagnosis. We present RAPTOR, a test prioritization algorithm for fault localization, based on reducing the similarity between statement execution patterns as the testing progresses. Unlike previous diagnostic prioritization algorithms, RAPTOR does not require false negative information, and is much less complex. Experimental results from the Software Infrastructure Repository's benchmarks show that RAPTOR is the best technique under realistic conditions, with average cost reductions of 40% with respect to the next best technique, with negligible impact on fault detection capability.
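RAPTOR itself prioritizes tests by ambiguity-group reduction over statement execution patterns; the sketch below is a simplified greedy stand-in that repeatedly picks the test whose coverage pattern is least similar (by Jaccard similarity) to the patterns already selected, purely to convey the pattern-diversity idea. Test names and coverage sets are made up.

```python
def jaccard(a, b):
    union = a | b
    return len(a & b) / len(union) if union else 1.0

def prioritize_by_pattern_diversity(coverage):
    """coverage: {test_id: set of executed statement ids} -> ordered test list."""
    remaining = dict(coverage)
    order = []
    while remaining:
        if not order:
            # Start with the test covering the most statements.
            best = max(remaining, key=lambda t: len(remaining[t]))
        else:
            # Pick the test least similar to anything already selected.
            best = min(remaining,
                       key=lambda t: max(jaccard(remaining[t], coverage[s])
                                         for s in order))
        order.append(best)
        del remaining[best]
    return order

coverage = {
    "t1": {1, 2, 3, 4},
    "t2": {1, 2, 3},      # nearly duplicates t1, so it is deferred
    "t3": {5, 6},
    "t4": {3, 5},
}
print(prioritize_by_pattern_diversity(coverage))   # ['t1', 't3', 't4', 't2']
```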

Journal ArticleDOI
TL;DR: This paper deals with subspace-method-aided data-driven design of robust fault detection and isolation systems; the application of the proposed method is illustrated by a simulation study on the vehicle lateral dynamic system.

Proceedings ArticleDOI
01 May 2011
TL;DR: A new distributed on-line test mechanism for NoCs is proposed which scales to large-scale networks with general topologies and routing algorithms and achieves 100% fault coverage for the data-path and 85% for the control paths including routing logic, FIFO's control path and the arbiter of a 5×5 router.
Abstract: A new distributed on-line test mechanism for NoCs is proposed which scales to large-scale networks with general topologies and routing algorithms. Each router and its links are tested using neighbors in different phases. Only the router under test is in test mode and all other parts of the NoC are in functional mode. Experimental results show that our on-line test approach can detect stuck-at and short-wire faults in the routers and links. Our approach achieves 100% fault coverage for the data-path and 85% for the control paths including routing logic, FIFO's control path and the arbiter of a 5×5 router. Synthesis results show that the hardware overhead of our test components with TMR (Triple Module Redundancy) support is 20% for covering both stuck-at and short-wire faults and 7% for covering only stuck-at faults in the 5×5 router. Simulation results show that our on-line testing approach has an average latency overhead of 20% and 3% in synthetic traffic and PARSEC traffic benchmarks on an 8×8 NoC, respectively.

Journal ArticleDOI
TL;DR: Evidence-based fusion strategies such as weighted voting, Bayesian, and Dempster–Shafer based fusion can provide complete fault coverage and significant improvement in monitoring performance in situations where no single FDI method offers adequate performance.

Proceedings ArticleDOI
04 Jun 2011
TL;DR: This paper proves that an ultimate upper bound exists on the total number of missed errors, develops a probabilistic model to analyze the distribution of the number of undetected errors and the detection latency, and introduces a system paradigm of restricting all permanent faults' effects to small finite windows of error occurrence.
Abstract: With technology scaling, manufacture-time and in-field permanent faults are becoming a fundamental problem. Multi-core architectures with spares can tolerate them by detecting and isolating faulty cores, but the required fault detection coverage becomes effectively 100% as the number of permanent faults increases. Dual-modular redundancy (DMR) can provide 100% coverage without assuming device-level fault models, but its overhead is excessive. In this paper, we explore a simple and low-overhead mechanism we call Sampling-DMR: run in DMR mode for a small percentage (1% of the time, for example) of each periodic execution window (5 million cycles, for example). Although Sampling-DMR can leave some errors undetected, we argue that the permanent fault coverage is 100% because it can detect all faults eventually. Sampling-DMR thus introduces a system paradigm of restricting all permanent faults' effects to small finite windows of error occurrence. We prove that an ultimate upper bound exists on the total number of missed errors and develop a probabilistic model to analyze the distribution of the number of undetected errors and detection latency. The model is validated using full gate-level fault injection experiments for an actual processor running full application software. Sampling-DMR outperforms conventional techniques in terms of fault coverage, sustains similar detection latency guarantees, and limits energy and performance overheads to less than 2%.
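A toy version of the kind of probabilistic argument sketched above: if the DMR sample exposes a given permanent fault with some probability in each window, the chance the fault remains undetected shrinks geometrically with the number of windows, so coverage approaches 100% eventually. The per-window detection probability used here is a stand-in, not the paper's model.

```python
def prob_still_undetected(p_detect_per_window, n_windows):
    """Probability a permanent fault escapes detection for n_windows windows,
    assuming independent per-window detection probability."""
    return (1.0 - p_detect_per_window) ** n_windows

def expected_detection_latency(p_detect_per_window):
    """Mean number of windows until first detection (geometric distribution)."""
    return 1.0 / p_detect_per_window

# Example: 1% sampling, and a fault that a sampled slice exposes with
# probability 0.01 in any given window (an assumed, illustrative number).
p = 0.01
print(prob_still_undetected(p, 1_000))    # ~4.3e-5: detection is eventually certain
print(expected_detection_latency(p))      # ~100 windows on average
```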

Proceedings ArticleDOI
27 Jun 2011
TL;DR: This paper describes a framework to automatically generate static fault trees from system models specified with SysML and proposes a static fault tree model (SFTM), which can avoid the problems of the dynamic FDEP and PAND gates and can reduce the cost of analysis based on a combinatorial model.
Abstract: Fault tree analysis (FTA) is a traditional reliability analysis technique. In practice, the manual development of fault trees could be costly and error-prone, especially in the case of fault tolerant systems due to the inherent complexities such as various dependencies and interactions among components. Some dynamic fault tree gates, such as Functional Dependency (FDEP) and Priority AND (PAND), are proposed to model the functional and sequential dependencies, respectively. Unfortunately, the potential semantic troubles and limitations of these gates have not been well studied before. In this paper, we describe a framework to automatically generate static fault trees from system models specified with SysML. A reliability configuration model (RCM) and a static fault tree model (SFTM) are proposed to embed system configuration information needed for reliability analysis and error mechanism for fault tree generation, respectively. In the SFTM, the static representations of functional and sequential dependencies with standard Boolean AND and OR gates are proposed, which can avoid the problems of the dynamic FDEP and PAND gates and can reduce the cost of analysis based on a combinatorial model. A fault-tolerant parallel processor (FTTP) example is used to demonstrate our approach.

Journal ArticleDOI
TL;DR: In this article, an iterative fault analysis algorithm for unbalanced three-phase distribution systems that considers a fault resistance estimate is presented, which is composed of two sub-routines, namely the fault resistance and the bus impedance.

Journal ArticleDOI
TL;DR: Fault-handling methods not requiring modification of the FPGA device architecture or user intervention to recover from faults are examined and evaluated against overhead-based and sustainability-based performance metrics such as additional resource requirements, throughput reduction, fault capacity, and fault coverage.
Abstract: The capabilities of current fault-handling techniques for Field Programmable Gate Arrays (FPGAs) are surveyed to develop a descriptive classification ranging from simple passive techniques to robust dynamic methods. Fault-handling methods not requiring modification of the FPGA device architecture or user intervention to recover from faults are examined and evaluated against overhead-based and sustainability-based performance metrics such as additional resource requirements, throughput reduction, fault capacity, and fault coverage. This classification, alongside these performance metrics, forms a standard for confident comparisons.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a fault feature extraction method for power system transmission lines based on single-end measurements, using the time-shift-invariant property of a sinusoidal waveform.
Abstract: This study proposes a novel fault feature extraction that could be used in fault detection and classification schemes for power system transmission lines, based on single-end measurements using the time-shift-invariant property of a sinusoidal waveform. Various types of faults at different locations, fault resistances and fault inception angles on a 400 kV, 361.65 km power system transmission line are investigated. The determinant function is used to extract distinctive fault features over various data window sizes, namely 1/4, 1/2 and a full cycle of post-fault data. In addition, various delays were introduced before taking the post-fault measurements. The performance of the feature extraction scheme was tested on the machine intelligence platform WEKA by using three types of feature selection techniques: information gain, gain ratio and SVM. The results show that the determinant function defined over the phase currents and the neutral current is sufficient to classify ten types of short-circuit faults on doubly fed transmission lines; however, the scheme did not differentiate between three-phase line faults (LLL) and three-phase line-to-ground faults (LLLG), which are treated as the same type of (balanced) fault. An accuracy between 95.95% and 100% is achieved.

Journal Article
TL;DR: The Bee Colony Optimization (BCO) algorithm for fault-coverage-based regression test suite prioritization is presented, and Average Percentage of Fault Detection (APFD) metrics and charts have been used to show the effectiveness of the proposed algorithm.
Abstract: The process of verifying the modified software in the maintenance phase is called regression testing. The size of the regression test suite and its selection process are a complex task for regression testers because of time and budget constraints. In this research paper, the Bee Colony Optimization (BCO) algorithm for fault-coverage-based regression test suite prioritization is presented. In a natural bee colony, there are two types of worker bees, scout bees and forager bees, which are responsible for the development and maintenance of the colony. The BCO algorithm developed for fault-coverage-based prioritization is based on the behavior of these two bees. The algorithm has been formulated to attain maximum fault coverage in minimal execution time per test case, and is demonstrated on two examples whose results are comparable to the optimal solution. Average Percentage of Fault Detection (APFD) metrics and charts have been used to show the effectiveness of the proposed algorithm.
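APFD is the metric used above; it is computed for a prioritized order as APFD = 1 - (TF_1 + ... + TF_m)/(n*m) + 1/(2n), where n is the number of test cases, m the number of faults, and TF_i the 1-based position of the first test detecting fault i. A small sketch, with a made-up fault matrix:

```python
def apfd(order, detects):
    """order: list of test ids in prioritized order.
    detects: {test_id: set of fault ids that test detects}."""
    faults = set().union(*detects.values())
    n, m = len(order), len(faults)
    position = {t: i + 1 for i, t in enumerate(order)}            # 1-based positions
    tf = [min(position[t] for t in order if f in detects[t]) for f in faults]
    return 1.0 - sum(tf) / (n * m) + 1.0 / (2 * n)

# Illustrative fault matrix (not from the paper).
detects = {"t1": {"f1"}, "t2": {"f1", "f2"}, "t3": {"f3"}, "t4": set()}
print(apfd(["t2", "t3", "t1", "t4"], detects))   # 0.79..., a good ordering
print(apfd(["t4", "t1", "t3", "t2"], detects))   # 0.375, a worse ordering
```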

Proceedings ArticleDOI
01 May 2011
TL;DR: A novel ATPG technique where all fault models of interest are concurrently targeted in a single ATPG run is proposed, independent of any special ATPG tool or scan compression technique and requires no change or additional support in an existing ATPG system.
Abstract: ATPG tool generated patterns are a major component of test data for large SOCs. With increasing sizes of chips, higher integration involving IP cores and the need for patterns targeting multiple fault models for better defect coverage in newer technologies, the issues of adequate coverage and reasonable test data volume and application time dominate the economics of test. We address the problem of generating a compact set of test patterns across multiple fault models. Traditional approaches use a separate ATPG run for each fault model and minimize patterns either during pattern generation through static or dynamic compaction, or after pattern generation by simulating all patterns over all fault models for static compaction. We propose a novel ATPG technique where all fault models of interest are concurrently targeted in a single ATPG run. Patterns are generated in small intervals, each consisting of 16, 32 or 64 patterns. In each interval, fault-model-specific ATPG setups generate separate pattern sets for their respective fault models. An effectiveness criterion then selects exactly one of those pattern sets. The selected set covers untargeted faults that would have required the most additional patterns. Pattern generation intervals are repeated until the required coverage for faults of all models of interest is achieved. The sum total of all selected interval pattern sets is the overall test set for the DUT. Experiments on industrial circuits show pattern count reductions of 21% to 68%. The technique is independent of any special ATPG tool or scan compression technique and requires no change or additional support in an existing ATPG system.

Journal ArticleDOI
TL;DR: In this article, an approach for fault localization in closed-loop Discrete Event Systems is proposed, which allows fault localization using a fault-free system model to describe the expected system behavior.

Patent
Xijiang Lin1, Kun-Han Tsai2, Mark Kassab1, Chen Wang1, Janusz Rajski1 
31 Oct 2011
TL;DR: In this paper, a timing-aware automatic test pattern generation (ATPG) is proposed to improve the quality of a test set generated for detecting delay defects or holding time defects.
Abstract: Disclosed herein are exemplary methods, apparatus, and systems for performing timing-aware automatic test pattern generation (ATPG) that can be used, for example, to improve the quality of a test set generated for detecting delay defects or holding time defects. In certain embodiments, timing information derived from various sources (e.g. from Standard Delay Format (SDF) files) is integrated into an ATPG tool. The timing information can be used to guide the test generator to detect the faults through certain paths (e.g., paths having a selected length, or range of lengths, such as the longest or shortest paths). To avoid propagating the faults through similar paths repeatedly, a weighted random method can be used to improve the path coverage during test generation. Experimental results show that significant test quality improvement can be achieved when applying embodiments of timing-aware ATPG to industrial designs.

Proceedings ArticleDOI
01 Sep 2011
TL;DR: This paper focuses on a new approach to significantly improve the overall defect coverage for CMOS-based designs, with the final goal of eliminating any system-level test.
Abstract: This paper focuses on a new approach to significantly improve the overall defect coverage for CMOS-based designs, with the final goal of eliminating any system-level test. The paper describes the pattern generation flow for detecting cell-internal small-delay defects caused by cell-internal resistive bridges. Results have been evaluated on 1,900 library cells of a 32-nm technology. First production test results are presented from evaluating additional defect detections achieved with different fault models on a 45-nm design.

Journal ArticleDOI
TL;DR: A proactive maintenance scheme for fault detection, diagnosis and prediction in electrical valves, based on self-organizing maps and embedded into an FPGA-based platform, is validated with a case study considering a specific valve used for controlling the oil flow in a distribution network.
Abstract: This paper presents a proactive maintenance scheme for fault detection, diagnosis and prediction in electrical valves. The proposed scheme is validated with a case study, considering a specific valve used for controlling the oil flow in a distribution network. The scheme is based on self-organizing maps, which perform fault detection and diagnosis, and temporal self-organizing maps for fault prediction. The adopted fault model considers deviations either in torque, in the valve's gate position or in the opening or closing time. The map which performs fault detection, diagnosis and prediction is trained with the energy spectral density information obtained from the torque and position signals by applying the wavelet packet transform. These signals are provided by a mathematical model devised for the electrical valve. The training is performed by fault injection based on parameter deviations over this same mathematical model. The proposed system is embedded into an FPGA-based platform. Experimental results demonstrate the effectiveness of the proposed approaches.