
Showing papers on "Fault coverage published in 2010"


Book ChapterDOI
17 Aug 2010
TL;DR: It is shown that WDDL-AES is not perfectly secure against setup-time violation attacks, and a masking technique is discussed as a potential countermeasure against the proposed fault-based attack.
Abstract: This paper proposes a new fault-based attack called the Fault Sensitivity Analysis (FSA) attack, which unlike most existing fault-based analyses including Differential Fault Analysis (DFA) does not use values of faulty ciphertexts. Fault sensitivity means the critical condition at which a faulty output begins to exhibit some detectable characteristics, e.g., the clock frequency at which faulty operation begins to occur. We explain that the fault sensitivity exhibits sensitive-data dependency and can be used to retrieve the secret key. This paper presents two practical FSA attacks against two AES hardware implementations on SASEBO-R, PPRM1-AES and WDDL-AES. Different from previous work, we show that WDDL-AES is not perfectly secure against setup-time violation attacks. We also discuss a masking technique as a potential countermeasure against the proposed fault-based attack.
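The notion of fault sensitivity can be illustrated with a small sketch: assuming a hypothetical `encrypt_at_clock` interface to the device, the critical clock period at which faulty outputs first appear is measured per input; an FSA attack would then correlate these data-dependent sensitivities with key hypotheses.

```python
# Sketch only: measure fault sensitivity as the critical clock period at which
# the output first becomes faulty. `encrypt_at_clock(plaintext, period_ns)` is
# a hypothetical interface returning the (possibly faulty) ciphertext produced
# when the core is clocked with the given period.

def fault_sensitivity(plaintext, encrypt_at_clock, reference_ciphertext,
                      period_ns=10.0, step_ns=0.05, min_period_ns=1.0):
    """Shorten the clock period until the output first becomes faulty and
    return that critical period (the fault sensitivity for this input)."""
    while period_ns > min_period_ns:
        if encrypt_at_clock(plaintext, period_ns) != reference_ciphertext:
            return period_ns          # first period at which a fault occurs
        period_ns -= step_ns          # overclock a little more
    return min_period_ns

# An FSA attack repeats this for many plaintexts and correlates the measured
# sensitivities with key-dependent hypotheses, since the critical period
# depends on the data being processed.
```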

219 citations


Journal ArticleDOI
TL;DR: The use of set-membership methods in fault diagnosis (FD) and fault tolerant control (FTC) using a deterministic unknown-but-bounded description of noise and parametric uncertainty (interval models).
Abstract: This paper reviews the use of set-membership methods in fault diagnosis (FD) and fault tolerant control (FTC). Set-membership methods use a deterministic unknown-but-bounded description of noise and parametric uncertainty (interval models). These methods aim at checking the consistency between observed and predicted behaviour by using simple sets to approximate the exact set of possible behaviours (in the parameter or the state space). When an inconsistency is detected between the measured behaviour and the behaviour predicted using a faultless system model, a fault can be indicated. Otherwise, nothing can be stated. The same principle can be used to identify interval models for fault detection and to develop methods for fault tolerance evaluation. Finally, some real applications are used to illustrate the usefulness and performance of set-membership methods for FD and FTC.
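As a rough illustration of the consistency test at the core of these methods, the sketch below checks a scalar measurement against the output interval predicted by a first-order model with interval parameters; the model structure, parameter bounds, and noise bound are assumptions made purely for the example.

```python
# Minimal set-membership consistency check for y[k] = a*y[k-1] + b*u[k],
# where a and b are only known to lie in intervals. A fault is indicated when
# the measurement is inconsistent with every behaviour the model can produce.

def predict_interval(y_prev, u, a_lo, a_hi, b_lo, b_hi, noise_bound):
    # For this scalar form the extreme predictions occur at the corners of
    # the parameter box, so evaluating the vertices is enough.
    candidates = [a * y_prev + b * u
                  for a in (a_lo, a_hi) for b in (b_lo, b_hi)]
    return min(candidates) - noise_bound, max(candidates) + noise_bound

def consistent(y_measured, interval):
    lo, hi = interval
    return lo <= y_measured <= hi     # inconsistency => a fault is indicated

# Example with a in [0.8, 0.9], b in [1.0, 1.2] and bounded noise |e| <= 0.1.
interval = predict_interval(y_prev=2.0, u=1.0, a_lo=0.8, a_hi=0.9,
                            b_lo=1.0, b_hi=1.2, noise_bound=0.1)
print("fault indicated" if not consistent(3.5, interval) else "consistent")
```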

157 citations


Proceedings ArticleDOI
12 Jul 2010
TL;DR: A new, "directed" test-generation technique, which aims to maximize the similarity between the path constraints of the generated tests and those of faulty executions, reaches this level of effectiveness with much smaller test suites, when compared to test generation based on standard concolic execution techniques.
Abstract: Fault-localization techniques that apply statistical analyses to execution data gathered from multiple tests are quite effective when a large test suite is available. However, if no test suite is available, what is the best approach to generate one? This paper investigates the fault-localization effectiveness of test suites generated according to several test-generation techniques based on combined concrete and symbolic (concolic) execution. We evaluate these techniques by applying the Ochiai fault-localization technique to generated test suites in order to localize 35 faults in four PHP Web applications. Our results show that the test-generation techniques under consideration produce test suites with similar high fault-localization effectiveness, when given a large time budget. However, a new, "directed" test-generation technique, which aims to maximize the similarity between the path constraints of the generated tests and those of faulty executions, reaches this level of effectiveness with much smaller test suites. On average, when compared to test generation based on standard concolic execution techniques that aims to maximize code coverage, the new directed technique preserves fault-localization effectiveness while reducing test-suite size by 86.1% and test-suite generation time by 88.6%.
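For reference, the Ochiai metric used in this evaluation ranks a statement by failed(s) / sqrt(totalfailed * (failed(s) + passed(s))), computed from per-statement coverage spectra; the sketch below uses toy counts that are illustrative only.

```python
import math

def ochiai(failed_cov, passed_cov, total_failed):
    """Ochiai suspiciousness of one statement.
    failed_cov: failing tests that execute the statement
    passed_cov: passing tests that execute the statement
    total_failed: total failing tests in the suite"""
    denom = math.sqrt(total_failed * (failed_cov + passed_cov))
    return failed_cov / denom if denom else 0.0

# Rank statements by suspiciousness, most suspicious first.
coverage = {            # statement id -> (#failing tests, #passing tests)
    "s1": (3, 10),
    "s2": (3, 1),
    "s3": (0, 7),
}
total_failed = 3
ranking = sorted(coverage,
                 key=lambda s: ochiai(*coverage[s], total_failed),
                 reverse=True)
print(ranking)          # ['s2', 's1', 's3']
```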

143 citations


Journal ArticleDOI
TL;DR: In this article, the authors use particle swarm optimisation (PSO) for the effective training of an ANN and apply wavelet transforms for predicting the type of fault in an electric power system.
Abstract: Fault classification in electric power systems is vital for the secure operation of power systems. It has to be accurate to facilitate quick repair of the system, improve system availability and reduce operating costs due to mal-operation of relays. Artificial neural networks (ANNs) can be an effective technique to help predict the fault when provided with characteristics of fault currents and the corresponding past decisions as outputs. This paper describes the use of particle swarm optimisation (PSO) for the effective training of an ANN and the application of wavelet transforms for predicting the type of fault. Through wavelet analysis, faults are decomposed into a series of wavelet components, each of which is a time-domain signal that covers a specific octave frequency band. The parameters selected for fault classification are the detail coefficients of all the phase current signals, measured at the sending end of a transmission line. This information is then fed into the ANN for classifying the faults. The proposed PSO-based multi-layer perceptron neural network gives 99.91% fault classification accuracy. Moreover, it is capable of producing faster and more accurate results compared with the back-propagation ANN. Extensive simulation studies were carried out, and a set of results taken from these studies is presented in this paper. The proposed technique, when combined with a wide-area monitoring system, would be an effective tool for detecting and identifying faults in any part of the system.
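A minimal sketch of the wavelet feature-extraction step is shown below, assuming the PyWavelets package and using the energies of the detail coefficients of each phase current as ANN inputs; the wavelet family, decomposition level, and the use of energies are assumptions for illustration rather than the paper's exact parameters.

```python
import numpy as np
import pywt  # PyWavelets

def detail_features(phase_currents, wavelet="db4", level=3):
    """Flat feature vector of detail-coefficient energies, one value per
    decomposition level for each phase current signal."""
    features = []
    for signal in phase_currents:             # e.g. currents of phases A, B, C
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        details = coeffs[1:]                  # [cD_level, ..., cD_1]
        features.extend(float(np.sum(d ** 2)) for d in details)
    return np.asarray(features)

# Synthetic example; in practice the currents measured at the sending end of
# the line would be used and the features fed to the PSO-trained ANN.
t = np.linspace(0, 0.1, 1024)
ia = np.sin(2 * np.pi * 50 * t) + 0.3 * np.random.randn(t.size)
ib = np.sin(2 * np.pi * 50 * t - 2.09)
ic = np.sin(2 * np.pi * 50 * t + 2.09)
print(detail_features([ia, ib, ic]).shape)    # (3 phases x 3 levels,) = (9,)
```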

81 citations


Book ChapterDOI
14 Sep 2010
TL;DR: The MODIFI (MODel-Implemented Fault Injection) tool is presented; it currently targets behaviour models in Simulink, and its fault injection algorithm uses the concept of minimal cut set (MCS) generation.
Abstract: Fault injection is traditionally divided into simulation-based and physical techniques depending on whether faults are injected into hardware models, or into an actual physical system or prototype. Another classification is based on how fault injection mechanisms are implemented. Well known techniques are hardware-implemented fault injection (HIFI) and software-implemented fault injection (SWIFI). For safety analyses during model-based development, fault injection mechanisms can be added directly into models of hardware, models of software or models of systems. This approach is denoted by the authors as model-implemented fault injection. This paper presents the MODIFI (MODel-Implemented Fault Injection) tool. The tool currently targets behaviour models in Simulink. Fault models used by MODIFI are defined using XML according to a specific schema file, and the fault injection algorithm uses the concept of minimal cut set (MCS) generation. First, a user-defined set of single faults is injected to see if the system is tolerant against single faults. Single faults leading to a failure, i.e. a safety requirement violation, are stored in an MCS list together with the corresponding counterexample. These faults are also removed from the fault space used for subsequent experiments. When all single faults have been injected, the effects of multiple faults are investigated, i.e. two or more faults are introduced at the same time. The complete list of MCS is finally used to automatically generate test cases for efficient fault injection on the target system.
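The injection loop described above can be sketched as follows; `run_model_with_faults` and `violates_requirement` are hypothetical hooks standing in for executing the Simulink model and checking a safety requirement, and the cap on fault-set size is an assumption of the sketch.

```python
from itertools import combinations

def minimal_cut_sets(fault_space, run_model_with_faults, violates_requirement,
                     max_order=2):
    """Sketch of the MCS-driven injection loop described in the abstract."""
    mcs = []                         # list of (fault set, counterexample trace)
    remaining = set(fault_space)

    # Pass 1: single faults. Faults leading to a requirement violation are
    # minimal cut sets on their own and are removed from the fault space
    # used in subsequent experiments.
    for fault in sorted(remaining):
        trace = run_model_with_faults({fault})
        if violates_requirement(trace):
            mcs.append(({fault}, trace))
    remaining -= {next(iter(s)) for s, _ in mcs}

    # Later passes: combinations of the surviving faults (two or more faults
    # introduced at the same time).
    for order in range(2, max_order + 1):
        for combo in combinations(sorted(remaining), order):
            trace = run_model_with_faults(set(combo))
            if violates_requirement(trace):
                mcs.append((set(combo), trace))

    return mcs   # finally used to generate test cases for the target system
```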

77 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present an offline fault diagnosis method for industrial gas turbines in a steady-state, where multiple Bayesian models tailored to various fault situations are implemented in one hierarchical model.
Abstract: This paper presents an offline fault diagnosis method for industrial gas turbines in a steady state. Fault diagnosis plays an important role in the efforts of gas turbine owners to shift from preventive maintenance to predictive maintenance, and consequently to reduce the maintenance cost. Numerous techniques have been researched in this field, yet none of them is clearly better than the others or perfectly solves the problem. Fault diagnosis is a challenging problem because there are numerous fault situations that can possibly happen to a gas turbine, and multiple faults may occur in multiple components of the gas turbine simultaneously. An algorithm tailored to one fault situation may not perform well in other fault situations. A general algorithm that performs well in overall fault situations tends to compromise its accuracy in the individual fault situation. In addition to the issue of generality versus accuracy, another challenging aspect of fault diagnosis is that the data used in diagnosis contain errors. The data comprise measurements obtained from gas turbines. Measurements contain random errors and often systematic errors like sensor biases as well. In this paper, to maintain the generality and the accuracy together, multiple Bayesian models tailored to various fault situations are implemented in one hierarchical model. The fault situations include single faults occurring in a component, and multiple faults occurring in more than one component. In addition to faults occurring in the components of a gas turbine, sensor biases are explicitly included in the multiple models so that the magnitude of a bias, if any, can be estimated as well. Results from these multiple Bayesian models are averaged according to how much each model is supported by the data. Gibbs sampling is used for the calculation of the Bayesian models. The presented method is applied to fault diagnosis of a gas turbine that is equipped with a faulty compressor and a biased fuel flow sensor. The presented method successfully diagnoses the magnitudes of the compressor fault and the fuel flow sensor bias with a limited amount of data. It is also shown that averaging multiple models gives rise to more accurate and less uncertain results than using a single general model. By averaging multiple models based on various fault situations, fault diagnosis can be general yet accurate. DOI: 10.1115/1.3204508
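The model-averaging step can be pictured with a small sketch in which each fault-scenario model contributes its estimate weighted by how strongly the data support it; the use of log marginal likelihoods as weights, the model names, and all numbers are assumptions for illustration, and the Gibbs-sampling machinery is omitted entirely.

```python
import numpy as np

def average_models(estimates, log_evidences):
    """Combine per-model fault-magnitude estimates by weighting each model
    with its normalized evidence (how much the data support it).
    estimates: model name -> estimated fault/bias magnitudes
    log_evidences: model name -> log marginal likelihood of the data"""
    names = list(estimates)
    logw = np.array([log_evidences[n] for n in names])
    w = np.exp(logw - logw.max())
    w /= w.sum()                                  # posterior model weights
    combined = sum(wi * np.asarray(estimates[n]) for wi, n in zip(w, names))
    return dict(zip(names, w)), combined

# Purely illustrative values for three fault-scenario models.
weights, estimate = average_models(
    {"compressor_fault": [0.030], "fuel_sensor_bias": [0.012], "healthy": [0.0]},
    {"compressor_fault": -10.2, "fuel_sensor_bias": -12.5, "healthy": -16.0})
print(weights, estimate)
```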

74 citations


Journal ArticleDOI
TL;DR: A novel fault detection and diagnosis method using a dynamic independent component analysis-based approach is introduced that is able to accurately detect and isolate the root causes for each individual fault.
Abstract: In this paper, we introduce a novel fault detection and diagnosis method using a dynamic independent component analysis-based approach. We also present an innovative mechanism for detecting and diagnosing the faults. The proposed approach is able to accurately detect and isolate the root causes for each individual fault. The Tennessee Eastman challenge process is used to demonstrate the much improved performance of our proposed technique in comparison with other currently existing statistical monitoring and fault detection methods.

73 citations


Journal IssueDOI
TL;DR: It is hypothesized that information flows present a good model for such interactions, and a new fault localization technique based on information flow coverage is presented; the technique was compared with three other coverage techniques that use similar-style metrics defined for statements, branches and def-use pairs, respectively.
Abstract: Failures triggered by hard to debug defects usually involve complex interactions between many program elements. This paper hypothesizes that information flows present a good model for such interactions and presents a new fault localization technique based on information flow coverage. Using a test suite, the technique ranks the statements in a program in terms of their likelihood of being faulty by comparing the information flows induced by the failing runs with the ones induced by the passing runs. The ranking of the statements associated with a given flow is primarily determined by contrasting the percentage of failing runs to the percentage of passing runs that induced it. Generally, a higher percentage of failing runs implies a higher rank. To show its potential, the technique was applied to several open-source Java programs and was compared, with respect to its fault localization effectiveness, with three other coverage techniques that use similar style metrics that are defined for statements, branches, and def–use pairs, respectively. The results revealed that information flow, branch, and def–use coverage performed consistently better than statement coverage. In addition, in a considerable number of cases information flow coverage performed better than branch and def–use coverage. Specifically, it was always safer but not always more precise. Copyright © 2009 John Wiley & Sons, Ltd.
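The ranking idea can be sketched as below: each flow is scored by contrasting the fraction of failing runs that induce it with the fraction of passing runs, and a statement inherits the best score among its flows; the exact contrast function and the representation of runs as flow sets are assumptions of this sketch.

```python
def flow_score(flow, failing_runs, passing_runs):
    """Contrast the share of failing runs inducing a flow with the share of
    passing runs inducing it (higher => more suspicious)."""
    f = sum(flow in run for run in failing_runs) / max(len(failing_runs), 1)
    p = sum(flow in run for run in passing_runs) / max(len(passing_runs), 1)
    return f - p

def rank_statements(flows_of_stmt, failing_runs, passing_runs):
    """flows_of_stmt: statement id -> set of information flows it appears in.
    Each run is represented here simply as the set of flows it induced."""
    score = {s: max((flow_score(fl, failing_runs, passing_runs)
                     for fl in flows), default=0.0)
             for s, flows in flows_of_stmt.items()}
    return sorted(score, key=score.get, reverse=True)

# Toy example: flow "x->y" is induced by both failing runs but no passing run.
failing = [{"x->y", "u->v"}, {"x->y"}]
passing = [{"u->v"}, {"u->v"}, set()]
print(rank_statements({"s10": {"x->y"}, "s20": {"u->v"}}, failing, passing))
```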

71 citations


Journal ArticleDOI
01 Mar 2010
TL;DR: This work proposes three strategies to reduce the test inputs in an existing test collection for result inspection and shows that this approach can help developers inspect a smaller subset of test inputs, whose fault-localization effectiveness is close to that of the whole test collection.
Abstract: Testing-based fault-localization (TBFL) approaches often require the availability of high-statement-coverage test suites that sufficiently exercise the areas around the faults. However, in practice, fault localization often starts with a test suite whose quality may not be sufficient to apply TBFL approaches. Recent capture/replay or traditional test-generation tools can be used to acquire a high-statement-coverage test collection (i.e., test inputs only) without expected outputs. But it is expensive or even infeasible for developers to manually inspect the results of so many test inputs. To enable practical application of TBFL approaches, we propose three strategies to reduce the test inputs in an existing test collection for result inspection. These three strategies are based on the execution traces of test runs using the test inputs. With the three strategies, developers can select only a representative subset of the test inputs for result inspection and fault localization. We implemented and applied the three test-input-reduction strategies to a series of benchmarks: the Siemens programs, DC, and TCC. The experimental results show that our approach can help developers inspect the results of a smaller subset (less than 10%) of test inputs, whose fault-localization effectiveness is close to that of the whole test collection.
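The paper's three trace-based strategies are not reproduced here; the sketch below shows two plausible reductions in the same spirit, with hypothetical hooks standing in for the execution-trace data collected during the test runs.

```python
def reduce_by_trace(test_inputs, trace_of):
    """Keep one representative test input per distinct execution trace.
    trace_of: hypothetical hook returning a hashable trace signature,
    e.g. the tuple of executed statement ids, for a test input."""
    representatives, seen = [], set()
    for t in test_inputs:
        sig = trace_of(t)
        if sig not in seen:
            seen.add(sig)
            representatives.append(t)   # only these results get inspected
    return representatives

def reduce_by_coverage(test_inputs, covered_stmts_of):
    """Greedy alternative: keep a test input only if it covers a statement
    not covered by the inputs kept so far (covered_stmts_of returns a set)."""
    kept, covered = [], set()
    for t in test_inputs:
        stmts = covered_stmts_of(t)
        if not stmts <= covered:
            kept.append(t)
            covered |= stmts
    return kept
```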

62 citations


Journal ArticleDOI
01 May 2010
TL;DR: A fault diagnosis method using a timed discrete-event approach based on interval observers is proposed that improves the integration of fault detection and isolation tasks; it is motivated by the problem of detecting and isolating faults in the limnimeters of Barcelona's urban sewer system.
Abstract: This paper proposes a fault diagnosis method using a timed discrete-event approach based on interval observers that improves the integration of fault detection and isolation tasks. The interface between fault detection and fault isolation considers the activation degree and the occurrence time instant of the diagnostic signals using a combination of several theoretical fault signature matrices that store the knowledge of the relationship between diagnostic signals and faults. The fault isolation module is implemented using a timed discrete-event approach that recognizes the occurrence of a fault by identifying a unique sequence of observable events (fault signals). The states and transitions that characterize such a system can be inferred directly from the relation between fault signals and faults. The proposed fault diagnosis approach has been motivated by the problem of detecting and isolating faults in the limnimeters (level meter sensors) of Barcelona's urban sewer system. The results obtained in this case study illustrate the benefits of using the proposed approach in comparison with the standard fault detection and isolation approach.

57 citations


Proceedings ArticleDOI
14 Jul 2010
TL;DR: A grouping-based strategy is proposed that can be applied to various techniques in order to boost their fault localization effectiveness and does not require the technique to be modified in any way.
Abstract: Fault localization is one of the most expensive activities of program debugging, which is why recent years have witnessed the development of many different fault localization techniques. This paper proposes a grouping-based strategy that can be applied to various techniques in order to boost their fault localization effectiveness. The applicability of the strategy is assessed on two techniques, Tarantula and a radial basis function neural network-based technique, across three different sets of programs (the Siemens suite, grep and gzip). The results suggest that the grouping-based strategy is capable of significantly improving fault localization effectiveness and is not limited to any particular fault localization technique. The proposed strategy does not require any additional information beyond what is already collected as input to the fault localization technique, and does not require the technique to be modified in any way.

Proceedings ArticleDOI
07 Nov 2010
TL;DR: A new similarity-based selection technique for state machine-based test case selection is proposed, which includes a new similarity function using triggers and guards on transitions of state machines and a genetic algorithm-based selection algorithm.
Abstract: In recent years, Model-Based Testing (MBT) has attracted an increasingly wide interest from industry and academia. MBT allows automatic generation of a large and comprehensive set of test cases from system models (e.g., state machines), which leads to the systematic testing of the system. However, even when using simple test strategies, applying MBT in large industrial systems often leads to generating large sets of test cases that cannot possibly be executed within time and cost constraints. In this situation, test case selection techniques are employed to select a subset from the entire test suite such that the selected subset conforms to available resources while maximizing fault detection. In this paper, we propose a new similarity-based selection technique for state machine-based test case selection, which includes a new similarity function using triggers and guards on transitions of state machines and a genetic algorithm-based selection algorithm. Applying this technique on an industrial case study, we show that our proposed approach is more effective in detecting real faults than existing alternatives. We also assess the overall benefits of model-based test case selection in our case study by comparing the fault detection rate of the selected subset with the maximum possible fault detection rate of the original test suite.
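A sketch of what a trigger/guard-based similarity between two abstract test cases might look like is given below, together with a naive greedy selection standing in for the genetic algorithm; the Jaccard form of the similarity and the greedy selection are assumptions of the sketch, not the paper's exact definitions.

```python
def trigger_guard_similarity(test_a, test_b):
    """Similarity between two abstract test cases, each given as a list of
    (trigger, guard) pairs taken from the transitions it exercises."""
    a, b = set(test_a), set(test_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)       # Jaccard over (trigger, guard) pairs

def greedy_diverse_subset(tests, k):
    """Naive greedy stand-in for the GA: repeatedly pick the test case that is
    least similar to everything already chosen, maximizing diversity."""
    chosen = [0]                         # indices into `tests`
    while len(chosen) < min(k, len(tests)):
        candidates = [i for i in range(len(tests)) if i not in chosen]
        best = min(candidates,
                   key=lambda i: max(trigger_guard_similarity(tests[i], tests[c])
                                     for c in chosen))
        chosen.append(best)
    return [tests[i] for i in chosen]

suite = [[("btnA", "x>0"), ("btnB", "x<5")],
         [("btnA", "x>0"), ("btnC", "y==1")],
         [("btnD", "z!=0")]]
print(greedy_diverse_subset(suite, 2))   # picks the two most dissimilar cases
```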

Proceedings ArticleDOI
07 Jun 2010
TL;DR: PeerWatch, a fault detection and diagnosis tool specially designed for virtualized consolidation systems, is proposed that is robust to system dynamics, compared to traditional fault detection techniques and thus can avoid a lot of false alarms.
Abstract: Server virtualization is now becoming an effective means to consolidate numerous applications into a small number of machines. While such a strategy can lead to significant savings in power and hardware cost, it may complicate the fault management task due to the increasing scalability and complexity in the virtualized environment. In this paper, we propose PeerWatch, a fault detection and diagnosis tool specially designed for virtualized consolidation systems. Based on the observation that each application usually reveals itself in multiple instances in the virtualized data center, PeerWatch introduces a statistical technique, canonical correlation analysis (CCA), to extract the correlated characteristics between multiple application instances. The extracted correlations are utilized to examine the status of each application instance. If some correlations drop significantly during the operation, PeerWatch regards that the system is in faulty situation and produces alarms. PeerWatch is robust to system dynamics, compared to traditional fault detection techniques and thus can avoid a lot of false alarms. Once the fault has been detected, PeerWatch proposes a diagnosis process that also takes advantage of the multiple instances feature in the virtualized systems. The diagnosis combines the spatial and temporal analysis on the measurement data across multiple instances before and after the failure. As a result, PeerWatch can obtain much accurate clues about the fault root cause. Experimental results in our virtualized testbed system have demonstrated the effectiveness of the proposed detection and diagnosis tool.
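The correlation-tracking idea can be sketched with scikit-learn's CCA: fit canonical directions on metric windows from two instances of the same application during normal operation, then flag a significant drop in the canonical correlation; the metric layout, window sizes, and the 50% drop threshold are assumptions, not PeerWatch's actual parameters.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def canonical_correlation(X, Y, n_components=1):
    """Largest canonical correlation between two measurement windows
    (rows = time samples, columns = metrics such as CPU, memory, I/O)."""
    cca = CCA(n_components=n_components)
    Xc, Yc = cca.fit_transform(X, Y)
    return abs(np.corrcoef(Xc[:, 0], Yc[:, 0])[0, 1])

def check_instance_pair(X_train, Y_train, X_now, Y_now, drop_ratio=0.5):
    """Alarm when the correlation seen during normal operation drops
    significantly in the current window (possible fault in one instance)."""
    baseline = canonical_correlation(X_train, Y_train)
    current = canonical_correlation(X_now, Y_now)
    return current < drop_ratio * baseline, baseline, current

# Toy example: two instances tracking the same workload, then one degrades.
rng = np.random.default_rng(0)
load = rng.normal(size=(200, 1))
X = np.hstack([load + 0.1 * rng.normal(size=(200, 1)) for _ in range(3)])
Y = np.hstack([load + 0.1 * rng.normal(size=(200, 1)) for _ in range(3)])
Y_bad = rng.normal(size=(200, 3))        # this instance no longer tracks load
print(check_instance_pair(X, Y, X, Y_bad)[0])   # True -> alarm
```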

Proceedings ArticleDOI
01 Nov 2010
TL;DR: A diagnostic automatic test pattern generation (DATPG) system is constructed by adding new algorithmic capabilities to conventional ATPG and fault simulation programs to generate tests to distinguish fault pairs, i.e., two faults must have different output responses.
Abstract: A diagnostic automatic test pattern generation (DATPG) system is constructed by adding new algorithmic capabilities to conventional ATPG and fault simulation programs. The DATPG aims to generate tests that distinguish fault pairs, i.e., the two faults must have different output responses. Given a fault pair, a new single fault is modeled by modifying the circuit netlist. Then we use a conventional ATPG to target that fault. If a test is generated, it distinguishes the given fault pair. A fast diagnostic fault simulation algorithm is implemented to find undistinguished fault pairs from a fault list for a given test vector set. We use a proposed diagnostic coverage (DC) metric, defined as the ratio of the number of fault groups to the total number of faults. The diagnostic ATPG system starts by generating conventional fault coverage vectors. Those vectors are then simulated to determine the DC, followed by repeated applications of diagnostic test generation and simulation. We observe improved DC in all benchmark circuits.
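Under the DC definition given above, the metric can be computed from fault-simulation responses as sketched below; `response_of` is a hypothetical fault-simulation hook returning the circuit's output response for one injected fault and one test vector.

```python
from collections import defaultdict

def diagnostic_coverage(faults, test_vectors, response_of):
    """Faults with identical output responses to every test vector fall into
    the same (undistinguished) group; DC = number of groups / number of faults."""
    groups = defaultdict(list)
    for fault in faults:
        signature = tuple(response_of(fault, v) for v in test_vectors)
        groups[signature].append(fault)
    dc = len(groups) / len(faults)
    undistinguished = [g for g in groups.values() if len(g) > 1]
    return dc, undistinguished    # groups to target with diagnostic ATPG next
```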

Proceedings ArticleDOI
11 Sep 2010
TL;DR: DAFT is presented, a fast, safe, and memory efficient transient fault detection framework for commodity multicore systems that replicates computation across multiple cores and schedules fault detection off the critical path.
Abstract: Higher transistor counts, lower voltage levels, and reduced noise margin increase the susceptibility of multicore processors to transient faults. Redundant hardware modules can detect such errors, but software transient fault detection techniques are more appealing for their low cost and flexibility. Recent software proposals double register pressure or memory usage, or are too slow in the absence of hardware extensions, preventing widespread acceptance. This paper presents DAFT, a fast, safe, and memory efficient transient fault detection framework for commodity multicore systems. DAFT replicates computation across multiple cores and schedules fault detection off the critical path. Where possible, values are speculated to be correct and only communicated to the redundant thread at essential program points. DAFT is implemented in the LLVM compiler framework and evaluated using SPEC CPU2000 and SPEC CPU2006 benchmarks on a commodity multicore system. Results demonstrate DAFT's high performance and broad fault coverage. Speculation allows DAFT to reduce the performance overhead of software redundant multithreading from an average of 200% to 38% with no degradation of fault coverage.
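DAFT itself instruments compiled code inside LLVM, so the following is only a conceptual sketch, written in Python for readability, of the structure the abstract describes: a leading thread forwards values at essential program points while a trailing thread recomputes and checks them off the critical path.

```python
import queue
import threading

def compute(x):
    return (x * 31 + 7) % 1000003        # stand-in for the duplicated work

def leading(inputs, channel):
    for x in inputs:
        value = compute(x)
        channel.put((x, value))          # communicate at an essential point
        # the leading thread continues immediately; checking is off its path
    channel.put(None)                    # end-of-stream marker

def trailing(channel):
    while (item := channel.get()) is not None:
        x, value = item
        if compute(x) != value:          # redundant recomputation and check
            raise RuntimeError(f"transient fault detected for input {x}")

channel = queue.Queue()
checker = threading.Thread(target=trailing, args=(channel,))
checker.start()
leading(range(10000), channel)
checker.join()
print("no fault detected")
```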

Journal ArticleDOI
TL;DR: In this article, a new one-end fault location method for overhead transmission lines embedded in a general n-bus interconnected power system is presented; it is achieved by using both an accurate distributed-parameters model for the faulted transmission line and a two-bus Thevenin equivalent network model that accurately accounts for its interconnectivity.

Journal ArticleDOI
TL;DR: An approach for conformance testing of implementations required to enforce access control policies specified using the Temporal Role-Based Access Control (TRBAC) model is proposed, which uses Timed Input-Output Automata to model the behavior specified by a TRBAC policy.
Abstract: We propose an approach for conformance testing of implementations required to enforce access control policies specified using the Temporal Role-Based Access Control (TRBAC) model. The proposed approach uses Timed Input-Output Automata (TIOA) to model the behavior specified by a TRBAC policy. The TIOA model is transformed to a deterministic se-FSA model that captures any temporal constraint by using two special events Set and Exp. The modified W-method and integer-programming-based approach are used to construct a conformance test suite from the transformed model. The conformance test suite so generated provides complete fault coverage with respect to the proposed fault model for TRBAC specifications.

Journal ArticleDOI
TL;DR: Main advantages of the proposed test implementation are an architecture with no visible scan chain, 100% fault coverage on crypto-cores with negligible area overhead, availability of pseudorandom test sources, and very low aliasing response compaction for other cores.
Abstract: This paper describes a generic built-in self-test strategy for devices implementing symmetric encryption algorithms. Taking advantage of the inner iterative structures of crypto-cores, test facilities are easily set up for circular self-test of the crypto-cores, and for built-in pseudorandom test generation and response analysis for other cores in the host device. The main advantages of the proposed test implementation are an architecture with no visible scan chain, 100% fault coverage on crypto-cores with negligible area overhead, availability of pseudorandom test sources, and very low aliasing response compaction for other cores.

Book ChapterDOI
Allon Adir, Amir Nahir, Avi Ziv, Charles Meissner, John A. Schumann
04 Oct 2010
TL;DR: This work proposes a new method for reaching coverage closure in post-silicon validation, based on executing the post-silicon exercisers on a pre-silicon acceleration platform, collecting coverage information from these runs, and harvesting important test templates based on their coverage.
Abstract: Obtaining coverage information in post-silicon validation is a difficult task. Adding coverage monitors to the silicon is costly in terms of timing, power, and area, and thus even if feasible, is limited to a small number of coverage monitors. We propose a new method for reaching coverage closure in post-silicon validation. The method is based on executing the post-silicon exercisers on a pre-silicon acceleration platform, collecting coverage information from these runs, and harvesting important test templates based on their coverage. This method was used in the verification of IBM's POWER7 processor. It contributed to the overall high-quality verification of the processor, and specifically to the post-silicon validation and bring-up.

Proceedings ArticleDOI
17 Dec 2010
TL;DR: An algorithm is proposed for system level test case prioritization (TCP) from software requirement specification to improve user satisfaction with quality software and also to improve the rate of severe fault detection.
Abstract: Test case prioritization involves scheduling test cases in an order that increases the effectiveness in achieving some performance goals. One of the most important performance goals is the rate of fault detection. Test cases should run in an order that increases the possibility of fault detection and also that detects faults at the earliest in its testing life cycle. In this paper, an algorithm is proposed for system level test case prioritization (TCP) from software requirement specification to improve user satisfaction with quality software and also to improve the rate of severe fault detection. The proposed model prioritizes the system test cases based on the three factors: customer priority, changes in requirement, implementation complexity. The proposed prioritization technique is validated with two different sets of industrial projects and the results show that the proposed prioritization technique improves the rate of severe fault detection.
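A sketch of one way the three factors could be combined into a test case ordering is shown below; the factor scales, weights, and the mapping from requirements to test cases are assumptions of the sketch, not the paper's validated model.

```python
def requirement_weight(customer_priority, requirement_changes,
                       implementation_complexity, w=(0.5, 0.3, 0.2)):
    """Combine the three factors named in the abstract into one requirement
    weight; factor values and weights here are illustrative only."""
    return (w[0] * customer_priority +
            w[1] * requirement_changes +
            w[2] * implementation_complexity)

def prioritize(test_cases, requirements_of, factors):
    """Order system test cases by the total weight of the requirements they
    exercise. requirements_of: test case id -> list of requirement ids."""
    weight = {r: requirement_weight(*f) for r, f in factors.items()}
    return sorted(test_cases,
                  key=lambda t: sum(weight[r] for r in requirements_of[t]),
                  reverse=True)

factors = {"R1": (9, 2, 5), "R2": (3, 8, 2), "R3": (6, 1, 9)}
req_map = {"tc1": ["R1"], "tc2": ["R2", "R3"], "tc3": ["R3"]}
print(prioritize(["tc1", "tc2", "tc3"], req_map, factors))  # tc2, tc1, tc3
```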

Journal ArticleDOI
TL;DR: A new FDI principle is proposed which exploits the separation of sets that characterise healthy system operation from sets thatcharacterise transitions from healthy to faulty behaviour to provide pre-checkable conditions for guaranteed fault tolerance of the overall multi-controller scheme.
Abstract: We present a fault tolerant control strategy based on a new principle for actuator fault diagnosis. The scheme employs a standard bank of observers which match the different fault situations that can occur in the plant. Each of these observers has an associated estimation error with distinctive dynamics when an estimator matches the current fault situation of the plant. Based on the information from each observer, a fault detection and isolation (FDI) module is able to reconfigure the control loop by selecting the appropriate control law from a bank of controllers, each of them designed to stabilise and achieve reference tracking for one of the given fault models. The main contribution of this article is to propose a new FDI principle which exploits the separation of sets that characterise healthy system operation from sets that characterise transitions from healthy to faulty behaviour. The new principle makes it possible to provide pre-checkable conditions for guaranteed fault tolerance of the overall multi-controller scheme.

Journal ArticleDOI
TL;DR: An algorithm that checks whether a given test suite is complete is given and it is demonstrated that the algorithm can be used for relatively large FSMs and test suites.
Abstract: In testing from a Finite State Machine (FSM), the generation of test suites which guarantee full fault detection, known as complete test suites, has been a long-standing research topic. In this paper, we present conditions that are sufficient for a test suite to be complete. We demonstrate that the existing conditions are special cases of the proposed ones. An algorithm that checks whether a given test suite is complete is given. The experimental results show that the algorithm can be used for relatively large FSMs and test suites.

Journal ArticleDOI
TL;DR: Solving the generalized test derivation problem, sufficient conditions for test suite completeness weaker than the existing ones are formulated and used to elaborate an algorithm that can be used both for extending user-defined test suites to achieve the desired fault coverage and for test generation.
Abstract: In this paper, we consider a classical problem of complete test generation for deterministic finite-state machines (FSMs) in a more general setting. The first generalization is that the number of states in implementation FSMs can even be smaller than that of the specification FSM. Previous work deals only with the case when the implementation FSMs are allowed to have the same number of states as the specification FSM. This generalization provides more options to the test designer: when traditional methods trigger a test explosion for large specification machines, tests with a lower, but yet guaranteed, fault coverage can still be generated. The second generalization is that tests can be generated starting with a user-defined test suite, by incrementally extending it until the desired fault coverage is achieved. Solving the generalized test derivation problem, we formulate sufficient conditions for test suite completeness weaker than the existing ones and use them to elaborate an algorithm that can be used both for extending user-defined test suites to achieve the desired fault coverage and for test generation. We present the experimental results that indicate that the proposed algorithm allows obtaining a trade-off between the length and fault coverage of test suites.

Proceedings ArticleDOI
Sejun Kim, Jongmoon Baik
16 Sep 2010
TL;DR: A new test case prioritization technique is introduced that considers both coverage and historical fault information by incorporating a fault localization technique; it adjusts the priorities of test cases that previously revealed faults while keeping test cases with high coverage at high priority.
Abstract: Prior coverage-based test case prioritization techniques aim to increase fault detection rates by ordering the test cases according to some coverage criteria. However, in practice, since detected faults are typically removed, test cases that have already covered the previously executed areas might not perform as well as expected, irrespective of their coverage. In this case, the ordering of test cases based on coverage information might not be effective. In this paper, we introduce a new test case prioritization technique that considers both coverage and historical fault information by incorporating a fault localization technique. Using the historical fault detection information of test cases, our approach adjusts the priorities of fault-revealing test cases while maintaining test cases with high coverage at high priority. Our approach can reduce the total cost of executing the entire test suite(s) and enables faults to be detected earlier in the testing process, improving testing effectiveness compared to prior coverage-based techniques.

Proceedings ArticleDOI
01 Nov 2010
TL;DR: In this article, the authors present nGFSIM, a GPU-based fault simulator for stuck-at faults which can report the fault coverage of one- to n-detection for any specified integer n using only a single run of fault simulation.
Abstract: We present nGFSIM, a GPU-based fault simulator for stuck-at faults which can report the fault coverage of one- to n-detection for any specified integer n using only a single run of fault simulation. nGFSIM, which exploits the massive parallelism of the GPU architecture and optimizes memory access and usage, enables accelerated fault simulation without the need for fault dropping. We show that nGFSIM offers a 25X speedup in comparison with a commercial tool and enables new applications in test selection.
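The "one- to n-detection" coverage that such a simulator can report is simply the fraction of faults detected by at least k test vectors for each k up to n; a sketch of that final bookkeeping step (not of the GPU simulation itself) is shown below.

```python
def n_detection_coverage(detection_counts, n):
    """detection_counts: per-fault number of test vectors that detect it,
    which a simulator without fault dropping can accumulate.
    Returns coverage[k-1] = fraction of faults detected by at least k tests."""
    total = len(detection_counts)
    return [sum(c >= k for c in detection_counts) / total
            for k in range(1, n + 1)]

# Example: six faults and their detection counts over a vector set.
print(n_detection_coverage([0, 1, 1, 3, 5, 2], n=3))
# [0.833..., 0.5, 0.333...] = 1-, 2- and 3-detection fault coverage
```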

Journal ArticleDOI
TL;DR: This paper proposes a semiautomated methodology to derive a reduced test suite from a given test suite, while keeping the fault detection effectiveness unchanged, and applies the mutation analysis technique to measure its effectiveness.
Abstract: Test redundancy detection reduces test maintenance costs and also ensures the integrity of test suites. One of the most widely used approaches for this purpose is based on coverage information. In a recent work, we have shown that although this information can be useful in detecting redundant tests, it may suffer from a large number of false-positive errors, that is, a test case being identified as redundant while it really is not. In this paper, we propose a semiautomated methodology to derive a reduced test suite from a given test suite, while keeping the fault detection effectiveness unchanged. To evaluate the methodology, we apply the mutation analysis technique to measure the fault detection effectiveness of the reduced test suite of a real Java project. The results confirm that the proposed manual interactive inspection process leads to a reduced test suite with the same fault detection ability as the original test suite.
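Mutation analysis measures effectiveness as the share of non-equivalent mutants killed by a suite; the reduced suite is acceptable when its mutation score matches that of the original suite. A minimal sketch, with a hypothetical `run_suite_on` hook executing a suite against one mutant, follows.

```python
def mutation_score(mutants, test_suite, run_suite_on, equivalent=frozenset()):
    """Fraction of non-equivalent mutants killed by the test suite.
    run_suite_on: hypothetical hook returning True if at least one test in
    the suite fails on the given mutant (i.e. the mutant is killed)."""
    scored = [m for m in mutants if m not in equivalent]
    killed = sum(run_suite_on(m, test_suite) for m in scored)
    return killed / len(scored) if scored else 1.0

def same_effectiveness(mutants, full_suite, reduced_suite, run_suite_on):
    """Check that reduction did not lower fault detection effectiveness."""
    return (mutation_score(mutants, reduced_suite, run_suite_on) >=
            mutation_score(mutants, full_suite, run_suite_on))
```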

Proceedings ArticleDOI
28 Mar 2010
TL;DR: A functional-based test method is presented that integrates the test of Network-on-Chip interconnects and routers; it is scalable to any network size and can reach up to 100% coverage of interconnect faults and 92.75% of router faults.
Abstract: In this work, a functional-based test method is presented that integrates the test of Network-on-Chip interconnects and routers. The proposed approach is scalable to any network size. Experimental results show that fault coverage can reach up to 100% of interconnect faults and 92.75% of router faults, while test sequence lengths remain affordable.

Journal ArticleDOI
TL;DR: A novel low-overhead approach for design for test and built-in self-test of analog and mixed-mode blocks, derived from the oscillation-based test framework is presented, enhanced by the use of complex oscillation regimes, improving fault coverage and enabling forms of parametric or specification-based testing.
Abstract: Testing is a critical factor for modern large-scale mixed-mode circuits. Strategies for mitigating test cost and duration include moving significant parts of the test hardware on-chip. This paper presents a novel low-overhead approach for design for test and built-in self-test of analog and mixed-mode blocks, derived from the oscillation-based test framework. The latter is enhanced by the use of complex oscillation regimes, improving fault coverage and enabling forms of parametric or specification-based testing. This technique, initially proposed targeting large subsystems such as A/D converters, is here illustrated at a much finer granularity, considering its application to analog-filter stages, and also proving its suitability to backfit existing designs. The simple case of a switched-capacitor second-order bandpass stage is used for illustration discussing how deviations from nominal gain, central frequency, and quality factor can be detected from measurements not requiring A/D stages. A sample design is validated by simulations run at the layout level, including Monte Carlo analysis and simulations based on random fault injections.

Journal ArticleDOI
TL;DR: In this paper, a fault location approach based on pattern recognition of the inherent high-frequency noise associated with the switching events of converters is presented, which can be applied toward the development of an effective ground fault location system for converter-dominated dc systems.
Abstract: Finding the phase-to-ground faults in ungrounded distribution systems is very difficult and time consuming. In recent years, the advances in power electronics favor dc distribution, particularly for transportation systems such as ships. Therefore, this paper presents a fault location approach which is based on the pattern recognition of inherent high-frequency noise associated with the switching events of converters. This paper demonstrates the feasibility of the approach through a hardware laboratory test. Specifically, a low-voltage dc system, representative of the salient high-frequency behavior of a real zonal electrical distribution system for ships, was used. Closely matching the results obtained from a computer simulation model of the circuitry, the experimental test results show the ability of the approach to appropriately differentiate various fault locations in the laboratory environment. It is concluded that the approach is feasible and can be applied toward the development of an effective ground fault location system for converter-dominated dc systems.

Proceedings ArticleDOI
20 May 2010
TL;DR: In this article, the authors describe the location results for several faults on three separate transmission lines and highlight the unique characteristics of Reason fault locators to overcome common problems of other TW solutions existing in the market.
Abstract: Transmission utility companies continuously strive for high availability of their lines. When a line is out of service following a permanent fault, a significant amount of the time needed to restore the line can be attributed to locating the point of the fault. This paper shows how this time can be significantly reduced using highly accurate traveling wave (TW) fault locators. It describes location results for several faults on three separate transmission lines. The collected data are compared to those obtained by applying one- and two-end impedance location algorithms to the same faults. It also highlights the unique characteristics of Reason fault locators in overcoming common problems of other TW solutions existing in the market.
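For context, the classic two-end traveling-wave principle (not necessarily the exact algorithm of the locators discussed here) derives the fault distance from time-synchronized arrivals of the first wavefronts at both line ends; the propagation velocity used below is an assumed typical value for overhead lines.

```python
def two_end_tw_location(t1_s, t2_s, line_length_km, v_km_per_s=2.95e5):
    """Two-end traveling-wave location: with GPS-synchronized arrival times
    t1 and t2 of the first wavefronts at the two ends of a line of length L,
        d = (L + v * (t1 - t2)) / 2
    gives the distance d from end 1 (v is the wave propagation velocity)."""
    return (line_length_km + v_km_per_s * (t1_s - t2_s)) / 2.0

# Fault 60 km from end 1 on a 200 km line: wavefronts arrive at t1 = 60/v
# and t2 = 140/v after fault inception.
v = 2.95e5
print(round(two_end_tw_location(60 / v, 140 / v, 200.0), 3))   # 60.0 km
```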