
Showing papers on "Fault coverage" published in 2018


Journal ArticleDOI
TL;DR: An integrated framework combining fault classification and location is proposed by applying an innovative machine-learning algorithm: the summation-wavelet extreme learning machine (SW-ELM) that integrates feature extraction in the learning process and is successfully applied to transmission line fault diagnosis.
Abstract: Accurate and timely diagnosis of transmission line faults is key for reliable operation of power systems. Existing fault-diagnosis methods rely on expert knowledge or on extensive feature extraction that is itself highly dependent on expert knowledge. Additionally, most methods for fault diagnosis of transmission lines require multiple separate subalgorithms for fault classification and location that perform each function independently and sequentially. In this research, an integrated framework combining fault classification and location is proposed by applying an innovative machine-learning algorithm: the summation-wavelet extreme learning machine (SW-ELM), which integrates feature extraction in the learning process. As a further contribution, an extension of the SW-ELM, the summation-Gaussian extreme learning machine (SG-ELM), is proposed and successfully applied to transmission line fault diagnosis. SG-ELM is fully self-learning and does not require ad hoc feature extraction, making it deployable with minimal expert subjectivity. The developed framework is applied to three transmission-line topologies without any prior parameter tuning or ad hoc feature extraction. Evaluations on a simulated dataset show that the proposed method can diagnose faults within a single cycle, remain immune to variations in fault resistance and inception angle, and deliver high accuracy on both tasks of fault diagnosis: fault-type classification and fault-location estimation.
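For intuition, here is a minimal sketch of the extreme learning machine idea that SW-ELM builds on: hidden-layer weights are random and fixed, and only the output weights are solved analytically. The wavelet/Gaussian dual activations and the fault-feature pipeline of the paper are omitted, and all shapes and names below are illustrative.

```python
import numpy as np

def elm_train(X, Y, n_hidden=64, seed=0):
    """Basic ELM: random, fixed hidden-layer weights; output weights
    obtained in closed form by least squares (no backpropagation)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden activations
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)  # analytic output layer
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: map waveform features to a one-hot fault-type label.
X = np.random.rand(200, 6)                    # 6 hypothetical features
Y = np.eye(4)[np.random.randint(0, 4, 200)]   # 4 hypothetical classes
W, b, beta = elm_train(X, Y)
fault_class = elm_predict(X, W, b, beta).argmax(axis=1)
```

The single least-squares solve is what makes training fast enough to avoid the iterative tuning the abstract argues against.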

168 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose new fault indices that detect and locate open-phase faults without additional hardware; the method proves to be simple and independent of the operating point, control technique, and drive parameters.
Abstract: Fault tolerance is highly valued in industry for applications with high-reliability requirements. Due to their inherent fault-tolerant capability against open-phase faults (OPFs), drives with multiple three-phase windings are ideal candidates for such applications, and for this reason many efforts have been devoted to the development of different fault-tolerant control strategies. Fault detection is, however, a prior and mandatory stage in the creation of fault-tolerant drives, and the study of specific OPF detection methods for six-phase drives is still scarce. Taking advantage of the secondary currents (so-called x-y currents) that are unique to multiphase machines, this study proposes new fault indices that detect and locate OPFs without additional hardware. The method proves to be simple and independent of the operating point, control technique, and drive parameters. Comparative experimental results confirm the capability of the proposed method to achieve fast detection times with good robustness.
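For background, the sketch below shows how the secondary x-y currents can be obtained from the six phase currents with a standard amplitude-invariant vector-space-decomposition (VSD) transform; it assumes an asymmetrical six-phase machine (two three-phase sets displaced by 30 degrees) and is illustrative only, not the paper's specific fault indices.

```python
import numpy as np

# Phase angles, order a1, b1, c1, a2, b2, c2 (second set shifted 30 deg).
theta = np.array([0.0, 2.0, 4.0, 0.5, 2.5, 4.5]) * np.pi / 3

# VSD rows: the first two project onto the torque-producing alpha-beta
# plane, the last two onto the secondary x-y plane the indices build on.
T = (1.0 / 3.0) * np.vstack([np.cos(theta), np.sin(theta),
                             np.cos(5 * theta), np.sin(5 * theta)])

def xy_magnitude(i_phase):
    """i_phase: the six instantaneous phase currents. The x-y current
    magnitude is near zero in healthy, balanced operation and becomes
    distorted when a phase opens, which an OPF index can key on."""
    i_alpha, i_beta, i_x, i_y = T @ i_phase
    return np.hypot(i_x, i_y)
```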

100 citations


Journal ArticleDOI
TL;DR: By perturbing test patterns/responses and protecting the Obfuscation Key, the proposed architecture is proven to be robust against existing noninvasive scan-based attacks, and can protect all scan data from attackers in foundry, assembly, and system development without compromising testability.
Abstract: Scan-based test is commonly used to increase testability and fault coverage; however, it is also known to be a liability for chip security. Research has shown that intellectual property (IP) or secret keys can be leaked through scan-based attacks, which can be performed by entities within the supply chain. In this paper, we propose a design and test methodology against scan-based attacks throughout the supply chain, which includes a dynamically obfuscated scan (DOS) for protecting IP/integrated circuits (ICs). By perturbing test patterns/responses and protecting the Obfuscation Key, the proposed architecture is proven to be robust against existing noninvasive scan-based attacks, and can protect all scan data from attackers in foundry, assembly, and system development without compromising testability. Further, a novel test methodology cooperating with the DOS design is also proposed, which offers full flexibility in pattern application. Finally, detailed security and experimental analyses have been performed on ITC and industrial benchmarks. As demonstrated by the simulation results, the proposed architecture can be easily plugged into EDA-generated scan chains without a noticeable impact on conventional IC design, manufacturing, and test flow. The results demonstrate that the proposed methodology can protect chips from existing brute-force, differential, and other scan-based attacks that target the Obfuscation Key. Furthermore, the proposed design incurs low overhead in area, power consumption, and pattern generation time, and has no impact on test time.
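To illustrate the general principle of scan-data obfuscation (not the paper's DOS architecture), the sketch below XORs scan-out bits with a key-seeded LFSR keystream; the tester, who knows the key, recovers the true responses, while an observer at the scan port sees only perturbed data. The LFSR width and taps are arbitrary placeholders.

```python
def lfsr_stream(state, n_bits, width=16, taps=(0, 2, 3, 5)):
    """Fibonacci LFSR keystream (taps chosen arbitrarily here,
    not a maximal-length design)."""
    bits = []
    for _ in range(n_bits):
        bits.append(state & 1)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1           # XOR of tapped bits
        state = (state >> 1) | (fb << (width - 1))
    return bits

def obfuscate(scan_bits, key):
    """XOR scan data with the keystream; applying it twice with the
    same key restores the original bits (XOR is an involution)."""
    return [b ^ s for b, s in zip(scan_bits, lfsr_stream(key, len(scan_bits)))]

response = [1, 0, 1, 1, 0, 0, 1, 0]
protected = obfuscate(response, key=0xACE1)
assert obfuscate(protected, key=0xACE1) == response
```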

91 citations


Journal ArticleDOI
TL;DR: The proposed model trades off the total customer interruption cost against FI-related costs, including capital investment, installation, and maintenance costs, and guarantees a globally optimal solution achieved in an effective runtime.
Abstract: The fault indicator (FI) plays a crucial role in enhancing service reliability in distribution systems. This device brings substantial benefits to the fault management procedure by speeding up fault location. This paper develops a new optimization model to optimally deploy FIs in distribution systems. The proposed model considers a tradeoff between the total customer interruption cost and FI-related costs, including capital investment, installation, and maintenance costs. As the main contribution of this paper, the problem is formulated as a mixed-integer program, which guarantees that a globally optimal solution is achieved in an effective runtime. Moreover, the model takes a pragmatic fault-location procedure into account, which results in more reliable solutions. The effectiveness of the proposed method is scrutinized through various case studies and sensitivity analyses on a test system. In addition, the applicability of the model in practice is appraised by applying it to a real-life distribution network. The resultant outcomes demonstrate the integrity of the proposed model.
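A drastically simplified illustration of the kind of mixed-integer program involved, using the open-source PuLP modeler: binary variables choose FI locations, and each fault scenario incurs an interruption penalty unless some installed FI localizes it. The paper's actual formulation, costs, and network data are far richer; everything below is hypothetical.

```python
# pip install pulp
import pulp

sections = range(5)                                   # candidate FI sites
faults = {0: [0], 1: [0, 1], 2: [1, 2], 3: [3], 4: [3, 4]}  # fault -> covering FIs
c_fi = 1.0                                            # annualized FI cost
penalty = {0: 3.0, 1: 2.0, 2: 4.0, 3: 1.5, 4: 2.5}    # interruption cost

m = pulp.LpProblem("fi_placement", pulp.LpMinimize)
x = pulp.LpVariable.dicts("install", sections, cat="Binary")
y = pulp.LpVariable.dicts("located", faults, cat="Binary")

for i, covers in faults.items():
    m += y[i] <= pulp.lpSum(x[j] for j in covers)     # located only if covered

m += (pulp.lpSum(c_fi * x[j] for j in sections)
      + pulp.lpSum(penalty[i] * (1 - y[i]) for i in faults))

m.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [j for j in sections if x[j].value() == 1]
```

The binary variables and linear constraints are what let an off-the-shelf branch-and-bound solver certify the global optimum the abstract claims.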

53 citations


Journal ArticleDOI
TL;DR: A genetic algorithm-based framework is proposed that integrates software fault localization techniques and focuses on reusing test specifications and input values whenever feasible; test cases generated this way can be easily reused between different products of the same family and help reduce the overall testing and debugging cost.
Abstract: In response to the highly competitive market and the pressure to cost-effectively release good-quality software, companies have adopted the concept of the software product line to reduce development cost. However, testing and debugging of each product, even from the same family, is still done independently, which can be very expensive. To solve this problem, we need to explore how test cases generated for one product can be used for another. We propose a genetic algorithm-based framework which integrates software fault localization techniques and focuses on reusing test specifications and input values whenever feasible. Case studies using four software product lines and eight fault localization techniques were conducted to demonstrate the effectiveness of our framework. Discussions of factors that may affect the effectiveness of the proposed framework are also presented. Our results indicate that test cases generated in such a way can be easily reused (with appropriate conversion) between different products of the same family and help reduce the overall testing and debugging cost.

49 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a review of the principles of fault location and indication techniques and their application considerations; to gain further insight into the strengths and limitations of each method, a comparative analysis is carried out.

45 citations


Journal ArticleDOI
TL;DR: A recognition method for distribution line fault types was proposed based on the analysis of time-frequency features of the fault waveform; results indicated that the recognition success rate reached 90%, verifying the feasibility of using time-frequency characteristics of the fault waveform to recognize distribution line fault types.
Abstract: Accurate recognition of distribution line fault types can provide directional guidance for line operation and maintenance personnel. Based on the analysis of time-frequency features of the fault waveform, a recognition method for distribution line fault types is proposed in this paper. Through modeling and theoretical analysis of the waveforms of different fault types, characteristic parameters were put forward that characterize the waveforms from three aspects: time domain, frequency domain, and electric arc. Calculation formulas for extracting the characteristic parameters from fault waveform data were proposed, recognition logic was established on the basis of multi-parameter fusion, and automatic recognition of distribution line fault types caused by different factors was then realized through detection and classification of the characteristic parameters of the input waveform data. Finally, 136 groups of field fault waveform data provided by the Electric Power Research Institute were used for closed-loop control and verification of the algorithm; the results indicated that the recognition success rate reached 90%, which verified the feasibility of using time-frequency characteristics of the fault waveform to recognize distribution line fault types.
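As a flavor of time/frequency characteristic parameters, a toy extractor is sketched below; the paper's actual parameters, arc-related features, and recognition thresholds are not reproduced here, so these choices are illustrative only.

```python
import numpy as np

def waveform_features(i, fs=10_000, f0=50):
    """Toy descriptors of a recorded fault-current window: RMS and
    crest factor (time domain) plus the strongest non-fundamental
    spectral component (frequency domain)."""
    rms = np.sqrt(np.mean(i ** 2))
    crest = np.max(np.abs(i)) / rms
    spec = np.abs(np.fft.rfft(i)) / len(i)
    freqs = np.fft.rfftfreq(len(i), 1 / fs)
    k0 = np.argmin(np.abs(freqs - f0))      # fundamental bin
    spec = spec.copy()
    spec[max(k0 - 2, 0):k0 + 3] = 0         # mask the fundamental
    dominant_residual = freqs[np.argmax(spec)]
    return rms, crest, dominant_residual
```

Recognition logic of the kind the paper describes would then threshold and fuse several such parameters per fault type.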

37 citations


Journal ArticleDOI
TL;DR: In this article, a fault location scheme for three-terminal untransposed double-circuit transmission lines utilizing synchronized voltage and current measurements obtained by GPS is proposed, taking into consideration the distributed line model and the mutual coupling effect between the parallel lines to obtain accurate results.

29 citations


Journal ArticleDOI
TL;DR: Two approaches, one involving random search and the other directed search, have been proposed and validated on benchmark circuits considering missing-gate faults (complete and partial), bridging faults, and stuck-at faults, with optimal coverage and reduced computational effort.
Abstract: Low-power circuit design has been one of the major growing concerns in integrated circuit technology. Reversible circuit (RC) design is a promising future domain in computing which provides the benefit of lower power consumption. With the increase in the number of gates and input variables, circuits become complex, and fault testing becomes crucial to ensuring highly reliable operation. Various fault detection methods based on exhaustive test vector search have been proposed in the literature. With increasing circuit complexity, a faster test generation method providing optimal coverage becomes desirable. In this paper, a genetic algorithm-based heuristic test set generation method for fault detection in RCs is proposed which avoids the need for an exhaustive search. Two approaches, one involving random search and the other directed search, have been proposed and validated on benchmark circuits considering missing-gate faults (complete and partial), bridging faults, and stuck-at faults, with optimal coverage and reduced computational effort.
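A minimal sketch of a genetic algorithm searching for a compact test set; the reversible-circuit fault simulator is stubbed out as a `detects(vector, fault)` predicate, which is a placeholder rather than the paper's implementation.

```python
import random

def coverage(test_set, faults, detects):
    """Fraction of faults detected by at least one vector in the set."""
    return sum(any(detects(v, f) for v in test_set) for f in faults) / len(faults)

def ga_tests(n_inputs, faults, detects, pop=30, gens=50, set_size=8):
    rand_vec = lambda: random.getrandbits(n_inputs)   # one input vector
    popn = [[rand_vec() for _ in range(set_size)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda s: coverage(s, faults, detects), reverse=True)
        elite = popn[: pop // 2]                      # truncation selection
        children = []
        for _ in range(pop - len(elite)):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, set_size)
            child = a[:cut] + b[cut:]                 # one-point crossover
            if random.random() < 0.2:                 # mutate one vector
                child[random.randrange(set_size)] = rand_vec()
            children.append(child)
        popn = elite + children
    return max(popn, key=lambda s: coverage(s, faults, detects))
```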

29 citations


Journal ArticleDOI
TL;DR: Aiming at applications where simplicity and low computational effort are required and fast dynamic response is not imperative, an open-loop volts/hertz (V/f) compensation strategy is discussed in order to keep rated operation even after the occurrence of the fault, assuring the maintenance of sinusoidal flux.
Abstract: Considering that so far all studies regarding multiphase drive fault tolerance performance have been carried out using conventional two-level inverters, this study discusses the fault tolerance performance of a six-phase drive system based on a dual inverter under single-, two-, or three-phase open-circuit faults. Aiming at applications where simplicity and low computational effort are required and fast dynamic response is not imperative, an open-loop volts/hertz (V/f) compensation strategy is discussed in order to keep rated operation even after the occurrence of the fault, assuring the maintenance of sinusoidal flux. The six-phase machine mathematical model after the fault is detailed and utilised to elaborate the compensation strategy. Simulation and experimental results show the validity and feasibility of the discussed solution.

20 citations


Journal ArticleDOI
TL;DR: This paper presents an adaptive test flow for mixed-signal circuits that has comprehensive user-defined parameters to span the tradeoff between test time savings and fault coverage so as to select an advantageous point on the curve based on test economics.
Abstract: The standard approach in industry for post-manufacturing testing of mixed-signal circuits is to measure the performances that are included in the data sheet. Despite being accurate and straightforward, this approach involves a high test time since there are numerous performances that need to be measured sequentially by switching the circuit into different test configurations. Adaptive test is a new test paradigm that still adheres to the standard approach, but it adjusts it on-the-fly to the particularities of the circuit under test so as to better control test time and to achieve robust outlier detection. In this paper, we present an adaptive test flow for mixed-signal circuits that has comprehensive user-defined parameters to span the tradeoff between test time savings and fault coverage so as to select an advantageous point on the curve based on test economics. The flow also provides robust test escape risk estimation so as to add confidence to the test process. The proposed idea is demonstrated on a sizable production dataset from a large mixed-signal circuit.

Proceedings ArticleDOI
Harry H. Chen
16 Apr 2018
TL;DR: The aim is to stimulate new ideas and directions for future research in system-based testing as upcoming 5G/IoT/AI-based applications penetrate deeper and more pervasively into the authors' daily lives.
Abstract: The steady march of Moore's Law in semiconductors has enabled the creation of ever more complex systems with electronics playing a central role. As a result, thorough testing of individual components is no longer adequate to ensure overall system performance, quality, and reliability. The rising importance of system-level test (SLT) to supplement traditional component structural test has gained wide recognition recently. To understand this trend, we examine where traditional testing falls short and where SLT fills the gap. In many cases, system failures are the result of complex component interactions leading to abnormal scenarios not attributable to simple, single root causes. Rather than being confined by traditional gate-level fault models, it might be more appropriate to develop a system-level fault model derived as an emergent property of complex systems. New definitions of fault coverage and how to automate high-coverage test generation are obvious challenges in the system domain. Recent developments in design verification, which faces similar challenges in dealing with system complexity, may offer possible solutions for SLT to draw on. The aim is to stimulate new ideas and directions for future research in system-based testing as upcoming 5G/IoT/AI-based applications penetrate deeper and more pervasively into our daily lives.

Book ChapterDOI
01 Jan 2018
TL;DR: An overview of software fault prediction using machine-learning techniques to predict the occurrence of faults is presented, along with the conventional techniques, aiming to describe the problem of fault proneness.
Abstract: Machine-learning techniques are used to find defects, faults, ambiguities, and bad smells in order to achieve quality, maintainability, and reusability in software. Software fault prediction techniques predict software faults by using statistical techniques; however, machine-learning techniques are also valuable in detecting software faults. This paper presents an overview of software fault prediction using machine-learning techniques to predict the occurrence of faults. It also presents the conventional techniques and aims to describe the problem of fault proneness.

Proceedings ArticleDOI
28 May 2018
TL;DR: A Pareto-based Multi-Objective Harmony Search approach is proposed for regression test case selection from an existing test suite to achieve test adequacy criteria; the results of statistical tests indicate significant improvement over existing approaches.
Abstract: Regression testing is a way of catching bugs in new builds and releases to avoid product risks. Corrective, progressive, retest-all, and selective regression testing are strategies for performing regression testing. Retesting all existing test cases is one of the most reliable approaches, but it is costly in terms of time and effort. This limitation opens up scope for optimizing regression testing cost by selecting only a subset of test cases that can detect faults in optimal time and effort. This paper proposes a Pareto-based Multi-Objective Harmony Search approach for regression test case selection from an existing test suite to achieve given test adequacy criteria. Fault coverage, unique faults covered, and algorithm execution time are utilised as performance measures for the optimization criteria. The proposed approach is evaluated against Bat Search and Cuckoo Search optimization. The results of statistical tests indicate significant improvement over existing approaches.
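The Pareto-filtering step at the heart of such multi-objective selection can be sketched compactly; the harmony-search improvisation loop that generates candidate subsets is omitted, and the objective tuple is an assumption based on the measures named in the abstract.

```python
def dominates(a, b):
    """Objective tuples oriented so higher is better (negate execution
    time). a dominates b if it is at least as good everywhere and
    strictly better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(candidates, objectives):
    """candidates: test-case subsets; objectives(s) might return
    (fault_coverage, unique_faults_covered, -exec_time)."""
    scored = [(s, objectives(s)) for s in candidates]
    return [s for s, obj in scored
            if not any(dominates(o, obj) for _, o in scored)]
```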

Proceedings ArticleDOI
01 Dec 2018
TL;DR: This work establishes that even without the knowledge of the faulty ciphertexts, one can still perform differential fault analysis attacks, given the availability of side-channel information.
Abstract: Redundancy-based countermeasures against fault attacks are a popular choice in security-critical commercial products, owing to their high fault coverage and applicability to safety/reliability. In this paper, we propose a combined attack on such countermeasures. The attack assumes a random byte/nibble fault model with the existence of side-channel leakage of the final comparison, and no knowledge of the faulty ciphertext. Unlike the previously proposed biased/multiple fault attacks, we need to corrupt only one computation branch. Both analytical and experimental evaluations of this attack strategy are presented on software implementations of two state-of-the-art block ciphers, AES and PRESENT, on an ATmega328P microcontroller, via side-channel measurements and laser-based fault injection. Moreover, this work establishes that even without knowledge of the faulty ciphertexts, one can still perform differential fault analysis attacks, given the availability of side-channel information.

Proceedings ArticleDOI
02 Jul 2018
TL;DR: A method for evaluating the fault coverage that can be achieved using an application program is proposed and some guidelines for improving the achieved fault coverage are provided.
Abstract: General-Purpose Graphics Processing Units (GPGPUs) are increasingly used in safety-critical applications such as automotive ones. Hence, techniques are required to test them during the operational phase with respect to possible permanent faults arising when the device is already deployed in the field. Functional tests adopting Software-based Self-test (SBST) are an effective solution since they provide benefits in terms of intrusiveness, flexibility, and test duration. While functional test code addressing the several computational cores composing a GPGPU can be developed by resorting to known methods devised for CPUs, for other modules that are typical of a GPGPU we still lack effective solutions. This paper focuses on one of the most relevant such modules: the scheduler core, which is in charge of managing the different scalar computational cores and the executed threads. First, we propose a method for evaluating the fault coverage that can be achieved using an application program. Then, we provide some guidelines for improving the achieved fault coverage. Experimental results are provided on an open-source VHDL model of a GPGPU.

Journal ArticleDOI
TL;DR: In this article, the authors proposed runtime adaptive scrubbing (RAS), a multilayered error correction and detection scheme with three modes of operation enabled by an area-efficient configurable encoder for encoding packets on the switch-to-switch (s2s) layer.
Abstract: As aggressive scaling continues to push multiprocessor system-on-chips (MPSoCs) to new limits, complex hardware structures combined with stringent area and power constraints will continue to diminish reliability. Waning reliability in integrated circuits will increase susceptibility to transient and permanent faults. There is an urgent demand for adaptive error correction coding (ECC) schemes in networks-on-chip to provide fault tolerance and improve the overall resiliency of MPSoC architectures. The goal of adaptive ECC schemes should be to maximize power savings when faults are infrequent and to increase application speedup by boosting fault coverage when faults are frequent. In this paper, we propose runtime adaptive scrubbing (RAS), a novel multilayered error correction and detection scheme with three modes of operation enabled by an area-efficient configurable encoder for encoding packets on the switch-to-switch (s2s) layer, thus preventing faults from accumulating up the network stack and onto the end-to-end layer. As fault rates fluctuate, we propose a dynamic methodology for improving fault localization and intelligently adapting fault coverage on demand to sustain graceful network degradation. RAS successfully improves network resiliency, fault localization, and fault coverage compared to traditional static s2s schemes. Simulation results demonstrate that static RAS improves network speedup by 10% for Splash-2/PARSEC benchmarks on an 8×8 mesh network while reducing area overhead by 14% and incurring on average a 6.6% power penalty by boosting fault tolerance when fault rates increase. Further, our dynamic RAS scheme maintains 97.88% of network performance for real applications while incurring a 20% power penalty.
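For context, here is a single-error-correcting Hamming(7,4) encoder/corrector of the kind a lightweight switch-to-switch scheme could apply per flit; RAS's configurable multi-mode encoder is more elaborate, so this is background rather than the paper's design.

```python
def hamming74_encode(d):
    """4-bit value -> 7-bit codeword, bit layout p1 p2 d1 p3 d2 d3 d4
    (position 1 is the MSB)."""
    d1, d2, d3, d4 = (d >> 3) & 1, (d >> 2) & 1, (d >> 1) & 1, d & 1
    p1 = d1 ^ d2 ^ d4                 # covers positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4                 # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4                 # covers positions 4,5,6,7
    return (p1 << 6) | (p2 << 5) | (d1 << 4) | (p3 << 3) | (d2 << 2) | (d3 << 1) | d4

def hamming74_correct(c):
    """Returns (corrected codeword, syndrome); a nonzero syndrome is
    the 1-based position of a single flipped bit."""
    b = [(c >> (6 - i)) & 1 for i in range(7)]        # b[0] = position 1
    s1 = b[0] ^ b[2] ^ b[4] ^ b[6]
    s2 = b[1] ^ b[2] ^ b[5] ^ b[6]
    s3 = b[3] ^ b[4] ^ b[5] ^ b[6]
    syndrome = (s3 << 2) | (s2 << 1) | s1
    if syndrome:
        b[syndrome - 1] ^= 1                          # flip the bad bit
    return sum(bit << (6 - i) for i, bit in enumerate(b)), syndrome
```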

Journal ArticleDOI
TL;DR: The procedure has the ability to identify tests in the pool that are effective for test compaction even when they do not increase the fault coverage, and is designed for the case where multicycle functional broadside tests are extracted from functional test sequences.
Abstract: This study describes a static test compaction procedure that is applicable in the scenario where (i) a large pool of tests can be generated efficiently, but (ii) test compaction that modifies tests, and covering procedures, are not applicable, and (iii) reverse-order fault simulation is not sufficient for test compaction. The procedure has the ability to identify tests in the pool that are effective for test compaction even when they do not increase the fault coverage. This ability is achieved using only fault simulation with fault dropping. The procedure is designed for the case where multicycle functional broadside tests are extracted from functional test sequences. The use of multicycle tests results in higher levels of test compaction than is possible with two-cycle functional broadside tests. It also adds another dimension to the procedure, which needs to select the number of clock cycles for every test.
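The baseline fault-simulation-with-fault-dropping loop that such procedures build on looks roughly as follows; the paper's key extensions (retaining tests that do not raise coverage, and choosing a cycle count per multicycle test) sit on top of this and are not shown. `detects` stands in for a fault simulator.

```python
def compact(tests, faults, detects):
    """Greedy static compaction: simulate each candidate test against
    the remaining undetected faults only, keep it if it detects any,
    and drop the newly detected faults from further simulation."""
    remaining = set(faults)
    kept = []
    for t in tests:                  # pool order matters in practice
        newly = {f for f in remaining if detects(t, f)}
        if newly:
            kept.append(t)
            remaining -= newly       # fault dropping
    return kept, remaining           # remaining = still-undetected faults
```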

Proceedings ArticleDOI
01 Oct 2018
TL;DR: This paper describes an approach based on Software-based Self-test, which is currently being adopted within the MaMMoTH-Up project, targeting the development of an innovative COTS-based system to be used on the Ariane5 launcher.
Abstract: One of the current trends in space electronics is towards considering the adoption of COTS components, mainly to widen the spectrum of available products. When substituting space-qualified components with COTS ones, a major challenge lies in guaranteeing the same level of reliability. To achieve this goal, a mix of different solutions can be considered, including effective test techniques able to guarantee a high level of permanent fault coverage while matching several constraints in terms of system accessibility and hardware complexity. In this paper, we describe an approach based on Software-based Self-test, which is currently being adopted within the MaMMoTH-Up project, targeting the development of an innovative COTS-based system to be used on the Ariane5 launcher. The approach aims at testing the OR1200 processor adopted in the system, combined with new and effective techniques for identifying the safe faults. Results also include a comparison between functional and structural test approaches.

Journal ArticleDOI
TL;DR: A QCA-based low-power design of a Polar encoder circuit at the nanoscale is demonstrated, the stuck-at-fault effect on the generation of valid Polar codes is explored, and the simulation results prove the design accuracy of the encoding circuit.

Journal ArticleDOI
TL;DR: A fully software-based method is presented to increase the reliability of COTS equipment against transient faults by utilizing task-level redundancy in the operating system; the method can be used in embedded systems without any hardware, software, or information redundancy.

Journal ArticleDOI
TL;DR: A new methodology for efficient hardware testing of RHS is proposed that considerably decreases not only the number of faults but also the test patterns needed for testing, significantly reducing the time and cost of the testing process.
Abstract: This paper deals with the test of a reconfigurable hardware system (RHS). The latter is a hardware device that allows the hardware resources to be changed at runtime in order to modify the system functions and therefore dynamically adapt the system to its environment. The increasing functional complexity of embedded systems and the transition to RHS make hardware testing a challenging task, especially under the constraint of providing high quality at low cost. Considering that the hardware test represents a key cost factor in a production process, an optimal test strategy can be advantageous in the competitive industrial market. Accordingly, this paper introduces a new methodology for efficient hardware testing of RHS. For an RHS, the number of stuck-at faults can be very large, which leads to a significant slowdown in the testing process. Because of the redundancy of faults between the different circuits composing an RHS, the proposed methodology aims at minimizing the number of faults using the inter-circuit relationships and consequently at providing an optimal fault set that can be effectively used for testing. Efficient techniques for test generation and test set validation are proposed to provide the test patterns for the fault set reduced by inter-circuit fault collapsing. The application of the generated test patterns is typically sufficient to provide overall fault coverage. The proposed methodology is implemented in a new visual environment named TnTest. An experimental study confirms and validates the expected findings. Note to Practitioners—This paper addresses possible challenges for future generations of adaptive embedded systems. It proposes an original methodology for efficient reconfigurable hardware system (RHS) testing. The main objective is to significantly reduce the time and cost needed for the testing process. For an RHS, the number of stuck-at faults can be very large, which can cause a major slowdown in the hardware test. Based on the inter-circuit relations existing between the different circuits composing an RHS, the proposed methodology considerably decreases not only the number of faults but also the test patterns needed for testing. The application of the generated test patterns is typically sufficient to provide overall fault coverage. The proposed methodology is implemented in a new visual software environment named TnTest, which is capable of providing the smallest fault set as well as the efficient test set that can be effectively used for testing. This environment can be applied to test any embedded device deployed in applications based on flexible technologies. It can also be useful in manufacturing industries for improving the production process in terms of time and cost.

Proceedings ArticleDOI
01 Sep 2018
TL;DR: MTF-Storm is a highly effective fuzzer for industrial systems employing Modbus/TCP connectivity that achieves high fault coverage, while offering high performance and quick testing of the System-Under-Test (SUT).
Abstract: MTF-Storm is a highly effective fuzzer for industrial systems employing Modbus/TCP connectivity. It achieves high fault coverage, while offering high performance and quick testing of the System-Under-Test (SUT). Analogously to its predecessor MTF, MTF-Storm operates in 3 phases: reconnaissance, fuzz testing and failure detection. Reconnaissance identifies the memory organization of the SUT and the supported functionality, enabling selection and synthesis of fuzz testing sequences that are effective for the specific SUT. MTF-Storm develops its test sequences systematically, starting with single field tests and proceeding with combined field tests, adopting techniques for automated combinatorial software testing and reducing the test space through partitioning field value ranges. MTF-Storm has been used to evaluate 9 different Modbus/TCP implementations and has identified issues with all of them, ranging from out-of-spec responses to successful denial-of-service attacks and crashes.
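A minimal illustration of single-field Modbus/TCP fuzzing in the spirit of MTF-Storm (not its actual code): build a well-formed MBAP header, mutate one PDU field across boundary values, and treat out-of-spec or absent responses as findings. Never point such a tool at production equipment.

```python
import random
import socket
import struct

def modbus_frame(tid, unit, func, payload):
    """Modbus/TCP ADU: MBAP header (transaction id, protocol id = 0,
    length = unit-id byte + PDU length) followed by the PDU."""
    pdu = struct.pack("B", func) + payload
    mbap = struct.pack(">HHHB", tid, 0, len(pdu) + 1, unit)
    return mbap + pdu

def fuzz_read_holding(tid):
    """Fuzz one field of Read Holding Registers (0x03): the quantity,
    swept across boundary values (the legal maximum is 125)."""
    qty = random.choice([0, 1, 125, 126, 2000, 0xFFFF])
    return modbus_frame(tid, unit=1, func=0x03,
                        payload=struct.pack(">HH", 0, qty))

def send_one(host, frame, port=502, timeout=1.0):
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(frame)
        try:
            return s.recv(260)       # longest legal Modbus/TCP ADU is 260 B
        except socket.timeout:
            return None              # silence may indicate a crash or DoS
```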

Journal ArticleDOI
TL;DR: It is shown that faulty quantum circuits under the widely accepted single-fault assumption can be fully characterized by the (single) faulty gate and the corresponding fault model, which makes it possible to efficiently determine test input states as well as a measurement strategy for fault detection and diagnosis.
Abstract: Detection and isolation of faults is a crucial step in the physical realization of quantum circuits. Even though quantum gates and circuits compute reversible functions, the standard techniques of automatic test pattern generation (ATPG) for classical reversible circuits are not directly applicable to quantum circuits. For faulty quantum circuits under the widely accepted single fault assumption, we show that their behavior can be fully characterized by the (single) faulty gate and the corresponding fault model. This allows us to efficiently determine test input states as well as measurement strategy for fault detection and diagnosis. Building on top of these, we design randomized algorithms which are able to detect every nontrivial single-gate fault with minimal probability of error. We also describe similar algorithms for fault diagnosis. We evaluate our algorithms by the number of output samples that needs to be collected and the probability of error. Both of these can be related to the eigenvalues of the operators corresponding to the circuit gates. We experimentally compare all our strategies with the state-of-the-art ATPG techniques for quantum circuits under the “single missing faulty gate” model and demonstrate that significant improvement is possible if we can exploit the quantum nature of circuits.

Proceedings ArticleDOI
12 Mar 2018
TL;DR: Experiments are reported showing that in typical embedded safety-critical systems the number of on-line functionally untestable faults is often far from negligible and depends on many parameters, including the application code run by the processor.
Abstract: When microprocessor cores are used in safety-critical applications, in-field tests must be performed to reach the target reliability figures. In turn, the in-field test must be organized so that it achieves sufficient fault coverage. The fault list considered for computing the fault coverage should only include testable faults, i.e., faults which may cause a failure under the operating conditions. Hence, single permanent faults that under the operating conditions cannot be excited, or do not propagate to any output, or both, should be removed from the list. These faults, called on-line functionally untestable faults, require a significant effort to be identified. The contribution of this paper is twofold. First, it reports experiments showing that in typical embedded safety-critical systems their number is often far from negligible and depends on many parameters, including the application code run by the processor. Second, it provides a semi-automated approach to their identification. Experimental results on a representative microprocessor are reported.

Journal ArticleDOI
TL;DR: The results indicate that the proposed method provides 75–98% fault coverage and enables a drastic reduction in search space, ranging from 41.5 to 95.5%, for the selection of candidate ATMR modules, with no compromise on the area overhead reduction.
Abstract: Area overhead reduction in conventional triple modular redundancy (TMR) by using approximate modules has been proposed in the literature. However, the vulnerability of approximate TMR (ATMR) to critical inputs, where faults can lead to errors at the output, is yet to be studied. Here, the focus is on identifying the critical input space through automatic test pattern generation and making it unavailable to the module-approximation technique of ATMR, which involves prime implicant reduction and expansion. The results indicate that the proposed method provides 75–98% fault coverage, which amounts to up to a 43.8% improvement over that achieved previously. The input-vulnerability-aware approach enables a drastic reduction in search space, ranging from 41.5 to 95.5%, for the selection of candidate ATMR modules, and no compromise on the area overhead reduction is observed.
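A toy illustration of ATMR voting and of what a "critical input" is: on inputs where an approximate module already disagrees with the exact one, a single fault in either of the two agreeing modules flips the vote. The 1-bit functions below are hypothetical.

```python
def majority(a, b, c):
    """Bitwise 2-of-3 voter."""
    return (a & b) | (a & c) | (b & c)

# One exact module and two cheaper approximations of a 3-bit majority
# function; their error sets ({2, 4} and {3, 6}) are disjoint, so the
# fault-free vote always matches the exact output.
exact   = lambda x: int(bin(x).count("1") >= 2)
approx1 = lambda x: ((x >> 2) & 1) | ((x >> 1) & 1)   # wrong on x = 2, 4
approx2 = lambda x: ((x >> 2) & 1) & (x & 1)          # wrong on x = 3, 6

def atmr(x):
    return majority(exact(x), approx1(x), approx2(x))

# Critical inputs: any disagreement leaves only two modules holding the
# correct value, so one fault there can corrupt the output. These are
# the inputs an ATPG-based analysis would screen.
critical = [x for x in range(8)
            if not (exact(x) == approx1(x) == approx2(x))]   # [2, 3, 4, 6]
```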

Journal ArticleDOI
TL;DR: This article synthesises issues related to an emerging area of self-healing technologies that links software and hardware mitigation strategies, comprises self-recovery or self-repair capability, and focuses on system resilience and recovery from fault events.

Journal ArticleDOI
TL;DR: Simulation results show that the fault coverage of dc tests can be estimated sufficiently accurately with simulation time being reduced by 10X for the benchmark circuit, or by approximately the number of SCCs.
Abstract: In analog fault simulation, the number-one challenge is that simulation time can grow rapidly and become prohibitive as the circuit size becomes large. This paper proposes a systematic method to significantly improve the time efficiency of estimating fault coverage in analog fault simulation. In the proposed method, a circuit under test (CUT) is first partitioned into independent sub-circuits. This is accomplished by mapping the circuit into a graph, decomposing the graph into strongly connected components (SCCs), and generating a sub-circuit for each SCC. The impacts of potential faults directly entering a sub-circuit are then simulated and recorded using the sub-circuits, which is expected to be much more time-efficient than fault simulation using the whole, much larger circuit. Finally, the fault detectability at the given test points is evaluated based on the fault impacts and the sensitivity among different sub-circuits. As a first step toward quick estimation of fault coverage, this paper focuses on dc tests. Simulation results show that the fault coverage of dc tests can be estimated sufficiently accurately with simulation time reduced by 10X for the benchmark circuit, or approximately by the number of SCCs. For a much larger circuit, the number of SCCs is expected to be much larger and the time-saving factor will be much greater.
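The partitioning step can be sketched with networkx (an assumption; the paper defines its own circuit-to-graph mapping and then runs dc fault simulation on each sub-circuit):

```python
# pip install networkx
import networkx as nx

def partition_cut(edges):
    """edges: (driver, load) pairs derived from the circuit netlist.
    Each strongly connected component becomes one independently
    simulable sub-circuit; the condensation DAG gives the order in
    which fault impacts propagate between sub-circuits."""
    g = nx.DiGraph(edges)
    sccs = list(nx.strongly_connected_components(g))
    cond = nx.condensation(g, scc=sccs)          # one node per sub-circuit
    order = list(nx.topological_sort(cond))      # evaluation order
    return sccs, cond, order

# Toy usage: a feedback pair (n1 <-> n2) feeding a chain n3 -> n4.
sccs, cond, order = partition_cut([("n1", "n2"), ("n2", "n1"),
                                   ("n2", "n3"), ("n3", "n4")])
```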

18 Aug 2018
TL;DR: In this article, the authors report on their experience using a fault-based testing approach, which overcomes the limitations of specification-based approaches that derive from the intrinsic incompleteness of the specification, and from the focus of specifications on correct behaviors, rather than potential faults.
Abstract: Because of their complexity, business transactions are prone to failure in many ways. This paper reports on our experience using a fault-based testing approach. The approach overcomes the limitations of specification-based approaches that derive from the intrinsic incompleteness of the specification, and from the focus of specifications on correct behaviors, rather than potential faults, and hence provides a complementary technique during the testing phase.

Proceedings ArticleDOI
01 Nov 2018
TL;DR: A framework for selecting a set of high-quality and minimized liveness assertions by combining a new data mining technique with fault analysis approaches along with assertion conversion methodology that converts liveness assertion into safety assertions to provide a cost-effective checking infrastructure.
Abstract: This paper proposes a methodology for producing a set of high quality hardware checkers from Register-Transfer Level (RTL) assertions. Assertion Based Verification (ABV) has become a highly popular area in design verification. On the other hand, extreme down-scaling of modern technologies has significantly increased the probability of faults occurring during the life-time of the system. To overcome this, concurrent cost-effective checker circuitry is required in order to enable fault resilience of systems. Currently, designing such checker infrastructure is a manual and error-prone work. A possible solution to automate the synthesis of concurrent error checkers is to derive them from verification assertions. However, the number of assertions is generally far too high to allow for area-efficient checking infrastructure. Moreover, the number of liveness assertions generated by automated methods may be too high even for verification purposes. Therefore, there is a need for qualification and minimization of liveness assertions with a prospect of reusing them as hardware safety checkers. In order to derive low-area, high fault coverage hardware safety checkers from a large number of liveness assertions, this paper proposes for the first time a framework for selecting a set of high-quality and minimized liveness assertions by combining a new data mining technique with fault analysis approaches along with assertion conversion methodology that converts liveness assertions into safety assertions. The framework then synthesizes these safety assertions into hardware checkers to be evaluated at the gate level to provide a cost-effective checking infrastructure. Experimental results support the effectiveness of the proposed framework.