
Showing papers on "Fault coverage published in 2020"


Journal ArticleDOI
TL;DR: This work proposes a methodology to enable correct, practical, and robust implementation of code-based CEDs, and presents a detailed implementation strategy which can guarantee the detection of any fault covered by the underlying EDC.
Abstract: By injecting faults, active physical attacks pose serious threats to cryptographic hardware, where Concurrent Error Detection (CED) schemes are promising countermeasures. They are usually based on an Error-Detecting Code (EDC) which enables detecting certain injected faults depending on the specification of the underlying code. Here, we propose a methodology to enable correct, practical, and robust implementation of code-based CEDs. We show that straightforward hardware implementations of given code-based CEDs can suffer from severe vulnerabilities, not providing the desired protection level. In particular, propagation of faults into combinatorial logic is often ignored in security evaluation of these schemes. First, we formally define this detrimental effect and demonstrate its destructive impact. Second, we introduce an implementation strategy to limit the fault propagation effect. Third, in contrast to many other works where the fault coverage is the main focus, we present a detailed implementation strategy which can guarantee the detection of any fault covered by the underlying EDC. This holds for any time of the computation and any location in the circuit, both in the data-processing and the control unit. In short, we provide practical guidelines on how to construct efficient CED schemes with arbitrary EDCs to achieve the desired protection level. We practically evaluate the efficiency of our methodology by case studies covering different symmetric block ciphers and various linear EDCs.
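As an illustration of the general code-based CED idea, the following is a minimal software sketch (not the authors' hardware methodology): parity is used as the simplest EDC, and the function names are hypothetical.

def parity(word: int) -> int:
    # Even parity bit of an integer word (the simplest error-detecting code).
    return bin(word).count("1") & 1

def protected_xor(a: int, b: int) -> int:
    # Toy "datapath" operation with a concurrently checked parity bit.
    predicted = parity(a) ^ parity(b)   # parity prediction: parity(a ^ b) = parity(a) ^ parity(b)
    result = a ^ b                      # the functional computation
    if parity(result) != predicted:     # concurrent check raises an error flag on mismatch
        raise RuntimeError("fault detected by the EDC check")
    return result

print(protected_xor(0b1011, 0b0110))    # prints 13 (0b1101), with the check passing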

36 citations


Journal ArticleDOI
TL;DR: This work proposes an efficient approach based on traditional multi-objective evolutionary mechanisms that obtains better solutions with lower time complexity for fault detection in confidential real-time applications running on Cyber-Physical Systems.

36 citations


Journal ArticleDOI
TL;DR: A novel framework called SAFARI for automatically synthesizing fault-attack resistant implementations of block ciphers, which automatically detects the vulnerable locations from the specification, applies an appropriate countermeasure based on the user-specified security requirements, and synthesizes efficient, fault-attack-protected RTL or C code for the cipher.
Abstract: Most cipher implementations are vulnerable to a class of cryptanalytic attacks known as fault injection attacks. To reveal the secret key, these attacks make use of faults induced at specific locations during the execution of the cipher. Countermeasures for fault injection attacks require these vulnerable locations in the implementation to be first identified and then protected. However, both these steps are difficult and error-prone, and hence considerable expertise is required to design efficient countermeasures. Incorrect or insufficient application of the countermeasures would cause the implementation to remain vulnerable, while inefficient application of the countermeasures could lead to significant performance penalties to achieve the desired fault-attack resistance. In this paper, we present a novel framework called SAFARI for automatically synthesizing fault-attack resistant implementations of block ciphers. The framework takes as input the security requirements and a high-level specification of the block cipher. It automatically detects the vulnerable locations from the specification, applies an appropriate countermeasure based on the user-specified security requirements, and then synthesizes efficient, fault-attack-protected RTL or C code for the cipher. We take AES, CAMELLIA, and CLEFIA as case studies and demonstrate how the framework would explore different countermeasures, based on the vulnerability of the locations, the output format, and the required security margins. We then evaluate the efficacy of SAFARI in hardware and software with respect to the design overhead incurred and the fault coverage achieved.
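For context, one classic countermeasure family that such a framework can instantiate is temporal redundancy (recompute and compare). The sketch below is a generic, hedged software illustration, not SAFARI's output or API; encrypt, key, and plaintext are placeholder names.

def protected_encrypt(encrypt, key, plaintext):
    # Run the cipher twice and compare: a fault injected into only one execution
    # produces a mismatch, so the faulty ciphertext is suppressed instead of leaked.
    c1 = encrypt(key, plaintext)
    c2 = encrypt(key, plaintext)
    return c1 if c1 == c2 else None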

18 citations


Proceedings ArticleDOI
Yu Li1, Min Li1, Bo Luo1, Ye Tian1, Qiang Xu1 
30 Oct 2020
TL;DR: DeepDyve as mentioned in this paper employs pre-trained neural networks that are far simpler and smaller than the original DNN for dynamic verification, which can reduce 90% of the risks at around 10% overhead.
Abstract: Deep neural networks (DNNs) have become one of the enabling technologies in many safety-critical applications, e.g., autonomous driving and medical image analysis. DNN systems, however, suffer from various kinds of threats, such as adversarial example attacks and fault injection attacks. While there are many defense methods proposed against maliciously crafted inputs, solutions against faults presented in the DNN system itself (e.g., parameters and calculations) are far less explored. In this paper, we develop a novel lightweight fault-tolerant solution for DNN-based systems, namely DeepDyve, which employs pre-trained neural networks that are far simpler and smaller than the original DNN for dynamic verification. The key to enabling such lightweight checking is that the smaller neural network only needs to produce approximate results for the initial task without sacrificing fault coverage much. We develop efficient and effective architecture and task exploration techniques to achieve optimized risk/overhead trade-off in DeepDyve. Experimental results show that DeepDyve can reduce 90% of the risks at around 10% overhead.
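The dynamic-verification loop can be pictured roughly as follows. This is an illustrative sketch only: big_model and small_model are assumed callables returning a class label, and the recovery policy shown here (simple re-execution) is an assumption, not necessarily DeepDyve's mechanism.

def checked_inference(big_model, small_model, x):
    y_big = big_model(x)        # primary prediction from the large DNN
    y_check = small_model(x)    # approximate prediction from the small checker network
    if y_check != y_big:        # disagreement: possible fault (or benign checker error)
        y_big = big_model(x)    # recovery: re-run the large model and use the fresh result
    return y_big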

17 citations


Proceedings ArticleDOI
01 Jun 2020
TL;DR: A Reinforcement Learning (RL) approach to explore the fault space and find critical faults; compared with Monte Carlo-based fault injection, the proposed method is more efficient in terms of fault coverage and time to find the first critical fault.
Abstract: Assessing the safety of complex Cyber-Physical Systems (CPS) is a challenge in any industry. Fault Injection (FI) is a proven technique for safety analysis and is recommended by the automotive safety standard ISO 26262. Traditional FI methods require a considerable amount of effort and cost as FI is applied late in the development cycle and is driven by manual effort or random algorithms. In this paper, we propose a Reinforcement Learning (RL) approach to explore the fault space and find critical faults. During the learning process, the RL agent injects and parameterizes faults in the system to cause catastrophic behavior. The fault space is explored based on a reward function that evaluates previous simulation results such that the RL technique tries to predict improved fault timing and values. In this paper, we apply our technique on an Adaptive Cruise Controller with sensor fusion and compare the proposed method with Monte Carlo-based fault injection. The proposed technique is more efficient in terms of fault coverage and time to find the first critical fault.
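The exploration loop can be sketched as a simple tabular agent over discretized fault times and values. This is illustrative only: simulate() stands in for the closed-loop ACC/sensor-fusion simulation, and the reward shaping here is an assumption, not the paper's reward function.

import random

def rl_fault_search(simulate, times, values, episodes=200, eps=0.2, alpha=0.5):
    # Q-value per (injection time, fault value) action; higher means "closer to catastrophic".
    q = {(t, v): 0.0 for t in times for v in values}
    first_critical = None
    for _ in range(episodes):
        if random.random() < eps:                       # explore a random fault
            action = (random.choice(times), random.choice(values))
        else:                                           # exploit the best-known action
            action = max(q, key=q.get)
        reward = simulate(*action)                      # e.g. 1.0 for a catastrophic outcome
        q[action] += alpha * (reward - q[action])       # incremental value update
        if reward >= 1.0 and first_critical is None:
            first_critical = action
    return first_critical, q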

15 citations


Journal ArticleDOI
TL;DR: A comparative study shows a cost-effective implementation in QCA.

14 citations


Journal ArticleDOI
TL;DR: Results on the projects of the Defects4j repository show that the proposed approach is able to produce minimized test suites capable of detecting 95.16% of faults and locating all detected faults, with a fault localization score almost equivalent to that of the complete test suite.

13 citations


Journal ArticleDOI
TL;DR: ArmorAll is proposed: a light-weight, adaptive, selective, and portable software solution that protects GPUs against soft errors by optimizing instruction duplication, thereby enabling much more reliable execution.
Abstract: The vulnerability of GPUs to soft errors has become a first-class design concern as they are increasingly being used in accuracy-sensitive and safety-critical domains. Existing solutions used to enhance the reliability of GPUs come with significant overhead in terms of area, power, and/or performance. In this article, we propose ArmorAll, a light-weight, adaptive, selective, and portable software solution to protect GPUs against soft errors. ArmorAll consists of a set of purely compiler-based redundancy schemes designed to optimize instruction duplication on GPUs, thereby enabling much more reliable execution. The choice of the scheme determines the subset of instructions that must be duplicated in an application, allowing adaptable fault coverage for different applications. ArmorAll can intelligently select a redundancy scheme that provides the best coverage to an application with an accuracy of 91.7%. The high coverage provided by ArmorAll comes at an average improvement of 64.5% in runtime when using the selected redundancy scheme as compared to the state-of-the-art.
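The underlying duplication idea, shown at source level for clarity (a hedged sketch, not ArmorAll's implementation; the real schemes duplicate GPU instructions inside the compiler):

def duplicated_add(a, b):
    r1 = a + b              # original instruction
    r2 = a + b              # shadow (duplicated) instruction
    if r1 != r2:            # checking instruction: a mismatch flags a soft error
        raise RuntimeError("soft error detected")
    return r1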

12 citations


Journal ArticleDOI
TL;DR: The proposed Asynchronous Full Error Detection and Correction architecture can handle single events and timing faults of arbitrarily long duration as well as the synchronous FEDC, but additionally can address known metastability issues of the FEDC and other similar synchronous architectures and provide a more practical solution for handling the error recovery process.
Abstract: In this paper, an asynchronous design for soft error detection and correction in combinational and sequential circuits is presented. The proposed architecture is called Asynchronous Full Error Detection and Correction (AFEDC). A custom design flow with integrated commercial EDA tools generates the AFEDC using the asynchronous bundled-data design style. The AFEDC relies on an Error Detection Circuit (EDC) for protecting the combinational logic and fault-tolerant latches for protecting the sequential logic. The EDC can be implemented using different detection methods. For this work, two boundary variants are considered, the Full Duplication with Comparison (FDC) and the Partial Duplication with Parity Prediction (PDPP). The AFEDC architecture can handle single events and timing faults of arbitrarily long duration as well as the synchronous FEDC, but additionally can address known metastability issues of the FEDC and other similar synchronous architectures and provide a more practical solution for handling the error recovery process. Two case studies are developed, a carry look-ahead adder and a pipelined non-restoring array divider. Results show that the AFEDC provides equivalent fault coverage when compared to the FEDC while reducing area, ranging from 9.6% to 17.6%, and increasing energy efficiency, which can be up to 6.5%.

11 citations


Journal ArticleDOI
TL;DR: Rigorous statistical tests indicate that FCBTSO outperforms the other approaches implemented with respect to execution time, which includes the execution time of the proposed approach to find the optimized test suite and the execution time of the test cases in the optimized test suite.
Abstract: This paper presents a novel method referred as fault coverage-based test suite optimization (FCBTSO) for regression test suite optimization. FCBTSO is proposed based on Harrolds–Gupta–Soffa (HGS) test suite reduction method, and it follows the phenomenon: “learning from mistakes”. We conducted computational experiments on 12 versions of benchmarked programs retrieved from software artefact infrastructure repository and dummy fault matrix test. The performance of the proposed FCBTSO is measured against the traditional test suite reduction methods (Greedy method, Additional Greedy, HGS, and Enhanced HGS) by following the performance measures: fault coverage, execution time and reduced optimized test suite size. Rigorous statistical tests are conducted to determine the performance significance, which indicates that FCBTSO outperforms other approaches implemented with respect to the execution time that includes the execution time of the proposed approach to find the optimized test suite and the execution time of test cases in the optimized test suite.
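For reference, the greedy fault-coverage-based reduction that HGS-style methods refine looks roughly like this. This is a minimal sketch with a made-up fault matrix, not the FCBTSO algorithm itself.

def greedy_reduce(covers):
    # covers[t] is the set of faults detected by test t; keep picking the test
    # that detects the most still-uncovered faults until every fault is covered.
    remaining = set().union(*covers.values())
    suite = []
    while remaining:
        best = max(covers, key=lambda t: len(covers[t] & remaining))
        if not covers[t := best] & remaining:
            break
        suite.append(t)
        remaining -= covers[t]
    return suite

covers = {"t1": {1, 2, 3}, "t2": {3, 4}, "t3": {4, 5, 6}, "t4": {2, 6}}
print(greedy_reduce(covers))   # ['t1', 't3'] already covers all six faults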

10 citations


Proceedings ArticleDOI
01 Nov 2020
TL;DR: In this paper, a simple Design for Test (DfT) technique called intentional offset injection is proposed to detect various faults in the op amp, which can provide high fault coverage of 95% with modest area requirements.
Abstract: An operational amplifier (op amp) is a fundamental block used extensively both as a stand-alone device and as a major block embedded in an SoC. To fully characterize an op amp, sophisticated analog and digital testing is required, which is expensive. Fault detection techniques have proved to reduce package cost and test cost by detecting faulty devices early in the test sequence. In this paper, a simple Design for Test (DfT) technique called intentional offset injection is proposed to detect various faults in the op amp. As our proposed method is completely digital, pure digital circuitry can be used, thereby avoiding expensive analog testing. The op amp can be tested with the proposed fault detection method during wafer probe test right after the continuity tests, and the faulty devices could be discarded, thereby circumventing time-consuming analog testing. Additionally, our detection scheme can be used for power-on self-test after deployment and for online health monitoring during normal operation. We show that the proposed detection method can provide high fault coverage of 95% with modest area requirements. In this work, we also introduce a detector called a digital window comparator which is used to monitor faults in the biasing circuit as well as in the Widlar current reference, providing increased fault coverage.

Proceedings ArticleDOI
17 May 2020
TL;DR: A generalized method for hardening asynchronous Click-based controllers is introduced, where a combination of spatial redundancy and Guard Gate (GG) is used to mitigate Single Event Transients (SET) and Single Event Upsets (SEUs).
Abstract: Ensuring modern VLSI systems are resilient to soft errors resulting from radiation effects continues to be a challenging problem. Traditional Radiation Hardened by Design (RHBD) approaches typically have high costs in terms of area, power, and/or performance overheads. In recent years, pairing RHBD with asynchronous design has emerged as a potential solution to reduce these overheads and improve efficiency. In this paper, a generalized method for hardening asynchronous Click-based controllers is introduced, where a combination of spatial redundancy and Guard Gate (GG) is used to mitigate Single Event Transients (SET) and Single Event Upsets (SEUs). Two Click controllers that can benefit from the RHBD methodology are presented, each one targeting a recently proposed soft error resilient asynchronous architecture, the Soft Error Resilient Asynchronous Design (SERAD) and the Asynchronous Full Error Detection and Correction (AFEDC). We have implemented the different controllers using a 130nm cell library. Results show the correctness and resilience to SET and SEU events. Additionally, the proposed RHBD templates require less area and power overheads when compared to the Triple Modular Redundancy (TMR) implementation for equivalent fault coverage.

Journal ArticleDOI
TL;DR: The main aim of this work is to propose a methodology, following the SBST paradigm, that permits developing test programs able to achieve high coverage on different microcontrollers of the same family and to reach the same fault coverage figures over many processors while dramatically reducing the development time.

Journal ArticleDOI
TL;DR: A method is presented that generates software-based self-tests by leveraging bounded model checking (BMC) techniques and targeting, for the first time, out-of-order execution (OOE) superscalar processors, while combating the state-space explosion associated with BMC.
Abstract: Generating functional tests for processors has been a challenging problem for decades in the very large-scale integration testing field. This paper presents a method that generates software-based self-tests by leveraging bounded model checking (BMC) techniques and targeting, for the first time, out-of-order execution (OOE) superscalar processors. To combat the state-space explosion associated with BMC, the proposed method starts by combining module-level abstraction-refinement with slicing to reduce the size of the model under verification. Next, an off-the-shelf BMC solver is used on the obtained extended finite-state machines to generate the leading sequences that are necessary to excite internal processor functions. Finally, constrained automatic test-pattern generation is used to cover all structural faults within every function excited by the obtained leading sequences. Experimental results show that the proposed method leads to extremely high fault coverage on the critical components corresponding to OOE operations in functional mode. The method therefore helps in tackling the over-testing problem that is inherent to the full-scan test approach.

Journal ArticleDOI
TL;DR: Both stuck-at and delay fault coverage improves under pseudo-random tests using inversion TPs, and extended data collection finds noteworthy trends on the effectiveness of TP architectures.
Abstract: This article analyzes and rationalizes the capabilities of inversion test points (TPs) when implemented in lieu of traditional test point architectures. With scaling transistor density, logic built-in self-test (LBIST) quality degrades and additional efforts must keep LBIST quality high. Additionally, delay faults must be targeted by LBIST, but delay faults can be masked when using control-0/1 (i.e., traditional) TP architectures. Although inversions as TPs have been proposed in literature, the effect inversion TPs have on fault coverage compared to traditional alternatives has not been explored. This study extends work previously presented in the North Atlantic Test Workshop (NATW’19) and finds both stuck-at and delay fault coverage improves under pseudo-random tests using inversion TPs, and extended data collection finds noteworthy trends on the effectiveness of TP architectures.

Proceedings ArticleDOI
23 Nov 2020
TL;DR: In this paper, a parallel concatenation of linear feedback shift registers (LFSR) is proposed to reduce the use of memory elements in an LFSR, which enables the test pattern generator to supply divergent test sequences for comparatively high fault coverage.
Abstract: Determination of the most appropriate test set is a critical task for high fault coverage in digital testing. Linear feedback shift registers (LFSRs) are a common choice to generate pseudo-random patterns for any circuit under test. However, literature shows that pseudo-random generation is incapable of achieving high fault coverage in complex circuits under test. Moreover, a proportional amount of LFSR hardware is loaded with additional circuitry to implement weighted random and mixed-mode reseeding techniques. Despite dense research around weighted random and mixed-mode reseeding techniques, test pattern generation remains a high-cost block in built-in self-test architectures. This paper uses the parallel concatenation of LFSRs to propose a simple, uniform, and scalable test pattern generator architecture for BIST applications. The proposed test pattern generator reduces the number of memory elements used in an LFSR. Moreover, the parallel concatenation of LFSRs enables the test pattern generator to supply divergent test sequences for comparatively high fault coverage. Fault simulations on combinational profiles of ISCAS'89 benchmark circuits show higher fault coverage with low hardware overhead as compared to a standard LFSR.
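A single Fibonacci LFSR, the building block being concatenated, can be sketched as follows. The tap positions and width are one illustrative maximal-length choice, not the paper's configuration.

def lfsr_patterns(seed=0b1001, taps=(3, 2), width=4, count=8):
    # Fibonacci LFSR: the feedback bit is the XOR of the tapped bits, shifted into the LSB.
    state = seed
    for _ in range(count):
        yield state
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)

print([f"{p:04b}" for p in lfsr_patterns()])   # eight pseudo-random 4-bit patterns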

Journal ArticleDOI
Irith Pomeranz1
TL;DR: The new observation is made that by keeping the same number of compressed tests and applying several different tests based on every compressed test, it is possible to improve the quality of the test set applied to the circuit.
Abstract: Test data compression is based on storage of compressed tests and use of on-chip decompression logic for test application. Further reductions in the input test data volume are achieved by methods that apply several different tests based on every compressed test. This article makes the new observation that by keeping the same number of compressed tests and applying several different tests based on every compressed test, it is possible to improve the quality of the test set applied to the circuit. This article studies such an approach for path delay faults (PDFs) using a linear-feedback shift register (LFSR) as the decompression logic. Because of the nature of PDFs, targeting an extended subset of PDFs increases the confidence that important PDFs are detected. However, the benefit of detecting additional faults may not justify an increase in the number of stored tests. The approach suggested in this article is used for detecting an extended subset of target PDFs using the same set of LFSR seeds. Extra clocking of the LFSR is used for obtaining scan-in states for several new two-cycle tests based on the same seed. Experimental results for benchmark circuits demonstrate the effectiveness of this approach.

Journal ArticleDOI
29 Oct 2020
TL;DR: A stochastic model is presented for analysing the behaviour of a multi-state system consisting of two non-identical units, incorporating the concept of a coverage factor and two types of repair facilities from failed states to the normal state.
Abstract: The main objective of this study is to analyse the reliability behaviour of parallel systems with three types of failure, namely unit failure, human failure and major failure. For this purpose, we apply three different statistical techniques, namely copula, coverage and copula-coverage. More precisely, this study presents a stochastic model for analysing the behaviour of a multi-state system consisting of two non-identical units by incorporating the concept of a coverage factor and two types of repair facilities from failed states to the normal state. The system could be characterized as being in a failed state due to unit failures, human failure and major failures, such as catastrophic and environmental failure. All failure rates are constant and assumed to be exponentially distributed, whereas repair rates follow the Gumbel-Hougaard copula distribution. The entire system is modelled as a finite-state Markov process. Time-dependent reliability measures like availability, reliability and mean time to failure (MTTF) are obtained by supplementary variable techniques and Laplace transformations. The present study provides a comparative analysis of reliability measures among the aforementioned techniques, while a discussion referring to which technique makes the system more reliable is also developed. Furthermore, numerical simulations are presented to validate the analytical results. Keywords: perfect coverage, Gumbel-Hougaard copula, parallel system, human failure, MTTF, Markov process.
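As a point of reference for the MTTF computation (a textbook baseline only, not the paper's copula-coverage model), the Markov state equations for a two-unit parallel system with independent constant failure rates and no repair reduce to:

\[
  \mathrm{MTTF} \;=\; \frac{1}{\lambda_1} + \frac{1}{\lambda_2} - \frac{1}{\lambda_1 + \lambda_2},
\]

where \(\lambda_1\) and \(\lambda_2\) are the failure rates of the two units.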

Journal ArticleDOI
TL;DR: An enhanced march test algorithm is proposed to achieve 100% fault coverage and diagnostic accuracy in bit-oriented PCM, and a built-in self-test (BIST) march test scheme is proposed that realizes independent testing of PCM without any external equipment.
Abstract: As one of the most promising candidates for nonvolatile memory, phase change memory (PCM) technology has shown great performance advantages in market applications. However, the conventional test methods have not kept pace with the development. In this article, focusing on specific PCM faults and others, an enhanced march test algorithm is proposed to achieve 100% fault coverage and diagnostic accuracy in bit-oriented PCM. The proposed algorithm is then converted for word-oriented PCM and equipped with capability to detect potential intraword impact. In addition, to reduce the dependence of memory test on the external devices, a novel storage scheme of fault information is devised. Through the modeling and simulation in C-language, this method is proven to improve the probability of finding the predefined fault-free regions in the tested memory. Finally, combining the enhanced test algorithm and the novel storage scheme, a built-in self-test (BIST) march test scheme is proposed, realizing the independent test of PCM without any external equipment. By comparison, the result of experiments, which are performed with C-language, proves that the proposed test scheme not only increases the fault coverage and diagnostic accuracy, but also reduces the additional area overhead.
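To make the march-test style concrete, here is a hedged sketch of the classic MATS+ algorithm run against a simulated memory array. The paper's enhanced algorithm adds further march elements for PCM-specific faults; this sketch is not that algorithm.

def mats_plus(mem):
    # MATS+: { (w0); up (r0, w1); down (r1, w0) } applied to a list-backed memory model.
    errors = []
    for a in range(len(mem)):              # element M0: write 0 to every address
        mem[a] = 0
    for a in range(len(mem)):              # element M1: ascending, read 0 then write 1
        if mem[a] != 0:
            errors.append(("M1", a))
        mem[a] = 1
    for a in reversed(range(len(mem))):    # element M2: descending, read 1 then write 0
        if mem[a] != 1:
            errors.append(("M2", a))
        mem[a] = 0
    return errors

print(mats_plus([None] * 8))               # a fault-free memory yields []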

Proceedings ArticleDOI
17 Jun 2020
TL;DR: This study shows that training and using artificial neural networks to predict signal probabilities increases post-test-point-insertion fault coverage compared to using COP, especially in circuits with many reconvergent fan-outs.
Abstract: This article presents an artificial neural network-based signal probability predictor for VLSI circuits which considers reconvergent fan-outs. Current testability analysis techniques are useful for inserting test points to improve circuit testability, but reconvergent fan-outs in digital circuits make testability analysis inaccurate. Conventional testability analysis methods like COP do not consider reconvergent fan-outs and can degrade algorithm results (e.g., test point insertion), while more advanced methods increase analysis time significantly. This study shows that training and using artificial neural networks to predict signal probabilities increases post-test-point-insertion fault coverage compared to using COP, especially in circuits with many reconvergent fan-outs.
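For background, COP propagates signal probabilities gate by gate under an independence assumption, which is exactly what reconvergent fan-outs break. A minimal sketch (illustrative only, not the paper's ANN predictor):

import math

def cop_and(ps): return math.prod(ps)                         # P(out = 1) for an AND gate
def cop_or(ps):  return 1.0 - math.prod(1.0 - p for p in ps)  # P(out = 1) for an OR gate
def cop_not(p):  return 1.0 - p

# Reconvergent fan-out example: y = a AND (NOT a). The true probability of y = 1 is 0,
# but COP treats the two AND inputs as independent and predicts 0.25 for P(a = 1) = 0.5.
p_a = 0.5
print(cop_and([p_a, cop_not(p_a)]))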

Proceedings ArticleDOI
05 Apr 2020
TL;DR: This article surveys test point (TP) architectures and test point insertion (TPI) methods for increasing pseudo-random and logic built-in self-test (LBIST) fault coverage and discusses some known weaknesses of TPs.
Abstract: This article surveys test point (TP) architectures and test point insertion (TPI) methods for increasing pseudo-random and logic built-in self-test (LBIST) fault coverage. We present a history of TPI approaches, including TPI for increasing stuck-at fault coverage, compressing test patterns, detecting path delay faults, and reducing test power. We discuss some known weaknesses of TPs and explore research directions to overcome them.

Journal ArticleDOI
TL;DR: A simple yet energy- and area-efficient method is presented for tolerating the stuck-at faults caused by endurance issues in secure resistive main memories; the fault coverage of the proposed method is shown to be similar to that of the state-of-the-art method.
Abstract: In this article, we present a simple, yet energy- and area-efficient method for tolerating the stuck-at faults caused by an endurance issue in secure-resistive main memories. In the proposed method, by employing the random characteristics of the encrypted data encoded by the Advanced Encryption Standard (AES) as well as a rotational shift operation, a large number of memory locations with stuck-at faults could be employed for correctly storing the data. Due to the simple hardware implementation of the proposed method, its energy consumption is considerably smaller than that of other recently proposed methods. The technique may be employed along with other error correction methods, including the error correction code (ECC) and the error correction pointer (ECP). To assess the efficacy of the proposed method, it is implemented in a phase-change memory (PCM)-based main memory system and compared with three error tolerating methods. The results reveal that for a stuck-at fault occurrence rate of 10^-2 and with an uncorrected bit error rate of 2 × 10^-3, the proposed method achieves 82% energy reduction compared to the state-of-the-art method. More generally, using a simulation analysis technique, we show that the fault coverage of the proposed method is similar to that of the state-of-the-art method.
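The rotate-to-fit idea can be sketched as follows; the word width, the fault-description format, and the helper name are illustrative assumptions, not the paper's design.

def find_rotation(word, width, stuck):
    # stuck maps bit position -> stuck-at value. Because ciphertext bits look random,
    # some left rotation often aligns the data so every stuck cell already holds the
    # value the data needs there; the rotation amount is stored as metadata.
    for r in range(width):
        rotated = ((word << r) | (word >> (width - r))) & ((1 << width) - 1)
        if all(((rotated >> pos) & 1) == val for pos, val in stuck.items()):
            return r, rotated
    return None                             # no fit: fall back to ECC/ECP in this case

print(find_rotation(0b10110010, 8, {0: 0, 5: 1}))   # (0, 178): already compatible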

Proceedings ArticleDOI
16 Jun 2020
TL;DR: Path Sensitive Signatures (PaSS) is presented: a low-overhead, high-fault-coverage software method to detect illegal control flows; it also provides a lightweight technique to protect inter-procedural control flow transfers, including calls and returns.
Abstract: Transistors' performance has been improving by shrinking feature sizes, lowering voltage levels, and reducing noise margins. However, these changes also make transistors more vulnerable and susceptible to transient faults. As a result, transient fault protection has become a crucial aspect of designing reliable systems. According to previous research, it is about 2.5x harder to mask control flow errors than data flow errors, making control flow protection critical. In this paper, we present Path Sensitive Signatures (PaSS), a low overhead and high fault coverage software method to detect illegal control flows. PaSS targets off-the-shelf embedded systems and combines two different methods to detect control flow errors that incorrectly jump to both nearby and faraway locations. In addition, it provides a lightweight technique to protect inter-procedural control flow transfers including calls and returns. PaSS is evaluated on the SPEC2006 benchmarks. The experimental results demonstrate that with the same level of fault coverage, PaSS only incurs 15.5% average performance overhead compared to 64.7% overhead incurred by the traditional signature-based technique. PaSS can also further extend fault coverage by providing inter-procedural protection at an additional 3.6% performance penalty.
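For orientation, classic signature-based control-flow checking, the family of techniques PaSS builds on, can be sketched as below. The block names, signatures, and legal-edge table are made up; this is not PaSS itself.

SIG = {"A": 0x3, "B": 0x5, "C": 0x6}       # compile-time signature assigned to each basic block
LEGAL = {"B": {"A"}, "C": {"A", "B"}}      # legal predecessor blocks for each block

def enter_block(block, prev_sig):
    # Verify that control arrived from a block whose signature matches a legal predecessor.
    preds = {b for b, s in SIG.items() if s == prev_sig}
    if block in LEGAL and not (preds & LEGAL[block]):
        raise RuntimeError(f"illegal control flow into block {block}")
    return SIG[block]                       # running signature handed to the next block

sig = SIG["A"]
sig = enter_block("B", sig)                 # A -> B is a legal edge
sig = enter_block("C", sig)                 # B -> C is a legal edge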


Journal ArticleDOI
TL;DR: The experimental results on SPEC2000 benchmarks show that the Evaluation Factor of the proposed method is 50% better than the Relationship Signatures for Control Flow Checking with Data Validation (RSCFCDV) methods, which are suggested in the literature.

Proceedings ArticleDOI
09 Mar 2020
TL;DR: This paper proposes a new design-for-testability (DFT) scheme for FinFET SRAMs that detects hard-to-detect faults by creating a mismatch in the sense amplifier (SA); the mismatch, combined with the defect in the cell, incorrectly biases the SA and causes incorrect read outputs.
Abstract: Manufacturing defects can cause faults in FinFET SRAMs. Of them, easy-to-detect (ETD) faults always cause incorrect behavior, and therefore are easily detected by applying sequences of write and read operations. However, hard-to-detect (HTD) faults may not cause incorrect behavior, only parametric deviations. Detection of these faults is of major importance as they may lead to test escapes. This paper proposes a new design-for-testability (DFT) scheme for FinFET SRAMs to detect such faults by creating a mismatch in the sense amplifier (SA). This mismatch, combined with the defect in the cell, will incorrectly bias the SA and cause incorrect read outputs. Furthermore, post-silicon calibration schemes can be used to avoid over-testing or test escapes caused by process variation effects. Compared to the state of the art, this scheme introduces negligible overheads in area and test time while it significantly improves fault coverage and reduces the number of test escapes.

Proceedings ArticleDOI
17 Jun 2020
TL;DR: A Hardware-in-the-Loop (HIL) setup for the development of low voltage Modular Multilevel Converter (MMC) control algorithms, providing an affordable and flexible environment for the early proof of concept and risk-free training of controller handling.
Abstract: This paper presents a Hardware-in-the-Loop (HIL) setup for the development of low voltage Modular Multilevel Converter (MMC) control algorithms. The fundamental low voltage MMC design is presented first, including its control. After that, a low cost and low performance microprocessor topology will be shown, on which the control algorithms are deployed. Advantages are ease of use, providing an affordable and flexible environment for the early proof of concept and risk-free training of controller handling. Then, the HIL testbed implementation with a complete real-time MMC simulation is depicted. Experimental results show that the HIL setup is sufficient to test the controller designs with a high test and fault coverage. The controller designs are also validated. Issues arise with the performance of the real-time simulation, when CPUs with limited performance are used.

Proceedings ArticleDOI
01 Apr 2020
TL;DR: A new weighted pseudo-random test generator called wPRPG is proposed for low-power launch-on-capture (LOC) transition delay fault testing and can achieve much higher transition delay fault coverage in LOC delay testing than the conventional test-per-scan PRPG.
Abstract: A new weighted pseudo-random test generator called wPRPG is proposed for low-power launch-on-capture (LOC) transition delay fault testing. The low-power weighted PRPG is implemented by assigning different weights on the test enable signals and applying a gating technique. The new low-power PRPG can achieve much higher transition delay fault coverage in LOC delay testing than the conventional test-per-scan PRPG.

Proceedings ArticleDOI
01 Nov 2020
TL;DR: Gathered results show that there is no correlation between stuck-at and path delay fault coverage, and provide guidelines for developing more effective functional test, based on an open-source RISC-V-based processor core as benchmark device.
Abstract: Path delay fault test currently exploits DfT-based techniques, mainly relying on scan chains, which are widely supported by commercial tools. However, functional testing may be a desirable choice in this context because it allows catching faults at speed with no hardware overhead and can be used both for end-of-manufacturing tests and for in-field test. The purpose of this article is to compare the results that can be achieved with both approaches. This work is based on an open-source RISC-V-based processor core as the benchmark device. Gathered results show that there is no correlation between stuck-at and path delay fault coverage, and provide guidelines for developing more effective functional tests.

Proceedings ArticleDOI
19 Oct 2020
TL;DR: An observability-based algorithm is described that transfers results captured in non-scan flip-flops to scan flip-flops using low-speed functional clock cycles, termed coda cycles, so the results can be read out.
Abstract: Delay test is used to verify the timing performance of integrated circuits. The test requires launching rising or falling transitions into the circuit and capturing the results after the specified delay. All the sequential elements in the design are required to be implemented with scan flip-flops such that the captured data can be observed for correct behavior. If the result is captured at a non-scan flip-flop, or a memory, it cannot be read out, resulting in fault coverage loss. This research describes an observability-based algorithm to transfer results captured in non-scan flip-flops to scan flip-flops using low speed functional clock cycles, termed coda cycles, so the results can be read out. We demonstrate the algorithm using path delay test on ISCAS89 benchmark circuits, where a fraction of the scan flip-flops have been made non-scan, and demonstrate the improvement in coverage when adding coda cycles to the clocking method.