
Showing papers on "Fault coverage published in 2019"


Proceedings ArticleDOI
02 Jun 2019
TL;DR: Experimental results show the proposed GCN model achieves superior accuracy to classical machine learning models in predicting difficult-to-observe nodes, and the proposed observation point insertion flow achieves fault coverage similar to that of commercial testability analysis tools.
Abstract: Applications of deep learning to electronic design automation (EDA) have recently begun to emerge, although they have mainly been limited to processing of regular structured data such as images. However, many EDA problems require processing irregular structures, and it can be non-trivial to manually extract important features in such cases. In this paper, a high performance graph convolutional network (GCN) model is proposed for the purpose of processing irregular graph representations of logic circuits. A GCN classifier is first trained to predict observation point candidates in a netlist. The GCN classifier is then used as part of an iterative process to propose observation point insertion based on the classification results. Experimental results show the proposed GCN model has superior accuracy to classical machine learning models in predicting difficult-to-observe nodes. Compared with commercial testability analysis tools, the proposed observation point insertion flow achieves similar fault coverage with an 11% reduction in observation points and a 6% reduction in test pattern count.
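
As a rough illustration of what such a classifier computes, the numpy sketch below runs a standard GCN forward pass (symmetrically normalized adjacency) over a toy netlist graph and scores each node as an observation point candidate. The graph, feature set, and weights are invented placeholders, not the paper's trained model.

```python
import numpy as np

# Hypothetical 5-node netlist graph; adjacency, features, and weights
# are illustrative, not from the paper.
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)
X = np.random.rand(5, 4)          # per-node features (e.g., level, fan-in/out)

# Symmetric normalization A_hat = D^(-1/2) (A + I) D^(-1/2)
A_tilde = A + np.eye(5)
d = A_tilde.sum(axis=1)
A_hat = A_tilde / np.sqrt(np.outer(d, d))

W1 = np.random.randn(4, 8) * 0.1  # layer weights (would be trained)
W2 = np.random.randn(8, 1) * 0.1

H = np.maximum(A_hat @ X @ W1, 0)              # graph convolution + ReLU
logits = A_hat @ H @ W2                        # second layer -> per-node score
p_hard_to_observe = 1 / (1 + np.exp(-logits))  # probability a node needs an OP
print(p_hard_to_observe.ravel())
```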

76 citations


Proceedings ArticleDOI
01 Nov 2019
TL;DR: This paper investigates structural and functional testing methodologies for neuromorphic circuits to significantly reduce the number of faults to be tested, and test time accordingly, without sacrificing the output accuracy or fault coverage.
Abstract: Deep neural networks have shown great potential in solving difficult cognitive problems such as object recognition and classification. As a result, several neuromorphic circuits have been designed and fabricated to perform various cognitive tasks. Since these circuits are subject to manufacturing defects and runtime failures, a proper testing method for neuromorphic circuits should exploit their inherent tolerance to inaccuracies in order to reduce the cost and complexity of post-manufacturing testing. This paper investigates structural and functional testing methodologies for neuromorphic circuits. The proposed test methodologies make it possible to significantly reduce the number of faults to be tested, and the test time accordingly, without sacrificing output accuracy or fault coverage.

28 citations


Journal ArticleDOI
TL;DR: This paper presents a hybrid test point technology designed to reduce deterministic pattern counts (PCs) and to improve fault detection likelihood by means of the same minimal set of test points; experimental results demonstrate the feasibility of the new scheme for industrial designs.
Abstract: Logic built-in self-test (LBIST) is now increasingly used with on-chip test compression as a complementary solution for in-system test, where high quality, low power, low silicon area, and, most importantly, short test application time are key factors affecting ICs targeted for safety-critical systems. Test points, common in LBIST-ready designs, can help to reduce test time and the overall silicon overhead, so that one can reach the desired test coverage with a minimal number of patterns. Typically, LBIST test points are dysfunctional when enabled in an ATPG-based test compression mode. Similarly, test points used to reduce ATPG pattern counts (PCs) cannot guarantee the desired random testability. In this paper, we present a hybrid test point technology designed to reduce deterministic PCs and to improve fault detection likelihood by means of the same minimal set of test points. The hybrid test points are subsequently deployed in a scan-based LBIST scheme addressing the stringent test requirements of certain application domains such as the automotive electronics market. These requirements, largely driven by safety standards, are met by significantly reducing test application time while preserving high fault coverage. The new scheme is a combination of pseudorandom test patterns delivered in a test-per-clock fashion through conventional scan chains and per-cycle-driven hybrid observation test points that capture fault effects every shift cycle into dedicated scan chains. Their content is gradually shifted into a compactor shared with the remaining chains, which deliver responses once a test pattern has been shifted in. Experimental results obtained for industrial designs confirm the feasibility of the new scheme and are reported herein.

28 citations


Proceedings ArticleDOI
04 Apr 2019
TL;DR: Pseudo-random test patterns are generated using an LFSR reseeding technique, which reduces the memory needed to store seed values and the power utilization, and indirectly reduces the time required to test the circuits.
Abstract: Testing of circuits becomes more difficult as the scale of integration increases, as predicted by Moore's Law. The conventional testing approach is not sufficient given the growth in device count and density. Testing helps the developer to find faults and errors present in a developed circuit, which reduces the time required for test and thus decreases the chance of failure during operation. Test time is one of the most important parameters in digital circuit testing and affects the overall testing process; reducing the time spent on test pattern generation is one of the most effective improvements. Reseeding an LFSR is one method of generating test patterns. In this paper, pseudo-random test patterns are generated using a reseeding-LFSR technique, which reduces the amount of test data that must be stored for testing. The technique can be applied under the constraints required for low power as well as low test data volume. The fault coverage of the proposed circuit is calculated using the ISCAS'89 benchmark circuits; the technique is integrated with the benchmark circuits, and a comparison is made in terms of performance and resource utilization. The proposed model reduces the memory needed to store seed values as well as the power utilization. Reseeding mainly applies to BIST, which targets complete fault coverage and minimization of the test length. Data compression, by reducing the patterns required for testing, indirectly reduces the time required to check the circuits. Future work is to reduce the time required for test pattern generation; Hamming distance can be used to measure the number of bits changing between consecutive test patterns and applied to reduce this parameter.
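
For readers unfamiliar with the mechanism, the sketch below shows a minimal Fibonacci LFSR and how reseeding expands a few stored seeds into bursts of pseudo-random patterns; the tap positions and seed values are illustrative, not taken from the paper.

```python
# Minimal sketch of pseudo-random pattern generation with LFSR reseeding.

def lfsr_patterns(seed, taps=(16, 15, 13, 4), width=16, count=8):
    """Fibonacci LFSR: yield `count` pseudo-random `width`-bit patterns."""
    state = seed
    for _ in range(count):
        yield state
        fb = 0
        for t in taps:                  # XOR the tap bits -> feedback bit
            fb ^= (state >> (t - 1)) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)

# Reseeding: restart the LFSR from stored seeds, so each seed expands into
# a burst of patterns (the seeds are what BIST stores, not the patterns).
seeds = [0xACE1, 0x1234, 0xBEEF]
for s in seeds:
    for p in lfsr_patterns(s):
        print(f"{p:016b}")
```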

24 citations


Proceedings ArticleDOI
15 Jul 2019
TL;DR: Feature extraction is demonstrated to be significantly faster than heuristic-based TP evaluation, and the inserted TPs are shown to provide superior stuck-at fault coverage compared to conventional heuristic-based testability analysis.
Abstract: A method of collecting data for, training, and using artificial neural networks (ANNs) to evaluate test point (TP) quality for TP insertion (TPI) is presented in this study. The TPI method analyzes a digital circuit and determines where to insert TPs to improve fault coverage under pseudo-random stimulus, but in contrast to conventional TPI algorithms that use heuristically calculated testability measures, the proposed method uses an ANN trained through fault simulation to evaluate a TP's quality. Feature extraction is demonstrated to be significantly faster than heuristic-based TP evaluation, and the inserted TPs are shown to provide superior stuck-at fault coverage compared to conventional heuristic-based testability analysis.
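
A minimal sketch of the idea, with an invented feature set and untrained placeholder weights: each candidate net is described by a feature vector (here, hypothetical COP-style controllability/observability values plus fan-out) and scored by a small network instead of a hand-crafted testability heuristic.

```python
import numpy as np

# Hypothetical per-candidate features for test point evaluation;
# the feature set and the weights below are placeholders, not the paper's.
def tp_features(net):
    return np.array([net["cc0"], net["cc1"], net["obs"], net["fanout"]])

# Tiny MLP standing in for the paper's trained ANN (weights illustrative)
W1, b1 = np.random.randn(4, 16) * 0.1, np.zeros(16)
W2, b2 = np.random.randn(16) * 0.1, 0.0

def tp_quality(net):
    h = np.tanh(tp_features(net) @ W1 + b1)
    return h @ W2 + b2   # predicted coverage gain of inserting a TP here

# Rank candidate nets by predicted quality instead of heuristic measures
candidates = [{"cc0": 0.9, "cc1": 0.1, "obs": 0.05, "fanout": 3},
              {"cc0": 0.5, "cc1": 0.5, "obs": 0.70, "fanout": 1}]
best = max(candidates, key=tp_quality)
print(best)
```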

22 citations


Proceedings ArticleDOI
01 Dec 2019
TL;DR: Methods of increasing logic built-in self-test (LBIST) delay fault coverage are presented, using artificial neural networks (ANNs) to select test point (TP) locations, together with a method to train the ANNs on randomly generated circuits.
Abstract: This article presents methods of increasing logic built-in self-test (LBIST) delay fault coverage by using artificial neural networks (ANNs) to select test point (TP) locations, together with a method to train the ANNs on randomly generated circuits. The approach increases delay test quality both during and after manufacturing, and it trains the ANNs without relying on valuable third-party intellectual property (IP) circuits. Results show that higher-quality TPs are selected in significantly reduced CPU time and that third-party IP is not required for ANN training.

20 citations


Journal ArticleDOI
TL;DR: Experiments show that relevant reliability improvements can be obtained from an efficient exploration of the compilation solution space, and fault-injection simulation campaigns are performed to assess the results.
Abstract: A method is presented for the automated improvement of embedded application reliability. The compilation process is guided using genetic algorithms and a multiobjective optimization approach (MOOGAs). Even though modern compilers are not designed to generate reliable builds, they can be tuned to obtain compilations that improve reliability through the simultaneous optimization of fault coverage, execution time, and memory size. Experiments show that relevant reliability improvements can be obtained from an efficient exploration of the compilation solution space. Fault-injection simulation campaigns are performed to assess our proposal against different benchmarks, and the results are validated against a real ARM-based system-on-chip under proton irradiation.
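
The sketch below illustrates the general shape of such a search, assuming a genetic algorithm over compiler flag subsets with a weighted-sum fitness; the flag list, weights, and the measure() stub (which in a real flow would compile the application and run a fault-injection campaign) are all placeholders.

```python
import random

# Sketch of a genetic search over compiler flag sets for reliability tuning.
FLAGS = ["-O1", "-O2", "-O3", "-funroll-loops", "-fno-inline",
         "-flto", "-fstack-protector"]

def measure(genome):
    # Stub standing in for "compile, then fault-inject": derives fake
    # (coverage, runtime, size) measurements from the genome.
    n = sum(genome)
    return 0.6 + 0.4 * n / (n + 3), 1.0 + 0.05 * n, 100 + 8 * n

def fitness(genome):
    cov, time_s, size_kb = measure(genome)
    # Weighted scalarization of the three objectives (weights assumed)
    return 1.0 * cov - 0.30 * time_s - 0.001 * size_kb

def evolve(pop_size=20, generations=30, mut_rate=0.1):
    pop = [[random.random() < 0.5 for _ in FLAGS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # elitist selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(len(FLAGS))   # one-point crossover
            children.append([g ^ (random.random() < mut_rate)  # bit-flip mutation
                             for g in a[:cut] + b[cut:]])
        pop = parents + children
    best = max(pop, key=fitness)
    return [f for f, on in zip(FLAGS, best) if on]

print(evolve())   # e.g. a flag set trading coverage vs. time/size
```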

17 citations


Proceedings ArticleDOI
25 Mar 2019
TL;DR: This paper proposes an automatic test pattern generation methodology for approximate circuits based on Boolean satisfiability, which is aware of output quality and of approximable vs. non-approximable faults, and can significantly reduce the number of faults to be tested, and the test time accordingly, without sacrificing output quality or test coverage.
Abstract: Approximate computing has gained growing attention as it provides a trade-off between output quality and computation effort for inherently error-tolerant applications such as recognition, mining, and media processing. As a result, several approximate hardware designs have been proposed in order to harness the benefits of approximate computing. Since these circuits are subject to manufacturing defects and runtime failures, testing methods should be aware of their approximate nature. In this paper, we propose an automatic test pattern generation methodology for approximate circuits based on Boolean satisfiability, which is aware of output quality and of approximable vs. non-approximable faults. This allows us to significantly reduce the number of faults to be tested, and the test time accordingly, without sacrificing output quality or test coverage. Experimental results show that the proposed approach can reduce the fault list by 2.85× on average while maintaining high fault coverage.
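
To make the SAT-based formulation concrete, the sketch below builds the classical miter construction for a single stuck-at fault on a two-gate toy circuit and solves it with the python-sat package (pip install python-sat). This is only the plain baseline; the paper's quality-awareness would add further constraints on top of this scheme.

```python
# Minimal SAT-based ATPG sketch: build a miter between a fault-free circuit
# (y = (a AND b) OR c) and a stuck-at-0 copy, then ask a SAT solver for an
# input assignment on which the two outputs differ. Circuit and fault site
# are illustrative.
from pysat.solvers import Glucose3

a, b, c = 1, 2, 3          # shared primary inputs
g, y = 4, 5                # fault-free internal net and output
gf, yf = 6, 7              # faulty-copy nets
m = 8                      # miter output: y XOR yf

cnf = [
    # good circuit: g = a AND b ; y = g OR c  (Tseitin encoding)
    [-a, -b, g], [a, -g], [b, -g],
    [-g, y], [-c, y], [g, c, -y],
    # faulty copy: g stuck-at-0 ; yf = gf OR c
    [-gf],
    [-gf, yf], [-c, yf], [gf, c, -yf],
    # miter: m = y XOR yf, and assert m (outputs must differ)
    [-y, -yf, -m], [y, yf, -m], [y, -yf, m], [-y, yf, m],
    [m],
]

with Glucose3(bootstrap_with=cnf) as solver:
    if solver.solve():
        model = set(solver.get_model())
        test = {v: int(v in model) for v in (a, b, c)}
        print("test pattern (a, b, c):", test)   # expect a=1, b=1, c=0
    else:
        print("fault is untestable (redundant)")
```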

16 citations


Proceedings ArticleDOI
13 May 2019
TL;DR: This study finds that delay fault coverage can be improved by using inversion TPs in logic circuits under pseudo-random tests, without negatively impacting stuck-at fault coverage.
Abstract: This article analyzes and rationalizes the capabilities of inversion-based test points (TPs) when implemented in lieu of control-0/1 TPs. With the upward scaling of transistor density, delay faults can be masked when using pseudo-random tests with control-0/1 (“conventional”) TP architectures. This study finds that delay fault coverage can be improved by using inversion TPs in logic circuits under pseudo-random tests, without negatively impacting stuck-at fault coverage.

16 citations


Journal ArticleDOI
TL;DR: A noninvasive solution for prebond TSV test based on pulse shrinking is proposed; results show that the proposed method outperforms existing methods in terms of fault coverage, area overhead, and test time.
Abstract: Since physical defects such as resistive opens and leakage in through-silicon vias (TSVs), caused by immature manufacturing techniques, tend to undermine the reliability and yield of 3-D integrated circuits, it is very important to test TSVs as early as possible in the fabrication process. Existing prebond TSV test techniques have shortcomings such as incomprehensive fault coverage, large area overhead, and additional test time. To overcome these problems, a noninvasive solution for prebond TSV test based on pulse shrinking is proposed in this paper. The method makes use of the fact that defects in a TSV lead to variation in the propagation delay: the rise and fall times are first transformed into a pulse width, and the pulse shrinking technique is used to digitize the pulse width into a digital code, which is then compared with the expected value for a fault-free TSV. Experiments on defect detection are carried out using HSPICE simulations with realistic models for 45-nm CMOS technology. The results show that the proposed method performs better than existing methods in terms of fault coverage, area overhead, and test time.
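
A purely behavioral model of the pulse-shrinking principle: the TSV delay is encoded as a pulse width, and the number of shrinking stages the pulse survives becomes the digital code compared against a golden value. All numbers below are invented for illustration; the paper's circuit-level implementation and thresholds differ.

```python
# Behavioral sketch: a pulse whose width encodes the TSV rise/fall delay
# propagates through identical stages, each shrinking it by a fixed amount;
# the number of stages it survives is the digital code.

SHRINK_PER_STAGE_PS = 12.0   # pulse width lost per delay stage (assumed)

def pulse_to_code(pulse_width_ps):
    """Count how many stages the pulse survives before it vanishes."""
    code = 0
    while pulse_width_ps > 0:
        pulse_width_ps -= SHRINK_PER_STAGE_PS
        code += 1
    return code

GOOD_CODE = pulse_to_code(240.0)       # expected code for a fault-free TSV

def classify_tsv(measured_width_ps, tolerance=2):
    # Resistive-open or leakage defects change the TSV delay, hence the
    # pulse width and the resulting code; compare against the golden code.
    delta = abs(pulse_to_code(measured_width_ps) - GOOD_CODE)
    return "pass" if delta <= tolerance else "fail"

print(classify_tsv(238.0))   # near-nominal width -> pass
print(classify_tsv(410.0))   # slow (resistive open) TSV -> fail
```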

15 citations


Journal ArticleDOI
TL;DR: A novel ATPG technique is presented that achieves an average yield improvement ranging from 19% to 36% compared to conventional ATPG, in terms of approximation-redundant fault coverage reduction; in some cases, the improvement can reach 100%.
Abstract: The intrinsic resiliency of many of today's applications opens new design opportunities. Some computation accuracy loss within so-called resilient kernels does not affect the global quality of results. This has led the scientific community to introduce the approximate computing paradigm, which exploits this concept to boost computing system performance. By applying approximation at different layers, it is possible to design more efficient systems, in terms of energy, area, and performance, at the cost of a slight accuracy loss. In particular, at the hardware level, this has led to approximate integrated circuits. From the test perspective, this particular class of integrated circuits poses new challenges. On the other hand, it also offers the opportunity of relaxing test constraints at the cost of a careful selection of so-called approximation-redundant faults. Such faults are classified as tolerable because of the slight error they introduce; it follows that improvements in yield and test-cost reduction can be achieved. Nevertheless, conventional automatic test pattern generation (ATPG) algorithms, when not aware of the introduced approximation, generate test vectors covering approximation-redundant faults, thus reducing the yield gain. In this work, we show experimental evidence of this problem and present a novel ATPG technique to deal with it. We then extensively evaluate the proposed technique and show that we are able to achieve an average yield improvement ranging from 19% to 36% compared to conventional ATPG, in terms of approximation-redundant fault coverage reduction. In some cases, the improvement can reach 100%.

Proceedings ArticleDOI
Yu Zhou, Jun Bi, Yunsenxiao Lin, Yangyang Wang, Dai Zhang, Zhaowei Xi, Jiamin Cao, Chen Sun
24 Jun 2019
TL;DR: P4Tester is proposed, a new network testing system for troubleshooting runtime rule faults on programmable data planes; it offers a new intermediate representation based on Binary Decision Diagrams, which enables efficient probe generation for various P4-defined data plane functions, and a new probe model that uses source routing to forward probes.
Abstract: P4 and programmable data planes bring significant flexibility to network operation but are inevitably prone to various faults. Some faults, like P4 program bugs, can be verified statically, while others, like runtime rule faults, only happen on running network devices and are hardly possible to handle before deployment. Existing network testing systems can troubleshoot runtime rule faults by injecting probes, but they are insufficient for programmable data planes due to large overheads or limited fault coverage. In this paper, we propose P4Tester, a new network testing system for troubleshooting runtime rule faults on programmable data planes. First, P4Tester introduces a new intermediate representation based on Binary Decision Diagrams, which enables efficient probe generation for various P4-defined data plane functions. Second, P4Tester offers a new probe model that uses source routing to forward probes. This probe model largely reduces rule fault detection overheads, i.e., it requires only one server to generate probes for large networks and minimizes the number of probes. Moreover, this probe model can test all table rules in a network, achieving full fault coverage. Evaluation based on real-world data sets indicates that P4Tester can efficiently check all rules in programmable data planes, generates 59% fewer probes than ATPG and Pronto, is faster than ATPG by two orders of magnitude, and troubleshoots multiple rule faults within one second on BMv2 and Tofino.

Proceedings ArticleDOI
13 May 2019
TL;DR: A Double Node Upset Resilient Flip-Flop (DNUR-FF) circuit is presented that can tolerate double errors while incurring low area and power overheads; it realizes a 58% PDP improvement compared to the Triple Modular Redundancy (TMR) approach while delivering high performance with low complexity and power consumption.
Abstract: In this work, we aim to maintain the correct execution of instructions in the pipeline stages. To achieve that, the integrity of the data computed in registers during execution should be maintained by protecting the susceptible registers. Thus, we present a Double Node Upset Resilient Flip-Flop (DNUR-FF) circuit that can tolerate double errors while incurring low area and power overheads. We deploy the proposed soft-error-resilient register at a higher level to replace the most vulnerable registers in large-scale pipelined processors. The experimental results validate the robustness of our design, delivering full fault-masking coverage (100%) for both SEU and DNU errors. In addition, the proposed design utilizes partial spatial redundancy and therefore incurs reduced area overhead (31%) and realizes a 58% PDP improvement compared to the Triple Modular Redundancy (TMR) approach, while delivering high performance with low complexity and power consumption.

Journal ArticleDOI
TL;DR: It is shown in practical case studies that it is beneficial to complement defect coverage with fault coverage and to assess the severity of defect escapes in order to get a complete picture of test quality.
Abstract: In safety-critical applications, there is a demand for estimating defect coverage in order to meet stringent quality levels. However, defect simulation of complex AMS-RF circuits is computationally expensive, since achieving a good confidence interval requires sampling many defects. In this paper, we show in practical case studies that it is beneficial to complement defect coverage with fault coverage and to assess the severity of defect escapes to get a complete picture of test quality. The computational burden of defect and fault simulations is taken into account, and accurate statistical estimates of defect and fault escapes are provided to allow safe early stopping of the simulations.
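
On the statistical side, the kind of early stopping the paper alludes to can be sketched as follows: fault-simulate sampled defects, maintain a confidence interval on the coverage estimate, and stop once it is tight enough. The Wilson score interval used here is a standard choice; the simulate() stub and the stopping threshold are illustrative, not the paper's estimators.

```python
import random
from math import sqrt

def wilson_interval(k, n, z=1.96):
    """95% Wilson score interval for k detected out of n simulated defects."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

def simulate(defect):
    return random.random() < 0.9   # stub: pretend 90% of defects are detected

detected = total = 0
for defect in range(100_000):
    detected += simulate(defect)
    total += 1
    if total >= 100:
        lo, hi = wilson_interval(detected, total)
        if hi - lo < 0.02:          # stop once the interval is < 2% wide
            break
print(f"coverage ≈ {detected/total:.3f} in [{lo:.3f}, {hi:.3f}] after {total} defects")
```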

Journal ArticleDOI
TL;DR: This paper explains why fault simulation of STLs represents a different problem from the classical fault simulation of test stimuli, shows why it can be highly computationally expensive, and overviews some solutions to reduce the computational cost and possibly trade off result accuracy against cost.
Abstract: The adoption of complex and technologically advanced integrated circuits (ICs) in safety-critical applications (e.g., in automotive) has forced the introduction of new solutions to guarantee the achievement of the required reliability targets. One of these solutions lies in performing in-field test (i.e., test performed when the device is already deployed in the mission environment) to detect faults that may arise in this phase of the electronic circuit's life. In this scenario, one increasingly adopted approach is based on software test libraries (STLs), i.e., suitable code which is run by the CPU included in the system and is able to detect possible permanent faults both in the CPU itself and in the rest of the system. In order to assess the effectiveness of the STLs, fault simulation is performed, so that the achieved fault coverage (e.g., in terms of stuck-at faults) can be computed. This paper explains why fault simulation of STLs represents a different problem from the classical fault simulation of test stimuli (for which very effective algorithms and tools are available), shows why it can be highly computationally expensive, and overviews some solutions to reduce the computational cost and possibly trade off result accuracy against cost.

Proceedings ArticleDOI
01 Dec 2019
TL;DR: This paper presents a structured approach that identifies the sensitive nets, namely a well-chosen small subset of internal nets that are affected by hard-to-propagate faults, and utilizes the speed of DC analysis and some common behavioral aspects of analog signals to find this subset.
Abstract: The traditional body of literature on analog testing deals with the propagation of faults to the output nets of a circuit. Often the set of detectable faults remains unsatisfactory because suitable stimuli cannot be found for propagating certain faults to the output. Existing technology supports capturing the state of internal nets of a circuit, thereby enhancing the scope of detecting faults by observing their effect on internal nets. This approach is feasible only if the number of internal nets probed by the built-in test structure is very small. This paper presents a structured approach that identifies the sensitive nets, namely a well-chosen small subset of internal nets that are affected by these faults. We utilize the speed of DC analysis and some common behavioral aspects of analog signals to find this subset. We report dramatic improvement in fault coverage on several circuits, including benchmarks.

Proceedings ArticleDOI
01 Nov 2019
TL;DR: This work proposes a novel method to predict simultaneous switching noise using fast deep neural networks (DNNs) such as fully connected networks, convolutional neural networks, and natural-language-processing models; the approach is significantly faster than conventional estimation methods and can potentially reduce test time.
Abstract: The power distribution network (PDN) is designed for worst-case power-hungry functional use cases. Most often, design-for-test (DFT) scenarios are not accounted for while optimizing the PDN design. Automatic test pattern generation (ATPG) tools typically follow a greedy algorithm to achieve maximum fault coverage with short test times. This causes power supply noise (PSN) during scan testing to be much higher than in functional mode, since switching activity is higher by an order of magnitude. Understanding the noise characteristics through exhaustive pattern simulation is extremely machine- and memory-intensive and requires unsustainably long runtimes. Hence, we aggressively limit switching factors to conservative estimates and rely on post-silicon noise characterization to optimize test vectors. In this work, we propose a novel method to predict simultaneous switching noise using fast deep neural networks (DNNs) such as fully connected networks, convolutional neural networks, and natural-language-processing models. Our approach, which is based on pre-silicon ATPG vectors, is significantly faster than conventional estimation methods and can potentially reduce test time.

Proceedings ArticleDOI
01 Sep 2019
TL;DR: This paper proposes two algorithms that manipulate DDMs to optimize cell-aware ATPG results with respect to fault coverage, test pattern count, and compute time, and derives an innovative heuristic that outperforms solutions in the literature.
Abstract: Cell-aware test (CAT) explicitly targets defects inside library cells and therefore significantly reduces the number of test escapes compared to conventional automatic test pattern generation (ATPG) approaches, which cover cell-internal defects only serendipitously. CAT consists of two steps, viz. (1) library characterization and (2) cell-aware ATPG. Defect detection matrices (DDMs) are used as the interface between both CAT steps; they record which cell-internal defects are detected by which cell-level test patterns. This paper proposes two algorithms that manipulate DDMs to optimize cell-aware ATPG results with respect to fault coverage, test pattern count, and compute time. Algorithm 1 identifies don't-care bits in cell patterns, such that the ATPG tool can exploit these during cell-to-chip expansion to increase fault coverage and reduce test-pattern count. Algorithm 2 selects, at cell level, a subset of preferential patterns that jointly provides maximal fault coverage at a minimized stimulus care-bit sum. To keep the ATPG compute time under control, we run cell-aware ATPG with the preferential patterns first, and a second ATPG run with the remaining patterns only if necessary. Selecting the preferential patterns maps onto a well-known NP-hard problem, for which we derive an innovative heuristic that outperforms solutions in the literature. Experimental results on twelve circuits show average reductions of 43% in non-covered faults and 10% in chip-pattern count.
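
Since selecting preferential patterns is a covering problem, a natural baseline is a greedy heuristic over the DDM, sketched below with a toy matrix; the tie-break on care bits only hints at the paper's care-bit-sum objective, and the paper's actual heuristic is more sophisticated.

```python
# Greedy preferential-pattern selection from a defect detection matrix (DDM):
# a baseline heuristic for the underlying NP-hard covering problem.
# The tiny DDM and care-bit counts are illustrative.

ddm = {                     # pattern -> set of cell-internal defects it detects
    "p0": {0, 1, 2},
    "p1": {2, 3},
    "p2": {1, 3, 4},
    "p3": {4},
}
care_bits = {"p0": 5, "p1": 2, "p2": 4, "p3": 1}

def select_preferential(ddm, care_bits):
    uncovered = set().union(*ddm.values())
    chosen = []
    while uncovered:
        # pick the pattern covering the most remaining defects,
        # breaking ties in favor of fewer care bits
        best = max(ddm, key=lambda p: (len(ddm[p] & uncovered), -care_bits[p]))
        if not ddm[best] & uncovered:
            break                       # remaining defects are undetectable
        chosen.append(best)
        uncovered -= ddm[best]
    return chosen

print(select_preferential(ddm, care_bits))   # -> ['p2', 'p0']
```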

Journal ArticleDOI
TL;DR: This article analyzes the impact of manufacturing and physical defects across all layers of the bio-chip architectures and proposes a graph theory-inspired formulation for maximizing the fault coverage through test point insertion.
Abstract: Microfluidic very large scale integration (mVLSI) plays a crucial role in designing point-of-care systems. This article analyzes the impact of manufacturing and physical defects across all layers of bio-chip architectures and proposes a graph theory-inspired formulation for maximizing fault coverage through test point insertion. This represents a worthwhile contribution to the design-for-testability of mVLSI systems. —Paul Bogdan, University of Southern California

Journal ArticleDOI
TL;DR: A novel test architecture is presented that combines the advantages of high-quality deterministic scan-based test and low-cost built-in self-test, together with a novel compression method that combines broadcast scan with a tailored single-input compression architecture.
Abstract: This paper presents a novel test architecture that combines the advantages of high-quality deterministic scan-based test and low-cost built-in self-test. The main idea is to record (store) all required compressed test data in a novel scan chain structure, and to extract and decompress them during testing. This requires a very high compression ratio to obtain a low test data volume, that is, smaller than the number of scan cells in the circuit under test. To achieve such a high compression ratio, we propose a novel compression method that combines broadcast scan with a tailored single-input compression architecture. We also utilize the concepts of scan chain partitioning and clock gating to reduce test time and test power. An on-chip test controller is employed to automatically generate all required control signals for the whole test procedure. This significantly reduces the requirements on external automatic test equipment. Experimental results show that our method is well suited for multicore designs. For example, experiments on the 8-core open-source OpenSPARC T2 processor with 5.7M gates show that all required test data for 100% testable stuck-at fault coverage can be stored in just 59.4% of the scan cells of the processor. Experimental results for transition faults are also presented, which show that more identical cores are needed in order to store all test data for transition faults. We also discuss how to extend this work to address fault diagnosis and engineering change order problems.

Book ChapterDOI
01 Jan 2019
TL;DR: The advances in VLSI technology have resulted in devices with millions of transistors, creating new test challenges and increasing the probability that a manufacturing defect in the IC will result in a faulty chip.
Abstract: The advances in VLSI technology have resulted in devices with millions of transistors, thus creating new test challenges. Moore's law states that the scale of ICs has doubled every 18 months. Reduction in feature size increases the speed of integrated circuits, thus increasing the probability that a manufacturing defect in the IC will result in a faulty chip.

Journal ArticleDOI
TL;DR: A topology-agnostic test mechanism is proposed that is capable of on-line diagnosis of coexistent channel-short and stuck-at faults in non-mesh NoC topologies as well as in traditional mesh architectures, together with an efficient scheduling scheme that reduces test time without compromising resource utilization during testing.
Abstract: High-performance multiprocessor SoCs used in practice require a complex network-on-chip (NoC) as communication architecture, and the channels therein often suffer from various manufacturing defects. Such physical defects cause a multitude of system-level failures and subsequent degradation of the reliability, yield, and performance of the computing platform. Most of the existing test approaches consider mesh-based NoC channels only and do not perform well for other regular topologies such as octagons or spidergons, with regard to test time and overhead issues. This article proposes a topology-agnostic test mechanism that is capable of diagnosing, on-line, coexistent channel-short and stuck-at faults in these special NoCs as well as in traditional mesh architectures. We introduce a new test model called Damaru to decompose the network and present an efficient scheduling scheme to reduce test time without compromising resource utilization during testing. Additionally, the proposed scheduling scheme scales well with network size, channel width, and topological diversity. Simulation results show that the method achieves nearly 92% fault coverage and improves area overhead by almost 60% and test time by 98% compared to earlier approaches. As a sequel, packet latency and energy consumption are also improved, by 67.05% and 54.69%, respectively, and they improve further with increasing network size.

Proceedings ArticleDOI
25 Mar 2019
TL;DR: The most relevant problems for the development of the STL are discussed, and a set of strategies and solutions is presented, oriented to producing an efficient and non-intrusive STL to be used exclusively during the in-field testing of automotive processor cores.
Abstract: Today, safety-critical applications require self-test and self-diagnosis approaches to be applied during the lifetime of the device. In general, the fault coverage values required by standards (like ISO 26262) across the whole System-on-Chip (SoC) are very high; therefore, different strategies are adopted. In the case of the processor core, the required fault coverage can be achieved by scheduling the periodic execution of a set of test programs, or Software Test Library (STL). However, the STL for in-field testing should comply with the operating system specifications without affecting the mission operation of the device application. In this paper, the most relevant problems for the development of an STL are first discussed. Then, a set of strategies and solutions is presented, oriented to producing an efficient and non-intrusive STL to be used exclusively during the in-field testing of automotive processor cores. The proposed approach was evaluated on an automotive SoC developed by STMicroelectronics.

Proceedings ArticleDOI
01 Apr 2019
TL;DR: A new fault detection scheme based on information redundancy is proposed to protect the AES against fault injection attacks; both the original and the protected AES hardware implementations are realized on a Xilinx Virtex-5 FPGA.
Abstract: The protection of symmetric cryptographic algorithms, especially the Advanced Encryption Standard (AES), against fault injection attacks is very important to guarantee the security of transmitted data. In this paper, we propose a new fault detection scheme based on information redundancy for the AES. We analyze the robustness of the detection scheme against fault injection attacks. Simulated fault-attack results show that the fault coverage reaches 71.43%. In addition, both the original and the protected AES hardware implementations have been realized on a Xilinx Virtex-5 FPGA. We confirm that the protected AES is very effective while keeping the frequency overhead very low.

Book ChapterDOI
01 Jan 2019
TL;DR: A test oracle based on a probabilistic neural network is proposed, aimed at the classification problem; experiments show that it is better than a BP neural network in prediction speed and accuracy.
Abstract: A test oracle is a mechanism that determines whether the actual output of a program is in line with expectations. It is an indispensable part of the software testing process and also a weak area in software testing. Automating the test oracle not only effectively reduces the burden on testers, but also provides strong support for uninterrupted continuous testing. Heuristic-based oracles have the advantages of easy implementation, fast execution, and wide fault coverage. A heuristic-based oracle usually uses a BP neural network as the oracle information, but compared with a probabilistic neural network, the BP neural network has limitations in classification. Aiming at the classification problem, this paper proposes a test oracle based on a probabilistic neural network. Experiments show that it is better than the BP neural network in prediction speed and accuracy.
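
A PNN in this sense is essentially a Parzen-window classifier. The sketch below shows the core computation on toy pass/fail data; the features, smoothing factor, and labels are illustrative, not the paper's experimental setup.

```python
import numpy as np

def pnn_predict(x, train_X, train_y, sigma=0.5):
    """Classify x by the class whose Gaussian-kernel density at x is largest."""
    scores = {}
    for cls in np.unique(train_y):
        pts = train_X[train_y == cls]
        d2 = np.sum((pts - x) ** 2, axis=1)          # squared distances
        scores[cls] = np.mean(np.exp(-d2 / (2 * sigma ** 2)))
    return max(scores, key=scores.get)

# Toy oracle data: program output features labeled pass (0) / fail (1)
train_X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
train_y = np.array([0, 0, 1, 1])

print(pnn_predict(np.array([0.15, 0.15]), train_X, train_y))  # -> 0 (pass)
print(pnn_predict(np.array([0.85, 0.85]), train_X, train_y))  # -> 1 (fail)
```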

Proceedings ArticleDOI
16 Apr 2019
TL;DR: The aim of this paper is to present a novel Design for Test infrastructure, accessible via software, for enabling high-fault-coverage on-line test of arithmetic units within embedded processor cores; the goal is to overcome the limitations of both hardware- and software-based test approaches while striving for a low-invasive on-line test.
Abstract: Safety-critical applications are required to reach high fault coverage figures for on-line testing in order to be compliant with currently used functional safety standards. Nowadays, different solutions are adopted by semiconductor manufacturers to meet these constraints. Such approaches may vary from purely hardware-based mechanisms to software-based ones. Each of these possible solutions presents several advantages and drawbacks: typically, software approaches are less intrusive and have the advantage of reduced test application time compared to hardware ones, whereas hardware approaches yield high defect coverage but are normally invasive and have longer test application times. The aim of this paper is to present a novel Design for Test infrastructure, accessible via software, for enabling high-fault-coverage on-line test of arithmetic units within embedded processor cores. The end goal is to overcome the limitations of both hardware- and software-based test approaches while striving for a low-invasive on-line test. The architecture was implemented on an open-source processor, the OpenRISC 1200, and its effectiveness was evaluated by means of exhaustive fault injection campaigns.

Journal ArticleDOI
TL;DR: In this paper, a safety-related variant of complete test suites for finite state machines is introduced, which can be used to uncover every violation of safety properties from a certain well-defined class, while erroneous behaviour without safety relevance may remain undetected.
Abstract: In this paper, a novel safety-related variant of complete test suites for finite state machines is introduced. Under certain hypotheses, which are similar to the ones used in the well-known W-Method and its improved versions, the new method guarantees to uncover every violation of safety properties from a certain well-defined class, while erroneous behaviour without safety relevance may remain undetected. While the method can be based on any of the known complete strategies for FSM testing, its most effective variant is based on the H-Method; this variant is presented in detail and denoted as the Safety-complete H-Method. It is guaranteed that applying the Safety-complete H-Method always results in at most as many test cases as applying the original H-Method. In well-defined situations that can be pre-determined from the reference model, the Safety-complete H-Method leads to a substantial reduction of test cases in comparison to the size of the analogous H test suites. We advocate this new test suite for situations where exhaustive testing of the complete system is too expensive. In these cases, strong guarantees with respect to fault coverage should only be given for the errors representing safety violations, while it may be considered acceptable if less critical errors remain undetected.
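
The underlying idea can be illustrated with a toy harness: execute a test suite against reference and implementation FSMs, and flag only those output deviations that violate a safety predicate, letting benign deviations pass. The machines, tests, and predicate below are invented; the paper's contribution is the construction of the complete test suite itself, which this sketch does not reproduce.

```python
# Mealy machines: (state, input) -> (next_state, output)
REF  = {("s0", "a"): ("s1", "ok"),   ("s0", "b"): ("s0", "ok"),
        ("s1", "a"): ("s0", "ok"),   ("s1", "b"): ("s1", "stop")}
IMPL = {("s0", "a"): ("s1", "ok"),   ("s0", "b"): ("s0", "warn"),  # benign deviation
        ("s1", "a"): ("s0", "ok"),   ("s1", "b"): ("s1", "go")}    # safety violation

def unsafe(expected, actual):
    # Safety predicate (assumed): missing a required "stop" is safety-critical
    return expected == "stop" and actual != "stop"

def run(machine, inputs):
    state, outs = "s0", []
    for x in inputs:
        state, out = machine[(state, x)]
        outs.append(out)
    return outs

for test in [["a", "b"], ["b", "a"], ["a", "b", "b"]]:
    for exp, act in zip(run(REF, test), run(IMPL, test)):
        if unsafe(exp, act):
            print("safety violation on test", test, ":", exp, "vs", act)
```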

Proceedings ArticleDOI
18 Jun 2019
TL;DR: This paper proposes measuring the resistance between pairs of bumps under TSVs to detect open defects in the TSVs as part of a structural power integrity test; the resistance threshold for defect detection is determined by considering the trade-off between fault coverage and yield loss.
Abstract: Increasing the test coverage of power integrity in the manufacturing test of 3D-ICs is necessary to achieve zero DPPM (defective parts per million) in the market. Although only functional tests are generally applied to analog circuits such as power distribution networks, applying structural tests will increase the coverage. This paper proposes to measure the resistance between a pair of bumps under TSVs (through-silicon vias) to detect open defects of the TSVs as part of a structural power integrity test. The diagnostic performance of each bump pair is evaluated by simulations, and the best one is selected to detect each TSV defect. The resistance threshold for defect detection is determined by considering the trade-off between fault coverage and yield loss. Experimental simulations of the power distribution network in a 3D-IC with two dies are conducted, and the trade-off between them is derived.

Proceedings ArticleDOI
01 Sep 2019
TL;DR: Simulation results show that the proposed BIST algorithm, named March-LV, covers 100% of target faults for low-voltage SRAM arrays, and the test results further verify the feasibility of the algorithm.
Abstract: A novel Built-In Self-Test (BIST) algorithm for testing low-voltage SRAM is proposed in this paper. The algorithm is an improvement of the March C+ algorithm, integrating consecutive write-0 and write-1 operations to cover more fault models, such as Read Destructive Coupling Fault (CFrd), Write Destructive Coupling Fault (CFwd), and Write Disturb Fault. Consequently, higher fault coverage is achieved than with the traditional March algorithm. The proposed algorithm is implemented via the user-defined algorithm (UDA) capability of Mentor tools. In order to verify the effectiveness of the algorithm, a low-voltage 8T SRAM chip is designed and tested based on the SMIC 40nm LL CMOS process. Simulation results show that the proposed BIST algorithm, named March-LV, covers 100% of target faults for low-voltage SRAM arrays. The test results further verify the feasibility of the algorithm.
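
To show the flavor of algorithm being extended, the sketch below runs the plain March C+ element sequence over a simulated bit array with an injected stuck-at fault; March-LV's additional back-to-back write operations and the UDA implementation are not reproduced here.

```python
# Behavioral sketch of a March memory test: apply each March element over a
# simulated word array and compare reads against the expected value.

N = 16
mem = [0] * N                       # simulated 1-bit cells

def make_write(sa_addr=None, sa_val=0):
    def write(a, v):                # stuck-at fault model for demonstration
        mem[a] = sa_val if a == sa_addr else v
    return write

write = make_write(sa_addr=5)       # cell 5 stuck-at-0

def march(elements):
    for order, ops in elements:
        addrs = range(N) if order == "up" else range(N - 1, -1, -1)
        for a in addrs:
            for op in ops:
                if op[0] == "w":
                    write(a, int(op[1]))
                elif mem[a] != int(op[1]):     # "r0"/"r1": read and check
                    print(f"fault detected at cell {a} during {op}")

MARCH_C_PLUS = [("up",   ["w0"]),
                ("up",   ["r0", "w1", "r1"]),
                ("up",   ["r1", "w0", "r0"]),
                ("down", ["r0", "w1", "r1"]),
                ("down", ["r1", "w0", "r0"]),
                ("down", ["r0"])]
march(MARCH_C_PLUS)                 # reports the stuck-at-0 cell
```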

Proceedings ArticleDOI
05 Jul 2019
TL;DR: This paper presents automation system of Fault Detection and Analysis using PLC and SCADA and includes three types of failure of fault detection in boiler which are major caused in steam power Plant during operation.
Abstract: This paper present automation system of Fault Detection and Analysis using PLC and SCADA (Softwares-RSlogix 500 and Wonderware Intouch). The processes which are used for monitoring techniques are most important in practice and based on models constructed. A new approach to failure detection and isolation either remove the faulty system or convert it into backward error recovery system. In PLC controlled flexible fault detection and manufacturing systems, there is no automatic fault detection module in PLC controller itself, so additional module required to be improved. The New fault detection and fault finding system based on measured data from the detection machines and sensor data. In this system PLC internal program detects and find the fault on the basis of sensor and measured data. The fault detection is analysis on the basis of comparing normal condition with abnormal condition. Generally, fault detection via Allen Bradley PLC program execution can provide greater fault coverage across a processor-chip compared to error coding techniques on individual hardware structures. This paper includes three types of failure of fault detection in boiler which are major caused in steam power Plant during operation. In this whole process SCADA is used for Real time visualization. SCADA provide a complete pictorial overview of the entire of the entire plant with necessary monitoring and control facilities at a central station.