
Showing papers on "Automatic test pattern generation published in 2019"


Journal ArticleDOI
TL;DR: This paper presents a hybrid test point technology designed to reduce deterministic pattern counts and to improve fault detection likelihood by means of the same minimal set of test points, and demonstrates the feasibility of the new scheme on industrial designs.
Abstract: Logic built-in self-test (LBIST) is now increasingly used with on-chip test compression as a complementary solution for in-system test, where high quality, low power, low silicon area, and, most importantly, short test application time are key factors affecting ICs targeted for safety-critical systems. Test points, common in LBIST-ready designs, can help to reduce test time and the overall silicon overhead so that one can reach the desired test coverage with a minimal number of patterns. Typically, LBIST test points are dysfunctional when enabled in an ATPG-based test compression mode. Similarly, test points used to reduce ATPG pattern counts (PCs) cannot guarantee the desired random testability. In this paper, we present a hybrid test point technology designed to reduce deterministic PCs and to improve fault detection likelihood by means of the same minimal set of test points. The hybrid test points are subsequently deployed in a scan-based LBIST scheme addressing the stringent test requirements of certain application domains such as the automotive electronics market. These requirements, largely driven by safety standards, are met by significantly reducing test application time while preserving high fault coverage. The new scheme is a combination of pseudorandom test patterns delivered in a test-per-clock fashion through conventional scan chains and per-cycle-driven hybrid observation test points that capture fault effects every shift cycle into dedicated scan chains. Their content is gradually shifted into a compactor shared with the remaining chains, which deliver responses once a test pattern has been shifted in. Experimental results obtained for industrial designs, reported herein, confirm the feasibility of the new scheme.

28 citations


Proceedings ArticleDOI
04 Apr 2019
TL;DR: Pseudo-random test patterns are generated using an LFSR reseeding technique, which reduces the memory needed to store seed values, lowers power utilization, and indirectly reduces the time required to test the circuits.
Abstract: Testing of circuits becomes more difficult as the scale of integration increases, as predicted by Moore's Law. Conventional testing approaches are no longer sufficient given the growth in device count and density. Testing helps the developer to find faults and errors present in the developed circuit, which reduces the time required for test and thus decreases the chance of failure during operation. Test time is one of the most important parameters in digital circuit testing and affects the overall testing process. Reducing the time spent on test pattern generation is one of the most effective ways to shorten this process. Reseeding an LFSR is one method of generating test patterns for testing. In this paper, pseudo-random test patterns are generated to test circuits using an LFSR reseeding technique. This helps to reduce the number of test patterns that must be stored for testing. The technique can be applied together with the principles required for low power as well as low test data volume. Fault coverage of the proposed circuit is calculated using ISCAS’89 benchmark circuits. The technique is integrated with the benchmark circuits, and a comparison is made based on performance and resource utilization. The proposed model reduces both the memory needed to store seed values and the power utilization. Reseeding is mainly applicable to BIST, which targets complete fault coverage and minimization of the test length. Data compression that reduces the number of test patterns required for testing will indirectly reduce the time required to check the circuits. Future work is to reduce the time required for test pattern generation. Hamming distance can be used to calculate the number of bits changing during test pattern transitions, and a Hamming-distance approach can be implemented to reduce this parameter.
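For readers new to reseeding, the following is a minimal sketch of the idea only, not the paper's implementation: a software LFSR whose state is periodically reloaded from a small store of seeds, so a handful of stored words expands into a much longer pseudo-random pattern sequence. The polynomial taps and seed values are illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation): a Fibonacci LFSR whose seed
# is periodically replaced from a small seed store, illustrating how reseeding
# trades a few stored seeds for a long deterministic pattern sequence.

def lfsr_patterns(seed, taps, n_patterns, width):
    """Generate n_patterns pseudo-random patterns from one seed."""
    state = seed
    patterns = []
    for _ in range(n_patterns):
        patterns.append(state)
        fb = 0
        for t in taps:                     # XOR the tapped bits to form the feedback bit
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)
    return patterns

def reseeding_lfsr(seeds, taps, patterns_per_seed, width):
    """Expand a small set of stored seeds into a long test sequence."""
    sequence = []
    for seed in seeds:
        sequence.extend(lfsr_patterns(seed, taps, patterns_per_seed, width))
    return sequence

if __name__ == "__main__":
    seeds = [0b10011, 0b01101, 0b11110]    # stored seeds (illustrative)
    taps = [4, 2]                          # assumed tap positions for a 5-bit register
    for p in reseeding_lfsr(seeds, taps, 4, 5):
        print(format(p, "05b"))
```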

24 citations


Proceedings ArticleDOI
01 Sep 2019
TL;DR: An oracle-free, ATPG-based approach is proposed for characterizing the security of a locked sequential circuit, and is shown to be effective at recovering the key sequence from circuits that have been sequentially locked using different locking methods.
Abstract: Hardware security-related threats such as the insertion of malicious circuits, overproduction, and reverse engineering are of increasing concern in the IC industry. To mitigate these threats, various design-for-trust techniques have been developed, including sequential logic locking. Sequential logic locking protects a non-scanned design by employing a key-controlled entrance FSM, key-controlled transitions, or a combination of both techniques. Current methods for characterizing (attacking) the security of sequentially locked circuits do not have the scalability to be applicable to modern circuits. In addition, current methods often require the use of an oracle, which is a working, unlocked circuit that is assumed to be fully initializable and controllable. In this work, an oracle-free, ATPG-based approach is proposed for characterizing the security of a locked sequential circuit. This method is one of several in a toolbox called CLIC-A (Characterization of Locked ICs via ATPG). Experiments using CLIC-A demonstrate that it is effective at recovering the key sequence from various sequentially locked circuits that have been locked using different locking methods.

20 citations


Proceedings ArticleDOI
14 May 2019
TL;DR: In this article, the authors propose TrojanZero, a methodology for designing undetectable HTs that conceal their existence through gate-level modifications and evade standard testing techniques.
Abstract: Conventional Hardware Trojan (HT) detection techniques are based on the validation of integrated circuits to determine changes in their functionality, and on non-invasive side-channel analysis to identify variations in their physical parameters. In particular, almost all the proposed side-channel power-based detection techniques presume that HTs are detectable because they only add gates to the original circuit, with a noticeable increase in power consumption. This paper demonstrates how undetectable HTs can be realized with zero impact on the power and area footprint of the original circuit. To this end, we propose the novel concept of TrojanZero and a systematic methodology for designing undetectable HTs in circuits, which conceals their existence through gate-level modifications. The crux is to salvage the cost of the HT from the original circuit without being detected using standard testing techniques. Our methodology leverages knowledge of the transition probabilities of the circuit nodes to identify and safely remove expendable gates, and embeds malicious circuitry at the appropriate locations with zero power and area overheads compared to the original circuit. We synthesize these designs and then embed them in multiple ISCAS85 benchmarks using a 65nm technology library, and perform a comprehensive power and area characterization. Our experimental results demonstrate that the proposed TrojanZero designs are undetectable by state-of-the-art power-based detection methods.
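The transition-probability analysis that drives the gate selection can be sketched under a standard input-independence assumption; the netlist, input probabilities, and gate set below are illustrative and this is not the paper's flow.

```python
# Minimal sketch: propagate signal probabilities through a tiny gate-level
# netlist assuming independent inputs, then derive per-net transition
# probabilities p*(1-p), the kind of information used to rank nets/gates.

GATES = [                       # (output, gate_type, inputs) -- illustrative netlist
    ("n1", "AND", ("a", "b")),
    ("n2", "OR",  ("n1", "c")),
    ("y",  "NAND", ("n2", "d")),
]
P_INPUT = {"a": 0.5, "b": 0.5, "c": 0.5, "d": 0.5}   # assumed input probabilities

def signal_prob(gate_type, probs):
    if gate_type == "AND":
        p = 1.0
        for q in probs:
            p *= q
        return p
    if gate_type == "OR":
        p = 1.0
        for q in probs:
            p *= (1.0 - q)
        return 1.0 - p
    if gate_type == "NAND":
        return 1.0 - signal_prob("AND", probs)
    raise ValueError(gate_type)

def transition_probs(gates, p_input):
    p = dict(p_input)
    for out, gtype, ins in gates:           # gates assumed in topological order
        p[out] = signal_prob(gtype, [p[i] for i in ins])
    return {net: prob * (1.0 - prob) for net, prob in p.items()}

if __name__ == "__main__":
    for net, tp in sorted(transition_probs(GATES, P_INPUT).items()):
        print(f"{net}: transition probability ~ {tp:.3f}")
```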

18 citations


Proceedings ArticleDOI
25 Mar 2019
TL;DR: This paper proposes an automatic test pattern generation methodology for approximate circuits based on Boolean satisfiability, which is aware of output quality and of approximable vs. non-approximable faults, and can significantly reduce the number of faults to be tested, and the test time accordingly, without sacrificing output quality or test coverage.
Abstract: Approximate computing has gained growing attention as it provides a trade-off between output quality and computation effort for inherently error-tolerant applications such as recognition, mining, and media processing. As a result, several approximate hardware designs have been proposed in order to harness the benefits of approximate computing. Since these circuits are subject to manufacturing defects and runtime failures, testing methods should be aware of their approximate nature. In this paper, we propose an automatic test pattern generation methodology for approximate circuits based on Boolean satisfiability, which is aware of output quality and of approximable vs. non-approximable faults. This allows us to significantly reduce the number of faults to be tested, and the test time accordingly, without sacrificing output quality or test coverage. Experimental results show that the proposed approach can reduce the fault list by 2.85× on average while maintaining high fault coverage.
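The SAT formulation behind such ATPG can be illustrated on a toy example. The sketch below assumes the python-sat (pysat) package and a one-gate circuit; it builds the classic miter between the fault-free and faulty copies and asks a SAT solver for an input that makes them differ. The paper's quality-aware fault classification is not modeled here.

```python
# Minimal sketch of SAT-based stuck-at ATPG on a toy circuit (y = a AND b).
from pysat.solvers import Glucose3

A, B, Y_GOOD, Y_FAULT, DIFF = 1, 2, 3, 4, 5

cnf = [
    # fault-free circuit: Y_GOOD = A AND B (Tseitin encoding)
    [-Y_GOOD, A], [-Y_GOOD, B], [Y_GOOD, -A, -B],
    # faulty circuit with input 'a' stuck-at-0: Y_FAULT = 0 AND B = 0
    [-Y_FAULT],
    # miter: DIFF = Y_GOOD XOR Y_FAULT, and DIFF must be true
    [-DIFF, Y_GOOD, Y_FAULT], [-DIFF, -Y_GOOD, -Y_FAULT],
    [DIFF, -Y_GOOD, Y_FAULT], [DIFF, Y_GOOD, -Y_FAULT],
    [DIFF],
]

with Glucose3(bootstrap_with=cnf) as solver:
    if solver.solve():
        model = set(solver.get_model())
        print("test pattern: a =", int(A in model), ", b =", int(B in model))
    else:
        print("fault is redundant (untestable)")
```

A quality-aware flow would additionally drop faults whose miter is satisfiable only with an error below the accepted quality threshold; here every detectable fault is kept.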

16 citations


Proceedings ArticleDOI
06 Mar 2019
TL;DR: VeriSFQ as discussed by the authors is a semi-formal verification framework for single-flux quantum (SFQ) circuits using the Universal Verification Methodology (UVM) standard.
Abstract: In this paper, we propose a semi-formal verification framework for single-flux quantum (SFQ) circuits called VeriSFQ, using the Universal Verification Methodology (UVM) standard. The considered SFQ technology consists of superconducting digital electronic devices that operate at cryogenic temperatures, with active circuit elements called Josephson junctions that switch at high speed and low energy, allowing SFQ circuits to operate at frequencies over 300 gigahertz. Due to key differences between SFQ and CMOS logic, verification techniques for the former are not as advanced as for the latter. Thus, it is crucial to develop efficient verification techniques as the complexity of SFQ circuits scales. The VeriSFQ framework focuses on verifying the key circuit- and gate-level properties of SFQ logic: fanout, gate-level pipelining, path balancing, and input-to-output latency. The combinational circuits considered in analyzing the performance of VeriSFQ are Kogge-Stone adders (KSA), array multipliers, integer dividers, and select ISCAS’85 combinational benchmark circuits. We experimented with methods of introducing bugs into SFQ circuit designs for verification detection, including stuck-at faults, fanout errors, unbalanced paths, and functional bugs such as incorrect logic gates. In addition, we propose an SFQ verification benchmark consisting of combinational SFQ circuits that exemplify SFQ logic properties, and we present the performance of the VeriSFQ framework on these benchmark circuits. The portability and reusability of the UVM standard allow the VeriSFQ framework to serve as a foundation for future SFQ semi-formal verification techniques.
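One of the SFQ-specific properties listed above, path balancing, is simple to state programmatically. The sketch below is a stand-alone Python check on an assumed netlist format, not part of VeriSFQ or UVM: it flags any gate whose fanins sit at different pipeline depths.

```python
# Minimal sketch: in SFQ logic every gate is clocked, so all input-to-gate
# paths must cross the same number of stages; an unbalanced gate is a bug.

NETLIST = {                  # gate -> list of fanins ("PI" marks primary inputs)
    "g1": ["PI", "PI"],
    "g2": ["g1", "PI"],      # unbalanced: g1 is at depth 1, PI at depth 0
    "g3": ["g1", "g1"],
    "g4": ["g2", "g3"],
}

def check_path_balance(netlist):
    depth = {"PI": 0}
    violations = []

    def gate_depth(g):
        if g in depth:
            return depth[g]
        fanin_depths = [gate_depth(f) for f in netlist[g]]
        if len(set(fanin_depths)) > 1:
            violations.append((g, fanin_depths))
        depth[g] = max(fanin_depths) + 1    # one clocked stage per SFQ gate
        return depth[g]

    for g in netlist:
        gate_depth(g)
    return violations

if __name__ == "__main__":
    for gate, depths in check_path_balance(NETLIST):
        print(f"path-balancing violation at {gate}: fanin depths {depths}")
```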

15 citations


Journal ArticleDOI
TL;DR: A novel ATPG technique is presented that achieves an average yield improvement ranging from 19% up to 36% compared to conventional ATPG, in terms of approximation-redundant fault coverage reduction; in some cases, the improvement can reach up to 100%.
Abstract: The intrinsic resiliency of many of today's applications opens new design opportunities. Some loss of computation accuracy within so-called resilient kernels does not affect the global quality of results. This has led the scientific community to introduce the approximate computing paradigm, which exploits this concept to boost computing system performance. By applying approximation at different layers, it is possible to design more efficient systems, in terms of energy, area, and performance, at the cost of a slight accuracy loss. In particular, at the hardware level, this has led to approximate integrated circuits. From the test perspective, this particular class of integrated circuits raises new challenges. On the other hand, it also offers the opportunity of relaxing test constraints at the cost of a careful selection of so-called approximation-redundant faults. Such faults are classified as tolerable because of the slight error they introduce. It follows that improvements in yield and test-cost reduction can be achieved. Nevertheless, conventional automatic test pattern generation (ATPG) algorithms, when not aware of the introduced approximation, generate test vectors covering approximation-redundant faults, thus reducing the yield gain. In this work, we show experimental evidence of this problem and present a novel ATPG technique to deal with it. We then extensively evaluate the proposed technique and show that we are able to achieve an average yield improvement ranging from 19% up to 36% compared to conventional ATPG, in terms of approximation-redundant fault coverage reduction. In some cases, the improvement can reach up to 100%.
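The notion of an approximation-redundant (tolerable) fault can be illustrated by brute force on a toy circuit. The sketch below uses an illustrative 2-bit adder and an assumed error threshold, not the paper's ATPG: a stuck-at fault is classified as tolerable if its worst-case arithmetic error stays within the threshold.

```python
# Minimal sketch: exhaustively simulate every stuck-at fault of a 2-bit
# ripple-carry adder and classify each fault by its worst-case output error.
from itertools import product

GATES = [  # (net, op, inputs) in topological order
    ("s0", "XOR", ("a0", "b0")), ("c1", "AND", ("a0", "b0")),
    ("t1", "XOR", ("a1", "b1")), ("s1", "XOR", ("t1", "c1")),
    ("u1", "AND", ("a1", "b1")), ("v1", "AND", ("t1", "c1")),
    ("c2", "OR",  ("u1", "v1")),
]
OUTPUTS = [("s0", 1), ("s1", 2), ("c2", 4)]            # net -> arithmetic weight
OPS = {"XOR": lambda x, y: x ^ y, "AND": lambda x, y: x & y, "OR": lambda x, y: x | y}

def simulate(inputs, fault=None):                       # fault = (net, stuck_value)
    vals = dict(inputs)
    if fault and fault[0] in vals:                      # fault on a primary input
        vals[fault[0]] = fault[1]
    for net, op, (x, y) in GATES:
        vals[net] = OPS[op](vals[x], vals[y])
        if fault and net == fault[0]:                   # fault on a gate output
            vals[net] = fault[1]
    return sum(w for net, w in OUTPUTS if vals[net])

def worst_case_error(fault):
    err = 0
    for a0, a1, b0, b1 in product((0, 1), repeat=4):
        inp = {"a0": a0, "a1": a1, "b0": b0, "b1": b1}
        err = max(err, abs(simulate(inp) - simulate(inp, fault)))
    return err

if __name__ == "__main__":
    THRESHOLD = 1                                       # assumed quality constraint
    nets = ["a0", "a1", "b0", "b1"] + [g[0] for g in GATES]
    for net in nets:
        for sv in (0, 1):
            label = "tolerable" if worst_case_error((net, sv)) <= THRESHOLD else "must test"
            print(f"{net} stuck-at-{sv}: {label}")
```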

15 citations


Proceedings ArticleDOI
11 Mar 2019
TL;DR: This paper identifies a compact subset of defect locations for defect characterization and ATPG, in which only one representative defect location is included for each set of equivalent defect locations.
Abstract: Cell-aware test (CAT) explicitly targets defects inside library cells and therefore significantly reduces the number of test escapes compared to conventional automatic test pattern generation (ATPG). Our CAT flow consists of three steps: (1) defect-location identification (DLI), (2) defect characterization based on detailed analog simulation of the cells, and (3) cell-aware automatic test pattern generation (ATPG). This paper focuses on Step 1, as quality and cost are determined by the set of cell-internal defect locations considered in the remainder of the flow. Based on technology inputs from the user and a parasitic extraction (PEX) run that analyzes the cell layouts, we derive a set of open defects on, and short defects between, both transistor terminals and intra-cell interconnects. The full set of defect locations is stored for later use during failure analysis. Through dedicated DLI algorithms, we identify a compact subset of defect locations for defect characterization and ATPG, in which we include only one representative defect location for each set of equivalent defect locations. For Cadence’s GPDK045 library, the compact subset contains only 2.8% of the full set of defect locations and reduces the time required for defect characterization by the same ratio.
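The collapsing step in Step 1 can be illustrated with an abstract cell model; the sketch below is not the paper's PEX-driven flow. Each candidate defect is modeled as a faulty Boolean function of a NAND2 cell (the behaviors are assumptions), and defects with identical detection signatures are grouped so only one representative per class is kept.

```python
# Minimal sketch: collapse candidate cell-internal defects into equivalence
# classes by their exhaustive cell-level detection signature.
from itertools import product

def nand2(a, b):                                   # fault-free cell function
    return 1 - (a & b)

# Candidate defects modeled abstractly as faulty cell functions (assumed behaviors).
DEFECTS = {
    "open_nmosA_drain": lambda a, b: 1,            # broken series pull-down: output stays high
    "open_nmosB_drain": lambda a, b: 1,            # different location, same faulty behavior
    "short_out_gnd":    lambda a, b: 0,            # output bridged to ground
    "open_inA_high":    lambda a, b: 1 - b,        # floating input A read as 1
}

def signature(fn):
    return tuple(fn(a, b) for a, b in product((0, 1), repeat=2))

def collapse(defects):
    classes = {}
    for name, fn in defects.items():
        classes.setdefault(signature(fn), []).append(name)
    return classes

if __name__ == "__main__":
    good = signature(nand2)
    for sig, names in collapse(DEFECTS).items():
        note = " (undetectable)" if sig == good else ""
        print(f"representative: {names[0]}  covers: {names}{note}")
```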

14 citations


Proceedings ArticleDOI
Yu Zhou, Jun Bi, Yunsenxiao Lin, Yangyang Wang, Dai Zhang, Zhaowei Xi, Jiamin Cao, Chen Sun
24 Jun 2019
TL;DR: P4Tester, a new network testing system for troubleshooting runtime rule faults on programmable data planes, is proposed; it offers a new intermediate representation based on Binary Decision Diagrams, which enables efficient probe generation for various P4-defined data plane functions, and a new probe model that uses source routing to forward probes.
Abstract: P4 and programmable data planes bring significant flexibility to network operation but are inevitably prone to various faults. Some faults, like P4 program bugs, can be verified statically, while some faults, like runtime rule faults, only happen to running network devices, and they are hardly possible to handle before deployment. Existing network testing systems can troubleshoot runtime rule faults via injecting probes, but are insufficient for programmable data planes due to large overheads or limited fault coverage. In this paper, we propose P4Tester, a new network testing system for troubleshooting runtime rule faults on programmable data planes. First, P4Tester proposes a new intermediate representation based on Binary Decision Diagram, which enables efficient probe generation for various P4-defined data plane functions. Second, P4Tester offers a new probe model that uses source routing to forward probes. This probe model largely reduces rule fault detection overheads, i.e. requiring only one server to generate probes for large networks and minimizing the number of probes. Moreover, this probe model can test all table rules in a network, achieving full fault coverage. Evaluation based on real-world data sets indicates that P4Tester can efficiently check all rules in programmable data planes, generate 59% fewer probes than ATPG and Pronto, be faster than ATPG by two orders of magnitude, and troubleshoot multiple rule faults within one second on BMv2 and Tofino.

13 citations


Journal ArticleDOI
TL;DR: Experimental results show that the proposed procedure is the first work that distinguishes 100% of all fault pairs in all ISCAS’89 and IWLS’05 benchmark circuits and over 99.99% for all ITC’99 benchmark circuits using a conventional ATPG tool that generates tests to detect faults.
Abstract: This paper proposes an efficient diagnosis-aware automatic test pattern generation (ATPG) procedure that can quickly identify equivalent-fault pairs and generate diagnosis patterns (DPs) for nonequivalent-fault pairs, where a (non)equivalent fault pair contains two stuck-at faults that are (non)equivalent. The proposed procedure contains three main methods, which together can efficiently generate highly compacted DPs by using a conventional ATPG tool. First, an all-pairs at-a-time diagnosis pattern generation (AFPAT-DPG) method, which adopts user-defined fault models (UDFMs), is employed to quickly generate DPs for most fault pairs that cannot be distinguished by a given set of test patterns (TPs), typically fault-detection patterns. For those fault pairs that cannot be distinguished by AFPAT-DPG, a multipair diagnostic ATPG method (MP-DATPG) is used. MP-DATPG is a complete method in the sense that it can generate diagnosis tests for every distinguishable pair of faults or prove that the pair of faults is indistinguishable. However, due to back-track limits in test generation procedures, diagnosis test generation for some fault pairs may be aborted after the application of the two methods. For such fault pairs, a subcircuit analysis (SCA) method is applied to identify equivalent fault pairs among the aborted fault pairs by trimming the circuit under consideration into one that is much easier to process within the back-track limits of the test generation procedures. Experimental results show that the proposed procedure is the first work that distinguishes 100% of all fault pairs in all ISCAS’89 and IWLS’05 benchmark circuits and over 99.99% in all ITC’99 benchmark circuits using a conventional ATPG tool that generates tests to detect faults.
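The core notion of a distinguishing (diagnosis) pattern can be shown on a toy circuit. The sketch below uses brute-force enumeration instead of the paper's ATPG/UDFM machinery: it searches for an input on which two faulty circuit copies respond differently, and reports the pair as equivalent if no such input exists.

```python
# Minimal sketch: find a diagnosis pattern that distinguishes two stuck-at
# faults of y = (a AND b) OR c, or declare the fault pair equivalent.
from itertools import product

GATES = [
    ("n1", "AND", ("a", "b")),
    ("y",  "OR",  ("n1", "c")),
]
OPS = {"AND": lambda x, z: x & z, "OR": lambda x, z: x | z}
INPUTS, OUTPUT = ("a", "b", "c"), "y"

def simulate(pattern, fault=None):                 # fault = (net, stuck_value)
    vals = dict(zip(INPUTS, pattern))
    if fault and fault[0] in vals:
        vals[fault[0]] = fault[1]
    for net, op, (x, z) in GATES:
        vals[net] = OPS[op](vals[x], vals[z])
        if fault and net == fault[0]:
            vals[net] = fault[1]
    return vals[OUTPUT]

def distinguish(fault1, fault2):
    for pattern in product((0, 1), repeat=len(INPUTS)):
        if simulate(pattern, fault1) != simulate(pattern, fault2):
            return pattern
    return None                                    # indistinguishable -> equivalent pair

if __name__ == "__main__":
    print(distinguish(("a", 0), ("c", 0)))         # distinguishable: prints a pattern
    print(distinguish(("a", 0), ("b", 0)))         # equivalent pair: prints None
```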

11 citations


Journal ArticleDOI
TL;DR: A method for correcting multiple design bugs in gate-level circuits using an incremental satisfiability-based mechanism that does not require a complete set of test patterns to produce a gate-level implementation free of erroneous behavior and that will not reintroduce old bugs after fixing new ones.
Abstract: As the complexity of digital designs continuously increases, existing methods to ensure their correctness are facing ever more serious challenges. Although many studies have sought to enhance the efficiency of debugging methods, they still suffer from the lack of scalable automatic correction mechanisms. In this paper, we propose a method for correcting multiple design bugs in gate-level circuits. To reduce the correction time, an incremental satisfiability-based mechanism is proposed that does not require a complete set of test patterns to produce a gate-level implementation free of erroneous behavior and that will not reintroduce old bugs after fixing new ones. The results show that our method can quickly and accurately suggest corrected gates even for large industrial circuits with many bugs. Average improvements in runtime and memory usage over existing methods are 2.8× and 6.5×, respectively. The results also show that our method needs 2.6× fewer test patterns than state-of-the-art methods.
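The incremental-satisfiability mechanism itself can be illustrated in a few lines, assuming the python-sat (pysat) package; this is not the paper's correction algorithm. One persistent solver accumulates constraints over time, so a later repair query can never contradict a constraint added while fixing an earlier bug.

```python
# Minimal sketch of incremental SAT with assumptions (pysat assumed).
from pysat.solvers import Glucose3

solver = Glucose3()
# constraint learned while fixing a first bug (illustrative clause)
solver.add_clause([1, 2])                 # "repair candidate 1 or 2 must be used"
print(solver.solve(assumptions=[-1]))     # True: candidate 2 is still possible

# constraint learned later, while fixing a second bug
solver.add_clause([-2])                   # candidate 2 ruled out
print(solver.solve(assumptions=[-1]))     # False: no repair consistent with both constraints
print(solver.solve())                     # True: candidate 1 satisfies everything
solver.delete()
```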

Proceedings ArticleDOI
01 Nov 2019
TL;DR: This work proposes a novel method to predict simultaneous switching noise using fast Deep Neural Networks (DNNs) such as fully connected networks, convolutional neural networks, and natural language processing models; the approach is significantly faster than conventional estimation methods and can potentially reduce the test time.
Abstract: The Power Distribution Network (PDN) is designed for worst-case power-hungry functional use-cases. Most often, Design-for-Test (DFT) scenarios are not accounted for while optimizing the PDN design. Automatic Test Pattern Generation (ATPG) tools typically follow a greedy algorithm to achieve maximum fault coverage with short test times. This causes Power Supply Noise (PSN) during scan testing to be much higher than in functional mode, since switching activity is higher by an order of magnitude. Understanding the noise characteristics through exhaustive pattern simulation is extremely machine- and memory-intensive and requires unsustainably long runtimes. Hence, we aggressively limit switching factors to conservative estimates and rely on post-silicon noise characterization to optimize test vectors. In this work, we propose a novel method to predict simultaneous switching noise using fast Deep Neural Networks (DNNs) such as fully connected networks, convolutional neural networks, and natural language processing models. Our approach, which is based on pre-silicon ATPG vectors, is significantly faster than conventional estimation methods and can potentially reduce the test time.
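A small sketch of the kind of input features such a predictor could consume (illustrative data, not the paper's DNN models): per-shift-cycle toggle counts extracted from a scan pattern, which correlate with switching activity and hence with PSN.

```python
# Minimal sketch: extract switching-activity features from one scan pattern.
import numpy as np

def toggle_features(scan_pattern):
    """scan_pattern: 2-D array, rows = shift cycles, columns = scan cells."""
    bits = np.asarray(scan_pattern, dtype=np.int8)
    toggles = np.abs(np.diff(bits, axis=0))       # 1 wherever a cell flips between cycles
    per_cycle = toggles.sum(axis=1)               # switching activity per shift cycle
    return {
        "peak_toggles_per_cycle": int(per_cycle.max()),
        "mean_toggles_per_cycle": float(per_cycle.mean()),
        "overall_toggle_density": float(toggles.mean()),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pattern = rng.integers(0, 2, size=(64, 100))  # 64 shift cycles, 100 scan cells (synthetic)
    print(toggle_features(pattern))               # feature vector for one ATPG pattern
```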

Proceedings ArticleDOI
01 Nov 2019
TL;DR: An ATPG-based toolbox called CLIC-A (Characterization of Locked Integrated Circuits via ATPG) that can be used to determine the level of security effectiveness for a given instance of a locked circuit is proposed.
Abstract: Threats to integrated circuits exist due to the outsourcing of IC design and fabrication to third parties. As a result, various design-for-trust techniques have been developed including logic locking. Current methods of characterizing the security of logic locking require multiple tools and expertise for the various locking types now in existence. In this paper, we propose an ATPG-based toolbox called CLIC-A (Characterization of Locked Integrated Circuits via ATPG) that can be used to determine the level of security effectiveness for a given instance of a locked circuit. Experiments demonstrate that CLIC-A is effective across a multitude of locking methods.

Proceedings ArticleDOI
01 Sep 2019
TL;DR: This paper proposes two algorithms that manipulate DDMs to optimize cell-aware ATPG results with respect to fault coverage, test pattern count, and compute time, and derives an innovative heuristic that outperforms solutions in the literature.
Abstract: Cell-aware test (CAT) explicitly targets defects inside library cells and therefore significantly reduces the number of test escapes compared to conventional automatic test pattern generation (ATPG) approaches that cover cell-internal defects only serendipitously. CAT consists of two steps, viz. (1) library characterization and (2) cell-aware ATPG. Defect detection matrices (DDMs) are used as the interface between both CAT steps; they record which cell-internal defects are detected by which cell-level test patterns. This paper proposes two algorithms that manipulate DDMs to optimize cell-aware ATPG results with respect to fault coverage, test pattern count, and compute time. Algorithm 1 identifies don't-care bits in cell patterns, such that the ATPG tool can exploit these during cell-to-chip expansion to increase fault coverage and reduce test-pattern count. Algorithm 2 selects, at cell level, a subset of preferential patterns that jointly provides maximal fault coverage at a minimized stimulus care-bit sum. To keep the ATPG compute time under control, we run cell-aware ATPG with the preferential patterns first, and a second ATPG run with the remaining patterns only if necessary. Selecting the preferential patterns maps onto a well-known NP-hard problem, for which we derive an innovative heuristic that outperforms solutions in the literature. Experimental results on twelve circuits show average reductions of 43% of non-covered faults and 10% in chip-pattern count.
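Algorithm 2's pattern-selection step is a covering problem; a plain greedy baseline over an illustrative random DDM (not the paper's care-bit-aware heuristic) looks like this:

```python
# Minimal sketch: greedy selection of cell-level patterns covering all
# detectable cell-internal defects recorded in a defect detection matrix.
import random

random.seed(1)
N_PATTERNS, N_DEFECTS = 8, 20
# ddm[p] = set of defects detected by cell pattern p (illustrative random data)
ddm = [{d for d in range(N_DEFECTS) if random.random() < 0.3} for _ in range(N_PATTERNS)]

def greedy_pattern_selection(ddm):
    uncovered = set().union(*ddm)                 # only detectable defects need covering
    selected = []
    while uncovered:
        best = max(range(len(ddm)), key=lambda p: len(ddm[p] & uncovered))
        selected.append(best)
        uncovered -= ddm[best]
    return selected

if __name__ == "__main__":
    chosen = greedy_pattern_selection(ddm)
    print("preferential patterns:", chosen)
    print("defects covered:", len(set().union(*(ddm[p] for p in chosen))))
```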

Journal ArticleDOI
TL;DR: This paper presents a fully autonomous built-in self-test approach for FAST, which supports in-field testing by appropriate strategies for test generation and response compaction, and the impact of the considered fault size is studied in detail.
Abstract: Marginal hardware introduces severe reliability threats throughout the life cycle of a system. Although marginalities may not affect the functionality of a circuit immediately after manufacturing, they can degrade into hard failures and must be screened out during manufacturing test to prevent early life failures. Furthermore, their evolution in the field must be proactively monitored by periodic tests before actual failures occur. In recent years, small delay faults (SDFs) have gained increasing attention as possible indicators of marginal hardware. However, SDFs on short paths may be undetectable even with advanced timing-aware ATPG. Faster-than-at-speed test (FAST) can detect such hidden delay faults (HDFs), but so far FAST has mainly been restricted to manufacturing test. This paper presents a fully autonomous built-in self-test approach for FAST, which supports in-field testing by appropriate strategies for test generation and response compaction. In particular, the required test frequencies for HDF detection are selected such that hardware overhead and test time are minimized. Furthermore, test response compaction handles the large number of unknowns (X-values) on long paths by storing intermediate MISR signatures in a small on-chip memory for later analysis using X-canceling transformations. A comprehensive experimental study demonstrates the effectiveness of the presented approach. In particular, the impact of the considered fault size is studied in detail.
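A minimal sketch of the MISR compaction element the scheme builds on (illustrative width and taps; X-canceling is not modeled): responses are folded into an LFSR-style state, and any single-bit response error changes the final signature.

```python
# Minimal sketch: compact a stream of test-response words into a MISR signature.
def misr_signature(responses, width=8, taps=(7, 5, 4, 3)):
    """responses: iterable of 'width'-bit response words, one per clock."""
    state = 0
    for word in responses:
        fb = 0
        for t in taps:                          # LFSR-style feedback from tapped bits
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)
        state ^= word & ((1 << width) - 1)      # fold the new response word into the state
    return state

if __name__ == "__main__":
    good = [0x3A, 0x11, 0xF0, 0x07]
    faulty = [0x3A, 0x11, 0xF1, 0x07]           # single-bit error in one response
    print(hex(misr_signature(good)), hex(misr_signature(faulty)))
```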

Proceedings ArticleDOI
23 Apr 2019
TL;DR: This paper addresses several radically new phenomena in RSFQ technology, especially the existence of single-pattern delay tests and the need to propagate delayed values via multiple pipeline stages, and proposes a completely new ATPG paradigm which utilizes these new phenomena to select target delay sub-paths and generate test patterns that are guaranteed to excite the worst-case delay along each target delay sub-path.
Abstract: Rapid Single Flux Quantum (RSFQ) logic, based on Josephson Junctions (JJs), is seeing a resurgence as a way of providing high performance in the era beyond the end of physical scaling of CMOS. Since it uses fabrication processes with large feature sizes, the defect density for RSFQ is dramatically lower than that of its CMOS counterpart. Hence, process variations and other RSFQ-specific non-idealities become the major causes of chip failures. Because of the nature of its quantized pulse-based operation, even highly distorted pulses are interpreted logically correctly by cells, but the timing is affected. Therefore, timing verification and delay testing increase in importance in RSFQ. In this paper, we address several radically new phenomena in RSFQ technology, especially the existence of single-pattern delay tests and the need to propagate delayed values via multiple pipeline stages. We then characterize cells under process variations and identify delay excitation conditions, sensitization conditions, and conditions for propagation of the logic errors caused by process variations. We then propose a completely new ATPG paradigm which utilizes these new phenomena to select target delay sub-paths and generate test patterns that are guaranteed to excite the worst-case delay along each target delay sub-path. Finally, we present Monte Carlo simulation results for benchmark circuits with process variations to demonstrate the effectiveness of the vectors generated by our new ATPG.

Proceedings ArticleDOI
01 Dec 2019
TL;DR: This paper proposes a methodology to improve the Tool Confidence Level (TCL) of fault analysis tools by detecting errors in the classification of faults, combining the strengths of Automatic Test Pattern Generators, Formal Methods, and Fault Injection Simulators.
Abstract: The development of Integrated Circuits for the Automotive sector poses complex challenges. ISO 26262 Functional Safety requirements entail extensive Fault Injection campaigns and complex analyses for the evaluation of deployed Software Tools. This paper proposes a methodology to improve the Tool Confidence Level (TCL) of fault analysis tools by detecting errors in the classification of faults. By combining the strengths of Automatic Test Pattern Generators (ATPG), Formal Methods, and Fault Injection Simulators, we are able to automatically generate a Test Environment that enables the validation of the tools and provides supplementary information about the design behavior. Our results show fault detection rates above 99%, including information to improve ISO 26262 metrics calculation.

Proceedings ArticleDOI
25 Mar 2019
TL;DR: A new formal digital-inspired technique, called AMS-QED, can potentially solve issues in analog and mixed-signal verification in SOCs as these subsystems do not share the formal description potential of their digital counterparts.
Abstract: The integration of increasingly complex and heterogeneous SOCs results in ever more complicated demands for the verification of the system and its underlying subsystems. Pre-silicon design validation as well as post-silicon test generation for the analog and mixed-signal (AMS) subsystems within SOCs proves extremely challenging, as these subsystems do not share the formal description potential of their digital counterparts. Several methods have been developed to cope with this lack of formalization during AMS pre-silicon validation, including model checkers, affine arithmetic formalisms, and equivalence checkers. However, contrary to the industrial practice for digital circuits of using formal verification and ATPG tools, common industry practice for analog circuits still largely defaults to simulation-based validation and test generation. A new formal digital-inspired technique, called AMS-QED, can potentially solve these issues in analog and mixed-signal verification.

Journal ArticleDOI
TL;DR: This paper proposes an alternative method that overcomes these drawbacks by determining application-specific knowledge of the circuit, namely the relations between flip-flops and when they assume the same value, and that is automatically applicable to arbitrary circuits.
Abstract: Due to their shrinking feature sizes as well as environmental influences, such as high-energy radiation, electrical noise, and particle strikes, integrated circuits are becoming more vulnerable to transient faults. Accordingly, making those circuits more robust has become an essential step in today's design flows. Methods for increasing the robustness of circuits against these faults have existed for a long time, but they either introduce substantial additional logic, change the timing behavior of the circuit, or are applicable only to dedicated circuits such as microprocessors. In this paper, we propose an alternative method, which overcomes these drawbacks by determining application-specific knowledge of the circuit, namely the relations between flip-flops and when they assume the same value. By this means, we exploit partial redundancies, which are inherent in most circuits anyway (even optimized ones), to frequently compare circuit signals for their correctness, eventually leading to increased robustness. Since determining the required information is a computationally hard task, formal methods, such as bounded model checking, satisfiability-based automatic test pattern generation, and binary decision diagrams, are utilized for this purpose. The resulting methodology requires only a slight increase in additional hardware, influences the timing behavior of the circuit only negligibly, and is automatically applicable to arbitrary circuits. Experimental evaluations confirm these benefits.
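A cheap way to see what "flip-flops that assume the same value" means is random simulation of a toy state machine, sketched below. The surviving pairs are only candidates; the paper proves such relations with bounded model checking and SAT-based ATPG. The state-update function is an illustrative assumption.

```python
# Minimal sketch: mine flip-flop pairs that held equal values in every
# observed simulation cycle, as candidates for self-checking comparisons.
import random
from itertools import combinations

def next_state(state, inp):                  # illustrative 4-flip-flop circuit
    f0, f1, f2, f3 = state
    return ((f0 ^ inp), (f0 ^ inp), (f1 & f3) | inp, f0)

def mine_equal_pairs(n_cycles=1000, n_ffs=4, seed=0):
    rng = random.Random(seed)
    state = (0,) * n_ffs
    candidates = set(combinations(range(n_ffs), 2))
    for _ in range(n_cycles):
        candidates = {(i, j) for (i, j) in candidates if state[i] == state[j]}
        state = next_state(state, rng.randint(0, 1))
    return candidates

if __name__ == "__main__":
    print("candidate equal-value flip-flop pairs:", mine_equal_pairs())
```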

Proceedings ArticleDOI
Haiying Ma, Guo Rui, Quan Jing, Jing Han, Yu Huang, Rahul Singhal, Wu Yang, Xin Wen, Fanjin Meng
01 Sep 2019
TL;DR: The solution and trade-offs made to optimize DFT silicon area overhead, test cost, test coverage, and pre-silicon verification run time, together with ready-to-use silicon bring-up methodologies, are presented.
Abstract: Recent advances in artificial intelligence (AI) are becoming a driving force behind the technological revolution and industrial transformation leading to economic and social development. Application-specific AI SoCs are being developed at different companies to accelerate the processing of data-intensive AI computations. There are many new challenges in designing and implementing Design-For-Test (DFT) logic for AI SoCs. In this paper, we share our experiences with the DFT implementation for our AI SoC. To achieve lower power and higher bandwidth for the AI SoC, we use a high-speed SerDes PHY with a lower threshold voltage, which uses many SoC pins. This has a negative impact on DFT and ATPG due to the lack of reusable IOs that can be used as scan test channels. In this paper, we present our solution and the trade-offs made to optimize DFT silicon area overhead, test cost, test coverage, and pre-silicon verification run time, together with ready-to-use silicon bring-up methodologies.

Proceedings ArticleDOI
01 Sep 2019
TL;DR: A qubit method for synthesizing tests of the discrete functions of SoC components is proposed, which leverages Boolean derivatives with respect to a vector description of a logic element's behavior in the form of Q-coverage to synthesize deductive matrices in the qubit fault simulation method.
Abstract: A qubit method for synthesizing tests of the discrete functions of SoC components is proposed, which leverages Boolean derivatives with respect to a vector description of a logic element's behavior in the form of Q-coverage. The primacy of the metrics of mathematical and technological relations in data structures, on which effective algorithms and methods for control and data processing are built to achieve high-performance testing processes, is formulated. A vector model, or form, of Boolean derivatives is introduced, which is used to synthesize deductive matrices in the qubit fault simulation method and to evaluate the quality of test sequences. A tree-driven ATPG processor, represented by a binary tree-graph of XOR elements for parallel processing of parts of the qubit coverage, and data structures of SoC logic for calculating qubit Boolean derivatives are proposed. The proposed data structures and methods are implemented in a software application that focuses on parallel testing of the logic functions of digital systems-on-chip using qubit coverage.
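The vector (truth-table) form of the Boolean derivative that the method builds on is compact to compute: df/dx_i is the XOR of the two cofactors of f with respect to x_i, and a test for a fault on x_i must set df/dx_i = 1 so the fault effect propagates to the output. A minimal sketch with an illustrative function, not the paper's qubit data structures:

```python
# Minimal sketch: Boolean derivative of a function given as a truth-table vector.
def boolean_derivative(truth_table, var, n_vars):
    """truth_table: 2**n_vars output bits, row index = x0*2**(n-1) + ... + x_{n-1}."""
    size = 1 << n_vars
    stride = 1 << (n_vars - 1 - var)          # bit position of var inside the row index
    return [truth_table[i] ^ truth_table[i ^ stride] for i in range(size)]

if __name__ == "__main__":
    # f(x0, x1, x2) = x0 AND (x1 OR x2), listed for indices 000..111
    f = [0, 0, 0, 0, 0, 1, 1, 1]
    for v in range(3):
        print(f"df/dx{v} =", boolean_derivative(f, v, 3))
```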

Journal ArticleDOI
TL;DR: The result demonstrates that the ATPG can effectively reduce the volume of control points, decrease the vibration of side-feeding motion and improve machining efficiency while surface quality is well maintained for large aperture freeform optics.
Abstract: Slow tool servo (STS) diamond turning is a well-developed technique for freeform optics machining. To address the low machining efficiency, fluctuations in side-feeding motion, and redundant control points encountered with large-aperture optics, this paper reports a novel adaptive tool path generation (ATPG) method for STS diamond turning. In ATPG, the sampling intervals in both the feeding and cutting directions are independently controlled according to the interpolation error and the cutting residual tolerance. A smooth curve is fitted to the side-feeding motion to reduce fluctuations in the feeding direction. A comparison of the surface generation of typical freeform surfaces with ATPG and the commercial software DiffSys is conducted both theoretically and experimentally. The results demonstrate that ATPG can effectively reduce the volume of control points, decrease the vibration of the side-feeding motion, and improve machining efficiency while surface quality is well maintained for large-aperture freeform optics.
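The adaptive-sampling idea can be sketched in isolation (illustrative 2-D profile, not the STS tool-path generator): pick the step size from the local radius of curvature and a chordal-error tolerance, so flat regions get sparse control points and curved regions get dense ones.

```python
# Minimal sketch: error-driven adaptive sampling of a profile y = f(x).
import math

def curvature(fp, fpp):
    return abs(fpp) / (1.0 + fp * fp) ** 1.5

def adaptive_toolpath(f, fp, fpp, x0, x1, tol=1e-3, max_step=0.5):
    """Sample [x0, x1]; the chord error of a step s on radius R is ~ s^2/(8R)."""
    pts = [(x0, f(x0))]
    x = x0
    while x < x1:
        k = curvature(fp(x), fpp(x))
        step = max_step if k == 0 else min(max_step, math.sqrt(8.0 * tol / k))
        x = min(x1, x + step)
        pts.append((x, f(x)))
    return pts

if __name__ == "__main__":
    f   = lambda x: 0.2 * math.sin(3 * x)        # illustrative freeform profile
    fp  = lambda x: 0.6 * math.cos(3 * x)
    fpp = lambda x: -1.8 * math.sin(3 * x)
    pts = adaptive_toolpath(f, fp, fpp, 0.0, 2 * math.pi)
    print(len(pts), "adaptively spaced control points")
```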

Proceedings ArticleDOI
01 Dec 2019
TL;DR: Inspired by the recent advances in reinforcement learning (RL), an RL-based test program generation technique for transition delay fault (TDF) detection is proposed, which significantly improves data efficiency compared to previous open-loop approaches.
Abstract: Software-based Self-test (SBST) has been recognized as a promising complement to scan-based structural Built-in Self-test (BIST), especially for in-field self-test applications. In response to the ever-increasing complexity of modern CPU designs, machine learning algorithms have been proposed to extract processor behavior from simulation data and help constrain ATPG to generate functionally compatible patterns. However, these simulation-based approaches in general suffer from sample inefficiency, i.e., only a small portion of the simulation traces are relevant to fault detection. Inspired by recent advances in reinforcement learning (RL), we propose an RL-based test program generation technique for transition delay fault (TDF) detection. During the training process, knowledge learned from the simulation data is employed to tune the simulation policy; this closed-loop approach significantly improves data efficiency compared to previous open-loop approaches. Furthermore, RL is capable of dealing with delayed responses, which are common when executing processor instructions. Using the trained RL model, instruction sequences that bring the processor to the fault-sensitizing states, i.e., TDF test patterns, can be generated. The proposed test program generation technique is applied to a MIPS32 processor. For TDF, the fault coverage is 94.94%, which is just 2.57% less than that of the full-scan-based approach.
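A toy illustration of the RL mechanics only, with made-up states, transitions, and reward, and tabular Q-learning rather than the paper's models: learn an action sequence that drives an abstract machine into a designated fault-sensitizing state.

```python
# Minimal sketch: tabular Q-learning on a random toy "processor" state machine.
import random

N_STATES, N_ACTIONS, TARGET = 6, 3, 5            # all illustrative
random.seed(0)
# fixed random transition table: T[state][action] -> next state
T = [[random.randrange(N_STATES) for _ in range(N_ACTIONS)] for _ in range(N_STATES)]

def greedy(Q, s):
    return max(range(N_ACTIONS), key=lambda a: Q[s][a])

def q_learning(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, horizon=8):
    Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    for _ in range(episodes):
        s = 0
        for _ in range(horizon):
            a = random.randrange(N_ACTIONS) if random.random() < eps else greedy(Q, s)
            s2 = T[s][a]
            r = 1.0 if s2 == TARGET else 0.0     # reward: fault-sensitizing state reached
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
            if s == TARGET:
                break
    return Q

if __name__ == "__main__":
    Q = q_learning()
    s, seq = 0, []
    while s != TARGET and len(seq) < 8:          # greedy rollout = candidate action sequence
        a = greedy(Q, s)
        seq.append(a)
        s = T[s][a]
    print("action sequence:", seq, "reaches fault-sensitizing state:", s == TARGET)
```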

Proceedings ArticleDOI
01 Apr 2019
TL;DR: It is observed that while ATPG does not yield 100% fault coverage, the NLFSR technique gives 100% coverage; the difference in on-chip power between the LFSR and the NLFSR is also found to be very large, with the NLFSR consuming more than 80% less power than the LFSR.
Abstract: The generation of test patterns for the detection of faults in VLSI circuits is the most integral part of fault detection. The patterns are generated using a Pseudorandom Number Generator (PRNG) and applied to the circuits. The most commonly used PRNG is the Linear Feedback Shift Register (LFSR). Other than the LFSR, Non-Linear Feedback Shift Registers (NLFSRs) can also be used as pattern generators. This paper mainly focuses on the advantages of using NLFSRs over ATPG. It compares the fault coverage of single stuck-at faults using the NLFSR technique with the fault coverage of single stuck-at faults using an ATPG tool for a few ISCAS’89 circuits. This paper also shows the reduction in total on-chip power when using an NLFSR as the pattern generator instead of an LFSR. The netlists of the circuits are generated using the Synopsys Design Compiler tool, and ATPG is performed using the Synopsys TetraMax tool. In the NLFSR technique, faults are injected manually and patterns are applied to the CUT to obtain the fault coverage. The faults injected in the circuits are detected in the simulation results of the faulty circuit. The NLFSRs and LFSRs are designed using Xilinx Vivado. The comparison of fault coverage and power consumption between the techniques is shown in tabular form, and the simulation results for the NLFSR technique are presented. It is observed that while ATPG does not yield 100% fault coverage, the NLFSR technique gives 100% coverage. The difference in on-chip power between the LFSR and the NLFSR is also found to be very large, with the NLFSR consuming more than 80% less power than the LFSR.
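A minimal sketch contrasting linear and nonlinear feedback; the feedback functions are illustrative assumptions, not the registers evaluated in the paper, and no power model is included.

```python
# Minimal sketch: the same shift register driven by a linear (XOR-only) and a
# nonlinear (XOR plus AND term) feedback function.
def shift_register(seed, feedback, width, n):
    state, out = seed, []
    for _ in range(n):
        out.append(state)
        fb = feedback(state)
        state = ((state << 1) | fb) & ((1 << width) - 1)
    return out

lfsr_fb  = lambda s: ((s >> 4) ^ (s >> 2)) & 1                    # linear: XOR of taps
nlfsr_fb = lambda s: ((s >> 4) ^ (s >> 2) ^ ((s >> 1) & s)) & 1   # adds a nonlinear AND term

if __name__ == "__main__":
    for name, fb in (("LFSR", lfsr_fb), ("NLFSR", nlfsr_fb)):
        pats = shift_register(0b10011, fb, 5, 8)
        print(name, [format(p, "05b") for p in pats])
```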

Journal ArticleDOI
TL;DR: The intention of this study is to halve the clock frequency and ensure that power is reduced without causing any timing violations in the design-for-test phase.
Abstract: Low-power design for test is the need of the hour for any system-on-chip designer. The low-power design techniques have been a major challenge to both the designer as well as the testing engineer. ...

Proceedings ArticleDOI
01 Dec 2019
TL;DR: An innovative deterministic ATPG algorithm called TEA (Timing Exception ATPG) is proposed to prevent the generated test patterns from being impacted by timing exceptions, and can generate a more effective test set, improving test coverage, test pattern count, and the total ATPG run time significantly.
Abstract: Timing exceptions are commonly used to indicate that the timing of certain paths have been relaxed so as to enable the design to meet timing closure. Generating scan-based test patterns without considering timing exceptions can lead to invalid test responses, resulting in unpredictable test quality impact. The existing simulation-based solution masks out unreliable signals after a test pattern is generated. If the signals required for detecting the target fault are unreliable and masked out, the generated test pattern fails to detect the target fault, and it is discarded. To achieve an acceptable test coverage, several iterations of test generation with a randomized decision-making process are typically required where different tests are generated for target faults. In this paper, an innovative deterministic ATPG algorithm called TEA (Timing Exception ATPG) is proposed to prevent the generated test patterns from being impacted by timing exceptions. The deterministic algorithm is compatible with the existing simulation-based approach. In this simulation environment, TEA is complete such that for a target fault, the test pattern generated is guaranteed to detect it. If a test pattern cannot be generated using TEA, the target fault is untestable given the timing exception paths in the design and the existing simulation environment. Compared to the existing simulation-based approach, using TEA can generate a more effective test set, improving test coverage, test pattern count, and the total ATPG run time significantly.

Journal ArticleDOI
TL;DR: An Automatic Test Pattern Generation (ATPG) method using Path-Level expressions is proposed for generating the minimal complete test set to detect bridging faults; analysis of the experimental results shows that the proposed method achieves 100% fault coverage with a smaller test set than existing methods.
Abstract: Recently, there has been growing interest in the applicability of reversible circuits. Reversible circuits are designed using reversible gates, which can efficiently reconstruct the previous state of a computation from the current state. These circuits may find potential applications in future generations of optical and quantum computers. To ensure the reliability of these circuits, testing is a mandatory phase of the design cycle. Several fault models have been introduced for reversible circuits, some of which have been adapted from conventional circuits. In this paper, we consider the problem of testing bridging faults (such as single and multiple input bridging faults, and single and multiple intra-level bridging faults) in a reversible circuit designed with the NOT, CNOT, and Toffoli gates (NCT library) and generalized (n-bit) Toffoli gates (GT library). We propose an Automatic Test Pattern Generation (ATPG) method using the Path-Level expression for generating the minimal complete test set to detect the faults mentioned above. The analysis of the experimental results shows that the proposed method has 100% fault coverage, and the test set size is smaller than that of existing methods.
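The bridging-fault model can be illustrated by simulation of a small NCT circuit; the circuit and the wired-AND bridge model below are assumptions, and the paper's Path-Level method derives a minimal test set analytically rather than by enumeration.

```python
# Minimal sketch: detect a single input-bridging fault in a 3-line NCT circuit.
from itertools import product

CIRCUIT = [               # (controls, target) on 3 lines: CNOT, Toffoli, NOT
    ((0,), 1),
    ((0, 1), 2),
    ((), 0),
]

def simulate(bits, circuit):
    bits = list(bits)
    for controls, target in circuit:
        if all(bits[c] for c in controls):
            bits[target] ^= 1
    return tuple(bits)

def detects_input_bridge(vec, line_i, line_j):
    bridged = list(vec)
    bridged[line_i] = bridged[line_j] = vec[line_i] & vec[line_j]   # wired-AND bridge
    return simulate(vec, CIRCUIT) != simulate(bridged, CIRCUIT)

if __name__ == "__main__":
    tests = [v for v in product((0, 1), repeat=3) if detects_input_bridge(v, 0, 1)]
    print("vectors detecting the (line0, line1) input bridge:", tests)
```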

Proceedings ArticleDOI
23 Apr 2019
TL;DR: A brief survey of digital delay fault testing is presented, which lists 100+ references on fault models, simulators, ATPG, DFT, and tools to provide direction to students, practicing engineers, and researchers alike.
Abstract: This article presents a brief survey of digital delay fault testing, which lists 100+ references on fault models, simulators, ATPG, DFT, and tools. Continuing studies are needed in this maturing field for new technologies, signal integrity, process variations, faster than critical path operation, asynchronous circuits, counterfeit ICs, and hardware Trojans. This information is compiled to provide direction to students, practicing engineers, and researchers alike.

Journal ArticleDOI
TL;DR: This work presents a compressive sensing approach that can generate optimal test patterns compared to the ATPG vectors and maximizes the probability of Trojan circuit activation, with a high Trojan detection rate.
Abstract: Traditionally, many fabless companies outsource the fabrication of their IC designs to foundries, which may not always be trusted. In order to ensure trusted ICs, it is important to develop an efficient technique that detects the presence of hardware Trojans. Such a malicious insertion causes logic variations in the nets or leaks sensitive information from the chip, which reduces the reliability of the system. Conventional testing algorithms for generating test vectors suffer reduced detection sensitivity due to high process variations. In this work, we present a compressive sensing approach, which can generate optimal test patterns compared to the ATPG vectors. This approach maximizes the probability of Trojan circuit activation, with a high Trojan detection rate. Side-channel signatures such as power are measured at different time stamps to isolate the Trojan effects. The effect of process noise is minimized by this power-profile comparison approach, which provides high detection sensitivity for varying Trojan sizes and eliminates the requirement of a golden chip. The proposed test generation approach is validated on ISCAS benchmark circuits and achieves Trojan detection coverage with, on average, an 88.6% reduction in test length compared to random patterns.
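The self-referencing power-comparison idea (not the paper's compressive-sensing pattern generation) can be sketched with synthetic traces and simple robust statistics: a chip is flagged when its per-timestamp power deviates strongly from the population, with no golden chip required. All data and the threshold below are assumptions.

```python
# Minimal sketch: flag outlier power profiles across a population of chips.
import numpy as np

rng = np.random.default_rng(2)
N_CHIPS, N_TIMESTAMPS = 20, 50
traces = rng.normal(1.0, 0.05, size=(N_CHIPS, N_TIMESTAMPS))    # process noise only
traces[7, 30:35] += 0.4                       # one chip with extra (Trojan-like) activity

median = np.median(traces, axis=0)                              # per-timestamp population median
mad = np.median(np.abs(traces - median), axis=0) + 1e-9         # robust spread estimate
score = np.max(np.abs(traces - median) / mad, axis=1)           # worst normalized deviation per chip

THRESHOLD = 8.0                               # assumed decision threshold
print("suspect chips:", np.where(score > THRESHOLD)[0].tolist())
```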

Proceedings ArticleDOI
04 Apr 2019
TL;DR: A new approach that formulates ECO (Engineering Change Order) as partial logic synthesis is discussed, and a new formulation of the automatic generation of parallel/distributed computation from a sequential one is introduced with an application example.
Abstract: We first present a historical view of the techniques for two-level and multi-level logic optimization and discuss the practical issues related to them. The techniques for sequential optimization are then briefly reviewed. Based on these, a new approach that formulates ECO (Engineering Change Order) as partial logic synthesis is discussed. Finally, a new formulation of the automatic generation of parallel/distributed computation from a sequential one is introduced with an application example.