
Showing papers on "Automatic test pattern generation" published in 2017


Journal ArticleDOI
TL;DR: An online fault detection and classification method is proposed for thermocouples used in nuclear power plants and a technique is proposed to identify the faulty sensor from the fault data.
Abstract: In this paper, an online fault detection and classification method is proposed for thermocouples used in nuclear power plants. In the proposed method, fault data are detected by a classification method that separates fault data from normal data. A deep belief network (DBN), a deep learning technique, is applied to classify the fault data; its multilayer feature extraction scheme is highly sensitive to small variations in the data. Since the classification method alone cannot identify which sensor is faulty, a technique is proposed to identify the faulty sensor from the fault data. Finally, a composite statistical hypothesis test, namely the generalized likelihood ratio test, is applied to compute the fault pattern of the faulty sensor signal based on the magnitude of the fault. The performance of the proposed method is validated using field data obtained from thermocouple sensors of the fast breeder test reactor.
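
A minimal sketch of the final step only, assuming (as an illustration, not the authors' implementation) a constant bias fault in a Gaussian residual with known noise variance; the window size, variance, and threshold are hypothetical parameters.

```python
import numpy as np

def glrt_bias_fault(residual, sigma2=0.25, threshold=10.0):
    """Generalized likelihood ratio test for a constant bias of unknown
    magnitude in a window of residuals (measured minus expected signal)."""
    n = len(residual)
    mu_hat = np.mean(residual)              # ML estimate of the fault magnitude
    glr = n * mu_hat ** 2 / (2.0 * sigma2)  # log-likelihood ratio for a mean shift
    return glr > threshold, mu_hat, glr

# Hypothetical usage: a 0.8-degree bias superimposed on zero-mean noise.
rng = np.random.default_rng(0)
window = 0.8 + np.sqrt(0.25) * rng.standard_normal(100)
faulty, magnitude, score = glrt_bias_fault(window)
print(faulty, round(float(magnitude), 3), round(float(score), 1))
```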

76 citations


Journal ArticleDOI
TL;DR: This study proposes signal-model-based fault coding that monitors the stimulated circuit response to perform fault diagnosis without training a large amount of sample data or fault classifiers, and achieves relatively high fault diagnosis and prognosis accuracy.
Abstract: Analog circuits have been extensively used in industrial systems, and their failure may make these systems work abnormally and even cause accidents. In order to monitor their status, detect faults, and predict their failure early, this study proposes signal-model-based fault coding that monitors the stimulated circuit response to perform fault diagnosis without training a large amount of sample data and fault classifiers. Manifold features extracted from circuit responses are associated with a fault-indicating curve in the feature space, in which a group of fault bases is uniformly and continuously distributed along the gradual deviation of one critical component from its nominal value. These bases can be deployed in a factory setting but used during field operation. Fault coding is converted to a novel optimization problem, and the optimized solution forms a fault code representing the fault class, suitable for realizing fault detection and isolation for different components. A fault indicator based on the comparison between fault codes can describe performance degradation trends. To improve prediction accuracy, historical degradation data are collected and treated as a priori exemplars, and a novel exemplar-based conditional particle filter is proposed to track the degradation process for the prediction of remaining useful performance. Case studies on two analog filter circuits demonstrate that the proposed method achieves relatively high fault diagnosis and prognosis accuracy. The main advantages of our study are twofold: first, high diagnostic accuracy can still be obtained even without a large amount of training data; second, the prognostic effect remains relatively stable whenever the prognosis module is triggered.

53 citations


Proceedings ArticleDOI
01 Feb 2017
TL;DR: This paper identifies three test objectives that aim to increase test suite diversity and uses a search-based algorithm to generate diversified but small test suites, and develops a prediction model to stop test generation when adding test cases is unlikely to improve fault localization.
Abstract: One promising way to improve the accuracy of fault localization based on statistical debugging is to increase diversity among test cases in the underlying test suite. In many practical situations, adding test cases is not a cost-free option because test oracles are developed manually or running test cases is expensive. Hence, we require test suites that are both diverse and small to improve debugging. In this paper, we focus on improving fault localization of Simulink models by generating test cases. We identify three test objectives that aim to increase test suite diversity. We use these objectives in a search-based algorithm to generate diversified but small test suites. To further minimize test suite sizes, we develop a prediction model to stop test generation when adding test cases is unlikely to improve fault localization. We evaluate our approach using three industrial subjects. Our results show (1) the three selected test objectives are able to significantly improve the accuracy of fault localization for small test suite sizes, and (2) our prediction model is able to maintain almost the same fault localization accuracy while reducing the average number of newly generated test cases by more than half.
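
For intuition only, here is a highly simplified sketch of such a diversity-driven generation loop, assuming randomly sampled candidate inputs, a toy output-signature diversity objective, and a placeholder stopping rule standing in for the paper's prediction model; all names, parameters, and thresholds are hypothetical.

```python
import random

def output_diversity(signatures, candidate_sig):
    """Toy objective: minimum Hamming distance between the candidate's output
    signature and the signatures already in the suite (larger = more diverse)."""
    if not signatures:
        return float("inf")
    return min(sum(a != b for a, b in zip(candidate_sig, s)) for s in signatures)

def generate_suite(simulate, input_space, max_size=20, gain_threshold=1.0):
    """Greedy search: repeatedly add the most diverse of a few random candidates,
    stopping when the diversity gain falls below a threshold (a stand-in for the
    prediction model that decides further test cases are unlikely to help)."""
    suite, signatures = [], []
    while len(suite) < max_size:
        best_gain, best_case, best_sig = -1.0, None, None
        for _ in range(10):
            case = random.choice(input_space)
            sig = simulate(case)
            gain = output_diversity(signatures, sig)
            if gain > best_gain:
                best_gain, best_case, best_sig = gain, case, sig
        if best_gain < gain_threshold:
            break
        suite.append(best_case)
        signatures.append(best_sig)
    return suite
```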

46 citations


Journal ArticleDOI
TL;DR: A low-overhead detection technique which inserts malicious logic detection circuitry at netlist sites chosen by an algorithm that employs an intelligent and accurate analysis of fault propagation through logic gates.
Abstract: Hardware Trojan Horses have emerged as great threats to modern electronic design and manufacturing practices. Because of their inherent surreptitious nature, test vector generation to detect hardware Trojan horses is a difficult problem. Efficient online detection techniques can be more effective in detecting hardware Trojan horses. In this paper, we propose a low-overhead detection technique which inserts malicious logic detection circuitry at netlist sites chosen by an algorithm that employs an intelligent and accurate analysis of fault propagation through logic gates. Proactive system-level countermeasures can be activated on detection of malicious logic, thereby avoiding disastrous system failure. Experimental results on benchmark circuits show close to 100 percent hardware Trojan horse detection coverage when our proposed technique is employed, as well as acceptable overheads.

39 citations


Journal ArticleDOI
TL;DR: A design-for-test technique aimed at reducing deterministic pattern counts and test data volume through the insertion of conflict-aware test points, which takes advantage of the conflict analysis and reuses functional flip-flops as drivers of control points.
Abstract: There is mounting evidence that automatic test pattern generation tools capable of producing tests with high coverage of the defects occurring in large nanometer semiconductor designs inflate test sets and test application times to unprecedented levels. The design-for-test technique presented in this paper aims at reducing deterministic pattern counts and test data volume through the insertion of conflict-aware test points. This methodology identifies and resolves conflicts across internal signals, allowing test generation to increase the number of faults targeted by a single pattern. This is complemented by a method to minimize the silicon area needed to implement conflict-aware test points. The proposed approach takes advantage of the conflict analysis and reuses functional flip-flops as drivers of control points. Experimental results on industrial designs with on-chip test compression demonstrate that the proposed test points are effective in achieving, on average, an additional factor of 2x to 4x compression for stuck-at and transition patterns over the best up-to-date results provided by embedded deterministic test (EDT)-based regular compression.

35 citations


Journal ArticleDOI
TL;DR: This work designed a test system for testing CPSs and analyzed the variability that it needed to test different configurations, and proposed a methodology supported by a tool named ASTERYSCO that automatically generates simulation-based test system instances to test individual configurations of CPSs.
Abstract: Cyber-physical systems (CPSs) are ubiquitous systems that integrate digital technologies with physical processes. These systems are becoming configurable to respond to the different needs that users demand. As a consequence, their variability is increasing, and they can be configured into many system variants. To ensure a systematic test execution of CPSs, a test system must be elaborated encapsulating several sources such as test cases or test oracles. Manually building a test system for each configuration is a non-systematic, time-consuming, and error-prone process. To overcome these problems, we designed a test system for testing CPSs and analyzed the variability needed to test different configurations. Based on this analysis, we propose a methodology supported by a tool named ASTERYSCO that automatically generates simulation-based test system instances to test individual configurations of CPSs. To evaluate the proposed methodology, we selected different configurations of a configurable Unmanned Aerial Vehicle and measured the time required to generate their test systems; on average, around 119 s were needed by our tool to generate the test systems for 38 configurations. In addition, we compared the process of generating test system instances between the proposed method and a manual approach. Based on this comparison, we believe that the proposed tool enables a systematic method of generating test system instances and represents an important step toward the full automation of testing in the field of configurable CPSs.

35 citations


Journal ArticleDOI
TL;DR: This paper presents HackTest, an attack that extracts secret information from the test data, even if the test data do not explicitly contain the secret, and shows that IC test-data generation algorithms need to be reinforced with security.
Abstract: Test of integrated circuits (ICs) is essential to ensure their quality; the test is meant to prevent defective and out-of-spec ICs from entering the supply chain. The test is conducted by comparing the observed IC output with the expected test responses for a set of test patterns; the test patterns are generated using automatic test pattern generation algorithms. Existing test-pattern generation algorithms aim to achieve higher fault coverage at lower test costs. In an attempt to reduce the size of the test data, these algorithms reveal the maximum information about the internal circuit structure. This is realized by sensitizing the internal nets to the outputs as much as possible, unintentionally leaking the secrets embedded in the circuit as well. In this paper, we present HackTest, an attack that extracts secret information from the test data, even if the test data do not explicitly contain the secret. HackTest can break existing intellectual property protection techniques, such as camouflaging, within 2 min for our benchmarks using only the camouflaged layout and the test data. HackTest applies to all existing camouflaged gate-selection techniques and is successful even in the presence of state-of-the-art test infrastructure, i.e., test data compression circuits. Our attack necessitates that IC test data generation algorithms be reinforced with security.

34 citations


Proceedings ArticleDOI
Arani Sinha, Sujay Pandey, Ayush Singhal, Alodeep Sanyal, Alan Schmaltz
01 Oct 2017
TL;DR: A novel automated flow for cell characterization that can be used to create patterns at the cell boundary for DFM-aware faults is described, compared with the cell-aware and dual-cell-aware fault models, and assessed in terms of relative advantages and application scenarios.
Abstract: Yield improvement, yield ramp, and defect screening have been major areas of concern for the semiconductor industry as technology nodes have advanced. Much effort has been focused on capturing the defects missed by traditional stuck-at and transition delay fault model based testing. A majority of these unmodeled defects stem from features inside a standard cell or between two adjacent standard cells. Traditionally, critical area has been used as the manufacturability guideline to determine opens and shorts that should be targeted for test. This paper motivates a new paradigm: a design-for-manufacturability (DFM) hotspot-aware fault model to target intra-cell and inter-cell defects. The basic objective behind this approach is to bring knowledge of manufacturing vulnerability in design layouts to bear when weighing the likelihood of occurrence of systematic defects. In recent technologies, standard cells are much smaller than the lithography-driven optical diameter, which means the cell's feature context is a key driver of DFM-driven fault sensitivity. This paper describes a novel automated flow for cell characterization that can be used to create patterns at the cell boundary for DFM-aware faults. The paper presents ATPG results for different DFM-aware faults and analyzes the coverage gaps. Finally, the paper ends with a comparison with the cell-aware and dual-cell-aware fault models and describes relative advantages and application scenarios.

21 citations


Proceedings ArticleDOI
09 Apr 2017
TL;DR: A novel fault model is introduced, called the dual-cell-aware (DCA) fault model, which targets short defects located between two adjacent standard cells placed in the layout and which cannot be fully covered by the tests of conventional fault models.
Abstract: This paper introduces a novel fault model, called the dual-cell-aware (DCA) fault model, which targets short defects located between two adjacent standard cells placed in the layout. A layout-based methodology is also presented to automatically extract valid DCA faults from targeted designs and cell libraries. The identified DCA faults are output in a format that can be applied to a commercial ATPG tool for test generation. The results of ATPG and fault simulation on industrial designs demonstrate that DCA faults cannot be fully covered by the tests of conventional fault models, including stuck-at, transition, bridge, and cell-aware faults, and hence require their own designated tests to detect them.
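
As a toy illustration of the layout-scanning step, the sketch below finds horizontally abutting cell pairs from a flattened placement list, which is the kind of adjacency information a DCA fault extractor would need; the placement representation and gap threshold are assumptions, not the paper's actual flow.

```python
from collections import defaultdict

def adjacent_cell_pairs(placements, max_gap=0.0):
    """placements: iterable of (instance_name, row_index, x, width).
    Returns pairs of instances that abut (or nearly abut) in the same row,
    i.e. candidate sites for dual-cell-aware short faults."""
    rows = defaultdict(list)
    for inst, row, x, width in placements:
        rows[row].append((x, width, inst))
    pairs = []
    for cells in rows.values():
        cells.sort()
        for (x1, w1, a), (x2, _w2, b) in zip(cells, cells[1:]):
            if x2 - (x1 + w1) <= max_gap:      # cells touch or nearly touch
                pairs.append((a, b))
    return pairs

# Hypothetical placement fragment: u1 and u2 abut in row 0.
print(adjacent_cell_pairs([("u1", 0, 0.0, 1.2), ("u2", 0, 1.2, 0.8), ("u3", 1, 0.0, 1.0)]))
```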

18 citations


Journal ArticleDOI
TL;DR: A family of algorithms able to automatically enhance an existing test program, reducing the time required to run it and, as a side effect, its size is described.
Abstract: The compaction of test programs for processor-based systems is of utmost practical importance: Software-Based Self-Test (SBST) is nowadays increasingly adopted, especially for in-field test of safety-critical applications, and both the size and the execution time of the test are critical parameters. However, while compacting the size of binary test sequences has been thoroughly studied over the years, the reduction of the execution time of test programs is still a rather unexplored area of research. This paper describes a family of algorithms able to automatically enhance an existing test program, reducing the time required to run it and, as a side effect, its size. The proposed solutions are based on instruction removal and restoration, which is shown to be computationally more efficient than instruction removal alone. Experimental results demonstrate the compaction capabilities and allow an analysis of the computational costs and effectiveness of the different algorithms.
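
A schematic sketch of a removal-and-restoration pass in the spirit of the description above, with `fault_coverage` standing in for a fault-simulation run; the block size and the greedy strategy are illustrative choices rather than the paper's actual algorithms.

```python
def compact(program, fault_coverage, block=8):
    """Greedily drop blocks of instructions; restore a block whenever
    removing it lowers the measured fault coverage."""
    baseline = fault_coverage(program)
    i = 0
    while i < len(program):
        candidate = program[:i] + program[i + block:]
        if fault_coverage(candidate) >= baseline:
            program = candidate          # removal accepted; next block slides into place
        else:
            i += block                   # restoration: keep the block, move on
    return program
```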

17 citations


Journal ArticleDOI
TL;DR: This work presents a new test criterion for model-based test case generation based on FSMs, H-Switch Cover, which presented a better average and standard deviation than the other two test criteria.
Abstract: Test case generation based on Finite State Machines (FSMs) has been addressed for quite some time. Model-based testing has drawn attention from researchers and practitioners as one of the approaches to support software verification and validation. Several test criteria have been proposed in the literature to generate test cases based on formal models such as FSMs. However, there is still a lot to be done on this aspect in order to clearly direct a test designer to choose the test criterion most suitable to generate test cases for a certain application domain. This work presents a new test criterion for model-based test case generation based on FSMs, H-Switch Cover. H-Switch Cover relies on the traditional Switch Cover test criterion but uses new heuristics to improve its performance, for example, rules to optimize graph balancing and graph traversal for test case generation. We conducted an investigation of the cost and efficiency of this new test criterion by comparing it with the unique input/output and distinguishing sequence criteria. We used two embedded software products (space application software) and mutation analysis for assessing efficiency. In general, for the case studies proposed in this paper, the H-Switch Cover test criterion presented a better average and standard deviation in terms of cost (number of events) and efficiency (mutation score) than the other two test criteria.
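
For readers unfamiliar with the underlying criterion, the sketch below enumerates the "switches" (pairs of consecutive transitions) that a Switch Cover suite must exercise; the tiny FSM is a made-up example, and the graph-balancing and traversal heuristics that distinguish H-Switch Cover are not shown.

```python
def switches(transitions):
    """transitions: list of (source_state, event, target_state).
    A 'switch' is any transition immediately followed by another
    transition leaving the state it reached."""
    return [(t1, t2) for t1 in transitions for t2 in transitions if t1[2] == t2[0]]

# Tiny example FSM: two states, three transitions.
fsm = [("S0", "a", "S1"), ("S1", "b", "S0"), ("S1", "c", "S1")]
for pair in switches(fsm):
    print(pair)
```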

Proceedings ArticleDOI
Yongwei Tang, Chunmei Wang, Maoli Wang, Huijuan Hao, Jingbo Zhao
01 Mar 2017
TL;DR: Experimental results on a certain type of ship steering instrument show that the device is reliable, stable, and accurate.
Abstract: Building on the classical fault-dictionary-based fault detection method, a microcontroller-based self-learning fault dictionary method for circuit board fault detection is presented. A detection device is designed with the microcontroller-based self-learning fault dictionary diagnostic software at its core, together with a signal detection circuit, precision rectifier circuit, phase detection circuit, linear optical isolation circuit, sample-and-hold circuits, liquid crystal display circuit, and alarm circuit. Experimental results obtained with the device on a certain type of ship steering instrument show that it is reliable, stable, and accurate.

Journal ArticleDOI
TL;DR: The proposed approach prioritizes regression test cases in an order that maximizes fault coverage with the least test suite execution; compared with other orderings, it shows a higher average percentage of faults detected and outperforms all other approaches.
Abstract: Test case prioritization techniques organize test cases for execution in a manner that enhances their efficacy in accordance with some performance goal. The main aim of regression testing is to test the amended software to assure that the amendments performed in the software are correct. It is not always feasible to retest all test cases in a test suite due to limited resources. Therefore, it is necessary to develop effective techniques that can enhance regression testing effectiveness by organizing the test cases in an order following some testing criterion. One possible criterion for such prioritization is to enhance a test suite's fault detection rate, that is, to arrange test cases so that higher-priority test cases run earlier than lower-priority ones. This paper proposes a methodology for prioritizing regression test cases based on four factors, namely the rate of fault detection, the number of faults detected, the test case's ability to detect risk, and the test case's effectiveness. The proposed approach is implemented on two projects. The resulting test case order is analyzed against other prioritization techniques, such as no prioritization, random prioritization, reverse prioritization, and optimal prioritization, as well as against previous works for project 1. We applied the proposed approach to prioritize test cases in an order that maximizes fault coverage with the least test suite execution and compared its effectiveness with the other orderings. The results show a higher average percentage of faults detected and outperform all other approaches.
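
One plausible way to combine such factors into a single priority score is a weighted sum, sketched below; the weights and the per-test-case factor values are hypothetical and are not taken from the paper.

```python
def prioritize(test_cases, weights=(0.25, 0.25, 0.25, 0.25)):
    """test_cases: dict mapping a test-case id to a tuple
    (rate_of_fault_detection, faults_detected, risk_detection, effectiveness),
    each already normalized to [0, 1]. Returns ids in descending priority."""
    def score(factors):
        return sum(w * f for w, f in zip(weights, factors))
    return sorted(test_cases, key=lambda tc: score(test_cases[tc]), reverse=True)

# Hypothetical factor values for three test cases.
print(prioritize({"t1": (0.9, 0.4, 0.2, 0.5),
                  "t2": (0.3, 0.8, 0.9, 0.6),
                  "t3": (0.5, 0.5, 0.5, 0.5)}))
```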

Book ChapterDOI
01 Jan 2017
TL;DR: The modern system-on-chip (SoC) design flow is presented, the security concerns it raises about the trustworthiness of third-party IP are described, and a case study that tries to address the IP trust verification problem is presented.
Abstract: Due to short time-to-market constraints, design houses are increasingly dependent on third-party vendors to procure IPs. These IPs are designed by hundreds of IP vendors distributed across the world. Such IPs cannot be assumed to be trusted, as hardware Trojans can be maliciously inserted into them, and they could be used in military, financial, and other critical applications. It is extremely difficult to detect Trojans in third-party IPs (3PIPs) as there is no golden version against which to compare a given IP core during verification. In this chapter, we present the modern system-on-chip (SoC) design flow and describe how it raises security concerns about the trustworthiness of third-party IP. We give a brief description of the techniques that have been proposed to verify the trustworthiness of third-party IP and also describe their limitations. We present a case study that tries to address the IP trust verification problem. This case study is based on identifying suspicious signals with formal verification, coverage analysis, redundant circuit removal, sequential automatic test pattern generation (ATPG), and equivalence theorems.

Journal ArticleDOI
TL;DR: A novel test compression technique is proposed that can achieve a very high test compression ratio with low area overhead and only a single test input, as all test and control data can be provided by that single input.
Abstract: In this paper, a novel test compression technique is proposed that can achieve a very high test compression ratio with low area overhead and only a single test input. An inverter and a series of D flip-flops, together with configurable switch logic, are inserted between the single input and the scan chains so as to convert the input patterns into the test data required by each scan chain. All scan chains are divided into scan groups such that scan chains in the same group can share the same test data, and the switch logic only needs to connect each group to an appropriate data provider; hence the total area overhead is quite small. A novel algorithm is developed to determine the required test configurations and corresponding test patterns for 100% testable fault coverage. Experimental results show that, on average, this method can achieve data reduction factors of 23x, 124x, and 394x with 3.77%, 0.95%, and 0.03% area overhead for the ISCAS'89, IWLS'05 OpenCores, and IWLS'05 Gaisler Research benchmark circuits, respectively. These results indicate that the reduction factor increases with circuit size; it even reaches 464x for a circuit containing 2.07 million gates with very small area overhead. As all test and control data can be provided by a single input, a great reduction in test channel requirements is also achieved.
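
A simplified sketch of the grouping idea: two scan chains can share a data provider if, across every test cube, their required values never conflict once don't-care bits ('X') are taken into account. The greedy grouping below is illustrative only and is not the paper's algorithm.

```python
def compatible(chain_a, chain_b):
    """chain_a, chain_b: lists of per-pattern bit strings over {'0','1','X'}.
    Chains are compatible if no pattern position needs opposing cared bits."""
    for pa, pb in zip(chain_a, chain_b):
        for a, b in zip(pa, pb):
            if "X" not in (a, b) and a != b:
                return False
    return True

def group_chains(chains):
    """Greedy grouping of scan chains that can share identical test data."""
    groups = []
    for name, data in chains.items():
        for group in groups:
            if all(compatible(data, chains[other]) for other in group):
                group.append(name)
                break
        else:
            groups.append([name])
    return groups

# Hypothetical test cubes for three short scan chains, two patterns each.
print(group_chains({"c0": ["1X0", "0XX"], "c1": ["1X0", "0X1"], "c2": ["0X1", "XX0"]}))
```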

Journal ArticleDOI
TL;DR: A novel timing-aware framework is proposed to evaluate test points' impact on a design, rank them based on their efficiency, and obtain an optimal configuration of the most efficient test points accurately and rapidly.
Abstract: Test points are inserted into integrated circuits to increase fault coverage, especially in logic built-in self-test schemes. Commercial tools have been developed over the past decade to insert test points in circuits under test, but they are often inefficient and incur unacceptably large area overhead. Our analysis shows that many test points have little or no impact on test coverage. Furthermore, depending on where test points are inserted, they can create a significant area overhead unnecessarily. Therefore, we propose a novel timing-aware framework to evaluate test points' impact on a design, rank them based on their efficiency, and obtain an optimal configuration of the most efficient test points accurately and rapidly. Specifically, the proposed framework considers not only individual test coverage improvement but also area penalty, path timing, and the region in which each test point is inserted. Within this framework, we have two metrics, namely efficient test point insertion (ETPI) and test point removal estimation (TPRE). The ETPI metric is developed to remove the most inefficient test points inserted in the circuit by commercial tools, thereby minimizing area penalty with very limited test coverage loss. The TPRE metric is introduced to estimate area overhead and test coverage for designs with different percentages (numbers) of test points removed, without the actual insertion of test points and without the need for lengthy circuit simulation, thereby quickly selecting the most effective test point removal scheme and saving a significant amount of processing time, especially for large circuits. Experimental results, collected by applying the metrics to NXP Semiconductors circuits and academic benchmark circuits, indicate that ETPI can reduce area overhead by up to 95% with test coverage loss as low as 0.57%. In addition, results obtained by applying the TPRE metric indicate that the difference between estimation and actual simulation/synthesis results is less than 0.20% for area overhead and less than 1% for test coverage in most cases.
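
A back-of-the-envelope sketch of ranking test points by efficiency, i.e., coverage gain weighed against area cost, which is the general idea behind pruning inefficient test points; the scoring function and input values are hypothetical simplifications of the paper's timing-aware metrics.

```python
def rank_test_points(test_points):
    """test_points: dict mapping a test-point id to
    (coverage_gain_percent, area_cost_gates). Test points whose coverage gain
    is negligible relative to their area cost rank last and are the first
    candidates for removal."""
    def efficiency(tp):
        gain, area = test_points[tp]
        return gain / max(area, 1e-9)
    return sorted(test_points, key=efficiency, reverse=True)

# Hypothetical candidates: tp2 adds almost no coverage for a large cost.
print(rank_test_points({"tp1": (0.40, 3.0), "tp2": (0.01, 12.0), "tp3": (0.25, 2.0)}))
```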

Proceedings ArticleDOI
01 Jul 2017
TL;DR: A method is proposed to automatically generate a compact set of high-quality design assertions that can span multiple clock cycles and provide 100% input-space coverage of each target node.
Abstract: Verification is a critical step in the Integrated Circuit (IC) design process. In order to verify a design, a set of assertions based on the design is generated. The design is checked, either using simulation or formal tools, to make sure that it does not violate any of the generated assertions. If any of the assertions is violated, a design bug is detected. The quality of the verification is directly connected to the set of assertions and how much of the design functionality they cover. In this paper, we propose a method to automatically generate a set of high-quality design assertions. The method is based on the design description and on Automatic Test Pattern Generation (ATPG). The proposed method generates design assertions that cover 100% of the design input space and can span multiple clock cycles. In our experiment, we generated properties/assertions for a USB 2.0 model from OpenCores. These assertions are compact and provide 100% input-space coverage of each target node.

Journal ArticleDOI
TL;DR: In this article, the authors propose a framework for requirement-driven test generation that combines contract-based interface theories with model-based testing, which is driven by a single requirement interface at a time.
Abstract: We propose a framework for requirement-driven test generation that combines contract-based interface theories with model-based testing. We design a specification language, requirement interfaces, for formalizing different views (aspects) of synchronous data-flow systems from informal requirements. Various views of a system, modeled as requirement interfaces, are naturally combined by conjunction. We develop an incremental test generation procedure with several advantages. The test generation is driven by a single requirement interface at a time. It follows that each test assesses a specific aspect or feature of the system, specified by its associated requirement interface. Since we do not explicitly compute the conjunction of all requirement interfaces of the system, we avoid state space explosion while generating tests. However, we incrementally complete a test for a specific feature with the constraints defined by other requirement interfaces. This allows catching violations of any other requirement during test execution, and not only of the one used to generate the test. This framework defines a natural association between informal requirements, their formal specifications, and the generated tests, thus facilitating traceability. Finally, we introduce a fault-based test-case generation technique, called model-based mutation testing, to requirement interfaces. It generates a test suite that covers a set of fault models, guaranteeing the detection of any corresponding faults in deterministic systems under test. We implemented a prototype test generation tool and demonstrate its applicability in two industrial use cases.

Journal ArticleDOI
TL;DR: Experimental results confirm that Star-EDT can act as a valuable form of deterministic BIST and that it elevates compression ratios to values typically unachievable through conventional reseeding-based solutions.
Abstract: This paper presents Star-EDT, a novel deterministic test compression scheme. The proposed solution seamlessly integrates with EDT-based compression and takes advantage of two key observations: 1) there exist clusters of test vectors that can detect many random-resistant faults, with a cluster comprising a parent pattern and its derivatives obtained through simple transformations, and 2) a significant majority of the specified positions of ATPG-produced test cubes are typically clustered within a single or, at most, a few scan chains. The Star-EDT approach elevates compression ratios to values typically unachievable through conventional reseeding-based solutions. Experimental results obtained for large industrial designs, including those with a new class of test points aware of ATPG-induced conflicts, illustrate the feasibility of the proposed deterministic test scheme and are reported herein. In particular, they confirm that Star-EDT can act as a valuable form of deterministic BIST.
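
The abstract does not specify the transformations, but the flavor of "a parent pattern plus cheap derivatives" can be illustrated with a toy example in which the derivatives are circular shifts of the parent's scan content; the choice of shifting is purely an assumption for illustration.

```python
def derivatives(parent, count=4):
    """parent: bit string loaded into a scan chain.
    Returns the parent followed by `count` circularly shifted variants,
    standing in for a cluster of cheap-to-generate derivative patterns."""
    cluster = [parent]
    for k in range(1, count + 1):
        cluster.append(parent[k:] + parent[:k])
    return cluster

print(derivatives("1100101000", count=3))
```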

Proceedings ArticleDOI
24 Apr 2017
TL;DR: New advanced test technologies such as cell-aware ATPG, hybrid compression/logic BIST/memory BIST, and diagnosis-driven yield analysis with RCD provide some key building blocks towards ensuring compliance with the new standards.
Abstract: Meeting the quality and reliability requirements of ISO 26262 and other automotive electronics standards will only become more difficult as device sizes and complexities continue to grow. New advanced test technologies such as cell-aware ATPG, hybrid compression/logic BIST/memory BIST, and diagnosis-driven yield analysis with RCD provide some key building blocks towards ensuring compliance with the new standards. Adoption of these and other advanced test capabilities will not only improve the ability of semiconductor manufacturers to achieve the necessary quality and reliability metrics, but will also help to further differentiate their products by delivering embedded test capabilities that can be leveraged by their customers at the system level and in the field.

Proceedings ArticleDOI
01 May 2017
TL;DR: This paper describes a much simpler approach to bridging-fault extraction that uses the same layout database employed for layout-aware diagnosis, presents experimental results including coverage gains and critical-area calculation, and explores physical versus traditional coverage metrics.
Abstract: Stringent quality requirements in the automotive sector result in the necessity of targeting additional fault models beyond traditional stuck-at and transition faults. This paper is concerned with bridging faults, which are well known to require additional tests to detect those not detected by traditional tests. To keep ATPG tractable, bridging faults must be extracted from the layout to obtain nodes that are physically close together. There are well-established methods for such extraction, but they typically involve complex flows with several tools. This paper describes a much simpler approach using the same layout database that is used for layout-aware diagnosis. The extraction process automatically ranks bridges by critical area, so it is simple to truncate the fault list if necessary. Since each bridge has a critical area, it is also possible to provide a weighted, critical-area-based coverage, which is physically more realistic than a simple fault count. The paper describes experimental results, including coverage gains and critical-area calculation, and explores physical versus traditional coverage metrics.
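
The weighted coverage mentioned above can be sketched directly: each extracted bridge carries a critical area, and coverage is the detected fraction of the total critical area rather than a plain fault count. The bridge list and areas below are hypothetical.

```python
def weighted_critical_area_coverage(bridges, detected):
    """bridges: dict mapping bridge id to its critical area (e.g. in um^2).
    detected: set of bridge ids detected by the test set."""
    total = sum(bridges.values())
    covered = sum(area for b, area in bridges.items() if b in detected)
    return 100.0 * covered / total if total else 0.0

# Three extracted bridges; the test set detects the two largest.
bridges = {"b1": 0.80, "b2": 0.15, "b3": 0.05}
print(weighted_critical_area_coverage(bridges, {"b1", "b2"}))  # 95.0
```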

Proceedings ArticleDOI
01 Jan 2017
TL;DR: The practical test optimization approach uses historical test data to determine an optimal order of tests ensuring high, progressively uniform configuration coverage, early fault detection, and rapid test feedback, and results showed an improvement in efficiency compared to industry practice.
Abstract: Testing configurable software for high-assurance systems developed in continuous integration requires effective techniques for selecting failure-inducing test cases and thoroughly covering the entire configuration space, while providing rapid feedback on failures. This involves satisfying multiple objectives: maximizing test fault detection, maximizing test coverage of the configuration space, and minimizing test execution time, which often leads to compromises in practice. In this paper, we address this problem with a practical test optimization approach that uses historical test data to determine an optimal order of tests ensuring high, progressively uniform configuration coverage, early fault detection, and rapid test feedback. We extensively validate the approach in a set of experiments using industry test suites, and report experimental results showing the improvement in efficiency compared to industry practice. In particular, the approach was shown to increase the uniformity of configuration coverage by 39% on average, which increases fault detection by up to 15%, while only slightly delaying test feedback.

Posted Content
TL;DR: In this paper, the authors present a method to identify and insert redundant logic into a combinational circuit to improve its fault tolerance without having to replicate the entire circuit, as is the case with conventional redundancy techniques. However, care should be taken when introducing redundant logic, since its insertion may give rise to new internal nodes, and faults on those nodes may impact the fault tolerance of the resulting circuit.
Abstract: This paper presents a novel method to identify and insert redundant logic into a combinational circuit to improve its fault tolerance without having to replicate the entire circuit, as is the case with conventional redundancy techniques. In this context, it is discussed how to estimate the fault masking capability of a combinational circuit using the truth-cum-fault enumeration table, and then it is shown how to identify the logic that can be introduced to add redundancy to the original circuit without affecting its native functionality, with the aim of improving its fault tolerance, though this involves some trade-offs in the design metrics. However, care should be taken while introducing redundant logic, since redundant logic insertion may give rise to new internal nodes, and faults on those nodes may impact the fault tolerance of the resulting circuit. The combinational circuit considered and its redundant counterparts are all implemented in a semi-custom design style using a 32/28 nm CMOS digital cell library, and their respective design metrics and fault tolerances are compared.
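
As a rough illustration of the enumeration idea, the sketch below injects single stuck-at faults on the nets of a tiny two-level circuit and counts, over all input vectors, how often the primary output still matches the fault-free value; the circuit and the masking count are illustrative assumptions, not the paper's case study.

```python
from itertools import product

def tiny_circuit(a, b, c, fault=None):
    """y = (a AND b) OR c, with an optional stuck-at fault on an internal net."""
    n1 = a & b
    if fault == ("n1", 0): n1 = 0
    if fault == ("n1", 1): n1 = 1
    y = n1 | c
    if fault == ("y", 0): y = 0
    if fault == ("y", 1): y = 1
    return y

faults = [(net, v) for net in ("n1", "y") for v in (0, 1)]
masked = total = 0
for a, b, c in product((0, 1), repeat=3):
    good = tiny_circuit(a, b, c)              # fault-free reference output
    for f in faults:
        total += 1
        masked += tiny_circuit(a, b, c, fault=f) == good
print(f"masked {masked}/{total} fault-input combinations")
```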

Patent
03 Apr 2017
TL;DR: In this article, a skew-tolerant interface is proposed to compensate for clock skew differences between a global clock from outside at least one of the partitions and a balanced local clock within at least some partitions.
Abstract: In one embodiment, a system comprises: a global clock input for receiving a global clock; a plurality of partitions; and a skew-tolerant interface configured to compensate for clock skew differences between a global clock from outside at least one of the partitions and a balanced local clock within at least one of the partitions. The partitions can be test partitions. The skew-tolerant interface can cross a mesochronous boundary. In one exemplary implementation, the skew-tolerant interface includes a deskew ring buffer on the communication path of the at least one partition. Pointers associated with the ring buffer can be free-running and depend only on clocks being pulsed when out of reset. The scheme can be fully synchronous and deterministic, and can be modeled for ATPG tools using simple pipeline flops. The depth of the pipeline can be dependent on the pointer difference for the read/write interface. The global clock input can be part of a scan link.

Proceedings ArticleDOI
01 Oct 2017
TL;DR: A software-based online NoC self-testing solution based on bounded model checking (BMC) that achieves high fault coverage in functional mode and outperforms previously proposed solutions.
Abstract: Online testing is critical to ensure reliable operation of manycore systems based on a network-on-chip (NoC) interconnection fabric. We present a software-based online NoC self-testing solution based on bounded model checking (BMC). The proposed method first implements BMC on a sliced extended finite-state machine, and extracts the leading sequences necessary to excite NoC functions. Next, it targets the structural faults within every function excited by the leading sequence through constrained ATPG. Finally, a test protocol is developed to make the test responses observable. Experimental results show that the proposed method achieves high fault coverage in functional mode and outperforms previously proposed solutions. In addition, the fault coverage is very close to that of full-scan testing, but without any area overhead.

Journal ArticleDOI
TL;DR: The experimental results show that the test sequences generated using the proposed SMTSG (State Machine to Test Sequence Generation) approach are more efficient than the existing approaches.
Abstract: The aim of this paper is to generate test sequences for object-oriented software with composite states using state machines. This experimental work in software testing focuses on generating test sequences using the proposed algorithm, called SMTSG (State Machine to Test Sequence Generation). This work also evaluates the effectiveness of the test sequences using mutation analysis. Our approach considers nine types of state faults for checking the efficiency of the generated test sequences. The effectiveness of the prioritized test sequences is shown using the Average Percentage of Faults Detected (APFD) metric. The experimental results show that the test sequences generated using the proposed approach are more efficient than those of existing approaches.
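
The APFD metric used for this comparison has a standard closed form, APFD = 1 - (TF_1 + ... + TF_m)/(n*m) + 1/(2n), where n is the number of test cases, m the number of faults, and TF_i the position of the first test case revealing fault i. A small sketch with hypothetical fault data:

```python
def apfd(order, faults_detected_by):
    """order: test-case ids in execution order.
    faults_detected_by: dict mapping a test-case id to the set of faults it reveals.
    Assumes every fault is revealed by at least one test case in the order."""
    all_faults = set().union(*faults_detected_by.values())
    n, m = len(order), len(all_faults)
    first_pos = {}
    for pos, tc in enumerate(order, start=1):
        for f in faults_detected_by.get(tc, set()):
            first_pos.setdefault(f, pos)
    return 1 - sum(first_pos[f] for f in all_faults) / (n * m) + 1 / (2 * n)

# Hypothetical example: three test cases revealing four faults.
detects = {"t1": {"f1", "f2"}, "t2": {"f3"}, "t3": {"f4"}}
print(round(apfd(["t1", "t2", "t3"], detects), 3))  # 0.583
```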

Proceedings ArticleDOI
09 Apr 2017
TL;DR: The adaptive test flow first eliminates seemingly redundant tests on a wafer-by-wafer basis and then it may refine the test content on a device-by-device basis so as to ensure high fault coverage.
Abstract: Adaptive test is a promising direction for reducing the manufacturing test cost. It aims to dynamically adjust the test program on a device-by-device or on a wafer-by-wafer basis. Adjusting the test program could involve eliminating tests, changing test limits, re-ordering tests, etc. The objective is to spend the minimum possible test time per device without sacrificing fault coverage. In this paper, we present an adaptive test flow for mixed-signal ICs and we demonstrate its effectiveness on a sizable production dataset from a large mixed-signal IC. The adaptive test flow first eliminates seemingly redundant tests on a wafer-by-wafer basis and then it may refine the test content on a device-by-device basis so as to ensure high fault coverage. In addition, it estimates the test escape risk so as to provide confidence in the binning of devices.

Proceedings ArticleDOI
01 Dec 2017
TL;DR: An efficient CPU-GPU algorithm is presented for extracting the complete set of MCSes, optimized for the NVIDIA general-purpose GPU paradigm, which is one of the most common platforms for GPU parallel computing.
Abstract: Lately, research has focused on the problem of extracting the main unsatisfiable cores from infeasible constraints. The main reasons for infeasibility can be represented by subsets of unsatisfied clauses referred to as "Minimal Correction Subsets" (MCSes). Various algorithms developed for computing MCSes can be used in fault detection, which is a core of SAT-based Automatic Test Pattern Generation (ATPG) for digital VLSI circuits. This paper presents an efficient CPU-GPU algorithm for extracting the complete set of MCSes that can be optimized on the NVIDIA General-Purpose Graphics Processing Unit paradigm, which is considered one of the most common platforms for GPU parallel computing. Our proposed algorithm is evaluated using a C++ implementation for generating and reducing SAT instances of VLSI digital circuits from the ISCAS'85, ISCAS'89, and synthetic benchmarks. The proposed algorithm, utilizing our parallel SAT solver, delivers about a 14x speedup compared to the CUDA@SAT tool.

Journal ArticleDOI
TL;DR: This work describes a technique able to execute functional test programs as if they were structural tests during end-of-production test, in order to achieve good fault coverage while avoiding any over-test problems.
Abstract: Structural test is widely adopted to ensure high quality for a given product. The availability of many commercial tools and the use of fault models make it very easy to generate and to evaluate. Despite its efficiency, structural test is also known for the risk of over-testing, which may lead to yield loss. This problem is mainly due to the fact that structural test does not take into account the functionality of the circuit under test. On the other hand, functional test guarantees that the circuit is tested under normal conditions, thus avoiding both over- and under-testing issues. In particular, for microprocessor testing, functional test is usually applied by exploiting the Software-Based Self-Test (SBST) technique. SBST applies a set of functional test programs that are executed by the processor to achieve a given fault coverage, and it fits particularly well for online testing of processor-based systems. In this work, we describe a technique able to execute functional test programs as if they were structural tests. In this way, they can be applied during the end-of-production test in order to achieve good fault coverage while avoiding any over-test problems. We show that it is possible to map functional test programs into classical structural test schemes, so that their application simply requires the presence of a scan chain. Finally, we present a compaction algorithm able to significantly reduce the test length. Results obtained on two different microprocessors show the advantages of this approach.

Proceedings ArticleDOI
01 Jul 2017
TL;DR: A new high-level fault model is introduced, which covers a broad class of gate-level stuck-at faults (SAF), conditional SAF, and bridging faults of any multiplicity in processor control paths.
Abstract: The advent of many-core systems-on-chip (SoCs) will involve new scalable hardware/software mechanisms that can efficiently utilize the abundance of interconnected processing elements found in these SoCs. These trends will have a great impact on strategies for testing the systems and improving their reliability by exploiting the system's reconfigurability to achieve graceful degradation of performance. We propose a Software-Based Self-Test (SBST) strategy for testing processing elements in many-core systems, with the goal of increasing fault coverage and structuring the test routines in a way that makes test-data delivery in many-core systems more efficient. A new high-level fault model is introduced, which covers a broad class of gate-level stuck-at faults (SAF), conditional SAF, and bridging faults of any multiplicity in processor control paths. Two algorithms for high-level simulation-based test generation for the control path and a bit-wise pseudo-exhaustive test approach for the data path are proposed. No implementation details are needed for test data generation. A novel method for proving the redundancy of high-level functional faults is presented, which allows for precise evaluation of fault coverage.
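
To give a flavor of the data-path part, the sketch below builds a pseudo-exhaustive test set in which each output bit's input cone is exercised exhaustively without enumerating the full input space; the cone partition and input width are hypothetical examples.

```python
from itertools import product

def pseudo_exhaustive_patterns(num_inputs, cones):
    """cones: for each output bit, the list of input indices it depends on.
    Every cone is exercised exhaustively; unspecified inputs default to 0."""
    patterns = set()
    for cone in cones:
        for values in product((0, 1), repeat=len(cone)):
            vec = [0] * num_inputs
            for idx, v in zip(cone, values):
                vec[idx] = v
            patterns.add(tuple(vec))
    return sorted(patterns)

# Hypothetical 4-input data path with two 2-input cones.
pats = pseudo_exhaustive_patterns(4, cones=[[0, 1], [2, 3]])
print(len(pats), "patterns instead of", 2 ** 4)
```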