
Showing papers on "Fault coverage published in 2001"


Journal ArticleDOI
TL;DR: Test case prioritization techniques schedule test cases for execution in an order that attempts to increase their effectiveness at meeting some performance goal, as discussed by the authors; one such goal is the rate of fault detection, a measure of how quickly faults are detected within the testing process.
Abstract: Test case prioritization techniques schedule test cases for execution in an order that attempts to increase their effectiveness at meeting some performance goal. Various goals are possible; one involves rate of fault detection, a measure of how quickly faults are detected within the testing process. An improved rate of fault detection during testing can provide faster feedback on the system under test and let software engineers begin correcting faults earlier than might otherwise be possible. One application of prioritization techniques involves regression testing, the retesting of software following modifications; in this context, prioritization techniques can take advantage of information gathered about the previous execution of test cases to obtain test case orderings. We describe several techniques for using test execution information to prioritize test cases for regression testing, including: 1) techniques that order test cases based on their total coverage of code components; 2) techniques that order test cases based on their coverage of code components not previously covered; and 3) techniques that order test cases based on their estimated ability to reveal faults in the code components that they cover. We report the results of several experiments in which we applied these techniques to various test suites for various programs and measured the rates of fault detection achieved by the prioritized test suites, comparing those rates to the rates achieved by untreated, randomly ordered, and optimally ordered suites.
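
The total-coverage and additional-coverage orderings described above can be sketched in a few lines of Python; the test names and coverage sets below are hypothetical, and this is only an illustration of the greedy idea, not the authors' implementation.

# Sketch: order test cases by total coverage and by additional (not-yet-covered) coverage.
coverage = {
    "t1": {"f1", "f2", "f3"},
    "t2": {"f2", "f3"},
    "t3": {"f4"},
    "t4": {"f1", "f4", "f5"},
}

def total_coverage_order(cov):
    # Order tests by how many components each covers on its own.
    return sorted(cov, key=lambda t: len(cov[t]), reverse=True)

def additional_coverage_order(cov):
    # Greedily pick the test covering the most not-yet-covered components.
    remaining, covered, order = dict(cov), set(), []
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        covered |= remaining.pop(best)
        order.append(best)
    return order

print(total_coverage_order(coverage))       # ['t1', 't4', 't2', 't3']
print(additional_coverage_order(coverage))  # ['t1', 't4', 't2', 't3']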

1,200 citations


Journal ArticleDOI
TL;DR: A reconstruction-based fault identification approach using a combined index for multidimensional fault reconstruction and identification is proposed, along with a new method to extract fault directions from historical fault data.
Abstract: Process monitoring and fault diagnosis are crucial for efficient and optimal operation of a chemical plant. This paper proposes a reconstruction-based fault identification approach using a combined index for multidimensional fault reconstruction and identification. Fault detection is conducted using a new index that combines the squared prediction error (SPE) and T2. Necessary and sufficient conditions for fault detectability are derived. The combined index is used to reconstruct the fault along a given fault direction. Faults are identified by assuming that each fault in a candidate fault set is the true fault and comparing the reconstructed indices with the control limits. Fault reconstructability and identifiability on the basis of the combined index are discussed. A new method to extract fault directions from historical fault data is proposed. The dimension of the fault is determined on the basis of the fault detection indices after fault reconstruction. Several simulation examples and one practical c...
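
As a rough illustration of the detection side of such an approach, the sketch below computes SPE and T2 for a sample against a PCA model and forms a combined index as a weighted sum of the two; the control limits delta2 and tau2 are placeholders, and the reconstruction and identification steps of the paper are not reproduced.

import numpy as np

def pca_model(X, n_pc):
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_pc].T                       # loadings
    lam = (s[:n_pc] ** 2) / (len(X) - 1)  # principal component variances
    return X.mean(axis=0), P, lam

def indices(x, mean, P, lam):
    xc = x - mean
    t = P.T @ xc                    # scores
    x_hat = P @ t                   # projection onto the PC subspace
    spe = float((xc - x_hat) @ (xc - x_hat))
    t2 = float(t @ (t / lam))
    return spe, t2

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                 # hypothetical normal-operation data
mean, P, lam = pca_model(X, n_pc=2)
spe, t2 = indices(X[0] + 3.0, mean, P, lam)   # a hypothetical faulty sample
delta2, tau2 = 5.0, 9.0                       # placeholder control limits
phi = spe / delta2 + t2 / tau2                # combined index
print(spe, t2, phi)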

456 citations


Proceedings ArticleDOI
30 Oct 2001
TL;DR: Techniques are presented in this paper that allow substantial compression of Automatic Test Pattern Generation (ATPG) produced test vectors, enabling a more than 10-fold reduction in tester scan buffer data volume on ATPG compacted tests.
Abstract: Rapid increases in the wire-able gate counts of ASICs stress existing manufacturing test equipment in terms of test data volume and test capacity. Techniques are presented in this paper that allow for substantial compression of Automatic Test Pattern Generation (ATPG) produced test vectors. We show compression efficiencies allowing a more than 10-fold reduction in tester scan buffer data volume on ATPG compacted tests. In addition, we obtain almost a 2× scan test time reduction. By implementing these techniques for production testing of huge-gate-count ASICs, IBM will continue using existing automated test equipment (ATE), avoiding costly upgrades and replacements.

368 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a fault detection approach based on the vectors of movement of a fault in both the model space and the residual space, which are then compared to the corresponding vector directions of known faults in the fault library.

359 citations


Proceedings ArticleDOI
25 Jun 2001
TL;DR: This paper gives an overview of recent tools to analyze and explore structure and other fundamental properties of an automated system such that any inherent redundancy in the controlled process can be fully utilized to maintain availability, even though faults may occur.
Abstract: Faults in automated processes will often cause undesired reactions and shut-down of a controlled plant, and the consequences could be damage to technical parts of the plant, to personnel or the environment. Fault-tolerant control combines diagnosis with control methods to handle faults in an intelligent way. The aim is to prevent simple faults from developing into serious failures and hence to increase plant availability and reduce the risk of safety hazards. Fault-tolerant control merges several disciplines into a common framework to achieve these goals. The desired features are obtained through online fault diagnosis, automatic condition assessment and calculation of appropriate remedial actions to avoid certain consequences of a fault. The envelope of the possible remedial actions is very wide. Sometimes, simple re-tuning can suffice. In other cases, accommodation of the fault could be achieved by replacing a measurement from a faulty sensor by an estimate. In yet other situations, complex reconfiguration or online controller redesign is required. This paper gives an overview of recent tools to analyze and explore structure and other fundamental properties of an automated system such that any inherent redundancy in the controlled process can be fully utilized to maintain availability, even though faults may occur.

289 citations


Proceedings ArticleDOI
30 Oct 2001
TL;DR: Delay fault test application via enhanced scan and skewed load techniques is shown to allow scan-based delay tests to be applied that are unrealizable in normal operation; rather than being a positive feature, this higher coverage has a negative impact on yield and designer productivity.

Abstract: Delay fault test application via enhanced scan and skewed load techniques is shown to allow scan-based delay tests to be applied that are unrealizable in normal operation. Rather than higher coverage being a positive feature, it is shown to have a negative impact on yield and designer productivity. The use of functionally justified tests is defended by both a motivating example and data from benchmark circuits. Implications for overhead, yield, timing optimization, and test debug are discussed.

252 citations


Journal ArticleDOI
TL;DR: A new software-based self-testing methodology for processors, which uses a software tester embedded in the processor memory as a vehicle for applying structural tests and demonstrates its significant cost/fault coverage benefits and its ability to apply at-speed test while alleviating the need for high-speed testers.
Abstract: At-speed testing of gigahertz processors using external testers may not be technically and economically feasible. Hence, there is an emerging need for low-cost high-quality self-test methodologies that can be used by processors to test themselves at-speed. Currently, built-in self-test (BIST) is the primary self-test methodology available. While memory BIST is commonly used for testing embedded memory cores, complex logic designs such as microprocessors are rarely tested with logic BIST. In this paper, we first analyze the issues associated with current hardware-based logic-BIST methodologies by applying a commercial logic-BIST tool to two processor cores. We then propose a new software-based self-testing methodology for processors, which uses a software tester embedded in the processor memory as a vehicle for applying structural tests. The software tester consists of programs for test generation and test application. Prior to the test, structural tests are prepared for processor components in the form of self-test signatures. During the process of self-test, the test generation program expands the self-test signatures into test sets and the test application program applies the tests to the components under test at the speed of the processor. Application of the novel software-based self-test method demonstrates its significant cost/fault coverage benefits and its ability to apply at-speed test while alleviating the need for high-speed testers.
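
A minimal sketch of the expansion idea, assuming a self-test signature consists of an LFSR seed, feedback mask, and pattern count (the signature format and the component model below are hypothetical); on a real processor the expansion and application loops would be small instruction sequences rather than Python.

# Sketch: expanding a "self-test signature" into test patterns in software and
# applying them to a component under test.
def expand_signature(seed, taps, width, count):
    state, patterns = seed, []
    for _ in range(count):
        patterns.append(state)
        fb = bin(state & taps).count("1") & 1            # parity of tapped bits
        state = ((state >> 1) | (fb << (width - 1))) & ((1 << width) - 1)
    return patterns

def apply_to_component(patterns, component):
    # The test application program feeds each pattern to the component under
    # test and collects responses (the component here is a placeholder stub).
    return [component(p) for p in patterns]

alu_stub = lambda p: (p + 1) & 0xFF                      # placeholder component model
responses = apply_to_component(expand_signature(0xA5, 0b10110, 8, 16), alu_stub)
print(responses)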

217 citations


Journal ArticleDOI
TL;DR: In this article, a fault location algorithm based on phasor measurement units (PMUs) for series compensated lines is proposed, which does not utilize the series device model or knowledge of the operation mode of the series devices to compute the voltage drop during the fault period.
Abstract: This work presents a new fault location algorithm based on phasor measurement units (PMUs) for series compensated lines. Traditionally, the voltage drop of a series device is computed by the device model in the fault locator of series compensated lines, but with this approach errors are induced by the inaccuracy of the series device model or by uncertainty about the operation mode of the series device. The proposed algorithm does not utilize the series device model or knowledge of the operation mode of the series device to compute the voltage drop during the fault period. Instead, the proposed algorithm uses a two-step procedure, a prelocation step followed by a correction step, to calculate the voltage drop and fault location. The proposed technique can be easily applied to any series FACTS compensated line. EMTP-generated data for a 300 km, 345 kV transmission line have been used to test the accuracy of the proposed algorithm. The tested cases include various fault types, fault locations, fault resistances, fault inception angles, etc. The study also considers the effect of various operation modes of the compensated device during the fault period. Simulation results indicate that the proposed algorithm can achieve up to 99.95% accuracy for most tested cases.

161 citations


Proceedings ArticleDOI
30 Oct 2001
TL;DR: The proposed technique handles both stuck-at and timing failures (transition faults and hold time faults) and improves the diagnostic resolution by ranking the suspect scan cells inside a range of scan cells.
Abstract: In this paper, we present a scan chain fault diagnosis procedure. The diagnosis for a single scan chain fault is performed in three steps. The first step uses special chain test patterns to determine both the faulty chain and the fault type in the faulty chain. The second step uses a novel procedure to generate special test patterns to identify the suspect scan cell within a range of scan cells. Unlike previously proposed methods that restrict the location of the faulty scan cell only from the scan chain output side, our method restricts the location of the faulty scan cell from both the scan chain output side and the scan chain input side. Hence the number of suspect scan cells is reduced significantly in this step. The final step further improves the diagnostic resolution by ranking the suspect scan cells inside this range. The proposed technique handles both stuck-at and timing failures (transition faults and hold time faults). The extension of the procedure to diagnose multiple faults is discussed. The experimental results show the effectiveness of the proposed method.
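
A toy illustration of why bounding the faulty cell from both sides helps: the chain length and bounds below are hypothetical, and in the real procedure they are derived from responses to the specially generated test patterns.

chain_length = 100

# Bound obtained from patterns observed through the scan chain output:
# the fault cannot be closer to scan-out than cell 62 (hypothetical value).
upper_bound_from_output_side = 62

# Bound obtained from patterns loaded through the scan chain input:
# the fault cannot be closer to scan-in than cell 55 (hypothetical value).
lower_bound_from_input_side = 55

suspects = list(range(lower_bound_from_input_side,
                      upper_bound_from_output_side + 1))
print(len(suspects), suspects)   # 8 suspect cells instead of 100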

147 citations


Proceedings ArticleDOI
30 Oct 2001
TL;DR: The first reported case study of the effectiveness of the Illinois Scan Architecture, a low-cost alternative to conventional scan, on an industrial circuit is presented.

Abstract: Scan-based test techniques offer a very efficient alternative to functional pattern testing for achieving high fault coverage. As circuit sizes grow ever larger, test data volume and test application time become unwieldy even in very efficient scan-based designs. The Illinois Scan Architecture is a low-cost alternative to conventional scan. In this paper, we present the first reported case study of the effectiveness of the Illinois Scan Architecture on an industrial circuit.

144 citations


Journal ArticleDOI
TL;DR: The Poirot tool isolates and diagnoses defects through fault modeling and simulation, and functional and sequential test pattern applications show success with circuits having a high degree of observability.
Abstract: The Poirot tool isolates and diagnoses defects through fault modeling and simulation. Along with a carefully selected partitioning strategy, functional and sequential test pattern applications show success with circuits having a high degree of observability.

Proceedings ArticleDOI
10 Nov 2001
TL;DR: A source-to-source compiler supporting a software-implemented hardware fault tolerance approach is proposed, based on a set of source code transformation rules, which hardens a program against transient memory errors by introducing software redundancy.
Abstract: Over the last years, an increasing number of safety-critical tasks have been demanded of computer systems. In particular, safety-critical computer-based applications are reaching market areas where cost is a major issue, and thus solutions are required that combine fault tolerance with low cost. A source-to-source compiler supporting a software-implemented hardware fault tolerance approach is proposed, based on a set of source code transformation rules. The proposed approach hardens a program against transient memory errors by introducing software redundancy: every computation is performed twice and the results are compared, and control flow invariants are checked explicitly. By exploiting the tool's capabilities, several benchmark applications have been hardened against transient errors. Fault injection campaigns have been performed to evaluate the fault detection capability of the hardened applications. In addition, we analyzed the proposed approach in terms of space and time overheads.
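
The transformation style described above can be sketched as follows; the hardened function, variable names, and error handler are illustrative inventions (the actual tool rewrites C source automatically), but the pattern of duplicated computation, comparison, and an explicit control flow check is the one the abstract describes.

# Sketch: every computation is performed on two copies of the data, results
# are compared, and a control flow invariant is checked explicitly.
class TransientErrorDetected(Exception):
    pass

def hardened_sum(values):
    acc0, acc1 = 0, 0                 # duplicated data
    block_signature = 0
    for v in values:
        v0, v1 = v, v                 # duplicated operands
        acc0 += v0
        acc1 += v1
        if acc0 != acc1:              # consistency check after each computation
            raise TransientErrorDetected("data mismatch")
        block_signature = 1           # set inside the loop body
    # Control flow check: the loop body must have executed iff values is non-empty.
    if bool(values) != bool(block_signature):
        raise TransientErrorDetected("control flow error")
    return acc0

print(hardened_sum([1, 2, 3]))        # 6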

Journal ArticleDOI
TL;DR: A low-overhead scheme for achieving complete (100%) fault coverage during built-in self test of circuits with scan is presented and experimental results indicate that complete fault coverage can be obtained with low hardware overhead.
Abstract: A low-overhead scheme for achieving complete (100%) fault coverage during built-in self test of circuits with scan is presented. It does not require modifying the function logic and does not degrade system performance (beyond using scan). Deterministic test cubes that detect the random-pattern-resistant (r.p.r.) faults are embedded in a pseudorandom sequence of bits generated by a linear feedback shift register (LFSR). This is accomplished by altering the pseudorandom sequence by adding logic at the LFSR's serial output to "fix" certain bits. A procedure for synthesizing the bit-fixing logic for embedding the test cubes is described. Experimental results indicate that complete fault coverage can be obtained with low hardware overhead. Further reduction in overhead is possible by using a special correlating automatic test pattern generation procedure that is described for finding test cubes for the r.p.r. faults in a way that maximizes bitwise correlation.
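
A small sketch of the bit-fixing idea: a pseudorandom bit stream from an LFSR is altered at selected positions so that a deterministic test cube appears in the scan sequence. The LFSR parameters, the cube, and the fixed positions are hypothetical; synthesizing actual bit-fixing logic follows the procedure in the paper.

def lfsr_bits(seed, taps, width, n):
    # Serial output of a simple LFSR (parameters are illustrative only).
    state = seed
    for _ in range(n):
        yield state & 1
        fb = bin(state & taps).count("1") & 1
        state = (state >> 1) | (fb << (width - 1))

# Deterministic test cube for a random-pattern-resistant fault:
# position -> required value; unspecified positions stay pseudorandom.
cube = {3: 1, 7: 0, 12: 1}

stream = []
for i, b in enumerate(lfsr_bits(seed=0x1F, taps=0b10100, width=5, n=16)):
    stream.append(cube.get(i, b))     # bit-fixing: override only where the cube cares
print(stream)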

Journal ArticleDOI
TL;DR: An algorithm for generating test patterns automatically from functional register-transfer level (RTL) circuits that target detection of stuck-at faults in the circuit at the logic level, using a data structure named assignment decision diagram that has been proposed previously in the field of high-level synthesis.
Abstract: In this paper, we present an algorithm for generating test patterns automatically from functional register-transfer level (RTL) circuits that targets detection of stuck-at faults in the circuit at the logic level. In order to do this, we utilize a data structure named the assignment decision diagram, which has been proposed previously in the field of high-level synthesis. With the advent of RTL synthesis tools, functional RTL designs are now widely used in industry to cut design turnaround time. This paper addresses the problem of test pattern generation directly at this level because of a number of advantages inherent at the RTL. Since the number of primitive elements at the RTL is usually smaller than at the logic level, the problem size is reduced, leading to a reduction in the test-generation time over logic-level automatic test pattern generation (ATPG). Also, a reduction in the number of backtracks can lead to improved fault coverage and reduced test application time over logic-level techniques. The test patterns thus generated can also be used to perform RTL-RTL and RTL-logic validation. The algorithm is very versatile and can tackle almost any type of single-clock design, although performance varies according to the design style. It gracefully degrades to an inefficient logic-level ATPG algorithm if it is applied to a logic-level circuit. Experimental results demonstrate that a more than 1000-fold reduction in test-generation time can be achieved by this algorithm on certain types of RTL circuits without any compromise in fault coverage.

Proceedings ArticleDOI
20 May 2001
TL;DR: This work introduces the concept of fail-stutter fault tolerance, a realistic and yet tractable fault model that accounts for both absolute failure and a new range of performance failures common in modern components.
Abstract: Traditional fault models present system designers with two extremes: the Byzantine fault model, which is general and therefore difficult to apply, and the fail-stop fault model, which is easier to employ but does not accurately capture modern device behavior. To address this gap, we introduce the concept of fail-stutter fault tolerance, a realistic and yet tractable fault model that accounts for both absolute failure and a new range of performance failures common in modern components. Systems built under the fail-stutter model will likely perform well, be highly reliable and available, and be easier to manage when deployed.

Proceedings ArticleDOI
30 Oct 2001
TL;DR: Experimental results show that the proposed BIST schemes can attain 100% fault coverage for all benchmark circuits with drastically reduced test sequence lengths, achieved at low hardware cost even for benchmark circuits that have a large number of scan inputs.

Abstract: Two novel scan-based BIST architectures, namely parallel fixing and serial fixing BIST, which can be implemented at very low hardware cost even for random-pattern-resistant circuits that have a large number of scan elements, are proposed. Both of the proposed BIST schemes use 3-weight weighted random BIST techniques to reduce test sequence lengths by improving the detection probabilities of random-pattern-resistant faults. A special ATPG is used to generate suitable test cube sets that lead to BIST circuits requiring minimum hardware overhead. Experimental results show that the proposed BIST schemes can attain 100% fault coverage for all benchmark circuits with drastically reduced test sequence lengths. This reduction in test sequence length is achieved at low hardware cost even for benchmark circuits that have a large number of scan inputs.
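
The 3-weight idea can be sketched as follows: each scan input is assigned a weight of 0, 1, or 0.5, where 0 and 1 fix the bit and 0.5 leaves it pseudorandom. The weight assignment below is a made-up example rather than one produced by the special ATPG mentioned above.

import random

weights = [0.0, 1.0, 0.5, 0.5, 1.0, 0.0, 0.5, 0.5]   # one weight per scan input

def weighted_pattern(weights, rng):
    bits = []
    for w in weights:
        if w == 0.0:
            bits.append(0)                  # fixed to 0
        elif w == 1.0:
            bits.append(1)                  # fixed to 1
        else:
            bits.append(rng.randint(0, 1))  # unbiased pseudorandom bit
    return bits

rng = random.Random(1)
for _ in range(4):
    print(weighted_pattern(weights, rng))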

Proceedings ArticleDOI
04 Nov 2001
TL;DR: A method for identifying X inputs of test vectors in a given test set by using fault simulation and procedures similar to implication and justification of ATPG algorithms is proposed.
Abstract: Given a test set for stuck-at faults, some of the primary input values may be changed to the opposite logic values without losing fault coverage. We can regard such input values as don't cares (X). In this paper, we propose a method for identifying X inputs of test vectors in a given test set. While there are generally many combinations of X inputs in a test set, the proposed method finds one that includes as many X inputs as possible by using fault simulation and procedures similar to the implication and justification of ATPG algorithms. Experimental results for ISCAS benchmark circuits show that approximately 66% of the inputs of un-compacted test sets could be X on average. Even for compacted test sets, the method found that approximately 47% of the inputs are X. Finally, we discuss how logic values can be reassigned to the identified X inputs, for which several applications exist, to make test vectors more desirable.
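
A brute-force approximation of the idea (not the paper's implication- and justification-based method): a bit is treated as X if flipping it leaves the detected fault set of the whole test set unchanged. The fault simulator below is a toy stub, and bits are tested one at a time, so simultaneous combinations of Xs are not validated.

def detected_faults(test_set, fault_simulate):
    return set().union(*(fault_simulate(t) for t in test_set))

def find_x_inputs(test_set, fault_simulate):
    baseline = detected_faults(test_set, fault_simulate)
    x_positions = []
    for ti, vector in enumerate(test_set):
        for bi in range(len(vector)):
            trial = list(test_set)
            flipped = vector.copy()
            flipped[bi] ^= 1                 # try the opposite logic value
            trial[ti] = flipped
            if detected_faults(trial, fault_simulate) >= baseline:
                x_positions.append((ti, bi))  # coverage preserved: candidate X
    return x_positions

# Toy "fault simulator": a vector detects fault i if bit i is 1.
toy_sim = lambda v: {i for i, b in enumerate(v) if b}
tests = [[1, 0, 1], [0, 1, 1]]
print(find_x_inputs(tests, toy_sim))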

Proceedings ArticleDOI
30 Oct 2001
TL;DR: An automatic test pattern generation (ATPG) method is presented for a scan-based test architecture which minimizes ATE storage requirements and reduces the bandwidth between the automatic test equipment and the chip under test.
Abstract: An automatic test pattern generation (ATPG) method is presented for a scan-based test architecture which minimizes ATE storage requirements and reduces the bandwidth between the automatic test equipment (ATE) and the chip under test. To generate tailored deterministic test patterns, a standard ATPG tool performing dynamic compaction and allowing constraints on circuit inputs is used. The combination of an appropriate test architecture and the tailored test patterns reduces the test data volume up to two orders of magnitude compared with standard compacted test sets.

Proceedings ArticleDOI
24 Oct 2001
TL;DR: Compares different VHDL-based fault injection techniques (simulator commands, saboteurs and mutants) for the validation of fault-tolerant systems; preliminary results show that coverages for transient faults can be obtained quite accurately with any of the three techniques.

Abstract: Compares different VHDL-based fault injection techniques, namely simulator commands, saboteurs and mutants, for the validation of fault-tolerant systems. Some extensions and implementation designs of these techniques have been introduced. Also, a wide set of unusual fault models has been implemented. As an application, a fault-tolerant microcomputer system has been validated. Faults have been injected using an injection tool developed by the GSTF. We have injected both transient and permanent faults into the system model, using two different workloads. We have studied the pathology of the propagated errors, measured their latencies, and calculated both detection and recovery coverages. Preliminary results show that coverages for transient faults can be obtained quite accurately with any of the three techniques. This enables the use of different abstraction level models for the same system. We have also verified significant differences in implementation and simulation cost between the studied injection techniques.

Journal ArticleDOI
TL;DR: The proposed improvement allows us to drop tests without simulating them based on the fact that the faults they detect will be detected by tests that will be simulated later, hence the name of the improved procedure: forward-looking fault simulation.
Abstract: Fault simulation of a test set in an order different from the order of generation (e.g., reverse- or random-order fault simulation) is used as a fast and effective method to drop unnecessary tests from a test set in order to reduce its size. We propose an improvement to this type of fault simulation process that makes it even more effective in reducing the test-set size. The proposed improvement allows us to drop tests without simulating them based on the fact that the faults they detect will be detected by tests that will be simulated later, hence the name of the improved procedure: forward-looking fault simulation. We present experimental results to demonstrate the effectiveness of the proposed improvement.
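
A simplified sketch of the dropping rule, assuming per-test fault detection information is available up front (in the real procedure it comes from the fault simulator and generation-time records): under reverse-order processing, a test is dropped when every fault recorded for it is either already covered by kept tests or recorded for a test that will be processed later.

def forward_looking_compaction(detects):
    # detects[i]: set of faults recorded for test i (stand-in for simulator data).
    kept, covered = [], set()
    order = list(reversed(range(len(detects))))          # reverse order of generation
    for pos, i in enumerate(order):
        later = set().union(set(), *(detects[j] for j in order[pos + 1:]))
        if detects[i] <= covered | later:                # drop, possibly without simulating
            continue
        kept.append(i)
        covered |= detects[i]
    return sorted(kept)

detects = [{"f1", "f2"}, {"f2"}, {"f1", "f3"}, {"f3"}]   # hypothetical per-test data
print(forward_looking_compaction(detects))               # [0, 2]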

Journal Article
TL;DR: The annotated bibliography highlights work in the area of algorithmic test generation from formal specifications with guaranteed fault coverage, i.e., fault model-driven test derivation as a triple, comprising a finite state specification, conformance relation and fault domain that is the set of possible implementations.
Abstract: The annotated bibliography highlights work in the area of algorithmic test generation from formal specifications with guaranteed fault coverage, i.e., fault model-driven test derivation. A fault model is understood as a triple, comprising a finite state specification, conformance relation and fault domain that is the set of possible implementations. The fault model can be specialized to Input/Output FSM, Labeled Transition System, or Input/Output Automaton and to a number of conformance relations such as FSM equivalence, reduction or quasi-equivalence, trace inclusion or trace equivalence and others. The fault domain usually reflects test assumptions, as an example, it can be the universe of all possible I/O FSMs with a given number of states, a classical fault domain in FSM-based testing. A test suite is complete with respect to a given fault model when each implementation from the fault domain passes it if and only if the postulated conformance relation holds between the implementation and its specification. A complete test suite is said to provide fault coverage guarantee for a given fault model.

Proceedings ArticleDOI
29 Mar 2001
TL;DR: This paper presents a new test resource partitioning scheme that is a hybrid approach between external testing and BIST, based on weighted pseudo-random testing and uses a novel approach for compressing and storing the weight sets.
Abstract: This paper presents a new test resource partitioning scheme that is a hybrid approach between external testing and BIST. It reduces tester storage requirements and tester bandwidth requirements by orders of magnitude compared to conventional external testing, but requires much less area overhead than a full BIST implementation providing the same fault coverage. The proposed approach is based on weighted pseudo-random testing and uses a novel approach for compressing and storing the weight sets. Three levels of compression are used to greatly reduce test costs. No test points or any modifications are made to the function logic. The proposed scheme requires adding only a small amount of additional hardware to the STUMPS architecture. Experimental results comparing the proposed approach with other approaches are presented.

Proceedings ArticleDOI
30 Oct 2001
TL;DR: The study focuses on the location and distribution of probable bridging defects and attempts to explain the findings in the context of the characteristics of the design and its implementation.
Abstract: Presents an experimental study of bridging fault locations on the Intel Pentium (TM) 4 CPU as determined by an inductive fault analysis tool. The study focuses on the location and distribution of probable bridging defects and attempts to explain the findings in the context of the characteristics of the design and its implementation. The coverage obtained against these faults by manually generated functional patterns is compared against that achieved by ATPG vectors.

Proceedings ArticleDOI
30 Oct 2001
TL;DR: A static compaction procedure to reduce test set size for scan designs and a procedure to order test patterns in order to steepen the fault coverage curve are presented.
Abstract: A static compaction procedure to reduce test set size for scan designs and a procedure to order test patterns in order to steepen the fault coverage curve are presented. The computational effort for both procedures is linearly proportional to the computational effort required for standard fault simulation with fault dropping. Experimental results on large industrial circuits demonstrate both the efficiency and effectiveness of the proposed procedures.

Patent
05 Sep 2001
TL;DR: In this article, a test coverage tool provides output that identifies differences between the actual coverage provided by a test suite run on a program under test and the coverage criteria required by the test/development team management.
Abstract: A test coverage tool provides output that identifies differences between the actual coverage provided by a test suite run on a program under test and the coverage criteria (e.g., the coverage criteria required by the test/development team management). The output from the test coverage tool is generated in the same language that was used to write the coverage criteria that are input to an automated test generator to create the test cases which form the test suite. As a result, the output from the coverage tool can be input back into the automated test generator to cause the generator to revise the test cases to correct the inadequacies. This allows iterative refinement of the test suite automatically, enabling automated test generation to be more effectively and efficiently used with more complex software and more complex test generation inputs. In preferred embodiments, test coverage analysis results of several different test suites, some manually generated and others automatically generated, are used to produce a streamlined automatically-generated test suite and/or to add missing elements to an automatically generated test-suite.

Journal ArticleDOI
TL;DR: A defective-part-level model combined with a method for choosing test patterns that use site observation can predict defect levels in submicron ICs more accurately than simple stuck-at fault analysis.
Abstract: After an integrated circuit (IC) design is complete, but before first silicon arrives from the manufacturing facility, the design team prepares a set of test patterns to isolate defective parts. Applying this test pattern set to every manufactured part reduces the fraction of defective parts erroneously sold to customers as defect-free parts. This fraction is referred to as the defect level (DL). However, many IC manufacturers quote defective part level, which is obtained by multiplying the defect level by one million to give the number of defective parts per million. Ideally, we could accurately estimate the defective part level by analyzing the circuit structure, the applied test-pattern set, and the manufacturing yield. If the expected defective part level exceeded some specified value, then either the test pattern set or (in extreme cases) the design could be modified to achieve adequate quality. Although the IC industry widely accepts stuck-at fault detection as a key test-quality figure of merit, it is nevertheless necessary to detect other defect types seen in real manufacturing environments. A defective-part-level model combined with a method for choosing test patterns that use site observation can predict defect levels in submicron ICs more accurately than simple stuck-at fault analysis.
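
For orientation, the classical Williams-Brown relation is the usual starting point that the article's site-observation model refines; it is shown here only as background, not as the model proposed in the paper.

# Williams-Brown relation: DL = 1 - Y ** (1 - T),
# with Y = manufacturing yield and T = fault coverage.
def defective_parts_per_million(yield_, coverage):
    dl = 1.0 - yield_ ** (1.0 - coverage)
    return dl * 1_000_000

print(defective_parts_per_million(0.9, 0.99))   # roughly 1053 DPM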

Journal ArticleDOI
TL;DR: New space compression techniques which facilitate designing VLSI circuits using compact test sets, with the primary objective of minimizing the storage requirements for the circuit under test (CUT) while maintaining the fault coverage information.
Abstract: The design of space-efficient support hardware for built-in self-testing (BIST) is of critical importance in the design and manufacture of VLSI circuits. This paper reports new space compression techniques which facilitate designing such circuits using compact test sets, with the primary objective of minimizing the storage requirements for the circuit under test (CUT) while maintaining the fault coverage information. The compaction techniques utilize the concepts of Hamming distance, sequence weights, and derived sequences in conjunction with the probabilities of error occurrence in the selection of specific gates for merger of a pair of output bit streams from the CUT. The outputs of the space compactor may eventually be fed into a time compactor (viz. syndrome counter) to derive the CUT signatures. The proposed techniques guarantee simple design with a very high fault coverage for single stuck-line faults, with low CPU simulation time, and acceptable area overhead. Design algorithms are proposed in the paper, and the simplicity and ease of their implementations are demonstrated with numerous examples. Specifically, extensive simulation runs on ISCAS 85 combinational benchmark circuits with FSIM, ATALANTA, and COMPACTEST programs confirm the usefulness of the suggested approaches.
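
A minimal sketch of the space/time compaction pipeline: a pair of CUT output streams is merged through a two-input gate and the compacted stream drives a ones (syndrome) counter. XOR is used purely as an example; the paper's gate-selection rules based on Hamming distance, sequence weights, and error probabilities are not reproduced.

from operator import xor

def space_compact(stream_a, stream_b, gate=xor):
    # Merge two output bit streams through the chosen two-input gate.
    return [gate(a, b) for a, b in zip(stream_a, stream_b)]

def syndrome(stream):
    return sum(stream)            # time compaction: count of ones

out_a = [1, 0, 1, 1, 0, 1]        # hypothetical CUT output streams
out_b = [1, 1, 0, 1, 0, 0]
compacted = space_compact(out_a, out_b)
print(compacted, syndrome(compacted))   # [0, 1, 1, 0, 0, 1] 3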

Journal ArticleDOI
TL;DR: In this article, fault detection, isolation and fault-tolerant control are investigated for a spark ignition engine; the integrated design of control and diagnostics is achieved by combining the integral sliding mode control methodology and observers with hypothesis testing.

Abstract: Fault detection, isolation and fault-tolerant control are investigated for a spark ignition engine. Fault-tolerant control refers to a strategy in which the desired stability and robustness of the control system are guaranteed in the presence of faults. In an attempt to realize fault-tolerant control, a methodology for the integrated design of control and fault diagnostics is proposed. Specifically, the integrated design of control and diagnostics is achieved by combining the integral sliding mode control methodology and observers with hypothesis testing. Information obtained from integral sliding mode control and from observers with hypothesis testing is utilized so that a fault can be detected, isolated and compensated. As an application example, the air and fuel dynamics of an IC engine are considered. A mean value engine model is developed and implemented in Simulink®. The air and fuel dynamics of the engine are identified using experimental data. The proposed algorithm for the integration of control and diagnostics is then validated using the identified engine model. Copyright © 2001 John Wiley & Sons, Ltd.

Proceedings ArticleDOI
13 Mar 2001
TL;DR: A deterministic software-based self-testing methodology for processor cores is introduced that efficiently tests the processor datapath modules without any modification of the processor structure to provide high fault coverage without repetitive fault simulation experiments.
Abstract: A deterministic software-based self-testing methodology for processor cores is introduced that efficiently tests the processor datapath modules without any modification of the processor structure. It provides guaranteed high fault coverage without the repetitive fault simulation experiments that are necessary in pseudorandom software-based processor self-testing approaches. Test generation and output analysis are performed by utilizing processor functional modules such as accumulators (the arithmetic part of the ALU) and shifters (if they exist) through processor instructions. No extra hardware is required and there is no performance degradation.

Journal ArticleDOI
TL;DR: A new structural testing approach for phase-locked loops (PLLs) using a charge-based frequency measurement BIST (CF-BIST) technique, which performs simple dc-like charge injection tests and is therefore suitable for high-speed PLL applications.

Abstract: We propose a new structural testing approach for phase-locked loops (PLLs) using a charge-based frequency measurement BIST (CF-BIST) technique. The technique uses the existing charge pump as the stimulus generator and the VCO/divide-by-N as the measuring device to reduce the area overhead. This approach performs simple dc-like charge injection tests; thus, it is suitable for high-speed PLL applications. Fault simulation results show higher fault coverage than a previous test method, with less die area. Since no test stimulus is required and the test output is purely digital, a low-cost and practical implementation of an on-chip BIST structure for a PLL is possible.