
Showing papers on "Fault model published in 1990"


Journal ArticleDOI
R. Dekker, F. Beenker, L. Thijssen
TL;DR: A fault model for SRAMs based on physical spot defects, which are modeled as local disturbances in the layout of the SRAM, is presented and two linear test algorithms that cover 100% of the faults under the fault model are proposed.
Abstract: Testing static random access memories (SRAMs) for all possible failures is not feasible, so one must restrict the class of faults to be considered. This restricted class is called a fault model. A fault model for SRAMs based on physical spot defects, which are modeled as local disturbances in the layout of the SRAM, is presented. Two linear test algorithms that cover 100% of the faults under the fault model are proposed. A general solution is given for testing word-oriented SRAMs. The practical validity of the fault model and the two test algorithms is verified by a large number of actual wafer tests and device failure analyses.
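The paper's own linear algorithms are not reproduced in the abstract. As a rough illustration of what a linear, march-style SRAM test looks like, here is a minimal sketch in Python; it is not Dekker et al.'s specific algorithm, and `march_test` plus the list-backed memory stand in for a real device interface.

```python
# Minimal sketch of a linear, march-style SRAM test (illustrative only, not
# the paper's specific algorithms). `sram` is a hypothetical list-backed
# memory model standing in for the device under test.

def march_test(sram):
    n = len(sram)
    for i in range(n):                 # ascending: write 0
        sram[i] = 0
    for i in range(n):                 # ascending: read 0, write 1
        assert sram[i] == 0, f"fault at cell {i}"
        sram[i] = 1
    for i in reversed(range(n)):       # descending: read 1, write 0
        assert sram[i] == 1, f"fault at cell {i}"
        sram[i] = 0
    for i in range(n):                 # ascending: read 0
        assert sram[i] == 0, f"fault at cell {i}"

march_test([0] * 1024)                 # a fault-free memory passes silently
```

Each cell is touched a constant number of times, so the test length is linear in the memory size, which is the property the abstract emphasizes.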

242 citations


Journal ArticleDOI
TL;DR: A model for delay faults that answers questions about the sizes of detected delay faults correctly, yet with calculations simple enough to be done for large circuits, is presented.
Abstract: Defects in integrated circuits can cause delay faults of various sizes. Testing for delay faults has the goal of detecting a large fraction of these faults for a wide range of fault sizes. Hence, an evaluation scheme for a delay fault test must not only compute whether or not a delay fault was detected, but also calculate the sizes of detected delay faults. Delay faults have the counterintuitive property that a test for a fault of one size need not be a test for a similar fault of a larger size. This makes it difficult to answer questions about the sizes of delay faults detected by a set of tests. A model for delay faults that answers such questions correctly, but with calculations simple enough to be done for large circuits, is presented.

120 citations


Journal ArticleDOI
TL;DR: The current state of the art of system reliability, safety, and fault tolerance is reviewed, and an approach to designing resourceful systems based upon a functionally rich architecture and an explicit goal orientation is developed.
Abstract: Above all, it is vital to recognize that completely guaranteed behavior is impossible and that there are inherent risks in relying on computer systems in critical environments. The unforeseen consequences are often the most disastrous [Neumann 1986].Section 1 of this survey reviews the current state of the art of system reliability, safety, and fault tolerance. The emphasis is on the contribution of software to these areas. Section 2 reviews current approaches to software fault tolerance. It discusses why some of the assumptions underlying hardware fault tolerance do not hold for software. It argues that the current software fault tolerance techniques are more accurately thought of as delayed debugging than as fault tolerance. It goes on to show that in providing both backtracking and executable specifications, logic programming offers most of the tools currently used in software fault tolerance. Section 3 presents a generalization of the recovery block approach to software fault tolerance, called resourceful systems. Systems are resourceful if they are able to determine whether they have achieved their goals or, if not, to develop and carry out alternate plans. Section 3 develops an approach to designing resourceful systems based upon a functionally rich architecture and an explicit goal orientation.
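Section 3's resourceful systems generalize the recovery block, in which a primary routine and its alternates run against an acceptance test after state is restored from a checkpoint. A minimal sketch of that baseline pattern, assuming deep-copy checkpointing and caller-supplied routines (all names here are illustrative, not the survey's implementation):

```python
# Minimal sketch of the classic recovery-block pattern that the survey's
# "resourceful systems" generalize. The alternates and the acceptance test
# are caller-supplied; checkpointing is modeled by deep-copying state.

import copy

def recovery_block(state, alternates, acceptance_test):
    checkpoint = copy.deepcopy(state)          # establish the recovery point
    for attempt in alternates:                 # primary first, then alternates
        try:
            result = attempt(copy.deepcopy(checkpoint))   # run on a fresh copy
            if acceptance_test(result):        # goal achieved?
                return result
        except Exception:
            pass                               # a crash counts as a failed attempt
    raise RuntimeError("all alternates failed the acceptance test")

def flaky_primary(xs):
    raise RuntimeError("injected software fault")

alternate = lambda xs: sorted(xs)
ok = lambda r: all(a <= b for a, b in zip(r, r[1:]))
print(recovery_block([3, 1, 2], [flaky_primary, alternate], ok))   # [1, 2, 3]
```

The acceptance test is exactly the "have we achieved our goals" check that resourceful systems extend into developing and carrying out alternate plans.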

93 citations


Proceedings ArticleDOI
10 Sep 1990
TL;DR: The authors address the problem of generating minimum test sets for diagnosing faults in wiring interconnects on printed circuit boards with a fault model that includes multiple stuck-at and short faults.
Abstract: The authors address the problem of generating minimum test sets for diagnosing faults in wiring interconnects on printed circuit boards. It is assumed that all the nets can be accessed in parallel or through a boundary-scan chain on the board. The fault model includes multiple stuck-at and short faults. Three methods for three different diagnosis mechanisms are presented. It is also pointed out that the self-diagnosis problem is the same as the concurrent error detection problem for asymmetric errors. Thus, the diagnostic methods considered are similar to the coding methods used in concurrent error detection. All the diagnostic methods can be extended to structural tests by taking advantage of the geometry of a circuit board to produce an efficient test.
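The abstract does not spell out its three methods; one classical point of reference for parallel interconnect testing is the counting sequence, which assigns each net a distinct codeword and applies the codeword bits as parallel vectors. A hedged sketch of that baseline (not necessarily one of the paper's methods):

```python
# Hedged sketch of the classical "counting sequence" for interconnect test:
# each net gets a distinct binary codeword, and vector t drives bit t of every
# code in parallel. Shorted nets receive the wired-OR/AND of their codes, and
# a stuck net reads all-0s or all-1s, so the all-0 and all-1 codes are skipped.

from math import ceil, log2

def counting_sequence(num_nets):
    width = ceil(log2(num_nets + 2))         # +2 to skip all-0 and all-1 codes
    codes = list(range(1, num_nets + 1))     # distinct, nonzero, non-all-ones
    return [[(c >> t) & 1 for c in codes] for t in range(width)]

for vec in counting_sequence(6):
    print(vec)   # one parallel test vector per line, one column per net
```

Only ceil(log2(N+2)) vectors are needed for N nets, which is why such schemes are attractive when all nets are driven in parallel through boundary scan.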

77 citations


Proceedings ArticleDOI
Kwang-Ting Cheng, J.-Y. Jou
10 Sep 1990
TL;DR: The authors developed an automatic test generation algorithm and built a test generation system using a single-transition fault model; the effectiveness of the method is shown by experimental results on a set of benchmark finite-state machines.
Abstract: A functional test generation method for finite-state machines is described. A functional fault model, called the single-transition fault model, on the state transition level is used. In this model, a fault causes a single transition to a wrong destination state. A fault-collapsing technique for this fault model is also described. For each state transition, a small subset of states is selected as the faulty destination states so that the number of modeled faults for test generation is minimized. On the basis of this fault model, the authors developed an automatic test generation algorithm and built a test generation system. The effectiveness of this method is shown by experimental results on a set of benchmark finite-state machines. 100% stuck-at fault coverage is achieved by the proposed method for several machines, and very high coverage (>97%) is also obtained for the other machines. In comparison with the gate-level test generator STG3, the test generation time is sped up by a factor of 100.
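As a concrete illustration of the fault model, the sketch below enumerates single-transition faults for a toy machine; the fault-collapsing step that selects a small subset of faulty destination states is deliberately omitted, so this generates the uncollapsed fault list.

```python
# Hedged sketch: enumerate single-transition faults of an FSM given as
# {(state, input): (next_state, output)}. Each fault redirects one transition
# to a wrong destination while keeping its input/output label, mirroring the
# fault model above; the paper's fault collapsing is omitted here.

def single_transition_faults(fsm, states):
    faults = []
    for (s, x), (next_state, out) in fsm.items():
        for wrong in states:
            if wrong != next_state:            # every wrong destination
                faults.append(((s, x), wrong))
    return faults

fsm = {("A", 0): ("B", 1), ("A", 1): ("A", 0),
       ("B", 0): ("A", 0), ("B", 1): ("B", 1)}
print(len(single_transition_faults(fsm, ["A", "B"])))   # 4 transitions x 1 = 4
```

For an N-state machine each transition has N-1 wrong destinations, which is exactly the fault population the paper's collapsing technique prunes.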

72 citations


Proceedings ArticleDOI
22 Oct 1990
TL;DR: The computational power of 2D and 3D processor arrays that contain a potentially large number of faults is analyzed in this article, and it is shown that low-dimensional arrays are surprisingly fault tolerant.
Abstract: The computational power of 2-D and 3-D processor arrays that contain a potentially large number of faults is analyzed. Both a random and a worst-case fault model are considered, and it is proved that in either scenario low-dimensional arrays are surprisingly fault tolerant. It is also shown how to route, sort, and perform systolic algorithms for problems such as matrix multiplication in optimal time on faulty arrays. In many cases, the running time is the same as if there were no faults in the array (up to constant factors). On the negative side, it is shown that any constant-congestion embedding of an n×n fault-free array on an n×n array with Θ(n^2) random faults (or Θ(log n) worst-case faults) requires dilation Θ(log n). For 3-D arrays, knot theory is used to prove that the required dilation is Ω(√(log n)).

64 citations


Patent
24 Oct 1990
TL;DR: In this article, an alarm sequence generator is used to test the correctness of a fault model and generate a user interface from which specific components can be selected for failure at specified times.
Abstract: In a real-time diagnostic system, an alarm sequence generator is used to test the correctness of a fault model. The fault model describes an industrial process being monitored. The alarm sequence generator reads the fault model and generates a user interface, from which specific components can be selected for failure at specified times. The alarm sequence generator assembles all alarms that are causally downstream from the selected set of faulty components and determines which alarms should be turned on based on probabilistic and temporal information in the fault model. The timed alarm sequence can be used by an expert to measure the correctness of a particular model, or can be used as input into a diagnostic system to measure the correctness of the diagnostic system.
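The alarm-assembly step can be pictured as reachability over a causal graph. A hedged sketch, reducing the patent's probabilistic and temporal filtering to a plain downstream walk (component and alarm names are invented for illustration):

```python
# Hedged sketch of the alarm-assembly step: given a causal fault model as a
# directed graph {component: [downstream components/alarms]}, collect every
# alarm causally downstream of the selected faulty components. The patent's
# probabilistic and temporal logic is reduced here to plain reachability.

def downstream_alarms(fault_model, failed_components):
    seen, stack = set(), list(failed_components)
    while stack:
        node = stack.pop()
        for succ in fault_model.get(node, []):
            if succ not in seen:
                seen.add(succ)
                stack.append(succ)
    return sorted(a for a in seen if a.startswith("ALARM"))

model = {"pump": ["ALARM_flow_low", "valve"],
         "valve": ["ALARM_pressure_high"]}
print(downstream_alarms(model, ["pump"]))
# ['ALARM_flow_low', 'ALARM_pressure_high']
```

In the patented system this candidate set is then filtered and ordered using the probabilistic and temporal information stored in the fault model.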

58 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigated the location of the San Andreas fault and found that a vertical fault below the surface trace fits the data much better than a dipping fault zone located south of the surface trace.
Abstract: In the region of the Los Padres-Tehachapi geodetic network, the San Andreas fault (SAF) changes its orientation by over 30° from N40°W, close to that predicted by plate motion for a transform boundary, to N73°W. The strain orientation near the SAF is consistent with right-lateral shear along the fault, with a maximum shear rate of 0.38 ± 0.01 μrad/yr at N63°W. In contrast, away from the SAF the strain orientations on both sides of the fault are consistent with the plate motion direction, with a maximum shear rate of 0.19 ± 0.01 μrad/yr at N44°W. The strain rate does not drop off rapidly away from the fault, and thus the area is fit by either a broad shear zone below the SAF or a single fault with a relatively deep locking depth. The fit to the line length data is poor for locking depths d of less than 25 km. For d of 25 km a buried slip rate of 30 ± 6 mm/yr is estimated. The authors also estimated buried slip for models that include the Garlock and Big Pine faults, in addition to the SAF. Slip rates on other faults are poorly constrained by the Los Padres-Tehachapi network. The best-fitting Garlock fault model had a computed left-lateral slip rate of 11 ± 2 mm/yr below 10 km. Buried left-lateral slip of 15 ± 6 mm/yr on the Big Pine fault, within the Western Transverse Ranges, provides a significant reduction in line length residuals; however, deformation there may be more complicated than a single vertical fault. A subhorizontal detachment on the southern side of the SAF cannot be well constrained by these data. The authors investigated the location of the SAF and found that a vertical fault below the surface trace fits the data much better than a dipping fault zone located south of the surface trace.
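The buried-slip and locking-depth estimates are of the kind produced by the standard elastic screw-dislocation model of a strike-slip fault, with surface velocity v(x) = (s/π)·arctan(x/d). Assuming that parameterization (an assumption; the authors' exact model is not given here), the quoted numbers are mutually consistent, as this sketch checks:

```python
# Hedged sketch: the standard screw-dislocation model of a strike-slip fault
# locked above depth d and slipping at rate s below it; surface velocity is
# v(x) = (s/pi) * atan(x/d), so the shear strain rate peaks at s/(pi*d) on
# the trace. Assuming this parameterization, the quoted estimates agree.

from math import atan, pi

def surface_velocity(x_km, slip_mm_yr, lock_depth_km):
    return (slip_mm_yr / pi) * atan(x_km / lock_depth_km)      # mm/yr

def peak_shear_rate_urad_yr(slip_mm_yr, lock_depth_km):
    return (slip_mm_yr * 1e-3) / (pi * lock_depth_km * 1e3) * 1e6

print(round(peak_shear_rate_urad_yr(30, 25), 2))   # 0.38, as quoted above
```

A slip rate of 30 mm/yr below a 25 km locking depth yields a peak shear rate of about 0.38 μrad/yr, matching the abstract's near-fault estimate.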

46 citations


Proceedings ArticleDOI
10 Sep 1990
TL;DR: A novel linear-time algorithm for identifying a large set of faults that are undetectable by a given test vector, intended as a simple, fast preprocessing step to be performed after a test vector has been generated, but before the (often lengthy) process of fault simulation begins.
Abstract: The authors propose a novel linear-time algorithm for identifying, in a large combinational circuit, a large set of faults that are undetectable by a given test vector. Although this so-called X-algorithm does not identify all the undetectable faults, empirical evidence is offered to show that the reduction in the number of remaining faults to be simulated is significant. The algorithm is intended as a simple, fast preprocessing step to be performed after a test vector has been generated, but before the (often lengthy) process of fault simulation begins. The empirical results indicate that the X-algorithm is both useful (indicated by the utility factor) and good (indicated by the effectiveness factor). It provides as much as a 50% reduction in the number of faults that need to be simulated. Moreover, the algorithm seems to identify a large fraction of the undetectable faults.
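The X-algorithm itself is not given in the abstract. A hedged sketch of one sound linear-time pruning rule in the same spirit: a stuck-at-v fault on a line whose fault-free value under the vector is already v is never excited, so it cannot be detected by that vector.

```python
# Hedged sketch of one linear-time pruning rule in the spirit of the paper's
# preprocessing step (not the X-algorithm itself): under a given test vector,
# a stuck-at-v fault on a line whose fault-free value is already v is never
# excited, so it need not be fault-simulated for that vector.

def prune_unexcited(fault_free_values, fault_list):
    """fault_free_values: {line: 0 or 1}; fault_list: [(line, stuck_value)]"""
    return [(line, v) for (line, v) in fault_list
            if fault_free_values[line] != v]      # keep only excitable faults

values = {"a": 1, "b": 0, "g1": 1}
faults = [("a", 0), ("a", 1), ("b", 1), ("g1", 1)]
print(prune_unexcited(values, faults))   # [('a', 0), ('b', 1)]
```

One fault-free simulation pass plus this filter runs in time linear in the circuit and fault-list size, which is the cost profile the paper targets.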

45 citations


Journal ArticleDOI
TL;DR: A system architecture called the recovery metaprogram (RMP) is proposed, which separates the application from the recovery software, giving programmers a single environment that lets them use the most appropriate fault-tolerance scheme.
Abstract: A system architecture called the recovery metaprogram (RMP) is proposed. It separates the application from the recovery software, giving programmers a single environment that lets them use the most appropriate fault-tolerance scheme. To simplify the presentation of the RMP approach, it is assumed that the fault model is limited to faults originating in the application software, and that the hardware and kernel layers can mask their own faults from the RMP. Also, relationships between backward and forward error recovery are not considered. Some RMP examples are given, and a particular RMP implementation is described.

43 citations


Proceedings ArticleDOI
Kwang-Ting Cheng, J.-Y. Jou
11 Nov 1990
TL;DR: Experimental results show that the test set generated for SST faults achieves not only a high single stuck-at fault coverage but also a high transistor fault coverage for a multilevel implementation of the machine.
Abstract: A fault model at the state transition level of finite state machines is studied. In this model, called the single-state-transition (SST) fault model, a fault causes a state transition to go to a wrong destination state while leaving its input/output label intact. An analysis is given to show that a test set that detects all SST faults will also detect most multiple-state-transition (MST) faults in practical finite state machines. It is shown that, for an N-state, M-transition machine, the length of the SST fault test set is upper-bounded by 2MN^2, while the length is exponential in N for a checking experiment. Experimental results show that the test set generated for SST faults achieves not only high single stuck-at fault coverage but also high transistor fault coverage for a multilevel implementation of the machine.

Proceedings ArticleDOI
12 Mar 1990
TL;DR: This paper describes PROOFS, a fast fault simulator for synchronous sequential logic circuits that achieves high performance by combining the advantages of differential fault simulation, single fault propagation, and parallel fault simulation to minimize memory requirements, reduce the events that need to be simulated, and simplify the software implementation.
Abstract: This paper describes PROOFS, a fast fault simulator for synchronous sequential logic circuits. PROOFS achieves high performance by combining the advantages of differential fault simulation, single fault propagation, and parallel fault simulation to minimize memory requirements, reduce the number of events that need to be simulated, and simplify the software implementation. Experimental results on 20 benchmark circuits show that PROOFS outperforms the other available fault simulators.
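The parallel component PROOFS builds on packs many faulty circuits into the bits of one machine word, so each gate is evaluated once for all of them. A minimal sketch of that word-parallel idea (illustrative only, not PROOFS's actual combination with differential simulation and single fault propagation):

```python
# Minimal sketch of word-parallel fault simulation (illustrative only, not
# PROOFS itself): bit 0 of every word carries the fault-free circuit, and
# each other bit position carries one faulty copy, so one gate evaluation
# covers 32 circuits at once.

WORD_MASK = 0xFFFFFFFF

def inject(word, bit, stuck):               # force one faulty copy's value
    return (word | (1 << bit)) if stuck else (word & ~(1 << bit))

a = 0x00000000                              # line a is 0 in every circuit
b = WORD_MASK                               # line b is 1 in every circuit
a = inject(a, 1, 1)                         # circuit 1: a stuck-at-1
out = a & b                                 # one AND evaluates all circuits

good = -(out & 1) & WORD_MASK               # replicate the fault-free bit
diff = out ^ good                           # circuits whose output differs
print([i for i in range(1, 32) if (diff >> i) & 1])   # -> [1]
```

A set bit in `diff` at an observable output marks a detected fault, which is how a single word operation screens 31 faults against one test vector.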

Proceedings ArticleDOI
17 Sep 1990
TL;DR: Through the use of transient analysis it is shown that the only way to ensure proper functioning of BiCMOS circuits is to test for delay faults; more importantly, tests for stuck-at faults will not detect realistic failures in BiCMOS technology.
Abstract: The adequacy of the stuck-at fault model for BiCMOS logic is investigated. Realistic failures in basic logic blocks are examined, and their coverage by the stuck-at model is explored. It is shown that the static stuck-at model cannot cover the complete range of possible failures and, more importantly, that tests for stuck-at faults will not detect realistic failures in BiCMOS technology. This is because most open faults manifest themselves as delay failures. Through the use of transient analysis it is shown that the only way to ensure proper functioning of BiCMOS circuits is to test for delay faults.

Proceedings ArticleDOI
24 Jun 1990
TL;DR: A dependency-directed backtracking method is implemented to speed up the test generation process for circuits with high-level primitives, and techniques for signal value justification and fault propagation are presented.
Abstract: A general methodology to speed up the test generation process for combinational circuits with high-level primitives is proposed. The technique is able to handle circuits in a hierarchical fashion, treats signals at the bit-vector level rather than the bit level, and takes advantage of the complex operations that are available in the computer system. The technique has been implemented, and results are presented for five circuits. It is shown that using high-level primitives yields a significant speed-up and a significant reduction in storage requirements. More importantly, the reduction in storage size permits test generation for very large circuits. It is clear that the use of high-level primitives is more efficient than the use of low-level primitives in test generation. A dependency-directed backtracking mechanism is also presented, which reduces the number of backtracks. The technique is complete, permits test vector generation for a broad class of large circuits with complex primitives, and accommodates a very general fault model.

Journal ArticleDOI
TL;DR: In this article, a nested fault model with different geometric scales is proposed to explain the hierarchical occurrence of the fault systems, where the secondary fault system is contained within the primary fault system as a mirror image of it.

Journal ArticleDOI
TL;DR: This method can be used to generate tests for general circuits in a hierarchical fashion, with both high- and low-level fault types, yielding 100 percent SSL fault coverage with significantly fewer test patterns and less test generation effort than conventional one-level approaches.
Abstract: A new hierarchical modeling and test generation technique for digital circuits is presented. First, a high-level circuit model and a bus fault model are introduced—these generalize the classical gate-level circuit model and the single-stuck-line (SSL) fault model. Faults are represented by vectors allowing many faults to be implicitly tested in parallel. This is illustrated in detail for the special case of array circuits using a new high-level representation, called the modified pseudo-sequential model, which allows simultaneous test generation for faults on individual lines of a multiline bus. A test generation algorithm called VPODEM is then developed to generate tests for bus faults in high-level models of arbitrary combinational circuits. VPODEM reduces to standard PODEM if gate-level circuit and fault models are used. This method can be used to generate tests for general circuits in a hierarchical fashion, with both high- and low-level fault types, yielding 100 percent SSL fault coverage with significantly fewer test patterns and less test generation effort than conventional one-level approaches. Experimental results are presented for representative circuits to compare VPODEM to standard PODEM and to random test generation techniques, demonstrating the advantages of the proposed hierarchical approach.
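The bus fault model lends itself to a compact word-level encoding. As a hedged illustration consistent with the abstract (not VPODEM's actual data structures), a multiple stuck fault on a w-bit bus can be carried as two masks and applied in one word operation, so faults on many lines are represented in parallel:

```python
# Hedged sketch of a "bus fault" as a vector generalization of single-stuck-
# line faults: a (multiple) stuck fault on a w-bit bus is described by two
# masks and applied to any bus value in one word operation. Representation
# only; VPODEM's search procedure is not reproduced here.

def apply_bus_fault(value, stuck0_mask, stuck1_mask):
    return (value & ~stuck0_mask) | stuck1_mask

# 8-bit bus: line 0 stuck-at-0 and line 5 stuck-at-1
faulty = apply_bus_fault(0b10101011, stuck0_mask=0b00000001,
                         stuck1_mask=0b00100000)
print(f"{faulty:08b}")   # 10101010: bit 0 forced low, bit 5 already high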

Journal ArticleDOI
TL;DR: In this article, a fault model that includes row and column sensitive faults is formally defined, an algorithm to detect faults on the basis of this model is presented, and two different implementations of the algorithm for a VLSI built-in self-test (BIST) environment are described.
Abstract: Row and column sensitive faults in RAMs are a class of faults in which the contents of a cell become sensitive to the contents of the row and column containing the cell in the presence of a fault. A fault model that includes such faults is formally defined, and an algorithm to detect faults on the basis of this model is presented. Two different implementations of the algorithm for a VLSI built-in self-test (BIST) environment are presented: a random-logic-based design and a microcode-based design. Additional properties of the algorithm, such as its capability to detect stuck-at faults, coupling faults, and conventional pattern sensitive faults, are identified.

Journal ArticleDOI
TL;DR: It is shown that domino-CMOS is much more suitable for the implementation of self-checking circuits than static CMOS, and the concept of strongly self-checking (SSC) circuits, which is a generalization of TSC circuits, is introduced.
Abstract: The totally self-checking (TSC) concept is well established for applications in the area of online error indication. TSC circuits can detect both transient and permanent faults. They consist of a functional circuit with encoded inputs and outputs and a checker which monitors these outputs. The TSC concept can be generalized for functional circuits using the strongly fault-secure (SFS) concept. Here, the concept of strongly self-checking (SSC) circuits, which is a generalization of TSC circuits, is introduced. Most of the TSC circuits presented in the literature are designed at the logic gate level using the stuck-at fault model. However, this fault model is inadequate for MOS technologies. Here, it is shown that a TSC gate-level functional circuit can be implemented in the domino-CMOS technology as an SFS circuit, while a TSC gate-level checker can be implemented as an SSC checker. For the domino-CMOS implementation the fault model is enlarged to stuck-at, stuck-open, and stuck-on faults. It is shown that domino-CMOS is much more suitable for the implementation of self-checking circuits than static CMOS.

Journal ArticleDOI
TL;DR: An outline is presented of a synthesis procedure that, beginning from a state transition graph (STG) description of a sequential machine, produces an optimized easily testable programmable logic array (PLA) based logic implementation.
Abstract: An outline is presented of a synthesis procedure that, beginning from a state transition graph (STG) description of a sequential machine, produces an optimized, easily testable programmable logic array (PLA) based logic implementation. Previous approaches to synthesizing easily testable sequential machines have concentrated on the stuck-at fault model; for PLAs, an extended fault model called the crosspoint fault model has been used. The authors propose a procedure of constrained state assignment and logic optimization which guarantees testability for all combinationally irredundant crosspoint faults in a PLA-based finite-state machine. No direct access to the flip-flops is required. The test sequences to detect these faults can be obtained using combinational test generation techniques alone. This procedure thus represents an alternative to a scan design methodology. Results are presented which show that the area/performance penalties in return for easy testability are small.

Journal ArticleDOI
TL;DR: In this article, a new augmented stuck-at fault model is presented which provides better coverage of physical failures than the conventional stuck-at fault model for ECL OR/NOR gates.
Abstract: The logic behaviour of an ECL OR/NOR gate under different physical faults is examined. It is shown that conventional stuck-at fault modelling may be inadequate for obtaining a sufficiently high fault coverage. A new augmented stuck-at fault model is presented which provides better coverage of physical failures.

Journal ArticleDOI
TL;DR: Results fundamental to the problem of detecting coupling faults in random access memories (RAMs) are presented, and Abadir and Reghbati's improved version of the traditional test GALPAT, of length 4n^2 + 4n, is shown to detect general toggling but not general coupling.
Abstract: This article presents results fundamental to the problem of detecting coupling faults in random access memories (RAMs). Good and faulty memories are represented as Mealy automata using the formal framework for sequential machine testing developed by Brzozowski and Jurgensen. A precise description of the coupling fault is used to define two fault models: “general coupling,” which is the set of all possible multiple coupling faults, and “general toggling,” which is a subset of general coupling. A lower bound of 2n^2 + n is derived on the length of any test that detects general toggling in an n-cell memory; a test by Marinescu is thereby shown to be optimal for this fault model. A lower bound of 2n^2 + 3n is derived on the length of any test that detects general coupling, and a corresponding test of length 2n^2 + 4n is described. Abadir and Reghbati's improved version of the traditional test GALPAT, of length 4n^2 + 4n, is shown to detect general toggling but not general coupling.

Proceedings ArticleDOI
24 Jun 1990
TL;DR: A new method, difference propagation, is proposed to analyze fault models in combinational circuits; it propagates Boolean functional information represented by ordered binary decision diagrams and yields the first data of this type for bridging faults.
Abstract: A new method, difference propagation, is proposed to analyze fault models in combinational circuits. It propagates Boolean functional information represented by ordered binary decision diagrams. Results are presented concerning exact detectabilities and syndromes for a set of benchmark circuits. The data suggest answers to open questions in CAD and represent the first data of this type for bridging faults. The information is shown to affect testable design, as well as test generation.
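The core identity behind difference propagation is that the vectors detecting a fault are exactly those where the good and faulty output functions differ, i.e. the XOR of the two functions. The sketch below computes an exact detectability this way, with truth tables standing in for the paper's ordered BDDs (so it only scales to toy circuits):

```python
# Hedged sketch of the idea behind difference propagation: the set of vectors
# detecting a fault is the Boolean difference (XOR) of the good and faulty
# output functions. Truth tables stand in for the paper's ordered BDDs here,
# so this only scales to toy circuits.

from itertools import product

def truth_table(f, n):
    return [f(*bits) for bits in product((0, 1), repeat=n)]

good   = lambda a, b, c: (a and b) or c          # example circuit: ab + c
faulty = lambda a, b, c: b or c                  # line a stuck-at-1

diff = [g ^ f for g, f in zip(truth_table(good, 3), truth_table(faulty, 3))]
print(sum(diff), "of", len(diff), "vectors detect the fault")   # 1 of 8
```

Counting the minterms of the difference function gives the exact detectability; representing the functions as BDDs is what lets the paper do this on benchmark circuits rather than toy ones.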

Journal ArticleDOI
TL;DR: A characterization of the fault location and fault type of the one-response fault is given, and this characterization is used in proving that baseline interconnection networks with a fan-in/fan-out of 2 can be diagnosed with a constant number of tests, independent of the network size.
Abstract: A novel approach for the diagnosis of baseline interconnection networks with a fan-in/fan-out of 2 is presented. The totally exhaustive combinatorial fault model with a single-fault assumption is used in the analysis. Some new characteristics of baseline interconnection networks are proved. A characterization of the fault location and fault type of the one-response fault is given. This characterization is used in proving that baseline interconnection networks with a fan-in/fan-out of 2 can be diagnosed with a constant number of tests, independent of the network size. The maximum number of tests is 12.

Proceedings ArticleDOI
30 Sep 1990
TL;DR: A basic model for dynamic change management is proposed which permits changes to be specified declaratively at the configuration level in terms of structure only and provides a sound basis for the management of faults where failure of a component node is modeled as arbitrary deletion.
Abstract: A basic model for dynamic change management is proposed which permits changes to be specified declaratively at the configuration level in terms of structure only. Component nodes can be created and deleted, and interconnections made and broken. Rules are provided which permit configuration management to identify the affected part of the system and to generate the required operational changes. These operations change the state of the selected component nodes to one suitable for reconfiguration, perform the particular structural changes, and provide the sequencing of the actions. A node responds by performing node-level actions to maintain local and internode consistency. This model also provides a sound basis for the management of faults, where failure of a component node is modeled as arbitrary deletion. The first steps toward unifying the management of dynamic change and fault recovery are described.

Proceedings ArticleDOI
11 Nov 1990
TL;DR: The authors have developed the delay fault simulator DELFI-a program which is capable of simulating timing failures of combinational circuits using different delay fault models, which demonstrates that the transition fault test patterns detect gross delay faults even at nodes with redundant stuck-at faults.
Abstract: A study is presented concerning the efficiency of test pattern sets generated with the transition fault model when applied to fine-grained delay fault models. The authors have developed the delay fault simulator DELFI, a program capable of simulating timing failures of combinational circuits using different delay fault models. For the computer experiments the authors selected transition fault test pattern sets because they are very cost-effective to generate. The simulations of benchmark circuits demonstrate that the transition fault test patterns detect gross delay faults even at nodes with redundant stuck-at faults. Furthermore, the results show that the transition fault test patterns are not sufficient for small delay faults in the range of a few gate delays. To achieve satisfactory coverage of these delay faults, the transition fault test sets must be extended.

Proceedings ArticleDOI
05 Dec 1990
TL;DR: In this article, the authors proposed an approach to provide a tractable solution to the multiple failure problem within a reasonable time frame to permit some preventive or corrective action to be instituted before a catastrophic event occurs.
Abstract: The fault propagation aspects of an overall real-time control system are examined. A novel method is introduced that extends previous work done in the context of fault models and qualitative behavioral modeling for addressing the problem of fault propagation through a complex system. The goal of the proposed approach is to provide a tractable solution to the multiple failure problem within a reasonable time frame to permit some preventive or corrective action to be instituted before a catastrophic event occurs. Two basic approaches are being studied. The first involves the use of qualitative fault models and behavioral states to determine the impact of a failed component in a complex system. The second approach involves energy-based stability analysis and qualitative simulation. A space station thermal subsystem and a jet engine have been chosen as the basis for model and simulation work.

Journal ArticleDOI
TL;DR: A general procedure for error detection in complex systems, called the data block capture and analysis monitoring process, is described and analyzed, concerned with detecting deviations from the normal performance of the system, known as errors, which are symptomatic of fault conditions.
Abstract: A general procedure for error detection in complex systems, called the data block capture and analysis monitoring process, is described and analyzed. It is assumed that, in addition to being exposed to potential external fault sources, a complex system will in general always contain embedded hardware and software fault mechanisms which can cause the system to perform incorrect computations and/or produce incorrect output. Thus, in operation, the system continuously moves back and forth between error and no-error states. These external fault sources or internal fault mechanisms are extremely difficult to detect. The data block capture and analysis monitoring process is concerned with detecting deviations from the normal performance of the system, known as errors, which are symptomatic of fault conditions. The process consists of repeatedly recording a fixed amount of data from a set of predetermined observation lines of the system being monitored (i.e. capturing a block of data) and then analyzing the captured block in an attempt to determine whether the system is functioning correctly.
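As a shape for the process being described, a capture-and-analyze loop can be sketched in a few lines; the signal source, block size, and anomaly test below are invented placeholders, not the paper's monitoring configuration:

```python
# Sketch of the capture-and-analyze loop (placeholders throughout): capture a
# fixed-size block from the observation lines, analyze it, report deviations.

import random

def monitor(sample_lines, block_size, is_anomalous):
    while True:
        block = [sample_lines() for _ in range(block_size)]   # capture a block
        if is_anomalous(block):                               # analyze it
            yield block                                       # error symptom

source = lambda: random.gauss(0.0, 1.0)                 # toy observation line
out_of_range = lambda blk: max(abs(v) for v in blk) > 3.0
errors = monitor(source, block_size=64, is_anomalous=out_of_range)
print(len(next(errors)))   # runs until a deviant block is captured; prints 64
```

The interesting design choices in practice, which the paper analyzes, are which observation lines to tap, how large a block to capture, and what analysis separates error states from normal variation.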

Proceedings ArticleDOI
01 Jan 1990
TL;DR: The diagnosis system described here is based on the failure cause identification process of the diagnostic system described by Narayanan and Viswanadham and enhanced by replacing the knowledge base of if-then rules with an object-oriented fault tree representation.
Abstract: When a diagnosis system is used in a dynamic environment, such as the distributed computer system planned for use on Space Station Freedom, it must execute quickly and its knowledge base must be easily updated. Representing system knowledge as object-oriented augmented fault trees provides both features. The diagnosis system described here is based on the failure cause identification process of the diagnostic system described by Narayanan and Viswanadham. Their system has been enhanced in this implementation by replacing the knowledge base of if-then rules with an object-oriented fault tree representation. This allows the system to perform its task much faster and facilitates dynamic updating of the knowledge base in a changing diagnosis environment. Accessing the information contained in the objects is more efficient than performing a lookup operation on an indexed rule base. Additionally, the object-oriented fault trees can be easily updated to represent current system status. This paper describes the fault tree representation, the diagnosis algorithm extensions, and an example application of this system. Comparisons are made between the object-oriented fault tree knowledge structure solution and one implementation of a rule-based solution. Plans for future work on this system are also discussed.
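A hedged sketch of what an object-oriented fault tree node might look like, with diagnosis as a direct traversal of objects rather than a rule-base lookup (class and field names are illustrative, not the cited system's):

```python
# Hedged sketch of an object-oriented fault-tree node: each event object holds
# its gate type, children, and current status, so diagnosis traverses objects
# directly instead of scanning an indexed rule base, and status updates are
# simple field assignments. Names are illustrative.

class FaultTreeNode:
    def __init__(self, name, gate="LEAF", children=None):
        self.name, self.gate = name, gate
        self.children = children or []
        self.observed_failed = False          # dynamically updatable status

    def failed(self):
        if self.gate == "LEAF":
            return self.observed_failed
        states = [c.failed() for c in self.children]
        return all(states) if self.gate == "AND" else any(states)

    def causes(self):
        """Collect leaf events that currently explain this node's failure."""
        if self.gate == "LEAF":
            return [self.name] if self.observed_failed else []
        return [c for child in self.children for c in child.causes()]

pump, power = FaultTreeNode("pump"), FaultTreeNode("power")
top = FaultTreeNode("no_coolant", gate="OR", children=[pump, power])
power.observed_failed = True                  # dynamic knowledge-base update
print(top.failed(), top.causes())             # True ['power']
```

Updating the knowledge base amounts to setting fields on existing objects or relinking children, which is the dynamic-update property the abstract emphasizes over re-indexing a rule base.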

Proceedings ArticleDOI
10 Sep 1990
TL;DR: Efficient strategies for selectively performing fault-free simulation, critical path tracing in fanout-free regions, and fault simulation of stem faults in a parallel pattern evaluation environment are presented and analyzed in an implementation-independent manner.
Abstract: Efficient strategies for selectively performing fault-free simulation, critical path tracing in fanout-free regions, and fault simulation of stem faults in a parallel pattern evaluation environment are presented and analyzed in an implementation-independent manner. The dynamic changes in the complexity of the fault simulation components as the fault simulation progresses and faults are detected are shown to be extremely significant. In particular, fault-free simulation tends quickly to become more expensive than both the critical path tracing within fanout-free regions and the explicit simulation of stem faults. In addition, the presence of redundant faults is shown to have an inhibiting effect on the reduction of the fault simulation complexity.

Journal ArticleDOI
TL;DR: In this article, a theoretical study is made to investigate the excitation of short-period P- and S-waves based on a stochastic fault model, and it is shown that the source effectively radiates short-period S-waves, giving rise to a seismic directivity effect peculiar to the incoherent short waves.