
Showing papers on "Fault model published in 1991"


Journal ArticleDOI
TL;DR: The authors present DYNAMITE, a versatile and efficient automatic test pattern generation system for path delay faults that can prove large numbers of path faults redundant in a single test generation attempt.
Abstract: The authors present DYNAMITE, a versatile and efficient automatic test pattern generation system for path delay faults. Based upon a ten-valued and a three-valued logic, the deterministic test pattern generation algorithm incorporated in DYNAMITE is capable of generating both robust and nonrobust tests for path delay faults. Particular emphasis has been placed on coping with the main disadvantage of the path delay fault model. The distinct features of DYNAMITE are the application of a powerful implication procedure and a stepwise path sensitization procedure, which can prove large numbers of path faults redundant in a single test generation attempt. The delay test generation process is further optimized by a new path selection procedure that aims at identifying paths that can successfully be sensitized and at eliminating redundant, i.e., unsensitizable, paths.
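
As a concrete illustration of what "robust" means here, the sketch below checks a conservative sufficient condition for a robust path delay test on a toy two-gate circuit: the test is a vector pair, and every off-path input of every on-path gate must hold its non-controlling value under both vectors. The circuit, path, and vectors are hypothetical; this is not DYNAMITE's ten-valued algorithm.

```python
# Conservative robust-test check for a path through AND/OR gates.
# A path delay test is a vector pair (v1, v2) launching a transition
# at the path input; we demand side inputs stay non-controlling in
# BOTH vectors (sufficient, stronger than strictly necessary).

CONTROLLING = {"AND": 0, "OR": 1}

# gate name -> (type, input nets, output net); topologically ordered
GATES = {
    "g1": ("AND", ["a", "b"], "n1"),
    "g2": ("OR",  ["n1", "c"], "out"),
}
PATH = ["a", "n1", "out"]            # path under test: a -> g1 -> g2

def eval_circuit(vec):
    nets = dict(vec)
    for typ, ins, out in GATES.values():
        vals = [nets[i] for i in ins]
        nets[out] = int(all(vals) if typ == "AND" else any(vals))
    return nets

def is_robust(v1, v2):
    n1, n2 = eval_circuit(v1), eval_circuit(v2)
    if n1[PATH[0]] == n2[PATH[0]]:
        return False                 # no transition launched on the path
    for typ, ins, out in GATES.values():
        if out not in PATH:
            continue
        nc = 1 - CONTROLLING[typ]
        for side in (i for i in ins if i not in PATH):
            if n1[side] != nc or n2[side] != nc:
                return False         # side input not stably non-controlling
    return True

print(is_robust({"a": 0, "b": 1, "c": 0},
                {"a": 1, "b": 1, "c": 0}))   # True: rising edge a->n1->out
```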

172 citations


Patent
08 May 1991
TL;DR: A fault recovery system is described for a ring network based on a synchronous transport module transmission system, having a fault data writing unit that writes fault data when an input fault is detected by a node.
Abstract: A fault recovery system of a ring network based on a synchronous transport module transmission system, having a fault data writing unit for writing, when an input fault is detected by a node, fault data in a predetermined user byte in an overhead of a frame flowing through both a working line and a protection line running in opposite directions to each other. By detecting the fault data in a supervision node or a node just before the fault position, the supervision node or the node just before the fault position executes a loopback operation.

96 citations


ReportDOI
01 Apr 1991
TL;DR: In this paper, a method is developed to test delay-insensitive circuits, using the single stuck-at fault model, where the circuits are synthesized from a high-level specification.
Abstract: A method is developed to test delay-insensitive circuits, using the single stuck-at fault model. These circuits are synthesized from a high-level specification. Since the circuits are hazard-free by construction, there is no test for hazards in the circuit. Most faults cause the circuit to halt during test, since they cause an acknowledgement not to occur when it should. There are stuck-at faults that do not cause the circuit to halt under any condition. These are stimulating faults; they cause a premature firing of a production rule. For such a stimulating fault to be testable, the premature firing has to be propagated to a primary output. If this is not guaranteed to occur, then one or more test points have to be added to the circuit. Any stuck-at fault is testable, with the possible addition of test points. For combinational delay-insensitive circuits, finding test vectors is reduced to the same problem as for synchronous combinational logic. For sequential circuits, the synthesis method is used to find a test for each fault efficiently, to find the location of the test points, and to find a test that detects all faults in a circuit. The number of test points needed to fully test the circuit is very low, and the size of the additional testing circuitry is small. A test derived with a simple transformation of the handshaking expansion yields high fault coverage. Adding tests for the remaining faults results in a small complete test for the circuit.

91 citations


Journal ArticleDOI
TL;DR: New high-level behavior fault models are introduced that are associated with high-level hardware descriptions of digital designs and are based on the failure modes of the language constructs of the hardware description language.
Abstract: A critical aspect of digital electronics is the testing of the manufactured designs for correct functionality. The testing process consists of first generating a set of test vectors, then applying them as stimuli to the manufactured designs, and finally comparing the output response with the desired response. A design is considered acceptable when the output response matches the desired response and is rejected otherwise. Fundamental to the process of test vector generation is the assumption of an underlying fault model that is a model of the failures introduced during manufacture. The choice of the fault model influences the accuracy of testing and the computer CPU time required to generate test vectors for a given design. The most popular fault model in the industry today is the single stuck-at fault at the gate level, which requires exorbitantly large CPU times for moderately complex digital designs. This article introduces new high-level behavior fault models that are associated with high-level hardware descriptions of digital designs. The derivation of these faults is based on the failure modes of the language constructs of the high-level hardware description language. Behavior faults include multiple input stuck-at faults, and this article also reasons about the nature of test vectors for such faults. The potential advantages of behavior fault modeling include early estimates of fault coverage in the design process prior to the synthesis of the gate-level representation of the design, faster fault simulation, and results that may be more comprehensible to the high-level architects. The behavior-fault-modeling approach is evaluated through a study of the correlation of the results of behavior fault simulation of several representative digital designs with the results of gate-level single stuck-at fault simulation of equivalent gate-level representations.
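
To make the behavior-fault idea concrete, here is a hedged sketch (the model and fault class are illustrative, not the article's exact definitions): a language-construct fault, an if-condition stuck at true, is injected into a small behavioral model, and the vectors whose faulty response differs from the good one become tests for that behavior fault.

```python
# Behavior-level fault injection into a toy behavioral ALU model.
# The injected fault mimics a failure mode of the "if" construct.

def alu(op, x, y, if_stuck_true=False):
    cond = (op == 0) or if_stuck_true   # HDL construct under test
    if cond:
        return (x + y) & 0xF            # ADD branch
    return x & y                        # AND branch

tests = [(op, x, y)
         for op in (0, 1) for x in range(16) for y in range(16)
         if alu(op, x, y) != alu(op, x, y, if_stuck_true=True)]
print(len(tests), "vectors detect the if-stuck-true behavior fault")
```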

80 citations


Proceedings ArticleDOI
26 Oct 1991
TL;DR: A two-stage procedure for locating VLSI faults is presented, and an industrial implementation is reported in which faults were injected and diagnosed in a VLSI chip and the performance of two-stage fault location was measured.
Abstract: A two-stage procedure for locating VLSI faults is presented. The approach utilizes dynamic fault dictionaries, test set partitioning, and reduced fault lists to achieve a reduction in size and complexity over classic static fault dictionaries. An industrial implementation is reported in which faults were injected and diagnosed in a VLSI chip and the performance of two-stage fault location was measured.
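
A minimal sketch of the two-stage flow, with invented faults and signatures: stage 1 matches the observed pass/fail signature over the test-set partitions against a compact static dictionary to obtain a reduced fault list; stage 2 then acts as a dynamic dictionary by simulating only that reduced list on the failing tests.

```python
# Stage 1: fault -> pass/fail bit per test-set partition (1 = fails).
STAGE1 = {
    "f1": (1, 0, 0),
    "f2": (1, 0, 1),
    "f3": (0, 1, 0),
}

def stage1_candidates(observed_signature):
    return [f for f, sig in STAGE1.items() if sig == observed_signature]

def stage2_refine(candidates, failing_tests, fails):
    # dynamic dictionary: re-simulate only the reduced fault list on the
    # failing tests and keep exact matches (fails(f, t) is a simulator stub)
    return [f for f in candidates
            if all(fails(f, t) for t in failing_tests)]

# hypothetical observation and per-test fault simulator
observed = (1, 0, 1)
fails = lambda f, t: (f, t) in {("f2", "t7"), ("f2", "t9")}
print(stage2_refine(stage1_candidates(observed), ["t7", "t9"], fails))  # ['f2']
```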

79 citations


Proceedings ArticleDOI
11 Nov 1991
TL;DR: PARIS is based on the well-known approach of parallel pattern single fault propagation for combinational circuits and features several new techniques, including heuristic look-ahead of signal values, which minimizes the number of events that must be tracked.
Abstract: The authors describe PARIS, a parallel-pattern fault simulator for synchronous sequential circuits. PARIS is based on the well-known approach of parallel pattern single fault propagation for combinational circuits and features several new techniques. Every single pattern packet is simulated by an iterative, event-driven method. Heuristic look-ahead of signal values minimizes the number of events that must be tracked. Clever circuit partitioning prevents multiple evaluation of the feedback-free parts of the circuit, thus reducing the required simulation effort. Experiments show that PARIS runs at a substantially higher asymptotic speed compared with a state-of-the-art fault simulator for synchronous sequential circuits.
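
The core trick of parallel-pattern single fault propagation is easy to show in miniature (this sketch is not PARIS itself, and the two-gate circuit is invented): machine words carry one test pattern per bit, so a single bitwise evaluation simulates a whole packet of patterns for the fault-free circuit and, in a separate pass, for each fault.

```python
import random

WIDTH = 32                      # test patterns per packet (one per bit)
MASK = (1 << WIDTH) - 1

def simulate(a, b, c, stuck=None):
    """Bit-parallel evaluation of d = NAND(a, b); e = NAND(d, c).
    `stuck` = (net, value) injects a single stuck-at fault."""
    nets = {}
    def put(name, val):
        if stuck is not None and stuck[0] == name:
            val = MASK if stuck[1] else 0     # stuck-at overrides the net
        nets[name] = val & MASK
    put("a", a); put("b", b); put("c", c)
    put("d", ~(nets["a"] & nets["b"]))        # NAND, bit-parallel
    put("e", ~(nets["d"] & nets["c"]))
    return nets["e"]

random.seed(1)
a, b, c = (random.getrandbits(WIDTH) for _ in range(3))
good = simulate(a, b, c)
for fault in [("d", 0), ("d", 1), ("a", 0)]:
    diff = (good ^ simulate(a, b, c, stuck=fault)) & MASK
    print(fault, "detected by", bin(diff).count("1"), "of", WIDTH, "patterns")
```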

61 citations


Journal ArticleDOI
TL;DR: The authors used the synthetics developed in modeling the 1988 Upland sequence for rapid assessment of the activity and obtained from long-period waves a fault-plane solution of θ = 216°, δ = 77°, λ = 5.0°, and M_0 = 2.5 × 10^(24) dyne-cm.
Abstract: The 1990 Upland earthquake was one of the first sizable local events to be recorded broadband at Pasadena, where the Green's functions appropriate for the path are known from a previous study. The synthetics developed in modeling the 1988 Upland sequence were available for use in rapid assessment of the activity. First-motion studies from the Caltech-USGS array data gave two solutions for the 1990 main shock based on the choice of regional velocity models. Although these focal mechanisms differ by less than 5° in strike and 20° in rake, it proved possible to further constrain the solution using these derived Green's functions and a three-component waveform inversion scheme. We obtain from long-period waves a fault-plane solution of θ = 216°, δ = 77°, λ = 5.0°, M_0 = 2.5 × 10^(24) dyne-cm, depth = 6 km, and a source duration of 1.2 sec, for which the orientation and source depth are in good agreement with the first-motion results of Hauksson and Jones (1991). Comparisons of the broadband displacement records with the high-pass Wood-Anderson simulations suggest the 1990 earthquake was a complicated event with a strong asperity at depth. Double point-source models indicate that about 30 per cent of the moment was released from a 9-km deep asperity following the initial source by 0.0 to 0.5 sec. Our best-fitting distributed fault model indicates that the timing of our point-source results is feasible assuming a reasonable rupture velocity. The rupture initiated at a depth of about 6 km and propagated downward on a 3.5 by 3.5 km (length by width) fault. Both the inversion of long-period waves and the distributed fault modeling indicate that the main shock did not rupture the entire depth extent of the fault defined by the aftershock zone. A relatively small asperity (about 1.0 km^2) with a greater than 1 kbar stress drop controls the short-period Wood-Anderson waveforms. This asperity appears to be located in a region where seismicity shows a bend in the fault plane.
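
For readers who want the size of the event in magnitude units: applying the standard Hanks-Kanamori relation to the seismic moment quoted above (this conversion is ours, not part of the paper) gives a moment magnitude near 5.6.

```python
import math

M0 = 2.5e24                                  # dyne-cm, from the abstract
Mw = (2.0 / 3.0) * math.log10(M0) - 10.7     # Hanks & Kanamori (1979)
print(f"Mw = {Mw:.2f}")                      # Mw = 5.57
```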

56 citations


Journal ArticleDOI
TL;DR: A variable slip fault model for the 1908 December 28 Messina (Italy) earthquake is presented; it is obtained by inversion of levelling data using a gradient method.
Abstract: A variable slip fault model is presented for the 1908 December 28 Messina (Italy) earthquake. It has been obtained by inversion of levelling data, using a gradient method. Fault location and mechanism are consistent with an approximately N-S orientation and normal faulting on a low-angle plane dipping 39°, with about 45° of strike-slip component. The main difference from a homogeneous slip model is represented by a higher slip area (up to 3 m), located close to the city of Reggio Calabria. It is interpretable as a major asperity, which is related to the main fracture episode. This is in agreement with some macroseismic evidence and the same structure could be responsible for the nearby recent seismic activity. The area which effectively slipped is all confined within the Messina straits. This result differs from the homogeneous slip model recently proposed by Capuano et al. (1988).
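
The essence of a variable-slip inversion can be sketched in a few lines: surface displacement is linear in the slip on each fault patch, d = G m, so with precomputed Green's functions the slip distribution follows from least squares. The matrix and slip values below are synthetic stand-ins, not the Messina levelling data, and the paper's gradient method would replace the direct solve.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_patch = 40, 5
G = rng.normal(size=(n_obs, n_patch))        # hypothetical Green's functions
m_true = np.array([0.5, 1.0, 3.0, 1.2, 0.3]) # slip (m); one 3 m asperity
d = G @ m_true + rng.normal(scale=0.01, size=n_obs)  # noisy levelling data
m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
print(np.round(m_est, 2))                    # recovers the high-slip patch
```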

55 citations


Proceedings ArticleDOI
G. Bolt
18 Nov 1991
TL;DR: The author describes a method by which fault models can be developed for neural networks viewed at the abstract level, allowing their inherent fault tolerance to be probed; the abstract nature of such fault models increases the possibility of their being generic due to their independence of implementation.
Abstract: The author describes a method by which fault models can be developed for neural networks visualized at the abstract level, thus allowing their inherent fault tolerance to be probed. The derivation of such fault models has two stages: the location of where faults can occur, and the definition of the faults' characteristics. As an example, a fault model for the multilayer perceptron neural network model is developed for each stage. The abstract nature of such fault models increases the possibility of their being generic in nature due to the independence of implementation. Also, they will allow the inherent fault tolerance of a neural network to be constructively and realistically investigated.
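
A tiny sketch of the two-stage recipe for a multilayer perceptron, with an invented network and an arbitrary threshold: fault locations are the individual weights, the fault characteristic is "stuck at zero", and injecting each fault while watching the output error probes the network's inherent fault tolerance.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 1))
X = rng.normal(size=(100, 4))                # synthetic input data

def forward(w1, w2):
    return np.tanh(X @ w1) @ w2

baseline = forward(W1, W2)
for i in range(W1.shape[0]):
    for j in range(W1.shape[1]):
        Wf = W1.copy()
        Wf[i, j] = 0.0                       # weight stuck-at-zero fault
        mse = float(np.mean((forward(Wf, W2) - baseline) ** 2))
        if mse > 0.5:                        # arbitrary criticality threshold
            print(f"W1[{i},{j}] is critical (mse {mse:.2f})")
```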

49 citations


Proceedings ArticleDOI
01 Jun 1991
TL;DR: The single transition fault model is augmented by carefully selected multiple transition faults which potentially increase the coverage of single stuck-at faults; experimental results for stuck-at faults are presented.
Abstract: A complete method is presented for generating tests for sequential machines. The transition fault model is employed, and the machine is assumed to be described by a state table. The test generation algorithm described is polynomial in the size of the state table, and is complete and accurate in the following sense. For every given transition fault, the algorithm provides either a test, or a proof that the fault is undetectable. The relationship between transition faults and stuck-at faults is investigated. The single transition fault model is augmented by carefully selected multiple transition faults which potentially increase the coverage of single stuck-at faults. A method to achieve 100% fault efficiency for stuck-at faults is then proposed, and experimental results for stuck-at faults are presented.
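
The transition fault model itself fits in a few lines. In this hypothetical two-state machine, a fault diverts one transition to the wrong next state; a test is an input sequence that first exercises the faulty transition and then distinguishes the reached states by their outputs.

```python
# (state, input) -> (next_state, output)
GOOD = {("A", 0): ("A", 0), ("A", 1): ("B", 0),
        ("B", 0): ("A", 1), ("B", 1): ("B", 0)}

def run(table, inputs, start="A"):
    state, outs = start, []
    for x in inputs:
        state, out = table[(state, x)]
        outs.append(out)
    return outs

# transition fault: on input 1, state A stays in A instead of going to B
FAULTY = dict(GOOD)
FAULTY[("A", 1)] = ("A", 0)

test = [1, 0]                     # exercise (A,1), then distinguish B from A
print(run(GOOD, test), run(FAULTY, test))   # [0, 1] vs [0, 0] -> detected
```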

45 citations


Proceedings ArticleDOI
25 Jun 1991
TL;DR: It is shown that fault-tolerant protocol testing by deterministic fault injection achieves better coverage than random fault injection.
Abstract: A deterministic test strategy consisting of deterministic fault injection at the message level is investigated. Messages sent by faulty units are replaced by such wrong messages that cause all program parts of the faultless protocol units to be executed subsequently. Since this well-aimed fault injection poses complex problems, heuristics based on the program flow of previous injections of wrong messages are dynamically applied. The program parts to be tested are selected with increasing granularity until either a design error is found or sufficient structural coverage is reached, which reflects the portion of tested program parts. Using a simplified program model, an algebraic analysis is carried out of the structural coverage and of the design error coverage, which is the probability of revealing an existing design error. It is shown that fault-tolerant protocol testing by deterministic fault injection achieves better coverage than random fault injection.

Journal ArticleDOI
TL;DR: In this paper, the geometry of the fault that ruptured during the M6.1 south flank earthquake on Kilauea volcano in 1989 is determined from leveling data using a nonlinear inversion procedure; the best fitting model is a gently dipping thrust fault at 4 km depth, significantly shallower than the 9 km hypocentral depth determined from the local seismic network.
Abstract: The geometry of the fault that ruptured during the M6.1 south flank earthquake on Kilauea volcano in 1989 is determined from leveling data. The elastic dislocation, in a homogeneous elastic half-space, that best fits the data is found using a nonlinear inversion procedure. The best fitting model is a gently dipping thrust fault that lies at 4 km depth. This is significantly shallower than the 9 km hypocentral depth determined from the local seismic network. Two-dimensional finite-element calculations indicate that at least part of this discrepancy can be attributed to the focusing of the surface deformation by the upper few kilometers of compliant, low-density lavas. We conclude that it is important to include realistic elastic structure to estimate source geometry from geodetic data.

Proceedings ArticleDOI
26 Oct 1991
TL;DR: This paper presents, for the first time, a heuristic-driven test generation procedure for obtaining maximal multiple-path-propagating robust tests, which detect the largest possible number of path faults simultaneously.
Abstract: The path delay fault model is arguably the strongest model for real delay defects in circuits. The recent availability of fully path delay fault testable designs has made it feasible to consider the problem of making test application for path delay faults more efficient by reducing the sizes of the potentially large test-sets required to obtain satisfactory coverage. This paper presents, for the first time, a heuristic-driven test generation procedure for obtaining maximal multiple-path-propagating robust tests, which detect the largest possible number of path faults simultaneously. Extensive experimental results are presented to demonstrate the efficacy of this approach, which is seen to significantly reduce test-set lengths for path delay faults by generating highly efficient robust tests.

Proceedings ArticleDOI
11 Nov 1991
TL;DR: Methods are investigated for reducing events in sequential circuit fault simulation by reducing the number of faults simulated for each test vector; the Star-algorithm is extended to handle sequential circuits and provides global information about inactive faults based on the fault-free circuit state.
Abstract: Methods are investigated for reducing events in sequential circuit fault simulation by reducing the number of faults simulated for each test vector. Inactive faults, which are guaranteed to have no effect on the output or the next state, are identified using local information from the fault-free circuit in one technique. In a second technique, the Star-algorithm is extended to handle sequential circuits and provides global information about inactive faults, based on the fault-free circuit state. Both techniques are integrated into the PROOFS synchronous sequential circuit fault simulator. An average 28% reduction in faulty circuit gate evaluations is obtained for the 19 ISCAS-89 benchmark circuits studied using the first technique, and 33% reduction for the two techniques combined. Execution times decrease by an average of 17% when the first technique is used. For the largest circuits, further improvements in execution time are made when the Star-algorithm is included.
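
The first technique's local check is simple to illustrate (fault list and values are invented): a stuck-at-v fault on a line whose fault-free value under the current vector is already v cannot launch any event locally, so it is inactive for that vector and need not be simulated.

```python
# fault-free line values under the current test vector (hypothetical)
fault_free = {"n1": 0, "n2": 1, "n3": 0}
faults = [("n1", 0), ("n1", 1), ("n2", 0), ("n3", 1)]   # (line, stuck value)

active = [f for f in faults if fault_free[f[0]] != f[1]]
print("simulate only:", active)   # [('n1', 1), ('n2', 0), ('n3', 1)]
```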

Proceedings Article
01 Jan 1991
TL;DR: In this paper, a new fault model that takes into account complex couplings resulting from simultaneous access of memory cells is used in order to ensure a very high fault coverage, and a new algorithm for the test of dual-port memories achieves O(n) complexity thanks to the use of some topological restrictions.
Abstract: In this paper we present a new approach to the test of multi-port RAMs. A new fault model that takes into account complex couplings resulting from simultaneous access of memory cells is used in order to ensure a very high fault coverage. A new algorithm for the test of dual-port memories is detailed here. This algorithm achieves O(n) complexity thanks to the use of some topological restrictions. We also present a new BIST scheme, based on programmable schematic generators, that allows great flexibility for ASIC design. I - INTRODUCTION: Multi-port RAMs are widely used as embedded memories in telecommunication ASICs and also in multi-processor systems. These special RAM chips are provided by IC vendors, and multi-port RAM generators are already available in some ASIC vendors' libraries. The testing of these memories is getting more important since they are often embedded in VLSI and ULSI circuits (RAT 90). Efficient algorithms for the test of multi-port RAMs can thus be used in two ways: ASIC libraries providing test algorithms and ASIC libraries providing BIST circuit generators as an option for the multi-port RAM generator. Multi-port RAMs differ from single-port RAMs by the possibility of accessing more than one cell at a time. Various authors have studied the fault modeling of single-port RAMs as well as test algorithms for the detection of such faults. Coupling faults are a widely used model for the testing of single-port RAMs (SUK 81) (THA 78). Recent work on multi-port RAM testing uses the same coupling fault models (RAP 88) (NAD 90) or faults extracted by Inductive Fault Analysis (SAS 90) (which do not take couplings into account). In (CAS 91), we have shown that the above fault model is not sufficient in order to test multi-port RAMs; thus, we have introduced the complex coupling faults. In section II we briefly review the concept of complex coupling. In this paper, we detail an algorithm based on a topological approach that results in an O(n) complexity test algorithm with a very high fault coverage. Concerning the implementation of the algorithm, and in order to be compatible with the high flexibility provided by ASIC library RAM generators, we have developed a totally parameterizable BIST circuit. The development is done using GDP tools and the THOMSON Composants Militaires et Spatiaux (TMS) CSAM ASIC compiled function library.
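
A rough sketch of the kind of fault this model targets, with an invented dual-port RAM and fault: a coupling that is excited only when two cells are accessed simultaneously through different ports. The naive pairwise scan below is O(n^2); the paper's contribution is reaching O(n) by topologically restricting which cell pairs can interact.

```python
class DualPortRAM:
    """Toy dual-port RAM with one injected complex coupling fault:
    a port-A write to `aggressor` during a simultaneous access forces
    the `victim` cell to 1."""
    def __init__(self, n, aggressor=None, victim=None):
        self.mem = [0] * n
        self.aggressor, self.victim = aggressor, victim

    def simultaneous(self, write_addr, value, read_addr):
        if write_addr == self.aggressor:
            self.mem[self.victim] = 1        # the coupling fault fires
        self.mem[write_addr] = value
        return self.mem[read_addr]

n = 8
ram = DualPortRAM(n, aggressor=2, victim=5)
for i in range(n):                           # O(n^2) pairwise scan
    for j in range(n):
        if j == i:
            continue
        ram.mem = [0] * n                    # solid-0 background
        if ram.simultaneous(i, 1, j) != 0:   # port A writes i, port B reads j
            print(f"coupling detected: simultaneous write {i} / read {j}")
```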

Journal ArticleDOI
TL;DR: A preliminary fault classification is proposed, which uncovers the types of realistic faults in MOS digital ICs that are hard to detect, paving the way to derive layout rules for hard-fault avoidance.
Abstract: In order to make possible the production of cost-effective electronic systems, integrated circuits (ICs) need to be designed for testability. The purpose of this article is to present a methodology for testability enhancement at the lower levels of the design (i.e., at circuit and layout levels). The proposed strategy uses both hardware refinement and software improvement. The main areas of low-cost software improvement are test generation based on a logic description closely related to the physical design, test-vector sequencing, and the introduction of circuit knowledge in fault simulation. The strategy for hardware improvement is based on realistic fault list generation, fault classification (according to fault impact on circuit behavior), and layout-level DFT (design for testability) rules derivation. A preliminary fault classification is proposed, which uncovers the types of realistic faults in MOS digital ICs that are hard to detect, paving the way to derive layout rules for hard-fault avoidance. Simulation examples are presented ascertaining that specific subsets of line-open and bridging faults (according to their topological characteristics) are hard to detect by logic testing using test patterns derived for line stuck-at fault detection.

Proceedings ArticleDOI
01 Jan 1991
TL;DR: The authors describe a method for performing transistor-level logical fault simulation that relies on switch-level modeling and uses a switch-level matrix-equation formulation and solution into which fault models are inserted in a straightforward manner.
Abstract: The authors describe a method for performing transistor-level logical fault simulation. The method relies on switch-level modeling and uses a switch-level matrix-equation formulation and solution into which fault models are inserted in a straightforward manner. The fault models include transistor stuck-at, node stuck-at, and bridging faults. Both output voltage monitoring and current testing are used for fault detection. The approach has been implemented in a concurrent fault simulator and tested using both combinational and sequential circuit benchmarks. The results of the simulator compare very favourably with existing switch-level fault simulators while allowing more complete transistor-level fault models to be included.

Proceedings ArticleDOI
11 Jun 1991
TL;DR: In this paper, the authors present a comprehensive study of fault modeling of the class of sample-and-hold circuits frequently used in mixed analog/digital signal processors, and validate the concept of fault equivalence for analog circuits.
Abstract: The author presents the first comprehensive study of fault modeling of the class of sample-and-hold circuits frequently used in mixed analog/digital signal processors. The faults under study consist of catastrophic faults and out-of-specification faults. Even if the faults are restricted to the passive components and MOS switches (i.e. the operational amplifiers are assumed fault-free), the effects of these faults are quite complex, especially the out-of-specification faults. For example, an incorrect value of the on-resistance R_on of an MOS switch and an incorrect value of the capacitor in some cases have the same faulty manifestations at the output, and may be thought of as equivalent faults. The concept of fault equivalence is validated for analog circuits. The results show that various types of faults are distinguishable, thus reducing the size of the analog fault dictionary used in further diagnosis.
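
The equivalence the author observes can be shown with a one-line acquisition model (component values are hypothetical): during sampling, the output settles with time constant τ = R_on·C, so an out-of-spec switch resistance and an out-of-spec capacitor that yield the same τ leave the same faulty signature at the output.

```python
import math

def settle_error(r_on, c, t_sample=1e-6):
    # fraction of an input step not yet acquired after the sampling window
    return math.exp(-t_sample / (r_on * c))

nominal = settle_error(1e3, 100e-12)   # tau = 100 ns
fault_r = settle_error(3e3, 100e-12)   # R_on out of spec -> tau = 300 ns
fault_c = settle_error(1e3, 300e-12)   # C out of spec    -> tau = 300 ns
print(math.isclose(fault_r, fault_c))  # True: the two faults are equivalent
```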

Journal ArticleDOI
TL;DR: A method for real-time fault location based on the causal tree construction procedure introduced in Part I is presented, and how fault trees can be abstracted from the causal diagram is discussed.

Proceedings ArticleDOI
25 Jun 1991
TL;DR: An ATPG methodology working at an architectural level is proposed to exploit the hierarchy of the design and relieve the dependence on the gate level information in automatic test pattern generators.
Abstract: Most state-of-the-art automatic test pattern generators (ATPGs) require a detailed gate level representation for the circuits under test, information that either does not exist or may not be available to the test engineers in a hierarchical design environment. An ATPG methodology working at an architectural level is proposed to exploit the hierarchy of the design and relieve the dependence on the gate level information. The test set for each high level primitive is pregenerated by any low-level sequential ATPG tool, based on any possible fault model. The test patterns in these test sets are justified and the fault effects are propagated at high level. Due to the fault collapsing effect, several data types have been defined for the manipulation of all possible fault effects. When conflict occurs and the backtracking mechanism is invoked, a novel tracing technique and an indexed backtracking technique are used to make high-level decisions.

Proceedings ArticleDOI
11 Nov 1991
TL;DR: A novel fault model that takes into account complex couplings resulting from simultaneous access of memory cells is used in order to ensure a very high fault coverage and a novel algorithm for the test of dual-port memories achieves O(n) complexity.
Abstract: The authors present a novel approach to the test of multi-port RAMs. A novel fault model that takes into account complex couplings resulting from simultaneous access of memory cells is used in order to ensure a very high fault coverage. A novel algorithm for the test of dual-port memories is detailed. This algorithm achieves O(n) complexity thanks to the use of some topological restrictions. The authors also present a novel built-in self-test (BIST) scheme, based on programmable schematic generators, that allows great flexibility for ASIC (application-specific integrated circuit) design.

Patent
07 Oct 1991
TL;DR: In this article, the intentional fault is introduced into a portion of logical operation data for the logic cells of the device to produce a faulty logic operation data, which corresponds to a fault candidate which represents a location in the device at which hazard is supposed to have occurred to make it uncertain whether or not a fault exists.
Abstract: A fault in a logic IC device including a plurality of logic cells is diagnosed by the use of an intentional fault. The intentional fault is introduced into a portion of logical operation data for the logic cells of the device to produce a faulty logical operation data. That portion of the logical operation data corresponds to a fault candidate which represents a location in the device at which hazard is supposed to have occurred to make it uncertain whether or not a fault exists at the location.

Book
31 Oct 1991
TL;DR: The contributions of Con/Obs to test are discussed, with a focus on fault models and fault sets, as well as fault controllability and fault observability functions.
Abstract (table of contents): 7.5.5 Symmetric Functions with M-sets Integrally Divisible by a Constant c (not Divisible by c). 7.6 Uniqueness Argument. 7.7 OBDDs for Tree Circuits. 7.8 OBDD Size Summary. 8. Difference Propagation. 8.1 The Development of Difference Propagation. 8.2 Deriving the Input-Output Relationships. 8.3 The Difference Propagation Algorithm. 8.4 The Efficiency of Differences. 8.5 Using Functional Decomposition. 8.5.1 "Random" Decomposition. 8.5.2 Threshold Decomposition. 8.5.3 Minimum Circuit Width Approach. 8.5.4 Empirical Comparison of Decomposition Techniques. 9. Fault Model Behavior. 9.1 Selection of Fault Models and Fault Sets. 9.1.1 Stuck-at Fault Sets. 9.1.2 Bridging Fault Sets. 9.2 Fault Behavior Results and Analysis. 9.2.1 Stuck-at Fault Behavior. 9.2.2 Bridging Fault Behavior. 10. The Contributions of Con/Obs to Test. 10.1 Motivation to Study Con/Obs. 10.2 Definitions of Con/Obs. 10.3 Generating Con/Obs Information. 10.3.1 Calculating Fault Controllability Functions. 10.3.2 Calculating Fault Observability Functions. 10.4 Con/Obs Results and Analysis. 10.5 Con/Obs Summary. 11. Analyzing Test Performance. 11.1 Defect Level Motivation. 11.2 ATPG Model Development. 11.3 Fault Set Selectability. 11.4 Probabilistic Non-Target Defect Coverage. 11.5 Fault Sets. 11.6 Test Performance Results. 11.7 Implications to Defect Level. 12. Conclusions. 13. Suggestions for Future Research. 13.1 Extensions to OBDD Size Research. 13.2 Extensions to Difference Propagation. 13.3 Extensions to Test Quality Research. 13.4 Using Ordered Partial Decision Diagrams. 13.5 General Extensions.

Journal ArticleDOI
TL;DR: In this article, the authors investigated the rupture of the 1987 east Chiba earthquake using an empirical Green's function method and a nonlinear inversion technique and proposed a double-fault model based on the relocated aftershock distribution.
Abstract: The rupture of the 1987 east Chiba earthquake was investigated using an empirical Green's function method and a nonlinear inversion technique. In order to discuss what happened on the fault plane in more detail, we relocated the aftershocks as well as the main shock. Relocations were done using a joint hypocenter method in order to increase the accuracy of the relative locations. Based on this aftershock distribution, we propose a double-fault model. The main fault strikes N11°W with nearly vertical dip. A subsidiary fault is perpendicular to the main fault. The results of a source inversion using strong motion waveforms are as follows: (1) moment release was large both in the deep northern area of the main fault and on the subsidiary fault, where aftershock activity was weak; (2) rupture propagated on both main and subsidiary faults simultaneously, and rupture velocity decreased at the depth of around 35 km, near the upper boundary of the Philippine Sea plate; and (3) stress drop was high on the lower northern corner of the main fault where the rupture terminated and on the deep southern area of main fault where the rupture initiated. A negative correlation was found between moment release during the main shock and that during the aftershock sequence, but they did not compensate each other quantitatively. This suggests the existence of a heterogeneous strain distribution in the fault area either before the main shock or after its aftershock sequence.

Patent
22 Oct 1991
TL;DR: A two-stage fault diagnosis is performed: an analysis part automatically identifies and displays the faulty part by successively estimating fault part candidates for each alarm signal raised when a fault occurs and taking the intersection of the candidate sets, and a second analysis part analyzes the cause of the identified fault according to the instructions of an operator.
Abstract: PURPOSE: To provide a network fault diagnostic system capable of smoothly identifying the faulty part and analyzing the cause of a fault as needed in a large and complicated network. CONSTITUTION: Two-stage fault diagnosis is performed by providing, on a fault diagnostic system 1, an analysis part 2 that automatically identifies and displays the faulty part by successively estimating fault part candidates for each alarm signal raised when a fault occurs and taking the intersection of the sets of fault part candidates, and an analysis part 3 that analyzes the cause of the identified fault according to the instructions of an operator. Thus, a general-purpose fault diagnostic system capable of smoothly diagnosing the faulty part can be provided.
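
The first analysis stage amounts to set intersection, as this toy sketch shows (the alarm-to-candidate mapping is invented): each alarm implicates a set of candidate fault parts, and the part common to all alarms is reported as the fault location.

```python
candidates = {
    "alarm_LOS_node3": {"link_2_3", "node_3", "link_3_4"},
    "alarm_AIS_node4": {"link_3_4", "node_4"},
    "alarm_BER_node5": {"link_3_4", "link_4_5"},
}
fault_parts = set.intersection(*candidates.values())
print(fault_parts)   # {'link_3_4'}: the one part that explains every alarm
```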

Journal ArticleDOI
M.M. Ligthart, R.J. Stans
TL;DR: A fault model for programmable logic arrays (PLAs) is discussed, and it is shown that multiple stuck-at faults are equivalent to multiple crosspoint faults, multiple bridging faults are sub-equivalent to multiple crosspoint faults, and the set of patterns detecting multiple crosspoint faults is a subset of the set of patterns detecting multiple bridging faults.
Abstract: A fault model for programmable logic arrays (PLAs) is discussed. This model maps realistic failures on four classes of faults: multiple stuck-at faults, multiple bridging faults, multiple crosspoint faults, and faults due to breaks in lines. It is shown that multiple stuck-at faults are equivalent to multiple crosspoint faults, multiple bridging faults are sub-equivalent to multiple crosspoint faults, and the set of patterns detecting multiple crosspoint faults is a subset of the set of patterns detecting multiple bridging faults. Hence, the set of patterns detecting multiple crosspoint faults also detects all multiple stuck-at faults and multiple bridging faults. This reduces the fault model for PLAs to two classes: multiple missing/extra crosspoint faults and multiple breaks.
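
A miniature PLA makes the crosspoint fault model concrete (the personality below is invented): a missing crosspoint in the AND plane drops a literal from a product term, widening it, and a vector on which the widened term fires while the original did not detects the fault at the output.

```python
# AND plane: one dict per product term, variable -> required literal
AND_PLANE = [{"a": 1, "b": 1}, {"b": 0, "c": 1}]
OR_PLANE = {"f": [0, 1]}        # output f = term0 OR term1

def eval_pla(and_plane, inputs):
    terms = [all(inputs[v] == lit for v, lit in term.items())
             for term in and_plane]
    return {out: any(terms[i] for i in idx) for out, idx in OR_PLANE.items()}

vec = {"a": 1, "b": 0, "c": 0}
good = eval_pla(AND_PLANE, vec)

faulty = [dict(t) for t in AND_PLANE]
del faulty[0]["b"]              # missing crosspoint: term 0 loses literal b
bad = eval_pla(faulty, vec)
print(good, bad)                # {'f': False} {'f': True} -> fault detected
```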

Proceedings ArticleDOI
29 Jan 1991
TL;DR: The authors study the problem of designing large-area defect-tolerant tree architectures under the fault model that each processor, switch, and wire may be defective with independent constant probability to show that, for any given constant 0
Abstract: The authors study the problem of designing large-area defect-tolerant tree architectures under the fault model that each processor, switch, and wire may be defective with independent constant probability. Using expander graphs, it is shown that, for any given constant 0 >

Proceedings ArticleDOI
11 Nov 1991
TL;DR: The authors propose a novel fault simulation technique for multiple faults in which sets of multiple faults are represented by Boolean functions, with shared binary decision diagrams (BDDs) used as their internal representation, allowing multiple fault simulation to be executed efficiently.
Abstract: The authors propose a novel fault simulation technique for multiple faults. In order to handle a large number of multiple faults, sets of multiple faults are represented by Boolean functions, in which shared binary decision diagrams (BDDs) are used as an internal representation of Boolean functions. The authors also propose a fault dropping method, prime fault dropping, which is used to execute multiple fault simulation efficiently. The authors have succeeded in simulating 39 million double faults of a circuit of 2300 gates with about 20 Mbyte storage.

Journal ArticleDOI
TL;DR: This work presents a technique to correctly deal with non-stuck-at faults in FCMOS circuits that make use of complex macrogates; it can be applied to any gate-level fault simulator that provides, for each line of the circuit, an observability status directly related to that of the individual devices in the actual macrogate implementation.
Abstract: This work presents a technique to correctly deal with non-stuck-at faults in FCMOS circuits making use of complex macrogates. This method can be applied to any gate-level fault simulator providing, for each line of the circuit, the observability status that is directly related to that of individual devices in the actual macrogate implementation. Conductance conflicts are correctly solved to detect bridgings and transistors stuck-on. Fault coverage results are presented and discussed for two typical FCMOS circuits. Results obtained on all ISCAS benchmarks show that the time required for the fault simulation of CMOS faults is comparable to that of stuck-ats.