scispace - formally typeset

Showing papers on "Fault model published in 1979"


Journal ArticleDOI
TL;DR: A new fault model is proposed for the purpose of testing programmable logic arrays and it is shown that a test set for all detectable modeled faults detects a wide variety of other faults.
Abstract: A new fault model is proposed for the purpose of testing programmable logic arrays. It is shown that a test set for all detectable modeled faults detects a wide variety of other faults. A test generation method for single faults is then outlined. Included is a bound on the size of test sets which indicates that test sets are much smaller than would be required by exhaustive testing. Finally, it is shown that many interesting classes of multiple faults are also detected by the test sets.
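The single-fault test-generation idea in the abstract can be illustrated on a toy circuit. The sketch below (an assumed two-level AND-OR function, not the paper's PLA crosspoint model) shows a four-pattern test set detecting every single stuck-at fault on the inputs, half the size of the eight-pattern exhaustive set:

```python
# Toy single stuck-at fault detection on f = (a & b) | c.
# This is an illustrative AND-OR function, not the paper's PLA model.

def evaluate(t, stuck=None):
    v = dict(t)
    if stuck is not None:
        name, s = stuck
        v[name] = s                      # force the faulty line
    return (v["a"] & v["b"]) | v["c"]

faults = [(line, s) for line in "abc" for s in (0, 1)]

# Four patterns instead of the 2**3 = 8 exhaustive ones.
tests = [{"a": 1, "b": 1, "c": 0}, {"a": 0, "b": 1, "c": 0},
         {"a": 1, "b": 0, "c": 0}, {"a": 0, "b": 0, "c": 1}]

detected = {f for t in tests for f in faults if evaluate(t, f) != evaluate(t)}
print(f"{len(detected)} of {len(faults)} single faults detected")
```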

124 citations


Journal ArticleDOI
Ryosuke Sato
TL;DR: In this paper, the relationship between focal parameters and earthquake magnitude was theoretically reexamined based upon the dislocation theory, and the following new empirical relations for shallow and large earthquakes (M ≥ 5) were suggested: logS(km2)=M-4.07, logD0(cm)=0.502M-1.40, logM0(dyne·cm)=1.50logS(km2)+22.3, where M is earthquake magnitude, S fault area, D0 mean dislocation and M0 seismic moment.
Abstract: Not only for a real earthquake but also for a hypothetical earthquake, it is very convenient if we can estimate first approximations of focal parameters, such as fault area, dislocation, seismic moment, rise time, etc., by using empirical relations. It is desirable that these relations have as simple a form as possible, explain previous observations at least roughly, can be predicted from the available theory on fault models, and are consistent with each other. In this paper, relationships between some focal parameters and earthquake magnitude proposed by several investigators are theoretically reexamined based upon the dislocation theory. The following new empirical relations for shallow and large earthquakes (M ≥ 5), which favor the theoretical considerations, are suggested: logS(km2)=M-4.07, logD0(cm)=0.502M-1.40, logM0(dyne·cm)=1.50logS(km2)+22.3, where M is earthquake magnitude, S fault area, D0 mean dislocation and M0 seismic moment. These relations provide a constant strain drop on the fault plane, dynamic similarity, proportionality of the rise time to S, proportionality of M0 to seismic energy (constant apparent stress), and the relation log T0 ≈ 0.5M + const., T0 being the predominant period of particle velocity. Of course, these relations must be modified when many refined data are accumulated in the future and the theoretical studies on the focal process are well developed.
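For a concrete sense of scale, the quoted relations can be evaluated directly; the sketch below simply plugs in M = 7 (an arbitrary example magnitude):

```python
import math

def fault_params(M):
    """Sato's empirical relations for shallow, large earthquakes (M >= 5)."""
    logS  = M - 4.07            # S:  fault area, km^2
    logD0 = 0.502 * M - 1.40    # D0: mean dislocation, cm
    logM0 = 1.50 * logS + 22.3  # M0: seismic moment, dyne*cm
    return 10**logS, 10**logD0, 10**logM0

S, D0, M0 = fault_params(7.0)
print(f"S  = {S:.0f} km^2")      # ~851 km^2
print(f"D0 = {D0:.0f} cm")       # ~130 cm
print(f"M0 = {M0:.2e} dyne*cm")  # ~4.95e26 dyne*cm
```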

75 citations


Journal ArticleDOI
TL;DR: A method of fault signature generation based upon state space analysis of linear circuits is presented, along with a generalized matrix inverse method for computing the stimulus amplitudes from the pulse response of strictly proper circuits.
Abstract: A method of fault signature generation is presented that is based upon state space analysis of linear circuits. An input control sequence is designed to reduce a nontrivial initial state of the circuit under test to the zero state in finite time. The realization of this stimulus as a piecewise constant waveform has step amplitudes that are exponential functions of the poles of the circuit under test. Perturbations of these amplitudes, engendered by element drift failure, constitute a fault signature. Single element value perturbations engender fault signature trajectories in signal space, and the fault dictionary is constructed by defining disjoint decision regions (hypervolumes) around each fault signature trajectory in the signal space. Circuit zeros of transmission allow the dimension of the signal space to be augmented with perturbation of such response waveform parameters as zero crossings. The theory of stimulus design for fault isolation in linear networks and a generalized matrix inverse method for computing the stimulus amplitudes from the pulse response of strictly proper circuits are presented. Examples of response waveforms and fault signature trajectories are given for several circuits.
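The stimulus-design idea can be sketched on the simplest possible case: a first-order circuit x' = -px + u (an assumed example, not one of the paper's circuits). The constant input that zeroes a known initial state in finite time depends exponentially on the pole, so a drifted element leaves a nonzero residual, the raw material of a fault signature:

```python
import math

# State x' = -p*x + u.  Choose a constant input u over [0, T] so that
# x(T) = e^{-pT} x0 + u (1 - e^{-pT}) / p = 0.

def zeroing_amplitude(p, x0, T):
    e = math.exp(-p * T)
    return -p * e * x0 / (1.0 - e)   # amplitude is exponential in the pole p

def final_state(p, x0, u, T):
    e = math.exp(-p * T)
    return e * x0 + u * (1.0 - e) / p

p_nom, x0, T = 2.0, 1.0, 0.5
u = zeroing_amplitude(p_nom, x0, T)

print("residual (nominal pole):   ", abs(final_state(p_nom, x0, u, T)))
print("residual (pole drifted 10%):", abs(final_state(2.2, x0, u, T)))
```

With the nominal pole the residual is zero by construction; under a 10% pole drift the same stimulus leaves a measurable residual, which is what the fault dictionary classifies.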

42 citations


Patent
29 May 1979
TL;DR: In this paper, a shift register connected to elementary gates is added to a digital network to form a fault simulator, which can select a connection from the digital network and simulate a fault on the selected connection.
Abstract: A shift register connected to elementary gates is added to a digital network to form a fault simulator. The shift register and the additional gates can select a connection from the digital network and simulate a fault on the selected connection. Connection selection is done at a speed comparable to that at which the digital network operates and fault simulation is nondestructive. By resetting the shift register the fault simulator performs as if the digital network were fault-free. To simulate certain faults in the digital network a predetermined fault injection pattern for the faults to be simulated is entered into the shift register.
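The mechanism can be sketched in software (a hypothetical three-line network; the patent's implementation is a hardware shift register feeding gates): a loaded pattern forces selected connections to stuck values, and resetting it restores fault-free behavior:

```python
# Sketch of nondestructive fault injection on a made-up three-line network.

class FaultInjectingNetwork:
    def __init__(self):
        self.pattern = {}              # line name -> forced stuck value

    def load_pattern(self, pattern):   # stands in for shifting in a pattern
        self.pattern = dict(pattern)

    def reset(self):                   # behaves as if the network were fault-free
        self.pattern = {}

    def _line(self, name, value):
        return self.pattern.get(name, value)

    def evaluate(self, a, b, c):
        n1 = self._line("n1", a & b)
        n2 = self._line("n2", b | c)
        return self._line("n3", n1 ^ n2)

net = FaultInjectingNetwork()
print(net.evaluate(1, 1, 0))           # fault-free output
net.load_pattern({"n1": 0})            # simulate n1 stuck-at-0
print(net.evaluate(1, 1, 0))           # faulty output
net.reset()
print(net.evaluate(1, 1, 0))           # fault-free again
```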

41 citations


Journal ArticleDOI
TL;DR: In this article, phase equalization was carried out for R waves in order to obtain information on source mechanisms of four earthquakes, the Kurile earthquake of 1963, its largest aftershock, the Rat Island earthquake of 1965, and the Tokachi-oki earthquake of 1968.
Abstract: Phase equalization was carried out for R waves in order to obtain information on source mechanisms of four earthquakes, the Kurile earthquake of 1963, its largest aftershock, the Rat Island earthquake of 1965, and the Tokachi-oki earthquake of 1968. The initial phase was computed using the corresponding complete great-circle phase velocity, so that influences of the lateral heterogeneity of the earth's structure were considerably reduced. The fault length of the Kurile earthquake was determined from the initial phase radiation pattern on the basis of a simple propagating fault model. There is no trade-off in this method between the fault length and the rupture velocity. Such a trade-off is inherent to the conventional directivity method. A fault length of 250 km was obtained, which was found to be nearly equal to the length of the aftershock area. We have introduced a dynamic fault parameter ‘source time’ that can be determined from the phase measurement using the complete great-circle phase velocity rather than the average phase velocity of the individual multiple R waves. This quantity corresponds to the rise time of the source time function of an equivalent point source. Source times of 97, 62, 174, and 115 s were obtained for the above four earthquakes.

39 citations


Proceedings ArticleDOI
Charles W. Cha
25 Jun 1979
TL;DR: An efficient algorithm is presented that generates a multiple fault detection test set and identifies redundancies. Suggestions for designing networks to yield a minimum number of tests in the multiple fault detection test set are also included.
Abstract: The concept of prime faults is introduced for the study of multiple fault diagnosis in combinational logic networks. It is shown that every multiple fault in a network can be represented by a structurally equivalent fault with prime faults as its only components. Functional and structural masking and covering relations among faults are defined. These relations can be exploited to greatly simplify multiple fault analysis and test generation. We present an efficient algorithm that generates a multiple fault detection test set and identifies redundancies. Suggestions for designing networks to yield a minimum number of tests in the multiple fault detection test set are also included.
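The claim that single-fault tests catch many multiple faults can be checked exhaustively on a toy example. The sketch below (an assumed circuit f = (a & b) | c, not from the paper) verifies that a four-pattern single-fault test set also detects every double stuck-at fault:

```python
from itertools import combinations

# Toy circuit f = (a & b) | c with stuck-at faults on the inputs.

def evaluate(t, faults=()):
    v = dict(t)
    for name, s in faults:
        v[name] = s                    # force each faulty line
    return (v["a"] & v["b"]) | v["c"]

tests = [{"a": 1, "b": 1, "c": 0}, {"a": 0, "b": 1, "c": 0},
         {"a": 1, "b": 0, "c": 0}, {"a": 0, "b": 0, "c": 1}]

singles = [(line, s) for line in "abc" for s in (0, 1)]
# double faults: two stuck-at faults on two distinct lines
doubles = [fs for fs in combinations(singles, 2) if fs[0][0] != fs[1][0]]

undetected = [fs for fs in doubles
              if all(evaluate(t, fs) == evaluate(t) for t in tests)]
print(len(doubles), "double faults,", len(undetected), "undetected")
```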

23 citations


Journal ArticleDOI
01 Oct 1979
TL;DR: In this paper, a general computer program using the phase co-ordinate technique has been developed for analysing series, shunt and simultaneous faults on balanced and unbalanced polyphase electrical networks, eliminating the need for maintaining numerous fault analysing subroutines, one for each kind of fault, and making the solution of many difficult and previously unsolvable problems possible.
Abstract: A general computer program using the phase co-ordinate technique has been developed for analysing series, shunt and simultaneous faults on balanced and unbalanced polyphase electrical networks. The program eliminates the need for maintaining numerous fault analysing subroutines, one for each kind of fault, and makes the solution of many difficult and previously unsolvable problems possible. It is employed to analyse the cross-country fault involving different phases. The solution method, program outlines and postfault results of voltages, currents and apparent power of an actual power system are given.
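The appeal of phase co-ordinates is that any fault reduces to ordinary complex linear algebra on the full 3×3 phase-impedance matrix, mutual couplings included, with no symmetrical-component transform. A minimal sketch for a bolted three-phase-to-ground fault on an assumed single-source network (the impedance values are made up):

```python
import cmath

def solve3(Z, E):
    """Gauss-Jordan elimination for a 3x3 complex system Z I = E."""
    A = [row[:] + [e] for row, e in zip(Z, E)]
    for i in range(3):
        piv = A[i][i]
        A[i] = [x / piv for x in A[i]]
        for j in range(3):
            if j != i:
                f = A[j][i]
                A[j] = [xj - f * xi for xj, xi in zip(A[j], A[i])]
    return [A[k][3] for k in range(3)]

zs, zm = 0.1 + 1.0j, 0.02 + 0.3j      # assumed self and mutual impedance (p.u.)
Z = [[zs, zm, zm], [zm, zs, zm], [zm, zm, zs]]
a = cmath.exp(2j * cmath.pi / 3)
E = [1.0 + 0j, a**2, a]               # balanced source e.m.f.s

I = solve3(Z, E)                      # bolted fault: Z I = E directly
for ph, i in zip("abc", I):
    print(f"I{ph} = {abs(i):.3f} p.u.")
```

For this balanced case the currents reduce to E/(zs − zm) per phase, a useful sanity check on the phase-domain solution.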

19 citations


Proceedings ArticleDOI
04 Sep 1979
TL;DR: In this paper, the self-checking processor and the main memory unit are triplicated for the purpose of error detection and momentary fault masking.
Abstract: Following an overview of the general practice in the reliable design of digital systems and a discussion of design considerations for self-checking and fault-tolerant machines, the Self-Checking Microprocessor proposed by Maki (3) is brought to the reader's attention. The possibility of using this self-checking design in a hybrid-redundant microprocessor system is then explored. In this paper, the self-checking processor and the main memory unit are triplicated for the purpose of error detection and momentary fault masking. Reconfiguration, allowing standby units to replace failed units, is possible due to the intelligence of the individual processors. Similarly, the memory modules can be switched on/off line by an additional self-checking processor incorporated into the design, assuming the task of the majority voter of this TMR system.
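The momentary fault masking mentioned above rests on bitwise majority voting across the triplicated units; a one-line sketch (illustrative, not the paper's voter design):

```python
# Bitwise 2-of-3 majority vote over three redundant unit outputs.

def majority(w0, w1, w2):
    # each output bit is 1 iff at least two of the three input bits are 1
    return (w0 & w1) | (w1 & w2) | (w0 & w2)

good = 0b1011_0010
faulty = good ^ 0b0100_0000            # one copy with a flipped bit
print(bin(majority(good, good, faulty)))  # the faulty copy is outvoted
```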

4 citations


Proceedings ArticleDOI
04 Sep 1979
TL;DR: In this article, two new techniques to system recovery are described for the case when an error is on any such data transfer path, which are implementable locally, and the system is ensured to recover from any single stuck-at fault, single AND-bridge fault, or single OR-bridges fault in a single retry.
Abstract: In most on-line diagnostic schemes, whenever a fault is detected in a system, a rather involved system recovery routine is initiated irrespective of whether the fault is caused by a failure inside a chip or by a failure outside a chip, say, on the bond connecting a pin to the chip. Failures of the latter type cause errors only when some information is being transferred from one chip to another. In this paper, two new techniques for system recovery are described for the case when an error occurs on any such data transfer path. These schemes are implementable locally, and the system is ensured to recover from any single stuck-at fault, single AND-bridge fault, or single OR-bridge fault in a single retry. System recovery from faults internal to chips can be performed using sophisticated routines. Thus, a two-level approach to on-line system diagnosis seems to be more efficient.
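One classic way to achieve single-retry recovery from a stuck-at line, sketched below, is complemented retransmission with a parity check assumed to arrive over a fault-free side channel. This is an illustration of the general idea, not necessarily the paper's exact scheme:

```python
WIDTH = 8
MASK = (1 << WIDTH) - 1

def parity(w):
    return bin(w).count("1") & 1

def transfer(word, stuck_bit, stuck_val):
    # a data path whose line `stuck_bit` is stuck at `stuck_val`
    if stuck_val:
        return word | (1 << stuck_bit)
    return word & ~(1 << stuck_bit) & MASK

def send_with_retry(word, stuck_bit, stuck_val):
    r1 = transfer(word, stuck_bit, stuck_val)
    # single retry: send the complement, re-complement on receipt
    r2 = transfer(word ^ MASK, stuck_bit, stuck_val) ^ MASK
    disputed = r1 ^ r2                 # the stuck line disagrees between tries
    # parity of `word` is assumed delivered over a fault-free side channel
    for candidate in (r1, r1 ^ disputed):
        if parity(candidate) == parity(word):
            return candidate
    return r1

word = 0b1100_1010
assert all(send_with_retry(word, b, v) == word
           for b in range(WIDTH) for v in (0, 1))
print("recovered from every single stuck-at line in one retry")
```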

4 citations


01 Apr 1979
TL;DR: Developments in reliability modeling for large fault tolerant avionic computing systems are presented and several aspects of fault coverage, including modeling and data measurement of intermittent/transient faults and latent faults, are elucidated and illustrated.
Abstract: Reliability modeling for fault tolerant avionic computing systems is developed. The modeling of large systems, involving issues of state size and complexity, fault coverage, and practical computation, is discussed. A novel technique which provides a tool for studying the reliability of systems with nonconstant failure rates is presented. Fault latency, which may provide a method of obtaining vital latent-fault data, is measured.
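Nonconstant failure rates matter because redundancy helps only while per-unit reliability stays high. The sketch below (an assumed Weibull failure law, purely illustrative) evaluates a triplicated unit, ignoring the voter, and shows TMR beating a simplex unit early in life and losing later:

```python
import math

def weibull_R(t, eta, beta):
    # single-unit reliability under a time-varying (Weibull) hazard rate
    return math.exp(-((t / eta) ** beta))

def tmr_R(R):
    # system survives while at least two of three units survive
    return R**3 + 3 * R**2 * (1 - R)

for t in (100, 500, 1000):
    R = weibull_R(t, eta=1000.0, beta=1.5)
    print(f"t={t:5d} h  unit R={R:.4f}  TMR R={tmr_R(R):.4f}")
```

TMR improves on the simplex unit whenever R > 0.5 and is worse beyond that crossover, which is why mission time matters in these models.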

4 citations


Journal ArticleDOI
TL;DR: In this article, a low-pass filter is applied to observed strong-motion accelerograms in the frequency range from 0 Hz to ν Hz, and relations between maximum amplitudes and cutoff period T (=1/ν) are obtained.
Abstract: The fault model presently available in the field of seismology is the high-cut source model. Since the detailed behavior of short-period components at the source is still unknown, it is quite difficult to reproduce observed accelerations (or intensities) by modelling the earthquake source. In this paper, applying a low-pass filter to observed strong-motion accelerograms in the frequency range from 0 Hz to ν Hz, relations between maximum amplitudes and cutoff period T (=1/ν) are obtained. Empirical equations for the maximum acceleration (gal), maximum velocity (kine), and maximum displacement (cm) as functions of T (sec) are derived. If the values at T=T0 are known, maximum values for short periods, for instance at T=0.1 sec, can be estimated. For dislocation source models corresponding to M=5 to 8, theoretical seismograms are computed. The cutoff period is defined as the period at which the spectral amplitude is equal to or less than 1/1,000 of the maximum value, and maximum short-period accelerations at T=0.1 sec are estimated by using the above empirical relations. Although only an infinite medium is considered in this study, and expected accelerations at prescribed stations computed by using the I-R relations of SHIMA (1977) have large variations, it can be said that the maximum accelerations obtained for the present fault models give plausible values. This paper investigates only whether a fault model, assuming a simple infinite medium, can reproduce the short-period maximum accelerations we experience during an earthquake. It is quite important to study the spatial distributions of maximum accelerations, as well as maximum velocities in the period range which brings about great damage in the epicentral area, by adopting a pertinent hypothetical earthquake model and taking more realistic multi-layered surface structures into account.
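The peak-versus-cutoff measurement can be mimicked on a synthetic record. The sketch below (a made-up four-tone "accelerogram"; a moving average stands in for the low-pass filter) tabulates the peak amplitude at several cutoff periods T:

```python
import math, random

random.seed(1)
dt = 0.01                                  # 100 samples per second
# synthetic record: four tones with random phases, 20 s long
acc = [sum(math.sin(2 * math.pi * f * k * dt + random.random())
           for f in (0.5, 2.0, 5.0, 10.0)) for k in range(2000)]

def lowpass_peak(signal, T):
    # moving average of width T seconds ~ low-pass with cutoff 1/T Hz
    n = max(1, int(T / dt))
    return max(abs(sum(signal[i:i + n]) / n)
               for i in range(len(signal) - n))

for T in (0.1, 0.5, 1.0, 2.0):
    print(f"T={T:4.1f} s  peak={lowpass_peak(acc, T):.3f}")
```

As in the paper's measurements, the peak amplitude falls as the cutoff period T grows, since ever more of the short-period energy is filtered out.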

Journal ArticleDOI
TL;DR: In this article, the second part of a two-part tutorial article on the design of fault-tolerant digital systems is presented, which discusses the general principles of fault tolerance and the use of redundancy to increase system reliability and availability.
Abstract: This article is the second part of a two-part tutorial on the design of fault-tolerant digital systems. The first part, published in November, presented the general principles of fault tolerance and discussed the use of redundancy to increase system reliability and availability; it also commented on the design of self-testing logic circuits. This second part concentrates more on what has been achieved in practice and describes the JPL-STAR computer, an experimental machine designed to evaluate many of the strategies for enhancing fault tolerance, though it is already obsolete. The article concludes with a brief survey of other fault-tolerant proposals and implementations.

01 Jun 1979
TL;DR: To attack the long-standing fault isolation problem in analog electronic circuits, this work focuses on two major problems: the presence of uncertainties such as indeterminacy, vagueness, and randomness that naturally arise during analog fault isolation, and the presence of topological restrictions inherent in specific circuit configurations.
Abstract: There are essentially three fundamental problems involved in achieving effective automatic generation of fault isolation tests for analog electronic systems: feature extraction, fault classification, and diagnosis. For practical electronic circuits having component drifts and measurement noise, how are we able to introduce fuzzy set concepts and provide methods to achieve fault classification and diagnosis? Along with the feature extraction problem, given an electrical network of known topology, what are the conditions for testability? To attack the long-standing fault isolation problem in analog electronic circuits, we have focused on two of the major problems. One is the presence of uncertainties such as indeterminacy, vagueness, randomness, and so on that naturally arise during the solution procedure of analog fault isolation. The other is the presence of topological restrictions inherent in specific circuit configurations.
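The kind of uncertainty handling the fuzzy-set approach targets can be sketched with triangular membership functions over a component's fractional drift (all class boundaries below are made-up values):

```python
# Fuzzy classification of a measured component value into drift classes.

def tri(x, a, b, c):
    # triangular membership function rising from a, peaking at b, falling to c
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify(measured, nominal):
    dev = abs(measured - nominal) / nominal       # fractional deviation
    grades = {
        "nominal": tri(dev, -0.10, 0.0, 0.10),
        "drifted": tri(dev, 0.05, 0.20, 0.40),
        "faulty":  min(1.0, max(0.0, (dev - 0.30) / 0.20)),
    }
    return max(grades, key=grades.get), grades

label, grades = classify(measured=1180.0, nominal=1000.0)
print(label, {k: round(v, 2) for k, v in grades.items()})
```

A measurement 18% off nominal gets a high "drifted" grade but nonzero ambiguity, which is precisely the vagueness a crisp pass/fail threshold would discard.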

Journal ArticleDOI
TL;DR: Minicomputer software for fault-location control in digital circuits is considered. It consists of a model of the unit under test in the form of alternative graphs, an algorithm of selective simulation that determines the internal test points with their expected reactions in the form of a diagnostic tree, and an algorithm for controlling the fault-location process in dialogue mode.
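A diagnostic tree for dialogue-mode fault location can be sketched as a table mapping each test point's observed reaction to either the next probe or a verdict (the circuit, test points, and verdicts below are hypothetical):

```python
# Each node names a test point; the observed pass/fail reaction selects
# either the next probe or a final fault-location verdict.
tree = {
    "TP1": {"pass": ("TP2", None), "fail": ("TP3", None)},
    "TP2": {"pass": (None, "unit OK"), "fail": (None, "fault in stage B")},
    "TP3": {"pass": (None, "fault in stage C"), "fail": (None, "fault in stage A")},
}

def locate(probe_results, node="TP1"):
    while True:
        outcome = probe_results[node]        # operator reports the reaction
        nxt, verdict = tree[node][outcome]
        if verdict:
            return verdict
        node = nxt

print(locate({"TP1": "fail", "TP3": "pass"}))
```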

Journal ArticleDOI
TL;DR: In the above paper, the authors have defined complete test, closed fault set, fault set graph, and undetected fault set as follows.
Abstract: In the above paper,1 the authors have defined complete test, closed fault set, fault set graph, and undetected fault set as follows.