
Showing papers in "IEEE Transactions on Electronic Computers in 1967"


Journal ArticleDOI
TL;DR: This paper treats the problem of automatic fault diagnosis for systems with multiple faults by means of a given arrangement of testing links (connection assignment), and a proper diagnosis can be arrived at for any diagnosable fault pattern.
Abstract: This paper treats the problem of automatic fault diagnosis for systems with multiple faults. The system is decomposed into n units u_1, u_2, ..., u_n, where a unit is a well-identifiable portion of the system which cannot be further decomposed for the purpose of diagnosis. By means of a given arrangement of testing links (connection assignment), each unit of the system tests a subset of units, and a proper diagnosis can be arrived at for any diagnosable fault pattern. Methods for optimal assignments are given for instantaneous and sequential diagnosis procedures.

1,389 citations
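
The connection-assignment idea can be made concrete with a small sketch. The Python fragment below is an illustration under simplified assumptions, not the paper's general method: it decodes the faulty unit from the test outcomes for a circular assignment in which unit i tests unit (i+1) mod n, assuming at most one fault, a fault-free tester that reports the tested unit's true status, and a faulty tester whose report is unreliable.

    # Illustrative single-fault diagnosis with a circular connection
    # assignment: unit i tests unit (i+1) % n.  PMC-style assumption:
    # a fault-free tester reports the truth; a faulty tester is unreliable.
    import random

    def syndrome(faulty, n, rng):
        """Test outcomes: s[i] = result of unit i testing unit (i+1) % n."""
        s = []
        for i in range(n):
            if i == faulty:                    # faulty tester: arbitrary outcome
                s.append(rng.choice([0, 1]))
            else:                              # fault-free tester: truthful
                s.append(1 if (i + 1) % n == faulty else 0)
        return s

    def diagnose(s):
        """Identify the faulty unit from the syndrome (at most one fault)."""
        n = len(s)
        ones = [i for i, v in enumerate(s) if v == 1]
        if not ones:
            return None                        # system is fault-free
        for i in ones:
            if s[(i - 1) % n] == 0:            # tester i is not itself accused,
                return (i + 1) % n             # so its accusation is truthful
        return (ones[0] + 1) % n

    rng = random.Random(1967)
    n = 7
    for f in range(n):
        assert diagnose(syndrome(f, n, rng)) == f
    print("all", n, "single faults diagnosed correctly")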


Journal ArticleDOI
TL;DR: It is proved that, by the procedures proposed here, the weight vector converges to the optimal one even under nonseparable pattern distributions, and there is an important tradeoff between speed and accuracy of convergence.
Abstract: This paper describes error-correction adjustment procedures for determining the weight vector of linear pattern classifiers under general pattern distribution. It is mainly aimed at clarifying theoretically the performance of adaptive pattern classifiers. In the case where the loss depends on the distance between a pattern vector and a decision boundary and where the average risk function is unimodal, it is proved that, by the procedures proposed here, the weight vector converges to the optimal one even under nonseparable pattern distributions. The speed and the accuracy of convergence are analyzed, and it is shown that there is an important tradeoff between speed and accuracy of convergence. Dynamical behaviors, when the probability distributions of patterns are changing, are also shown. The theory is generalized and made applicable to the case with general discriminant functions, including piecewise-linear discriminant functions.

450 citations
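
For concreteness, here is a minimal sketch of an error-correction procedure in this family (hypothetical parameters, not the authors' exact procedure): a linear classifier whose weight vector is adjusted only on errors, with a gain that decreases as 1/k, the kind of decreasing-gain rule that stochastic-approximation arguments use to obtain convergence under nonseparable distributions.

    # Sketch: error-correction adjustment of a linear classifier's weight
    # vector with a decreasing gain rho_k = c/k.  Illustrative only.
    import random

    def train(samples, epochs=50, c=1.0):
        # samples: list of (x, label) with x a tuple, label in {-1, +1};
        # an augmented component of 1.0 absorbs the threshold into w.
        dim = len(samples[0][0]) + 1
        w = [0.0] * dim
        k = 1
        for _ in range(epochs):
            for x, y in samples:
                xa = list(x) + [1.0]
                if y * sum(wi * xi for wi, xi in zip(w, xa)) <= 0:  # error
                    rho = c / k                     # decreasing gain
                    w = [wi + rho * y * xi for wi, xi in zip(w, xa)]
                k += 1
        return w

    # Nonseparable 1-D example: overlapping classes centered at -1 and +1.
    rng = random.Random(0)
    data = [((rng.gauss(+1, 1.0),), +1) for _ in range(200)] + \
           [((rng.gauss(-1, 1.0),), -1) for _ in range(200)]
    w = train(data)
    errors = sum(1 for x, y in data if y * (w[0] * x[0] + w[1]) <= 0)
    print([round(wi, 3) for wi in w], f"training errors: {errors}/400")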


Journal ArticleDOI
TL;DR: Two algorithms are presented: one, DALG-II, computes a test to detect a failure in acyclic logic circuits; the other, TEST-DETECT, ascertains all failures detected by a given test.
Abstract: Two algorithms are presented: one, DALG-II, computes a test to detect a failure in acyclic logic circuits; the other, TEST-DETECT, ascertains all failures detected by a given test. Both are based upon the utilization of a "calculus of D-cubes" that provides the means for effectively performing the necessary computations for very large logic circuits. Strategies for combining the two algorithms into an efficient diagnostic test generation procedure are given. APL specifications of the algorithms are given in an Appendix.

368 citations
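
DALG-II itself is beyond a footnote, but what it computes is easy to state: an input vector on which the good circuit and the faulty circuit disagree. The sketch below finds such a test by brute-force enumeration over a made-up three-input circuit (illustration only; the point of the D-algorithm is to avoid exactly this enumeration on very large circuits).

    # What "a test for a fault" means, shown by brute force rather than
    # by DALG-II.  The circuit and net names are made up for illustration.
    from itertools import product

    def circuit(a, b, c, fault=None):
        n1 = a & b
        if fault == "n1/0": n1 = 0          # n1 stuck-at-0
        n2 = n1 | c
        if fault == "n2/1": n2 = 1          # n2 stuck-at-1
        return n2

    def find_test(fault):
        for v in product([0, 1], repeat=3):
            if circuit(*v) != circuit(*v, fault=fault):
                return v                     # this vector detects the fault
        return None                          # fault is undetectable

    print("test for n1 stuck-at-0:", find_test("n1/0"))   # (1, 1, 0)
    print("test for n2 stuck-at-1:", find_test("n2/1"))   # (0, 0, 0)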


Journal ArticleDOI
TL;DR: The method is based on nonparametric estimation of a probability density function for each category to be classified so that the Bayes decision rule can be used for classification and has good extrapolating ability even when the number of training patterns is quite small.
Abstract: A practical method of determining weights for cross-product and power terms in the variable inputs to an adaptive threshold element used for statistical pattern classification is derived. The objective is to make it possible to realize general nonlinear decision surfaces, in contrast with the linear (hyperplanar) decision surfaces that can be realized by a threshold element using only first-order terms as inputs. The method is based on nonparametric estimation of a probability density function for each category to be classified so that the Bayes decision rule can be used for classification. The decision surfaces thus obtained have good extrapolating ability (from training patterns to test patterns) even when the number of training patterns is quite small. Implementation of the method, both in the form of computer programs and in the form of polynomial threshold devices, is discussed, and some experimental results are described.

257 citations
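
A minimal sketch of the underlying idea, with assumed Gaussian windows and smoothing width h (the paper's contribution is mapping the resulting discriminant onto polynomial threshold devices): estimate each class density nonparametrically from the training patterns, then classify with the Bayes rule.

    # Sketch: Parzen-window density estimate per class + Bayes rule.
    # Gaussian windows, width h, and equal priors are assumptions.
    import math, random

    def parzen_density(x, train, h):
        """Average of Gaussian kernels centered on the training patterns."""
        return sum(math.exp(-(x - t) ** 2 / (2 * h * h)) for t in train) \
            / (len(train) * h * math.sqrt(2 * math.pi))

    def classify(x, classes, h=0.5):
        # classes: {label: list of training patterns}
        return max(classes, key=lambda c: parzen_density(x, classes[c], h))

    rng = random.Random(3)
    classes = {
        "A": [rng.gauss(0.0, 1.0) for _ in range(15)],  # few training patterns
        "B": [rng.gauss(3.0, 1.0) for _ in range(15)],
    }
    for x in (-0.5, 1.4, 2.8):
        print(x, "->", classify(x, classes))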


Journal ArticleDOI
TL;DR: The consensus is extended from two to any number of terms, and it is shown that any prime implicant of a Boolean function is a generalized consensus; therefore the algorithm for the determination of the consensus relations can be used for finding the prime implicants.
Abstract: Given two implicants of a Boolean function, we can, by performing their consensus, find a third implicant. This operation has been used for finding the prime implicants of a Boolean function. In this paper, the consensus is extended from two to any number of terms. A property of these generalized consensus relations leads to a systematic way of finding them. It is shown that any prime implicant of a Boolean function is a generalized consensus; therefore the algorithm for the determination of the consensus relations can be used for finding the prime implicants. This new method is simpler than the usual process of iterative consensus. It is also shown in this paper that consensus theory can be used for finding the minimal sums of a Boolean function. The methods are applicable for any Boolean function, with or without don't care conditions, with a single or a multiple output.

159 citations
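
For comparison with the generalized consensus, the ordinary two-term iterated-consensus procedure is compact enough to sketch. Cubes are written as strings over {0, 1, -}; the example function and variable ordering below are made up for illustration.

    # Ordinary (two-term) iterated consensus.  Cubes: strings over {0,1,-}.
    def consensus(a, b):
        """Consensus of two cubes, or None if it does not exist."""
        opp = None
        for i, (x, y) in enumerate(zip(a, b)):
            if {x, y} == {"0", "1"}:
                if opp is not None:
                    return None              # more than one opposed variable
                opp = i
        if opp is None:
            return None
        return "".join("-" if i == opp else (x if y == "-" else y)
                       for i, (x, y) in enumerate(zip(a, b)))

    def covers(a, b):
        """True if cube a covers cube b."""
        return all(x == "-" or x == y for x, y in zip(a, b))

    def prime_implicants(cubes):
        cubes, changed = set(cubes), True
        while changed:
            changed = False
            for a in list(cubes):
                for b in list(cubes):
                    c = consensus(a, b)
                    if c and not any(covers(d, c) for d in cubes):
                        cubes = {d for d in cubes if not covers(c, d)} | {c}
                        changed = True
        return cubes

    # f = x'y + xz: the consensus term yz appears, and all three are prime.
    print(sorted(prime_implicants(["01-", "1-1"])))   # ['-11', '01-', '1-1']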



Journal ArticleDOI
TL;DR: This paper proves that a signal introduced at one end of a printed wire above a ground plane in the presence of a second parallel (passive) wire must break up into two signals traveling at different velocities.
Abstract: As digital system speeds increase and their sizes diminish, it becomes increasingly important to understand the mechanism of signal crosstalk (noise) in interconnections between logic elements. The worst case is when two wires run parallel for a long distance. Past literature has been unsuccessful in explaining crosstalk between parallel wires above a ground plane, because it was assumed that only one signal propagation velocity was involved. This paper proves that a signal introduced at one end of a printed wire above a ground plane in the presence of a second parallel (passive) wire must break up into two signals traveling at different velocities. The serious crosstalk implications are examined. The new terms slow crosstalk (SX), fast crosstalk (FX), and differential crosstalk (DX) are defined.

99 citations
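
The two velocities can be exhibited numerically. In a transmission-line model of the coupled pair, the propagation velocities are the reciprocal square roots of the eigenvalues of the product of the per-unit-length inductance and capacitance matrices; the matrix entries below are invented values for a printed pair in a mixed air/dielectric medium (in a homogeneous medium the two eigenvalues coincide and the effect disappears).

    # Sketch: two mode velocities from the L and C matrices of a coupled
    # pair.  The numerical values are made up for illustration.
    import numpy as np

    L = np.array([[0.40e-6, 0.08e-6],       # H/m, with mutual inductance
                  [0.08e-6, 0.40e-6]])
    C = np.array([[100e-12, -10e-12],       # F/m, with mutual capacitance
                  [-10e-12, 100e-12]])

    eigvals = np.linalg.eigvals(L @ C)      # eigenvalues are 1/v^2
    velocities = 1.0 / np.sqrt(eigvals.real)
    print("mode velocities (m/s):", np.sort(velocities))
    # Two distinct values: a signal launched on one wire excites both
    # modes and therefore splits into two components with different
    # speeds -- the "slow" and "fast" crosstalk of the paper.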


Journal ArticleDOI
TL;DR: Equations are developed which accurately describe the characteristic impedance and signal propagation delay for narrow microstrip transmission lines and are shown to yield exceptionally accurate results when the inherent inaccuracies of the physical measurements are considered.
Abstract: Equations are developed which accurately describe the characteristic impedance and signal propagation delay for narrow microstrip transmission lines. Differences in signal propagation delay for microstrip, strip line, and coaxial cables are compared as a function of dielectric constant. The characteristic impedance equation is verified through comparison with experimental results for impedance values from 40 to 150 ohms. The sensitivity of characteristic impedance to variations in physical parameters, such as dielectric constant, line width, and board thickness, is presented. Finally the equation is shown to yield exceptionally accurate results when the inherent inaccuracies of the physical measurements are considered.

96 citations
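
The closed-form approximations usually quoted for narrow microstrip, and commonly attributed to this paper, are easy to evaluate; the sketch below treats the constants and validity range as assumptions rather than as the paper's verified equations.

    # Narrow-microstrip approximations commonly attributed to this paper;
    # treat the exact constants as assumptions.
    import math

    def microstrip_z0(er, w, h, t):
        """Characteristic impedance (ohms); w, h, t in the same units."""
        return 87.0 / math.sqrt(er + 1.41) * math.log(5.98 * h / (0.8 * w + t))

    def microstrip_delay(er):
        """Signal propagation delay in ns per foot."""
        return 1.017 * math.sqrt(0.475 * er + 0.67)

    # Example: epoxy-glass board (er ~ 4.7), 10-mil line, 20-mil dielectric,
    # 1.4-mil copper -- all example numbers.
    print(f"Z0  = {microstrip_z0(4.7, 10.0, 20.0, 1.4):.1f} ohms")
    print(f"tpd = {microstrip_delay(4.7):.2f} ns/ft")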


Journal ArticleDOI
Arthur D. Friedman1
TL;DR: It is shown that a set of diagnostic tests designed for a redundant circuit under the single-fault assumption is not necessarily a valid test set if a fault occurrence is preceded by the occurrence of some (undetectable) redundant faults.
Abstract: It is shown that a set of diagnostic tests designed for a redundant circuit under the single-fault assumption is not necessarily a valid test set if a fault occurrence is preceded by the occurrence of some (undetectable) redundant faults. This is an additional reason (besides economy) for trying to eliminate certain kinds of redundancy from the circuit. However, single-fault analysis may remain valid for some types of redundancy which serve a useful purpose, such as the elimination of logic hazards in two-level circuits.

94 citations


Journal ArticleDOI
TL;DR: A new kind of machine is considered in which the continuous variable is represented as the probability of a pulse occurrence at a certain sampling time; the technique of random-pulse computation and its potential implications are examined.
Abstract: A new kind of machine is proposed, in which the continuous variable is represented as a probability of a pulse occurrence at a certain sampling time. It is shown that threshold gates can be used as simple and inexpensive processors such as adders and multipliers. In fact, for a random-pulse sequence, any Boolean operation among individual pulses will correspond to an algebraic expression among the variables represented by their respective average pulse-rates. So, any logical gate or network performs an algebraic operation. Considering the possible simplicity of these random-pulse processors, large systems can be built to perform parallel analog computation on large amounts of input data. The conventional analog computer has a topological simulation structure that can be readily carried over to the processing of functions of time and of one, two, or perhaps even three space variables. Facility of gating, inherent to any form of pulse-coding, allows the construction of stored-connection parallel analog computers made to process functions of time and two space variables. This paper considers this technique of random-pulse computation and its potential implications. Problems of realization, application examples, and alternate coding schemes are discussed. Speed, accuracy, and uncertainty dispersion are estimated. A brief comparison is made between random-pulse processors and biological neurons.

85 citations
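
The central trick is small enough to demonstrate: if two independent random-pulse streams carry values a and b as pulse probabilities, a single AND gate emits a stream whose average rate is the product ab. A sketch, with made-up parameter values:

    # Random-pulse (stochastic) computation: values in [0, 1] are carried
    # as pulse probabilities; one AND gate multiplies two independent
    # streams, since P(a AND b) = P(a) * P(b).
    import random

    def stream(p, n, rng):
        """Random-pulse stream: each sample is 1 with probability p."""
        return [1 if rng.random() < p else 0 for _ in range(n)]

    rng = random.Random(42)
    n = 100_000
    a, b = 0.6, 0.3
    sa, sb = stream(a, n, rng), stream(b, n, rng)

    product = [x & y for x, y in zip(sa, sb)]       # one AND gate
    print("estimated a*b:", sum(product) / n)       # ~0.18

    # A two-way multiplexer averages: pick each input with probability 1/2.
    sel = stream(0.5, n, rng)
    avg = [x if s else y for s, x, y in zip(sel, sa, sb)]
    print("estimated (a+b)/2:", sum(avg) / n)       # ~0.45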


Journal ArticleDOI
TL;DR: An algorithmic "solution" to the assignment problem of synchronous sequential machines is described, which can assign the input, state, and output symbols of a given machine so as to "minimize" the total logic.
Abstract: The purpose of this paper is to describe an algorithmic "solution" to the assignment problem of synchronous sequential machines. The figure of merit used provides a mathematical evaluation of the reduced dependencies that may exist in the set of logic equations. If desired, the algorithm can assign the input, state, and output symbols of a given machine so as to "minimize" the total logic, i.e., reduced dependencies of both the state and output logic on state and input variables are optimized. The method is nonenumerative in the sense that the first assignment found is optimal. A restricted version of the algorithm has been programmed for an IBM 7094 computer.

Journal ArticleDOI
TL;DR: A method is developed to obtain for any arbitrary sequential machine a corresponding machine which contains the original one and which is definitely diagnosable, and simple and systematic techniques are presented for the construction, and the determination of the length, of the distinguishing sequences of these machines.
Abstract: A sequential machine for which any input sequence of a specified length is a distinguishing sequence is said to be definitely diagnosable. A method is developed to obtain for any arbitrary sequential machine a corresponding machine which contains the original one and which is definitely diagnosable. Similarly, these techniques are applied to embed machines which are not information lossless of finite order, or which do not have the finite-memory property, into machines which contain either of these properties. Simple and systematic techniques are presented for the construction, and the determination of the length, of the distinguishing sequences of these machines. Efficient fault-detection experiments are developed for machines possessing certain special distinguishing sequences. A procedure is proposed for the design of sequential machines such that they will possess these special sequences, and for which short fault-detection experiments can be constructed.
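
The defining property is easy to check by enumeration on a small machine. The sketch below (the machine itself is invented for illustration) tests whether every input sequence of length k distinguishes all initial states, i.e., whether the machine is definitely diagnosable of order k.

    # A machine is definitely diagnosable of order k if every input
    # sequence of length k is a distinguishing sequence: its output
    # sequence determines the initial state.  Example machine is made up.
    from itertools import product

    # Mealy machine: delta[state][symbol] = (next_state, output)
    delta = {
        "A": {"0": ("B", 0), "1": ("A", 1)},
        "B": {"0": ("A", 1), "1": ("C", 0)},
        "C": {"0": ("C", 1), "1": ("B", 1)},
    }

    def response(state, seq):
        out = []
        for sym in seq:
            state, o = delta[state][sym]
            out.append(o)
        return tuple(out)

    def definitely_diagnosable(k):
        """True if every length-k input sequence separates all states."""
        states = list(delta)
        for seq in product("01", repeat=k):
            outs = [response(s, seq) for s in states]
            if len(set(outs)) < len(states):
                return False   # two initial states give identical outputs
        return True

    for k in range(1, 5):
        print(k, definitely_diagnosable(k))   # False, False, True, True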

Journal ArticleDOI
TL;DR: Every symmetric function of 2^m + 1 variables has a modulo 2 sum-of-products realization with at most 3^m terms; but there are functions of n variables which require at least 2^n/(n log_2 3) terms for sufficiently large n.
Abstract: The minimal number of terms required for representing any switching function as a modulo 2 sum of products is investigated, and an algorithm for obtaining economical realizations is described. The main result is the following: every symmetric function of 2^m + 1 variables has a modulo 2 sum-of-products realization with at most 3^m terms; but there are functions of n variables which require at least 2^n/(n log_2 3) terms for sufficiently large n.
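
The exponential lower bound follows from a counting argument; here is a standard reconstruction (not necessarily the authors' exact proof):

    There are $3^n$ possible product terms over $n$ variables (each variable
    appears plain, complemented, or not at all), so at most
    $\binom{3^n}{t} \le 3^{nt}$ distinct functions have a modulo 2
    sum-of-products realization with $t$ terms.  Covering all $2^{2^n}$
    switching functions of $n$ variables therefore forces
    \[
        3^{nt} \ge 2^{2^n}
        \quad\Longleftrightarrow\quad
        t \ge \frac{2^n}{n \log_2 3},
    \]
    so some function of $n$ variables needs at least $2^n/(n \log_2 3)$ terms.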

Journal ArticleDOI
TL;DR: This paper studies "fail-safe" properties of logical systems, finding the conditions that the basic logical functions of fail-safe logical systems should satisfy and also identifying the allowable failures for the basic logical function circuits.
Abstract: In this paper, the authors study "fail-safe" properties of logical systems, finding the conditions that the basic logical functions of fail-safe logical systems should satisfy and also identifying the allowable failures for the basic logical function circuits. With these results, the authors present a systematic representation of fail-safe logical systems, and an effective method of logical design for fail-safe systems.

Journal ArticleDOI
Fred H. Hardie1, Robert J. Suhocki1
TL;DR: A system of IBM 7090 Data Processing System computer programs was developed for the purpose of normal and/or fault simulation of the Saturn computer and several programming techniques were utilized, including logic block ordering, parallel fault simulation, stimulus bypassing, and functional simulation.
Abstract: A system of IBM 7090 Data Processing System computer programs was developed for the purpose of normal and/or fault simulation of the Saturn computer. This paper will describe the design of the simulator and cite several applications in the development of the Saturn computer. The architecture, plus several important characteristics of the simulator, are presented. These include the Design Automation input interface, the logic selection procedure, failure injection, the compilation procedure, logical simulation, and functional simulation. The ability to simulate up to 4000 Saturn instructions in either normal and/or fault environments (up to 33 faults per IBM 7090 run) will be demonstrated. Simulation of single, multiple, solid, or intermittent faults, plus an automated statistical analysis of intermittent fault simulation results, will be presented. The IBM 7090 execution time of a compiled logic simulator can be prohibitive. To minimize running time several programming techniques were utilized, including logic block ordering (to allow single-pass simulation), parallel fault simulation, stimulus bypassing, and functional simulation. These techniques are described. Several special forms of simulator output were developed. The use of this output and the applications of the simulator are presented, including design verification, test program evaluation, generation of a test point catalog, disagreement detector network evaluation, disagreement detector placement, and intermittent failure analysis.
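
One of the techniques named above, parallel fault simulation, rests on packing one machine per bit of a word so that a single pass of bitwise operations simulates the fault-free machine and many faulty copies at once. A sketch (the circuit and fault list are invented; the Saturn simulator packed up to 33 faults per run):

    # Parallel fault simulation: bit 0 carries the fault-free machine,
    # bits 1..k carry faulty copies, so one pass of word-wide bitwise
    # operations simulates all of them together.  Circuit is made up.
    MASK = (1 << 34) - 1          # 34 slots: good machine + 33 faults

    def replicate(bit):
        """Same input value in every machine copy."""
        return MASK if bit else 0

    def inject(value, slot, stuck):
        """Force `slot`'s copy of a signal to the stuck-at value."""
        return (value | (1 << slot)) if stuck else (value & ~(1 << slot))

    def simulate(a, b, c):
        n1 = a & b
        n1 = inject(n1, slot=1, stuck=0)    # fault 1: n1 stuck-at-0
        n2 = n1 | c
        n2 = inject(n2, slot=2, stuck=1)    # fault 2: n2 stuck-at-1
        return n2 & MASK

    out = simulate(replicate(1), replicate(1), replicate(0))
    good = out & 1
    for slot in (1, 2):
        detected = ((out >> slot) & 1) != good
        print(f"fault {slot} detected by input (1,1,0): {detected}")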

Journal ArticleDOI
TL;DR: A division algorithm similar to an earlier proposal is presented and shown to be more accurate for equal numbers of multiplications; the generation of a piecewise-linear initial approximation to the reciprocal of the divisor is also discussed.
Abstract: The use of a parallel multiplier for performing high-speed binary division requires that an algorithm be devised that obtains the quotient by means of multiplications and additions. Furthermore, its hardware implementation must be as simple and as fast as possible. A suitable algorithm, which starts from a first approximation to the reciprocal of the divisor, has already been proposed [1]. A similar algorithm is presented in this paper. The comparison between the two methods for equal numbers of multiplications shows that the latter is more accurate. Conversely, a given accuracy can often be obtained with a higher speed. The generation of a piecewise-linear initial approximation is also discussed.
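
The best-known algorithm of this multiplicative family is the Newton-Raphson reciprocal iteration, sketched below (illustrative; not necessarily the paper's exact recurrence or initial approximation). Each step uses two multiplications and one subtraction and roughly doubles the number of accurate bits, which is why the quality of the initial approximation sets the number of passes through the multiplier.

    # Division by multiplication: Newton-Raphson reciprocal iteration.
    # The linear initial approximation below is a crude illustration.
    def reciprocal(b, x0, steps):
        """Approximate 1/b starting from an initial approximation x0."""
        x = x0
        for _ in range(steps):
            x = x * (2.0 - b * x)      # the error is squared each step
        return x

    def divide(a, b):
        # scale b > 0 into [0.5, 1) so one linear guess covers the range
        e = 0
        while b >= 1.0:  b /= 2.0; e += 1
        while b <  0.5:  b *= 2.0; e -= 1
        x0 = 2.9142 - 2.0 * b          # linear initial guess on [0.5, 1)
        return a * reciprocal(b, x0, steps=4) / (2.0 ** e)

    print(divide(355.0, 113.0))        # ~3.14159...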

Journal ArticleDOI
TL;DR: The machine corresponds therefore to a "one-pass, load-and-go" compiler except, of course, that there is no translation to a different machine language.
Abstract: A system design is given for a computer capable of direct execution of FORTRAN language source statements. The allowed types of statements are the FORTRAN DO, GO TO, computed GO TO, Arithmetic, READ, PRINT, arithmetic IF, CONTINUE, PAUSE, DIMENSION and END statements. Up to two subscripts are allowed for variables and no FORMAT statement is needed. The programmer's source program is converted to a slightly modified form while being loaded and placed in a Program Area in lower memory. His original variable names and statement numbers are retained in a Symbol Table in upper memory, which also serves as the data storage area. During execution of the program each FORTRAN statement is read and interpreted at basic circuit speeds since the machine is a hardware interpreter for these statements. The machine corresponds therefore to a "one-pass, load-and-go" compiler except, of course, that there is no translation to a different machine language. It is estimated that the control circuitry for this machine will require on the order of 10,000 diodes and 100 flip-flops. This does not include arithmetic circuitry.

Journal ArticleDOI
TL;DR: A new approach to computer organization promises very effective utilization of LSI technology, and the total system is exceedingly flexible in both performance and instruction set.
Abstract: A new approach to computer organization promises very effective utilization of LSI technology. Functional partitioning of both the data path and control is employed. A dramatic reduction in array pin requirements by a factor of two or more is achieved. Arrays as small as a few dozen gates can be effectively utilized. The total system is exceedingly flexible in both performance and instruction set.

Journal ArticleDOI
James F. Gimpel1
TL;DR: An algorithm for finding for any given Boolean function a least-cost TANT network, analogous to the Quine-McCluskey algorithm for two-level AND/OR networks, is presented.
Abstract: A TANT network is a three-level network composed solely of AND-NOT gates (i.e., NAND gates) having only true (i.e., uncomplemented) inputs. The paper presents an algorithm for finding, for any given Boolean function, a least-cost (i.e., smallest number of gates) TANT network. The method used is similar to the Quine-McCluskey algorithm for two-level AND/OR networks. Certain functions realizable by input gates or second-level gates are preselected as candidates for possible use in an optimal network. This is analogous to the preselecting of prime implicants in two-level minimization. A network is then obtained by choosing a least-cost subset of the candidates which is adequate for realizing the function. This selection phase is analogous to the use of a prime implicant table in two-level minimization. In TANT minimization, however, an extension to the prime implicant table known as a CC-table must be used. The algorithm permits hand solution of typical four- and five-variable problems. A computer program has been written to handle more complex cases.

Journal ArticleDOI
TL;DR: A property filter is developed that is suitable for recognizing translation-rotation-dilation classes of two-dimensional images and is confirmed with printed and handwritten numerals by coupling it to a standard adaptive categorizer of a type assuming linear separability.
Abstract: A property filter is developed that is suitable for recognizing translation-rotation-dilation classes of two-dimensional images. Invariant outputs corresponding to such classes are obtained by employing two successive sampled spatial harmonic transforms. The required analyses are equivalent to taking inner products of pairs of vectors only one of which is variable in each case. Subsequently, the necessary network may be realized with fixed threshold logic, independent of the character classes to be recognized. The effectiveness of the property filter has been confirmed with printed and handwritten numerals by coupling it to a standard adaptive categorizer of a type assuming linear separability. There is further evidence to show that performance is improved by coupling a categorizer that does not assume linear separability.

Journal ArticleDOI
TL;DR: The required buffer size for a random (Poisson) input word rate with constant rate removal in the same order as arrival is considered and results are tabulated for a range of values and may be used as a design guide.
Abstract: The required buffer size for a random (Poisson) input word rate with constant rate removal in the same order as arrival is considered. The method of computation rests on analytical study. Results are tabulated for a range of values and may be used as a design guide. Applications may be found in both the partitioning of computer stores and in the communications field of data compression.
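
A Monte Carlo cross-check of the model is straightforward and shows the qualitative behavior the tables capture: overflow probability falls rapidly with buffer size at fixed utilization. The slot-based discretization and parameter values below are assumptions; the paper's results are analytical.

    # Simulation sketch: Poisson word arrivals, constant-rate removal in
    # arrival order, finite buffer.  Parameter values are made up.
    import math, random

    def overflow_fraction(lam, buffer_size, n_slots, rng):
        """Fraction of words lost: Poisson(lam) arrivals per slot, one
        word removed per slot, buffer_size cells (requires lam < 1)."""
        queue = arrived = lost = 0
        for _ in range(n_slots):
            # draw Poisson(lam) by inverse-transform sampling
            k, p = 0, math.exp(-lam)
            s, u = p, rng.random()
            while u > s:
                k += 1
                p *= lam / k
                s += p
            arrived += k
            lost += max(0, k - (buffer_size - queue))
            queue = min(buffer_size, queue + k)
            if queue:
                queue -= 1                  # constant-rate removal
        return lost / arrived

    rng = random.Random(7)
    for b in (2, 4, 8, 16):
        print(b, round(overflow_fraction(0.8, b, 200_000, rng), 5))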

Journal ArticleDOI
Wilhelm Anacker1, Chu Ping Wang1
TL;DR: A program-independent ultimate data processing rate is derived from characteristics of the processor and the fastest random access memory of the system, and degradation factors are determined by combining statistics of the data flow of actual programs and hardware parameters of the processors and all memories.
Abstract: Data transfers in computing systems with memory hierarchies usually prolong computing time and, consequently, cause degradation of system performance. A method to determine data processing rates and the relative utilization of memories for various system configurations under a variety of program loads is presented. According to this method, a program-independent ultimate data processing rate is derived from characteristics of the processor and the fastest random access memory of the system, and degradation factors are determined by combining statistics of the data flow of actual programs and hardware parameters of the processor and all memories. The statistics of data flow in the memory hierarchy are obtained by analyzing a number of recorded address traces of executed programs. The method presented permits quick evaluation of system performance for arbitrary time periods and for maximum and minimum concurrence of operation of processors and memories.

Journal ArticleDOI
TL;DR: Effectiveness of parallel processing, convergence properties of the successive approximation assignment and sequencing procedure, sensitivity to input parameter variation, the cost in computer time of the graph analysis, and comparison with more conventional SIMSCRIPT simulation are presented.
Abstract: This paper reports results of experiments on models of computational sequences and models of computer systems. Validating these models is a step in the evolution of methods for predicting complex computer system performance. A graph model representing computational sequences was implemented and mapped onto a model of computer systems using programmable assignment and sequencing strategies. An approximate procedure for a priori estimation of path length (computation time) through an assigned graph was checked against more conventional simulation. The graph model was also perturbed to probe sensitivity of estimates of operation times, cycle factors, and branching probabilities. Problems arising in numerical weather prediction, X-ray analysis, nuclear modeling, and graph computations were transformed into acyclic directed graphs and have undergone computer analysis. Effectiveness of parallel processing, convergence properties of the successive approximation assignment and sequencing procedure, sensitivity to input parameter variation, the cost in computer time of the graph analysis, and comparison with more conventional SIMSCRIPT simulation are presented. The reduction in time required to obtain an estimate of path length, compared to conventional simulation, is found to range from a little less than 10^2 to more than 10^4. Computational tests indicate that additional factors may be gained without severe loss in validity of the approximation.

Journal ArticleDOI
TL;DR: The presented skip distributions demand a relatively small number of skips and give a significant reduction in the carry propagation time.
Abstract: The methods for determining the carry skip distributions in adders with the minimum carry propagation time, and the minimum number of carry skip circuits for a given carry propagation time, are presented on the assumption that every adder position is included either in at most one skip or in at most two skips. Two types of adders with carry skips, each considered without and with end-around carry, are treated. The first is a classical adder composed of identical one-position adders; the second is a NOR-gate adder containing only 6 NOR gates per adder position and only 1 NOR gate per position of the carry line. These gate counts give the adder economic advantages and a relatively small carry propagation time compared with many other adders. The presented skip distributions demand a relatively small number of skips and give a significant reduction in the carry propagation time.
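
The mechanism being optimized is easy to sketch: within a group of adder positions, if every position propagates (a_i XOR b_i = 1 throughout the group), the incoming carry can bypass the whole group. The Python model below (group sizes chosen arbitrarily) is functionally identical to ripple addition; the skip path saves gate delays in hardware, and the paper's distributions choose group sizes to minimize the worst-case delay.

    # Carry-skip adder model: ripple within groups, skip across groups
    # whose group-propagate signal is true.
    def carry_skip_add(a_bits, b_bits, groups):
        """a_bits/b_bits: LSB-first bit lists; groups: group sizes."""
        n, carry, out, pos = len(a_bits), 0, [], 0
        for size in groups:
            block = range(pos, min(pos + size, n))
            group_propagate = all(a_bits[i] ^ b_bits[i] for i in block)
            c_in = carry
            for i in block:                   # ripple inside the group
                out.append(a_bits[i] ^ b_bits[i] ^ carry)
                carry = (a_bits[i] & b_bits[i]) | \
                        (carry & (a_bits[i] | b_bits[i]))
            if group_propagate:
                carry = c_in   # skip path: carry-out equals carry-in,
                               # available without rippling in hardware
            pos += size
        return out, carry

    def to_bits(x, n): return [(x >> i) & 1 for i in range(n)]

    a, b, n = 22953, 41175, 16
    s_bits, c = carry_skip_add(to_bits(a, n), to_bits(b, n), [4, 4, 4, 4])
    s = sum(bit << i for i, bit in enumerate(s_bits)) + (c << n)
    assert s == a + b
    print(a, "+", b, "=", s)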

Journal ArticleDOI
TL;DR: The dynamic programming approach to the design of optimal pattern recognition systems when the costs of feature measurements describing the pattern samples are of considerable importance is presented and two methods of reducing the dimensionality in computation are presented.
Abstract: This paper presents the dynamic programming approach to the design of optimal pattern recognition systems when the costs of feature measurements describing the pattern samples are of considerable importance. A multistage or sequential pattern classifier which requires, on the average, a substantially smaller number of feature measurements than that required by an equally reliable nonsequential classifier is defined and constructed through the method of recursive optimization. Two methods of reducing the dimensionality in computation are presented for the cases where the observed feature measurements are 1) statistically independent, and 2) Markov dependent. Both models, in general, provide a ready solution to the optimal sequential classification problem. A generalization in the design of optimal classifiers capable of selecting a best sequence of feature measurements is also discussed. Computer simulated experiments in character recognition are shown to illustrate the feasibility of this approach.

Journal ArticleDOI
TL;DR: Theorems concerning, and algorithms operating on, multiple output switching functions in cubical array notation are presented that detect partial symmetry and redundancy sets of input variables, and rapidly show equivalence between two functions using symmetry information.
Abstract: Functionally packaged logic can only be effectively utilized if the totality of switching functions that each package is capable of providing is recognized. Theorems concerning, and algorithms operating on, multiple output switching functions (possibly with don't care conditions) in cubical array notation are presented that 1) detect partial symmetry and redundancy sets of input variables, 2) determine the function generated by a package with some of its inputs tied to logical 1 or 0 or tied together, and 3) rapidly show equivalence between two functions using symmetry information. While manual execution of the algorithms is possible, they are computer oriented. Results from actual computer experimentation show their efficiency.

Journal ArticleDOI
TL;DR: Two digital machine organizations are suggested which use the fast Fourier transform algorithm, one for the case of N (the number of samples analyzed) being a power of 2 and one for the case of N being the product of two integers.
Abstract: The fast Fourier transform algorithm, reported by Cooley and Tukey, results in substantial computational savings and permits a considerable amount of parallel computation. By making use of these features, estimates of the spectral components of a time function can be calculated by a special-purpose digital machine while the function is being sampled. In this paper, two digital machine organizations are suggested which use the algorithm for the case of N (the number of samples analyzed) being a power of 2 and the case of N being the product of two integers. The first machine consists of shift registers and arithmetic units organized in stages which perform calculations in parallel. It can be used when N is a power of 2 and can accept signals being sampled at a rate exceeding 500 000 samples per second. The second machine requires fewer shift registers and only one arithmetic unit but cannot operate in a continuous manner. This means that either a dead time between adjacent records of data must be allowed or a time compression unit must be used. In the first case the obtainable sampling rate depends upon the dead time which can be allowed between adjacent records of data. In the second case sampling rates up to 8000 samples per second are feasible. For this analyzer, N is required to be expressible as the product of two integers.
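
For reference, the radix-2 recursion that the first machine organization unrolls into hardware stages (one stage per factor of 2) can be sketched in a few lines:

    # Radix-2 Cooley-Tukey FFT for N a power of 2, checked against a
    # direct DFT on a small example.
    import cmath

    def fft(x):
        n = len(x)                      # n must be a power of 2
        if n == 1:
            return x
        even, odd = fft(x[0::2]), fft(x[1::2])
        out = [0j] * n
        for k in range(n // 2):
            tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
            out[k] = even[k] + tw
            out[k + n // 2] = even[k] - tw
        return out

    x = [complex(i) for i in range(8)]
    direct = [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / 8)
                  for t in range(8)) for k in range(8)]
    print(max(abs(a - b) for a, b in zip(fft(x), direct)))   # ~1e-14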

Journal ArticleDOI
TL;DR: The two-dimensional "hidden-line" problem is the problem of determining, by means of a computer algorithm, which edges or parts of edges of an arbitrary, nonintersecting polygon are visible from a specified vantage point in the plane of the polygon.
Abstract: The two-dimensional "hidden-line" problem is the problem of determining, by means of a computer algorithm, which edges or parts of edges of an arbitrary, nonintersecting polygon are visible from a specified vantage point in the plane of the polygon. The problem is an important one in the field of computer graphics, and is encountered, for example, in using a computer to determine the portion of an island's coastline visible from a ship offshore. Some propositions are introduced that facilitate the solution of this problem. A general algorithm for the solution is described, and illustrative examples are given of hidden-line problems solved with a digital computer.
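
The core geometric test is simple to state: a boundary point q is visible from the vantage point v exactly when the segment from v to q properly crosses no edge of the polygon. The brute-force sketch below (coastline coordinates invented) checks vertex visibility this way; the paper's propositions and algorithm aim to do much better than testing every edge against every sight line.

    # Brute-force visibility of polygon vertices from a vantage point.
    # Degenerate grazing contacts are ignored for brevity.
    def seg_intersect(p1, p2, p3, p4):
        """Proper intersection test for segments p1p2 and p3p4."""
        def cross(o, a, b):
            return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
        d1, d2 = cross(p3, p4, p1), cross(p3, p4, p2)
        d3, d4 = cross(p1, p2, p3), cross(p1, p2, p4)
        return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

    def visible(v, q, polygon):
        n = len(polygon)
        for i in range(n):
            a, b = polygon[i], polygon[(i + 1) % n]
            if q in (a, b):            # skip the edges meeting q itself
                continue
            if seg_intersect(v, q, a, b):
                return False
        return True

    coast = [(0, 0), (4, 0), (4, 3), (2, 1), (0, 3)]  # made-up "island"
    ship = (2, 6)
    for corner in coast:
        print(corner, "visible from ship:", visible(ship, corner, coast))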

Journal ArticleDOI
TL;DR: The formulas provide a quantitative comparison of the effectiveness of alternative formats for real number representations as a function of the chosen base and bit allocation.
Abstract: Real numbers can be represented in a binary computer in the form i·B^e, where i is the integer part, B the base, and e the exponent. The accuracy of the representation will depend upon the number of bits allocated to the integer part and exponent part as well as what base is chosen. If L(i) and L(e) are the numbers of bits allocated to the magnitudes of the integer and exponent parts and we define I = 2^L(i) and E = 2^L(e), the exponent range is given by B^(±E), the maximum relative representation error is given by B/(2I), and the average relative representation error is given by (B−1)/(4·I·ln B). The formulas provide a quantitative comparison of the effectiveness of alternative formats for real number representations.
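
Evaluating the formulas for a fixed bit budget makes the comparison concrete. In the example below (the bit allocations are assumptions), base 2 with L(e) = 7 and base 16 with L(e) = 5 give the same exponent range of 2^(±128) for the same 35-bit total, yet the base-2 format has the smaller maximum relative error, while the average errors come out nearly equal.

    # Evaluating the paper's error formulas for two example formats.
    import math

    def representation_stats(base, bits_int, bits_exp):
        I, E = 2 ** bits_int, 2 ** bits_exp
        return {
            "exponent range": f"{base}^(+/-{E})",
            "max rel error": base / (2 * I),
            "avg rel error": (base - 1) / (4 * I * math.log(base)),
        }

    # Same 35-bit budget and same exponent range 2^(+/-128):
    for base, bi, be in ((2, 28, 7), (16, 30, 5)):
        print(base, representation_stats(base, bi, be))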

Journal ArticleDOI
TL;DR: This paper discusses cyclic to acyclic transformations performed on graphs representing computational sequences, critical to the development of models of computations and computer systems for performance prediction.
Abstract: This paper discusses cyclic to acyclic transformations performed on graphs representing computational sequences. Such transformations are critical to the development of models of computations and computer systems for performance prediction. The nature of cycles in computer programs for parallel processors is discussed. Transformations are then developed which replace cyclic graph structures by mean-value equivalent acyclic structures. The acyclic equivalents retain the noncyclic part of the structure in the original graph by evaluating a multiplicative factor associated with the mean time required for each vertex execution in the original graph. Bias introduced in the acyclic approximation is explored.
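
A one-line instance of such a mean-value factor (a standard reconstruction, not necessarily the paper's exact rule): if a loop body is re-entered with branch-back probability p after each execution, the number of executions is geometric, so the acyclic equivalent keeps one copy of the body with its mean vertex time scaled by

    \[
        \mathbb{E}[\text{executions}]
        = \sum_{k=1}^{\infty} k\,(1 - p)\,p^{\,k-1}
        = \frac{1}{1 - p}.
    \]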