
Showing papers in "IEEE Transactions on Computers in 1972"


Journal ArticleDOI
TL;DR: A hierarchical model of computer organizations is developed, based on a tree model using request/service type resources as nodes, which indicates that saturation develops when the fraction of task time spent locked out approaches 1/n, where n is the number of processors.
Abstract: A hierarchical model of computer organizations is developed, based on a tree model using request/service type resources as nodes. Two aspects of the model are distinguished: logical and physical. General parallel- or multiple-stream organizations are examined as to type and effectiveness, especially regarding intrinsic logical difficulties. The overlapped simplex processor (SISD) is limited by data dependencies. Branching has a particularly degenerative effect. The parallel processors [single-instruction stream-multiple-data stream (SIMD)] are analyzed. In particular, a nesting type explanation is offered for Minsky's conjecture: the performance of a parallel processor increases as log M instead of M (the number of data stream processors). Multiprocessors (MIMD) are subjected to a saturation syndrome based on general communications lockout. Simplified queuing models indicate that saturation develops when the fraction of task time spent locked out (L/E) approaches 1/n, where n is the number of processors. Resource sharing in multiprocessors can be used to avoid several other classic organizational problems.

1,982 citations
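The L/E saturation condition lends itself to a quick back-of-the-envelope check. The sketch below is a toy illustration of the stated condition only, not a model taken from the paper; the function name and the min-bound form are assumptions:

```python
def effective_speedup(n, lockout_fraction):
    """Toy bound (assumption): with each task spending a fraction L/E of
    its time locked out, the saturation condition n * (L/E) -> 1 caps the
    useful number of processors at roughly E/L."""
    return min(n, 1.0 / lockout_fraction)

# With L/E = 0.05, adding processors beyond about 20 buys little.
small = effective_speedup(8, 0.05)    # below saturation: all 8 useful
large = effective_speedup(64, 0.05)   # saturated near E/L = 20
```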


Journal ArticleDOI
TL;DR: It is established that the Fourier series expansion is optimal and unique with respect to obtaining coefficients insensitive to starting point and the amplitudes are pure form invariants as well as are certain simple functions of phase angles.
Abstract: A method for the analysis and synthesis of closed curves in the plane is developed using the Fourier descriptors FD's of Cosgriff [1]. A curve is represented parametrically as a function of arc length by the accumulated change in direction of the curve since the starting point. This function is expanded in a Fourier series and the coefficients are arranged in the amplitude/phase-angle form. It is shown that the amplitudes are pure form invariants as well as are certain simple functions of phase angles. Rotational and axial symmetry are related directly to simple properties of the Fourier descriptors. An analysis of shape similarity or symmetry can be based on these relationships; also closed symmetric curves can be synthesized from almost arbitrary Fourier descriptors. It is established that the Fourier series expansion is optimal and unique with respect to obtaining coefficients insensitive to starting point. Several examples are provided to indicate the usefulness of Fourier descriptors as features for shape discrimination and a number of interesting symmetric curves are generated by computer and plotted out.

1,973 citations
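The rotation invariance of the amplitudes can be illustrated with a simpler cousin of the paper's descriptors. The paper expands the cumulative turning angle; the complex-coordinate variant below is an assumption chosen for brevity. Rotating the whole curve multiplies every coefficient by the same unit complex factor, so the amplitudes are unchanged:

```python
import cmath

def fourier_descriptors(points):
    """DFT of a closed contour sampled as complex numbers z = x + iy."""
    N = len(points)
    return [sum(points[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) / N
            for k in range(N)]

# Rotating the curve maps z -> z * e^{i*phi}; by linearity of the DFT,
# every coefficient picks up the same unit factor and |c_k| is invariant.
square = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]
rotated = [z * cmath.exp(0.7j) for z in square]
amps = [abs(c) for c in fourier_descriptors(square)]
amps_rot = [abs(c) for c in fourier_descriptors(rotated)]
```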


Journal ArticleDOI
TL;DR: A class of algorithms, which may be used to determine similarity in a far more efficient manner than methods currently in use, is introduced in this paper; there may be a saving of computation time of two orders of magnitude or more by adopting this new approach.
Abstract: The automatic determination of local similarity between two structured data sets is fundamental to the disciplines of pattern recognition and image processing. A class of algorithms, which may be used to determine similarity in a far more efficient manner than methods currently in use, is introduced in this paper. There may be a saving of computation time of two orders of magnitude or more by adopting this new approach. The problem of translational image registration, used as an example throughout, is discussed, and the problems with the most widely used method, correlation, are explained. Simple implementations of the new algorithms are introduced to motivate the basic idea of their structure. Real data from ITOS-1 satellites are presented to give meaningful empirical justification for theoretical predictions.

1,063 citations
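The early-abandonment idea behind these algorithms can be sketched in a few lines for 1-D translational registration. This is a hypothetical miniature, not the paper's implementation: accumulate the error for a candidate offset and give up on that offset the moment the running total exceeds a threshold, so poor offsets are rejected after only a few operations.

```python
def ssda_best_offset(signal, template, threshold):
    """Sequential similarity sketch: abandon an offset early once its
    accumulated absolute difference exceeds the threshold."""
    best, best_err = None, float('inf')
    for off in range(len(signal) - len(template) + 1):
        err = 0
        for i, t in enumerate(template):
            err += abs(signal[off + i] - t)
            if err > threshold:        # early exit: source of the speedup
                break
        else:
            if err < best_err:
                best, best_err = off, err
    return best

sig = [0, 0, 3, 1, 4, 1, 5, 0, 0]
tpl = [3, 1, 4, 1, 5]
# the template occurs exactly at offset 2; all other offsets are
# abandoned after at most a handful of subtractions
```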


Journal ArticleDOI
TL;DR: A new model for associative memory, based on a correlation matrix, is suggested, in which any part of the memorized information can be used as a key and the memories are selective with respect to accumulated data.
Abstract: A new model for associative memory, based on a correlation matrix, is suggested. In this model information is accumulated on memory elements as products of component data. Denoting a key vector by q(p), and the data associated with it by another vector x(p), the pairs (q(p), x(p)) are memorized in the form of a matrix M_xq = c Σ_p x(p) q(p)^T, where c is a constant. A randomly selected subset of the elements of M_xq can also be used for memorizing. The recalling of a particular datum x(r) is made by a transformation x(r) = M_xq q(r). This model is failure tolerant and facilitates associative search of information; these are properties that are usually assigned to holographic memories. Two classes of memories are discussed: a complete correlation matrix memory (CCMM), and randomly organized incomplete correlation matrix memories (ICMM). The data recalled from the latter are stochastic variables, but the fidelity of recall is shown to have a deterministic limit if the number of memory elements grows without limit. A special case of correlation matrix memories is the auto-associative memory, in which any part of the memorized information can be used as a key. The memories are selective with respect to accumulated data. The ICMM exhibits adaptive improvement under certain circumstances. It is also suggested that correlation matrix memories could be applied for the classification of data.

774 citations
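The memorization and recall transformations are simple enough to state directly in code. This is a minimal sketch of the sum-of-outer-products construction and the matrix-vector recall; the variable names are assumptions. With mutually orthonormal keys, recall is exact; correlated keys introduce crosstalk:

```python
def memorize(pairs, c=1.0):
    """Accumulate the correlation matrix M = c * sum_p x(p) q(p)^T."""
    dim_q, dim_x = len(pairs[0][0]), len(pairs[0][1])
    M = [[0.0] * dim_q for _ in range(dim_x)]
    for q, x in pairs:
        for i in range(dim_x):
            for j in range(dim_q):
                M[i][j] += c * x[i] * q[j]
    return M

def recall(M, q):
    """Recall x(r) = M q(r)."""
    return [sum(M[i][j] * q[j] for j in range(len(q))) for i in range(len(M))]

# two (key, datum) pairs with orthonormal keys -> exact recall
pairs = [([1, 0, 0], [2.0, 5.0]), ([0, 1, 0], [7.0, 1.0])]
M = memorize(pairs)
```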


Journal ArticleDOI
TL;DR: A pattern-recognition method, making use of Fourier transformations to extract features which are significant for a pattern, is described and some considerations of the technical realizability of a fast preprocessing system for reading printed text are included.
Abstract: A pattern-recognition method, making use of Fourier transformations to extract features which are significant for a pattern, is described. The ordinary Fourier coefficients are difficult to use as input to categorizers because they contain factors dependent upon size and rotation as well as an arbitrary phase angle. From these Fourier coefficients, however, other more useful features can easily be derived. By using these derived property constants, a distinction can be made between genuine shape constants and constants representing size, location, and orientation. The usefulness of the method has been tested with a computer program that was used to classify 175 samples of handprinted letters, e.g., 7 sets of the 25 letters A to Z. In this test, 98 percent were correctly recognized when a simple nonoptimized decision method was used. The last section contains some considerations of the technical realizability of a fast preprocessing system for reading printed text.

649 citations


Journal ArticleDOI
TL;DR: The Nerode realization technique for synthesizing finite-state machines from their associated right-invariant equivalence relations is modified to give a method for synthesizing machines from finite subsets of their input-output behavior.
Abstract: The Nerode realization technique for synthesizing finite-state machines from their associated right-invariant equivalence relations is modified to give a method for synthesizing machines from finite subsets of their input-output behavior. The synthesis procedure includes a parameter that one may adjust to obtain machines that represent the desired behavior with varying degrees of accuracy and that consequently have varying complexities. We discuss some of the uses of the method, including an application to a sequential learning problem.

539 citations


Journal ArticleDOI
TL;DR: In this article, the stability of state transitions in an autonomous logical net of threshold elements is studied by the use of characteristics of threshold elements, and the stability degree of their remembering and recalling under noise disturbances is investigated theoretically.
Abstract: Various information-processing capabilities of self-organizing nets of threshold elements are studied. A self-organizing net, learning from patterns or pattern sequences given from outside as stimuli, "remembers" some of them as stable equilibrium states or state-transition sequences of the net. A condition where many patterns and pattern sequences are remembered in a net at the same time is shown. The stability degree of their remembrance and recalling under noise disturbances is investigated theoretically. For this purpose, the stability of state transition in an autonomous logical net of threshold elements is studied by the use of characteristics of threshold elements.

405 citations


Journal ArticleDOI
D.B. Armstrong1
TL;DR: A deductive method of fault simulation is described, which "deduces" the faults detected by a test at the same time that it simulates explicitly only the good behavior of the logic circuit.
Abstract: A deductive method of fault simulation is described, which "deduces" the faults detected by a test at the same time that it simulates explicitly only the good behavior of the logic circuit. For large logic circuits (at least several thousand gates) it is expected to be faster than "parallel" fault simulators, but it uses much more computer memory than do parallel simulators.

287 citations
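The deduction step can be illustrated for a single AND gate. This is a sketch of the style of rule such a simulator applies, with hypothetical fault naming; the paper's full procedure propagates fault lists through whole circuits. The output fault list is computed from the input lists and the good-simulation values, without ever simulating the faulty circuits explicitly:

```python
def and_gate_fault_list(values, lists, out_name):
    """Deduce the output fault list of an AND gate.

    values[i] -- good-simulation value of input i (0 or 1)
    lists[i]  -- set of faults that would complement input i
    Returns (good_output, set of faults that complement the output)."""
    zeros = [i for i, v in enumerate(values) if v == 0]
    if not zeros:                      # all inputs 1, good output is 1
        out = set().union(*lists)      # any flipped input flips the output
        out.add((out_name, 'sa0'))     # output stuck-at-0 also observed
        return 1, out
    # good output is 0: it flips only if every 0-input flips
    # while no 1-input flips
    flip = set.intersection(*(lists[i] for i in zeros))
    for i, v in enumerate(values):
        if v == 1:
            flip -= lists[i]
    flip.add((out_name, 'sa1'))
    return 0, flip

# a=1 carries no fault; b=0 would flip under the fault ('b', 'sa1')
good, L = and_gate_fault_list([1, 0], [set(), {('b', 'sa1')}], 'z')
```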


Journal ArticleDOI
TL;DR: A realization for an arbitrary logic function, using AND and EXCLUSIVE-OR gates, based on the Reed-Muller canonic expansion is given that has many of the desirable properties of "easily testable networks".
Abstract: Desirable properties of "easily testable networks" are given. A realization for an arbitrary logic function, using AND and EXCLUSIVE-OR gates, based on the Reed-Muller canonic expansion is given that has many of these desirable properties. If only permanent stuck-at-0 (s-a-0) or stuck-at-1 (s-a-1) faults occur in a single AND gate or only a single EXCLUSIVE-OR gate is faulty, the following results are derived on fault detecting test sets for the proposed networks: 1) only (n+4) tests, independent of the function being realized, are required if the primary inputs are fault-free; 2) only 2n_e additional inputs (which depend on the function realized) are required if the primary inputs can be faulty, where n_e is the number of variables appearing in an even number of product terms in the Reed-Muller canonical expansion of the function; and 3) the additional 2n_e inputs are not required if the network is provided with an observable point at the output of an extra AND gate.

278 citations
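The Reed-Muller canonical expansion on which the realization rests can be computed with the standard GF(2) butterfly. This is a generic sketch of the expansion itself, not of the paper's network construction or its test sets:

```python
def reed_muller_coeffs(truth_table):
    """Positive-polarity Reed-Muller (ANF) coefficients of f, computed by
    the usual in-place butterfly over GF(2).  truth_table[i] is f at the
    input point whose bits are the binary digits of i."""
    a = list(truth_table)
    step = 1
    while step < len(a):
        for blk in range(0, len(a), 2 * step):
            for j in range(blk + step, blk + 2 * step):
                a[j] ^= a[j - step]
        step *= 2
    return a

def evaluate_anf(coeffs, x):
    """XOR of product terms: monomial m contributes iff its variable set
    (the 1-bits of m) is a subset of the 1-bits of x."""
    acc = 0
    for m, c in enumerate(coeffs):
        if c and (m & x) == m:
            acc ^= 1
    return acc

tt = [0, 1, 1, 0, 1, 1, 0, 0]       # a 3-variable example function
coeffs = reed_muller_coeffs(tt)      # AND/XOR realization coefficients
```

The butterfly is an involution, so applying it twice recovers the truth table; evaluating the XOR of products reproduces f at every point.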


Journal ArticleDOI
TL;DR: Algorithms are presented for handling arithmetic assignment statements, DO loops and IF statement trees, and evidence is given that for very simple Fortran programs 16 processors could be effectively used operating simultaneously in a parallel or pipeline fashion.
Abstract: This paper is concerned with the problem of analyzing ordinary Fortran-like programs to determine how many of their operations could be performed simultaneously. Algorithms are presented for handling arithmetic assignment statements, DO loops and IF statement trees. The height of the parse trees of arithmetic expressions is reduced by distribution of multiplication over addition as well as the use of associativity and commutativity. DO loops are analyzed in terms of their index sets and subscript forms. Some general underlying assumptions about machine organization are also given. In terms of several measures which are defined, the results of experimental analyses are presented. About 20 Fortran IV programs consisting of nearly 1000 source cards were analyzed. Evidence is given that for very simple Fortran programs 16 processors could be effectively used operating simultaneously in a parallel or pipeline fashion. Thus, for medium or large size Fortran programs, machines consisting of multiples of a basic 16 processor unit could be used.

254 citations
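The payoff of tree-height reduction for a chain of additions is easy to quantify. The two formulas below are a toy calculation consistent with the paper's use of associativity and commutativity, not figures taken from the paper:

```python
import math

def left_deep_height(n):
    """Parse-tree height of ((...(a1+a2)+a3)...+an) as written:
    n-1 strictly dependent additions."""
    return n - 1

def balanced_height(n):
    """Height after pairing operands tournament-style using
    associativity and commutativity."""
    return math.ceil(math.log2(n))

# a1+...+a8: the serial chain needs 7 dependent additions, the balanced
# tree only 3 time steps given enough processors.
```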


Journal ArticleDOI
TL;DR: A set of techniques that can be used to optimally schedule a sequence of interrelated computational tasks on a multiprocessor computer system using a directed graph model to represent a computational process are described.
Abstract: This paper describes a set of techniques that can be used to optimally schedule a sequence of interrelated computational tasks on a multiprocessor computer system. Using a directed graph model to represent a computational process, two basic problems are solved here. First, given a set of computational tasks and the relationships between them, the tasks are scheduled such that the total execution time is minimized, and the minimum number of processors required to realize this schedule is obtained. The second problem is of a more general nature. Given k processors, the tasks are scheduled such that execution time is again minimized. Consideration is given to tasks of equal and unequal duration, and task preemption is not allowed.
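For contrast with the optimal techniques described above, a greedy list schedule for unit-time tasks shows the basic mechanics of precedence-constrained scheduling on k processors. This heuristic is an assumption for illustration and is NOT the paper's optimal method:

```python
def list_schedule(succ, k):
    """Greedy list schedule of unit-time tasks on k processors.
    succ maps each task to the tasks that depend on it.
    Returns the time step at which each task runs."""
    preds = {t: 0 for t in succ}         # count unsatisfied predecessors
    for ss in succ.values():
        for s in ss:
            preds[s] += 1
    ready = sorted(t for t, c in preds.items() if c == 0)
    step, when = 0, {}
    while ready:
        batch, ready = ready[:k], ready[k:]   # run up to k ready tasks
        for t in batch:
            when[t] = step
            for s in succ[t]:                 # release successors
                preds[s] -= 1
                if preds[s] == 0:
                    ready.append(s)
        ready.sort()
        step += 1
    return when

# diamond DAG a -> {b, c} -> d on two processors
sched = list_schedule({'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}, 2)
```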

Journal ArticleDOI
TL;DR: An infinite machine is postulated, one with an infinite instruction stack, infinite registers and memory, and an infinite number of functional units, to execute a program in parallel at maximum speed by executing each instruction at the earliest possible moment.
Abstract: This note reports the results of an examination of seven programs originally written for execution on a conventional computer (CDC-3600). We postulate an infinite machine, one with an infinite instruction stack, infinite registers and memory, and an infinite number of functional units. This machine will execute a program in parallel at maximum speed by executing each instruction at the earliest possible moment.

Journal ArticleDOI
TL;DR: The classical signal processing technique known as Wiener filtering has been extended to the processing of one- and two-dimensional discrete data by digital operations, with emphasis on reduction of the computational requirements.
Abstract: The classical signal processing technique known as Wiener filtering has been extended to the processing of one- and two-dimensional discrete data by digital operations with emphasis on reduction of the computational requirements. In the generalized Wiener filtering process a unitary transformation, such as the discrete Fourier, Hadamard, or Karhunen-Loeve transform, is performed on the data, which is assumed to be composed of additive signal and noise components. The transformed data is then modified by a filter function, and the inverse transformation is performed to obtain the discrete system output. The filter function is chosen to provide the best mean square estimate of the signal portion of the input data.
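The transform-filter-invert pipeline can be sketched with the DFT as the unitary transform. This is a minimal sketch: the per-coefficient scalar filter S_k / (S_k + N_k) is the diagonal special case of the filter function described above, and the power spectra here are assumed inputs, not estimated from data:

```python
import cmath

def dft(x, inverse=False):
    """Unnormalized forward DFT; inverse divides by N."""
    N, s = len(x), 1 if inverse else -1
    out = [sum(x[n] * cmath.exp(s * 2j * cmath.pi * k * n / N)
               for n in range(N)) for k in range(N)]
    return [v / N for v in out] if inverse else out

def transform_domain_filter(data, signal_power, noise_power):
    """Transform the data, scale coefficient k by S_k/(S_k+N_k),
    and invert the transform."""
    X = dft(data)
    H = [s / (s + n) for s, n in zip(signal_power, noise_power)]
    return dft([xk * hk for xk, hk in zip(X, H)], inverse=True)

# With zero assumed noise power the filter is the identity.
data = [1.0, 2.0, 0.0, -1.0]
out = transform_domain_filter(data, [1.0] * 4, [0.0] * 4)
```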

Journal ArticleDOI
TL;DR: A theory for describing and measuring the concavities of cellular complexes (digitized silhouettes) is developed that involves the use of the minimum-perimeter polygon and its convex hull.
Abstract: A theory for describing and measuring the concavities of cellular complexes (digitized silhouettes) is developed. This theory involves the use of the minimum-perimeter polygon and its convex hull.

Journal ArticleDOI
TL;DR: In this article, the complete pattern recognition problem is considered for the practical solution to a current significant medical question, and automated screening of chest radiographs for the detection of textural type abnormalities is approached from the view of: 1) preprocessing for standardization and data reduction; 2) feature extraction of characteristic measures (feature selection by optimization of classification accuracy); and 3) overall classification using training and test sets of selected chest radiographs.
Abstract: The complete pattern recognition problem is considered for the practical solution to a current significant medical question. Automated screening of chest radiographs for the detection of textural type abnormalities is approached from the view of: 1) preprocessing for standardization and data reduction; 2) feature extraction of characteristic measures (feature selection by optimization of classification accuracy); and 3) overall classification using training and test sets of selected chest radiographs.

Journal ArticleDOI
TL;DR: A new type of matrix operation is introduced that allows treatment of such models in a relatively straightforward manner and a form for the general response is developed in terms of this new matrix operation.
Abstract: This paper formulates a state-space model for linear iterative circuits having more than one spatial dimension. A new type of matrix operation is introduced that allows treatment of such models in a relatively straightforward manner. Finally, a form for the general response is developed in terms of this new matrix operation.

Journal ArticleDOI
TL;DR: It is shown that the product of the transforms of two sequences is congruent to the transform of their circular convolution, and a method of computing circular convolutions without quantization error and with only very few multiplications is revealed.
Abstract: A transform analogous to the discrete Fourier transform is defined in the ring of integers with a multiplication and addition modulo a Mersenne number. The arithmetic necessary to perform the transform requires only additions and circular shifts of the bits in a word. The inverse transform is similar. It is shown that the product of the transforms of two sequences is congruent to the transform of their circular convolution. Therefore, a method of computing circular convolutions without quantization error and with only very few multiplications is revealed.
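The transform-multiply-invert route to exact circular convolution can be demonstrated at toy size. In the sketch below, M = 2^5 - 1 = 31 with root 2 (chosen because 2 has order 5 mod 31); for clarity the code uses ordinary modular exponentiation rather than the shift-and-add arithmetic the paper exploits:

```python
def ntt(x, root, mod):
    """Number-theoretic transform: X[k] = sum_n x[n] * root^(nk) (mod mod)."""
    N = len(x)
    return [sum(x[n] * pow(root, n * k, mod) for n in range(N)) % mod
            for k in range(N)]

def circular_convolution_mod(a, b, root, mod):
    """Exact circular convolution: transform, multiply pointwise, invert."""
    N = len(a)
    A, B = ntt(a, root, mod), ntt(b, root, mod)
    C = [(x * y) % mod for x, y in zip(A, B)]
    inv_root, inv_N = pow(root, -1, mod), pow(N, -1, mod)
    return [(inv_N * c) % mod for c in ntt(C, inv_root, mod)]

# Mersenne number M = 2^5 - 1 = 31; multiplication by powers of 2 mod a
# Mersenne number is a circular bit shift, which is the hardware appeal.
conv = circular_convolution_mod([1, 2, 0, 0, 0], [3, 4, 0, 0, 0], 2, 31)
```

Since all true convolution values stay below 31, the result is exact, with no quantization error.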

Journal ArticleDOI
TL;DR: The problem is that of recovering error-free information when an error is detected at some stage in the processing of a program, and the solution is to determine the optimum points at which the state of the program should be stored to recover after any malfunction.
Abstract: Reliability is an important aspect of any system. On-line diagnosis, parity check coding, triple modular redundancy, and other methods have been used to improve the reliability of computing systems. In this paper another aspect of reliable computing systems is explored. The problem is that of recovering error-free information when an error is detected at some stage in the processing of a program. If an error or fault is detected while a program is being processed and if it cannot be corrected immediately, it may be necessary to run the entire program again. The time spent in rerunning the program may be substantial and in some real time applications critical. Recovery time can be reduced by saving states of the program (all the information stored in registers, primary and secondary storage, etc.) at intervals, as the processing continues. If an error is detected the program is restarted from its most recently saved state. However, a price is paid in saving a state in the form of time spent storing all the relevant information in secondary storage. Hence it is expensive to save the state of the program too often. Not saving any state of the program may cause an unacceptably large recovery time. The problem that we solve is the following. Determine the optimum points at which the state of the program should be stored to recover after any malfunction.
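The trade-off can be made concrete with a deliberately crude cost model. The model below, with equally spaced checkpoints and a half-segment average rework per failure, is an assumption for illustration; the paper solves for the optimum save points in general:

```python
def expected_cost(k, run_time, save_cost, failure_rate):
    """Toy model (assumption): k equally spaced checkpoints split the run
    into k+1 segments; each failure redoes, on average, half of the
    current segment."""
    segment = run_time / (k + 1)
    expected_failures = failure_rate * run_time
    return k * save_cost + expected_failures * segment / 2

def best_checkpoint_count(run_time, save_cost, failure_rate, k_max=1000):
    """Pick the k that minimizes saving overhead plus expected rework."""
    return min(range(k_max + 1),
               key=lambda k: expected_cost(k, run_time, save_cost, failure_rate))

k = best_checkpoint_count(run_time=100.0, save_cost=1.0, failure_rate=0.01)
```

Too few checkpoints inflate the rework term; too many inflate the saving term; the minimum sits in between.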

Journal ArticleDOI
TL;DR: This note gives a brief overview of six types of pattern recognition programs that: 1) preprocess, then characterize; 2) pre Process and characterize together; 3) pre process and characterize into a "recognition cone;" 4) describe as well as name; 5) compose interrelated descriptions; and 6) converse.
Abstract: This note gives a brief overview of six types of pattern recognition programs that: 1) preprocess, then characterize; 2) preprocess and characterize together; 3) preprocess and characterize into a "recognition cone;" 4) describe as well as name; 5) compose interrelated descriptions; and 6) converse.

Journal ArticleDOI
TL;DR: A method is given for transposition of 2^n × 2^n data matrices, larger than available high-speed storage, that should be stored on an external storage device allowing direct access.
Abstract: A method is given for transposition of 2^n × 2^n data matrices larger than available high-speed storage. The data should be stored on an external storage device allowing direct access. The performance of the algorithm depends on the size of the main storage, which should hold at least 2^(n+1) points. In that case the matrix has to be read in and written out n times.
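The pass structure is easy to see in an in-memory version. This is a sketch of the block-swapping skeleton only; the external algorithm performs each pass as a sweep over the storage device, holding two rows (2^(n+1) points) at a time:

```python
def block_transpose(a):
    """In-place transpose of a 2^n x 2^n matrix by n passes of
    off-diagonal block swaps, from half-size blocks down to single
    elements."""
    n = len(a)
    size = n // 2
    while size >= 1:
        for i0 in range(0, n, 2 * size):
            for j0 in range(0, n, 2 * size):
                for i in range(size):      # swap the two off-diagonal
                    for j in range(size):  # size x size blocks
                        a[i0 + i][j0 + size + j], a[i0 + size + i][j0 + j] = \
                            a[i0 + size + i][j0 + j], a[i0 + i][j0 + size + j]
        size //= 2
    return a

m = [[4 * r + c for c in range(4)] for r in range(4)]
block_transpose(m)
```

Each pass touches every row exactly once, matching the "read in and written out n times" cost of the external method.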

Journal ArticleDOI
TL;DR: It is shown that, by properly marking the virtual as well as the real vertices of an MPP, the MPP can serve as a precise representation of any regular complex, and that this representation is often an economical one.
Abstract: The minimum-perimeter polygon of a silhouette has been shown to be a means for recognizing convex silhouettes and for smoothing the effects of digitization in silhouettes. We describe a new method of computing the minimum-perimeter polygon (MPP) of any digitized silhouette satisfying certain constraints of connectedness and smoothness, and establish the underlying theory. Such a digitized silhouette is called a "regular complex," in accordance with the usage in piecewise linear topology. The method makes use of the concept of a stretched string constrained to lie in the cellular boundary of the digitized silhouette. We show that, by properly marking the virtual as well as the real vertices of an MPP, the MPP can serve as a precise representation of any regular complex, and that this representation is often an economical one.

Journal ArticleDOI
TL;DR: In this paper, it is shown how single-error correction in a residue arithmetic system can be accomplished in an efficient and fast manner using two redundant moduli.
Abstract: It is shown how single-error correction in a residue arithmetic system can be accomplished in an efficient and fast manner. Two redundant moduli are necessary. This construction is extended to multiple errors. Another method of error detection or correction for residue arithmetic is also described. Implementation of these methods is considered. An example of the single-error correction procedure is given.
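The single-error correction procedure can be sketched with small moduli. This is a toy reconstruction-by-elimination illustration; the specific moduli and the trial-discard search are assumptions, not necessarily the paper's implementation. With two redundant moduli, discarding each residue digit in turn and reconstructing from the rest lands in the legitimate range only when the discarded digit was the erroneous one:

```python
from math import prod

def crt(residues, moduli):
    """Chinese remainder reconstruction for pairwise coprime moduli."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

def correct_single_error(residues, moduli, legitimate_range):
    """Discard each digit in turn; accept the one reconstruction that
    falls inside the legitimate range."""
    for i in range(len(moduli)):
        rest_r = residues[:i] + residues[i + 1:]
        rest_m = moduli[:i] + moduli[i + 1:]
        x = crt(rest_r, rest_m)
        if x < legitimate_range:
            return x
    raise ValueError("more than one digit in error")

# information moduli 3, 5, 7 (legitimate range 105), redundant moduli 11, 13
moduli = [3, 5, 7, 11, 13]
encoded = [42 % m for m in moduli]
encoded[2] = 1                     # corrupt the digit for modulus 7
```

Discarding a correct digit leaves the error in place, and the one remaining redundant modulus pushes the reconstruction outside the legitimate range, so the search is unambiguous for a single error.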

Journal ArticleDOI
TL;DR: Current research programs based on the processing of side-looking radar imagery show that spatial alignment of the various parts of the image must be highly accurate if noise in the difference picture is to be reduced to acceptably low levels.
Abstract: The problem of change detection presents itself for imaging systems that view the same scene repeatedly. Current research programs based on the processing of side-looking radar imagery show that spatial alignment of the various parts of the image must be highly accurate if noise in the difference picture is to be reduced to acceptably low levels. Typically, the spatial alignment accuracy must be better than one-fourth of the diameter of the smallest resolvable feature in the imagery, and this often requires several hundred degrees of freedom in the performance of the map warp for images that are of the order of 10^7 picture cells (pixels) in size. Gray scale rectification of conjugate sampling points is less difficult, requiring typically only 10 to 20 percent as many degrees of freedom. Point by point adjustment for differences in mean transparency and contrast is employed. Recently developed equipment provides a continuous pipeline processing capability. With this equipment, each picture element of the second image is transformed with four degrees of freedom (two spatial and two gray scale). The digital correlator is capable of processing 4×10^5 six-bit picture elements per second when used in conjunction with a CDC 1700 computer.

Journal ArticleDOI
TL;DR: Further work is described on simple sets of parallel operations that detect "texture edges" (abrupt discontinuities in the average values of local picture properties), as well as spots or streaks that are texturally different from their surrounds.
Abstract: Further work is described on simple sets of parallel operations that detect "texture edges" (abrupt discontinuities in the average values of local picture properties), as well as spots or streaks that are texturally different from their surrounds.

Journal ArticleDOI
TL;DR: A new representation for faults in combinational digital circuits is presented, where faults that are inherently indistinguishable are identified and combined into classes that form a geometric structure that effectively subdivides the original circuit into fan-out-free segments.
Abstract: A new representation for faults in combinational digital circuits is presented. Faults that are inherently indistinguishable are identified and combined into classes that form a geometric structure that effectively subdivides the original circuit into fan-out-free segments. This fan-out-free characteristic allows a simplified analysis of multiple fault conditions. For certain circuits, including all two-level single-output circuits, it is shown that the detection of all single faults implies the detection of all multiple faults. The behavior of any circuit under fault conditions is represented in terms of the classes of indistinguishable faults. This results in a description of the faulty circuit by means of Boolean equations that are readily manipulated for the purpose of fault simulation or test generation. A connection graph interpretation of this fault representation is discussed. Heuristic methods for the selection of efficient tests without extensive computation are derived from these connection graphs.

Journal ArticleDOI
TL;DR: A rectangular logic array is described that can realize any combinational switching function and the realizations of a number of special functions include threshold functions, parity functions, symmetric functions, and universal logic functions.
Abstract: A rectangular logic array is described that can realize any combinational switching function. Straightforward analysis and synthesis procedures are described and the realizations of a number of special functions are given. These include threshold functions, parity functions, symmetric functions, and universal logic functions. Other properties of the array which are examined include diagnostic procedures, isolating defective cells, bounds on the array size, and possible implementations of the basic cell.

Journal ArticleDOI
Se June Hong1, A.M. Patel
TL;DR: A new class of codes for single-byte-error correction is presented in which the number of check bits is not restricted to multiples of b, as in the case of the codes derived from GF(2^b) codes.
Abstract: The error-correcting codes for symbols from GF(2^b) are often used for correction of byte errors in binary data. In these byte-error-correcting codes each check symbol in GF(2^b) is expressed as b binary check digits and each information symbol in GF(2^b), likewise, is expressed by b binary information digits. A new class of codes for single-byte-error correction is presented. The code is general in that the code structure does not depend on symbols from GF(2^b). In particular, the number of check bits is not restricted to multiples of b as in the case of the codes derived from GF(2^b) codes. The new codes are either perfect or maximal and are easily implementable using shift registers.

Journal ArticleDOI
TL;DR: This note investigates the increase in parallel execution rate as a function of the size of an instruction dispatch stack with lookahead hardware.
Abstract: This note investigates the increase in parallel execution rate as a function of the size of an instruction dispatch stack with lookahead hardware. Under the constraint that instructions are not dispatched until all preceding conditional branches are resolved, stack sizes as small as 2 or 4 achieve most of the parallelism that a hypothetically infinite stack would.

Journal ArticleDOI
TL;DR: It is shown experimentally that Martin's and Esau-Williams heuristics are, in fact, near-optimal heuristics in the sense that the solutions provided by these heuristics are generally very near the optimal solution.
Abstract: The problem of designing a minimum cost network with multipoint linkages which connects several remote terminals to a data processing center is studied. The important aspects of a teleprocessing network are queue behavior at the terminals and the cost and reliability of the entire system. In this paper it is assumed that the rate and manner in which information is requested at the terminals is known and that acceptable line loadings are given. An algorithm that determines (in principle) the optimum minimum cost network subject to reliability constraints is developed. A heuristic based on Vogel's approximation method (VAM) and two other heuristics presented by Martin and Esau-Williams were compared with each other and with the optimal algorithm. The Esau-Williams heuristic seems to be the one that gives the best solution and Martin's requires the least processing time. It is shown experimentally that Martin's and Esau-Williams heuristics are, in fact, near-optimal heuristics in the sense that the solutions provided by these heuristics are generally very near the optimal solution. In this paper we make the assumption that all lines of the network have the same capacity.

Journal ArticleDOI
W.W. Plummer1
TL;DR: Implementation of various priority rules (linear, ring, mixed) is discussed, and building large arbiters with trees of two-user arbiters is considered.
Abstract: When two or more processors attempt to simultaneously use a functional unit (memory, multiplier, etc.), an arbiter module must be employed to insure that processor requests are honored in sequence. The design of asynchronous arbiters is complicated because multiple input changes are allowed, and because inputs may change even if the circuit is not in a stable state. A practical arbiter and its implementation are presented. Implementation of various priority rules (linear, ring, mixed) is discussed, and building large arbiters with trees of two-user arbiters is considered.