Journal ArticleDOI

A Multilayer Iterative Circuit Computer

01 Dec 1963-IEEE Transactions on Electronic Computers (IEEE)-Vol. 12, Iss: 6, pp 781-790
TL;DR: The notable properties of the iterative structure are augmented by the inclusion of these features, resulting in a machine with iterative computational structure that includes a form of control or interpretation unit.
Abstract: A multilayer iterative circuit computer (I.C.C.) is described, which is capable of dealing with problems involving spatial relationships between the variables, in addition to the inherent multiprogramming capabilities of this type of machine organization. Some of the novel features presented are: 1) A path-building method which retains the short access-time characteristic of the common bus system, while still permitting the simultaneous operation of several paths in the network without mutual interference. Furthermore, the connecting method allows communication between the modules in a one-to-one, one-to-many, or many-to-many way. 2) A specialization in the functions of the individual layers, separating the flow of control signals from the flow of information. This step-by-step treatment of the instructions makes it possible to preinterpret them before the actual execution, thus permitting the inclusion of instructions acting on many-to-many operands. 3) Three-phase operation, with all phases active simultaneously, each in a different layer and operating on a different instruction. Because the phases overlap in time, the total effective time per instruction remains the same. Once an instruction has been executed, the partially processed results are transferred to the next layer, and the now vacant layer starts processing the next instruction. The notable properties of the iterative structure are thus augmented by the inclusion of these features, resulting in a machine with an iterative computational structure that includes a form of control or interpretation unit.
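The three-phase overlap described in the abstract is an early form of instruction pipelining: once the pipeline is full, every layer is busy in every phase, so one instruction completes per phase even though each instruction passes through three layers. A minimal sketch (the function and instruction names are illustrative, not from the paper):

```python
def pipeline_schedule(instructions, n_layers=3):
    """Return, for each phase, which instruction occupies each layer.

    Layer 0 receives a new instruction each phase; partially processed
    results advance one layer per phase, mirroring the paper's scheme of
    vacating a layer as soon as its phase of an instruction is done.
    """
    schedule = []
    n_phases = len(instructions) + n_layers - 1
    for phase in range(n_phases):
        row = []
        for layer in range(n_layers):
            i = phase - layer  # instruction i entered at phase i
            row.append(instructions[i] if 0 <= i < len(instructions) else None)
        schedule.append(row)
    return schedule

sched = pipeline_schedule(["I1", "I2", "I3", "I4"])
# By phase 2 the pipeline is full: all three layers hold different instructions.
```

Four instructions finish in six phases rather than twelve, which is the sense in which the effective time per instruction stays constant despite the three-step treatment.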


Citations
Journal ArticleDOI
Pease
TL;DR: This paper explores the possibility of using a large-scale array of microprocessors as a computational facility for the execution of massive numerical computations with a high degree of parallelism.
Abstract: This paper explores the possibility of using a large-scale array of microprocessors as a computational facility for the execution of massive numerical computations with a high degree of parallelism. By microprocessor we mean a processor realized on one or a few semiconductor chips that include arithmetic and logical facilities and some memory. The current state of LSI technology makes this approach a feasible and attractive candidate for use in a macrocomputer facility.

549 citations

01 Jan 1969
TL;DR: The subject of this thesis is the development of the design for a specially-organized, general-purpose computer which performs matrix operations efficiently.
Abstract: The subject of this thesis is the development of the design for a specially organized, general-purpose computer which performs matrix operations efficiently. The content of the thesis is summarized as follows: First, a review is made of the relevant work which has been done with microcellular and macrocellular techniques. Second, the discrete Kalman filter is described as an example of the type of problem for which this computer is efficient. Third, a detailed design for a cellular, array-structured computer is presented. Fourth, a computer program which simulates the cellular computer is described. Fifth, the recommendation is made that one cell and the associated control circuits be constructed to determine the feasibility of producing a hardware realization of the entire computer. A CELLULAR COMPUTER TO IMPLEMENT THE KALMAN FILTER ALGORITHM

473 citations


Cites background from "A Multilayer Iterative Circuit Comp..."

  • ...3(a) it is clear that cells (5,2) and (4,5) are the only two lower-left corners....

  • ...It consists of three layers of modules with each layer being a (5)...

Journal ArticleDOI
TL;DR: This paper surveys research on microcellular techniques, with particular interest in those appropriate for realization by modern batch-fabrication processes, since the rapid emergence of reliable and economical batch-fabricated components is probably the most important current trend in the field of digital circuits.
Abstract: This paper is a survey of research on microcellular techniques. Of particular interest are those techniques that are appropriate for realization by modern batch-fabrication processes, since the rapid emergence of reliable and economical batch-fabricated components represents probably the most important current trend in the field of digital circuits. First the manufacturing methods for batch-fabricated components are reviewed, and the advantages to be realized from the application of the principles of cellular logic design are discussed. Also two categorizations of cellular arrays are made: in terms of the complexity of each cell (only low-complexity cells are considered) and in terms of the various application areas. After a survey of very early techniques that can be viewed as exemplifying cellular approaches, modern-day cellular arrays are discussed on the basis of whether they are fixed cell-function arrays or variable cell-function arrays. In the fixed cell-function arrays the switching function produced by each cell is fixed; the cell parameters are used only in the modification of the interconnection structure. Several versions of NOR gate arrays, majority gate arrays, adder arrays, and others are reviewed in terms of synthesis techniques and array growth rates. Similarly, the current status of research is summarized for variable cell-function arrays, where not only the interconnection structure but also the function produced by each cell is determined by parameter selection. These arrays include various general function cascades, cutpoint arrays, and cobweb arrays, for example. Again, the various cell types that have been considered are pointed out, as well as synthesis procedures and growth rates appropriate for them. Finally, several areas requiring further research effort are summarized. These include the need for more realistic measures of array growth rates, the need for synthesis techniques for multiple-function arrays and programmable arrays, and the need for fault-avoidance algorithms in integrated structures.
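The survey's distinction between fixed and variable cell-function arrays can be illustrated with a toy fixed cell-function array: every cell computes the same function (here NOR), and the only per-cell "parameters" are which earlier signals feed it. This sketch and its wiring table are hypothetical, not taken from the survey:

```python
def nor(*inputs):
    """The one fixed cell function: NOR of the selected input signals."""
    return int(not any(inputs))

def run_array(primary_inputs, wiring):
    """Evaluate a fixed cell-function array.

    `wiring` lists, for each cell in order, the indices of the signals
    feeding it.  Signals 0..k-1 are the primary inputs; cell i produces
    signal k+i.  Only the interconnection varies, never the cell function.
    """
    signals = list(primary_inputs)
    for feeds in wiring:
        signals.append(nor(*(signals[j] for j in feeds)))
    return signals[-1]

# XOR realized purely by rewiring five identical NOR cells.
xor_wiring = [(0, 1), (0, 2), (1, 2), (3, 4), (5,)]
```

A variable cell-function array, by contrast, would add a per-cell parameter selecting which function the cell computes, not just where its inputs come from.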

218 citations

Journal ArticleDOI
TL;DR: Different models for parallel computation such as graph models, Petri nets, parallel flowcharts, and flow graph schemata are introduced and prediction of performance of multiprocessors either through analysis of models or by simulation is examined.
Abstract: In this paper, several theoretical aspects of multiprocessing are surveyed. First, we look at the language features that help in exploiting parallelism. The additional instructions needed for a multiprocessor architecture; problems, such as mutual exclusion, raised by the concurrent processing of parts of a program; and the extensions to existing high-level languages are examined. The methods for automatic detection of parallelism in current high-level languages are then reviewed at both the inter- and intra-statement levels. The following part of the paper deals with more theoretical aspects of multiprocessing. Different models for parallel computation, such as graph models, Petri nets, parallel flowcharts, and flow graph schemata, are introduced. Finally, prediction of the performance of multiprocessors, either through analysis of models or by simulation, is examined. In an appendix, an attempt is made toward the classification of existing multiprocessors.

149 citations

Journal ArticleDOI
TL;DR: This paper proposes a new kind of machine, in which a continuous variable is represented as the probability of a pulse occurrence at a given sampling time, and examines the technique of random-pulse computation and its potential implications.
Abstract: A new kind of machine is proposed, in which the continuous variable is represented as a probability of a pulse occurrence at a certain sampling time. It is shown that threshold gates can be used as simple and inexpensive processors such as adders and multipliers. In fact, for a random-pulse sequence, any Boolean operation among individual pulses will correspond to an algebraic expression among the variables represented by their respective average pulse rates. So, any logical gate or network performs an algebraic operation. Considering the possible simplicity of these random-pulse processors, large systems can be built to perform parallel analog computation on large amounts of input data. The conventional analog computer has a topological simulation structure that can be readily carried over to the processing of functions of time and of one, two, or perhaps even three space variables. Facility of gating, inherent to any form of pulse coding, allows the construction of stored-connection parallel analog computers made to process functions of time and two space variables. This paper considers this technique of random-pulse computation and its potential implications. Problems of realization, application examples, and alternate coding schemes are discussed. Speed, accuracy, and uncertainty dispersion are estimated. A brief comparison is made between random-pulse processors and biological neurons.
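The core idea of the abstract, that a Boolean operation on pulse streams becomes an algebraic operation on pulse rates, is easy to check by simulation: ANDing two independent random-pulse streams yields a stream whose rate is the product of the inputs' rates. A minimal sketch (names and parameters are illustrative):

```python
import random

def bernoulli_stream(p, n, rng):
    """A random-pulse stream: each sample is a pulse with probability p."""
    return [rng.random() < p for _ in range(n)]

def stochastic_multiply(p, q, n=100_000, seed=0):
    """Multiply two values in [0, 1] with a single AND gate.

    The AND of independent streams pulses with probability p*q, so the
    measured pulse rate estimates the product.
    """
    rng = random.Random(seed)
    a = bernoulli_stream(p, n, rng)
    b = bernoulli_stream(q, n, rng)
    pulses = sum(x and y for x, y in zip(a, b))  # the AND gate
    return pulses / n
```

The estimate converges slowly (standard error shrinks as 1/sqrt(n)), which is the speed/accuracy trade-off the paper's "uncertainty dispersion" estimates address.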

85 citations

References
Journal ArticleDOI
S. H. Unger
01 Oct 1958
TL;DR: A stored program computer is described which can handle spatial problems by operating directly on information in planar form without scanning or using other techniques for transforming the problem into some other domain.
Abstract: A general purpose digital computer can, in principle, solve any well defined problem. At many tasks, such as the solution of systems of linear equations, these machines are thousands of times as fast as human beings. However, they are relatively inept at solving many problems where the data is arranged naturally in a spatial form. For example, when it comes to playing chess or recognizing sophisticated patterns, present day machines cannot match the performance of their designers. The difficulty in such cases appears to be that conventional computers can actively cope with only a small amount of information at any one time. (This circumstance is aptly illustrated by the title of an article by Samuel, "Computing Bit by Bit.") It appears that efficient handling of problems of the type mentioned above cannot be accomplished without some form of parallel action. A stored program computer is described which can handle spatial problems by operating directly on information in planar form without scanning or using other techniques for transforming the problem into some other domain. The order structure of this machine is explained and illustrated by a few simple programs. An estimate of the size of the computer (based on one possible design) is given. Programs have been written that enable the machine to recognize alphabetic characters independent of position, proportion, and size.

127 citations

Proceedings ArticleDOI
01 Dec 1959
TL;DR: This paper describes a universal computer capable of simultaneously executing an arbitrary number of sub- programs, the number of such sub-programs varying as a function of time under program control or as directed by input to the computer.
Abstract: This paper describes a universal computer capable of simultaneously executing an arbitrary number of sub-programs, the number of such sub-programs varying as a function of time under program control or as directed by input to the computer. Three features of the computer are:(1) The structure of the computer is a 2-dimensional modular (or iterative) network so that, if it were constructed, efficient use could be made of the high element density and "template" techniques now being considered in research on microminiature elements.(2) Sub-programs can be spatially organized and can act simultaneously, thus facilitating the simulation or direct control of "highly-parallel" systems with many points or parts interacting simultaneously (e.g. magneto-hydrodynamic problems or pattern recognition).(3) The computer's structure and behavior can, with simple generalizations, be formulated in a way that provides a formal basis for theoretical study of automata with changing structure (cf. the relation between Turing machines and computable numbers).

104 citations

Proceedings ArticleDOI
03 May 1960
TL;DR: An example of a computer, intended as a prototype of a practical computer, having an iterative structure and capable of processing arbitrarily many words of stored data at the same time, each by a different sub-program if desired is discussed.
Abstract: The paper first discusses an example of a computer, intended as a prototype of a practical computer, having an iterative structure and capable of processing arbitrarily many words of stored data at the same time, each by a different sub-program if desired. Next a mathematical characterization is given of a broad class of computers satisfying the conditions just stated. Finally the characterization is related to a program aimed at establishing a theory of adaptive systems via the concept of automaton generators.

51 citations

Journal ArticleDOI
Edward J. McCluskey
TL;DR: An iterative network is a combinational switching circuit which consists of a series of identical "cells" or sub-nets; for example, the stages of a parallel binary adder.
Abstract: An iterative network is a combinational switching circuit which consists of a series of identical "cells" or sub-networks; for example, the stages of a parallel binary adder. A formal design method for iterative networks is presented. This is similar to the flow-table technique for designing sequential circuits.
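The abstract's own example, the stages of a binary adder, shows what an iterative network is: one cell design repeated, with each cell passing a signal (here the carry) to the next, much as a sequential circuit passes state between time steps. A minimal sketch in Python (function names are illustrative):

```python
def full_adder_cell(a, b, carry_in):
    """One identical cell of the iterative network: a full-adder stage."""
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def ripple_adder(a_bits, b_bits):
    """Cascade identical cells over bit lists given LSB first.

    The carry threading through the cascade plays the role that internal
    state plays in a sequential circuit, which is the basis of the
    flow-table analogy.
    """
    carry = 0
    out = []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder_cell(a, b, carry)
        out.append(s)
    return out + [carry]  # final carry becomes the top sum bit
```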

36 citations

Proceedings ArticleDOI
Allen Newell
03 May 1960
TL;DR: This paper speculates on how to program a machine, suited to microelectronic components, to act as an intelligent technician, combining human-like problem solving with the volume processing capabilities normally associated with digital computers.
Abstract: This paper speculates on how to program a machine, suitable for microelectronic components, to be an intelligent technician. The point of departure is a class of machines described by J. H. Holland in a concurrent paper entitled, "On Iterative Circuit Computers Constructed of Microelectronic Components and Systems." These machines consist of a regular lattice of active modules, each possessing both processing and memory functions. The goal is a machine with the problem-solving capabilities of a smart human technical assistant, and the volume processing capabilities normally associated with digital computers. This goal is chosen because it coincides with many current developments. After discussing the eventual capabilities desired and the most striking features of Holland's machines, the speculation proceeds by considering the basic organization for information processing. This is followed by briefer treatments of the organization for problem solving, supervision, interpretation and production.

19 citations