
Showing papers on "Degree of parallelism published in 1977"


Journal ArticleDOI
Pease
TL;DR: This paper explores the possibility of using a large-scale array of microprocessors as a computational facility for the execution of massive numerical computations with a high degree of parallelism.
Abstract: This paper explores the possibility of using a large-scale array of microprocessors as a computational facility for the execution of massive numerical computations with a high degree of parallelism. By microprocessor we mean a processor realized on one or a few semiconductor chips that include arithmetic and logical facilities and some memory. The current state of LSI technology makes this approach a feasible and attractive candidate for use in a macrocomputer facility.

549 citations


Journal ArticleDOI
TL;DR: The degree of parallelism of computational processes is defined by the intensity with which the possibilities of simultaneously changing the values of many variables are utilized.
Abstract: The computational processes taking place in data-processing systems are generated at various levels of interaction between the hardware and the programs. The fundamental mathematical model used to study such processes is the composition of two systems: control and information media. If each of these systems is represented in the form of an automaton, we arrive at the concept of the discrete converter [1]. The corresponding model, usually considered in the theory of programming, is an interpretational program scheme [2]. One of the important resources for raising the performance of data-processing systems is the utilization of parallel computations. The possibility of rendering computational processes parallel is determined by the structure of the information medium, whose state is usually represented by the mapping b: R → D, where R is a set of memory elements (variables, registers, etc.), and D is a set of elementary values. The degree of parallelism of computational processes is defined by the intensity with which the possibilities of simultaneously changing the values of many variables are utilized.
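The abstract's definition can be illustrated with a minimal sketch (purely illustrative, not from the paper; the names `state` and `parallel_step` are assumptions): the state of the information medium is a mapping b: R → D from memory elements to values, and the degree of parallelism of a single step is the number of variables it changes simultaneously.

```python
# Illustrative sketch: the state b: R -> D is modeled as a dict
# from memory elements (registers) to elementary values.
state = {"r1": 0, "r2": 0, "r3": 0, "r4": 0}

def parallel_step(state, updates):
    """Apply a set of simultaneous assignments to the state.

    Returns the new state and the degree of parallelism of this
    step, i.e. the number of variables changed at once."""
    new_state = dict(state)
    new_state.update(updates)
    return new_state, len(updates)

# A step that assigns three variables at once has degree 3.
state, degree = parallel_step(state, {"r1": 5, "r2": 7, "r4": 1})
print(degree)  # 3
```

A purely sequential computation would perform such steps with degree 1; exploiting parallelism means maximizing the number of variables updated per step.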

34 citations


Journal ArticleDOI
TL;DR: Analysis of a study directed to the specification and procurement of a new cockpit simulator for an advanced class of helicopters showed that a particularly cost-effective approach is to employ a large minicomputer acting as host and controller for a special-purpose digital peripheral processor.
Abstract: This paper describes some of the results of a study directed to the specification and procurement of a new cockpit simulator for an advanced class of helicopters. A part of the study was the definition of a challenging benchmark problem, and detailed analyses of it were made to assess the suitability of a variety of simulation techniques. The analyses showed that a particularly cost-effective approach to the attainment of adequate speed for this extremely demanding application is to employ a large minicomputer acting as host and controller for a special-purpose digital peripheral processor. Various realizations of such peripheral processors, all employing state-of-the-art electronic circuitry and a high degree of parallelism and pipelining, are available or under development. The three types of peripheral processors - array processors, simulation-oriented processors, and arrays of processing elements - are analyzed and compared. These are particularly promising approaches that should be suitable for high-speed simulations of all kinds, the cockpit simulator being a case in point.

18 citations