
Showing papers on "Massively parallel" published in 1982


Journal ArticleDOI
TL;DR: A general connectionist model is introduced and its possible uses in cognitive science are considered. Among the issues addressed are stability and noise-sensitivity, distributed decision-making, time and sequence problems, and the representation of complex concepts.

1,046 citations


Journal ArticleDOI
TL;DR: Two bit-serial parallel processing systems are developed: an airborne associative processor and a ground-based massively parallel processor.
Abstract: About a decade ago, a bit-serial parallel processing system, STARAN®, was developed. It used standard integrated circuits that were available at that time. Now, with the availability of VLSI, a much greater processing capability can be packed into a unit volume. This has led to the recent development of two bit-serial parallel processing systems: an airborne associative processor and a ground-based massively parallel processor.

133 citations
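The essence of bit-serial processing, as in the systems above, is that each processing element handles one bit of its operands per machine cycle, building word-level arithmetic out of single-bit full-adder steps. A minimal sketch of the idea in Python (illustrative only; it does not model STARAN's actual instruction set):

```python
# Bit-serial addition: one bit position is processed per step, with the
# carry threaded into the next step. A real bit-serial array performs
# the same step on thousands of words in lockstep.
def bit_serial_add(a, b, width=8):
    result, carry = 0, 0
    for i in range(width):                     # one clock per bit position
        abit = (a >> i) & 1
        bbit = (b >> i) & 1
        s = abit ^ bbit ^ carry                # full-adder sum bit
        carry = (abit & bbit) | (carry & (abit ^ bbit))
        result |= s << i
    return result & ((1 << width) - 1)         # wrap around like 8-bit hardware

words_a = [17, 200, 64]
words_b = [5, 100, 200]
print([bit_serial_add(a, b) for a, b in zip(words_a, words_b)])  # [22, 44, 8] mod 256
```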


Book
01 Jan 1982

45 citations


01 Jan 1982
TL;DR: A massively parallel, connectionist approach is brought to bear on the problem of visual recognition; it eliminates the need to establish a search order when exploring interpretations and exhibits many similarities with the structure and behavior of animal vision systems.
Abstract: Strictly sequential approaches to computer vision are at best slow and cumbersome, at worst impossible. In this thesis, a massively parallel, connectionist approach is brought to bear on the problem of visual recognition. Computing with connections is a synthesis of results from neuroscience, computer science, and psychology. The fundamental assumption of connectionism is that individual computing units do not transmit large amounts of symbolic information. Instead, these units compute by being appropriately connected in a network of similar units. Using the communication pathways (connections) defined by the arcs of the network, the units cooperate and compete towards a globally consistent interpretation of the input scene. The problem, visual recognition, is defined as matching instances of predefined objects in the input with a fixed set of internal models. Predefined objects come from Kanade's Origami World [Kanade78]. The program represents and recognizes such predefined objects from line-drawing input. To organize these networks, conceptual hierarchies are defined. A conceptual hierarchy is a semantic network hierarchically arranged according to abstraction levels. Levels represent the extraction of progressively more complex features. A node on a level, a computing unit, represents an instantiation of a feature defined on that level. Connections represent the composition and competition relations between feature units. Iterative relaxation is the form of control in the network. Each unit iteratively computes activation levels, a reflection of current confidence in the associated feature. Numerous test cases illustrate network behavior in the presence of perfect, noisy, incomplete, and occluded input. The greatest benefits of this approach are seen in the system's ability to cope with incomplete and occluded input. Another advantage is the inherently parallel approach, which eliminates the need to establish a search order when exploring interpretations. The system exhibits many similarities with the structure and behavior of animal vision systems.

30 citations
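The control scheme described above, iterative relaxation over units whose activations reflect confidence in features, can be sketched in a few lines. The network, weights, and update rate below are toy values invented for illustration, not taken from the thesis: two mutually supporting feature units compete with a third.

```python
import numpy as np

# Iterative relaxation: each unit's activation is nudged by the weighted
# activations of its neighbours. Positive weights model composition
# (support); negative weights model competition.
W = np.array([[ 0.0,  0.8, -0.5],
              [ 0.8,  0.0, -0.5],
              [-0.5, -0.5,  0.0]])
a = np.array([0.6, 0.5, 0.5])     # initial confidences from the input
rate = 0.2

for _ in range(50):
    a = np.clip(a + rate * (W @ a), 0.0, 1.0)   # synchronous update, clamped

print(a.round(2))   # mutually supporting units win; the competitor is suppressed
```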


Book ChapterDOI
01 Jan 1982
TL;DR: The chapter discusses the principles of parallel processing and experiences with the Cray-1 vector computer, and provides a taxonomy of parallel computers that covers existing machines as well as some under development.
Abstract: This chapter discusses parallel computation and some Cray-1 experiences. A slowdown in the rate of growth of computing power available from a single processor and a dramatic decrease in the hardware cost of executing an arithmetic operation have stimulated users and developers of large-scale computers to investigate the feasibility of parallel computation. The chapter discusses the principles of parallel processing and the experiences with the Cray-1 vector computer. It provides a taxonomy of parallel computers that covers existing parallel computers as well as some under development. The chapter describes hardware modeling. Concerning complexity, a distinction is required between algorithms suited for sequential processing and those suited for parallel processing. In sequential processing, the time required to process an algorithm is correlated with its complexity, whereas for parallel processing the time required is determined by the number of parallel steps, that is, the parallel complexity, needed to implement a given algorithm. The chapter focuses on problems that cannot be coded optimally in FORTRAN.

29 citations
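The sequential/parallel complexity distinction drawn in the chapter is easiest to see on a concrete case. Summing n numbers costs n − 1 additions either way, but arranged as a tree the additions within each round are independent, so with enough processors only about log₂ n parallel steps are needed. A small sketch (the simulation runs sequentially; each loop iteration stands for one parallel step):

```python
# Tree reduction: every round halves the problem, and all additions in a
# round could run simultaneously on separate processors.
def tree_sum(xs):
    steps = 0
    while len(xs) > 1:
        pairs = [xs[i] + xs[i + 1] for i in range(0, len(xs) - 1, 2)]
        if len(xs) % 2:               # odd leftover rides into the next round
            pairs.append(xs[-1])
        xs = pairs
        steps += 1
    return xs[0], steps

total, steps = tree_sum(list(range(1, 1025)))
print(total, steps)   # 524800 in 10 parallel steps, vs. 1023 sequential adds
```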


Journal Article
TL;DR: A NASA-directed development of massively parallel processor (MPP) computers is outlined, noting intended applications in data processing for near-term earth resource and environment mapping, radar, and television transmissions.
Abstract: A NASA-directed development of massively parallel processor (MPP) computers is outlined, noting intended applications in data processing for near-term earth resource and environment mapping, radar, and television transmissions. The MPP is designed to perform 100 billion operations/sec to obtain satisfactory image processing, while separate processing units correct distortions, register images, calculate correlation functions, and classify multispectral characteristics. Arrays of 1s and 0s will be manipulated in analog-to-digital conversions, generating separate planes corresponding to powers of two. Data wires are replaced by fiber-optic tubes or thousands of wires, single logic gates by thousands of logic gates, and every memory element by thousands of memory elements. Features of the interconnections and the image control processor units are detailed, along with the implementation of sliders for program flexibility.

14 citations
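The "separate planes corresponding to powers of two" are bit planes: an 8-bit image stored as eight one-bit images, which is exactly the layout a bit-serial PE array steps through, one plane per machine pass. A sketch of the decomposition (NumPy stands in for the hardware; the pixel values are arbitrary):

```python
import numpy as np

# Bit-plane decomposition: plane k holds the 2**k bit of every pixel.
img = np.array([[200, 13], [7, 255]], dtype=np.uint8)

planes = [(img >> k) & 1 for k in range(8)]          # LSB plane first

# Reassemble to verify the planes carry the full image.
rebuilt = sum((p.astype(np.uint8) << k) for k, p in enumerate(planes))
assert np.array_equal(rebuilt, img)
print(planes[7])    # the 128s plane: 1 wherever the pixel is >= 128
```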


Journal ArticleDOI
TL;DR: A single-instruction, multiple-data computer known as the Massively Parallel Processor (MPP) is being fabricated for NASA by the Goodyear Aerospace Corporation; it will be capable of adding more than 6 billion 8-bit numbers per second.
Abstract: Future sensor systems will utilize massively parallel computing systems for rapid analysis of two-dimensional data. The Goddard Space Flight Center has an ongoing program to develop these systems. A single-instruction, multiple-data computer known as the Massively Parallel Processor (MPP) is being fabricated for NASA by the Goodyear Aerospace Corporation. This processor contains 16,384 processing elements arranged in a 128 x 128 array. The MPP will be capable of adding more than 6 billion 8-bit numbers per second. Multiplication of 8-bit numbers can occur at a rate of 2 billion per second. Delivery of the MPP to Goddard Space Flight Center is scheduled for 1983.

6 citations
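Those rates are consistent with bit-serial arithmetic spread across the whole array, as a back-of-envelope check shows. Only the 16,384-PE count and the "more than 6 billion" rate come from the abstract; the ~10 MHz clock and the 6.5 billion working figure are outside assumptions:

```python
# Rough consistency check on the quoted MPP add rate (clock and exact
# rate are assumptions, not taken from the abstract above).
pes = 128 * 128                      # 16,384 processing elements
add_rate = 6.5e9                     # assumed 8-bit adds/sec, whole array
per_pe = add_rate / pes              # ~4.0e5 adds/sec for a single PE
cycles_per_add = 10e6 / per_pe       # at an assumed 10 MHz clock
print(f"{per_pe:.0f} adds/s per PE, ~{cycles_per_add:.0f} cycles per 8-bit add")
# -> roughly 25 cycles per add: what bit-serial arithmetic predicts,
#    a few clock cycles per bit rather than one cycle per word.
```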


Proceedings ArticleDOI
07 Jun 1982
TL;DR: The massively parallel processor has 16,896 PE's arranged in a 128-row by 132-column rectangular array, along with an array control unit, a staging memory, a program and data management unit, and an interface to a host computer.
Abstract: In 1971 NASA Goddard Space Flight Center initiated a program to develop high-speed image processing systems. These systems use thousands of processing elements (PE's) operating simultaneously to achieve their speed (massive parallelism). A typical satellite image contains millions of picture elements (pixels) that can generally be processed in parallel. In 1979 a contract was awarded to construct a massively parallel processor (MPP) to be delivered in 1982. The processor has 16,896 PE's arranged in a 128-row by 132-column rectangular array. The PE's are in the array unit (Figure 1). Other major blocks in the massively parallel processor are the array control unit, the staging memory, the program and data management unit, and the interface to a host computer.

6 citations
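The 132 physical columns against the 128 logical ones reflect four spare columns that can be switched in around a faulty column, a fault-tolerance feature reported for the MPP. The style of computation the array executes, every PE applying the same operation to its own pixel and its nearest neighbours in lockstep, can be sketched as follows (a toy 6x6 image with a zero boundary; array sizes and the smoothing operation are illustrative):

```python
import numpy as np

# SIMD image smoothing on a 2D PE grid: each "PE" holds one pixel and,
# under a single broadcast instruction, averages it with its four
# nearest neighbours.
img = np.arange(36, dtype=float).reshape(6, 6)
padded = np.pad(img, 1)                       # zero boundary convention
north, south = padded[:-2, 1:-1], padded[2:, 1:-1]
west,  east  = padded[1:-1, :-2], padded[1:-1, 2:]
smoothed = (img + north + south + west + east) / 5.0
print(smoothed.round(1))
```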


01 Jan 1982
TL;DR: A high level language for the Massively Parallel Processor (MPP), called Parallel Pascal, was designed and a compiler was developed which converts programs in Parallel Pascal into the intermediate Parallel P-Code language.
Abstract: A high level language for the Massively Parallel Processor (MPP) was designed. This language, called Parallel Pascal, is described in detail. A description of the language design, a description of the intermediate language, Parallel P-Code, and details for the MPP implementation are included. Formal descriptions of Parallel Pascal and Parallel P-Code are given. A compiler was developed which converts programs in Parallel Pascal into the intermediate Parallel P-Code language. The code generator to complete the compiler for the MPP is being developed independently. A Parallel Pascal to Pascal translator was also developed. The architecture design for a VLSI version of the MPP was completed with a description of fault tolerant interconnection networks. The memory arrangement aspects of the MPP are discussed and a survey of other high level languages is given.

5 citations
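Parallel Pascal's key extension over standard Pascal was whole-array expressions that compile to single SIMD operations over the PE grid. The sketch below mimics those semantics in NumPy rather than showing real Parallel Pascal syntax, which differs; the array sizes match the MPP's logical array purely for flavor:

```python
import numpy as np

# Whole-array semantics of the kind an array language compiles to a SIMD
# machine: one source expression applies elementwise to entire arrays.
a = np.random.rand(128, 128)
b = np.random.rand(128, 128)

c = a + b                          # one array assignment: 16,384 adds at once
shifted = np.roll(a, 1, axis=0)    # rotate by one row, like a PE-grid shift
masked = np.where(a > 0.5, a, 0)   # conditional assignment under a mask
```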


01 Jan 1982
TL;DR: A conceptual model for parallel computations on large arrays is developed and implementation designs on several typical multiprocessor architectures are presented that are tailored to the performance realities associated with the hardware.
Abstract: A conceptual model for parallel computations on large arrays is developed in this thesis. The model provides language concepts needed to process arrays which are generally too large to fit in the primary memories of a multiprocessor system. The semantic model is used for representing arrays on a concurrent architecture in such a way that the performance realities inherent in the distributed storage and processing can be adequately represented. An implementation of the large array concept as an Ada package is described. The model presented in this thesis provides a high-level conceptual unity to the area of parallel computation on large arrays in a machine-independent manner. The characteristics of a particular architecture give rise to restrictions on the overall model. Implementation designs on several typical multiprocessor architectures are presented that are tailored to the performance realities associated with the hardware. The machines studied are the NASA Massively Parallel Processor, the Intel 432 system, the NASA Finite Element Machine, and the University of Maryland ZMOB. Sample algorithms using the concepts of the model, from the areas of finite element computations and image processing, are presented.

3 citations
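The core implementation idea for arrays that exceed primary memory is blocked (tiled) access: stream one tile at a time through memory and apply the operation per tile. A minimal sketch of that pattern, with hypothetical file names, dtype, and tile size, and no halo exchange (which neighbour-dependent operations would require); 'in.dat' is assumed to already hold the flat input values:

```python
import numpy as np

# Blocked processing of an array too large for primary memory:
# only one tile is resident at a time.
def process_large_array(shape, tile=1024, op=np.sqrt):
    rows, cols = shape
    src = np.memmap('in.dat',  dtype='float32', mode='r',  shape=shape)
    out = np.memmap('out.dat', dtype='float32', mode='w+', shape=shape)
    for r in range(0, rows, tile):
        for c in range(0, cols, tile):
            out[r:r+tile, c:c+tile] = op(src[r:r+tile, c:c+tile])
    out.flush()
```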


01 Aug 1982
TL;DR: This paper considers the architectures of present and future spatially parallel computers; the types of architecture examined are the cellular logic image processor, the distributed array processor, and the massively parallel processor.
Abstract: Cellular logic, distributed array, and massively parallel processing are all being considered as solutions to the enormous computation overhead tomorrow's computers must bear. This paper considers the architectures of present and future spatially parallel computers. This type of computer is an image processing device. The types of architecture examined are the cellular logic image processor, the distributed array processor, and the massively parallel processor.

Book ChapterDOI
01 Jan 1982
TL;DR: Any scientist modelling three-dimensional fluid flows can, simply by halving the mesh size, increase the computing requirements at least eightfold (both in computational needs and in data storage requirements).
Abstract: Parkinson’s law states that the demand for a resource rises to meet the capacity available to satisfy that demand. Although the law was formulated before the widespread introduction of computers, there is probably no field of endeavour for which it is more apt. Any scientist modelling three-dimensional fluid flows can, simply by halving the mesh size, increase the computing requirements at least eightfold (both in computational needs and in data storage requirements).
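The eightfold figure is simply 2³: halving the mesh spacing in three dimensions multiplies the number of grid points, and hence both storage and per-step work, by eight. A short worked check (the further factor from a shrinking stable time step, noted in the comment, is a standard observation and not a claim made in the chapter itself):

```python
# Why halving the mesh size costs at least 8x: in three dimensions the
# number of grid points scales as (1/h)**3.
points = lambda h: (1.0 / h) ** 3
print(points(0.5) / points(1.0))   # 8.0 -- storage and work per time step

# With an explicit time-stepping scheme, a CFL-type stability condition
# typically halves the time step as well, doubling the step count:
# 16x total work for a single halving of the mesh.
```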


01 Jan 1982
TL;DR: A general connectionist model is introduced and its possible uses in cognitive science are considered. Among the issues addressed are stability and noise-sensitivity, distributed decision-making, time and sequence problems, and the representation of complex concepts.
Abstract: Much of the progress in the fields constituting cognitive science has been based upon the use of explicit information processing models, almost exclusively patterned after conventional serial computers. An extension of these ideas to massively parallel, connectionist models appears to offer a number of advantages. After a preliminary discussion, this paper introduces a general connectionist model and considers how it might be used in cognitive science. Among the issues addressed are: stability and noise-sensitivity, distributed decision-making, time and sequence problems, and the representation of complex concepts. Much of the progress in the fields constituting cognitive science has been based upon the use of concrete information processing models (IPM), almost exclusively patterned after conventional sequential computers. There are several reasons for trying to extend IPM to cases where the computations are carried out by a parallel computational engine with perhaps billions of active units. As an introduction, we will attempt to motivate the current interest in massively parallel models from four different perspectives: anatomy, computational complexity, technology, and the role of formal languages in science. It is the last of these which is of primary concern here. We will focus upon a particular formalism, connectionist models (CM), which is based explicitly on an abstraction of our current understanding of the information processing properties of neurons. Animal brains do not compute like a conventional computer. Comparatively slow (millisecond) neural computing elements with complex, parallel connections form a structure which is dramatically different from a high-speed, predominantly serial machine. Much of current research in the neurosciences is concerned with tracing out these connections and with discovering how they transfer information. One purpose of this paper is to suggest how connectionist theories of the brain can be used to produce
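One of the issues the paper lists, distributed decision-making, has a classic connectionist realization: units standing for competing hypotheses inhibit one another until a single winner remains, with no central arbiter. A toy winner-take-all sketch; the parameters are invented for illustration and are not the paper's model:

```python
import numpy as np

# Winner-take-all by mutual inhibition: each unit is excited by its own
# activity and inhibited by the summed activity of the others.
x = np.array([0.52, 0.50, 0.48])      # initial evidence for 3 hypotheses
for _ in range(100):
    inhibition = x.sum() - x           # what each unit feels from the others
    x = np.clip(x + 0.1 * (x - 0.6 * inhibition), 0.0, 1.0)
print(x.round(2))                      # -> about [1. 0. 0.]: one unit wins
```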