
Showing papers on "Systems architecture published in 1984"


Journal ArticleDOI
TL;DR: This tutorial paper addresses some of the principles and provides examples of concurrent architectures and designs that have been inspired by VLSI technology.
Abstract: This tutorial paper addresses some of the principles and provides examples of concurrent architectures and designs that have been inspired by VLSI technology. The circuit density offered by VLSI provides the means for implementing systems with very large numbers of computing elements, while its physical characteristics provide an incentive to organize systems so that the elements are relatively loosely coupled. The computer architectures that evolve from this reasoning include an interesting and varied class of concurrent machines that adhere to a structural model based on the repetition of regularly connected elements. The systems included under this structural model range from 1) systems that combine storage and logic at a fine grain size, and are typically aimed at computations with images or storage retrieval, to 2) systems that combine registers and arithmetic at a medium grain size to form computational or systolic arrays for signal processing and matrix computations, to 3) arrays of instruction-interpreting computers that use teamwork to perform many of the same demanding computations for which we use high-performance single-process computers today.

252 citations


Journal ArticleDOI
Hennessy
TL;DR: In a VLSI implementation of an architecture, many problems can arise from the base technology and its limitations, so the architects must be aware of these limitations and understand their implications at the instruction set level.
Abstract: A processor architecture attempts to compromise between the needs of programs hosted on the architecture and the performance attainable in implementing the architecture. The needs of programs are most accurately reflected by the dynamic use of the instruction set as the target for a high level language compiler. In VLSI, the issue of implementation of an instruction set architecture is significant in determining the features of the architecture. Recent processor architectures have focused on two major trends: large microcoded instruction sets and simplified, or reduced, instruction sets. The attractiveness of these two approaches is affected by the choice of a single-chip implementation. The two different styles require different tradeoffs to attain an implementation in silicon with a reasonable area. The two styles consume the chip area for different purposes, thus achieving performance by different strategies. In a VLSI implementation of an architecture, many problems can arise from the base technology and its limitations. Although circuit design techniques can help alleviate many of these problems, the architects must be aware of these limitations and understand their implications at the instruction set level.

216 citations


Journal ArticleDOI
TL;DR: Best-guess guidelines for what a system should be like and how it should be developed are offered and ways in which advances in research and education could result in systems with better human factors are suggested.
Abstract: While it is becoming increasingly obvious that the fundamental architecture of a system has a profound influence on the quality of its human factors, the vast majority of human factors studies concern the surface of hardware (keyboards, screens) or the very surface of the software (command names, menu formats). In this paper, we discuss human factors and system architecture. We offer best-guess guidelines for what a system should be like and how it should be developed. In addition, we suggest ways in which advances in research and education could result in systems with better human factors. This paper is based on an address by L. M. Branscomb and a publication by the authors in the Proceedings of the IFIP 9th World Computer Congress, Paris, France, September 19-23, 1983.

184 citations


Journal ArticleDOI
01 Dec 1984
TL;DR: In this paper, a rule-based expert system is proposed to create a third-generation man/machine environment for computer-aided control engineering (CACE), and the main product of this effort is an expert system architecture for CACE.
Abstract: We propose the development of a rule-based expert system to create a third-generation man/machine environment for computer-aided control engineering (CACE). The breadth of the CACE problem is of particular concern, and provides a major motivation for the use of artificial intelligence. This approach promises to provide a high-level design environment that is powerful, supportive, flexible, broad in scope, and readily accessible to nonexpert users. We focus primarily on the high-level requirements for an improved CACE environment, and on the expert system concepts and structures that we have conceived to fulfill these needs. Our chief goal is to determine what artificial intelligence has to contribute to such an environment, and to provide as definite and credible a vision of an expert system for CACE as possible. The main product of this effort is an expert system architecture for CACE.
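To make the rule-based idea concrete, here is a minimal forward-chaining sketch in Python. The rules, fact names, and the overall framing are hypothetical illustrations of how design heuristics might be encoded; they are not part of the CACE architecture the authors describe.

# Minimal forward-chaining rule engine sketch (illustrative only; not the
# authors' CACE system). Facts are strings; each rule fires when all of its
# premises are known, adding its conclusion to the fact base.

RULES = [
    # (premises, conclusion) -- hypothetical CACE-flavored rules
    ({"plant_model_available", "step_response_oscillatory"}, "suggest_lead_compensator"),
    ({"plant_model_available", "steady_state_error_large"}, "suggest_integral_action"),
    ({"suggest_lead_compensator", "suggest_integral_action"}, "suggest_pid_design_procedure"),
]

def forward_chain(facts: set[str]) -> set[str]:
    """Repeatedly fire rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

if __name__ == "__main__":
    user_facts = {"plant_model_available", "step_response_oscillatory",
                  "steady_state_error_large"}
    print(forward_chain(user_facts))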

109 citations


Journal ArticleDOI
Gene D. Carlow
TL;DR: PASS, perhaps the most complex flight computer program ever developed, epitomizes the benefits to be gained by establishing a well-structured system architecture at the front end of the development process.
Abstract: PASS, perhaps the most complex flight computer program ever developed, epitomizes the benefits to be gained by establishing a well-structured system architecture at the front end of the development process.

78 citations


Book
01 Jan 1984

70 citations


Journal ArticleDOI
TL;DR: A mathematical model for systolic architectures is suggested and used to verify the operation of certain systolic networks, and the suggested verification technique is applied to four different systolic networks proposed in the literature.
Abstract: A mathematical model for systolic architectures is suggested and used to verify the operation of certain systolic networks. The data items appearing on the communication links of such a network at successive time units are represented by data sequences and the computations performed by the network cells are modeled by a system of difference equations involving operations on the various data sequences. The input/output descriptions, which describe the global effect of the computations performed by the network, are obtained by solving this system of difference equations. This input/output description can then be used to verify the operation of the network. The suggested verification technique is applied to four different systolic networks proposed in the literature.
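The flavor of this verification style can be illustrated with a small simulation: the cell recurrences (difference equations over the data sequences on the links) are iterated, and the resulting input/output description is checked against the intended computation. The array below is a simple convolution structure of my own choosing with a broadcast input; it is not one of the four networks analyzed in the paper, and the code is only a toy stand-in for the paper's formal model.

# Cells hold fixed weights; the input sample x(t) is broadcast to every cell,
# and partial sums move one cell to the right per time step:
#
#   y_out_i(t) = y_in_i(t) + w_i * x(t),   y_in_{i+1}(t+1) = y_out_i(t)
#
# Solving this recurrence shows the last cell emits a convolution of x with
# the (reversed) weight sequence, which the check below confirms numerically.

def simulate_array(weights, x):
    k = len(weights)
    y_in = [0.0] * k                 # partial sum entering each cell
    out = []
    for t in range(len(x) + k):      # run long enough to flush the pipeline
        xt = x[t] if t < len(x) else 0.0
        y_out = [y_in[i] + weights[i] * xt for i in range(k)]
        out.append(y_out[-1])        # result leaving the last cell
        y_in = [0.0] + y_out[:-1]    # partial sums shift right by one cell
    return out

def direct_convolution(weights, x):
    k = len(weights)
    return [sum(weights[j] * (x[n - j] if 0 <= n - j < len(x) else 0.0)
                for j in range(k)) for n in range(len(x) + k - 1)]

if __name__ == "__main__":
    w = [1.0, 2.0, 3.0]
    x = [1.0, 0.0, -1.0, 4.0]
    simulated = simulate_array(w[::-1], x)          # load weights reversed
    expected = direct_convolution(w, x)
    assert simulated[:len(expected)] == expected    # I/O descriptions agree
    print("array output matches direct convolution:", expected)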

54 citations


Proceedings ArticleDOI
01 Jan 1984
TL;DR: The hardware architecture and the algorithm of a parallel processor system for three-dimensional color graphics are described; the system is constructed as a two-level hierarchical multi-processor system particularly suited to a scan-line algorithm for hidden surface elimination.
Abstract: This paper describes the hardware architecture and the employed algorithm of a parallel processor system for three-dimensional color graphics. The design goal of the system is to generate realistic images of three-dimensional environments on a raster-scan video display in real time. In order to achieve this goal, the system is constructed as a two-level hierarchical multi-processor system which is particularly suited to a scan-line algorithm for hidden surface elimination. The system consists of several Scan-Line Processors (SLPs), each of which controls several slave PiXel Processors (PXPs). The SLP prepares the specific data structure relevant to each scan line, while the PXP manipulates the pixel data in its own territory. The internal hardware structures of the SLP and the PXP are quite different, being designed for their dedicated tasks. This system architecture can easily execute the scan-line algorithm in parallel by partitioning the entire image space and allotting one processor element to each partition. A specific partition scheme and some new data structures are introduced to exploit as much parallelism as possible. In addition, the scan-line algorithm is extended to include smooth-shading and anti-aliasing with the aim of rendering more realistic images. These two operations are performed on a per-scan-line basis so as to preserve scan-line and span coherence. Performance estimation of the system shows that a typical system consisting of 8 SLPs and 8×8 PXPs can generate, in every 1/15th of a second, the shadowed image of a three-dimensional scene containing about 200 polygons.
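A rough sketch of the two-level partitioning idea follows: scan lines are interleaved across SLPs and, within a line, pixels are interleaved across that SLP's PXPs. The modulo assignment and the span-filling framing are my assumptions for illustration, not the paper's exact partition scheme.

NUM_SLPS = 8
NUM_PXPS = 8          # PXPs per SLP

def owner(x: int, y: int) -> tuple[int, int]:
    """Return (slp_id, pxp_id) responsible for pixel (x, y)."""
    return y % NUM_SLPS, x % NUM_PXPS

def render_span(y: int, x_left: int, x_right: int) -> dict[int, list[tuple[int, int]]]:
    """Distribute one horizontal span on scan line y over the owning SLP's PXPs."""
    work: dict[int, list[tuple[int, int]]] = {pxp: [] for pxp in range(NUM_PXPS)}
    for x in range(x_left, x_right + 1):
        work[owner(x, y)[1]].append((x, y))
    return work

if __name__ == "__main__":
    y = 5
    print(f"scan line {y} is handled by SLP {y % NUM_SLPS}")
    for pxp, pixels in render_span(y, x_left=10, x_right=25).items():
        if pixels:
            print(f"  PXP {pxp}: shades pixels {pixels}")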

36 citations


Journal ArticleDOI
TL;DR: This paper reviews the present status of all-optical switching and logic elements and discusses their future potential, taking account of limitations imposed by materials, considerations of system architecture, and fundamental physical mechanisms.
Abstract: In this paper I review the present status of all-optical switching and logic elements. I then discuss their future potential, taking account of limitations imposed by materials, considerations of system architecture, and fundamental physical mechanisms. I conclude by describing two areas in which all-optical signal-processing systems are likely to have a major impact.

27 citations


Journal ArticleDOI
TL;DR: A philosophy for software development and the tools used to support it are documented; the associated management techniques deal with quantifying such abstract terms as "productivity," "performance," and "progress," and with measuring these quantities and applying management controls to maximize them.
Abstract: In the area of software development, data processing management often focuses more on coding techniques and system architecture than on how to manage the development. In recent years, "structured programming" and "structured analysis" have received more attention than the techniques software managers employ to manage. Moreover, these coding and architectural considerations are often advanced as the key to a smooth-running, well-managed project. This paper documents a philosophy for software development and the tools used to support it. These management techniques deal with quantifying such abstract terms as "productivity," "performance," and "progress," and with measuring these quantities and applying management controls to maximize them. The paper also documents the application of these techniques on a major software development effort.
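Since the paper's own formulas are not reproduced in the abstract, the sketch below uses common textbook-style proxies to show what quantifying "productivity" and "progress" might look like; the metric definitions and figures are illustrative assumptions, not the authors'.

# Illustrative metric definitions only -- not the paper's exact formulas.

from dataclasses import dataclass

@dataclass
class ProjectSnapshot:
    delivered_sloc: int        # delivered source lines of code to date
    person_months: float       # effort expended to date
    milestones_done: int
    milestones_planned: int

def productivity(s: ProjectSnapshot) -> float:
    """Delivered SLOC per person-month (a crude but common proxy)."""
    return s.delivered_sloc / s.person_months

def progress(s: ProjectSnapshot) -> float:
    """Fraction of planned milestones completed."""
    return s.milestones_done / s.milestones_planned

if __name__ == "__main__":
    snap = ProjectSnapshot(delivered_sloc=12_000, person_months=30.0,
                           milestones_done=7, milestones_planned=12)
    print(f"productivity: {productivity(snap):.0f} SLOC/person-month")
    print(f"progress:     {progress(snap):.0%}")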

20 citations


Journal ArticleDOI
TL;DR: The 32-bit NS16000 was specifically designed to support high-level languages and its page-based virtual memory system helps give it true mainframe capability.
Abstract: The 32-bit NS16000 was specifically designed to support high-level languages. Its page-based virtual memory system helps give it true mainframe capability.
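As a reminder of what page-based virtual memory provides, here is a generic address-translation sketch. The page size and table contents are invented for illustration and do not reflect the NS16000 MMU's actual formats.

# Generic sketch of page-based virtual-to-physical translation (page size and
# table contents are made up; not the NS16000's actual MMU layout).

PAGE_SIZE = 512                      # assumed page size in bytes
PAGE_TABLE = {0: 7, 1: 3, 2: 12}     # virtual page number -> physical frame

def translate(vaddr: int) -> int:
    """Translate a virtual address; raise on a missing (non-resident) page."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn not in PAGE_TABLE:
        raise LookupError(f"page fault: virtual page {vpn} not resident")
    return PAGE_TABLE[vpn] * PAGE_SIZE + offset

if __name__ == "__main__":
    print(hex(translate(0x27A)))     # virtual page 1, offset 0x7A
    try:
        translate(5 * PAGE_SIZE)
    except LookupError as exc:
        print(exc)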

Journal ArticleDOI
TL;DR: This paper examines how architecture, the definition of the instruction set and other facilities that are available to the user, can influence the implementation of a very large scale integration (VLSI) microsystem.
Abstract: This paper examines how architecture, the definition of the instruction set and other facilities that are available to the user, can influence the implementation of a very large scale integration (VLSI) microsystem. The instruction set affects the system implementation in a number of direct ways. The instruction formats determine the complexity of instruction decoding. The addressing modes available determine not only the hardware needed (multiported register files or three-operand adders), but also the complexity of the overall machine pipeline as greater variability is introduced in the time it takes to obtain an operand. Naturally, the actual operations specified by the instructions determine the hardware needed by the execution unit. In a less direct way, the architecture also determines the memory bandwidth required. A few key parameters are introduced that characterize the architecture and can be simply obtained from a typical workload. These parameters are used to analyze the memory bandwidth required and indicate whether the system is CPU- or memory-limited at a given design point. The implications of caches and virtual memories are also briefly considered.
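The kind of workload-parameter analysis described can be sketched in a few lines: instruction-fetch traffic plus data-reference traffic gives the bandwidth the CPU demands, which is then compared with what the memory system can deliver. All numbers below are invented for illustration and are not figures from the paper.

# Back-of-the-envelope sketch of the bandwidth analysis described above.

def required_bandwidth(mips: float,
                       inst_bytes: float,
                       data_refs_per_inst: float,
                       bytes_per_ref: float) -> float:
    """Bytes/second the CPU demands: instruction fetch plus data traffic."""
    per_inst = inst_bytes + data_refs_per_inst * bytes_per_ref
    return mips * 1e6 * per_inst

if __name__ == "__main__":
    need = required_bandwidth(mips=2.0, inst_bytes=3.5,
                              data_refs_per_inst=0.6, bytes_per_ref=4.0)
    available = 8e6          # e.g. an 8 MB/s memory interface (assumed)
    print(f"required : {need / 1e6:.1f} MB/s")
    print(f"available: {available / 1e6:.1f} MB/s")
    print("memory-limited" if need > available else "CPU-limited")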

Book ChapterDOI
01 Nov 1984
TL;DR: This paper first defines and describes a highly parallel external data handling system and then shows how the capabilities of the system can be used to implement a high performance relational data base machine.
Abstract: This paper first defines and describes a highly parallel external data handling system and then shows how the capabilities of the system can be used to implement a high performance relational data base machine. The elements of the system architecture are an interconnection network, which implements both packet routing and circuit switching as well as data organization functions such as indexing and sort-merge, and an intelligent memory unit with a self-managing cache, which implements associative search and capabilities for applying filtering operations to data streaming to and from storage.
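A software analogy for the filtering capability is sketched below: a predicate is applied to records as they stream off storage, so only qualifying tuples ever reach the host. The record layout and predicate are made up; the point is only where the filter sits in the data path, not the paper's hardware design.

# Rough analogy (mine, not the paper's design) for filtering data as it
# streams from storage, inside the "intelligent memory unit".

from typing import Callable, Iterable, Iterator

Record = dict

def storage_stream(records: Iterable[Record]) -> Iterator[Record]:
    """Stand-in for records streaming off a storage device."""
    yield from records

def filtered_stream(stream: Iterator[Record],
                    predicate: Callable[[Record], bool]) -> Iterator[Record]:
    """Apply the filter before the host ever sees the data."""
    return (rec for rec in stream if predicate(rec))

if __name__ == "__main__":
    employees = [{"name": "a", "dept": 10, "salary": 30_000},
                 {"name": "b", "dept": 20, "salary": 45_000},
                 {"name": "c", "dept": 10, "salary": 52_000}]
    host_side = filtered_stream(storage_stream(employees),
                                lambda r: r["dept"] == 10 and r["salary"] > 40_000)
    print(list(host_side))     # only records that pass the filter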

Journal Article
TL;DR: In this paper, the authors present the technological and architectural considerations behind the realization of high-speed supercomputers, which are converging toward a parallel and pipelined processing architecture.
Abstract: A presentation of the technological and architectural considerations in the realization of high-speed supercomputers, which are converging toward a parallel and pipelined processing architecture.

Proceedings Article
01 Jan 1984
TL;DR: The main results of recent research on temporally sensitive data models are summarized, the lessons learned in their development are discussed, and the prospects and difficulties involved in incorporating a temporal dimension into database management systems (TODBs) are assessed.
Abstract: Attention to the temporal aspects of data management has intensified in recent years, focusing on data models and related systems that are sensitive to the ubiquitous temporal aspects of data. Both the growing need for easier access to historical data and the imminent availability of mass storage devices are making this a promising branch of database research, both practically and theoretically. In this paper we summarize the main results of recent research on temporally sensitive data models, discuss the lessons learned in their development, and assess the prospects and difficulties involved in incorporating a temporal dimension into database management systems (TODBs). In particular, three system levels are identified: the external user view of the database; an intermediate view closer to the structure of an existing data model; and an internal or implementation view defined in terms of low-level data structures. This general architecture coherently incorporates a variety of related research results and development experiences, and serves as the framework for theoretical and implementation research into such systems.

Introduction: It seems not only natural but even somewhat tardy that, in our never-ending quest to capture more semantics in formal information systems, we are beginning to augment our conceptual models with a temporal dimension. Indeed, there is growing research interest in the nature of time in computer-based information systems and the handling of temporal aspects of data. Roughly 50 references to the subject were identified and annotated by Bolour (1982), addressing four major topical areas:

1. Conceptual data modeling: an extension to the relational model to incorporate a built-in semantics for time (Clifford, 1983 (a)).
2. Design and implementation of historical databases: the organization of write-once, historical databases (Ariav, 1981), and implementation of temporally oriented medical databases (Wiederhold, 1975).
3. 'Dynamic databases': the modeling of transition rules and temporal inferences from these rules (May, 1981).
4. AI-related research: the temporal understanding of time-oriented data (Kahn, 1975).

The underlying premise of this expanding body of research is the recognition that time is not merely another dimension, or another data item tagged along with each tuple, but rather a more fundamental organizing aspect that human users treat in very special ways. The results of this research reinforce the perception that designing temporal features into information systems requires new and different conceptual tools.

A recent panel brought together many researchers in the field to discuss their work and identify promising research areas (Ariav, 1983 (a)). At the panel, four areas of research were identified, and in this paper we focus on two of these issues, namely the implementation of temporal DBMS and the data models underlying them.

In most existing information systems, aspects of the data that refer to time are usually either neglected, treated only implicitly, or explicitly factored out (Tsichritzis, 1982). None of the three major data models incorporates a temporal dimension; users of systems based on these models who need temporal information must resort to patchwork solutions to circumvent the limitations of their systems. Furthermore, most information systems typically differentiate between present- and past-related questions in terms of data accessibility (e.g., online and offline storage, current database and log tapes). It is important to note that this situation prevails not because ...
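The basic idea of a temporally oriented relation can be sketched with tuples tagged by validity intervals and an "as of" query. This is only a toy illustration of the concept, not the three-level TODB architecture proposed in the paper; the relation and values are invented.

# Each tuple carries a [start, end) validity interval, and queries can be
# posed "as of" a point in time.

INF = float("inf")

# (employee, salary, valid_from, valid_to) -- hypothetical history
SALARY_HISTORY = [
    ("smith", 20_000, 1980, 1983),
    ("smith", 24_000, 1983, INF),
    ("jones", 18_000, 1981, 1984),
    ("jones", 21_000, 1984, INF),
]

def as_of(relation, when):
    """Return the tuples whose validity interval contains 'when'."""
    return [(name, salary) for name, salary, start, end in relation
            if start <= when < end]

if __name__ == "__main__":
    print("state in 1982:", as_of(SALARY_HISTORY, 1982))
    print("state in 1984:", as_of(SALARY_HISTORY, 1984))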


Proceedings Article
01 Jan 1984
TL;DR: A specification-based approach to control and data structure verification is presented which is appropriate for software and hardware fault tolerance in tightly coupled parallel processing systems.
Abstract: The issue of concurrent detection of and recovery from design errors in the software and physical failures in the hardware of parallel processor systems is considered in this paper. In contrast to classical N-version programming and recovery block approaches to software fault tolerance, a specification-based approach to control and data structure verification is presented. The techniques use the hardware redundancy inherent in parallel processing systems to provide concurrent error detection and recovery. There is an ever increasing need for high-performance reliable computation in many contexts of computer system application. In response to this need a large number of industrial and academic researchers have made significant contributions to the synthesis and analysis of techniques for enforcing fault-tolerant computing. Advances have been made both in the areas of hardware and software fault tolerance. However, there is a distinct lack of research concerning an integrated approach to software and hardware appropriate for parallel processing systems. Software fault tolerance has primarily consisted up to the present time of either the N-version programming or the recovery block approach. N-version programming is a method of enforcing design diversity, and therefore error detection and recovery, through N independently coded versions of a program [1]. The recovery block approach applies an acceptance test to a primary routine for purposes of error detection. A failure to pass the acceptance test results in a transfer of control to an alternate routine for attempted recomputation of the desired function [2]. Both of these techniques have been used to provide for toleration of both hardware and software errors in distributed environments [3,4], while little concern has been given as to how software fault tolerance can be achieved in tightly coupled parallel processing systems [5,6]. Unfortunately, the application of N-version programming to hardware and software fault tolerance results in full replication of both hardware and software, while the recovery block technique necessitates the derivation of comprehensive acceptance tests, which is difficult for many computational tasks. This paper introduces a specification-based approach to control and data structure verification which is appropriate for software and hardware fault tolerance in tightly coupled parallel processing systems. The techniques use the hardware redundancy inherent in a multiprocessor system to provide concurrent error detection and recovery. The focus of the paper's contributions concerns the concurrent detection of software design errors and hardware physical failures. Techniques for recovery that are currently under investigation are also presented in summary.
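The contrast with N-version programming can be illustrated with a small sketch: rather than running independently coded versions, a result is checked against an executable specification, here on a separate thread standing in for a spare processor. The sorting task and the particular checks are my own illustrative choices, not the verification techniques developed in the paper.

# Specification-based checking instead of N-version replication: the spare
# worker verifies that the output satisfies an executable specification.

import threading
from collections import Counter

def primary_sort(data):
    return sorted(data)             # the "primary" computation

def satisfies_spec(inp, out) -> bool:
    """Executable specification: output is ordered and a permutation of input."""
    ordered = all(a <= b for a, b in zip(out, out[1:]))
    permutation = Counter(inp) == Counter(out)
    return ordered and permutation

if __name__ == "__main__":
    data = [5, 3, 9, 1, 3]
    result = primary_sort(data)

    verdict = {}
    checker = threading.Thread(                      # "spare processor"
        target=lambda: verdict.update(ok=satisfies_spec(data, result)))
    checker.start()
    checker.join()
    print("result:", result, "accepted:", verdict["ok"])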


Journal ArticleDOI
TL;DR: SIRIUS-DELTA is a distributed database system aimed at the co-operation of heterogeneous local database systems; this paper describes mainly its architecture and its Data Manipulation Protocol.

Proceedings ArticleDOI
19 Mar 1984
TL;DR: A set of Alternate Low-Level Primitive Structures (ALPS) has been considered; some of these primitives and a new system architecture that allows an orderly VLSI/VHSIC transition are described.
Abstract: A set of Alternate Low-Level Primitive Structures (ALPS) has been considered in this context. It is envisaged that each standalone structure consists of an input queue, an output queue, the processing primitive, and mechanisms for control and synchronization. Some of these primitives, and a new system architecture which allows an orderly VLSI/VHSIC transition, are described.
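A schematic reading of the standalone structure described above is sketched below: an input queue, an output queue, and a processing primitive driven by a simple control loop. The class layout and the way stages are chained are assumptions for illustration, not the paper's design.

# Sketch of an ALPS-like standalone primitive structure.

from collections import deque

class PrimitiveStructure:
    def __init__(self, primitive):
        self.inq = deque()          # input queue
        self.outq = deque()         # output queue
        self.primitive = primitive  # the processing primitive

    def step(self):
        """One synchronized step: process one queued item, if any."""
        if self.inq:
            self.outq.append(self.primitive(self.inq.popleft()))

if __name__ == "__main__":
    square = PrimitiveStructure(lambda v: v * v)
    negate = PrimitiveStructure(lambda v: -v)
    square.inq.extend([1, 2, 3])

    for _ in range(6):              # simple control loop driving both stages
        square.step()
        if square.outq:             # forward results downstream
            negate.inq.append(square.outq.popleft())
        negate.step()
    print(list(negate.outq))        # -> [-1, -4, -9]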

Journal ArticleDOI
TL;DR: Improved verification techniques are applied throughout the entire life cycle, management visibility is greatly enhanced, and the critical need for improving upon past and present management methodology is discussed.

Book ChapterDOI
TL;DR: Two memory-coupled multiprocessor systems are presented and results obtained from computation of a number of applications are reported.
Abstract: An efficient use of a multiprocessor system requires appropriate mapping of the problem structure onto the multiprocessor structure. Two memory-coupled multiprocessor systems are presented and results obtained from computation of a number of applications are reported.

Journal ArticleDOI
01 Mar 1984
TL;DR: The Wang Professional Image Computer (PIC) offers image processing technology at the desktop level that can capture, create, display, alter, store, retrieve, and transmit images in real time.
Abstract: The Wang Professional Image Computer (PIC) offers image processing technology at the desktop level. The PIC can capture, create, display, alter, store, retrieve, and transmit images in real time. These images can also be merged with text. With PIC, the ability to process data, words, and images and to communicate that information locally or remotely within Wang's family of compatible office products is now available in a single workstation. In this paper, we present a description of the Wang PIC. The system's technical features, architecture, hardware and software components, functions, and applications are discussed and illustrated.

Journal ArticleDOI
TL;DR: The Database Designer's Workbench is a graphics-oriented decision support system for database design, providing designers with a convenient environment for specifying database structures and experimenting with different design strategies.

01 Jan 1984
TL;DR: This thesis studies a new technique for achieving co-ordination and consistency in distributed computer control systems making use of a global physical time reference (real-time) in the form of a set of physical clocks synchronized to within a known tolerance of one another.
Abstract: This thesis studies a new technique for achieving co-ordination and consistency in distributed computer control systems making use of a global physical time reference (real-time) in the form of a set of physical clocks synchronized to within a known tolerance of one another. The systems considered consist of a number of processing elements which communicate with one another by exchanging messages via a high-speed local area communication system. The system processors are assumed to be nearly autonomous and are not tightly synchronized. The main design goals are reliability and maintainability. An interprocess synchronization model requiring a commitment by each process sending a message not to alter its state until a specified future point in time forms the kernel of the proposed co-ordination technique. This is extended into a simple interaction mechanism and a contingent interaction mechanism providing for the programming of a wide class of application protocols. The maximum lifetime of a message should be equated to the commit-time specified by the sender of the message. In distributed computer control systems, a fundamental problem related to interprocess synchronization is that of establishing a valid and consistent real-time representation of the state of the plant. This problem is analysed in detail within the framework of the interprocess synchronization model and a definition of consistency in this context is proposed. A distributed real-time algorithm for providing global consistency and a second algorithm for providing local consistency are developed. They have the advantage of automatically handling the case where redundant sources of messages carrying state values are present in the system. An initial discussion of a distributed computer system architecture and the relevant design principles provides the framework for the theoretical development in the thesis. The underlying assumption of a set of synchronized physical clocks is examined and implementation of such a time reference is shown to be technically feasible. The role of real-time in distributed computer control systems emerges as fundamental and pervasive. Real-time is an active element in the solution of problems in such systems and not merely a performance constraint.
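The commit-time kernel of the synchronization model can be sketched as follows: each message carries a future point of global time before which the sender promises not to alter its state, so receivers know until when the reported value remains valid. A plain counter stands in for the synchronized physical clocks, and the class and field names are illustrative assumptions, not the thesis's notation.

from dataclasses import dataclass

CLOCK = 0                      # stand-in for the global synchronized time

@dataclass
class Message:
    value: float
    commit_time: int           # sender will not change state before this time

class SensorProcess:
    def __init__(self, value: float, hold: int = 5):
        self.value = value
        self.hold = hold
        self.committed_until = -1

    def send(self) -> Message:
        self.committed_until = CLOCK + self.hold
        return Message(self.value, self.committed_until)

    def update(self, new_value: float) -> bool:
        """State changes are refused until the commitment expires."""
        if CLOCK < self.committed_until:
            return False
        self.value = new_value
        return True

if __name__ == "__main__":
    sensor = SensorProcess(value=42.0)
    msg = sensor.send()
    print(f"received {msg.value}, valid until t={msg.commit_time}")
    print("early update accepted?", sensor.update(43.0))   # False: still committed
    CLOCK = msg.commit_time
    print("late update accepted? ", sensor.update(43.0))   # True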

Proceedings ArticleDOI
21 May 1984
TL;DR: It is found that the duplex system has the lowest rate of occurrence of unsafe failures and of failures requiring maintenance action, and either a triplex or dual-duplex system provides orders-of-magnitude better freedom from service interruption than a Duplex system, which must shut down whenever one channel fails.
Abstract: Surface Transportation Systems are progressively making more use of microprocessors in vital control system applications. We have examined three types of control system architecture: duplex, triplex and dual duplex. Expressions are derived for the rate of occurrence at the system level of total failures, unsafe failures and service interruptions. We find that the duplex system has the lowest rate of occurrence of unsafe failures and of failures requiring maintenance action. Either a triplex or dual-duplex system provides orders-of-magnitude better freedom from service interruption than a duplex system, which must shut down whenever one channel fails. Sample implementations are shown for each architecture. It is shown that a duplex system can be easily expanded to a dual-duplex system and that this may be the preferable route in many cases.
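Because the derived expressions are not reproduced in the abstract, the sketch below only illustrates the general shape of such a comparison under crude independence assumptions (made-up failure rate and exposure window); it is not the paper's model, but it shows why a 2-out-of-3 triplex interrupts service far less often than a 2-out-of-2 duplex.

# Textbook-style illustration only: channels fail independently at rate L
# (per hour) over an exposure window of T hours.

from math import exp

def p_fail(rate_per_hour: float, hours: float) -> float:
    """Probability a single channel fails within the exposure window."""
    return 1.0 - exp(-rate_per_hour * hours)

def duplex_shutdown(p: float) -> float:
    """Duplex (2-out-of-2) stops serving as soon as either channel fails."""
    return 1.0 - (1.0 - p) ** 2

def triplex_shutdown(p: float) -> float:
    """Triplex (2-out-of-3) keeps serving unless two or more channels fail."""
    return 3 * p**2 * (1 - p) + p**3

if __name__ == "__main__":
    p = p_fail(rate_per_hour=1e-4, hours=24.0)      # assumed numbers
    print(f"per-channel failure probability: {p:.2e}")
    print(f"duplex  service interruption:    {duplex_shutdown(p):.2e}")
    print(f"triplex service interruption:    {triplex_shutdown(p):.2e}")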


Journal ArticleDOI
TL;DR: A programming and transformation system for describing, optimizing and mapping parallel algorithms onto a highly parallel multiprocessor architecture is introduced and the results concerning concurrent optimized versus pure sequential computing time (speed-up) are delineated.

Journal ArticleDOI
TL;DR: The analysis of satellite images may evolve from ad hoc methods of utilizing spatial and temporal context to the application of artificial-intelligence-oriented procedures of hierarchical scene analysis.

Proceedings ArticleDOI
25 Sep 1984
TL;DR: The SARDE project aims to replace a 5-million-page technical documentation collection with a fully electronic storage, retrieval and display system, able to provide high efficiency, great reliability and easy operation.
Abstract: The SARDE project aims to replace a 5-million-page technical documentation collection, partly reproduced at 2000 sites, with a fully electronic storage, retrieval and display system. The system architecture is as follows: (1) documents (in formats from A4 to A0) are acquired by scanners, then processed and compressed; (2) document storage uses several THOMSON GIGADISC units, supported by a dedicated architecture able to provide high efficiency, great reliability and easy operation; juke-boxes can be used; (3) documents are accessed through a classical database; (4) documents are sent to remote users over 64 kbit/s links; (5) remote users are provided with a workstation consisting of a high-definition screen (4 million pixels, 19-inch size), a powerful microcomputer, a small image printer, a local disk and a network interface. A prototype system is to be built for 1985 and tested in a real context with end users.