
Showing papers on "Reference architecture published in 1984"


Journal ArticleDOI
Hennessy1
TL;DR: In a VLSI implementation of an architecture, many problems can arise from the base technology and its limitations, so the architects must be aware of these limitations and understand their implications at the instruction set level.
Abstract: A processor architecture attempts to compromise between the needs of programs hosted on the architecture and the performance attainable in implementing the architecture. The needs of programs are most accurately reflected by the dynamic use of the instruction set as the target for a high level language compiler. In VLSI, the issue of implementation of an instruction set architecture is significant in determining the features of the architecture. Recent processor architectures have focused on two major trends: large microcoded instruction sets and simplified, or reduced, instruction sets. The attractiveness of these two approaches is affected by the choice of a single-chip implementation. The two different styles require different tradeoffs to attain an implementation in silicon with a reasonable area. The two styles consume the chip area for different purposes, thus achieving performance by different strategies. In a VLSI implementation of an architecture, many problems can arise from the base technology and its limitations. Although circuit design techniques can help alleviate many of these problems, the architects must be aware of these limitations and understand their implications at the instruction set level.
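The abstract's central measurement idea, that program needs are best reflected by the dynamic use of the instruction set produced by a high-level-language compiler, can be illustrated with a small profiling sketch. The trace format and mnemonics below are invented for illustration; the paper itself presents no code.

```python
from collections import Counter

# Hypothetical dynamic instruction trace emitted by a compiler/simulator pair;
# the mnemonics and format are illustrative, not taken from the paper.
trace = ["load", "add", "load", "branch", "store", "add", "add", "load", "branch"]

def dynamic_profile(trace):
    """Tally how often each opcode is actually executed (dynamic frequency),
    the measure the abstract says best reflects what programs need."""
    counts = Counter(trace)
    total = sum(counts.values())
    return {op: n / total for op, n in counts.most_common()}

if __name__ == "__main__":
    for op, share in dynamic_profile(trace).items():
        print(f"{op:8s} {share:6.1%}")
```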

216 citations


Book
01 Jan 1984

70 citations


Journal ArticleDOI
TL;DR: The author outlines the major features of an extension to the architecture that will permit the high-speed display and manipulation of multiple, independent, shaded three-dimensional objects represented as a voxel (volume-element) database with gray scale.
Abstract: The features and organization of a hardware architecture that is designed to facilitate the real-time display and manipulation of a single three-dimensional object on a raster-scan video display are briefly summarized. The author then outlines the major features of an extension to the architecture that will permit the high-speed display and manipulation of multiple, independent, shaded three-dimensional objects represented as a voxel (volume-element) database with gray scale. The objective is to provide many useful capabilities at or near video rates facilitating extensive real-time interaction. The architecture is highly modular, permitting a cost tradeoff to be made to achieve a given level of performance. It also includes a great deal of regularity in its structure, making it directly suitable for VLSI implementation. A key feature is that no computational operations more complex than adds, shifts, and comparisons are required in real time. The display characteristics for each object are controlled by a concise object descriptor table, which contains all of the control parameters required to process that object.
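As a rough illustration of the object descriptor table and the add/shift/compare constraint mentioned above, the sketch below uses invented field names; it is not the paper's actual table layout.

```python
from dataclasses import dataclass

# Hypothetical object descriptor table entry: field names are assumptions based
# on the control parameters the abstract lists (position, orientation, size,
# gray scale), not the paper's actual record layout.
@dataclass
class ObjectDescriptor:
    x: int              # object position, fixed-point screen units
    y: int
    z: int
    dx: int             # per-frame motion increments (adds only)
    dy: int
    dz: int
    scale_shift: int    # size control realized as a shift, avoiding multiplies
    gray_offset: int    # tone-scale adjustment

def advance_frame(obj: ObjectDescriptor) -> None:
    """Per-frame update using only additions, consistent with the claim that
    nothing more complex than add/shift/compare is needed in real time."""
    obj.x += obj.dx
    obj.y += obj.dy
    obj.z += obj.dz

def scaled_extent(obj: ObjectDescriptor, base_extent: int) -> int:
    # scaling by a power of two via a shift instead of a multiplication
    return base_extent >> obj.scale_shift

d = ObjectDescriptor(x=100, y=80, z=12, dx=2, dy=-1, dz=0, scale_shift=1, gray_offset=16)
advance_frame(d)
print(d.x, d.y, scaled_extent(d, 64))   # -> 102 79 32
```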

59 citations


Proceedings ArticleDOI
25 Jun 1984
TL;DR: An architecture for application of artificial intelligence to engineering design places emphasis on evaluation and redesign, thus reflecting the iterative nature of the design process.
Abstract: An architecture for application of artificial intelligence to engineering design is presented and discussed. The architecture places emphasis on evaluation and redesign, thus reflecting the iterative nature of the design process. Six independent knowledge sources are included having the following functions: initial design, evaluation, acceptability decisions, redesign, user-designer input, and flow of control. A "blackboard" is used to store and exchange information among the knowledge sources. The implementation of the architecture is illustrated with two examples from the mechanical design domain: v-belt drives and extruded aluminum shapes.
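A minimal sketch of the blackboard control cycle described above, with made-up knowledge-source behaviour; only the roles (initial design, evaluation, acceptability, redesign, control) and the shared blackboard are taken from the abstract.

```python
# Each function below stands in for one knowledge source; the v-belt numbers
# and acceptance threshold are invented for illustration.
def initial_design(bb):
    bb["design"] = {"belt_width_mm": 10}

def evaluate(bb):
    bb["score"] = 1.0 if bb["design"]["belt_width_mm"] >= 13 else 0.4

def acceptable(bb):
    return bb["score"] >= 0.9

def redesign(bb):
    bb["design"]["belt_width_mm"] += 1   # iterate toward an acceptable design

def run(max_iterations=20):
    blackboard = {}                      # shared store all knowledge sources read and write
    initial_design(blackboard)
    for _ in range(max_iterations):      # control knowledge source: a fixed loop here
        evaluate(blackboard)
        if acceptable(blackboard):
            break
        redesign(blackboard)
    return blackboard

print(run())
```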

41 citations


Proceedings ArticleDOI
TL;DR: It is argued that Ensemble represents a plausible first step toward a Session-layer protocol for “multi-endpoint connections”, a neglected area of communication protocol development.
Abstract: A layered architecture for the implementation of real-time conferences is presented. In a real-time conference a group of users, each at his or her own workstation, share identical views of on-line application information. The users cooperate in a problem-solving task by interactively modifying or editing the shared view or the underlying information, and can use a voice communication channel for discussion and negotiation. The lower layer in this architecture, named Ensemble, supports the sharing of arbitrary application-defined objects among the participants of a conference, and the manipulation of these objects via one or more application-defined groups of commands called activities. Ensemble provides generic facilities for sharing objects and activities, and for dynamically adding and removing participants in a conference; these can be used in constructing real-time conferencing systems for many different applications. An example is presented of how the Ensemble functions can be used to implement a shared bitmap with independent participant cursors. The relation between this layered architecture and the ISO Open Systems Interconnection reference model is discussed. In particular, it is argued that Ensemble represents a plausible first step toward a Session-layer protocol for “multi-endpoint connections”, a neglected area of communication protocol development.
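The following sketch illustrates the kind of generic facility the abstract attributes to Ensemble: shared application-defined objects, activity commands applied to them, and per-participant state such as cursors. The class and method names are invented for illustration, not the protocol's actual interface.

```python
class Conference:
    def __init__(self):
        self.participants = {}    # name -> per-participant state (e.g., cursor)
        self.objects = {}         # shared application-defined objects

    def join(self, name):
        self.participants[name] = {"cursor": (0, 0)}

    def leave(self, name):
        self.participants.pop(name, None)

    def share(self, key, obj):
        self.objects[key] = obj

    def apply(self, key, command, *args):
        """An 'activity' command applied to a shared object; every participant
        sees the same resulting state because there is a single replica here."""
        getattr(self.objects[key], command)(*args)

class Bitmap:
    def __init__(self, w, h):
        self.pixels = [[0] * w for _ in range(h)]

    def set_pixel(self, x, y, value=1):
        self.pixels[y][x] = value

conf = Conference()
conf.join("alice")
conf.join("bob")
conf.share("whiteboard", Bitmap(8, 8))
conf.apply("whiteboard", "set_pixel", 3, 2)      # both participants now see the dot
conf.participants["alice"]["cursor"] = (3, 2)    # cursors remain independent per participant
```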

25 citations



Journal ArticleDOI
TL;DR: The 32-bit NS16000 was specifically designed to support high-level languages and its page-based virtual memory system helps give it true mainframe capability.
Abstract: The 32-bit NS16000 was specifically designed to support high-level languages. Its page-based virtual memory system helps give it true mainframe capability.
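A toy illustration of page-based virtual-to-physical address translation, the mechanism the abstract highlights; the 512-byte page size and single-level table are simplifying assumptions, not a model of the actual NS16000 memory management unit.

```python
PAGE_SIZE = 512

# virtual page -> physical frame (None = page not resident in memory)
page_table = {0: 7, 1: 3, 2: None}

def translate(vaddr):
    """Split a virtual address into page number and offset, then map the page
    through the page table to a physical frame."""
    vpage, offset = divmod(vaddr, PAGE_SIZE)
    frame = page_table.get(vpage)
    if frame is None:
        raise RuntimeError(f"page fault on virtual page {vpage}")  # OS would load the page
    return frame * PAGE_SIZE + offset

print(hex(translate(0x223)))   # virtual 0x223 -> page 1, offset 0x23 -> physical 0x623
```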

18 citations


Journal ArticleDOI
TL;DR: This paper describes the software architecture of CONIC, a system to support distributed computer control applications, and emphasizes the distinction between the writing of individual software components and the construction and configuration of a system from a set of components.
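The distinction the paper draws, writing components separately from configuring a system out of them, can be sketched as follows; the port/link API below is invented for illustration and is not CONIC's configuration language.

```python
class Component:
    """Component logic is written once; wiring happens at configuration time."""
    def __init__(self, name):
        self.name = name
        self.links = []                 # output connections, set up by configuration

    def link_to(self, other):
        self.links.append(other)        # configuration-time wiring, no code change

    def send(self, msg):
        for target in self.links:
            target.receive(self.name, msg)

    def receive(self, sender, msg):
        print(f"{self.name} got {msg!r} from {sender}")

# --- configuration stage: build the system from unchanged components ---
sensor, logger, actuator = Component("sensor"), Component("logger"), Component("actuator")
sensor.link_to(logger)
sensor.link_to(actuator)
sensor.send({"temperature_c": 71})
```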

17 citations


Proceedings ArticleDOI
25 Jun 1984
TL;DR: This paper attempts to address the problems of high-speed simulation in a more systematic and detailed manner to achieve an enhanced performance from a simpler architecture.
Abstract: The growing need for high-speed digital logic simulation is well-known and several special-purpose hardware architectures to provide this have, to date, been presented. This paper attempts to address the problems of high-speed simulation in a more systematic and detailed manner to achieve an enhanced performance from a simpler architecture. The proposed architecture is capable of providing all the facilities currently available in software logic simulators.
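For reference, a minimal event-driven gate-level simulator of the kind such special-purpose hardware is meant to accelerate; the netlist format and the unit gate delay are assumptions made for illustration, not details from the paper.

```python
import heapq

# Tiny two-gate netlist: c = NAND(a, b); d is an inverter built from a NAND.
netlist = {
    "c": ("NAND", ["a", "b"]),
    "d": ("NAND", ["c", "c"]),
}
values = {"a": 0, "b": 1, "c": 1, "d": 0}     # consistent initial state
fanout = {"a": ["c"], "b": ["c"], "c": ["d"]}

def evaluate(gate):
    kind, inputs = netlist[gate]
    if kind == "NAND":
        return 0 if all(values[i] for i in inputs) else 1
    raise ValueError(kind)

def simulate(stimuli):
    events = list(stimuli)                     # (time, net, new value)
    heapq.heapify(events)
    while events:
        t, net, v = heapq.heappop(events)
        if values[net] == v:
            continue                           # no change, so no new activity
        values[net] = v
        for gate in fanout.get(net, []):
            heapq.heappush(events, (t + 1, gate, evaluate(gate)))  # unit gate delay
    return values

print(simulate([(0, "a", 1)]))   # drive a high at t=0 and propagate the change
```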

13 citations


Proceedings Article
01 Jan 1984

11 citations


Proceedings Article
01 Jan 1984
TL;DR: The main results of recent research on temporally sensitive data models are summarized, the lessons learned in their development are discussed, and the prospects and difficulties involved in incorporating a temporal dimension into database management systems (TODBs) are assessed.
Abstract: Attention to the temporal aspects of data management has intensified in recent years, focusing on data models and related systems that are sensitive to the ubiquitous temporal aspects of data. Both the growing need for easier access to historical data and the imminent availability of mass storage devices are making this a promising branch of database research, both practically and theoretically. In this paper we summarize the main results of recent research on temporally sensitive data models, discuss the lessons learned in their development, and assess the prospects and difficulties involved in incorporating a temporal dimension into database management systems (TODBs). In particular, three system levels are identified: the external user view of the database; an intermediate view closer to the structure of an existing data model; and an internal or implementation view defined in terms of low-level data structures. This general architecture coherently incorporates a variety of related research results and development experiences, and serves as the framework for theoretical and implementation research into such systems.

Introduction
It seems not only natural but even somewhat tardy that in our never-ending quest to capture more semantics in formal information systems, we are beginning to augment our conceptual models with a temporal dimension. Indeed, there is growing research interest in the nature of time in computer-based information systems and the handling of temporal aspects of data. Roughly 50 references to the subject were identified and annotated by Bolour (1982), addressing four major topical areas:
1. Conceptual data modeling: an extension to the relational model to incorporate a built-in semantics for time (Clifford, 1983 (a)).
2. Design and implementation of historical databases: the organization of write-once, historical databases (Ariav, 1981), and implementation of temporally oriented medical databases (Wiederhold, 1975).
3. 'Dynamic databases': the modeling of transition rules and temporal inferences from these rules (May, 1981).
4. AI-related research: the temporal understanding of time-oriented data (Kahn, 1975).

The underlying premise of this expanding body of research is the recognition that time is not merely another dimension, or another data item tagged along with each tuple, but rather a more fundamental organizing aspect that human users treat in very special ways. The results of this research reinforce the perception that designing temporal features into information systems requires new and different conceptual tools. A recent panel brought together many researchers in the field to discuss their work and identify promising research areas (Ariav, 1983 (a)). At the panel, four areas of research were identified, and in this paper we focus on two of these issues, namely the implementation of temporal DBMS and the data models underlying them. In most existing information systems, aspects of the data that refer to time are usually either neglected, treated only implicitly, or explicitly factored out (Tsichritzis, 1982). None of the three major data models incorporates a temporal dimension; users of systems based on these models who need temporal information must resort to patchwork solutions to circumvent the limitations of their systems. Furthermore, most information systems typically differentiate between present- and past-related questions in terms of data accessibility (e.g., online and offline storage, current database and log tapes). It is important to note that this situation prevails not because
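A toy valid-time relation illustrating the temporal semantics under discussion: each fact carries the interval over which it held, and the database can be queried "as of" a point in time. The schema and data are invented examples, not drawn from the paper.

```python
from datetime import date

salaries = [
    # (employee, salary, valid_from, valid_to) -- half-open interval [from, to)
    ("jones", 21000, date(1982, 1, 1), date(1983, 7, 1)),
    ("jones", 24000, date(1983, 7, 1), date(9999, 1, 1)),   # currently valid fact
]

def as_of(relation, when):
    """Return the tuples that were true at the given time -- the historical
    query an ordinary snapshot data model cannot express directly."""
    return [(emp, sal) for emp, sal, t0, t1 in relation if t0 <= when < t1]

print(as_of(salaries, date(1983, 1, 1)))   # -> [('jones', 21000)]
print(as_of(salaries, date(1984, 1, 1)))   # -> [('jones', 24000)]
```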


Proceedings Article
01 Jan 1984
TL;DR: A specification-based approach to control and data structure verification is presented which is appropriate for software and hardware fault tolerance in tightly coupled parallel processing systems.
Abstract: The issue of concurrent detection and recovery from design errors in the software and physical failures in the hardware of parallel processor systems is considered in this paper. In contrast to classical N-version programming and recovery block approaches to software fault tolerance, a specification-based approach to control and data structure verification is presented. The techniques use the hardware redundancy inherent in parallel processing systems to provide concurrent error detection and recovery. There is an ever-increasing need for high-performance reliable computation in many contexts of computer system application. In response to this need, a large number of industrial and academic researchers have made significant contributions to the synthesis and analysis of techniques for enforcing fault-tolerant computing. Advances have been made both in the areas of hardware and software fault tolerance. However, there is a distinct lack of research concerning an integrated approach to software and hardware appropriate for parallel processing systems. Software fault tolerance has up to the present time primarily consisted of either the N-version programming or the recovery block approach. N-version programming is a method of enforcing design diversity, and therefore error detection and recovery, through N independently coded versions of a program [1]. The recovery block approach applies an acceptance test to a primary routine for purposes of error detection. A failure to pass the acceptance test results in a transfer of control to an alternate routine for attempted recomputation of the desired function [2]. Both of these techniques have been used to provide for toleration of both hardware and software errors in distributed environments [3,4], while little concern has been given as to how software fault tolerance can be achieved in tightly-coupled parallel processing systems [5,6]. Unfortunately, the application of N-version programming to hardware and software fault tolerance results in full replication of both hardware and software, while the recovery block technique necessitates the derivation of comprehensive acceptance tests, which is difficult for many computational tasks. This paper introduces a specification-based approach to control and data structure verification which is appropriate for software and hardware fault tolerance in tightly coupled parallel processing systems. The techniques use the hardware redundancy inherent in a multiprocessor system to provide concurrent error detection and recovery. The paper's contributions focus on the concurrent detection of software design errors and hardware physical failures. Techniques for recovery that are currently under investigation are also presented in summary.
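The flavour of specification-based checking can be sketched as follows: a compact specification (here an invariant on a shared data structure) is checked alongside the computation, and a violation triggers recovery. The invariant, the injected bug, and the recovery action are all invented for illustration and are not the paper's actual technique.

```python
def spec_ok(queue):
    """Specification: the work queue is kept sorted by deadline, no duplicates."""
    deadlines = [job["deadline"] for job in queue]
    return deadlines == sorted(deadlines) and len(set(deadlines)) == len(deadlines)

def insert_job(queue, job):
    queue.append(job)                     # deliberately buggy fast path: order is lost

def checked_insert(queue, job):
    snapshot = list(queue)                # cheap checkpoint for recovery
    insert_job(queue, job)
    if not spec_ok(queue):                # a concurrent checker could run on a spare processor
        queue[:] = snapshot               # roll back ...
        queue.append(job)
        queue.sort(key=lambda j: j["deadline"])   # ... and recompute correctly
    return queue

q = [{"deadline": 3}, {"deadline": 9}]
print(checked_insert(q, {"deadline": 5}))   # -> jobs ordered 3, 5, 9
```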


Journal ArticleDOI
B. Daniels1
01 Mar 1984
TL;DR: The architecture of both the hardware and the software of the Lisa is examined in detail and design goals and considerations are discussed.
Abstract: The Lisa personal computer provides a new and better way of relating to a computer. This paper presents an outline of how such a complex, modern personal computer system is developed. The architecture of both the hardware and the software of the Lisa is examined in detail. Design goals and considerations are also discussed.


Journal ArticleDOI
TL;DR: The case method in architecture education is described; as presented, the method is restricted to the case of a single building and cannot be applied to other building types, e.g., office buildings.
Abstract: (1984). The Case Method in Architecture Education. Journal of Architectural Education: Vol. 37, Energy, pp. 10-11.

Proceedings ArticleDOI
24 Apr 1984
TL;DR: This paper describes the testbed software runtime environment that supports execution of the model, together with the support software used to construct a model of the described architecture and to execute and monitor it on the testbed.
Abstract: ADL/ADS is a testbed user interface tool for experimentation with critical research and design issues associated with distributed processing. One significant class of problems well-suited for investigation in a testbed are data engineering concepts. ADL/ADS provides a graphics interface for expressing a candidate distributed architecture at the Processor-Memory-Switch (PMS) level of hardware detail and the task-message-file level of software detail. Then the ADL/ADS support software is used to construct an existing model of the described architecture and to execute and monitor the model on the testbed. The testbed software runtime environment to support the execution of the model is described in this paper.
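A plain-data sketch of a PMS-level description with a task-message-file overlay, the two levels of detail the abstract says ADL/ADS captures graphically; the field names and example topology are invented for illustration.

```python
architecture = {
    "processors": ["P1", "P2"],
    "memories":   ["M1", "M2"],
    "switches":   ["S1"],
    "links": [("P1", "S1"), ("P2", "S1"), ("S1", "M1"), ("S1", "M2")],
}

software = {
    "tasks":    {"sense": "P1", "plan": "P2"},           # task -> processor mapping
    "messages": [("sense", "plan", "obstacle_report")],  # task-to-task traffic
    "files":    {"map_data": "M2"},                      # file -> memory placement
}

def connected(a, b, arch):
    """True if two PMS elements are linked directly or through one switch."""
    links = {frozenset(l) for l in arch["links"]}
    if frozenset((a, b)) in links:
        return True
    return any(frozenset((a, s)) in links and frozenset((s, b)) in links
               for s in arch["switches"])

# every software message must be realizable on the described hardware topology
for src, dst, _name in software["messages"]:
    assert connected(software["tasks"][src], software["tasks"][dst], architecture)
print("described architecture supports all task-to-task messages")
```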

Journal ArticleDOI
01 Jan 1984
TL;DR: A multiprocessor architecture has been developed which addresses the problem of the display and manipulation of multiple shaded three-dimensional objects derived from empirical data on a raster-scan CRT.
Abstract: A multiprocessor architecture has been developed which addresses the problem of the display and manipulation of multiple shaded three-dimensional objects derived from empirical data on a raster-scan CRT. Fully general control of such parameters as position, size, orientation, rotation, tone scale, and shading is accomplished at video rates, permitting real-time interaction with the display presentation. The GODPA architecture is based on a large number of relatively simple processing elements which access their own memory modules without input conflict. Reconstruction algorithms are used which do not require any complex arithmetic or logical high-speed operations. This hardware organization is highly modular and expandable and is ideally suited for implementation with VLSI technology.
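One way to picture the conflict-free memory organization described above is an interleaving of voxels across processing elements so that each element reads only its own module; the interleaving function below is an assumption for illustration, not the paper's actual scheme.

```python
NUM_PES = 8

def owning_pe(x, y, z):
    """Assign each voxel to exactly one processing element by interleaving coordinates."""
    return (x + y + z) % NUM_PES

def partition(volume_dim):
    """Split a cubic voxel volume into per-PE memory modules with no overlap."""
    modules = {pe: [] for pe in range(NUM_PES)}
    for x in range(volume_dim):
        for y in range(volume_dim):
            for z in range(volume_dim):
                modules[owning_pe(x, y, z)].append((x, y, z))
    return modules

sizes = {pe: len(v) for pe, v in partition(16).items()}
print(sizes)   # equal-sized modules, so PEs never contend for the same memory
```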





Book ChapterDOI
31 Jan 1984


Book ChapterDOI
01 Jan 1984
TL;DR: The Logic Machine Architecture is a layered family of software tools designed to enable the efficient and flexible use and development of significant theorem proving systems.
Abstract: The Logic Machine Architecture (LMA) is a layered family of software tools designed to enable the efficient and flexible use and development of significant theorem proving systems. As such it is an abstraction of the implementation details of many existing theorem proving programs.
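A tiny layered sketch in the spirit of the description above: a bottom layer of clause objects and a layer above it providing an inference rule. The layering and names are illustrative, not LMA's actual interface.

```python
from itertools import product

# --- layer 0: object representations ---
def clause(*literals):
    return frozenset(literals)          # e.g. clause("p", "~q")

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

# --- layer 1: inference ---
def resolvents(c1, c2):
    """All clauses obtainable by resolving c1 with c2 on one complementary literal pair."""
    out = set()
    for a, b in product(c1, c2):
        if a == negate(b):
            out.add((c1 - {a}) | (c2 - {b}))
    return out

print(resolvents(clause("p", "q"), clause("~q", "r")))   # -> {frozenset({'p', 'r'})}
```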

Book
01 Jan 1984
High Performance Graphics System Architecture: A Methodology for Design and Evaluation

Book
01 Jan 1984

Journal ArticleDOI
TL;DR: The power of the methodology lies in its generality: it can be used to design an architecture for practically any arbitrary computing environment.
Abstract: To design a computer architecture for a class of computations (algorithms), systematically and in a top-down fashion, a general and uniform methodology should be developed. For a given class, there exists an information structure of the architecture such that efficient performance can be achieved for the given class. The methodology is used to find such an information structure and then to define the control structure of the architecture at the functional level. The control structure itself can be treated as another architecture (with a different computing environment), and therefore its information structure and then its control structure (at a lower level) can again be found using the same methodology. This recursive application of the methodology to define and design information structures and control structures terminates when the control structure can be trivially 'hard-wired'. The power of the methodology lies in its generality: it can be used to design an architecture for practically any arbitrary computing environment.

Journal ArticleDOI
01 Mar 1984
TL;DR: This paper describes the design goals and implementation strategy of the Data General Desktop Generation Model 10 and focuses on four key elements of the design: the dual-processor hardware architecture, color graphics implementation, layered software architecture, and modular packaging.
Abstract: This paper describes the design goals and implementation strategy of the Data General Desktop Generation Model 10. It focuses on four key elements of the design: the dual-processor hardware architecture, color graphics implementation, layered software architecture, and modular packaging. The paper first outlines the corporate and market requirements which the designers sought to fulfill, and the alternative approaches considered. The paper also discusses the performance of the final product and its fit with the markets as originally defined.