
Showing papers on "Reference architecture" published in 1980


Book
01 Jan 1980
TL;DR: Applying principles and techniques of mathematics and signal processing to sub-salt images in order to determine the locations of petroleum and mineral deposits.
Abstract: WORK EXPERIENCE CGGVeritas, Houston. Imaging engineer, May 2007 – present. Signal processing, mathematical modelling, and numerical simulations in connection with the analysis and interpretation of three-dimensional and four-dimensional seismic data. Applying principles and techniques of mathematics and signal processing to sub-salt images in order to determine the locations of petroleum and mineral deposits.

189 citations


Book
01 Mar 1980

135 citations


Book ChapterDOI
TL;DR: The protocols implementing the functions of DNA are described, including the motivations for the specific designs, alternatives and tradeoffs, and lessons learned from the implementations.
Abstract: Recognizing the need to share resources and distribute computing among systems, computer manufacturers have been designing network components and communication subsystems as part of their hardware/software system offerings. A manufacturer's general purpose network structure must support a wide range of applications, topologies, and hardware configurations. The Digital Network Architecture (DNA), the architectural model for the DECnet family of network implementations, has been designed to meet these specific requirements and to create a communications environment among the heterogeneous computers comprising Digital's systems. This paper describes the Digital Network Architecture, including an overview of its goals and structure, and details on the interfaces and functions within that structure. The protocols implementing the functions of DNA are described, including the motivations for the specific designs, alternatives and tradeoffs, and lessons learned from the implementations. The protocol descriptions include discussions of addressing, error control, flow control, synchronization, flexibility, and performance. The paper concludes with examples of DECnet operation.
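Among the protocol concerns the paper lists is flow control. As a hedged illustration of the general technique (not DECnet's actual NSP mechanism; the class and parameter names are mine), a minimal sliding-window sender looks like this:

```python
class SlidingWindowSender:
    """Generic sliding-window flow control: the sender may have at most
    `window` unacknowledged messages outstanding at once."""
    def __init__(self, window):
        self.window = window
        self.next_seq = 0      # sequence number of the next message to send
        self.acked = -1        # highest sequence number acknowledged so far

    def can_send(self):
        # Outstanding = sent but not yet acknowledged.
        return self.next_seq - self.acked - 1 < self.window

    def send(self):
        assert self.can_send(), "window full: wait for an acknowledgement"
        seq = self.next_seq
        self.next_seq += 1
        return seq

    def ack(self, seq):
        # Cumulative acknowledgement: everything up to `seq` is confirmed.
        self.acked = max(self.acked, seq)

# A window of 2 blocks the third send until an acknowledgement arrives.
s = SlidingWindowSender(window=2)
s.send()
s.send()
blocked = not s.can_send()     # True: window is full
s.ack(0)
unblocked = s.can_send()       # True again after the ack
```

The receiver side of such a scheme simply delays its acknowledgements to throttle a fast sender, which is the essence of window-based flow control.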

68 citations


Book
01 Jan 1980
TL;DR: Systems analysis and design methodologies had their conceptual beginning in two basic techniques for building abstract models, but they cannot scale up to satisfy the current needs for far more complex and vastly larger business systems.
Abstract: Why has change been so hard on information systems? What methods worked in a smaller, simpler age, and why have they started failing? Why does the impact of change ricochet through our systems explosively and chaotically, and above all, why is it so hard to manage? We must have these answers to understand root causes. Only then can we fashion solutions that will fit the age of knowledge with its unceasing, pitiless, and ravenous appetite for rapid change, driven by the race for survival in a shifting landscape of high-stakes, chimerical, and short-lived opportunities. Therefore, let us digress briefly to understand lessons learned and the reasons why older methods are failing. Systems analysis and design methodologies had their conceptual beginning in two basic techniques for building abstract models. Both approaches had their genesis in the behavior of physical and engineering, not business, systems.1 Many of our problems with managing change and reusing knowledge stem from the intrinsic limitations we inherited from these two techniques. They cannot scale up to satisfy our current needs for far more complex and vastly larger business systems. Most analysis and design techniques in use today were derived from one of two fundamental techniques, and, unaware, we still carry their hidden legacy of limitations. The two fundamental techniques are:

50 citations


Journal ArticleDOI
Justin R. Rattner1, George W. Cox1
TL;DR: The talk will be an architectural, not a product, preview and is cleared to discuss only the concepts underlying the architecture, and will specifically not discuss the implementation.
Abstract: It is unusual for a talk such as this to be given before the product is introduced, but we wanted an opportunity to focus attention on the conceptual framework of the design before the practical details of its realization are made public. We are grateful to the management of Intel for the opportunity to do so. The talk will be an architectural, not a product, preview and is cleared to discuss only the concepts underlying the architecture. It will specifically not discuss the implementation. The product is, however, real, implemented, and running, and is scheduled to be introduced approximately six months from now. At this point, what can be said about the implementation is that its goal was to produce an all-VLSI system. This goal was achieved: the system uses several one- or two-component processors and occupies very little physical space.

19 citations


Book
01 Jan 1980
TL;DR: An abstract syntax is introduced to describe UML interactions, based on the semantics of plain interactions from Cengarle and Knapp.
Abstract: [Workshop handout slides by Stefan Wagner, Technische Universität München, PLV '06, Munich, Germany]
Syntax — Use
■ Graphical notations are often ambiguous
■ UML interactions can also be interpreted in different ways
■ We introduce an abstract syntax to describe the interactions
■ Textual and unambiguous
■ The basis are send and receive events
Syntax Fragment
Interaction ::= Event | CombinedFragment
CombinedFragment ::= sd({Instances})(Interaction)
 | seq(Interaction, Interaction)
 | par(Interaction, Interaction)
 | alt(Interaction, Interaction)
 | repeat({Instances})(Times, (Times | ∞), Interaction)
 | variant(BExp, Interaction)
Instances ::= Instance, Instances | Instance
Denotational Semantics — Why a Formal Semantics?
■ Yet another UML semantics?
■ UML 2.0 interactions are not handled by other semantics
■ More precision and unambiguity in language extensions
■ Important for effective tool support
Main Concepts
■ Based on the semantics of plain interactions from Cengarle and Knapp
■ The semantics of a plain interaction S states whether a trace t is positive or negative for the interaction, written t |=p S and t |=n S, respectively
■ If t is neither positive nor negative for S, then t is called inconclusive for S
■ The semantics of an extended interaction S depends on a configuration C: t |=C,p S and t |=C,n S
Case Study
■ Overview diagram: HTF, VarRoute, FixedRoute, HFixedA, HVarA, HDist, HNeg, HTS, ...
■ Extensive case study of a holonic flow of material in a production system
■ Autonomous vehicles (HTFs) transport engine parts between machine tools, where they are processed
■ Several variations concerning the process are possible
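The handout's abstract-syntax grammar (seq, par, alt, and so on over send/receive events) can be sketched as a small algebraic datatype. This is my own illustrative encoding covering only a subset of the operators; the constructor names follow the slides, while `render` is an assumed helper for printing terms in the textual syntax:

```python
from dataclasses import dataclass

@dataclass
class Event:
    kind: str        # "send" or "receive"
    message: str

@dataclass
class Seq:          # sequential composition of two interactions
    first: object
    second: object

@dataclass
class Par:          # parallel composition
    left: object
    right: object

@dataclass
class Alt:          # alternative: one of two interactions occurs
    option_a: object
    option_b: object

def render(i):
    """Pretty-print an interaction term in the textual syntax."""
    if isinstance(i, Event):
        return f"{i.kind}({i.message})"
    if isinstance(i, Seq):
        return f"seq({render(i.first)}, {render(i.second)})"
    if isinstance(i, Par):
        return f"par({render(i.left)}, {render(i.right)})"
    if isinstance(i, Alt):
        return f"alt({render(i.option_a)}, {render(i.option_b)})"
    raise TypeError(f"not an interaction: {i!r}")

# A send followed by the matching receive:
term = Seq(Event("send", "m"), Event("receive", "m"))
# render(term) == "seq(send(m), receive(m))"
```

A denotational semantics in the slides' style would then be a recursive function over these same constructors, classifying each trace as positive, negative, or inconclusive.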

12 citations



Journal ArticleDOI
TL;DR: In Computer Architecture News (CAN) of December 1979, Dennis Frailey responds (with an insider's view) to an earlier plea to modernize computer architecture.
Abstract: In Computer Architecture News (CAN) of December 1979, Dennis Frailey responds (with an insider's view) to an earlier plea to modernize computer architecture. Dennis pointed out that simple, common-sense business factors dominate the decisions of microprocessor manufacturers. Will the new product be a business success? Can it be manufactured cheaply in volume? Can it be brought to market quickly? Will it perform well? Is it compatible with existing products? He suggested that those of us seeking innovations in computer architecture must build our case around answers to these questions.

9 citations


01 Mar 1980
TL;DR: This thesis proposes the architecture of a personal computer that provides better support than conventional architectures for recently developed concepts of structured programming, and eliminates the need for an operating system.
Abstract: This thesis proposes the architecture of a personal computer that provides better support than conventional architectures for recently developed concepts of structured programming. The architecture separates implementation and high level language issues. The architecture eliminates the need for an operating system by including, in a language independent manner, the features normally found in operating systems. The architecture allows multiple languages to coexist safely. It is complete; the user has no need to leave the world defined by the architecture to solve a problem, including the important case of executing untrusted programs.

4 citations


Journal ArticleDOI
TL;DR: A capability implementation which uses the memory management hardware and the TRAP instruction of the higher members of the Digital Equipment Corporation PDP-11/XX to create a capability architecture processor has a strong similarity to that of the Plessey 250.
Abstract: This paper defines a capability implementation which uses the memory management hardware and the TRAP instruction of the higher members of the Digital Equipment Corporation PDP-11/XX (XX = 34, 45, 55, 70) to create a capability architecture processor. No modifications to hardware are necessary. The architecture created has a strong similarity to that of the Plessey 250. An operating system based on this architecture could provide a basis for implementation of highly reliable and secure software systems on a common and inexpensive minicomputer.
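As a rough software analogue of what this scheme does in hardware, the sketch below checks each memory access against a capability and "traps" on a violation. The segment layout, field names, and rights model are assumptions for illustration, not the paper's actual PDP-11 encoding:

```python
class CapabilityViolation(Exception):
    """Stands in for the hardware trap taken on an out-of-bounds or
    wrong-permission access."""

class Capability:
    """An unforgeable token granting bounded access to one segment."""
    def __init__(self, base, length, rights):
        self.base = base        # start of the segment
        self.length = length    # segment size in words
        self.rights = rights    # subset of {"read", "write"}

def access(cap, offset, right):
    """Check an access against a capability, as the memory-management
    hardware would, trapping on any violation."""
    if right not in cap.rights:
        raise CapabilityViolation(f"no {right} right on this capability")
    if not 0 <= offset < cap.length:
        raise CapabilityViolation("offset outside segment bounds")
    return cap.base + offset    # the address actually referenced

ro = Capability(base=0o4000, length=128, rights={"read"})
addr = access(ro, 5, "read")        # permitted: yields 0o4000 + 5
# access(ro, 5, "write") would raise CapabilityViolation
```

The point of doing this in hardware, as the paper does, is that the check costs nothing on the legitimate path; software only runs when the trap fires.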

3 citations



Proceedings ArticleDOI
06 May 1980
TL;DR: The project KENSUR, which aims at breaking down a compiler into a set of concurrent processes and executing them on a network of processors, is currently under development at I.R.I.S.A.
Abstract: The project KENSUR, which aims at breaking down a compiler into a set of concurrent processes and executing them on a network of processors, is currently under development at I.R.I.S.A. This paper discusses three main aspects of the construction of this network: (i) the various steps involved in the design of such a distributed system are described, and the architecture adapted to the logical structure and dynamic characteristics of the application is presented; (ii) the architecture of the currently implemented prototype is presented, and problems with inter-processor communications and memory sharing are discussed; (iii) the kernel system facilities for process management and synchronisation are described.
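The core idea, compiler phases as concurrent communicating processes, can be imitated with threads and queues. The two-phase split and the toy "lexing"/"parsing" functions below are illustrative only, not KENSUR's actual decomposition:

```python
import queue
import threading

def stage(work, inbox, outbox):
    """One compiler phase as a process: consume items, transform, forward."""
    while True:
        item = inbox.get()
        if item is None:          # end-of-stream marker
            outbox.put(None)
            return
        outbox.put(work(item))

# Two toy phases: "lexing" splits a line into tokens,
# "parsing" here just counts the tokens per line.
lex_in, parse_in, out = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=stage, args=(str.split, lex_in, parse_in)).start()
threading.Thread(target=stage, args=(len, parse_in, out)).start()

for line in ["a b c", "d e"]:
    lex_in.put(line)
lex_in.put(None)

results = []
while (r := out.get()) is not None:
    results.append(r)
# results == [3, 2]
```

The queues play the role of the inter-processor communication links the paper discusses; the pipeline overlaps the phases in time, which is where the concurrency pays off.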



Patent
12 Sep 1980
TL;DR: In this paper, the authors propose a fault-tolerant computer architecture based on a ring-shaped partly-meshed arrangement of microcomputers, in which each microcomputer monitors the functions of the microcomputer adjacent to it in the ring.
Abstract: The present invention is based on the object of creating a computer architecture which ensures fault-tolerant operation. The failure of, or a fault in, a component of the overall system does not lead to the failure of the overall system. Dedicated stand-by elements which are only used when needed, or duplicated elements which are only used for carrying out specific tasks in parallel, are not provided in the computer architecture according to the invention. The exception to this are the interface devices provided at the interfaces between the computer architecture and the peripherals, or the computer architecture and the user level. The object is achieved by a ring-shaped, partly-meshed arrangement of microcomputers, in which each microcomputer monitors the functions of the microcomputer adjacent to it in the ring. In principle, every microcomputer can take over the tasks of all other microcomputers.
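A minimal sketch of the monitoring-and-takeover idea, assuming a simple policy in which a failed node's tasks pass to its nearest live successor in the ring (the patent itself leaves the takeover rule open, since in principle every microcomputer can take over the tasks of all others):

```python
def neighbor(i, n):
    """Index of the node each microcomputer watches: its successor in the ring."""
    return (i + 1) % n

def reassign(tasks, alive):
    """After failures, hand each dead node's tasks to its nearest live
    successor in the ring (one simple takeover policy among many)."""
    n = len(tasks)
    new_tasks = {i: list(tasks[i]) if alive[i] else [] for i in range(n)}
    for i in range(n):
        if not alive[i]:
            j = neighbor(i, n)
            while not alive[j]:       # skip over further dead nodes
                j = neighbor(j, n)
            new_tasks[j].extend(tasks[i])
    return new_tasks

tasks = {0: ["io"], 1: ["control"], 2: ["logging"]}
alive = {0: True, 1: False, 2: True}
# Node 1 has failed: node 2, its ring successor, inherits "control".
after = reassign(tasks, alive)
# after == {0: ["io"], 1: [], 2: ["logging", "control"]}
```

In the patented arrangement the failure detection itself is also ring-shaped: node i notices node i+1 has stopped responding, so no central monitor exists to become a single point of failure.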

01 Feb 1980
TL;DR: A distinction between DBMS framework and DBMS architecture, and a functional DBMS framework that was developed using a functional approach in which a DBMS is characterized abstractly in terms of functional components and their potential relationships.
Abstract: The concept of DBMS architecture played an important role in the design, analysis, and comparison of DBMSs as well as in the development of other database concepts. The ANSI/SPARC prototypical database system architecture was a major contribution in this development. The architecture raised many issues, stimulated considerable research, and posed a number of new problems. Since the basic formulation of the ANSI architecture, in 1974, little consideration was given to resolving problems and accommodating new and future developments. The main problems concern its unnecessary rigidity. The contributions of this paper are a distinction between DBMS framework and DBMS architecture, and a functional DBMS framework. The framework was developed using a functional approach in which a DBMS is characterized abstractly in terms of functional components and their potential relationships. The approach is based on the notions of modularity and data abstraction as developed in software engineering and programming languages. (Author)
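The framework idea, a DBMS characterized abstractly as functional components plus their potential relationships, can be sketched as a small component graph. The component names and the feeds relation below are illustrative assumptions of mine, not taken from the paper:

```python
# Functional components, each described only by the mapping it performs.
components = {
    "parser":    "maps query text to an internal form",
    "optimizer": "maps an internal form to an access plan",
    "executor":  "maps an access plan to a result",
}

# Potential relationships: which component may feed which.
feeds = {("parser", "optimizer"), ("optimizer", "executor")}

def downstream(name):
    """Components reachable from `name` through the `feeds` relation."""
    seen, frontier = set(), {name}
    while frontier:
        nxt = {b for (a, b) in feeds if a in frontier} - seen
        seen |= nxt
        frontier = nxt
    return seen

# downstream("parser") == {"optimizer", "executor"}
```

Keeping the components abstract in this way is what lets the framework compare concrete DBMS architectures: each architecture is one particular choice of components and allowed relationships.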


Journal ArticleDOI
Karl Reed1
TL;DR: Professor Denning's articles (1), (2) and those of several others in CAN (3), (4), (5) raise a broader issue: how do we find a way forward in computer architecture?
Abstract: Professor Denning's articles (1), (2) and those of several others in CAN (3), (4), (5) raise a broader issue. How do we find a way forward in computer architecture? There seem to me to be three problems, all of which are political. The first of these involves the egos of computer architects. Give any normal person a whiff of a chance to design a computer and he or she will immediately take steps to ensure that their design, no matter what, is adopted. I speak from bitter personal experience on this matter. The other evidence which supports this view is that, with very few exceptions, it is possible to drive a truck through the designs of most modern computers. Indeed, the empirical evidence suggests that almost nothing has been learnt by most workers in this field. I could add to Denning's list of horrors, but I will just mention the IBM 360/370/303X/4300 series architecture as another (negative) example, and mention the DEC 10 and B6700 as machines with reasonable instruction sets, although they both have other problems. I take the extreme view that there has been very little development in CPU design as far as the general user is concerned. One must add, in the political scheme of things, that academics seem to be trained with an extremely narrow view of computer architecture, probably because most of them come to it either as digital systems engineers, via courses in electronics engineering obsessed with the problems of digital circuitry, or as computer scientists, in which case they never recover from their first assembly programming experiences. Which leads me to my second point:

01 Dec 1980
TL;DR: A set of recursive functions is developed to represent computer architecture at any desired level of detail; the definitions are insensitive to whether the functions are realized in software, hardware, or firmware.
Abstract: This paper presents a framework for computer architecture which is based on the principal function of a computer: to perform a mapping from some input into an output. A set of recursive functions is developed to represent computer architecture at any desired level of detail. The definitions are insensitive to whether the functions are realized in software, hardware, or firmware. The approach is illustrated using examples. (Author)
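A toy rendering of this approach: represent an architecture as functions that can be refined recursively into sub-functions, with the representation indifferent to how each function is realized. The fetch/decode/execute split below is my own illustrative example, not one from the paper:

```python
def leaf(f):
    """A primitive function: realized directly, whether in hardware,
    software, or firmware -- the representation does not care which."""
    return f

def compose(*fs):
    """A function defined recursively as a pipeline of sub-functions.
    The result is itself a function, so refinement can nest to any depth."""
    def composed(x):
        for f in fs:
            x = f(x)
        return x
    return composed

# A toy "machine" refined one level into fetch-decode-execute.
fetch   = leaf(lambda addr: ("instr", addr))
decode  = leaf(lambda instr: ("decoded", instr))
execute = leaf(lambda op: ("result", op))
machine = compose(fetch, decode, execute)

# machine(7) == ("result", ("decoded", ("instr", 7)))
```

Because `compose` yields an ordinary function, `machine` could itself appear as one sub-function of a larger system, which is the paper's "any desired level of detail".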

Journal ArticleDOI
Ken Aupperle1
TL;DR: This computer system employs functional partitioning and a segmented memory management approach to implicitly support modular programming, provide higher performance than a paging scheme, and allow easy upgrade to security-enforcing and virtual memory implementations.
Abstract: Peter J. Denning (CAN Apr. 80), as Dennis J. Frailey before him (CAN Dec. 79) and Rod Steel before both (CAN Jun. 78?), bemoan the lack of innovations in recent computer architectures, apparently overlooking at least one very significant counterexample. This computer system employs functional partitioning for performance, reliability, and flexibility [ Specialization (general-purpose processor, I/O processor, numeric processor) allows each functional unit to be optimized for what it does, rather than making one processor do everything tolerably well. A designer need only include those sections he needs, and the concurrent operation of those sections provides superior performance when compared to a single-CPU system. ]; has a segmented memory management approach to implicitly support modular programming, provide higher performance than a paging scheme, and allow easy upgrade to security-enforcing and virtual memory implementations [ A unique segmentation arrangement allows a referencing environment to be set up and implicitly used, reducing the size of addresses needed in intra-module references. Segmentation provides support for modularization along logical program-related lines, rather than inflexible hardware pages, narrowing the semantic gap. Since segmentation is implicit, performance-robbing memory maps are unnecessary, and upgrades to enforced-security and virtual memory are clean and obvious. ]; and has an addressing structure carefully chosen to support high-level languages [ A symmetrical and orthogonal continuum of four-component addressing modes narrows the semantic gap by allowing efficient reference to HLL data structures, even in stack activation records. ]