
Showing papers on "Shared resource" published in 1980


Journal ArticleDOI
TL;DR: The Pilot operating system provides a single-user, single language environment for higher level software on a powerful personal computer, whose features include virtual memory, a large “flat” file system, streams, network communication facilities, and concurrent programming support.
Abstract: The Pilot operating system provides a single-user, single language environment for higher level software on a powerful personal computer. Its features include virtual memory, a large “flat” file system, streams, network communication facilities, and concurrent programming support. Pilot thus provides rather more powerful facilities than are normally associated with personal computers. The exact facilities provided display interesting similarities to and differences from corresponding facilities provided in large multi-user systems. Pilot is implemented entirely in Mesa, a high-level system programming language. The modularization of the implementation displays some interesting aspects in terms of both the static structure and dynamic interactions of the various components.

154 citations


Journal ArticleDOI
A.D. Birrell1, R.M. Needham
TL;DR: The paper explores the design issues associated with such a file server and proposes some solutions.
Abstract: A file server is a utility provided in a computer connected via a local communications network to a number of other computers. File servers exist to preserve material for the benefit of client machines or systems. It is desirable for a file server to be able to support multiple file directory and access management systems, so that the designer of a client system retains the freedom to design the system that best suits him. For example, he may wish to use the file server to support a predefined directory structure or as a swapping disk. The paper explores the design issues associated with such a file server and proposes some solutions.

84 citations


Proceedings ArticleDOI
01 Jan 1980
TL;DR: The design of the Distributed Double-Loop Computer Network (DDLCN), which is a local-area distributed computing system that interconnects midi, mini and micro computers using a fault-tolerant double-loop network, is presented.
Abstract: This paper presents the design of the Distributed Double-Loop Computer Network (DDLCN), which is a local-area distributed computing system that interconnects midi, mini and micro computers using a fault-tolerant double-loop network. Several new features and novel concepts have been incorporated into the design of its subsystems, viz., the reliable communication network, the programming/operating system (P/OS), and the distributed loop data base system (DLDBS). The interface design is unique in that it employs tri-state control logic and bit-sliced processing, thereby enabling the network to become dynamically reconfigurable and fault tolerant with respect to communication link failure as well as component failure in the interface. Three classes of multi-destination communication protocols, each providing a different degree of reliability, have been incorporated into the network to facilitate efficient and reliable exchanges of messages.The P/OS is distinguished from other similar research efforts in that its ultimate goal is to support not only communication and cooperation among autonomous, distributed processes running at various nodes, but also to support convenient and correct resource sharing through program generation (semi-automatic programming) for application and systems programmers. A new concurrency control mechanism for DLDBS has been developed, which uses distributed control without global locking and is deadlock free. In addition to being simple to implement and having good performance (high throughput and low delay), the mechanism is also robust with respect to failure of both communication links and hosts.
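The abstract does not detail the DLDBS concurrency control algorithm itself, but basic timestamp ordering is a classic mechanism with the same stated properties: distributed control, no global locking, and deadlock freedom (transactions never wait; a conflicting operation is rejected and the transaction restarts with a fresh timestamp). A minimal illustrative sketch, not the paper's actual protocol:

```python
class Item:
    """A shared data item tagged with the timestamps of its last readers/writers."""
    def __init__(self):
        self.value = None
        self.read_ts = 0
        self.write_ts = 0

class RejectedError(Exception):
    """Operation violates timestamp order; the transaction must restart."""
    pass

def read(item, ts):
    if ts < item.write_ts:      # would read a value already overwritten by a younger txn
        raise RejectedError
    item.read_ts = max(item.read_ts, ts)
    return item.value

def write(item, ts, value):
    if ts < item.read_ts or ts < item.write_ts:  # would invalidate a younger txn's view
        raise RejectedError
    item.write_ts = ts
    item.value = value

x = Item()
write(x, ts=1, value="a")
assert read(x, ts=2) == "a"
try:
    write(x, ts=1, value="b")   # older transaction arrives late: rejected, never blocks
except RejectedError:
    pass
```

Because no operation ever waits for another, a wait-for cycle cannot form, which is where the deadlock freedom comes from.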

45 citations


Journal ArticleDOI
TL;DR: A model is developed which incorporates both a performance and availability measure of a general gracefully-degradable shared-resource multiprocessor, and their effects are compared on the basis of a performance/cost ratio.
Abstract: Shared-resource systems are particularly vulnerable to faults which contaminate the shared resource and result eventually in system failure. In shared-memory computer systems, resource guardians can provide some protection. A model is developed which incorporates both a performance and availability measure of a general gracefully-degradable shared-resource multiprocessor. Various system parameters are studied and their effects are compared on the basis of a performance/cost ratio. As the number of processors increases, or as the reliability of a processor or memory module decreases, some resource protection becomes necessary. There is a class of systems for which adding resource protection provides no measurable benefit.
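As an illustration of the kind of trade-off the abstract describes (a toy calculation, not the paper's model), suppose each of n processors is independently up with probability p, delivered work is proportional to the number of working processors, and resource protection is modeled as a hypothetical reliability boost bought at extra cost:

```python
from math import comb

def degradable_performance(n, p, min_up=1):
    """Expected throughput of an n-processor gracefully-degrading system:
    each processor is up with probability p, work scales with the number
    of working processors, and fewer than min_up working yields zero."""
    return sum(k * comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(min_up, n + 1))

def perf_cost_ratio(n, p, cost_per_proc=1.0, protection_cost=0.0, p_boost=0.0):
    """Performance/cost ratio; protection raises effective per-processor
    reliability (p_boost) at an extra fixed cost (both hypothetical)."""
    perf = degradable_performance(n, min(1.0, p + p_boost))
    return perf / (n * cost_per_proc + protection_cost)

# With already-reliable processors, protection costs more than it buys:
assert perf_cost_ratio(8, 0.90) > perf_cost_ratio(8, 0.90,
                                                  protection_cost=2.0,
                                                  p_boost=0.02)
```

Varying p downward in this sketch reproduces the abstract's qualitative conclusion: protection pays off only below some reliability threshold, and above it adds no measurable benefit.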

27 citations


01 Aug 1980
TL;DR: This document presents an overall view of the services provided by the facility and describes the primitives used in accessing it and is intended to be applicable to any installation whose computers are linked by a local network, and which possesses a large-capacity archival storage system.
Abstract: The file system described here will be the primary means by which a variety of computers connected together by a local network will share files with each other. The spectrum of computers using this network will range from mainframes supporting time-sharing to personal computers. The services provided by the central file system include the storage, migration and archiving of files, the naming and sharing of files and directories, and the authentication of users. This document presents an overall view of the services provided by the facility and describes the primitives used in accessing it. The back end file system described here is being implemented in the Department of Computer Science at Carnegie-Mellon University. However, the design is intended to be applicable to any installation whose computers are linked by a local network, and which possesses a large-capacity archival storage system. Keywords: Prototype systems

18 citations


01 Aug 1980
TL;DR: A capability-passing model for distributed access control is described, and several capability-management protocols are discussed, including the inalienable right to pass capabilities, capability theft through data theft and reflection, and resource sharing with integrated network directories.
Abstract: Computer systems being incorporated into mature support networks are facing a substantial protocol-implementation effort in granting controlled access to their resources and in obtaining access to network-supplied resources. This protocol-implementation effort can be significantly reduced by use of resource-sharing protocols that are independent of specific resource semantics. A capability-passing model for distributed access control is described and several capability-management protocols are discussed. Highlights of the discussion include the inalienable right to pass capabilities, capability theft through data theft and reflection, capability management by public key encryption, a capability passing structure, and resource sharing with integrated network directories. 9 figures, 2 tables.
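The abstract mentions capability management by public-key encryption; as a simpler illustrative stand-in, the sketch below makes capabilities unforgeable by binding a resource name and its granted rights to a MAC under a server-held key. All names here are hypothetical, not taken from the paper:

```python
import hmac
import hashlib

SERVER_KEY = b"resource-server-secret"  # hypothetical key known only to the server

def mint_capability(resource: str, rights: str) -> str:
    """Issue a capability: resource and rights plus a tag only the server
    can recompute, so a client cannot manufacture or amplify one."""
    msg = f"{resource}|{rights}".encode()
    tag = hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()
    return f"{resource}|{rights}|{tag}"

def check_capability(cap: str, resource: str, right: str) -> bool:
    """Verify the tag, then check that the named right was actually granted."""
    res, rights, tag = cap.rsplit("|", 2)
    msg = f"{res}|{rights}".encode()
    expected = hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(tag, expected)
            and res == resource
            and right in rights.split(","))

cap = mint_capability("/archive/fileA", "read,write")
assert check_capability(cap, "/archive/fileA", "read")
assert not check_capability(cap + "0", "/archive/fileA", "read")  # forged tag rejected
```

Note that this sketch addresses forgery, not the theft-by-copying problem the paper highlights: a capability is just data, so anyone who obtains the string can pass the check, which is exactly why the paper's protocols for capability theft and reflection matter.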

15 citations



Book ChapterDOI
01 Oct 1980
TL;DR: Planning and design considerations are presented for a full scale high performance heterogeneous computer network coupled by a 100 MBPS optical fiber ring bus that includes a guide subsystem to manage common resources and to control their usage.
Abstract: Planning and design considerations are presented for a full scale high performance heterogeneous computer network coupled by a 100 MBPS optical fiber ring bus. The design goals of this system are efficient resource sharing and improved RAS. All common resources, such as processors, peripheral devices, terminals and file devices, are directly connected to the ring bus, instead of directly to a processor. This network includes two or more large computers of different types, a guide subsystem to manage common resources and to control their usage, peripheral and terminal control subsystems, and a gateway processor.

5 citations





Journal ArticleDOI
TL;DR: The origins of library networks and their development in the United States in the late 1960s through the present are discussed and attention is given to how these two major components of library cooperation, which have tended to be separate, might become more closely integrated.
Abstract: This paper discusses the origins of library networks and traces their development in the United States in the late 1960s through the present. The concept of resource sharing, with particular attention to the interlibrary loan and programs for the cooperative acquisition and storage of materials, is examined in relationship to library networks. In particular, attention is given to the question of how these two major components of library cooperation, which have tended to be separate, might become more closely integrated.

Journal ArticleDOI
TL;DR: It is conjectured that the use of a special purpose high-level language is a most flexible and powerful means of generating particular network agents and that this technique permits the automation of a wide variety of network access procedures.

ReportDOI
01 Dec 1980
TL;DR: The basic file transport protocol; the special features, such as parameter negotiation and recovery, that are essential to successful file transfer in a network; and details of the file transport requests and responses are discussed.
Abstract: The Standard File Transport is the method used to transfer files between the components of the Los Alamos Scientific Laboratory's (LASL's) Integrated Computer Network (ICN). The basic file transport protocol; the special features, such as parameter negotiation and recovery, that are essential to successful file transfer in a network; and details of the file transport requests and responses are discussed. 8 figures.


Journal ArticleDOI
TL;DR: The network was developed on a low budget, with few programmers, using as a basis the DECNET proprietary system and protocols, and contains features that endow it with good potential for growth and evolution for resource sharing and multi-hosting.




Book ChapterDOI
Gene Hilborn1
01 Jan 1980
TL;DR: A new methodology is given for describing the effectiveness of communicating networks of processing subsystems which are subject to communication and processing failures or losses, and a class of performance measures called the survival index is defined axiomatically on a connectivity state for the network graph and its reductions.
Abstract: A new methodology is given for describing the effectiveness of communicating networks of processing subsystems which are subject to communication and processing failures or losses. The network attributes are described as a graph of nodes and edges. A class of performance measures called the “survival index” is defined axiomatically on a connectivity state for the network graph and its reductions. This definition is based on the idea of a team of members cooperating or sharing resources by use of communication links, and on the existence of synergism or a “team effect,” where a group of connected members are more effective than isolated members. The present work is restricted to homogeneous teams — where members have equal capability.
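The paper's axiomatic definition of the survival index is not reproduced in the abstract, but a plausible toy instance can be computed from the graph's connected components. Purely for illustration, assume the "team effect" is quadratic: a connected group of k members contributes k² effectiveness, so connectivity is worth more than the same members surviving in isolation:

```python
def components(nodes, edges):
    """Connected components of an undirected graph via union-find."""
    parent = {n: n for n in nodes}
    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path halving
            n = parent[n]
        return n
    for a, b in edges:
        parent[find(a)] = find(b)
    groups = {}
    for n in nodes:
        groups.setdefault(find(n), []).append(n)
    return list(groups.values())

def survival_index(nodes, edges):
    """Toy team-effect measure: each connected group of k members contributes
    k**2, normalized so a fully connected network scores 1.0."""
    total = sum(len(g) ** 2 for g in components(nodes, edges))
    return total / len(nodes) ** 2

nodes = ["A", "B", "C", "D"]
assert survival_index(nodes, [("A", "B"), ("B", "C"), ("C", "D")]) == 1.0
# Losing link C-D isolates D and the index drops: (3**2 + 1**2) / 4**2
assert survival_index(nodes, [("A", "B"), ("B", "C")]) == (9 + 1) / 16
```

This captures the synergism idea in the abstract: the same four surviving members score lower once a link failure splits them into a team of three and an isolated node.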

Journal ArticleDOI
TL;DR: It is suggested that it is useful to recognize a logical module for such programs which encapsulates groups of tasks which it is necessary to confine to a single processor node, due to timing or resource sharing constraints.
Abstract: This paper discusses the programming of networks of co-operating computers in a control environment. The computers in the systems considered are weakly coupled, having no shared store and communicating by messages over a network. Such a system could be programmed in Ada, or any other language, as a set of separate machines, with the communication explicitly handled by input/output procedures. However, this would not be in the spirit of a distributed system, involving a very rigid structure, decided when the program was written. A system for control should preferably be written as a complete program, in a uniform language, regarding the manner in which it is executed, in a single machine or in a network of smaller machines, as an implementation detail. Being designed for the programming of embedded systems, Ada has many features which are appropriate for this purpose. There do seem, however, to be a number of areas where work is needed to resolve difficulties which will arise in practice. This paper attempts to identify these problem areas and offer possible solutions to some of them. It is suggested that it is useful to recognize a logical module for such programs which encapsulates groups of tasks which it is necessary to confine to a single processor node, due to timing or resource sharing constraints. These modules can be programmed in Ada as tasks, nevertheless they represent a sufficiently distinct abstraction to deserve separate nomenclature; here they are referred to as zones.