
Showing papers on "Shared resource" published in 1976


Proceedings ArticleDOI
13 Oct 1976
TL;DR: The single primary, multiple backup technique for resource sharing is shown to have minimal delay and is superior to multiple primary techniques.
Abstract: A technique is described which permits distributed resources to be shared (services to be offered) in a resilient manner. The essence of the technique is to a priori declare one of the server hosts primary and the others backups. Any of the servers can perform the primary duties. Thus the role of primary can migrate around the set of servers. The concept of n-host resiliency is introduced and the error detection and recovery schemes for two-host resiliency are presented. The single primary, multiple backup technique for resource sharing is shown to have minimal delay. In the general case, this is superior to multiple primary techniques.
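The single-primary, multiple-backup scheme can be sketched as follows. This is our own minimal illustration (class and method names are ours, not the paper's), assuming failures are detected externally and reported to the server set:

```python
# Hypothetical sketch of the single-primary, multiple-backup technique:
# one host is declared primary a priori; when it fails, the primary
# role migrates to the next live backup in the declared order.

class ServerSet:
    def __init__(self, hosts):
        self.hosts = list(hosts)   # ordered: hosts[0] is the a priori primary
        self.alive = set(hosts)

    def primary(self):
        # The primary role migrates to the first live host in order.
        for h in self.hosts:
            if h in self.alive:
                return h
        raise RuntimeError("no live servers: resiliency exhausted")

    def fail(self, host):
        # A detected malfunction removes the host from service.
        self.alive.discard(host)

servers = ServerSet(["A", "B", "C"])
assert servers.primary() == "A"
servers.fail("A")                  # primary role migrates to backup B
assert servers.primary() == "B"
```

Because any server can perform the primary duties, recovery is just a change of role, which is why the scheme adds minimal delay in the failure-free case.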

494 citations


Journal ArticleDOI
01 Mar 1976
TL;DR: Numerical results for a variety of network configurations indicate that the heuristic algorithm, while not theoretically convergent, yields practicable low cost solutions with substantial savings in computer processing time and storage requirements.
Abstract: The problems of file allocation and capacity assignment in a fixed topology distributed computer network are examined. These two aspects of the design are tightly coupled by means of an average message delay constraint. The objective is to allocate copies of information files to network nodes and capacities to network links so that a minimum cost is achieved subject to network delay and file availability constraints. A model for solving the problem is formulated and the resulting optimization problem is shown to fall into a class of nonlinear integer programming problems. Deterministic techniques for solving this class of problems are computationally cumbersome, even for small size problems. A new heuristic algorithm is developed, which is based on a decomposition technique that greatly reduces the computational complexity of the problem. Numerical results for a variety of network configurations indicate that the heuristic algorithm, while not theoretically convergent, yields practicable low cost solutions with substantial savings in computer processing time and storage requirements. Moreover, it is shown that this algorithm is capable of solving realistic network problems whose solutions using deterministic techniques are computationally intractable.

203 citations


Proceedings ArticleDOI
James E. White1
07 Jun 1976
TL;DR: This paper proposes a high-level, application-independent framework for the construction of distributed systems within a resource sharing computer network that eliminates the need for application-specific communication protocols and support software, thus easing the task of the applications programmer and so encouraging the sharing of resources.
Abstract: This paper proposes a high-level, application-independent framework for the construction of distributed systems within a resource sharing computer network. The framework generalizes design techniques in use within the ARPA Computer Network. It eliminates the need for application-specific communication protocols and support software, thus easing the task of the applications programmer and so encouraging the sharing of resources. The framework consists of a network-wide protocol for invoking arbitrary named functions in a remote process, and machine-dependent system software that interfaces one applications program to another via the protocol. The protocol provides mechanisms for supplying arguments to remote functions and for retrieving their results; it also defines a small number of standard data types from which all arguments and results must be modeled. The paper further proposes that remote functions be thought of as remotely callable subroutines or procedures. This model would enable the framework to more gracefully extend the local programming environment to embrace modules on other machines.
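White's framework anticipates what later became remote procedure call: named functions invoked in a remote process, with arguments and results drawn from a small set of standard data types. A minimal sketch of the idea (the message format, function names, and JSON encoding are our assumptions for illustration, not the paper's protocol, which predates JSON):

```python
import json

# Registry of named functions that remote callers may invoke.
REGISTRY = {}

def export(fn):
    """Register a function so remote callers can invoke it by name."""
    REGISTRY[fn.__name__] = fn
    return fn

@export
def add(a, b):
    return a + b

def handle_request(raw):
    """Server side: decode a call message, invoke the named function,
    and encode the result using the same standard data types."""
    msg = json.loads(raw)
    result = REGISTRY[msg["procedure"]](*msg["arguments"])
    return json.dumps({"result": result})

def remote_call(procedure, *arguments):
    """Client stub: makes the remote function look like a local subroutine."""
    raw = json.dumps({"procedure": procedure, "arguments": list(arguments)})
    reply = handle_request(raw)    # in a real system this crosses the network
    return json.loads(reply)["result"]

assert remote_call("add", 2, 3) == 5
```

The client stub is what lets the local programming environment "gracefully extend" to modules on other machines: callers see an ordinary subroutine call.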

125 citations


Journal ArticleDOI
17 Jan 1976
TL;DR: The motivations for the design study of a modular, shared resource computing system are given by discussing fault-tolerance and resource utilization issues in parallel processing architectures and a design is presented which employs an array of pipelined arithmetic processors to perform array operations.
Abstract: The motivations for the design study of a modular, shared resource computing system are given by discussing fault-tolerance and resource utilization issues in parallel processing architectures. A design is presented which employs an array of pipelined arithmetic processors to perform array operations. The design provides for fault-tolerance (“graceful degradation”) capability and is efficient in using main memory bandwidth. Various architectural tradeoffs of the design are discussed. Some results of simulations used for the verification of design decisions are also reported.

16 citations


Proceedings ArticleDOI
07 Jun 1976
TL;DR: The functions required of a Network operating System are discussed and major differences between the role of the Network Operating System and an individual host operating system are identified.
Abstract: The viability of packet switched computer communication has been demonstrated. The potential for more effective computing through resource sharing and load leveling is evident. Realization of this potential requires expanded user support to reduce or eliminate much of the need for users to learn the command languages of the hosts being accessed and of the communications subnetwork. Such a capability can be provided by a mediating agent providing ease of access to resources and control of resource access---a role traditionally ascribed to an operating system in the context of an individual computer system. This mediating agent, hereafter termed a Network Operating System (NOS), requires careful exploration to determine its appropriate interaction with the operating systems of the hosts within the network. This paper discusses the functions required of a Network Operating System and identifies major differences between the role of the Network Operating System and an individual host operating system. As such, it is intended to provide a basic perspective on the field of Network Operating Systems.

12 citations


Proceedings ArticleDOI
07 Jun 1976
TL;DR: The data necessary to measure the many aspects of network behavior, the tools needed to gather this data and the means of collecting it at a central location are described, all in a fashion consistent with the system protocols and hardware constraints, and with minimal impact on the system operation itself.
Abstract: The growth of computer networks has proven both the need for and the success of resource sharing technology. A new resource sharing technique, utilizing broadcast channels, has been under development as a Packet Radio system and will shortly undergo testing. In this paper, we consider that Packet Radio system, and examine the measurement tasks necessary to support such important measurement goals as the validation of mathematical models, the evaluation of system protocols and the detection of design flaws. We describe the data necessary to measure the many aspects of network behavior, the tools needed to gather this data and the means of collecting it at a central location; all in a fashion consistent with the system protocols and hardware constraints, and with minimal impact on the system operation itself.

8 citations




01 Jan 1976
TL;DR: The problem of synchronizing a set of operations defined on a shared resource is studied, under the assumption that the decision as to which operations may be executed at a given time depends only on the sequence of operations already executed.
Abstract: The problem of synchronizing a set of operations defined on a shared resource is studied. It is assumed that the decision as to which operations may be executed at some given time is dependent only on the sequence in which the operations have already executed. Equivalence classes of these sequences, called states, can then be used to define synchronization. A restriction is made such that only those resources for which the synchronization can be expressed using a finite number of states will be studied. The states, along with a successor function, which is defined for a state-operation pair if the operation may start execution when the resource is in that state, form what are called synchronization relationships. A distinction is made between resources on which only one process may execute an operation at a time, called serial resources, and resources on which several processes may execute operations in parallel, called concurrent resources. To handle concurrent resources, the states must be modified so that they correspond to equivalence classes of sequences of perilogues instead of operations. A perilogue is either the start or the finish of the execution of some operation. Several variations of regular expressions are presented with which the synchronization for a shared resource might be expressed. Also, a method which can be used to implement the synchronization relationships is given.
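The state-and-successor-function idea can be made concrete with a toy finite-state example of our own (the resource and operation names are illustrative, not taken from the paper):

```python
# Hypothetical finite-state synchronization relationship: a one-slot
# buffer shared resource with operations "deposit" and "remove".
# Whether an operation may execute depends only on the state reached
# by the sequence of operations already executed.

# SUCCESSOR[state][operation] -> next state; an operation is permitted
# in a state iff the (state, operation) pair appears in the table.
SUCCESSOR = {
    "empty": {"deposit": "full"},
    "full":  {"remove": "empty"},
}

def run(sequence, state="empty"):
    """Execute a sequence of operations; return the final state,
    or None if some operation was not permitted in its state."""
    for op in sequence:
        if op not in SUCCESSOR[state]:
            return None            # operation must wait: not permitted here
        state = SUCCESSOR[state][op]
    return state

assert run(["deposit", "remove", "deposit"]) == "full"
assert run(["deposit", "deposit"]) is None   # second deposit must wait
```

Each state stands for an equivalence class of histories; here "empty" and "full" suffice, so the synchronization is expressible with a finite number of states, as the paper requires.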

4 citations


Journal ArticleDOI
17 Jan 1976
TL;DR: A generalized model of tightly-coupled multiprocessor systems is presented and then simplified to form a stochastic model for the study of interference; this model is extended to capture the interacting effects of contention in systems where processors contend for several shared resources.
Abstract: This paper presents a generalized model of tightly-coupled multiprocessor systems which is then simplified to form a stochastic model for the study of interference. Analysis is performed on the resource contention which is characteristic of such systems in order to find a measure of system performance. After reviewing the problem of memory interference, the analysis is extended to contention in other individual resources, then combined to form a model for the interacting effects of contention in systems where processors contend for several shared resources.
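The flavor of such interference analysis can be seen in a classic simplified memory-contention formula (our illustration; the paper's stochastic model is more general than this):

```python
# Simplified memory-interference estimate: p processors each issue a
# request to one of m memory modules, chosen uniformly at random and
# independently. The expected number of distinct busy modules is
#     m * (1 - (1 - 1/m)**p)
# which falls short of min(p, m); the shortfall is the interference.

def expected_busy_modules(p, m):
    return m * (1 - (1 - 1 / m) ** p)

# With 4 processors on 4 modules, only about 2.73 modules are busy
# on average, not 4 -- contention wastes the rest of the bandwidth.
assert round(expected_busy_modules(4, 4), 3) == 2.734
```

A single processor sees no interference (`expected_busy_modules(1, m) == 1.0`); the measure degrades as more processors contend for the same modules.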

3 citations


01 Jun 1976
TL;DR: In this paper, the problem of optimal dynamic file allocation when more than one copy is allowed to exist in the system at any given time is considered, and the use of two types of control variables, one for adding new copies to the system and the other for erasing copies, gives the model certain properties that permit the construction of an efficient algorithm to solve the optimization problem.
Abstract: One of the main reasons computer networks are an area of great attention and development today is their capability to provide the facilities for common use of data bases and information files by all computers in the system. When a file is used by several computers in the network, it can be stored in the memory of at least one of them and accessed by the other computers via the communication channels. In general, the cost of querying is reduced as we increase the number of copies in the system. On the other hand, storage costs, limitations on the size of the memories, and the cost of updating (every copy must be updated) dictate decreasing the number of copies. This thesis considers the problem of optimal dynamic file allocation when more than one copy is allowed to exist in the system at any given time. The use of two types of control variables, one for adding new copies to the system and the other for erasing copies, gives the model certain properties that permit the construction of an efficient algorithm to solve the optimization problem. In the last chapter the model and algorithms are applied to several numerical examples.
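The query/storage/update trade-off described above can be seen in a toy cost model of our own (the cost function and constants are illustrative assumptions, not the thesis's formulation):

```python
# Toy file-allocation trade-off: querying cost falls as the number of
# copies grows, while storage and update costs grow linearly with the
# copy count, so total cost is minimized at some intermediate number.

def total_cost(copies, query_rate=100.0, storage=8.0, update=12.0):
    # Query cost modelled as inversely proportional to the copy count;
    # each copy must be stored and kept up to date on every change.
    return query_rate / copies + copies * (storage + update)

# Sweep candidate copy counts to find the cheapest allocation.
best = min(range(1, 11), key=total_cost)
assert best == 2
```

The dynamic problem the thesis actually solves is harder: the optimal copy count changes over time, which is why it introduces separate control variables for adding and erasing copies.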

2 citations


Book ChapterDOI
31 Mar 1976
TL;DR: Using a datagram service, the IPC structure can be discussed without consideration of sequencing, routing algorithms, and flow control.
Abstract: Resource sharing, as we shall use the term, concerns the ability to access programs (interprogram or intertask communication) and data (message exchange, file transfer) from computers which act both autonomously and independently. In order to support software resource sharing, operating system facilities must be provided for interprocess communication. Using a datagram service, the IPC structure can be discussed without consideration of sequencing, routing algorithms, and flow control.

01 Feb 1976
TL;DR: Formulas are developed for the study of improvements in data base availability due to the existence of a backup copy located at an alternate site in a network.
Abstract: Formulas are developed for the study of improvements in data base availability due to the existence of a backup copy located at an alternate site in a network. Several backup strategies are compared.
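The basic arithmetic behind such availability formulas can be sketched as follows (our illustration under an independence assumption; the report's formulas and strategy comparisons are more detailed):

```python
# Illustrative availability arithmetic: if each site holding a copy is
# independently up with probability a, the data base is available
# whenever at least one copied site is up.

def availability(a, copies):
    """Probability that at least one of `copies` independent sites is up."""
    return 1 - (1 - a) ** copies

# A single copy at a site that is up 90% of the time gives 0.90;
# a backup copy at an alternate site raises this to 0.99.
assert round(availability(0.9, 1), 2) == 0.9
assert round(availability(0.9, 2), 2) == 0.99
```

Each additional copy multiplies the unavailability by another factor of (1 - a), which is why even one backup copy yields a large improvement.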

02 Aug 1976
TL;DR: This report contains results to date of a study of file allocation in a network and some new theorems for simplifying the computational problem are presented.
Abstract: This report contains results to date of a study of file allocation in a network. Models and algorithms contained in the literature are surveyed. Some new models (for special situations and for update distribution through a primary site) are developed. Some new theorems for simplifying the computational problem are presented.


30 Sep 1976
TL;DR: This report covers three topics: the implementation of an experimental distributed data management system, the design of a network virtual file system, and recent research in query strategies for distributed data management systems.
Abstract: This report covers three topics: the implementation of an experimental distributed data management system, the design of a network virtual file system, and recent research in query strategies for distributed data management systems.

31 May 1976
TL;DR: This paper describes a strategy which allows resources to be shared in a resilient manner while minimizing user delay; the strategy supports two-host resiliency.
Abstract: This paper describes a strategy which allows resources to be shared in a resilient manner while minimizing user delay. The strategy described supports two-host resiliency. That is, at least two of the cooperating hosts must simultaneously malfunction while in the process of cooperation, and the malfunction must be of a very restricted form, in order for an undetectable or unrecoverable failure to occur. Extension to n-host resiliency is also discussed.