
Showing papers on "Shared resource", published in 1985


Journal ArticleDOI
01 Dec 1985
TL;DR: This paper presents the design and rationale of a distributed file system for a network of more than 5000 personal computer workstations, with careful attention paid to the goals of location transparency, user mobility and compatibility with existing operating system interfaces.
Abstract: This paper presents the design and rationale of a distributed file system for a network of more than 5000 personal computer workstations. While scale has been the dominant design influence, careful attention has also been paid to the goals of location transparency, user mobility and compatibility with existing operating system interfaces. Security is an important design consideration, and the mechanisms for it do not assume that the workstations or the network are secure. Caching of entire files at workstations is a key element in this design. A prototype of this system has been built and is in use by a user community of about 400 individuals. A refined implementation that will scale more gracefully and provide better performance is close to completion.

298 citations


Journal ArticleDOI
TL;DR: Three algorithms necessary to maintain load balancing in this system are introduced: the local load algorithm, used by each processor to monitor its own load; the exchange algorithm, for exchanging load information between the processors, and the process migration algorithm that uses this information to dynamically migrate processes from overloaded to underloaded processors.
Abstract: This paper deals with the organization of a distributed load-balancing policy for a multicomputer system which consists of a cluster of independent computers that are interconnected by a local area communication network. We introduce three algorithms necessary to maintain load balancing in this system: the local load algorithm, used by each processor to monitor its own load; the exchange algorithm, for exchanging load information between the processors, and the process migration algorithm that uses this information to dynamically migrate processes from overloaded to underloaded processors. The policy that we present is distributed, i.e. each processor uses the same policy. It is both dynamic, responding to load changes without using an a priori knowledge of the resources that each process requires; and stable, unnecessary overloading of a processor is minimized. We give the essential details of the implementation of the policy and initial results on its performance. Our results confirm the feasibility of building distributed systems that are based on network communication for uniform access, resource sharing and improved reliability, as well as the use of workstations without a secondary storage device.
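A toy version of the three algorithms, assuming load is simply a run-queue length (the migration step is shown centralized for brevity, whereas the paper's policy is fully distributed; all names are hypothetical):

```python
class Processor:
    """One node in the cluster; 'load' is its run-queue length."""
    def __init__(self, pid, load):
        self.pid, self.load = pid, load
        self.view = {}  # peers' last reported loads (exchange algorithm)

    def local_load(self):
        # Local load algorithm: each processor monitors only its own load.
        return self.load

    def exchange(self, peers):
        # Exchange algorithm: report own load to every peer.
        for peer in peers:
            peer.view[self.pid] = self.local_load()

def migrate(processors, threshold=2):
    # Process migration algorithm: repeatedly move one process from the
    # most loaded to the least loaded node until the imbalance is small,
    # which keeps the policy stable (no needless ping-ponging of work).
    moved = 0
    while True:
        busiest = max(processors, key=lambda p: p.load)
        idlest = min(processors, key=lambda p: p.load)
        if busiest.load - idlest.load <= threshold:
            return moved
        busiest.load -= 1
        idlest.load += 1
        moved += 1
```

The threshold is what makes the policy stable in the paper's sense: migration stops once the gain no longer justifies the cost of moving a process.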

198 citations


Journal ArticleDOI
01 Aug 1985
TL;DR: Load sharing is the process of sharing computational resources by transparently distributing the system workload, transferring work from nodes that are heavily loaded to nodes that are lightly loaded.
Abstract: One goal of locally distributed systems is to facilitate resource sharing. Most current locally distributed systems, however, share primarily data, data storage devices, and output devices; there is little sharing of computational resources. Load sharing is the process of sharing computational resources by transparently distributing the system workload. System performance can be improved by transferring work from nodes that are heavily loaded to nodes that are lightly loaded. Load sharing policies may be either static or adaptive. Static policies use only information about the average behavior of the system; transfer decisions are independent of the actual current system state. Static policies may be either deterministic (e.g., “transfer all compilations originating at node A to server B”) or probabilistic (e.g., “transfer half of the compilations originating at node A to server B, and process the other half locally”).
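The article's two example static policies can be sketched directly (node names A and B come from the quoted examples; the function names are hypothetical). Note that neither policy consults the current system state:

```python
import random

def static_probabilistic_policy(origin, p_transfer=0.5, rng=random):
    """Static probabilistic load sharing (the article's example): send
    half of the compilations originating at node A to server B, and
    process the rest locally.  The decision ignores current load."""
    if origin == "A" and rng.random() < p_transfer:
        return "B"     # transfer to the server
    return origin      # process locally

def static_deterministic_policy(origin):
    """Static deterministic variant: ALL of node A's compilations go
    to server B, again independent of the current system state."""
    return "B" if origin == "A" else origin
```

An adaptive policy, by contrast, would take current load measurements as an additional input to the transfer decision.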

46 citations


Journal ArticleDOI
TL;DR: The Butler relies upon a capability-based accounting system called the Banker to monitor the use of local resources and provides support for resource sharing.
Abstract: A network of personal computers may contain a large amount of distributed computing resources. For a number of reasons it is desirable to share these resources, but sharing is complicated by issues of security and autonomy. A process known as the Butler addresses these problems and provides support for resource sharing. The Butler relies upon a capability-based accounting system called the Banker to monitor the use of local resources.

37 citations


Proceedings ArticleDOI
01 Mar 1985
TL;DR: The basic design of the file server is described with emphasis on the performance-critical areas and it is concluded that the buffering and disk layout strategies used work well under load.
Abstract: Network file access efficiency is a key issue in a distributed system's performance, especially when many of the network nodes are diskless and rely on a shared network file server. We have designed and implemented a file server that uses the network interprocess communication of the V kernel for file access. This paper describes the basic design of the file server with emphasis on the performance-critical areas. We also give its performance under a variety of workloads and compare these measurements with results predicted by other modeling studies. We conclude that the buffering and disk layout strategies we have used work well under load. Performance results are consistent with a previous modeling study in finding that the file server processor is the most critical resource. However, our experiments with high load were limited by the small amount of buffering on the network interface: large numbers of packets are dropped at high load, giving poorer than predicted performance.
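The kind of buffering strategy whose behavior under load the paper measures can be illustrated with a generic LRU block cache (parameters and structure hypothetical, not the V file server's actual design):

```python
from collections import OrderedDict

class BufferCache:
    """Minimal sketch of a file-server buffer cache: a fixed number of
    block buffers with least-recently-used (LRU) replacement."""

    def __init__(self, n_buffers, read_block):
        self.buffers = OrderedDict()   # block id -> data, in LRU order
        self.n_buffers = n_buffers
        self.read_block = read_block   # fallback to the (simulated) disk
        self.misses = 0

    def get(self, block_id):
        if block_id in self.buffers:
            self.buffers.move_to_end(block_id)  # mark recently used
            return self.buffers[block_id]
        self.misses += 1
        data = self.read_block(block_id)        # disk read on a miss
        self.buffers[block_id] = data
        if len(self.buffers) > self.n_buffers:
            self.buffers.popitem(last=False)    # evict the LRU block
        return data
```

With a small buffer pool, repeated access to a working set larger than the pool forces evictions and disk reads, which is exactly the kind of effect the paper's load experiments expose.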

25 citations


Journal ArticleDOI
TL;DR: The authors propose a method of using the primitive operations of collision detection and broadcast in multiaccess networks to efficiently distribute status information and identify the extremum.
Abstract: A study of resource scheduling based on a distributed state-dependent discipline for a system of processors connected by a local multiaccess network is made. The scheduling problem is reduced to the identification of the extremum from a set of physically dispersed random numbers. The authors propose an efficient method of utilizing the primitive operations of collision detection and broadcast in multiaccess networks to efficiently distribute status information and to identify the extremum. The optimal performance of extremum identification is found to be constant and on the average independent of the number of contending processors. The protocol can be implemented either by minor hardware modification of existing multiaccess-network interfaces or in software.
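One way the window-based extremum search can be simulated (an illustrative sketch, not the authors' protocol; values are assumed distinct and scaled into [0, 1)). Each loop iteration models one bus slot: stations whose value lies in the lower half-window transmit, and everyone observes idle, success, or collision via collision detection, so no explicit messages are needed:

```python
def window_search_min(values, lo=0.0, hi=1.0):
    """Illustrative window search for the minimum of physically
    dispersed values.  Outcomes per slot: 0 transmitters = idle,
    1 = success (unique minimum found), >1 = collision."""
    steps = 0
    while True:
        steps += 1
        mid = (lo + hi) / 2.0
        transmitters = [v for v in values if lo <= v < mid]
        if len(transmitters) == 1:  # success: unique value is the minimum
            return transmitters[0], steps
        if transmitters:            # collision: minimum is in lower half
            hi = mid
        else:                       # idle: minimum is in upper half
            lo = mid
```

The invariant is that no value lies below `lo`, so a lone transmitter in [lo, mid) must be the global minimum; this is consistent with the constant expected step count the paper reports.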

22 citations


Journal ArticleDOI
TL;DR: It is suggested that a new basis for measuring the performance of interlending systems needs to be established in order to ensure equity and to limit resource sharing to specialized materials that support the research efforts of faculty and doctoral students.
Abstract: Resource sharing has expanded significantly in the past decade. While the benefits have been obvious, the mounting costs of lending and borrowing are causing serious concern. Higher costs and the physical deterioration of library research collections are likely to influence not only national but also international resource-sharing agreements. In this article the roles and structures of several European national lending systems are described. The purpose is to find useful models to guide future developments both at home and abroad. The author suggests that a new basis for measuring the performance of interlending systems needs to be established in order to ensure equity and to limit resource sharing to specialized materials that support the research efforts of faculty and doctoral students.

7 citations


Journal ArticleDOI
TL;DR: The Desperanto Research project has been investigating the design and development of software to support distributed computing environments consisting of networks of autonomous, heterogeneous computer systems, with the goals of finding universally applicable solutions to problems (i.e. solutions which are not specific to one system or application domain) and solutions which do not require changing existing software.
Abstract: The Desperanto Research project has been investigating the design and development of software to support distributed computing environments consisting of networks of autonomous, heterogeneous computer systems. The design goals which distinguish this project from other related efforts are (1) finding universally applicable solutions to problems (i.e. solutions which are not specific to one system or application domain), and (2) finding solutions which do not require changing existing software.

7 citations


Proceedings ArticleDOI
15 Dec 1985
TL;DR: A simulation model of a file system is introduced and it is shown how the decision about where to place a file or to assign a process can be based on the workload's and the system's characterization.
Abstract: This paper presents a number of solutions to the problems of data access, concurrency control, transaction serialization, and deadlock control based on the examples of existing file systems. Also, policies for file placement and process assignment are discussed from the viewpoint of their impact on system performance and reliability. The impact of these policies depends on the solutions to the problems referred to above. A simulation model of a file system is introduced. This model is validated using data collected in a small business installation. A number of simulation experiments are presented. It is shown how the decision about where to place a file or to assign a process can be based on the workload's and the system's characterization. An algorithm for improving performance and reliability by file placement and/or process assignment is finally presented.

4 citations


01 Nov 1985
Abstract: Sarason, Seymour, et al. Fulfilling the Promise: A Fresh Look at Collaboration and Resource Sharing in Education (Pathways to Growth series). Northwest Regional Educational Laboratory, Portland, Oregon; sponsored by the National Institute of Education (ED), Washington, DC. November 1985; contract 400-83-0005; 65 pages. For the other documents in this set, see SP 027 182-184. Publication type: Guides, Non-Classroom Use (055); Reports, Descriptive (141).

3 citations


01 Jan 1985
TL;DR: Evaluations indicated that distributed state-dependent scheduling schemes are preferable for resource scheduling, and a methodology has been proposed to optimize the resource mapping, reduce the scheduling overhead, and facilitate a fast implementation.
Abstract: In this thesis, we have studied the resource-allocation problem in resource sharing computer systems. A resource sharing computer system is characterized by a pool of request generators and a pool of resources interconnected by a resource sharing interconnection network. Central issues in resource scheduling include the minimization of resource conflicts, the reduction of the probability of network blockage or congestion, and the balance of workload among resources. Evaluations indicated that distributed state-dependent scheduling schemes are preferable. Moreover, integrating the scheduling schemes into the network protocol significantly reduces the overhead of collecting status information. A methodology has been proposed to optimize the resource mapping, reduce the scheduling overhead, and facilitate a fast implementation. The methodology has been applied to design resource-allocation schemes for three representative networks of increasing complexity. For a single contention-bus network, the resource scheduling problem is reduced to the problem of identifying the station with the minimum parameter among a set of physically dispersed random numbers. A distributed minimum-search algorithm that utilizes the collision detection capability of the contention bus has been proposed. The window-search procedure can resolve the global minimum in an average of 2.4 contention steps. No explicit message transfer is required in this process. For a multiple contention-bus network, the resource-allocation problem is reduced to the problem of ordered selections. A multi-window search procedure that is an extension of the single-bus search procedure has been proposed to select the minimum numbers in parallel. The average time complexity of this search procedure is about O(log₂ t), where t is the number of buses. The mapping between the selected resources and processors has been found to be the classical stable-marriage problem.
For resource allocation on multistage interconnection networks, the problem is transformed into different network-flow optimization problems, for which there exist many efficient algorithms. The network-flow algorithms have been integrated into the network protocol with a VLSI systolic-array architecture that allows resource scheduling to be carried out at signal propagation speed.
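The resource-to-processor mapping mentioned above is the classical stable-marriage problem, which the Gale-Shapley proposal algorithm solves; a generic sketch (not the thesis's implementation, and names are hypothetical):

```python
def gale_shapley(proc_prefs, res_prefs):
    """Classical Gale-Shapley stable matching: processors 'propose' to
    resources in preference order; each resource keeps the best
    proposer seen so far according to its own preference list."""
    free = list(proc_prefs)                 # unmatched processors
    next_choice = {p: 0 for p in proc_prefs}
    engaged = {}                            # resource -> processor
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in res_prefs.items()}
    while free:
        p = free.pop(0)
        r = proc_prefs[p][next_choice[p]]   # next resource to try
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])         # displace the worse match
            engaged[r] = p
        else:
            free.append(p)                  # rejected; propose again
    return engaged
```

The result is stable: no processor-resource pair would both prefer each other over their assigned partners.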

Proceedings ArticleDOI
01 Mar 1985
TL;DR: A new approach to estimating serialization delays in computer systems is presented: an analytic model describes synchronization of accesses to shared resources, and performance measures provided by the simulation and analytic models are compared.
Abstract: This paper presents a new approach to estimating serialization delays in computer systems. An analytic model is introduced to describe synchronization of accesses to shared resources. The model parameters are estimated from the values of measurable quantities. The probabilities that a shared resource is locked, thereby causing other processes to wait for the resource to be accessible, are calculated, under certain assumptions, as functions of the probabilities of access and of the number of active transactions. The service times of lock servers are calculated as functions of the loads on all service centers. Performance measures applicable to the evaluation of computer systems are introduced and evaluated. An example involving a distributed file system and measurement data collected in a small business installation is given to compare performance measures provided by the simulation and analytic models.
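Under a strong independence assumption (an illustrative reading only, not the paper's actual formulae), the probability that an arriving process finds a shared resource locked can be written as a function of the access probability and the number of active transactions:

```python
def p_resource_locked(p_access, n_active):
    """Rough illustration (not the paper's model): if each of the other
    n_active - 1 active transactions independently holds the lock with
    probability p_access, an arriving process finds the resource
    locked unless nobody holds it: 1 - (1 - p)^(n-1)."""
    return 1.0 - (1.0 - p_access) ** (n_active - 1)
```

Even this crude form shows the qualitative behavior the paper models: contention grows with both the access probability and the number of concurrently active transactions.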

Proceedings ArticleDOI
01 May 1985
TL;DR: This work describes the design and implementation of a resource sharing system for IBM Personal Computers, which generalizes the traditional file and device server approach, allowing applications of any kind to be offered as services on the network.
Abstract: The software and hardware available today for personal computers provides a broad range of support for personal productivity, business applications, research, programming, and other activities. If personal computers are connected in a local area network, they can form a system whose total resources are very great compared to those of each computer. With appropriate system mechanisms, users can share these resources. We describe the design and implementation of a resource sharing system for IBM Personal Computers. The system generalizes the traditional file and device server approach, allowing applications of any kind to be offered as services on the network. The system supports services by maintaining service definitions, queuing requests by priority, creating server processes, loading service programs, and combining services into larger distributed applications. A user may start several independent activities that proceed concurrently. Each activity can span several machines. The system is built upon an existing operating system, PC-DOS, extending the view it provides to users. Multitasking and enhanced memory management are provided. Interprocess communication is supported by a high-level service request protocol. The discussion emphasizes the problems encountered in building the system and the solutions devised.
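The service-management pattern described (registering service programs, queuing requests by priority, dispatching to servers) can be sketched abstractly; all names here are hypothetical, not the system's actual interfaces:

```python
import heapq
import itertools

class ServiceDispatcher:
    """Sketch of priority-queued service requests: service programs are
    registered by name, requests are queued by priority (lower number =
    higher priority), and dispatch runs the best-ranked request."""

    def __init__(self):
        self.services = {}
        self.queue = []
        self.order = itertools.count()  # FIFO tie-break within a priority

    def register(self, name, program):
        self.services[name] = program

    def request(self, priority, name, *args):
        heapq.heappush(self.queue, (priority, next(self.order), name, args))

    def dispatch(self):
        priority, _, name, args = heapq.heappop(self.queue)
        return self.services[name](*args)  # run the service program
```

Because any callable can be registered, the dispatcher generalizes beyond file and device service, mirroring the paper's point that applications of any kind can be offered as services.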

13 Mar 1985
TL;DR: This paper demonstrates how a simulation model can be developed to estimate the average time to run a computer simulation with a large number of users in a computer equipment environment where some of the equipment is dedicated to individual users during their session and other equipment is shared by the users.
Abstract: This paper demonstrates how a simulation model can be developed to estimate the average time to run a computer simulation with a large number of users in a computer equipment environment where some of the equipment is dedicated to individual users during their session and other equipment is shared by the users. We explain the rationale for building such a simulation, develop the analytical basis upon which the simulation is built, and present the details of the simulation. In essence, we advocate the use of one simulation model to ensure the successful use of another simulation model. In this paper we refer to the simulation used by the participants in the lab as the “object simulation” and the simulation used to estimate the average session time as the “estimator simulation.”

Book ChapterDOI
01 Jan 1985
TL;DR: Inter-process communication is essential whenever applications, such as multi-user database access, require multiple processes to interact asynchronously, in real time, with user terminals, while maintaining a consistent view of a shared resource.
Abstract: Whenever there are many distinct, inter-dependent processes running in a computer system, there is a requirement for a means of communication between them. Inter-process communication allows complex tasks to be decomposed into a number of separate functional elements, and unrelated processes to be made aware of each other's resource requirements or the occurrence of external events which may affect their activities. Inter-process communication is essential whenever applications, such as multi-user database access, require multiple processes to interact asynchronously, in real time, with user terminals, while maintaining a consistent view of a shared resource.
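A minimal sketch of the pattern described, with threads standing in for processes, a queue as the communication channel, and a lock maintaining a consistent view of the shared resource (names hypothetical):

```python
import threading
import queue

def ipc_demo(n_messages=5):
    """Asynchronous producer/consumer communication: the producer
    sends events over a channel; the consumer updates a shared
    counter under a lock so its view stays consistent."""
    channel = queue.Queue()
    shared = {"count": 0}
    lock = threading.Lock()

    def producer():
        for i in range(n_messages):
            channel.put(i)    # notify the consumer of an event
        channel.put(None)     # sentinel: no more events

    def consumer():
        while True:
            msg = channel.get()
            if msg is None:
                return
            with lock:        # consistent view of the shared resource
                shared["count"] += 1

    t1 = threading.Thread(target=producer)
    t2 = threading.Thread(target=consumer)
    t1.start(); t2.start(); t1.join(); t2.join()
    return shared["count"]
```

The queue decouples the two parties in time (asynchrony), while the lock serializes updates to the shared state, the two ingredients the chapter identifies.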


Journal ArticleDOI
01 Nov 1985
TL;DR: An analytic model of a file server is presented; the file server was an experimental system designed to provide an environment for storage and retrieval of bulk files, and the results obtained from the model were in close agreement with simulation data.
Abstract: An analytic model of a file server is presented. The file server was an experimental system designed to provide an environment for storage and retrieval of bulk files. The file server was envisaged to be accessed by single-user workstations, equipped with limited secondary storage, via a local area network. The analytic model is a hierarchical model involving an open/closed queueing network of the BCMP type and an open queueing network with blocking. These two models were combined together through the means of an iterative scheme. The results obtained from the analytic model were in close agreement with simulation data.

01 Jan 1985
TL;DR: An object-based reliability model is proposed that captures the properties of the environment of interest and provides for reliable resource sharing in spite of failure; the model is based on an abstract object called the Recoverable Module, which models a resource that is tolerant of failure and able to handle exceptions.
Abstract: One of the main advantages of distributed computer systems is the ability to share resources among heterogeneous autonomous single systems. The distributed system can be viewed as a number of resources provided by the collection of single systems. A number of resources may be combined to construct a distributed resource or may cooperate to implement a distributed application or subsystem. A number of failures, such as a node crash, link failure, and network partition, in addition to concurrent access of the resource, may cause system state inconsistency and resource unavailability, rendering the distributed system unreliable. In this dissertation we design efficient software support for reliable and modular resource sharing in this distributed environment. An object-based reliability model is proposed that captures the properties of the environment of interest and provides for reliable resource sharing in spite of failure. The model is based on an abstract object called the Recoverable Module, which models a resource that is tolerant of failure and able to handle exceptions. The Recoverable Module forms the basic construct for building the reliable distributed system. In addition, the Recoverable Module forms the unit of recovery and controls synchronization. We believe that such an approach is superior, in terms of containing failure effects and achieving more efficient recovery, to the approaches proposed in transaction-based recovery models, where the entire distributed transaction or subtransaction is used as the unit of synchronization and recovery. One important feature of the model is that it allows for the specification of resource-dependent recovery and synchronization semantic knowledge. Such knowledge aids in achieving flexibility, a high degree of concurrency, and efficient recovery. A number of mechanisms are proposed to implement the reliability properties of the model.
These mechanisms include: (1) node recovery, to detect, isolate, and provide for the recovery of a crashed node; (2) module recovery, to restore the Module to a consistent state and localize the effects of failure; (3) a local Module synchronization mechanism, to express and enforce the Module's synchronization constraints and synchronization semantic knowledge; (4) an optimistic concurrency mechanism, to synchronize multiple transactions and increase the degree of concurrency; and (5) an exception handling mechanism that allows for the specification of the Module's recovery semantic knowledge and controls error propagation.
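Mechanism (4), optimistic concurrency, can be sketched as a version-validated commit (an illustrative reading, not the dissertation's design; the class and method names are hypothetical):

```python
class VersionConflict(Exception):
    """Raised when another transaction committed in the interim."""

class RecoverableModule:
    """Optimistic concurrency sketch: a transaction snapshots the
    module's version, computes privately, and commits only if no
    other transaction committed in between."""

    def __init__(self, state=0):
        self.state = state
        self.version = 0

    def begin(self):
        return self.version, self.state   # snapshot for the transaction

    def commit(self, snapshot_version, new_state):
        if snapshot_version != self.version:  # someone else committed
            raise VersionConflict("retry the transaction")
        self.state = new_state                # validated: apply update
        self.version += 1
```

Because transactions never block each other while computing, concurrency is high; conflicts surface only at commit, where the loser simply retries against the fresh state.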

01 Oct 1985
TL;DR: The Microcomputer-based Local Automated Model (MicroLAM) project will demonstrate the integration of a local collection management system with access to remote bibliographic data bases through an intelligent gateway processor.
Abstract: The Microcomputer-based Local Automated Model (MicroLAM) project will demonstrate the integration of a local collection management system with access to remote bibliographic data bases. Through an intelligent gateway processor, users of the system will be able to access a local catalog and the Defense Technical Information Center (DTIC) Technical Reports (TR) data base simultaneously. The system will consist of a commercial software package (for local collection management) integrated with a subset of the Lawrence Livermore National Laboratory (LLNL) Technology Information System (an intelligent gateway). The intelligent gateway permits sharing of bibliographic resources between the network of technical libraries and information centers within the Department of Defense (DoD) and the DoD technical information clearinghouse, DTIC. Contained in this functional description are a summary description, detailed characteristics, and the operating environment of the system, as well as the cost factors and development schedule of the LAM project. Keywords include: library automation, automated system design, cataloging, retrieval, intelligent gateway, shared cataloging, and resource sharing.

Patent
05 Jun 1985
TL;DR: An occupancy status checking device is inserted between a resource operation status checking device and a resource accessing device; when a resource access request from a user is approved, the device checks, by referring to an occupancy control table, whether the extent of the resource for which the request is made is occupied by another user.
Abstract: PURPOSE: To prevent access requests that do not follow the specified procedure, and to secure the right of exclusive occupancy obtained by users of a shared resource, by executing an access request only when the shared resource is not exclusively occupied, as determined by reference to an occupancy control table. CONSTITUTION: An occupancy status checking device 11 is inserted between a resource operation status checking device 8 and a resource accessing device 9. The device has the same function as an occupancy permitting device 5: when a resource access request from a user 1 is approved by the checking device 8, it checks, by referring to the occupancy control table 7, whether the extent of use of the resource 2 for which the access request is made is occupied by other users. If it is not occupied by other users 1, control passes to the resource accessing device 9 and the access is executed. If the extent of the resource 2 is already occupied by another user 1, an access denial (NG) signal is sent back to the user 1.
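The occupancy-table check can be sketched as follows (the data structure and OK/NG return codes are assumptions based on the abstract's signals, not the patent's actual embodiment):

```python
class OccupancyTable:
    """Sketch of the occupancy check: a table records which user
    exclusively occupies each extent of the shared resource; an
    access request is executed only if the extent is free or already
    held by the requester."""

    def __init__(self):
        self.table = {}  # extent -> occupying user

    def request_access(self, user, extent):
        holder = self.table.get(extent)
        if holder is not None and holder != user:
            return "NG"            # access denied: occupied by another
        self.table[extent] = user  # record (or confirm) occupancy
        return "OK"

    def release(self, user, extent):
        if self.table.get(extent) == user:
            del self.table[extent]
```

Placing the check between request approval and resource access is what enforces the occupancy right: an out-of-procedure request never reaches the resource.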

Journal ArticleDOI
TL;DR: A new flexible approach to process-control task scheduling that meets the above requirements is presented and proved to be particularly useful in memory management and re-entrant code management.