
Showing papers on "Shared resource" published in 1989


Journal ArticleDOI
Anna Hać
TL;DR: In this article, the authors present a distributed algorithm that considers the number of read and write accesses to files for every process type, the number of processes and their demands on system resources, the utilization of bottlenecks on all machines, and file sizes.
Abstract: The author presents a distributed algorithm that considers the number of read and write accesses to files for every process type, the number of processes and their demands on system resources, the utilization of bottlenecks on all machines, and file sizes. Performance improvement obtained with the algorithm is discussed and proved. A number of experiments executed in a distributed system in order to predict the impact on performance of various algorithm strategies are examined. The experiments show changes in system performance due to file and process placement, file replication, and file and process migration.
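The abstract names the inputs the algorithm weighs (read/write accesses per process type, process demands, bottleneck utilization, file sizes) but not its decision rule. As a rough, purely illustrative sketch, and not the paper's actual algorithm, one could score candidate machines for a file placement along these lines (all names and weights below are hypothetical):

```python
# Hypothetical placement heuristic using the inputs the abstract names; the
# weighting and structure are assumptions, not the paper's actual algorithm.
def placement_score(machine, file_size_kb, accesses):
    """Lower score = better candidate machine for hosting the file."""
    # accesses: (reads, writes) pairs for the process types on this machine
    local_reads = sum(r for r, _ in accesses)
    local_writes = sum(w for _, w in accesses)
    # Penalize a highly utilized bottleneck device and the remote traffic that
    # the non-local fraction of accesses to a file of this size would cause.
    remote_penalty = file_size_kb * (1.0 - machine["local_fraction"])
    return (100.0 * machine["bottleneck_utilization"]
            + remote_penalty
            - 0.5 * (local_reads + 2 * local_writes))

machines = [
    {"name": "m1", "bottleneck_utilization": 0.85, "local_fraction": 0.7},
    {"name": "m2", "bottleneck_utilization": 0.40, "local_fraction": 0.3},
]
best = min(machines, key=lambda m: placement_score(m, 200, [(30, 5)]))
print(best["name"])
```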

67 citations


Proceedings ArticleDOI
05 Jun 1989
TL;DR: The Bullet server is a file server that outperforms traditional file servers by more than a factor of three and uses the concept of an immutable file to improve performance, to enable caching, and to provide a clean semantic model to the user.
Abstract: The Bullet server is a file server that outperforms traditional file servers by more than a factor of three. It achieves high throughput and low delay by a software design radically different from that of file servers currently in use. Whereas files are normally stored as a sequence of disk blocks, each Bullet server file is stored contiguously, both on disk and in the server's random access memory cache. Furthermore, it uses the concept of an immutable file to improve performance, to enable caching, and to provide a clean semantic model to the user. The authors describe the design and implementation of the Bullet server in detail, present measurements of its performance, and compare this performance to that of the SUN file server running on the same hardware.
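The abstract's key idea, whole files that are immutable and stored contiguously, can be illustrated with a minimal in-memory sketch; the class and method names below are invented for illustration and are not the Bullet server's actual interface:

```python
# Minimal sketch of a whole-file, immutable-file interface in the spirit of the
# Bullet server's model (files are created in one shot and never modified);
# the class and method names are illustrative, not the server's actual API.
import hashlib

class ImmutableFileStore:
    def __init__(self):
        self._files = {}          # capability -> bytes (stands in for the cache)

    def create(self, data: bytes) -> str:
        """Store a complete file; its contents can never change afterwards."""
        cap = hashlib.sha256(data).hexdigest()   # capability / file identifier
        self._files.setdefault(cap, bytes(data))
        return cap

    def read(self, cap: str) -> bytes:
        """Return the whole file; safe to cache anywhere because it is immutable."""
        return self._files[cap]

store = ImmutableFileStore()
cap = store.create(b"new version of the document")
assert store.read(cap) == b"new version of the document"
```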

61 citations


Patent
27 Oct 1989
TL;DR: In this paper, the authors propose a system for providing a high speed digital communication path between the processor of a host computer and the processors of one or more remote computers, which allows a remote computer to efficiently share the resources of the larger host computer such as high speed magnetic disk drives and printers.
Abstract: A system for providing a high speed digital communication path between the processor of a host computer and the processors of one or more remote computers. The high speed digital communication path allows a remote computer to efficiently share the resources of the larger host computer such as high speed magnetic disk drives and printers. A host interface located at the host computer is connected to the internal host bus of the host computer. The host interface includes a host port and components which provide for the transfer of data from the host bus to the host port. The data bits presented at the host port are arranged in a parallel format. A communication cable conveys the parallel data bits from the host port to a remote port in a remote interface at a remote computer. The remote interface provides a random access memory and components for transferring data from the remote bus to the random access memory and vice-versa. Data bits presented at the remote port are also conveyed to the random access memory. The host interface and the remote interface include control structures to supervise and arbitrate accesses to the communication cable and to the random access memory.

47 citations


DOI
01 Jan 1989
TL;DR: A new methodology for resource sharing algorithms in distributed systems is presented, proposing Pareto-optimality as a definition of optimality and fairness for the flow control problem and proving that the resource allocations computed by the economy are Pareto-optimal.
Abstract: In this thesis, we present a new methodology for resource sharing algorithms in distributed systems. We propose that a distributed computing system should be composed of a decentralized community of microeconomic agents. We show that this approach decreases complexity and can substantially improve performance. We compare the performance, generality and complexity of our algorithms with non-economic algorithms. To validate the usefulness of our approach, we present economies that solve three distinct resource management problems encountered in large, distributed systems. The first economy performs CPU load balancing and demonstrates how our approach limits complexity and effectively allocates resources when compared to non-economic algorithms. We show that the economy achieves better performance than a representative non-economic algorithm. The load balancing economy spans a broad spectrum of possible load balancing strategies, making it possible to adapt the load balancing strategy to the relative power of CPU vs. communication. The second economy implements flow control in virtual circuit based computer networks. This economy implements a general model of VC throughput and delay goals that more accurately describes the goals of a diverse set of users. We propose Pareto-optimality as a definition of optimality and fairness for the flow control problem and prove that the resource allocations computed by the economy are Pareto-optimal. Finally, we present a set of distributed algorithms that rapidly compute a Pareto-optimal allocation of resources. The final economy manages replicated, distributed data in a distributed computer system. This economy substantially decreases mean transaction response time by adapting to the transactions' reference patterns. The economy reacts to localities in the data access pattern by dynamically assigning copies of data objects to nodes in the system. The number of copies of each object is adjusted based on the write frequency versus the read frequency for the object. Unlike previous work, the data management economy's algorithms are completely decentralized and have low computational overhead. Finally, this economy demonstrates how an economy can allocate logical resources in addition to physical resources.
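The thesis's economies are not specified in the abstract; as a loose illustration of the general idea, the sketch below prices CPU by load and places a job where price plus communication charge is lowest. The pricing rule and numbers are assumptions, not the load-balancing economy itself:

```python
# Highly simplified sketch of the economic idea: each node quotes a "price"
# for CPU that rises with its load, and a job is placed on the node with the
# lowest total cost (price plus a charge for moving the job).  The pricing
# rule and the numbers are assumptions for illustration only.
def cpu_price(queue_length):
    return 1.0 + queue_length          # busier node -> higher price

def placement_cost(node, job, origin):
    comm = 0.0 if node["name"] == origin else job["transfer_cost"]
    return cpu_price(node["queue"]) * job["cpu_demand"] + comm

nodes = [{"name": "a", "queue": 5}, {"name": "b", "queue": 1}]
job = {"cpu_demand": 2.0, "transfer_cost": 3.0}
chosen = min(nodes, key=lambda n: placement_cost(n, job, origin="a"))
print(chosen["name"])   # "b": its lower price outweighs the transfer charge
```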

47 citations


Proceedings ArticleDOI
10 Oct 1989
TL;DR: The authors present efficient algorithms for a file server that do not require the logging of messages on stable storage and that exploit server semantics to reduce the additional synchronization needed to ensure locality.
Abstract: A checkpointing scheme can be used to ensure forward progress of a computation (program) even when failures occur. In a distributed system, many autonomous programs can execute concurrently and obtain services from a set of shared servers. In such a system, it is desirable to restrict a checkpoint or rollback operation to a single program to localize the effects of failures, even when processes of different programs communicate with servers. This can be achieved by a scheme based on message logging and consistent checkpoints when the system is deterministic. When the system (communication network or programs) is nondeterministic, the semantics of the server functions should be exploited to reduce the additional synchronization that needs to be introduced to ensure locality. The authors illustrate this by presenting efficient algorithms for a file server that do not require the logging of messages on stable storage.
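The file-server algorithms themselves are not given in the abstract; the sketch below is only one conceivable way to lean on server semantics instead of message logging, in which the server versions each file so a rolled-back program can check whether its earlier reads are still valid. It is an assumption for illustration, not the authors' scheme:

```python
# Loose illustration (not the authors' algorithm) of using file-server
# semantics instead of message logging: the server versions each file, a
# program records the versions it depends on at checkpoint time, and after a
# rollback it can tell whether re-executing its reads is still consistent.
class FileServer:
    def __init__(self):
        self.data, self.version = {}, {}

    def write(self, name, contents):
        self.data[name] = contents
        self.version[name] = self.version.get(name, 0) + 1

    def read(self, name):
        return self.data[name], self.version[name]

server = FileServer()
server.write("log", "v1")

checkpoint = {"pc": 42, "deps": {"log": server.read("log")[1]}}  # versions seen

server.write("log", "v2")                  # another program updates the file

# After a failure, roll back to `checkpoint` and check its read dependencies.
stale = {f for f, v in checkpoint["deps"].items() if server.version[f] != v}
print("must coordinate with server for:", stale)
```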

20 citations


Proceedings ArticleDOI
05 Jun 1989
TL;DR: The system that is described attempts to satisfy the seemingly incompatible needs of the two application domains by utilizing a high-capacity, low-latency interconnection network called the HPC, supported by the VORX multiprocessor computing environment.
Abstract: A system is described that combines the major strengths of local area networks and multicomputers, namely resource sharing over geographically significant distances and small communication latencies. The result is a new architecture called a local area multicomputer (LAM). The system that is described attempts to satisfy the seemingly incompatible needs of the two application domains by utilizing a high-capacity, low-latency interconnection network called the HPC, supported by the VORX multiprocessor computing environment. For systems with hundreds of nodes, the network capacity is in the Gb/s range, and the latency for small messages is about 10 μs. The HPC can connect resources located several kilometers apart. The VORX environment provides the necessary program development tools and resource management functions needed to experiment with distributed applications. Two applications are outlined to show the diverse uses for a LAM system and the status of an 80-node system that includes 70 adjunct processors and ten SUN 3 workstations/fileservers.

19 citations


Patent
Alan J. Schiffleger
09 Feb 1989
TL;DR: In this article, a shared register resource is distributed among the processors, with each processor including a portion of the total system resource; the shared register resource can be accessed by any one of the processors through internal communication paths.
Abstract: A system for interprocessor communication including a shared register resource accessible by any one of the processors through internal communication paths. The shared register resource is distributed among the processors with each processor including a portion of the total system resource. Each processor includes an access circuit for receiving instructions from the CPU and generating control bytes to be distributed to the shared register resource circuits in each of the processors, which use the control byte to control shared resource access. Each shared register resource circuit is capable of controlling the I/O channels associated with its respective processor. A local access circuit for each CPU is capable of obtaining access to and controlling any of the I/O channels in the system via the shared register resource circuits.

13 citations


Proceedings ArticleDOI
09 Nov 1989
TL;DR: Decomposing a large network into a system of smaller subnets is explored, and advantages include distributed processing, reduced number of interconnections, and resource sharing.
Abstract: Decomposing a large network into a system of smaller subnets is explored. Each subnet is an independent network that performs some task. These subnets will be linked together in such a way as to collectively solve some larger task. Only the signals from the output units of a subnet are allowed to connect to the inputs of other subnets. That is, no connections will be made from within one subnet to another. Training is applied to each subnet independently. Advantages include distributed processing, reduced number of interconnections, and resource sharing. The consequences of pursuing such a system are investigated.
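The connection rule in the abstract (only a subnet's output units may feed another subnet's inputs) is easy to picture with a toy sketch; the single-layer subnets and random weights below are placeholders, not a trained system:

```python
# Toy sketch of the decomposition described above: two independently defined
# subnets whose only coupling is that the first subnet's *outputs* feed the
# second subnet's inputs.  Weights are random placeholders, not trained values.
import numpy as np

rng = np.random.default_rng(0)

def subnet(n_in, n_out):
    w = rng.standard_normal((n_out, n_in))
    return lambda x: np.tanh(w @ x)          # one-layer stand-in for a subnet

subnet_a = subnet(n_in=4, n_out=3)           # trained on its own subtask
subnet_b = subnet(n_in=3, n_out=2)           # sees only subnet_a's outputs

x = rng.standard_normal(4)
y = subnet_b(subnet_a(x))                    # no hidden-to-hidden connections
print(y.shape)                               # (2,)
```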

5 citations


Journal ArticleDOI
TL;DR: A heuristic multiwindow protocol that utilizes the collision-detection capability of multiple contention buses is proposed and analyzed and can identify the t smallest variates out of N distributed random variates in an average of approximately (0.8 log₂ t + 0.2 log₂ N + 1.2) contention steps.
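The quoted estimate is easy to evaluate for concrete sizes; for instance, with t = 4 winners out of N = 1024 contenders (arbitrary example values):

```python
# Evaluating the abstract's average-case estimate for example values of t and N.
from math import log2

def contention_steps(t, N):
    return 0.8 * log2(t) + 0.2 * log2(N) + 1.2

print(round(contention_steps(4, 1024), 2))   # 0.8*2 + 0.2*10 + 1.2 ≈ 4.8 steps
```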

5 citations




Journal ArticleDOI
TL;DR: A distributed architecture to realize a maximum flow algorithm using token propagations is described; it is applicable to any general loop-free network configuration in which the requesting processors and free resources can be partitioned into two disjoint subsets.
Abstract: Circuit-switched interconnection networks for resource sharing in multiprocessors are studied. Resource scheduling in systems with such an interconnection network entails the efficient search for a mapping from requesting processors to free resources such that circuit blockages in the network are minimized and resources are maximally used. The optimal mapping is obtained by transforming the scheduling problems into various network flow problems to which existing algorithms can be applied. A distributed architecture to realize a maximum flow algorithm using token propagations is described. The method is applicable to any general loop-free network configuration in which the requesting processors and free resources can be partitioned into two disjoint subsets.
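The abstract's formulation, requesting processors on one side, free resources on the other, and an assignment that avoids blocked circuits, is essentially a bipartite matching/flow problem. The sketch below uses a generic augmenting-path matching to find such an assignment; it is a textbook stand-in, not the paper's token-propagation architecture:

```python
# Sketch of the flow/matching view described above: an edge exists where the
# circuit-switched path between a processor and a free resource is not blocked,
# and a maximum matching gives a conflict-free assignment.  This is a generic
# augmenting-path matching, not the paper's token-based algorithm.
def max_matching(edges, processors):
    match = {}                                   # resource -> processor

    def augment(p, seen):
        for r in edges.get(p, []):
            if r in seen:
                continue
            seen.add(r)
            if r not in match or augment(match[r], seen):
                match[r] = p
                return True
        return False

    return sum(augment(p, set()) for p in processors), match

edges = {"p1": ["r1", "r2"], "p2": ["r1"], "p3": ["r2"]}
size, assignment = max_matching(edges, ["p1", "p2", "p3"])
print(size, assignment)          # 2 {'r1': 'p2', 'r2': 'p1'}
```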

Patent
17 Aug 1989
TL;DR: In this paper, the authors propose to prevent overlapping access rights to a common resource by inhibiting the use of the common bus by other bus masters during the period in which one bus master is acquiring the access right to the shared resource.
Abstract: PURPOSE: To prevent the overlap of access rights to a common resource by inhibiting the use of the common bus by other bus masters during the period in which one bus master is acquiring the access right to the shared resource. CONSTITUTION: Among the first bus master 21 to the n-th bus master 2n, one bus master reads, from a storage area, access-right information indicating that a shared memory 3 can be accessed. Thereafter, use of the common bus 1 by the other bus masters is inhibited until this bus master writes, into the storage area, information indicating that the shared memory 3 can no longer be accessed. This reliably prevents plural bus masters from simultaneously acquiring the access right to a shared resource such as the shared memory.
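A rough software analogue of the mechanism (not the patent's circuitry): the bus master holds the common bus across its read and subsequent write of the access-right word, so no other master can acquire the right in between. Names and structure below are illustrative:

```python
# Rough software analogue of the idea above: a bus master holds the common bus
# while it reads and then rewrites the access-right word, so no other master
# can slip in between the read and the write.
import threading

common_bus = threading.Lock()          # models exclusive use of the common bus
access_right_word = {"holder": None}   # models the word in shared memory

def acquire_shared_memory(master_id):
    with common_bus:                   # other masters blocked from the bus here
        if access_right_word["holder"] is None:
            access_right_word["holder"] = master_id   # mark "not accessible"
            return True
        return False

def release_shared_memory(master_id):
    with common_bus:
        if access_right_word["holder"] == master_id:
            access_right_word["holder"] = None         # mark "accessible"

print(acquire_shared_memory("master-1"))   # True
print(acquire_shared_memory("master-2"))   # False: right already taken
release_shared_memory("master-1")
```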

Journal ArticleDOI
Anna Hać
TL;DR: A model of a distributed file system is presented that represents multiple types of processes, files, and shared resources, and that uses only measurable parameters.

Patent
27 Sep 1989
TL;DR: In this paper, a deletion-responding means receives the deletion response for an intra-network shared resource from the computer-system group at the level just below the computer system and checks the enquiry path of the shared resource in the local directory of the computer system.
Abstract: PURPOSE: To guarantee that information related to a shared resource is the same in the local directories of all computer systems that enquire about it, by automatically registering the intra-network shared resource. CONSTITUTION: A deletion-responding means 19 receives the deletion response for the intra-network shared resource from the computer-system group at the level just below the computer system and checks the enquiry path of the intra-network shared resource in the local directory 7 of the computer system. Deletion of the enquiry path is completed when change responses have been gathered from all enquiry paths, and when the intra-network shared resource was registered by enquiry, the deletion response is reported to the computer system just above, using a communicating means 5 to the higher level. In this way it is guaranteed that the information related to the shared resource is the same in the local directories of all computer systems that enquire about it.


Book ChapterDOI
Kurt Geihs, Hermann Schmutz
18 Apr 1989
TL;DR: This work describes the DACNOS design goals and the resulting architecture, and comments on the portability of the system and the experience with newly written applications.
Abstract: DACNOS (Distributed Academic Computing Network Operating System) is a prototype network operating system that facilitates cooperation in heterogeneous multi-vendor computing environments. It is an add-on software system that neither replaces nor interferes with the existing host operating system, but enables resource sharing with integrated access control and resource management across the network of heterogeneous computers. We describe the DACNOS design goals and the resulting architecture. Our main focus is on our experiences with the development and use of the prototype. In particular we comment on the portability of the system and the experience with newly written applications. It has been shown that DACNOS is a powerful and convenient application-enabling platform for distributed computing in heterogeneous environments.

Proceedings ArticleDOI
11 Apr 1989
TL;DR: The objective of this study is to develop analytical models and computational algorithms for the determination of the state subset with better performance for slotted time systems, with call traffic modeled as stationary independent arrival processes and with service time modeled as a general discrete time distribution.
Abstract: The authors examine resource allocation aspects in an ISDN (integrated services digital network) or in a computer network where more than one packet class is distinguished, in cases of random contention for M identical resources by two or more statistically different packet types. Particular attention is focused on sharing bandwidth among voice and data virtual circuits, and on sharing a buffer in front of a multiserver system. Generally speaking, the problem is to determine the optimal policy for accepting or rejecting a call when the type of the requesting packet is known. The optimal choice of buffer size and bandwidth is involved in the design of the service facility, as well as the rules for sharing resources among users. The objective of this study is to develop analytical models and computational algorithms for the determination of the state subset with better performance for slotted-time systems, with call traffic modeled as stationary independent arrival processes and with service time modeled as a general discrete-time distribution. The parameters optimized are throughput, utilization, and blocking of the system.
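The abstract asks for an accept/reject policy for calls of two classes contending for M identical resources. One classic policy of this kind, shown only as an illustration and not as the paper's computed optimum, is a reservation threshold that admits the second class only while enough resources remain free:

```python
# One classic policy of the kind the abstract studies (an illustration, not the
# paper's computed optimal policy): with M identical resources, accept class-2
# (e.g. data) calls only while at least `reserve` resources remain free, so
# class-1 (e.g. voice) keeps a protected share.
M = 24          # identical resources (circuits/servers); illustrative value
reserve = 4     # resources reserved for class 1; illustrative value

def admit(call_class, busy):
    free = M - busy
    if call_class == 1:
        return free > 0
    return free > reserve            # class 2 admitted only above the threshold

print(admit(2, busy=19))   # True: 5 free, above the reserve of 4
print(admit(2, busy=21))   # False: only 3 free, kept for class-1 calls
print(admit(1, busy=23))   # True: class 1 may use the last resource
```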


Book ChapterDOI
Carl A. Sunshine
01 Jan 1989
TL;DR: Computer networking as the authors know it today may be said to have gotten its start with the Arpanet development in the late 1960s and early 1970s, with its reliance on the then novel technique of packet switching to efficiently share communication resources among “bursty” users.
Abstract: Computer networking as we know it today may be said to have gotten its start with the Arpanet development in the late 1960s and early 1970s. Prior to that time there were computer vendor "networks" designed primarily to connect terminals and remote job entry stations to a mainframe. But the notion of networking between computers viewing each other as equal peers to achieve "resource sharing" was fundamental to the Arpanet design [1]. The other strong emphasis of the Arpanet work was its reliance on the then novel technique of packet switching to efficiently share communication resources among "bursty" users, instead of the more traditional message or circuit switching.

01 Sep 1989
TL;DR: The proposed enhanced receiver-initiated dynamic algorithm (ERIDA) for resource sharing fulfills all of these criteria and provides a method for balancing the workload among hosts, resulting in improved response time and throughput for the total system.
Abstract: An allocation scheme gives jobs to idle computers. An ideal resource-sharing algorithm should be dynamic, decentralized, and heterogeneous. The proposed enhanced receiver-initiated dynamic algorithm (ERIDA) for resource sharing fulfills all of these criteria. It provides a method for balancing the workload among hosts, resulting in improved response time and throughput for the total system, and it adjusts dynamically to the traffic load of each station.
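The abstract does not spell out ERIDA's messages; the sketch below only illustrates the receiver-initiated idea in general, a lightly loaded host pulling work from a heavily loaded one, with a threshold and selection rule that are assumptions rather than the algorithm's actual parameters:

```python
# Simplified sketch of receiver-initiated load sharing in the spirit of ERIDA
# (the polling rule and threshold are assumptions, not the algorithm's actual
# parameters): a host whose queue drops below a threshold asks other hosts for
# work and pulls a job from the most heavily loaded one that responds.
THRESHOLD = 2

def share_load(hosts):
    for receiver in hosts:
        if len(receiver["queue"]) >= THRESHOLD:
            continue                                   # busy enough, no polling
        donors = [h for h in hosts if len(h["queue"]) > THRESHOLD]
        if donors:
            donor = max(donors, key=lambda h: len(h["queue"]))
            receiver["queue"].append(donor["queue"].pop())   # transfer one job

hosts = [{"name": "h1", "queue": ["j1", "j2", "j3", "j4"]},
         {"name": "h2", "queue": []}]
share_load(hosts)
print([(h["name"], len(h["queue"])) for h in hosts])   # [('h1', 3), ('h2', 1)]
```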

DOI
01 Jan 1989
TL;DR: This paper analyzes recent research in the field of distributed file systems and concludes that a low degree of replication, a liberal use of client and server caching, and optimistic behavior in the face of network failure are all necessary to ensure high availability.
Abstract: This paper analyzes recent research in the field of distributed file systems, with a particular emphasis on the problem of high availability. Several of the techniques involved in building such a system are discussed individually: naming, replication, multiple versions, caching, stashing, and logging. These techniques range from extensions of ideas used in centralized file systems, through new notions already in use, to radical ideas that have not yet been implemented. A number of working and proposed systems are described in conjunction with the analysis of each technique. The paper concludes that a low degree of replication, a liberal use of client and server caching, and optimistic behavior in the face of network failure are all necessary to ensure high availability.

12 Oct 1989
TL;DR: In this article, the authors analyze real-time data processing systems that use a host mainframe and a number of engineering workstations interconnected by a local area network, and generate general criteria for software residency decisions for a diverse yet coupled group of users.
Abstract: In recent years, advancements made in computer systems have prompted a move from centralized computing based on timesharing a large mainframe computer to distributed computing based on a connected set of engineering workstations. A major factor in this advancement is the increased performance and lower cost of engineering workstations. The shift from centralized to distributed computing has led to challenges associated with the residency of application programs within the system. In a combined system of multiple engineering workstations attached to a mainframe host, the question arises as to how a system designer should assign applications between the larger mainframe host and the smaller, yet powerful, workstation. The concepts related to real-time data processing are analyzed, and systems are presented which use a host mainframe and a number of engineering workstations interconnected by a local area network. In most cases, distributed systems can be classified as having a single function or multiple functions and as executing programs in real time or non-real time. In a system of multiple computers, the degree of autonomy of the computers is important; a system with one master control computer generally differs in reliability, performance, and complexity from a system in which all computers share the control. This research is concerned with generating general criteria for software residency decisions (host or workstation) for a diverse yet coupled group of users (the clustered workstations) which may need the use of a shared resource (the mainframe) to perform their functions.

Proceedings ArticleDOI
H. Schmutz, G. Muller
08 May 1989
TL;DR: The authors introduce the components which are necessary to extend a communication subsystem to a comprehensive support system for distributed processing in a heterogeneous multivendor environment and give particular attention to DACNOS, a comprehensive architecture and implementation of a support environment for distributed applications in a heterogeneous environment.
Abstract: The authors introduce the components which are necessary to extend a communication subsystem to a comprehensive support system for distributed processing in a heterogeneous multivendor environment. Major components are services for distributed invocation, for directories, and for security. Known solutions are discussed, and some open problems are identified. Particular attention is given to the distributed academic computing network operating system (DACNOS), a comprehensive architecture and implementation of a support environment for distributed applications in a heterogeneous environment. The main goal of DACNOS was the development of a systematic approach to resource sharing and distributed processing in a heterogeneous environment. Users can access remote files, databases, programs, and other resources from within their local environment. Without learning alien interfaces, users can share information and resources with the users of different operating systems. DACNOS is a running prototype supporting real applications.