
Showing papers on "Server published in 1986"


Proceedings Article
01 Jan 1986
TL;DR: A novel view of the structuring of distributed systems is presented, with examples of its use in an object-oriented context; the proxy-based view provides a powerful structuring framework for distributed systems that can be implemented cheaply without sacrificing much flexibility.
Abstract: We present a novel view of the structuring of distributed systems, and a few examples of its utilization in an object-oriented context. In a distributed system, the structure of a service or subsystem may be complex, being implemented as a set of communicating server objects; however, this complexity of structure should not be apparent to the client. In our proposal, a client must first acquire a local object, called a proxy, in order to use such a service. The proxy represents the whole set of servers. The client directs all its communication to the proxy. The proxy, and all the objects it represents, collectively form one distributed object, which is not decomposable by the client. Any higher-level communication protocols are internal to this distributed object. Such a view provides a powerful structuring framework for distributed systems; it can be implemented cheaply without sacrificing much flexibility. It subsumes many previous proposals, but encourages better information hiding and encapsulation.

320 citations
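A minimal sketch of the proxy idea from this abstract, in Python. All names (PrintServer, PrintServiceProxy, acquire_proxy) and the round-robin forwarding rule are illustrative assumptions, not details from the paper; the point is only that the client sees one local object while the server group and its internal protocol stay hidden.

```python
# Sketch of the proxy structuring idea: the client holds only a local proxy,
# which hides a set of cooperating server objects behind one interface.
# All class and function names are invented for illustration.

class PrintServer:
    """One member of the server group implementing the service."""
    def __init__(self, name):
        self.name = name

    def handle(self, request):
        return f"{self.name} handled {request!r}"


class PrintServiceProxy:
    """Local object the client talks to; it represents the whole server group."""
    def __init__(self, servers):
        self._servers = list(servers)   # internal structure, hidden from the client
        self._next = 0

    def submit(self, request):
        # Any higher-level protocol (selection, retries, replication) stays
        # inside the distributed object formed by the proxy plus its servers.
        server = self._servers[self._next % len(self._servers)]
        self._next += 1
        return server.handle(request)


def acquire_proxy():
    """Stand-in for the step in which a client obtains its local proxy."""
    return PrintServiceProxy([PrintServer("srv-a"), PrintServer("srv-b")])


if __name__ == "__main__":
    proxy = acquire_proxy()
    print(proxy.submit("job-1"))
    print(proxy.submit("job-2"))
```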


Book
09 Jan 1986
TL;DR: LOCUS, a distributed version of Unix, makes a collection of machines appear as a single coherent system; the book describes its architecture, including extensions to support extensive interprocess communication and internetworking, and appears in the Computer Systems series edited by Herb Schwetman.
Abstract: Computer systems consisting of many machines will be the norm within a few years. However, making a collection of machines appear as a single, coherent system--in which the location of files, servers, programs, or users is invisible to users who do not wish to know--is a very difficult problem. LOCUS, a distributed version of the popular operating system Unix, provides an excellent solution. It makes a collection of computers, whether they are workstations or mainframes, as easy to use as a single computer by providing a set of supports for the underlying network that is virtually invisible to users and applications programs. This "network transparency" dramatically reduces the cost of developing and maintaining software, and considerably improves the user model of the system. It also permits a variety of system configurations, including diskless workstations, full duplex I/O to large mainframes, transparently shared peripherals, and incremental growth from one workstation to a large network including mainframes, with no changes required to applications software to take advantage of the altered configurations. In addition to transparent, distributed operation, LOCUS features include high performance and reliability, full Unix compatibility, support for heterogeneous machines and systems, automatic management of replicated file storage, and architectural extensions to support extensive interprocess communication and internetworking. Contents: The LOCUS Architecture. Distributed Operation and Transparency. The LOCUS Distributed Filesystem. Remote Tasking. Filesystem Recovery. Dynamic Reconfiguration of LOCUS. Heterogeneity. System Management. Appendixes: LOCUS Version Vector Mechanism. LOCUS Internal Network Messages. The LOCUS Distributed System Architecture is included in the Computer Systems series, edited by Herb Schwetman.

282 citations


Journal ArticleDOI
TL;DR: The problem of the ice cream vendor on the beach is extended to the siting of multiple entering servers into the midst of existing servers which are already in place at multiple sites on a network.
Abstract: The problem of the ice cream vendor on the beach is extended to the siting of multiple entering servers into the midst of existing servers which are already in place at multiple sites on a network. The network of population nodes, joining arcs, and eligible sites for servers provides the underlying matrix of the problem. The goal of the single firm which is siting the multiple entering servers is the capture of the maximum population from existing servers. This combinatorial problem can be described as a linear integer program for which solutions can easily be derived. Computational experience and insights derived therefrom are reported.

210 citations
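To make the objective concrete, here is a toy brute-force version of the maximum-capture siting problem described above: population nodes patronize their nearest server, and the entering firm picks sites to capture as much population as possible from the incumbents. The paper formulates this as a linear integer program; the exhaustive search and all data below are illustrative only.

```python
# Toy maximum-capture problem: choose r entering sites so that the population
# whose nearest server becomes an entrant's server is maximized.
from itertools import combinations

population = {"n1": 40, "n2": 25, "n3": 35}           # demand at each node
dist = {                                               # shortest-path distances (made up)
    "n1": {"e1": 2, "s1": 5, "s2": 9},
    "n2": {"e1": 6, "s1": 7, "s2": 4},
    "n3": {"e1": 8, "s1": 7, "s2": 2},
}
existing = ["e1"]            # incumbent server sites
candidates = ["s1", "s2"]    # eligible sites for the entering firm
r = 1                        # number of servers the entering firm will open


def captured(new_sites):
    """Population whose nearest server is one of the entrant's sites."""
    total = 0
    for node, pop in population.items():
        best_existing = min(dist[node][s] for s in existing)
        best_new = min(dist[node][s] for s in new_sites)
        if best_new < best_existing:      # ties are left with the incumbent here
            total += pop
    return total


best = max(combinations(candidates, r), key=captured)
print("best entering sites:", best, "capture:", captured(best))
```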


Journal ArticleDOI
TL;DR: This work models the system by a general multiserver queueing system where customers demand service from a random number of servers, and presents some analytic and simulation results of the performance of the system under several server allocation policies.
Abstract: A common digital transmission facility in a wide-band integrated service digital network (ISDN) provides shared access to a community of heterogeneous users. Traffic demands from these users vary in their arrival rate, their service time, and their bit rate. In order for this type of communication system to handle its traffic demands with high efficiency and flexibility, a close control of access to the shared bandwidth is required. We model the system by a general multiserver queueing system where customers demand service from a random number of servers. If no waiting is allowed, this queueing model is readily analyzed, and various server allocation strategies can be studied. If the various access requests are queued for service, then the system calls for efficient strategies for allocating servers to waiting customers. In this case, exact analysis of the underlying queueing model becomes quite difficult. For this case, we present some analytic and simulation results of the performance of the system under several server allocation policies.

69 citations
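A rough simulation of the no-waiting case described in this abstract: each arrival asks for a random number of channels from a shared pool and is blocked if that many are not free. The parameter values and the simple "accept if enough free servers" rule are assumptions for illustration, not the paper's specific allocation strategies.

```python
# Estimate the blocking probability when each arrival demands several channels
# at once from a shared pool and blocked arrivals are lost.
import heapq
import random

random.seed(1)
C = 24                 # total channels in the shared facility
ARRIVAL_RATE = 3.0     # arrivals per unit time
MEAN_HOLDING = 1.0     # mean holding time
DEMANDS = [1, 2, 6]    # possible bandwidth demands, in channels

busy = 0
departures = []        # heap of (finish_time, channels_held)
t = 0.0
blocked = served = 0

for _ in range(200_000):
    t += random.expovariate(ARRIVAL_RATE)
    while departures and departures[0][0] <= t:      # release finished calls
        _, ch = heapq.heappop(departures)
        busy -= ch
    need = random.choice(DEMANDS)
    if busy + need <= C:
        busy += need
        heapq.heappush(departures, (t + random.expovariate(1.0 / MEAN_HOLDING), need))
        served += 1
    else:
        blocked += 1

print("blocking probability ≈", blocked / (blocked + served))
```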


01 Dec 1986
TL;DR: IEEE copyright notice: personal use of the material is permitted, but reprinting, republishing, redistribution to servers or lists, and reuse of copyrighted components require permission from IEEE.
Abstract: ©1986 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/ republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.

68 citations


Journal ArticleDOI
TL;DR: The 2-server districting heuristic is further extended to treat the general case of m servers, and combined with the location algorithm for a single server it forms a general location-allocation heuristic for n nodes and m servers.

53 citations


Journal ArticleDOI
TL;DR: A multi-priority, nonpreemptive, multi-server queueing system with Poisson arrivals and negative exponential service times is analyzed; the analysis is extended to systems in which any subset of priority levels may overflow to some other system rather than join infinite capacity queues.
Abstract: We consider a multi-priority, nonpreemptive, multi-server queueing system, with Poisson arrivals and negative exponential service times. In order to save available servers for higher priority customers, the system deliberately queues arriving lower priority customers whenever the number of busy servers equals or exceeds a given priority-dependent cutoff number. A queued, priority i customer enters service the instant there are fewer than the respective cutoff number of servers busy and all higher priority queues are empty. We derive several performance measures for the waiting time of the ith priority class: its mean, its second moment, and its distribution in transforms. We also extend the analysis to systems in which any subset of priority levels may overflow to some other system rather than join infinite capacity queues.

48 citations
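A small simulation sketch of the cutoff rule described above: an arriving or queued priority-i customer may seize a server only if fewer than its cutoff number of servers are busy and every higher-priority queue is empty. The rates and cutoff values are made up; the paper derives the waiting-time moments analytically rather than by simulation.

```python
# Discrete-event simulation of a nonpreemptive multi-server queue with
# priority-dependent server cutoffs (priority 0 is highest).
import heapq
import random
from collections import deque

random.seed(0)
S = 4                          # number of servers
cutoff = {0: 4, 1: 3, 2: 2}    # lower classes leave some servers free for higher ones
lam = {0: 0.5, 1: 0.5, 2: 0.5} # Poisson arrival rate per class
mu = 1.0                       # exponential service rate

queues = {i: deque() for i in cutoff}
events = []                    # heap of (time, kind, priority)
for i, rate in lam.items():
    heapq.heappush(events, (random.expovariate(rate), "arrival", i))

busy, t_end = 0, 50_000.0
waits = {i: [] for i in cutoff}

def try_start(now):
    """Start any queued customer allowed by the cutoff rule."""
    global busy
    started = True
    while started:
        started = False
        for i in sorted(queues):                      # scan from highest priority
            higher_empty = all(not queues[j] for j in queues if j < i)
            if queues[i] and busy < cutoff[i] and higher_empty:
                arrived = queues[i].popleft()
                waits[i].append(now - arrived)
                busy += 1
                heapq.heappush(events, (now + random.expovariate(mu), "departure", i))
                started = True
                break

while events:
    now, kind, i = heapq.heappop(events)
    if now > t_end:
        break
    if kind == "arrival":
        queues[i].append(now)                          # queued; cutoff checked in try_start
        heapq.heappush(events, (now + random.expovariate(lam[i]), "arrival", i))
    else:
        busy -= 1
    try_start(now)

for i in sorted(waits):
    if waits[i]:
        print(f"class {i}: mean wait ≈ {sum(waits[i]) / len(waits[i]):.3f}")
```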


Journal ArticleDOI
01 Aug 1986
TL;DR: This paper describes a scheme based on non-discretionary control which allows interconnecting organizations to combine gateway, network, and system-level mechanisms to enforce cross-boundary control over invocation and information flow, while minimizing interference with internal operations.
Abstract: When two or more distinct organizations interconnect their internal computer networks they form an Inter-Organization Network (ION). IONs support the exchange of CAD/CAM data between manufacturers and subcontractors, software distribution from vendors to users, customer input to suppliers' order-entry systems, and the shared use of expensive computational resources by research laboratories, as examples. This paper analyzes the technical implications of interconnecting networks across organization boundaries. After analyzing the organization context in which IONs are used, we demonstrate that such interconnections are not satisfied by traditional network design criteria of connectivity and transparency. To the contrary, a primary high-level requirement is access control, and participating organizations must be able to limit connectivity and make network boundaries visible. We describe a scheme based on non-discretionary control which allows interconnecting organizations to combine gateway, network, and system-level mechanisms to enforce cross-boundary control over invocation and information flow, while minimizing interference with internal operations. Access control requirements such as these impose new requirements on the underlying interconnection protocols. We describe alternative interconnection protocols that support loose coupling across administrative boundaries and that accommodate the necessary control mechanisms. Message-based gateways that support non-real-time invocation of services (e.g., file and print servers, financial transactions, VLSI design tools, etc.) are a promising basis for such loose couplings.

48 citations
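An illustrative sketch of a message-based gateway enforcing non-discretionary, organization-level control over which cross-boundary invocations are allowed. The policy table, message fields, and service names are invented for the example; the paper describes requirements and protocol alternatives, not this API.

```python
# Message-based ION gateway sketch: the organization-wide policy table decides
# which (source organization, service) invocations may cross the boundary, and
# individual users cannot override it (non-discretionary control).
from dataclasses import dataclass

ALLOWED = {
    ("supplier-co", "order-entry"),
    ("research-lab", "print-server"),
}

@dataclass
class Message:
    source_org: str
    service: str
    body: bytes

def gateway_forward(msg: Message) -> str:
    """Forward a queued, non-real-time invocation only if policy permits it."""
    if (msg.source_org, msg.service) not in ALLOWED:
        return f"rejected: {msg.source_org} may not invoke {msg.service}"
    # A real gateway would now queue the message for the internal service;
    # here we only report the decision.
    return f"forwarded {len(msg.body)} bytes to {msg.service}"

print(gateway_forward(Message("supplier-co", "order-entry", b"PO#123")))
print(gateway_forward(Message("supplier-co", "vlsi-tools", b"job")))
```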


Journal ArticleDOI
01 Aug 1986
TL;DR: The Mach operating system supports secure local communication within one node of a distributed system by providing a protected communication abstraction that can be extended over a network by Network Server tasks.
Abstract: The Mach operating system supports secure local communication within one node of a distributed system by providing a protected communication abstraction that can be extended over a network by Network Server tasks. The network servers effectively act as local representatives for remote tasks by implementing an abstraction of Network Ports. To extend the security of the port-based communication abstraction into the network environment, the network servers must protect both the messages sent over the network to network ports and the access rights to network ports. This paper describes in detail the protocols used by the network servers to support security.

47 citations


Journal ArticleDOI
TL;DR: In this article, a queueing system with two or more servers, each with its own queue with infinite capacity, is considered and customers arrive according to some stochastic process e.g., a Poisson process and immediately up.
Abstract: Consider a queueing system with two or more servers, each with its own queue with infinite capacity. Customers arrive according to some stochastic process e.g., a Poisson process and immediately up...

41 citations


Journal ArticleDOI
TL;DR: A two-layer learning system has been implemented for MINDS (Multiple Intelligent Node Document Servers), a distributed system of knowledge-based query engines for efficiently retrieving multimedia documents in an office environment of distributed workstations.
Abstract: MINDS (Multiple Intelligent Node Document Servers) is a distributed system of knowledge-based query engines for efficiently retrieving multimedia documents in an office environment of distributed workstations. By learning document distribution patterns, as well as user interests and preferences during system usage, it customizes document retrievals for each user. A two-layer learning system has been implemented for MINDS. The knowledge base used by the query engine is learned at the lower level with the help of heuristics for assigning credit and recommending adjustments; these heuristics are incrementally refined at the upper level. © 1986 John Wiley & Sons, Inc.

Journal ArticleDOI
TL;DR: The paper gives a notation for the pattern of rendezvous, a framework for translating a software/hardware system structure into an active-server queueing network model, and an implicit decomposition algorithm for solving for the system performance.

Journal ArticleDOI
TL;DR: Approximation expressions are derived for the server utilization factor of each server in a heterogeneous-server G/G/n queueing loss system with ordered entry that faces a stationary counting process.
Abstract: Approximation expressions for the server utilization factor of each server in a heterogeneous-server G/G/n queueing loss system with ordered entry are derived. The system is assumed to face a stationary counting process. Service times are generally distributed with possibly different service rates. The numerical results from this approximation method are then compared with those from a simulation study.
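A simulation sketch of the ordered-entry loss system above: each arrival scans the servers in a fixed order and takes the first free one, or is lost if all are busy. The rates are arbitrary, and Poisson arrivals with exponential service are used purely for brevity; the paper treats a general stationary arrival process and general service times.

```python
# Ordered-entry loss system: estimate each server's utilization when arrivals
# always try server 0 first, then 1, and so on, and are lost if none is free.
import random

random.seed(2)
rates = [1.0, 0.7, 0.4]          # heterogeneous service rates, in entry order
free_at = [0.0] * len(rates)     # time at which each server becomes free
busy_time = [0.0] * len(rates)
t = 0.0

for _ in range(200_000):
    t += random.expovariate(2.0)               # arrivals (Poisson here for simplicity)
    for i, mu in enumerate(rates):             # ordered entry: lowest index first
        if free_at[i] <= t:
            service = random.expovariate(mu)
            free_at[i] = t + service
            busy_time[i] += service
            break                              # no free server found -> arrival is lost

for i, b in enumerate(busy_time):
    print(f"server {i}: utilization ≈ {b / t:.3f}")
```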

Proceedings ArticleDOI
09 Jan 1986
TL;DR: The development of the hardware that was the basis of the first distributed personal computing system, built at the Xerox Palo Alto Research Center, is described.
Abstract: Between 1972 and 1980, the first distributed personal computing system was built at the Xerox Palo Alto Research Center. The system was composed of a number of Alto workstations connected by an Ethernet local network. It also included servers that provided centralized facilities. This paper describes the development of the hardware that was the basis of the system.

Journal ArticleDOI
TL;DR: A globally optimal allocation for files in a local network environment is presented and it is shown that all the files of each workstation should be placed on one file server, with the workstations divided as equally as possible among the file servers.
Abstract: A globally optimal allocation for files in a local network environment is presented. The principal concern is the delays due to contention at the file servers; storage space is assumed to be adequate. A queuing network model is used to represent the file servers and the workstations. The workloads generated by the workstations are statistically identical. The model assumes that the communications medium is lightly loaded. In this case there is very little queuing, so that a message transmission requires an approximately constant average delay which can be included in the local processing time of the workstation. Under these assumptions the model can be applied to any of the various LAN technologies. It is shown that all the files of each workstation should be placed on one file server, with the workstations divided as equally as possible among the file servers.
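The allocation rule stated in this abstract is simple enough to show directly: keep all of a workstation's files on a single file server and spread the workstations as evenly as possible over the servers. The names below are placeholders.

```python
# Globally optimal allocation under the paper's assumptions: one file server
# per workstation, workstations split as equally as possible among servers.
def allocate(workstations, servers):
    """Map each workstation to one file server, balancing the counts."""
    assignment = {}
    for i, ws in enumerate(workstations):
        assignment[ws] = servers[i % len(servers)]   # round-robin gives an even split
    return assignment

stations = [f"ws{i}" for i in range(7)]
print(allocate(stations, ["fs1", "fs2", "fs3"]))
# ws0..ws6 end up 3/2/2 across fs1..fs3 -- as equal as possible.
```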

01 Jan 1986
TL;DR: Five major topics are addressed in this work: provision of network-transparent execution environments for programs, structuring migration facilities such that they interfere with the normal operation of the system in a minimal manner, elimination of residual dependencies that occur when a program migrates but has state information left in machine-relative servers on the original machine, provision of global scheduling facilities for finding idle/lightly loaded machines for remote execution and migration of programs, and provision of fair access to global resources.
Abstract: A loosely-coupled distributed system consisting of a cluster of workstations and server machines represents a large amount of computational power, much of which is frequently idle. Users would like to take advantage of this idle processing power by running one or more jobs in parallel on underutilized workstations. The use of underutilized workstations as computation servers not only increases the processing power available to users, but also improves the utilization of the hardware base. However, this use must not compromise a workstation owner's claim to his machine: a user must be able to quickly reclaim his workstation to avoid interference with personal activities, implying removal of "guest" programs within a few seconds' time. In addition, use of remote machines as computation servers should not require programs to be written with special provisions for executing remotely. That is, remote execution should be preemptable and transparent. On the other hand, rather than simply terminate the guest program, it should be possible to migrate it to another available workstation. In this thesis, we study the key design and performance issues that affect preemptable remote execution in a loosely-coupled distributed system. Five major topics are addressed in our work: (1) provision of network-transparent execution environments for programs, (2) structuring migration facilities such that they interfere with the normal operation of the system in a minimal manner, (3) elimination of residual dependencies that occur when a program migrates but has state information left in machine-relative servers on the original machine, (4) provision of global scheduling facilities for finding idle/lightly loaded machines for remote execution and migration of programs, and (5) provision of fair access to global resources among the programs and users of a system. In the process of addressing these topics we delineate when remote execution facilities, with or without migration facilities, are useful and under what conditions they are easy (or difficult) to provide.

Journal ArticleDOI
TL;DR: In this article, the authors consider a queueing system consisting of s (s ≧ 2) single-server service stations and show that the departure process is stochastically fastest in the case where the servers are homogeneous.
Abstract: We consider tandem queues which have a general arrival process. The queueing system consists of s (s ≧ 2) single-server service stations and the servers have exponential service-time distributions. Firstly we give a new proof for the fact that the departure process does not depend on the particular allocation of the servers to the stations. Secondly, considering the service rates, we prove that the departure process becomes stochastically faster as the homogeneity of the servers increases in the sense of a given condition. It turns out that, given the sum of the service rates, the departure process is stochastically fastest in the case where the servers are homogeneous.

Journal ArticleDOI
TL;DR: It is shown that for closed queueing networks of m arbitrarily connected single server queues with n customers, throughput, as a function of a scaled, constrained workload, is not concave, and the function appears to be strictly quasiconcave.

Journal ArticleDOI
TL;DR: An explicit steady-state solution is given for any queuing loop made up of two general servers, whose distribution functions have rational Laplace transforms, and it is shown that the algorithm requires at least an order of magnitude less computational effort than any previously reported algorithm.
Abstract: An explicit steady-state solution is given for any queuing loop made up of two general servers, whose distribution functions have rational Laplace transforms. The solution is in matrix geometric form over a vector space that is itself a direct or Kronecker product of the internal state spaces of the two servers. The algebraic properties of relevant entities in this space are given in an appendix. The closed-form solution yields simple recursive relations that in turn lead to an efficient algorithm for calculating various performance measures such as queue length and throughput. A computational-complexity analysis shows that the algorithm requires at least an order of magnitude less computational effort than any previously reported algorithm.

Journal ArticleDOI
TL;DR: For light traffic it is optimal to assign the server with the smallest assignment cost, while for heavy traffic an efficient algorithm is derived for finding the optimal policy.

Journal ArticleDOI
TL;DR: Algorithms for distributed match-making in store-and-forward computer networks are developed, their complexity is investigated, the theoretical limitations of distributed match-making are established, and the techniques are applied to several network topologies.
Abstract: In the very large multiprocessor systems and, on a grander scale, computer networks now emerging, processes are not tied to fixed processors but run on processors taken from a pool of processors. Processors are released when a process dies, migrates or when the process crashes. In distributed operating systems using the service concept, processes can be clients asking for a service, servers giving a service or both. Establishing communication between a process asking for a service and a process giving that service, without centralized control in a distributed environment with mobile processes, constitutes the problem of distributed match-making. Logically, such a match-making phase precedes routing in store-and-forward computer networks of this type. Algorithms for distributed match-making are developed and their complexity is investigated in terms of message passes and in terms of storage needed. The theoretical limitations of distributed match-making are established, and the techniques are applied to several network topologies.
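A small illustration of one classic match-making strategy of the kind the paper analyzes: arrange the n nodes in a square grid, let a server post its address at every node in its row and a client query every node in its column; a row and a column always intersect, so the client locates the server with O(sqrt(n)) messages on each side. The grid layout and function names are assumptions made for the example.

```python
# Grid rendezvous: servers post along their row, clients query along their
# column, and the intersection node performs the match-making.
import math

n = 16                                   # number of nodes, assumed a perfect square
side = int(math.isqrt(n))
posted = {node: {} for node in range(n)} # node -> {service name: server node}

def row(node):    return [node - node % side + c for c in range(side)]
def column(node): return [node % side + r * side for r in range(side)]

def register(server_node, service):
    for node in row(server_node):        # server posts along its row
        posted[node][service] = server_node

def lookup(client_node, service):
    for node in column(client_node):     # client queries along its column
        if service in posted[node]:
            return posted[node][service]
    return None

register(server_node=6, service="file-server")
print("client at node 13 finds file-server at node", lookup(13, "file-server"))
```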

Journal ArticleDOI
TL;DR: A simple n -dimensional birth-death steady-state traffic model is used to evaluate the blocking probability experienced by n different classes of customers in a two-way interactive distributed community antenna television (CATV) communication system.
Abstract: The service quality of a circuit-switched network is usually measured in terms of probability of rejecting a call and the delay encountered by the customers in accessing the common resources. In this paper, a simple n -dimensional birth-death steady-state traffic model is used to evaluate the blocking probability experienced by n different classes of customers in a two-way interactive distributed community antenna television (CATV) communication system. In the computation of the end-to-end blocking probability, we have assumed that the links between the call's originating node and the destination node are statistically independent. It is assumed, without loss of generality, that the number of channels (servers) is the same for all the links. In addition to the closed-form solution obtained for different cases, a simple recursive formula that simplifies the computational complexity is presented.
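As a worked stand-in for the kind of computation the abstract describes: with statistically independent links and the same number of channels per link, an end-to-end blocking estimate combines per-link blocking probabilities. The single-class Erlang-B recursion below is only the textbook special case; the paper's own recursive formula handles the n customer classes jointly.

```python
# End-to-end blocking on a route of independent links, using the classic
# single-class Erlang-B recursion for the per-link blocking probability.
def erlang_b(offered_load, channels):
    """Blocking probability via B(0) = 1, B(k) = a*B(k-1) / (k + a*B(k-1))."""
    b = 1.0
    for k in range(1, channels + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

def end_to_end_blocking(offered_load, channels, hops):
    """Assumes link blocking events are independent along the route."""
    per_link = erlang_b(offered_load, channels)
    return 1.0 - (1.0 - per_link) ** hops

# e.g. 8 channels per link, 5 Erlangs offered, a 3-hop path through the CATV tree
print(end_to_end_blocking(5.0, 8, 3))
```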

Journal ArticleDOI
David D. Yao
TL;DR: It is shown that adding a sequence of servers with non-increasing (non-decreasing) service rates will yield a decreasing convex sequence of overflow probabilities, and an optimal server allocation problem is solved.

Journal ArticleDOI
01 Aug 1986
TL;DR: This paper expands on the UNIX notion of pipes to a generalized pipeline mechanism containing services from a variety of servers and gives a design and performance evaluation of a general mechanism for multi-machine pipes using the DARPA UDP and TCP protocols.
Abstract: As a user's computing environment grows from a single time-shared host to a network of specialized and general-purpose machines, the capability for the user to access all of these resources in a consistent and transparent manner becomes desirable. Instead of viewing commands as binary files, we expect the user to view commands as services provided by servers in the network. The user interacts with a personal workstation that locates and executes services on his behalf. Executing a single service provided by any server in the network is useful, but the user would also like to combine services from different machines to perform complex computations. To provide this facility we expand on the UNIX notion of pipes to a generalized pipeline mechanism containing services from a variety of servers. In this paper we explain the merits of a multi-machine pipeline for solving problems of accessing services in a heterogeneous environment. We also give a design and performance evaluation of a general mechanism for multi-machine pipes using the DARPA UDP and TCP protocols.
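A toy, single-host illustration of the multi-machine pipeline idea: each "service" is a small TCP server applying one transformation, and the driver streams data through the services in order, much as a shell pipe chains commands. Real services would run on different machines, and the paper also evaluates a UDP variant; the ports and transformations here are invented.

```python
# Chain TCP "services" into a pipeline: the output of one stage is fed as the
# input of the next, mimicking multi-machine pipes on a single host.
import socket
import threading
import time

def serve(port, transform):
    """Tiny one-connection-at-a-time TCP service applying `transform`."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen()
    while True:
        conn, _ = srv.accept()
        with conn:
            data = b""
            while chunk := conn.recv(4096):       # read request until client EOF
                data += chunk
            conn.sendall(transform(data))
            conn.shutdown(socket.SHUT_WR)

def call(port, data):
    """Send `data` to the service on `port` and collect its full reply."""
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(data)
        c.shutdown(socket.SHUT_WR)
        out = b""
        while chunk := c.recv(4096):
            out += chunk
        return out

stages = [(15001, bytes.upper), (15002, lambda d: d[::-1])]   # two toy services
for port, fn in stages:
    threading.Thread(target=serve, args=(port, fn), daemon=True).start()
time.sleep(0.2)                           # give the toy servers time to start

data = b"pipelines across servers"
for port, _ in stages:                    # the "pipe": each stage feeds the next
    data = call(port, data)
print(data)
```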

Patent
11 Jun 1986
TL;DR: Each local area network subsystem is provided with an address server that stores the address data of all terminal stations, so that a station can retrieve an unknown destination address from the server; this simplifies maintenance of the address list.
Abstract: PURPOSE: To simplify correction of the address list by providing each local area network subsystem with an address server that stores the address data of all terminal stations, so that the address server can be queried. CONSTITUTION: Plural local area network subsystems LAN1-LANn are connected via node stations N1, N2, .... Each subsystem LAN1-LANn contains plural terminal stations S11-S1m, ..., Sn1-Snm and an address server S1A-SnA; each address server stores, for every terminal station, the name of the subsystem it belongs to and the address specific to that station. When a terminal station does not know the address of the station to which it wishes to transmit, it queries the address server and transmits the data after obtaining the destination address. Thus, the memory area of each terminal station is saved, and the job of correcting the address table when terminal stations are added or removed during a system change is simplified.
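A minimal sketch of the mechanism claimed in the patent: each LAN subsystem keeps one address server holding a (subsystem, station address) entry for every terminal station, and a station consults it before transmitting when it does not know the destination address. The names and addresses below are invented.

```python
# Per-subsystem address server: stations resolve unknown destinations here
# instead of keeping a full address table themselves.
class AddressServer:
    def __init__(self):
        self._table = {}                       # station name -> (subsystem, address)

    def register(self, station, subsystem, address):
        self._table[station] = (subsystem, address)

    def lookup(self, station):
        return self._table.get(station)        # None if the station is unknown


server = AddressServer()
server.register("S11", "LAN1", "00:01")
server.register("Sn1", "LANn", "0n:01")

# A terminal station wanting to reach "Sn1" first resolves it, then transmits.
dest = server.lookup("Sn1")
print("send to", dest if dest else "unknown station")

# Adding or removing stations means editing only the address servers, which is
# the maintenance advantage claimed over per-station address tables.
server.register("S12", "LAN1", "00:02")
```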

Journal ArticleDOI
Harrison
TL;DR: An approximation for the distribution of time delays experienced by a customer in a network of queues is presented and it is proved that the correlation between the sojourn times at successive servers on a customer's path in a closed queueing network with exponential servers is negative.
Abstract: An approximation for the distribution of time delays experienced by a customer in a network of queues is presented. Approximate analytical models are necessary since exact solutions are only available for a very restricted class of networks, and are too complex computationally to be viable in practice. Approximations have so far often proved inadequate, particularly for closed networks with first come first served queueing disciplines. We also prove that the correlation between the sojourn times at successive servers on a customer's path in a closed queueing network with exponential servers is negative.

Journal ArticleDOI
TL;DR: A closed single node queueing system with multiple classes is analyzed numerically with a view to obtaining performance measures such as throughput, distribution of busy servers, and queue-length distribution.

Journal ArticleDOI
TL;DR: This work describes issues related to the design and performance of a multimedia information server for an office environment and analyzes the performance of various access methods for multimedia documents that reside in the server.
Abstract: We describe issues related to the design and performance of a multimedia information server for an office environment. Multimedia documents are composed of text, image, voice, and attribute information. We describe the structure of multimedia documents, their internal representation, and their presentation form. Content addressability in this environment is achieved by specifying conditions on attributes, text, images, and document presentation form. Query reformulation, multimedia document formation in this environment, and communication with the outside world are also discussed. Archived documents are stored in a centralized server. Users submit their queries from their local workstations using a local area network. The server retrieves the qualifying documents and returns them to the users. We analyze the performance of various access methods for multimedia documents that reside in the server.

Journal ArticleDOI
A. Hac
TL;DR: The author presents a new approach to modeling file systems using queueing networks based on the analysis of execution of transactions in the system, which allows multiple classes of transactions and shared files to be represented.
Abstract: The author presents a new approach to modeling file systems using queueing networks. The delays due to locking the files are modeled using service centers whose service times and probabilities of access are estimated from the values of measurable quantities. The model of a lock is based on the analysis of execution of transactions in the system. The lock for every file is modeled as a sequence of service centers. The decomposition method can be used to solve the model, which allows multiple classes of transactions and shared files to be represented. An example involving measurement data collected in a small business installation is given to compare performance measures provided by the simulation and analytic models.
