
Showing papers on "Server published in 1983"


Journal ArticleDOI
TL;DR: Some experience with the DOMAIN architecture is presented and lessons derived from that experience are presented, as are some performance data.
Abstract: The DOMAIN system is an architecture for networks of personal workstations and servers which creates an integrated distributed computing environment. Its distinctive features include: a network-wide file system of objects addressed by unique identifiers (UIDs); the abstraction of a single-level store for transparently accessing all objects, regardless of their location in the network; and a network-wide hierarchical name space. The implementations of these facilities exhibit several interesting approaches to layering the system software. In addition to network-transparent data access, interprocess communication is provided as a basis for constructing distributed applications; as a result, we have some experience to guide the choice between these two alternative implementation techniques. Networks utilizing this architecture have been operational for almost three years; some experience with the architecture and lessons derived from that experience are presented, as are some performance data.

168 citations


Journal ArticleDOI
TL;DR: In this paper, an alternative charging scheme for an exponential service case with pre-emptive LIFO service is presented which confirms that differences between individual and social optima occur precisely because individuals fail to consider the inconvenience that they cause to others.
Abstract: Customers arrive at a service area according to a Poisson process. An arriving customer must choose one of K servers without observing present congestion levels. The only available information about the kth server is the service time distribution, with expected duration 1/µk, and the cost per unit time of waiting at the kth server, hk. Although service distributions may differ from server to server and need not be exponential, it is assumed that they share the same coefficient of variation. Individuals acting in self-interest induce an arrival rate pattern λ̂1, λ̂2, …, λ̂K. In contrast, the social optimum is the arrival rate pattern λ1*, λ2*, …, λK* which minimizes long-run average cost per unit time for the entire system. The main result is that the λ̂k's and the λk*'s differ systematically: individuals overload the servers with the smallest hk/µk values. For an exponential service case with preemptive LIFO service, an alternative charging scheme is presented which confirms that differences between individual and social optima occur precisely because individuals fail to consider the inconvenience that they cause to others.
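The gap between the self-interested and the socially optimal split can be illustrated numerically. The sketch below assumes two M/M/1 servers (the paper allows general distributions with a common coefficient of variation) and uses the M/M/1 sojourn time W(λ) = 1/(µ − λ); all rates are illustrative, not from the paper. A grid search locates the equilibrium (equal expected cost at both servers) and the social optimum (minimum long-run cost rate):

```python
def split_rates(total=2.0, mu=(2.0, 2.0), h=(1.0, 2.0), steps=100_000):
    """Toy two-server illustration (M/M/1 assumption, rates invented).
    cost_k(l) = h_k * W_k(l) with W_k(l) = 1/(mu_k - l).
    Returns the arrival rate at server 1 under (individual, social) optima."""
    best_eq = best_soc = None
    eq_gap = soc_cost = float("inf")
    for i in range(1, steps):
        l1 = total * i / steps
        l2 = total - l1
        if l1 >= mu[0] or l2 >= mu[1]:
            continue                      # keep both queues stable
        c1 = h[0] / (mu[0] - l1)          # individual expected cost at server 1
        c2 = h[1] / (mu[1] - l2)
        if abs(c1 - c2) < eq_gap:         # equilibrium: equal individual costs
            eq_gap, best_eq = abs(c1 - c2), l1
        cost = l1 * c1 + l2 * c2          # long-run average cost rate
        if cost < soc_cost:               # social optimum: minimum total cost
            soc_cost, best_soc = cost, l1
    return best_eq, best_soc
```

With h1/µ1 < h2/µ2 as in these defaults, the equilibrium load on server 1 exceeds the socially optimal load, matching the paper's claim that individuals overload the servers with the smallest hk/µk values.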

163 citations


Journal ArticleDOI
G.J. Foschini, B. Gopinath
TL;DR: The structure of optimal policies for the model considered with three types of users is determined, which consists of limiting the number of waiting requests of each type, and reserving a part of the memory to each type.
Abstract: Efficient design of service facilities, such as data or computer networks that meet random demands, often leads to the sharing of resources among users. Contention for the use of a resource results in queueing. The waiting room is a part of any such service facility. The number of accepted service requests per unit of time (throughput), or the fraction of the time the servers are busy (utilization), are often used as performance measures to compare designs. Most common models in queueing theory consider the design of the waiting rooms with the assumption that, although individual requests may differ from one another, they are statistically indistinguishable. However, there are several instances where available information allows us to classify the requests for service into different types. In such cases the design of the service facility not only involves the determination of an optimum size for the waiting room but also the rules of sharing it among the different types. Even with a fixed set of resources, the rules of sharing them can influence performance. In data networks (or computer networks) the "waiting room" consists of memory of one kind or another. Messages (jobs) destined for different locations (processors) sharing common storage is an important example of shared use of memory. Recently, Kleinrock and Kamoun have modeled such use of memory and computed the performance of various policies for managing the allocation of memory to several types of users. Decisions to accept or reject a demand for service were based on the number of waiting requests of each type. However, the optimal policy was not determined even in the case where there were only two types of users. We determine the structure of optimal policies for the model considered with three types of users. The optimal policy consists of limiting the number of waiting requests of each type, and reserving a part of the memory to each type.

132 citations


Journal ArticleDOI
TL;DR: A machine maintenance problem is studied in which n machines are to be served on a regular, periodic basis; the question is how the maintenance cycles of the machines can be initiated so that all service requirements can be fulfilled by k servers.

83 citations


Journal ArticleDOI
TL;DR: The present paper gives the generalization of Chow’s result (by means of a different proof) to the case of arbitrarily many exponential servers.
Abstract: In [1] Chow obtained the steady-state distribution of the cycle time of a customer in a closed tandem queue composed of two exponential FIFO servers and populated by an arbitrary fixed number of customers. The present paper gives the generalization of Chow’s result (by means of a different proof) to the case of arbitrarily many exponential servers.

62 citations


Journal ArticleDOI
TL;DR: It is shown that probabilistic control is suboptimal for minimizing the mean number of customers in the system, and an approximation to the optimum policy is analyzed which is computationally simple and suffices for most operational applications.
Abstract: A dynamic control policy known as "threshold queueing" is defined for scheduling customers from a Poisson source on a set of two exponential servers with dissimilar service rates. The slower server is invoked in response to instantaneous system loading as measured by the length of the queue of waiting customers. In a threshold queueing policy, a specific queue length is identified as a "threshold," beyond which the slower server is invoked. The slower server remains busy until it completes service on a customer and the queue length is less than its invocation threshold. Markov chain analysis is employed to analyze the performance of the threshold queueing policy and to develop optimality criteria. It is shown that probabilistic control is suboptimal for minimizing the mean number of customers in the system. An approximation to the optimum policy is analyzed which is computationally simple and suffices for most operational applications.
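The threshold policy can be exercised with a small event-driven simulation; this is a sketch under assumed rates, not the paper's Markov chain analysis. The fast server takes any waiting customer; the slow server is started only while the queue of waiting customers is at or above the threshold and, once started, finishes its current customer:

```python
import random

def simulate(threshold, lam=1.5, mu_fast=2.0, mu_slow=0.8,
             n_arrivals=200_000, seed=42):
    """Mean number in system for M/M/2 heterogeneous servers under a
    threshold policy. All rates are illustrative assumptions."""
    rng = random.Random(seed)
    t = 0.0
    queue = 0                             # customers waiting (not in service)
    fast_busy = slow_busy = False
    next_arrival = rng.expovariate(lam)
    fast_done = slow_done = float("inf")  # inf = server idle
    area = 0.0                            # time-integral of number in system
    arrivals = 0
    while arrivals < n_arrivals:
        t_next = min(next_arrival, fast_done, slow_done)
        area += (queue + fast_busy + slow_busy) * (t_next - t)
        t = t_next
        if t == next_arrival:
            arrivals += 1
            queue += 1
            next_arrival = t + rng.expovariate(lam)
        elif t == fast_done:
            fast_busy, fast_done = False, float("inf")
        else:
            slow_busy, slow_done = False, float("inf")
        # policy: the fast server takes any waiting customer; the slow
        # server starts only when the waiting queue reaches the threshold
        if queue and not fast_busy:
            queue -= 1
            fast_busy = True
            fast_done = t + rng.expovariate(mu_fast)
        if queue >= threshold and not slow_busy:
            queue -= 1
            slow_busy = True
            slow_done = t + rng.expovariate(mu_slow)
    return area / t
```

Setting the threshold above any reachable queue length recovers the single fast M/M/1 server (mean number ρ/(1−ρ) = 3 at these rates), so the benefit of invoking the slow server can be read off directly.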

59 citations


Journal ArticleDOI
TL;DR: The basic conceptual framework for designing a server intervention approach to the prevention of drinking driving is provided and six existing or recently completed programs throughout the United States are reviewed.

52 citations


Journal ArticleDOI
TL;DR: Results for normal load and overload reveal high throughput and low transfer times which are basic for a wide range of applications in LAN's.
Abstract: This paper reports on the implementation of a local area network (LAN) operating under a new CSMA-CD protocol with dynamic priorities (CSMA-CD-DP). User terminals, host computers, and other servers are connected to a common broad-band channel through N network access stations in a clustered manner. This concept reduces the number of network access stations and enhances the utilization of hardware and software resources greatly. A new protocol has been developed which organizes the decentralized operation of the distributed network access stations and which allows for a number of specific features. In the idle state the channel is operated in the contention mode. After the beginning of a transmission, the channel is operated in a reservation mode. Channel arbitration after a completed transmission is resolved by staggered delays; at any time, each station owns a distinct transmission delay which is changed after every successful transmission by broadcasted acknowledgments. This protocol strictly limits the possibility of collisions and approaches the effectiveness of token and polling protocols with increasing load. Through specific allocations of transmission delays, static priorities or dynamic overload control can be realized easily. The performance of the CSMA-CD-DP protocol has been modeled and analyzed analytically as well as by simulation. Results for normal load and overload reveal high throughput and low transfer times which are basic for a wide range of applications in LAN's.

49 citations


Journal ArticleDOI
TL;DR: The aggregate server method is an approximate iterative method for analyzing computer systems and involves the introduction of aggregate servers into a queueing network to represent the serialization delay.
Abstract: The aggregate server method is an approximate, iterative technique for analyzing the delays programs encounter while waiting for entry into critical sections, non-reentrant subroutines, and similar software structures that cause processing to become serialized. The method employs a conventional product form queueing network comprised of servers that represent actual I/O devices and processors, plus additional aggregate servers that represent serialized processing activity. The parameters of the product form network are adjusted iteratively to account for contention among serialized and non-serialized customers at each physical device.

33 citations


Posted Content
TL;DR: The problem is what to do with a slow server in a service facility that has both fast and slow servers: should the slow server be used to render service, or is it better not to use it at all?
Abstract: The problem is what to do with a slow server in a service facility which has fast and slow servers. Should the slow server be used to render service, or is it better not to use it at all? Simple models for answering this question are formulated and studied.

28 citations


Journal ArticleDOI
TL;DR: This paper considers the service system MX/G/∞, characterized by an infinite number of servers, a general service time distribution, and group arrivals: the group size is a random variable, and the time between group arrivals is exponentially distributed.

Journal ArticleDOI
TL;DR: In this article, an explicit expression for the traffic handled by the various servers in a GI/M/s queueing system with ordered entry is derived, and the probability distribution of the number of busy servers among the first k (k = 1, 2, …, s) servers is given.
Abstract: Consider a many-server queueing system in which the servers are numbered. If a customer arrives when two or more servers are idle he selects the server with lowest index (this is called the ordered entry selection rule). An explicit expression for the traffic handled by the various servers in a GI/M/s queueing system with ordered entry is derived. For the M/M/s queueing system the probability distribution of the number of busy servers among the first k (k = 1, 2, …, s) servers will be given. Finally, a formula for the traffic handled by the first server in an M/D/s system will be derived. All results are derived under steady-state conditions. As an application some numerical data for the server utilizations will be given and compared to data obtained from simulation studies of a closed-loop continuous belt-conveyor.
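The ordered-entry rule is easy to reproduce in simulation. The sketch below assumes an M/M/s system with invented rates (the paper's analytical results cover GI/M/s): an arrival takes the lowest-indexed idle server, and if all are busy it waits in FIFO order for the next server to become free.

```python
import random

def ordered_entry_util(lam=2.0, mu=1.0, s=4, n_arrivals=200_000, seed=7):
    """Per-server utilizations in M/M/s under ordered entry.
    Rates and the waiting discipline are illustrative assumptions."""
    rng = random.Random(seed)
    t, queue, arrivals = 0.0, 0, 0
    done = [float("inf")] * s          # completion epochs; inf = idle
    busy = [0.0] * s                   # accumulated busy time per server

    def start(k, now):
        service = rng.expovariate(mu)
        done[k] = now + service
        busy[k] += service

    next_arrival = rng.expovariate(lam)
    while arrivals < n_arrivals:
        t = min(next_arrival, min(done))
        if t == next_arrival:
            arrivals += 1
            idle = [k for k in range(s) if done[k] == float("inf")]
            if idle:
                start(idle[0], t)      # ordered entry: lowest idle index
            else:
                queue += 1
            next_arrival = t + rng.expovariate(lam)
        else:                          # a service completion
            k = done.index(t)
            done[k] = float("inf")
            if queue:                  # freed server takes the next waiter
                queue -= 1
                start(k, t)
    return [b / t for b in busy]
```

The simulated utilizations decrease strictly with the server index, which is the qualitative behavior the ordered-entry traffic formulas quantify.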

Journal ArticleDOI
TL;DR: The convolution and mean value analysis algorithms are extended to incorporate composite servers that result from the aggregation of stations in a multiple class queueing network model to handle the approximate analysis of multiclass networks.
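The mean value analysis recursion that the authors extend can be sketched in its basic single-class, load-independent form (the multiclass and aggregation extensions of the paper are not reproduced here):

```python
def mva(demands, n_customers):
    """Exact single-class Mean Value Analysis for a closed product-form
    network of load-independent queueing stations.
    demands[k] = total service demand of one customer at station k.
    Returns (throughput, residence times, mean queue lengths)."""
    q = [0.0] * len(demands)           # queue lengths with 0 customers
    x, r = 0.0, list(demands)
    for n in range(1, n_customers + 1):
        # arrival theorem: an arriving customer sees the network
        # with one customer removed
        r = [d * (1.0 + qk) for d, qk in zip(demands, q)]
        x = n / sum(r)                 # system throughput
        q = [x * rk for rk in r]       # Little's law per station
    return x, r, q
```

For example, mva([1.0, 0.5], 2) gives a throughput of 6/7, and as the population grows the throughput approaches the bottleneck bound 1/max(demands).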

Journal ArticleDOI
TL;DR: A numerical solution to multichannel queueing systems with ordered entry and finite or infinite sources is presented, together with bounds on the sizes of finite sources.

Journal ArticleDOI
TL;DR: The report below presents initial experience gained from this use, from the viewpoint of the developers and the user support staff of KOMEX.

Journal ArticleDOI
TL;DR: The patrol-initiated-activity hypercube queuing model, which allows servers to handle both calls assigned by a central dispatcher and activities initiated by the servers, is applied to historical data from a police agency.
Abstract: This paper applies the patrol-initiated-activity hypercube queuing model to historical data from a police agency. The model allows servers to handle both calls assigned by a central dispatcher and activities initiated by the servers. By duplicating a fairly complex dispatch strategy, the model was found to predict both assigned and server-initiated work loads accurately for the overall system and individual servers. The model is apparently ineffective in predicting small unit-travel-time differences for this police agency.

Journal ArticleDOI
TL;DR: In this article, the equilibrium behavior of the M/G/k loss system with servers subject to breakdown is studied and a generalization of the Erlang B-formula is given.
Abstract: This paper studies the equilibrium behaviour of the M/G/k loss system with servers subject to breakdown. Such a system has k servers, whose customers arrive in a Poisson process. They are served if there is an idle server, otherwise they leave and do not return. Each server is subject to breakdown with probability of occurrence depending on the length of the time the server has been busy since his last repair. On a breakdown the customer waits and his service is continued just after the repair of the server. Among other things, a generalization of the Erlang B-formula is given and it is shown that the equilibrium departure process is Poisson. In fact these results are obtained for the more general case where customers may balk and service and repair rates are state dependent.
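The classical Erlang B formula that this paper generalizes can be computed with the standard numerically stable recursion (the breakdown-and-repair generalization itself is not reproduced here):

```python
def erlang_b(a, k):
    """Erlang B blocking probability for offered load a (erlangs) and
    k servers, via the recursion B_0 = 1,
    B_n = a*B_{n-1} / (n + a*B_{n-1})."""
    b = 1.0
    for n in range(1, k + 1):
        b = a * b / (n + a * b)
    return b
```

For instance, 2 erlangs offered to 4 servers gives a blocking probability of about 0.0952.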

Journal ArticleDOI
10 Oct 1983
TL;DR: Grapevine as discussed by the authors is a distributed, replicated system that provides message delivery, naming, authentication, resource location, and access control services in an internet of computers, which was designed and implemented several years ago.
Abstract: Grapevine is a distributed, replicated system that provides message delivery, naming, authentication, resource location, and access control services in an internet of computers. The system, described in a previous paper [1], was designed and implemented several years ago. We now have had operational experience with the system under substantial load. This experience has proved the original design sound in most aspects, but there also have been some surprises. In this paper we report what we have learned from using Grapevine. Our experience may offer some help to designers of new systems. Grapevine is implemented as a program that is run on a set of dedicated server computers. Client programs of Grapevine run on various workstation and server computers attached to an internet. The services provided by Grapevine are divided into the message service and the registration service. The message service accepts messages prepared by clients for delivery to individual recipients and distribution lists. Messages are buffered in inboxes on message servers until the recipient requests them. Any message server can accept any message for delivery, thus providing a replicated submission service. A mail system user has inboxes on at least two message servers, thus replicating the delivery path for the user.

Journal ArticleDOI
TL;DR: The lumping method is applied to obtain stationary probabilities numerically for a loss system with two types of input streams, in order to improve service availability by introducing several well-trained servers who are capable of dealing with the service requirements of both types of customers.
Abstract: This paper applies the lumping method, proposed by Takahashi (4), to obtain stationary probabilities numerically for a loss system with two types of input streams. Using the method we can evaluate numerically the effect of introducing several servers who are capable of serving both types of customers. In an airline company, for example, some reception clerks serve only domestic-line passengers and others serve only international-line passengers. Under this manner of service an unbalanced situation often arises in which all clerks for domestic lines are hard at work while some servers for international lines are idle, or vice versa. It is therefore expected that service availability could be improved by introducing several well-trained servers who are capable of dealing with the service requirements of both types of customers. In this paper we examine how these commonly usable servers can decrease the total loss probability. In Section 2 we introduce a model to analyze the above phenomena, described as a Markov process with a large state space. The largeness of the state space makes it almost impossible even for a powerful computer to obtain stationary probabilities for this model using the ordinary Gauss–Seidel method or Gaussian elimination. We therefore provide an abbreviated algorithm of the lumping method, originally proposed in Takahashi (4) and Takahashi & Takami (5), to obtain stationary probabilities in Section 3. In Section 4 we examine some numerical examples.

01 Feb 1983
TL;DR: This memo extracts the number of hosts that accepted a connection to their server for each of Telnet, FTP, and SMTP, and compares it to the total number of hosts in the Internet (not counting TACs or ECHOS).

Abstract: This is a summary of the surveys of Telnet, FTP, and Mail (SMTP) servers conducted by David Smallberg in December 1982, January 1983, and February 1983, as reported in RFC 832-843 and 845-846. This memo extracts the number of hosts that accepted a connection to their server for each of Telnet, FTP, and SMTP, and compares it to the total number of hosts in the Internet (not counting TACs or ECHOS).

Journal ArticleDOI
01 Mar 1983-Networks
TL;DR: This article deals with some dispatching aspects of a system in which the dispatcher, in addition to stationary service units, also has the option to assign service units in motion.

Abstract: This article deals with some dispatching aspects of a system in which the dispatcher, in addition to stationary service units, also has the option to assign service units in motion. The nonstationary servers leave their home locations at different times and move on different paths. The response time is the shortest-time path of a server to the incident. The objective is to identify efficiently the appropriate unit to dispatch to a random incident; the analysis ends as soon as one of the servers is dispatched. The paper contains an efficient procedure based on the notion of the general time path, a single path on which we superimpose the various different paths of all the moving servers. This simplifies the analysis since, once the general time path is constructed, it contains all the information that is provided by all those different paths. The procedure shows how to divide the general time path into disjoint segments according to the server which is closest to the incident (in each one of the segments exactly one identifiable server is closest to the incident).

Journal ArticleDOI
TL;DR: The general purpose of the Message system is described and the functions of the Name Server and the Message Server, an end-to-end Message protocol, are briefly described.


Journal ArticleDOI
TL;DR: A simple observation allows use of the exponential solution directly for the case of hyperexponential servers; as a result, characterization of a server in terms of its mean and coefficient of variation is not sufficient to predict even the mean response time for a class using weighted processor sharing.
Abstract: In a recent paper by Fayolle, Mitrani, and Iasnogorodski [2], some general multidimensional integral equations were derived in order to solve for the mean response time of each of several classes in a queue whose service discipline was weighted processor sharing. The arrival processes were Poisson. The weighting means that each job within a class k is given an amount of processing proportional to the priority weight gk associated with that class. For exponential service times, the general equations were solved. In this note, a simple observation allows use of the exponential solution directly for the case of hyperexponential servers. As a result, it is possible to state the following. • Characterization of a server in terms of its mean and coefficient of variation is not sufficient to predict even the mean response time for a class using weighted processor sharing. In unweighted or egalitarian processor sharing, only the mean is sufficient. • The Kleinrock conservation law [4] does not hold for nonexponential servers. Fayolle et al. [2] had shown that it does hold for exponential servers.

01 Jan 1983
TL;DR: This paper deals with an iterative method to obtain approximations for mean values in closed queueing networks with two-phase servers; the system considered is essentially a Gordon and Newell network. The servers are special in the sense that the first phase, a preparatory one, can be done while no customers are at the server.
Abstract: This paper deals with an iterative method to obtain approximations for mean values in closed queueing networks with two-phase servers. The servers are special in the sense that the first phase, a preparatory one, can be done while no customers are at the server.

I. Introduction. The study of queueing networks using exact and computationally attractive methods like the convolution algorithm (see Reiser and Kobayashi [1975]) and the mean-value algorithm (see Reiser and Lavenberg [1980] and Reiser [1981]) is restricted to a very special class of networks: the networks with a product-form solution. However, we feel that the use of iterative approximations gives a good tool for studying networks that do not fit the exact methods mentioned above. This paper deals with an example: an iterative method to approximate steady-state quantities in a closed queueing network with a special kind of two-phase server. The system to be considered is essentially a Gordon and Newell network, but some of the queues are special. Such a special queue is a single-server FCFS queue whose service falls apart into two independent negative-exponentially distributed phases. The first phase is a preparatory one and can be executed while no customers are present at the queue; the second phase can be executed only if the customer is present. No more than one preparatory phase can be done in advance. This feature arises in a natural way in several examples. For us the impetus to study this type of model came from the analysis of a container terminal for sea-going ships as performed by K.M. van Hee; in this example the cranes are the two-phase servers. With the two-phase servers the network can still be analyzed as a continuous-time Markov process on a finite state space. However, the solution no longer has the product-form property and a direct mean-value approach is not possible. An exact solution of the corresponding set of equilibrium equations is very unattractive from a computational point of view.

Book ChapterDOI
12 Sep 1983
TL;DR: The mechanisms for flow control for queued systems in the simulation package SPIRO are described; they can be regarded as consisting of four discrete levels: control of server networks, control of servers, control of stores, and interaction between tokens.
Abstract: The mechanisms for flow control for queued systems in the simulation package SPIRO are described. These can be regarded as consisting of four discrete levels: control of server networks, control of servers, control of stores, and interaction between tokens. Examples are given for each level.

Journal ArticleDOI
TL;DR: It is shown that the optimal CPU and I/O schedules are each one of the preemptive priority disciplines, depending on certain conditions; the conditions are obtained and Spirn's results are generalized.
Abstract: Optimal schedules in a central server model, consisting of a 'CPU' server and m 'I/O' servers, are obtained for the case of two customers. Each customer has exponential service time distributions with a distinct mean at the CPU server and with the same mean at any I/O server. The performance measure considered is an arbitrarily weighted sum of the CPU utilization for each customer. It is shown that the optimal CPU and I/O schedules are each one of the preemptive priority disciplines, depending on certain conditions; the conditions are also obtained. A proof is given for the conjecture made by Spirn on the model, and Spirn's results are generalized.

Journal ArticleDOI
TL;DR: This work presents some restrictions that ensure that requests are received at a server in the same order that they are sent, and shows how each server can safely service requests in the order of arrival.