
Showing papers on "Server published in 1987"


Journal ArticleDOI
01 Nov 1987
TL;DR: An application of the Butler system known as gypsy servers, which allow network server programs to be run on idle workstations instead of using dedicated server machines, is described.
Abstract: The Butler system is a set of programs running on Andrew workstations at CMU that give users access to idle workstations. Current Andrew users use the system over 300 times per day. This paper describes the implementation of the Butler system and tells of our experience in using it. In addition, it describes an application of the system known as gypsy servers, which allow network server programs to be run on idle workstations instead of using dedicated server machines.

151 citations


Journal ArticleDOI
TL;DR: This result is then applied to solve the optimal server allocation problem in a system of multi-server stations with a fixed buffer capacity, and for a single-station system, the simultaneous optimal allocation of both servers and buffer capacity is studied.
Abstract: Consider a closed queueing network in the sense of Gordon and Newell [Gordon, W. J., G. F. Newell. 1967. Closed queueing networks with exponential servers. Oper. Res. 15 252-267.] with a set of stations. The service rate at each station is an increasing concave function of the number of jobs at that station. Suppose there also exists a station that has c ≥ 1 parallel servers, each of which has a fixed service rate. We show that the throughput of this network is an increasing concave function of c. This result is then applied to solve the optimal server allocation problem in a system of multi-server stations with a fixed buffer capacity for the total number of jobs at each station. For a single-station system, the simultaneous optimal allocation of both servers and buffer capacity is also studied.
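
The allocation result above lends itself to a marginal (greedy) reading: when throughput is increasing and concave in the number of servers, handing the next server to the station with the largest marginal gain is a natural heuristic, and it is provably optimal in the simpler separable case. The sketch below only illustrates that idea with made-up concave throughput curves; the function names and parameters are hypothetical and do not come from the paper.

```python
import math

def greedy_allocate(num_servers, throughput_fns):
    """Allocate num_servers one at a time, each to the station whose (assumed
    increasing, concave) throughput function gains the most from it."""
    alloc = [0] * len(throughput_fns)
    for _ in range(num_servers):
        gains = [f(alloc[i] + 1) - f(alloc[i]) for i, f in enumerate(throughput_fns)]
        best = max(range(len(gains)), key=gains.__getitem__)
        alloc[best] += 1
    return alloc

# Made-up concave throughput curves with diminishing returns per extra server.
fns = [lambda c, r=r: r * (1 - math.exp(-0.5 * c)) for r in (1.0, 2.0, 3.0)]
print(greedy_allocate(6, fns))   # e.g. [1, 2, 3] for these curves
```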

130 citations


Journal ArticleDOI
TL;DR: If implemented on a large scale, server intervention programs have the potential of reducing drunken driving by helping to decrease the exit BACs of bar patrons.
Abstract: The potential of a server intervention program to decrease the likelihood that a bar patron will leave a bar intoxicated was evaluated. Research assistants posing as regular patrons ("pseudopatrons") visited two bars where about half of the servers had received server intervention training. Pseudopatrons set the occasion for server intervention to occur by drinking six alcoholic beverages in two hours. The blood-alcohol concentration (BAC) of the pseudopatrons was measured after they left the bar. Results revealed that trained servers initiated more server interventions than did untrained personnel. Moreover, pseudopatrons served by trained personnel reached substantially lower BACs than those served by untrained servers. These results suggest that, if implemented on a large scale, server intervention programs have the potential of reducing drunken driving by helping to decrease the exit BACs of bar patrons.

121 citations


Proceedings ArticleDOI
01 Aug 1987
TL;DR: A yellow-pages service that maps service names into server addresses and associates a set of attributes with each server is introduced, which can be integrated with the available internet communication protocols to enable clients from throughout the internet to access local servers.
Abstract: We introduce a yellow-pages service that maps service names into server addresses. The service is novel in that it associates a set of attributes with each server. Clients specify the attributes the server should possess when requesting a service and the yellow-pages service determines what servers satisfy the request. In addition to describing the implementation of the yellow-pages service within a local-area network, we show how the service can be integrated with the available internet communication protocols to enable clients from throughout the internet to access local servers.
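
As a rough illustration of the mechanism described above (not the paper's actual implementation or protocol), the sketch below keeps a registry mapping service names to server records with attributes; a lookup returns only the servers whose attributes satisfy the client's request. All class, method, and attribute names are invented for the example.

```python
# Hypothetical sketch: a registry maps a service name to a set of server
# records, each carrying attributes; lookup filters by the attributes the
# client requires.

class YellowPages:
    def __init__(self):
        self._services = {}   # service name -> list of (address, attributes)

    def register(self, service, address, **attributes):
        self._services.setdefault(service, []).append((address, attributes))

    def lookup(self, service, **required):
        """Return addresses of servers offering `service` whose attributes
        match every (key, value) pair the client requires."""
        matches = []
        for address, attrs in self._services.get(service, []):
            if all(attrs.get(k) == v for k, v in required.items()):
                matches.append(address)
        return matches

yp = YellowPages()
yp.register("print", "host-a:515", duplex=True, color=False)
yp.register("print", "host-b:515", duplex=True, color=True)
print(yp.lookup("print", color=True))   # ['host-b:515']
```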

89 citations


Journal ArticleDOI
TL;DR: This paper provides a design framework based on the notion of workstation agent objects representing principal objects in the database, and distinguishes two types of agents: proxies, which forward most messages to the principal objects, and deputies, which can cache state for their principal and act with more autonomy.
Abstract: Object-oriented database servers are beginning to appear on the commercial market in response to a demand by application developers for increased modeling power in database systems. Before these new servers can enhance the productivity of application designers, systems designers must provide simple interfaces to them from both procedural and object-oriented languages. This paper first describes a successful interface between an object server and two procedural languages (C and Pascal). Because C and Pascal do not support the object-oriented paradigm, application designers using these languages must deal with database objects in less than natural ways. Fortunately, workstations supporting object-oriented languages have the potential for interacting with database objects in a much more integrated manner. To integrate these object-oriented workstations with an object server, we provide a design framework based on the notion of workstation agent objects representing principal objects in the database. We distinguish two types of agents: proxies, which forward most messages to the principal objects, and deputies, which can cache state for their principal and act with more autonomy. The interaction of cache, transaction, and message management strategies makes the implementation of deputies a nontrivial problem. The agent metaphor is being used currently to integrate an object server with a Smalltalk-80™ workstation.
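
The proxy/deputy distinction can be made concrete with a small sketch. The following is a hypothetical illustration, not the paper's design: a Proxy forwards every message to its principal object on the server, while a Deputy caches attribute reads and answers repeats locally. The FakeObjectServer, message names, and caching rule are all invented, and the sketch deliberately ignores the cache/transaction/message interactions the paper identifies as the hard part.

```python
class FakeObjectServer:
    """Stand-in for the remote object server; real calls would cross the network."""
    def __init__(self, objects):
        self.objects = objects            # object id -> attribute dictionary

    def invoke(self, object_id, message, *args):
        if message == "get":
            return self.objects[object_id][args[0]]
        raise NotImplementedError(message)

class Proxy:
    """Forwards (almost) every message to its principal object on the server."""
    def __init__(self, server, object_id):
        self.server, self.object_id = server, object_id

    def send(self, message, *args):
        return self.server.invoke(self.object_id, message, *args)

class Deputy(Proxy):
    """Caches state from its principal and answers repeated reads locally."""
    def __init__(self, server, object_id):
        super().__init__(server, object_id)
        self._cache = {}

    def send(self, message, *args):
        if message == "get" and args[0] in self._cache:
            return self._cache[args[0]]               # answered with more autonomy
        result = super().send(message, *args)
        if message == "get":
            self._cache[args[0]] = result
        return result

server = FakeObjectServer({"part-42": {"name": "bracket", "weight": 1.5}})
p = Proxy(server, "part-42")
d = Deputy(server, "part-42")
print(p.send("get", "name"), d.send("get", "weight"), d.send("get", "weight"))
```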

66 citations


Proceedings ArticleDOI
01 Dec 1987
TL;DR: It is argued that a high performance, microprocessor based processing node can support a log server if it uses efficient communication protocols and low latency, non volatile storage to buffer log data.
Abstract: Increased interest in using workstations and small processors for distributed transaction processing raises the question of how to implement the logs needed for transaction recovery. Although logs can be implemented with data written to duplexed disks on each processing node, this paper argues there are advantages if log data is written to multiple log server nodes. A simple analysis of expected logging loads leads to the conclusion that a high performance, microprocessor based processing node can support a log server if it uses efficient communication protocols and low latency, non volatile storage to buffer log data. The buffer is needed to reduce the processing time per log record and to increase throughput to the logging disk. An interface to the log servers using simple, robust, and efficient protocols is presented. Also described are the disk data structures that the log servers use. This paper concludes with a brief discussion of remaining design issues, the status of a prototype implementation, and plans for its completion.
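
A minimal sketch of the buffering idea argued for above, assuming an invented LogServer interface: records are appended to an in-memory stand-in for non-volatile storage and flushed to the log disk in batches, which is what keeps per-record processing cheap and disk writes large and sequential. The paper's actual protocols and disk data structures are not reproduced here.

```python
class LogServer:
    def __init__(self, path, flush_threshold=64 * 1024):
        self.path = path
        self.flush_threshold = flush_threshold
        self.buffer = bytearray()        # stands in for non-volatile RAM
        self.next_lsn = 0                # log sequence number

    def append(self, record: bytes) -> int:
        """Buffer one log record and return its log sequence number."""
        lsn = self.next_lsn
        self.buffer += len(record).to_bytes(4, "big") + record
        self.next_lsn += 1
        if len(self.buffer) >= self.flush_threshold:
            self.flush()
        return lsn

    def flush(self):
        """Force buffered records to the log disk (e.g. at transaction commit)."""
        if self.buffer:
            with open(self.path, "ab") as f:
                f.write(self.buffer)
                f.flush()
            self.buffer.clear()

log = LogServer("txn.log")
for i in range(3):
    log.append(f"update record {i}".encode())
log.flush()
```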

65 citations


Patent
18 Mar 1987
TL;DR: In this article, a method for controlling arrivals to servers is presented, comprising: regulating arrivals by initiating a processor generating arrivals or by re-routing or turning away arrivals; measuring durations of service; computing and updating time statistics, including mean, minimum, maximum expected and standard deviation of duration of service; and estimating time remaining in service for each customer currently in service.
Abstract: A method for controlling arrivals to servers, comprising: regulating arrivals, by initiating a processor generating arrivals or by re-routing or turning away arrivals; measuring durations of service; computing and updating time statistics, including mean, minimum, maximum expected and standard deviation of duration of service thereby estimating expected duration of service; measuring transit times from the point of regulating arrivals to start of service and thereby estimating expected transit times; measuring a ratio of numbers of service initiations to numbers of arrivals at the point of regulating arrivals; recording starting time of each current service; noting present time; comparing present time to starting time, and thereby determining time in progress for each service; comparing time in progress to expected duration of service, and estimating time remaining in service for each customer currently in service; comparing estimated time remaining in service to estimated transit time from the point of regulating arrivals to the start of service, determining a number of currently busy servers expected to be available by the time another arrival could reach them; measuring the number of available servers not currently serving; adding the number of currently available servers and the number of currently busy servers expected to be available and thereby computing total number of servers expected to be available; subtracting the number of arrivals currently queued awaiting service thereby producing a resultant number of servers expected to be available; and controlling a means for regulating arrivals to correspond to the resultant number of servers expected to be available.
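
The counting step at the end of the claim can be illustrated with a small, hypothetical calculation: a busy server counts as "expected to be available" when its estimated remaining service time does not exceed the transit time a newly admitted arrival would need to reach it; idle servers are added and queued arrivals subtracted. The function and parameter names below are invented for the example.

```python
def servers_expected_available(now, busy_starts, expected_duration,
                               transit_time, idle_servers, queued_arrivals):
    expected_free = 0
    for start in busy_starts:                      # one entry per busy server
        in_progress = now - start
        remaining = max(expected_duration - in_progress, 0.0)
        if remaining <= transit_time:
            expected_free += 1
    total = idle_servers + expected_free
    return max(total - queued_arrivals, 0)         # net servers to admit against

# Example: 2 idle servers, 3 busy ones, 1 arrival already queued.
print(servers_expected_available(
    now=100.0, busy_starts=[92.0, 97.0, 99.0],
    expected_duration=5.0, transit_time=2.0,
    idle_servers=2, queued_arrivals=1))            # -> 3
```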

60 citations


Journal ArticleDOI
TL;DR: This paper proposes a distributed system architecture in which communication follows a programming paradigm, and a programming language provides remote service interfaces for the heterogeneous distributed system environment.
Abstract: The 1980s have witnessed the emergence of a new architecture for computing based on networks of personal computer workstations. The performance requirements of such systems of workstations place a strain on traditional approaches to network architecture. The integration of diverse systems into this environment introduces functional compatibility issues that are not present in homogeneous networks. Effective prescriptions for functional compatibility, therefore, must go beyond the communication paradigms used in present distributed systems, such as remote procedure calls. This paper proposes a distributed system architecture in which communication follows a programming paradigm. In this architecture a programming language provides remote service interfaces for the heterogeneous distributed system environment. This language is a flexible and efficient medium for implementing service function protocols. In essence, clients and servers communicate by programming one another.

54 citations


Journal ArticleDOI
Rakesh Agrawal1, A.K. Ezzat
TL;DR: This paper presents the design and implementation of a scheme for augmenting the UNIX® operating system with the location independent remote execution capability, which allows processes to be offloaded to the compute servers and preserves the execution environment of these processes as if they were still executing locally at the originating machine.
Abstract: We consider a computing environment consisting of a network of autonomous, yet cooperating personal computer workstations and shared servers. Computing cycles in such an environment can be shared by creating a pool of compute servers in the network that may be used by the workstations to supplement their computing needs. Some processors may be permanently designated to be the compute servers. In addition, through an advertisement mechanism, any workstation may make itself temporarily available for a specific duration of time to be used as a compute server. In this paper, we present the design and implementation of a scheme for augmenting the UNIX® operating system with the location independent remote execution capability. This capability allows processes to be offloaded to the compute servers and preserves the execution environment of these processes as if they were still executing locally at the originating machine. Our model provides execution location independence of processes by preserving the process view of the file system, parent-child relationships, process groups, and process signaling across machine boundaries in a transparent way. We also present our scheme that allows processors to advertise themselves as available to some or all nodes in the network and to withdraw as a compute server in a distributed manner. The scheme is robust in the presence of node failures.

50 citations


Proceedings ArticleDOI
01 Dec 1987
TL;DR: This work describes a prototype that uses objects to represent, query, display, and edit information in the Telesophy system, a multimedia system intended to provide transparent access to all of a community's online information.
Abstract: The Telesophy system is intended to provide transparent access to all of a community's online information. The scale of the system requires that it be distributed across many machines via a network; the multiple types and formats of the information require that it be a multimedia system. We describe a prototype that uses objects to represent, query, display, and edit information. A two-level storage system is used to store the objects on multiple servers; queries are processed by indexing servers layered on top of the storage system. New media types and new indexing schemes can be added simply by defining new classes within the existing framework.

49 citations


Journal ArticleDOI
Edward G. Coffman1, E. N. Gilbert1
TL;DR: Theoretical results show, in most cases, that the polling server has a high probability of moving toward the nearest customer, i.e. moving as a greedy server would.
Abstract: A single server moves with speed υ on a line interval (or a circle) of length (circumference) L. Customers, requiring service of constant duration b, arrive on the interval (or circle) at random at mean rate λ customers per unit length per unit time. A customer's mean wait for service depends partly on the rules governing the server's motion. We compare two different servers: the polling server and the greedy server. Without knowing the locations of waiting customers, a polling server scans endlessly back and forth across the interval (or clockwise around the circle), stopping only where it encounters a waiting customer. Knowing where customers are waiting, a greedy server always travels toward the current nearest one. Except for certain extreme values of υ, L, b, and λ, the polling and greedy servers are roughly equally effective. Indeed, the simpler polling server is often the better. Theoretical results show, in most cases, that the polling server has a high probability of moving toward the nearest customer, i.e. moving as a greedy server would. The greedy server is difficult to analyze, but was simulated on a computer.
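
Since the greedy server was studied by simulation, a rough time-stepped sketch of such a simulation is shown below (greedy case only). It is an approximation with invented parameter values chosen to keep the system stable, not a reproduction of the authors' simulator: arrivals are Poisson in time and uniform on a circle of circumference L, each service lasts b, and whenever the server is free it heads for the currently nearest waiting customer at speed v.

```python
import random

def circ_dist(a, b, L):
    d = abs(a - b) % L
    return min(d, L - d)

def simulate_greedy(lam=0.02, L=10.0, v=1.0, b=1.0, horizon=5000.0, dt=0.01):
    random.seed(1)
    t, pos = 0.0, 0.0
    serving_until = 0.0
    waiting = []                     # (arrival_time, position) of waiting customers
    waits = []
    while t < horizon:
        # Approximate Poisson arrivals of total rate lam * L, uniform on the circle.
        if random.random() < lam * L * dt:
            waiting.append((t, random.uniform(0.0, L)))
        if t >= serving_until and waiting:
            # Head for the nearest waiting customer (re-chosen every step).
            i = min(range(len(waiting)),
                    key=lambda j: circ_dist(pos, waiting[j][1], L))
            arr_t, target = waiting[i]
            if circ_dist(pos, target, L) <= v * dt:
                pos = target                      # reached: start service
                waits.append(t - arr_t)
                serving_until = t + b
                waiting.pop(i)
            else:                                 # move one step toward the target
                step = v * dt
                if (target - pos) % L <= L / 2:
                    pos = (pos + step) % L
                else:
                    pos = (pos - step) % L
        t += dt
    return sum(waits) / len(waits)

print("mean wait (greedy):", round(simulate_greedy(), 2))
```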

08 Aug 1987
TL;DR: Algorithms are presented for implementing a fault-tolerant time service that meets both requirements and the resynchronization protocol must allow a certain number of faulty links or faulty processors since network protocols ought to work in the presence of partial failures.
Abstract: When I joined DEC in 1985, they were the world leader in networking. Using their VMS operating system, I could type a simple copy command to a computer in California, specifying a file and machine name, to copy a file from a computer in Massachusetts. Even today, I can’t copy a file from Massachusetts to California nearly as easily with Unix or Windows. The people responsible for DEC’s network systems were the Network and Communications group (NAC). Around 1987, NAC asked for my help in designing a network time service. I decided that there were two somewhat conflicting requirements for a time service: delivering the correct time, and keeping the clocks on different computers closely synchronized. This paper describes the algorithms I devised for doing both. I withdrew the paper because Tim Mann observed that the properties I proved about the algorithms were weaker than the ones needed to make them interesting. The major problem is that the algorithms were designed to guarantee both a bound epsilon on the synchronization of each clock with a source of correct time and an independent bound delta on the synchronization between any two clocks that could be made much smaller than epsilon. Mann observed that the bound I proved on delta was not the strong one independent of epsilon that I had intended to prove. We believe that the algorithms do satisfy the necessary stronger properties, and Mann and I began rewriting the paper with the stronger results. But that paper is still only partly written and is unlikely ever to see the light of day.

Journal ArticleDOI
D. J. Houck1
TL;DR: This paper shows, using a theoretical upper bound, that an optimal policy would produce delays that are almost identical to what would result from combining the two groups of servers, and shows that SDR performs nearly optimally in every case considered.
Abstract: This paper studies a queueing system with two groups of servers, each with a separate queue, and with arriving customers routed irrevocably to one of the two queues. One natural policy for routing arriving customers is to send them to the queue with the shortest expected delay. Although this shortest-delay-routing (SDR) policy is known to be optimal if each server group has one server and the service time distribution has nondecreasing failure rate, little is known about the general multiserver case, even with exponential service times. In this paper we show, using a theoretical upper bound, that an optimal policy would produce delays that are almost identical to what would result from combining the two groups. In addition, our simulation results show that SDR performs nearly optimally in every case considered.
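
The SDR rule itself is easy to state in code. The sketch below is a hypothetical illustration for two groups of exponential servers: the expected delay a new arrival would see is estimated from the number of customers present, and the arrival is routed to the group with the smaller estimate. It is not the paper's bound or simulation model.

```python
def expected_delay(n_in_system, servers, mu):
    """Expected wait before service starts for a new arrival that finds
    n_in_system customers at a group of `servers` exponential(mu) servers."""
    if n_in_system < servers:
        return 0.0                       # a server is free immediately
    ahead = n_in_system - servers + 1    # departures needed before service begins
    return ahead / (servers * mu)        # each departure takes 1/(servers*mu) on average

def route(n1, n2, c1, c2, mu1, mu2):
    d1 = expected_delay(n1, c1, mu1)
    d2 = expected_delay(n2, c2, mu2)
    return 1 if d1 <= d2 else 2

# Group 1: 2 fast servers, 5 customers present; group 2: 4 slow servers, 6 present.
print(route(n1=5, n2=6, c1=2, c2=4, mu1=2.0, mu2=1.0))   # -> 2
```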

Proceedings Article
01 Sep 1987
TL;DR: The AIM-P database management system is based on NF2 (Non First Normal Form) relations and follows the strategy of a multi-level cooperation/communication between workstation and server database system in order to reduce redundant work at both sides as much as possible.
Abstract: In the CAD, CAM, and Robotics environment, the on-line construction and manipulation of data objects is very often done at dedicated workstations rather than at host systems. As the storage space of workstations is usually not that large, and as large designs are also not performed by a single designer but by a group of designers, in general one would like to use host database systems as central servers to store, to retrieve, and to "communicate" data objects. Current database management systems, however, have mainly been designed for business administration purposes where much simpler structured data objects occur. But even if the server database system offers adequate complex object support, the question remains how workstation and server database system should work together. That is, how the changes performed at the workstation should be communicated back to the server such that a new version of an object can be created at the host site in an efficient and storage-saving way. In this paper the approach implemented in the Advanced Information Management Prototype (AIM-P) at the IBM Heidelberg Scientific Center is described. The AIM-P database management system is based on NF2 (Non First Normal Form) relations and follows the strategy of a multi-level cooperation/communication between workstation and server database system in order to reduce redundant work at both sides as much as possible.


Proceedings ArticleDOI
01 May 1987
TL;DR: These experiments show that the Chandy-Misra approach to distributed simulation is not a viable approach to parallel simulation of queueing network models, and there is little prospect that its overheads can be reduced to acceptable levels.
Abstract: With traditional event list techniques, evaluating a detailed discrete event simulation model can often require hours or even days of computation time. Parallel simulation mimics the interacting servers and queues of a real system by assigning each simulated entity to a processor. By eliminating the event list and maintaining only sufficient synchronization to insure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared memory experiments is presented using the Chandy-Misra distributed simulation algorithm to simulate networks of queues. Parameters include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.

Journal ArticleDOI
TL;DR: It is proved that the optimal policy in a system with N identical servers, operating in parallel, is of the following simple form: Have all N servers serve ordinary customers until the queue of the locking customers builds up to some threshold number k*.
Abstract: We consider a system with N identical servers, operating in parallel. There are two types of customers, ordinary and locking, having exponential service times with rates μ_O and μ_L and completion rewards r_O and r_L (Problem P1) or completion reward r_O and holding cost h_L (Problem P2), respectively. An ordinary customer needs a single server to be processed, and there are an infinite number of ordinary customers available in the system. Locking customers arrive in a Poisson process with rate λ and require processing by all N servers simultaneously. The servers are allocated in a nonpreemptive manner to both classes. The scheduling decision in such a system consists of determining how the N servers should be allocated, so that the discounted reward, as well as the long run average reward in P1 and P2, is maximized. We prove that the optimal policy is of the following simple form: Have all N servers serve ordinary customers until the queue of the locking customers builds up to some threshold number k*. Then empty the ordinary customers from the N servers as fast as possible by keeping each server idle after it completes the ordinary customer currently in service. Finally, serve all available locking customers, and continue by repeating the above procedure.
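
The threshold policy can be phrased as a small decision rule. The sketch below merely restates the policy's three phases in code, with invented function and variable names; the threshold k* itself would have to be computed from the model parameters (λ, μ_O, μ_L, and the rewards), which this sketch does not attempt.

```python
def next_action(busy_ordinary, locking_waiting, draining, k_star, N):
    """Return (description of the action to take, new value of `draining`)."""
    if not draining:
        if locking_waiting < k_star:
            return "assign ordinary customers to every free server", False
        return "threshold reached: stop assigning ordinary work and drain", True
    if busy_ordinary > 0:
        return "keep finished servers idle until the last ordinary job ends", True
    return f"serve all waiting locking customers with all {N} servers", False

# Example: 3 servers still busy with ordinary work, 2 locking customers waiting.
action, draining = next_action(busy_ordinary=3, locking_waiting=2,
                               draining=False, k_star=2, N=4)
print(action)   # threshold k* = 2 has been reached, so draining begins
```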

Journal ArticleDOI
TL;DR: A necessary and sufficient condition for stability of this locking system can be expressed in terms of the mean of the maximum of N random variables, each represent...
Abstract: Resource sharing systems, such as database management systems, utilize various types of locking to maintain consistency. Most locking mechanisms cause some resources to remain idle at certain times when there is work for them to do, inducing a decrease in the system’s capacity. This decrease of capacity is reflected in the stability condition for the locking system as compared to the system without locking.We consider the following locking system. There are N servers operating in parallel and two types of incoming customers. The first type corresponds to simple customers, i.e., customers with no locking requirements, and the second corresponds to customers that have to be processed simultaneously by all N servers. When a server is ready to serve such a customer, it has to wait until all servers are ready to serve that same customer.We determine a necessary and sufficient condition for stability of this system, which can be expressed in terms of the mean of the maximum of N random variables, each represent...
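
The abstract is cut off before the full condition, but the quantity it names, the mean of the maximum of N random variables, is easy to estimate. The sketch below is a hypothetical Monte Carlo estimate for the special case of N i.i.d. exponential service times, where the closed form (1/μ)(1 + 1/2 + ... + 1/N) is available as a check; it is not the paper's stability condition itself.

```python
import random

def mean_of_max(N, mu, samples=100_000, seed=0):
    """Monte Carlo estimate of E[max of N i.i.d. exponential(mu) variables]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        total += max(rng.expovariate(mu) for _ in range(N))
    return total / samples

N, mu = 4, 1.0
print("simulated  :", round(mean_of_max(N, mu), 3))
# For exponentials the exact value is (1/mu) * (1 + 1/2 + ... + 1/N).
print("closed form:", round(sum(1.0 / (k * mu) for k in range(1, N + 1)), 3))
```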

Journal ArticleDOI
David D. Yao1
TL;DR: This work represents the arrangement of servers by a permutation vector of their service rates, and develops a partial order on these vectors that can reduce blocked traffic, thereby improving system performance.
Abstract: We consider a service system whose servers may work at different rates (systems with "heterogeneous servers"). Jobs arriving at the system access the servers in accordance with an "entry order" that relates to the arrangement of the servers. When all servers are occupied, arriving jobs are blocked and lost. This model has applications in the study of conveyors and other circulation systems. In this paper we wish to compare various server arrangements in such systems. We represent the arrangement of servers by a permutation vector of their service rates, and develop a partial order on these vectors. By moving in the direction pointed out by this partial order, one can reduce blocked traffic, thereby improving system performance. Our results may help system designers compare directly the performance of different server arrangements.

Proceedings ArticleDOI
27 Apr 1987
TL;DR: A visa scheme for implementing access control in Inter-Organization Network (ION) gateways is described, allowing an organization to modify and trust only those internal systems that require ION access; all other internal systems cannot communicate with the outside.
Abstract: In this paper we describe a visa scheme for implementing access control in Inter-Organization Network (ION) gateways. The purpose of the scheme is to allow an organization to modify and trust only those internal systems that require ION access; all other internal systems cannot communicate with the outside. Control is distributed among the ION participants so that each may make its own design tradeoffs between performance and trust.


Journal ArticleDOI
01 Nov 1987
TL;DR: In QuickSilver, recovery is based on the database notion of atomic transactions, which are made available as a system service to be used by other, higher-level servers, which allows meeting all the above design goals.
Abstract: One price of extensibility and distribution, as implemented in QuickSilver, is a more complicated set of failure modes, and the consequent necessity of dealing with them. In traditional operating systems, services (e.g., file, display) are intrinsic pieces of the kernel. Process state is maintained in kernel tables, and the kernel contains explicit cleanup code (e.g., to close files, reclaim memory, and get rid of process images after hardware or software failures). QuickSilver, however, is structured according to the client-server model, and as in many systems of its type, system services are implemented by user-level processes that maintain a substantial amount of client process state. Examples of this state are the open files, screen windows, address space, etc., belonging to a process. Failure resilience in such an environment requires that clients and servers be aware of problems involving each other. Examples of the way one would like the system to behave include having files closed and windows removed from the screen when a client terminates, and having clients see bad return codes (rather than hanging) when a file server crashes. This motivates a number of design goals:
- Properly written programs (especially servers) should be resilient to external process and machine failures, and should be able to recover all resources associated with failed entities.
- Server processes should contain their own recovery code. The kernel should not make any distinction between system service processes and normal application processes.
- To avoid the proliferation of ad-hoc recovery mechanisms, there should be a uniform system-wide architecture for recovery management.
- A client may invoke several independent servers to perform a set of logically related activities (a unit of work) that must execute atomically in the presence of failures, that is, either all the related activities should occur or none of them should. The recovery mechanism should support this.
In QuickSilver, recovery is based on the database notion of atomic transactions, which are made available as a system service to be used by other, higher-level servers. This allows meeting all the above design goals. Software portability is important in the QuickSilver environment, dictating that transaction-based recovery be accessible to conventional programming languages rather than a special-purpose one such as Argus [Liskov84]. To accommodate servers with diverse recovery demands, the low-level primitives of commit coordination and log recovery are exposed directly rather than building recovery on top of a stable-storage mechanism such as in CPR [Attanasio87] or recoverable objects such as those in Camelot [Spector87] or Clouds [Allchin&McKendry83].

Journal ArticleDOI
TL;DR: A partial ordering is established among entry orders, and is shown to result in some quite strong orderings among the associated stochastic processes that reflect the congestion and the service characteristics of the system.
Abstract: We study a queueing system with m exponential servers with distinct service rates. Jobs arrive at the system following an arbitrary point process. Arrived jobs receive service at the first unoccupied server (if any) according to an entry order π, which is a permutation of the integers 1, 2, ..., m. The system has a finite buffer capacity. When the buffer limit is reached, arrivals will be blocked. Blocked jobs will either be lost or come back as new arrivals after a random travel time. We are concerned with the dynamic stochastic behavior of the system under different entry orders. A partial ordering is established among entry orders, and is shown to result in some quite strong orderings among the associated stochastic processes that reflect the congestion and the service characteristics of the system. The results developed here complement existing comparison results for queues with homogeneous servers, and can be applied to aid the design of conveyor and communication systems.

Book ChapterDOI
01 Jan 1987
TL;DR: This examination of the Apollo DOMAIN system will deal primarily with the design and implementation of the DOMAIN distributed file system.
Abstract: The Apollo DOMAIN® system is a fully operational distributed computing environment for a network of personal workstations and network servers. When it was first developed in 1980, its distributed system focus was on a file system that provided users of autonomous workstations with the same ease of file sharing they enjoyed with central time-sharing systems. While the DOMAIN system has since been extended to provide a stronger base for additional distributed system facilities, this examination of the Apollo DOMAIN system will deal primarily with the design and implementation of the DOMAIN distributed file system.

Journal ArticleDOI
TL;DR: The model of Ibe and Maruyama is a special case of the one considered by the present author, who gives an exact and easily computable solution for the case of arbitrary n, one that remains valid under less restrictive assumptions about the arrival process and the queue discipline.

Journal ArticleDOI
Douglas B. Terry1, Dan Swinehart1
01 Nov 1987
TL;DR: The major technical contributions presented in this paper involve the use of simple databases to describe the results of editing operations such that existing voice passages need not be moved, copied, or decrypted, and provide a modified style of reference counting that allows the automatic reclamation of obsolete voice.
Abstract: The Etherphone™ system was developed at Xerox PARC to explore methods of integrating voice into existing distributed personal computing environments. An important component of the Etherphone system, the voice manager, provides operations for recording, playing, editing, and otherwise manipulating digitized voice based on an abstraction that we call voice ropes. It was designed to allow:
- unrestricted use of voice in client applications,
- sharing among various clients,
- editing of voice by programs,
- integration of diverse workstations into the system,
- security at least as good as that of conventional file servers, and
- automatic reclamation of the storage occupied by unneeded voice.
As with text, we want the ability to incorporate voice easily into electronic mail messages, voice-annotated documents, user interfaces, and other interactive applications. Because the characteristics of voice differ greatly from those of text, special mechanisms are required for managing and sharing stored voice. The voice manager reduces the work generally associated with building voice applications by providing a convenient set of application-independent abstractions for stored voice.
Clients view voice ropes as immutable sequences of voice samples referenced by unique identifiers. In actuality, a voice rope consists of a list of intervals within voice files that are stored on a special voice file server. A database stores the many-to-many relationships that exist between voice ropes and files. Maintaining voice on a publicly accessible server facilitates sharing among various clients.
These facilities for managing stored voice in the Etherphone system were designed with the intent of moving voice data as little as possible. Once recorded in the voice file server, voice is never copied until a workstation sends a play request; at this point the voice is transmitted directly to an Etherphone, a microprocessor-based telephone instrument. In particular, although workstations initiate most of the operations in the Etherphone system, there is little reason for them to receive the actual voice data since they have no way of playing it.
Adding such voice facilities to a diverse and complex software base presents challenging problems to the systems builder since much of the existing workstation and server software cannot be changed or extended. Manipulating stored voice solely by textual references, besides allowing efficient sharing and resource management, has made it easy to integrate voice into documents. The only requirements placed on a workstation in order to make use of the voice services are that it have an associated Etherphone and an RPC implementation.
The Etherphone system uses secure RPC for all control functions and DES encryption for transmitted voice. These ensure the privacy of voice communication, which is important even in a research environment, although the network is inherently vulnerable to interception of information. Storing the voice in its encrypted form protects the voice on the server and also means that the voice need not be reencrypted when played. All in all, the voice manager provides better security than most conventional file servers.
The performance of operations for editing and managing recorded voice must be compatible with human response times: sub-second response at a peak rate of several operations per second is more than adequate. Performance measurements confirm that the voice manager easily meets these requirements.
In conclusion, the major technical contributions presented in this paper involve the use of simple databases to:
- describe the results of editing operations such that existing voice passages need not be moved, copied, or decrypted, and
- provide a modified style of reference counting that allows the automatic reclamation of obsolete voice.
Approximately 50 Etherphones are in daily use in the Computer Science Laboratory at Xerox PARC. We have had a voice mail system running since 1984 and a prototype voice editor available for demonstrations and experimental use since the spring of 1986.
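
A toy sketch of the voice-rope bookkeeping described above may help: a rope is an immutable list of intervals into voice files, editing composes new interval lists instead of copying samples, and per-file reference counts allow obsolete voice to be reclaimed. Class names, the interval representation, and the counting rule are all invented for illustration; the paper's databases and its "modified style of reference counting" are more elaborate.

```python
import itertools

class VoiceStore:
    def __init__(self):
        self.ropes = {}            # rope id -> list of (file, start, length) intervals
        self.refcount = {}         # voice file -> number of referencing intervals
        self._ids = itertools.count(1)

    def _new_rope(self, intervals):
        rid = next(self._ids)
        self.ropes[rid] = list(intervals)
        for f, _, _ in intervals:
            self.refcount[f] = self.refcount.get(f, 0) + 1
        return rid

    def record(self, voice_file, length):
        """A freshly recorded rope covers one whole voice file."""
        return self._new_rope([(voice_file, 0, length)])

    def concatenate(self, *rope_ids):
        """Editing never moves samples: it just references the existing intervals."""
        return self._new_rope([iv for rid in rope_ids for iv in self.ropes[rid]])

    def delete(self, rope_id):
        for f, _, _ in self.ropes.pop(rope_id):
            self.refcount[f] -= 1
            if self.refcount[f] == 0:
                print("reclaim", f)        # obsolete voice can be garbage-collected

store = VoiceStore()
a = store.record("vf-001", 800)
b = store.record("vf-002", 400)
c = store.concatenate(a, b)                # shares intervals with a and b
store.delete(a); store.delete(b); store.delete(c)   # last delete frees both files
```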

ReportDOI
01 Dec 1987
TL;DR: The authors derive an analytic model of the system and present experimental results that validate this model, and present their observations on using MultiRPC to contact up to 100 servers in parallel.
Abstract: MultiRPC is an invocation mechanism that enables a client to access multiple servers in a single remote procedure call. Parallelism is obtained from concurrency of processing on servers and from the overlapping of retransmissions and timeouts. Each of the parallel calls retains the semantics and functionality of the underlying remote procedure call mechanism. These include secure, authenticated communication and the use of application-specific side effects. The underlying communication medium does not have to support multicast or broadcast transmission. This paper describes the design and evolution of MultiRPC, focusing on the issues of runtime efficiency, versatility, and ease of use. The authors derive an analytic model of the system and present experimental results that validate this model. They also present their observations on using MultiRPC to contact up to 100 servers in parallel. Keywords: UNIX operating system; C programming language.
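
The fan-out idea, though not the MultiRPC interface itself, can be sketched as follows: the same call is issued to every server concurrently and each server's reply or failure is collected separately, so elapsed time is governed by the slowest server rather than the sum over all servers. The function names and the thread-based stand-in for RPC below are assumptions made for the example.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def multi_call(servers, procedure, *args):
    """Invoke `procedure` on every server in parallel; return per-server results."""
    def one_call(server):
        try:
            return server, ("ok", procedure(server, *args))
        except Exception as exc:                     # one failure must not abort the rest
            return server, ("failed", exc)

    with ThreadPoolExecutor(max_workers=len(servers)) as pool:
        return dict(pool.map(one_call, servers))

# Toy stand-in for a remote procedure: "fetch the time" from each named server.
def get_time(server):
    time.sleep(0.01)                                 # pretend network + server work
    return f"{server}: {time.time():.0f}"

print(multi_call(["srv1", "srv2", "srv3"], get_time))
```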

Journal ArticleDOI
TL;DR: In this paper, the invariant probability vector of the underlying finite state Quasi-Birth-and-Death process is shown to admit a matrix-geometric representation for all values of the arrival parameter λ.
Abstract: This paper considers a two-node tandem queueing system with phase-type servers and Bernoulli arrivals where the servers operate in discrete-time and are subject to blocking and failures. The invariant probability vector of the underlying finite state Quasi-Birth-and-Death process is shown to admit a matrix-geometric representation for all values of the arrival parameter λ. The corresponding rate matrix is given explicitly in terms of the model parameters and the resulting closed-form expression provides the basis for an efficient calculation of the invariant probability vector. The cases λ < 1 and λ = 1 are studied separately and the irreducibility of the underlying Markov chain is investigated in each situation. The continuous-time formulation is briefly discussed and only major differences with the discrete-time results are pointed out. Some numerical examples are also provided.

Journal ArticleDOI
TL;DR: In this paper, the authors considered a queueing system with N servers and two types of customers: simple customers which require a service from one of the servers and locking customers which have to be served simultaneously by all N servers.

Journal ArticleDOI
TL;DR: An approach to access control in ION's is summarized, based on nondiscretionary control, that allows interconnecting organizations to combine gateway, network, and system-level mechanisms to enforce cross-boundary control over invocation and information flow while minimizing interference with internal operations.
Abstract: This paper analyzes the technical implications of interconnecting networks across organization boundaries. Such Interorganizational Networks (ION's) are used increasingly to support exchange of CAD/CAM data between manufacturers and subcontractors, software distribution from vendors to users, customer input to suppliers' order-entry systems, and the shared use of expensive computational resources by research laboratories, as examples. We begin by demonstrating that the requirements of interorganization connections are not satisfied by traditional network design criteria of connectivity and transparency. A primary high-level requirement is access control, and therefore, participating organizations must be able to limit connectivity and make network boundaries visible. We summarize an approach to access control in ION's, based on nondiscretionary control, that allows interconnecting organizations to combine gateway, network, and system-level mechanisms to enforce cross-boundary control over invocation and information flow while minimizing interference with internal operations [6], [4]. The focus of this paper is on the underlying interconnection protocols that are needed to support these access control mechanisms. We describe in detail a particular protocol, called a visa scheme [7]. The visa scheme uses access control servers to authorize a session request and visas to authenticate that successive packets belong to the authorized connection. Control is distributed among the ION participants and each may make its own design tradeoffs between performance and trust. In order to support interorganization communication, two (or more) organizations must be able to communicate with one another's access control servers and their respective packet-level gateways and nodes (source/destination) must implement the visa scheme. The security of the proposed mechanism varies according to the security of an organization's components (access control server, gateway, and select hosts) and the encryption function used. The visa scheme's purpose is to allow an organization to modify and trust only those internal systems that require ION access; all other internal systems are inaccessible from and to the ION gateway. We conclude by comparing and contrasting the visa approach to the use of higher level gateways.
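
The abstract does not spell out what a visa contains, so the following is only a hypothetical token-checking sketch of the general idea: an access control server authorizes a session and issues a per-connection visa (here, simply an HMAC over a connection identifier), and the gateway admits a packet only if it carries a visa that verifies for its connection. All names, the key handling, and the packet format are invented for the example.

```python
import hmac, hashlib, os

class AccessControlServer:
    """Authorizes sessions and issues visas; shares a secret with the gateway."""
    def __init__(self, secret):
        self.secret = secret

    def authorize(self, connection_id, allowed):
        if not allowed(connection_id):
            return None
        # The visa binds the authorization to this particular connection.
        return hmac.new(self.secret, connection_id.encode(), hashlib.sha256).digest()

class Gateway:
    """Checks that successive packets carry a visa valid for their connection."""
    def __init__(self, secret):
        self.secret = secret

    def admit(self, connection_id, visa):
        expected = hmac.new(self.secret, connection_id.encode(), hashlib.sha256).digest()
        return visa is not None and hmac.compare_digest(visa, expected)

secret = os.urandom(32)
acs, gw = AccessControlServer(secret), Gateway(secret)
policy = lambda conn: conn.startswith("cad-exchange:")   # toy organizational policy

visa = acs.authorize("cad-exchange:hostA->hostB", policy)
print(gw.admit("cad-exchange:hostA->hostB", visa))       # True: authorized connection
print(gw.admit("payroll:hostA->hostC", visa))            # False: visa does not match
```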