
Showing papers on "Server published in 1991"


Journal ArticleDOI
TL;DR: The NTP synchronization system is described, along with performance data which show that timekeeping accuracy throughout most portions of the Internet can be ordinarily maintained to within a few milliseconds, even in cases of failure or disruption of clocks, time servers, or networks.
Abstract: The network time protocol (NTP), which is designed to distribute time information in a large, diverse system, is described. It uses a symmetric architecture in which a distributed subnet of time servers operating in a self-organizing, hierarchical configuration synchronizes local clocks within the subnet and to national time standards via wire, radio, or calibrated atomic clock. The servers can also redistribute time information within a network via local routing algorithms and time daemons. The NTP synchronization system, which has been in regular operation in the Internet for the last several years, is described, along with performance data which show that timekeeping accuracy throughout most portions of the Internet can be ordinarily maintained to within a few milliseconds, even in cases of failure or disruption of clocks, time servers, or networks.

2,114 citations
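
NTP's synchronization rests on an offset/delay computation over the four timestamps exchanged between client and server. A minimal sketch of that standard calculation (the timestamp values below are illustrative):

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Standard NTP math: t1 = client transmit, t2 = server receive,
    t3 = server transmit, t4 = client receive (all in seconds)."""
    offset = ((t2 - t1) + (t3 - t4)) / 2   # estimated client clock error
    delay = (t4 - t1) - (t3 - t2)          # round trip, minus server hold time
    return offset, delay

# Client clock 5 ms behind the server, 40 ms symmetric round trip:
offset, delay = ntp_offset_delay(100.000, 100.025, 100.026, 100.041)
```

With symmetric network paths the offset estimate is exact; path asymmetry contributes an error bounded by half the measured delay, which is why the paper's few-millisecond figures hold across most of the Internet.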


Patent
Drew Major, Kyle Powell, Dale Neibaur
09 Aug 1991
TL;DR: In this paper, the authors propose a software solution for providing a fault-tolerant backup system, such that if there is a failure of a primary processing system, a replicated system can take over without interruption.
Abstract: A method and apparatus for providing a fault-tolerant backup system such that if there is a failure of a primary processing system, a replicated system can take over without interruption. The invention provides a software solution for providing a backup system. Two servers are provided, a primary and a secondary server. The two servers are connected via a communications channel. The servers have an operating system associated with them. The present invention divides this operating system into two "engines." An I/O engine is responsible for handling and receiving all data and asynchronous events on the system. The I/O engine controls and interfaces with physical devices and device drivers. The operating system (OS) engine is used to operate on data received from the I/O engine. All events or data which can change the state of the operating system are channeled through the I/O engine and converted to a message format. The I/O engines on the two servers coordinate with each other and provide the same sequence of messages to the OS engines. The messages are provided to a message queue accessed by the OS engine. Therefore, regardless of the timing of events (i.e., asynchronous events), the OS engine receives all events sequentially through a continuous sequential stream of input data. As a result, the OS engine is a finite state automaton with a one-dimensional input "view" of the rest of the system, and the state of the OS engines on both primary and secondary servers will converge.

374 citations
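
The patent's core idea, serializing asynchronous events into one message stream so that two deterministic "OS engines" converge to the same state, can be sketched as follows (class and function names are illustrative, not from the patent):

```python
import queue

class OSEngine:
    """Toy deterministic state machine: its state is a pure function of the
    sequence of messages it consumes."""
    def __init__(self):
        self.state = 0
        self.inbox = queue.Queue()

    def process(self):
        while not self.inbox.empty():
            op, arg = self.inbox.get()
            if op == "add":
                self.state += arg
            elif op == "mul":
                self.state *= arg

def io_engine_broadcast(events, engines):
    """I/O-engine role: impose a single ordering on asynchronous events and
    feed the identical message sequence to every OS engine's queue."""
    for _seq, msg in sorted(events):       # any agreed-upon ordering rule works
        for engine in engines:
            engine.inbox.put(msg)

primary, secondary = OSEngine(), OSEngine()
# Events arrive in arbitrary order; sequence numbers fix the ordering.
events = [(2, ("mul", 3)), (1, ("add", 5)), (3, ("add", 1))]
io_engine_broadcast(events, [primary, secondary])
primary.process()
secondary.process()
assert primary.state == secondary.state == 16   # (0 + 5) * 3 + 1
```

Because the OS engine sees only the ordered queue, replica convergence does not depend on when events physically arrived, which is what lets the secondary take over without interruption.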


Patent
07 Jun 1991
TL;DR: In this article, an integrated network security system is provided which permits log-on to a normally locked client on the network in response to at least one coded non-public input to the client by a user.
Abstract: An integrated network security system is provided which permits log-on to a normally locked client on the network in response to at least one coded non-public input to the client by a user. At least a selected portion of the coded input is encrypted and sent to a network server where the user is authenticated. After authentication, the server preferably returns a decryption key, an encryption key for future use and any critical files previously stored at the server to the client. The decryption key is utilized to decrypt any material at the client which was encrypted when the client was locked, including any material sent from the server, thereby unlocking the client. The decryption key may be combined with untransmitted portions of the original coded input in a variety of ways to generate an encryption key for the next time the terminal is to be locked. When one of a variety of client locking conditions occurs, the previously generated encryption key is utilized to encrypt at least selected critical material at the client. Critical directories or the like in encrypted form may be sent to the server, and a message is sent to the server that the client is locked, which message is utilized by the server to inhibit the client from further access to at least selected resources on the network.

313 citations


Journal ArticleDOI
TL;DR: Acme, a network server for digital audio and video I/O, is presented and the nature of logical time systems is discussed, some examples are given, and their implementation is addressed.
Abstract: Acme, a network server for digital audio and video I/O, is presented. Acme lets users specify their synchronization requirements through an abstraction called a logical time system. The nature of logical time systems is discussed, some examples are given, and their implementation is addressed.

265 citations


Patent
08 Jul 1991
TL;DR: In distributed heterogeneous data processing networks, dispatcher and control server software components execute code of a single application or of many portions of the code of one or more applications in response to a method object received from a client application as discussed by the authors.
Abstract: In distributed heterogeneous data processing networks, dispatcher and control server software components execute the code of a single application or of many portions of the code of one or more applications in response to a method object received from a client application. The method object includes a reference to the code to be executed.

259 citations


Proceedings ArticleDOI
20 May 1991
TL;DR: The authors describe how some functions of distributed systems can be designed to tolerate intrusions, and a prototype of the persistent file server presented has been successfully developed and implemented as part of the Delta-4 project of the European ESPRIT program.
Abstract: An intrusion-tolerant distributed system is a system which is designed so that any intrusion into a part of the system will not endanger confidentiality, integrity and availability. This approach is suitable for distributed systems, because distribution enables isolation of elements so that an intrusion gives physical access to only a part of the system. In particular, the intrusion-tolerant authentication and authorization servers enable a consistent security policy to be implemented on a set of heterogeneous, untrusted sites, administered by untrusted (but nonconspiring) people. The authors describe how some functions of distributed systems can be designed to tolerate intrusions. A prototype of the persistent file server presented has been successfully developed and implemented as part of the Delta-4 project of the European ESPRIT program.

259 citations


Journal ArticleDOI
TL;DR: Several paradigms (examples or models) for process interaction in distributed computations are described, illustrated by solving problems, including parallel sorting, file servers, computing the topology of a network, distributed termination detection, replicated databases, and parallel adaptive quadrature.
Abstract: Distributed computations are concurrent programs in which processes communicate by message passing. Such programs typically execute on network architectures such as networks of workstations or distributed-memory parallel machines (i.e., multicomputers such as hypercubes). Several paradigms (examples or models) for process interaction in distributed computations are described. These include networks of filters, clients, and servers, heartbeat algorithms, probe/echo algorithms, broadcast algorithms, token-passing algorithms, decentralized servers, and bags of tasks. These paradigms are applicable to numerous practical problems. They are illustrated by solving problems, including parallel sorting, file servers, computing the topology of a network, distributed termination detection, replicated databases, and parallel adaptive quadrature. Solutions to all problems are derived in a step-wise fashion from a general specification of the problem to a concrete solution. The derivations illustrate techniques for developing distributed algorithms.

252 citations
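
One of the listed paradigms, probe/echo, solves the paper's topology problem: a probe floods outward along a spanning tree, and echoes carry each node's local neighbor list back to the initiator. A sequential sketch of the idea (a real implementation exchanges messages between nodes rather than recursing):

```python
def probe_echo(graph, node, visited=None):
    """Collect the topology of `graph` starting from `node`:
    probes go to unvisited neighbors, echoes merge subtree topologies."""
    if visited is None:
        visited = set()
    visited.add(node)
    topology = {node: sorted(graph[node])}   # echo carries the local neighbor list
    for nbr in graph[node]:                  # send probes to all neighbors
        if nbr not in visited:               # a node answers only its first probe
            topology.update(probe_echo(graph, nbr, visited))
    return topology

net = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
topology = probe_echo(net, "a")
assert topology == {k: sorted(v) for k, v in net.items()}
```

In the message-passing version, each node sends a probe on every link, treats the first probe received as defining its parent, and replies to all others immediately; the initiator has the full topology once every echo has returned.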


Proceedings ArticleDOI
02 Apr 1991
TL;DR: This paper uses detailed simulation studies to evaluate the performance of several different scheduling strategies, and shows that in situations where the number of processes exceeds the number of processors, regular priority-based scheduling in conjunction with busy-waiting synchronization primitives results in extremely poor processor utilization.
Abstract: Shared-memory multiprocessors are frequently used as compute servers with multiple parallel applications executing at the same time. In such environments, the efficiency of a parallel application can be significantly affected by the operating system scheduling policy. In this paper, we use detailed simulation studies to evaluate the performance of several different scheduling strategies. These include regular priority scheduling, coscheduling or gang scheduling, process control with processor partitioning, handoff scheduling, and affinity-based scheduling. We also explore tradeoffs between the use of busy-waiting and blocking synchronization primitives and their interactions with the scheduling strategies. Since effective use of caches is essential to achieving high performance, a key focus is on the impact of the scheduling strategies on the caching behavior of the applications. Our results show that in situations where the number of processes exceeds the number of processors, regular priority-based scheduling in conjunction with busy-waiting synchronization primitives results in extremely poor processor utilization. In such situations, use of blocking synchronization primitives can significantly improve performance. Process control and gang scheduling strategies are shown to offer the highest performance, and their performance is relatively independent of the synchronization method used. However, for applications that have sizable working sets that fit into the cache, process control performs better than gang scheduling. For the applications considered, the performance gains due to handoff scheduling and processor affinity are shown to be small.

232 citations
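
The busy-waiting versus blocking trade-off studied here can be made concrete. A spin lock keeps its processor occupied while waiting, whereas a blocking lock yields to the scheduler; a toy sketch (Python threads stand in for processes, and `SpinLock` is illustrative, not from the paper):

```python
import threading

class SpinLock:
    """Busy-waiting lock: repeatedly retries a non-blocking acquire
    (standing in for atomic test-and-set) instead of sleeping."""
    def __init__(self):
        self._flag = threading.Lock()

    def acquire(self):
        while not self._flag.acquire(blocking=False):
            pass                        # spin: the waiting thread burns CPU

    def release(self):
        self._flag.release()

counter = 0
lock = SpinLock()                       # swap in threading.Lock() to block instead

def worker(iterations):
    global counter
    for _ in range(iterations):
        lock.acquire()
        counter += 1                    # critical section
        lock.release()

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter == 4000                  # mutual exclusion preserved either way
```

Both variants keep the counter consistent; the difference the paper measures is utilization. With more runnable processes than processors, a descheduled lock holder leaves spinners wasting whole scheduling quanta, which blocking primitives, gang scheduling, or process control avoid.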


Journal ArticleDOI
TL;DR: A fast algorithm for offline computation of an optimal schedule is given, and it is shown that finding an optimal offline schedule is at least as hard as the assignment problem.
Abstract: In the k-server problem, one must choose how k mobile servers will serve each of a sequence of requests, making decisions in an online manner. An optimal deterministic online strategy is exhibited when the requests fall on the real line. For the weighted-cache problem, in which the cost of moving to x from any other point is $w( x )$, the weight of x, an optimal deterministic algorithm is also provided. The nonexistence of competitive algorithms for the asymmetric two-server problem and of memoryless algorithms for the weighted-cache problem is proved. A fast algorithm for offline computation of an optimal schedule is given, and it is shown that finding an optimal offline schedule is at least as hard as the assignment problem.

208 citations
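
The optimal deterministic strategy for the line is commonly known as Double Coverage: a request outside the servers' span pulls the nearest server to it, while a request between two servers pulls both neighbors toward it at equal speed until one arrives. A sketch (assuming this is the strategy meant; the paper should be consulted for the exact formulation and its optimality proof):

```python
import bisect

def double_coverage(servers, requests):
    """k-server on the real line via Double Coverage.
    Returns (total distance moved, final sorted server positions)."""
    pos = sorted(servers)
    cost = 0.0
    for r in requests:
        if r <= pos[0]:                   # left of every server
            cost += pos[0] - r
            pos[0] = r
        elif r >= pos[-1]:                # right of every server
            cost += r - pos[-1]
            pos[-1] = r
        else:                             # between two servers
            i = bisect.bisect_left(pos, r)
            step = min(r - pos[i - 1], pos[i] - r)
            pos[i - 1] += step            # both neighbors move toward r...
            pos[i] -= step                # ...and the nearer one reaches it
            cost += 2 * step
        pos.sort()
    return cost, pos

cost, final = double_coverage([0.0, 10.0], [4.0])
assert cost == 8.0 and final == [4.0, 6.0]
```

Moving the second server "in sympathy" looks wasteful for a single request, but it hedges the configuration against the adversary's next request, which is what yields the competitive guarantee.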


Proceedings ArticleDOI
18 Apr 1991
TL;DR: A description is presented of the fine technical details and knowledge required to understand and replicate the work which went into developing XTV.
Abstract: XTV is a distributed system for sharing X Window applications synchronously among a group of remotely located users at workstations running X and interconnected by the Internet. The major components of the system are designed and implemented in such a way as to make them reusable in other collaborative systems and applications. A description is presented of the fine technical details and knowledge required to understand and replicate the work which went into developing XTV. The following concepts are discussed: interception, distribution and translation of traffic between X clients and display servers; regulation of access to tools using a token passing mechanism and reverse-translation of server traffic; and accommodation of systems with different architectures which may have different byte orders for integer representation.

207 citations



Patent
09 Aug 1991
TL;DR: In this article, the authors present a network system for computer aided instruction, which includes a main computer including a repository for storing courseware, a network of servers connected to the main computer, a number of local area networks, each of the networks connected to a server, and each including a number of interconnected workstations, a distributed delivery system responsive to a student's request for a course, operable to search the network for a server where the requested course resides, and operable to retrieve the course from the repository, and an authoring system distributed over the workstations, the servers, and the main computer.
Abstract: A network system for computer aided instruction, includes a main computer including a repository for storing courseware, a network of servers connected to the main computer, a number of local area networks, each of the networks connected to a server, and each including a number of interconnected workstations, a distributed delivery system responsive to a student's request for a course, operable to search the network for a server where the requested course resides, and operable to retrieve the course from the repository, and an authoring system distributed over the workstation, the servers and the main computer, and operable to transfer courses of the courseware from a workstation to the repository, and a course management system distributed over the workstation, the servers and the main computer, and operable to manage course enrollment and to monitor student performance at the servers, and further operable to transfer information concerning course enrollment from the servers to the main computer. A computer program having provisions for training users in its use, includes application code, hook points connected to the application code, and embedded training routines connected to the hook points, so that there is a mapping between the hook points and the embedded training routines. The application code has code responsive to a user action to transfer control from the hook points to the embedded training routines.

Proceedings ArticleDOI
23 Jun 1991
TL;DR: In this article, a store-and-forward architecture is presented that can provide video-on-demand (VOD) as well as other database distribution services, assuming a B-ISDN network to be in place.
Abstract: A store-and-forward architecture is presented that can provide video-on-demand (VOD) as well as other database distribution services. It assumes a B-ISDN network to be in place. The four major elements in this architecture are the information warehouse (IWH), where video material is archived; the central office (CO) server, which contains a processor responsible for service management and a video buffer that interacts directly with network customers; and the customer premise equipment. A requested video program is provided in a real-time fashion from the CO server to the customer. At the information warehouse the video program is retrieved from the archival storage in blocks, and with transfer rates much faster than real-time. Subsequently, it is sent in a bursty mode to the CO servers via high speed trunks.

01 Jan 1991
TL;DR: Weighted caching is a generalization of paging in which the cost to evict an item depends on the item as mentioned in this paper, and it is studied as a restriction of the well-known k-server problem.
Abstract: Weighted caching is a generalization of paging in which the cost to evict an item depends on the item. We study both of these problems as restrictions of the well-known k-server problem, which involves moving servers in a graph in response to requests so as to minimize the distance traveled.

Book
01 Feb 1991
TL;DR: Part I The Extended Camelot Interface Chapter 1 Introduction to Camelot 1.3 Overview of the Camelot Distributed Transaction Facility 1.4 Major Camelot Functions 1.5 Camelot from a User's Point of View
Abstract: Part I The Extended Camelot Interface Chapter 1 Introduction to Camelot 1.1 Background 1.2 A Transaction Example 1.3 Overview of the Camelot Distributed Transaction Facility 1.4 Major Camelot Functions 1.5 Camelot from a User's Point of View Chapter 2 An Introduction to Mach for Camelot Users 2.1 Tasks and Threads 2.2 Virtual Memory Management 2.3 Interprocess Communication 2.4 Mach Interface Generator Chapter 3 The Camelot Library 3.1 Introduction 3.2 Using the Camelot Library 3.3 Application Basics 3.4 Server Basics 3.5 Caveats 3.6 Advanced Constructs Chapter 4 Camelot Node Configuration 4.1 Server Maintenance 4.2 Account Maintenance 4.3 Accessing a Remote Node Server 4.4 The Node Server Database 4.5 Commands Listed Chapter 5 A Sample Camelot Application and Server 5.1 Introduction 5.2 Sample Execution 5.3 The Application 5.4 The Server 5.5 Installation Part II The Primitive Camelot Interface Chapter 6 The Structure of Camelot 6.1 The Camelot Architecture 6.2 An Example Message Flow Chapter 7 Mach for Camelot Implementors 7.1 Interprocess Communication 7.2 The External Memory Management Interface 7.3 C Threads Chapter 8 Recoverable Storage Management in Camelot 8.1 Recoverable Segments and Regions 8.2 Initialization 8.3 Mapping 8.4 Forward Processing 8.5 The Shared Memory Queues 8.6 Recovery Processing Chapter 9 Transaction Management in Camelot 9.1 The Nested Transaction Model 9.2 Transaction Services for Applications 9.3 Transaction Services for Servers Chapter 10 Camelot Node Management 10.1 The NA Interface Part III Design Rationale Chapter 11 The Design of Camelot 11.1 Introduction 11.2 Architecture 11.3 Algorithms 11.4 Related Systems Work 11.5 Conclusions Chapter 12 The Design of the Camelot Library 12.1 Introduction 12.2 Architecture 12.3 Related Work 12.4 Conclusions Chapter 13 The Design of the Camelot Local Log Manager 13.1 Introduction 13.2 Architecture 13.3 Algorithms 13.4 Related Work 13.5 Conclusions Chapter 14 The Design of the Camelot 
Disk Manager 14.1 Introduction 14.2 Architecture 14.3 Algorithms and Data Structures 14.4 Related Work 14.5 Discussion Chapter 15 The Design of the Camelot Recovery Manager 15.1 Introduction 15.2 Architecture 15.3 Algorithms 15.4 Related Work 15.5 Conclusions Chapter 16 The Design of the Camelot Transaction Manager 16.1 Introduction 16.2 Architecture 16.3 Algorithms 16.4 Related Work 16.5 Conclusions Chapter 17 The Design of the Camelot Communication Manager 17.1 Introduction 17.2 Architecture 17.3 Algorithms 17.4 Related Work 17.5 Conclusions Chapter 18 Performance of Select Camelot Functions 18.1 Performance Metrics 18.2 Library Costs 18.3 Recoverable Virtual Memory Costs 18.4 Recovery Costs Part IV The Avalon Language Chapter 19 A Tutorial Introduction to the Avalon Language 19.1 Terminology 19.2 Array of Atomic Integers 19.3 FIFO Queue 19.4 Atomic Counters Chapter 20 Reference Manual 20.1 Lexical Considerations 20.2 Servers 20.3 Base Classes 20.4 Control Structures 20.5 Transmission of Data Chapter 21 Library 21.1 Non-atomic Avalon/C++ Types and Type Generators 21.2 Atomic Types 21.3 Catalog Server Chapter 22 Guidelines for Programmers 22.1 Choosing Identifiers 22.2 Using and Implementing Avalon Types 22.3 Constructing an Avalon Program 22.4 For Experts Only Part V Advanced Features Chapter 23 Common Lisp Interface 23.1 Introduction 23.2 Accessing Camelot Servers from Lisp 23.3 Examples 23.4 The Lisp Recoverable Object Server 23.5 Summary and Future Work Chapter 24 Strongbox 24.1 Introduction 24.2 Design Goals 24.3 Strongbox Architecture 24.4 Converting Camelot Clients and Servers to be Secure 24.5 Secure Loader and White Pages Server 24.6 Interfaces 24.7 Security Algorithms 24.8 Special Issues 24.9 Conclusions Chapter 25 The Design of the Camelot Distributed Log Facility 25.1 Introduction 25.2 Architecture 25.3 Algorithms 25.4 Related Work 25.5 Conclusions Part VI Appendices Appendix A Debugging A.1 Avoiding Bugs A.2 Tools and Techniques Appendix B Abort Codes 
B.1 System Abort Codes B.2 Library Abort Codes Appendix C Camelot Interface Specification C.1 AT Interface C.2 CA Interface C.3 CS Interface C.4 CT Interface C.5 DL Interface C.6 DN Interface C.7 DR Interface C.8 DS Interface C.9 DT Interface C.10 LD Interface C.11 MD Interface C.12 MR Interface C.13 MT Interface C.14 MX Interface C.15 NA Interface C.16 ND Interface C.17 RD Interface C.18 RT Interface C.19 SR Interface C.20 ST Interface C.21 TA Interface C.22 TC Interface C.23 TD Interface C.24 TR Interface C.25 TS Interface Appendix D Avalon Grammar D.1 Expressions D.2 Declarations D.3 Statements D.4 External Definitions Bibliography Index

01 Jan 1991
TL;DR: This thesis examines the problem of congestion control in reservationless packet switched wide area data networks by modeling a conversation as a linear system in a simple control-theoretic approach, which is used to synthesize a robust and provably stable flow control protocol.
Abstract: This thesis examines the problem of congestion control in reservationless packet-switched wide area data networks. We define congestion as the loss of utility to a network user due to high traffic loads, and congestion control mechanisms as those that maximize a user's utility at high traffic loads. In this thesis, we study mechanisms that act at two time scales: multiple round trip times and less than one round trip time. At these time scales, congestion control involves the scheduling discipline at the output trunks of switches and routers, and the flow control protocol at the transport layer of the hosts. We initially consider the problem of protecting well-behaved users from congestion caused by ill-behaved users by allocating all users a fair share of the network bandwidth. This motivates the design and analysis of the Fair Queueing resource scheduling discipline. We then study the efficient implementation of the discipline by doing an average-case performance evaluation of several data structures for packet buffering. Since a Fair Queueing server maintains logically separate per-conversation queues and approximates a bitwise round-robin server, it partially decouples the service received by incoming traffic streams. This allows us to deterministically model a single conversation in a network of Fair Queueing servers. Analysis of the model shows that a source can estimate the service rate of the slowest server in the path to its destination (the bottleneck) by sending a pair of back-to-back packets (a packet-pair probe) and measuring the inter-acknowledgement spacing. The probe values can be used to control a user's data sending rate. We formalize this notion by modeling a conversation as a linear system in a simple control-theoretic approach, which is used to synthesize a robust and provably stable flow control protocol. The network state, that is, the service rate of the bottleneck, can be estimated from the series of probe values using an estimator based on elementary fuzzy logic. Our analysis and performance claims are examined by simulation experiments on a set of eight test scenarios. We show that under a wide variety of test conditions, both of our schemes provide users with good performance. Thus, these mechanisms should prove useful in future high-speed networks.
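
The packet-pair probe admits a one-line estimator: two back-to-back packets are served consecutively at the bottleneck, so their acknowledgements arrive spaced by the bottleneck's per-packet service time. A sketch (the numbers are illustrative):

```python
def packet_pair_estimate(packet_bits, ack_gap_s):
    """Bottleneck service rate ~ packet size / inter-acknowledgement spacing."""
    return packet_bits / ack_gap_s

# 1500-byte (12,000-bit) probes whose ACKs arrive 1.2 ms apart
rate = packet_pair_estimate(12_000, 1.2e-3)
assert abs(rate - 10e6) < 1.0          # roughly a 10 Mbit/s bottleneck
```

The estimate is only reliable when the per-conversation queueing of a Fair Queueing server preserves the probe spacing; with FIFO trunks, cross traffic can compress or stretch the gap, which is why the thesis smooths the series of probe values with a fuzzy-logic estimator before feeding it to the flow control loop.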

Proceedings ArticleDOI
20 May 1991
TL;DR: A client management technique for detecting file access patterns and then exploiting them to successfully prefetch files from servers is described, and Trace-driven simulations show the technique substantially increases file cache hit rate in a single-user environment.
Abstract: The work habits of many individuals yield file access patterns that are quite pronounced and can be regarded as defining working sets of files used for particular applications. A client management technique for detecting these patterns and then exploiting them to successfully prefetch files from servers is described. Trace-driven simulations show the technique substantially increases the file cache hit rate in a single-user environment. Successful file prefetching carries three major advantages: applications run faster, there is less burst load placed on the network, and properly loaded client caches can better survive network outages. The technique requires little extra code and, because it is simply an augmentation of the standard LRU client cache management algorithm, is easily incorporated into existing software.
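
The augmented-LRU idea can be sketched with a toy "follows" table that remembers which file was opened after which and prefetches the predicted successor. This is a simplification of the paper's pattern detector; the class and its policy are illustrative:

```python
from collections import OrderedDict

class PrefetchingLRU:
    """LRU file cache plus a one-step successor predictor."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()       # file name -> contents (elided here)
        self.follows = {}                # file -> most recent successor
        self.last = None
        self.hits = self.misses = 0

    def _install(self, name):
        self.cache[name] = None
        self.cache.move_to_end(name)
        while len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used

    def access(self, name):
        if name in self.cache:
            self.hits += 1
            self.cache.move_to_end(name)
        else:
            self.misses += 1
            self._install(name)              # demand fetch from the server
        if self.last is not None:
            self.follows[self.last] = name   # learn the access pattern
        self.last = name
        nxt = self.follows.get(name)
        if nxt is not None and nxt not in self.cache:
            self._install(nxt)               # prefetch the predicted successor

trace = ["make", "cc", "ld"] * 3             # a repetitive "working set" trace
c = PrefetchingLRU(capacity=2)
for f in trace:
    c.access(f)
assert c.misses == 4 and c.hits == 5         # plain LRU(2) would miss all 9
```

On this cyclic trace a capacity-2 LRU cache thrashes on every access, while the predictor turns the second and third passes into hits, the same effect the paper's trace-driven simulations measure at larger scale.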

Patent
20 Mar 1991
TL;DR: In this article, the authors present a distributed transaction processing system in which processes running on component systems, which may be heterogeneous, interact according to the client-server model; a type is associated with the data, and application-defined operations which are part of certain system-defined operations are defined for each type.
Abstract: Apparatus and method for performing an application-defined operation on data as part of a system-defined operation on the data. The apparatus and method are embodied in a distributed transaction processing system in which processes running on component systems which may be heterogeneous interact according to the client-server model. In the apparatus and method, a type is associated with the data, and application-defined operations which are part of certain system-defined operations are defined for each type. The system-defined operations which the application-defined operations are part of include allocation, reallocation, and deallocation of buffers and sending buffers between clients and servers using remote procedure calls. In the allocation and reallocation operations, the application-defined operation is initialization; in the deallocation operation, it is uninitialization. In buffer sending, the application-defined operations include operations done on the buffer contents before sending, routing, encoding the buffer contents into a transfer syntax, operations done on the buffer contents after sending, decoding the buffer contents from the transfer syntax after receiving, and operations done on the buffer contents after receiving. Data structures in the processes associate the data and its type and a type and its application-defined operations. Servers employ a shared bulletin board data structure to indicate the types they accept.

Patent
31 Oct 1991
TL;DR: In this article, a method and apparatus providing remote access to server consoles in a network is described, which couples the server console lines to the serial ports of the multiple serial port means as well as server console terminals.
Abstract: A method and apparatus providing remote access to server consoles in a network is disclosed. The method and apparatus provide remote access by utilizing a single server designated the access server, multiple serial port means attached to this single access server, and a plurality of additional servers coupled to this multiple serial port means. The method and apparatus couple the server console lines to the serial ports of the multiple serial port means as well as to server console terminals. Remote access is accomplished by gaining access to the access server, which then provides access to any one of the serial ports associated with the access server, thereby providing remote access to any one of the plurality of server console lines coupled to the multiple serial port means. This capability is accomplished while local accessibility to the server console terminals is maintained.

Patent
David Joseph Allard, Pramod Chandra
11 Apr 1991
TL;DR: In this paper, the authors proposed a scheme for unattended activation and operation of personal computers through cooperation of power supplies for supplying electrical power to electrically operated components which manipulate or store digital data with a controlling network option card added to the personal computer.
Abstract: This invention relates to personal computers used as network servers, and more particularly to unattended activation and operation of such personal computers through cooperation of power supplies for supplying electrical power to electrically operated components which manipulate or store digital data with a controlling network option card added to the personal computer in adapting it to the network environment. In accordance with this invention, a system administrator is relieved of any necessity of leaving equipment in the power-on active state in order to have network resources available to client users.

Proceedings ArticleDOI
01 Jan 1991
TL;DR: The proposed architecture consists of a central manager, placed at a single secure location, that receives reports from various host and LAN managers and processes these reports, correlates them, and detects intrusions.
Abstract: The network intrusion-detection concept is extended from the LAN (local area network) environment to arbitrarily wider areas, with the network topology being arbitrary as well. The generalized distributed environment is heterogeneous, i.e., the network nodes can be hosts or servers from different vendors, or some of them could be LAN managers. The proposed architecture for this distributed intrusion-detection system consists of the following components: a host manager (namely, a monitoring process or collection of processes running in the background) in each host; a LAN manager for monitoring each LAN in the system; and a central manager, placed at a single secure location, that receives reports from the various host and LAN managers and processes these reports, correlates them, and detects intrusions.
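
The central manager's role, aggregating reports from host and LAN managers and correlating them across the whole network, can be sketched with a toy threshold rule (the rule and the report format are illustrative, not the paper's):

```python
from collections import defaultdict

def correlate(reports, threshold=3):
    """Count suspicious events per source across all manager reports and
    flag sources whose network-wide total crosses the threshold."""
    counts = defaultdict(int)
    for _manager, source, _event in reports:
        counts[source] += 1
    return sorted(src for src, n in counts.items() if n >= threshold)

reports = [
    ("host-1", "10.0.0.9", "failed-login"),
    ("lan-A",  "10.0.0.9", "port-scan"),
    ("host-2", "10.0.0.9", "failed-login"),
    ("host-3", "10.0.0.5", "failed-login"),
]
assert correlate(reports) == ["10.0.0.9"]
```

The point of centralizing is visible even in this toy: no single host or LAN manager sees enough of 10.0.0.9's activity to cross the threshold, but the correlated view does.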

Journal ArticleDOI
Beth Howard-Pitney, M D Johnson, D G Altman, R Hopkins, N Hammond
TL;DR: A responsible alcohol-service training program was evaluated for its impact on changing beliefs, knowledge, and behavior in 97 servers and 43 managers and on changing establishment policies that encourage safer drinking environments.
Abstract: A responsible alcohol-service training program was evaluated for its impact on changing beliefs, knowledge, and behavior in 97 servers and 43 managers and on changing establishment policies that encourage safer drinking environments. The training program had a significant impact on changing the beliefs and knowledge of both servers and managers. Observation 4 to 6 weeks after training showed no effects on server behavior, but there was a tendency toward more establishment policies compared with controls.

Proceedings ArticleDOI
30 Sep 1991
TL;DR: The proposed model provides full transparency of groups and, if groups are used to support replication, full replication transparency, and is more general than those of ISIS and CIRCUS.
Abstract: How a model for interface groups can be integrated with the ANSA computational model is discussed. The result is a uniform model for one-to-one, one-to-many, many-to-one, and many-to-many communication. Whether a service is provided by a single server or distributed over a collection of servers cannot be inferred from the interface to the service. The proposed model thus provides full transparency of groups and, if groups are used to support replication, full replication transparency. The interface group model is more general than those of ISIS and CIRCUS. In the prototype implementation of interface groups, the multi-endpoint communication protocol is implemented on top of a communication package with synchronous RPCs. The protocol ensures total order if a client uses RPC, or no order if a client uses asynchronous calls.

Journal Article
TL;DR: This paper compares two distributed operating systems, Amoeba and Sprite, which diverged on two philosophical grounds: whether to emphasize a distributed computing model or traditional UNIX-style applications, and whether to use a workstation-centered model of computation or a combination of terminals and a shared processor pool.
Abstract: This paper compares two distributed operating systems, Amoeba and Sprite. Although the systems share many goals, they diverged on two philosophical grounds: whether to emphasize a distributed computing model or traditional UNIX-style applications, and whether to use a workstation-centered model of computation or a combination of terminals and a shared processor pool. Many of the most prominent features of the systems (both positive and negative) follow from the philosophical differences. For example, Amoeba provides a high-performance user-level IPC mechanism, while Sprite's RPC mechanism is only available for kernel use; Sprite's file access performance benefits from client-level caching, while Amoeba caches files only on servers; and Sprite uses a process migration model to share compute power, while Amoeba uses a centralized server to allocate processors and distribute load automatically.

Patent
15 Aug 1991
TL;DR: In this article, a communication client is connected to multiple display servers, and when a client of one of the display servers issues a communication, the communication client notes the communication in the display server coupled to the client and relays the communication to other servers for use by clients of the other servers.
Abstract: A communication client is connected to multiple display servers. When a client of one of the display servers issues a communication, the communication client notes the communication in the display server coupled to the client and relays the communication to the other servers for use by clients of the other servers.
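The relay mechanism above can be sketched as follows: a communication client attached to several display servers records a communication on the originating server and forwards it to all the others. This is an illustrative Python sketch; the names are assumptions, not from the patent.

```python
# Sketch of the relay: note the communication on the originating display
# server, then relay it to every other connected display server.

class DisplayServer:
    def __init__(self, name):
        self.name = name
        self.log = []  # communications visible to this server's clients

    def note(self, message):
        self.log.append(message)


class CommunicationClient:
    """Connected to multiple display servers; relays between them."""
    def __init__(self, servers):
        self.servers = list(servers)

    def communicate(self, origin, message):
        origin.note(message)            # note on the originating server
        for s in self.servers:
            if s is not origin:
                s.note(message)         # relay to the other servers
```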

Patent
Kobayashi Seiichi1
23 May 1991
TL;DR: In this paper, a distributed transaction processing system of a two-phase commit scheme is presented, where a client sequentially requests all the servers to perform PHASE I processing, and the client stores data indicating the completion of the processing.
Abstract: In a distributed transaction processing system using a two-phase commit scheme, a client sequentially requests all the servers to perform PHASE I processing. When all the servers have completed the PHASE I processing, the client stores data indicating the completion of the processing. When operation is restarted after a system crash of a given server, the server inquires of the client whether all the servers had completed the PHASE I processing. If so, the server executes PHASE II processing. If not, the server whose failure caused the abnormal system termination performs rollback processing, and the client requests the other servers that have completed the PHASE I processing to perform rollback processing as well.
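The recovery rule described above can be sketched as: the client records whether PHASE I finished on every server, and a server restarting after a crash asks the client which way to go. This is a minimal Python sketch under assumed names; it is not the patent's implementation.

```python
# Sketch of the two-phase-commit recovery rule: the client keeps the
# "all servers completed PHASE I" record; a recovering server consults
# it to decide between PHASE II (commit) and rollback.

class TxnServer:
    def __init__(self):
        self.prepared = False
        self.committed = False
        self.rolled_back = False

    def prepare(self):                 # PHASE I processing
        self.prepared = True
        return True

    def commit(self):                  # PHASE II processing
        self.committed = True

    def rollback(self):
        self.rolled_back = True

    def recover(self, client):
        # On restart, ask the client whether PHASE I completed everywhere.
        if client.all_prepared():
            self.commit()              # safe to run PHASE II
        else:
            self.rollback()            # undo local PHASE I work


class TxnClient:
    def __init__(self, servers):
        self.servers = servers
        self.phase1_complete = False   # the stored completion record

    def run_phase1(self):
        for s in self.servers:         # sequential PHASE I requests
            if not s.prepare():
                return False
        self.phase1_complete = True
        return True

    def all_prepared(self):
        return self.phase1_complete
```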

Journal ArticleDOI
TL;DR: It is shown here how to site limited numbers of engine companies, truck companies, and fire stations so as to maximize the calls for service that have an engine company available within the engine covering distance with reliability α, and a truck company available, with the same reliability, within the truck covering distance.
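Reliability-constrained coverage of this kind is commonly formalized as follows (a standard maximum-availability-style constraint; the busy fraction $q$ and the notation are assumptions for illustration, not taken from the abstract). A demand point $i$ is covered with reliability $\alpha$ when enough servers are sited within its covering distance that at least one is free with probability at least $\alpha$:

$$1 - q^{\sum_{j \in N_i} y_j} \ge \alpha \quad\Longleftrightarrow\quad \sum_{j \in N_i} y_j \ge \left\lceil \frac{\log(1-\alpha)}{\log q} \right\rceil,$$

where $y_j$ is the number of servers located at site $j$, $N_i$ is the set of sites within the covering distance of demand point $i$, and $q$ is the probability that a given server is busy (assumed identical and independent across servers).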

Patent
14 Feb 1991
TL;DR: In this paper, a rate-controlled server is proposed for time multiplexing a resource among a plurality of entities at average rates and with deterministic delays between accesses to the resource by an entity.
Abstract: Apparatus and methods for time multiplexing a resource among a plurality of entities at average rates and with deterministic delays between accesses to the resource by an entity. An entity accessing the resource receives a time slot on the resource; a fixed number of time slots constitutes a frame. Each entity receives a fixed allocation of time slots in the frame. When an entity has work for the resource to do, it receives access to the resource for a number of slots in each frame equal to the lesser of the number of slots required to do the work and the number of slots in its allocation. A rate-controlled server is disclosed which defines a frame and allocations therein, as well as a hierarchy of servers which combines rate-controlled traffic with best-effort traffic. In the hierarchy, a rate-controlled server activates a round-robin server when the entities served by the rate-controlled server do not require all the slots in a frame. A hierarchy of rate-controlled servers is further disclosed which permits access to the resource at widely differing average rates. In that hierarchy, a number of slots in the frame for a given member of the hierarchy are reserved for the next member down of the hierarchy, and that next member is active only during those slots. Further disclosed are nodes of ISDN networks employing ATM which incorporate these servers and hierarchies thereof.
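The per-frame rule above (each entity gets the lesser of its demand and its fixed allocation, and leftover slots are handed to a round-robin server for best-effort entities) can be sketched as follows. This is an illustrative Python sketch with assumed names, not the patent's apparatus.

```python
# One frame of the rate-controlled server: each rate-controlled entity
# gets min(demand, allocation) slots; any slots left in the frame go to
# a round-robin server over the best-effort entities.

from collections import deque

def schedule_frame(frame_size, allocations, demands, best_effort):
    """Return the entity granted each slot of one frame.

    allocations: dict mapping entity -> fixed slots per frame
    demands:     dict mapping entity -> slots of pending work
    best_effort: deque of best-effort entities in round-robin order
    """
    slots = []
    for entity, alloc in allocations.items():
        granted = min(alloc, demands.get(entity, 0))
        slots.extend([entity] * granted)   # rate-controlled traffic
    while len(slots) < frame_size and best_effort:
        e = best_effort.popleft()          # round-robin over the rest
        slots.append(e)
        best_effort.append(e)
    return slots[:frame_size]
```

For example, with a 6-slot frame, allocations `{"A": 3, "B": 2}`, and demands `{"A": 2, "B": 5}`, A receives 2 slots (demand-limited), B receives 2 slots (allocation-limited), and the remaining 2 slots go to the best-effort entities in turn.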

Journal ArticleDOI
TL;DR: It is shown that if the service times of the servers are comparable in the reversed hazard rate (or the usual stochastic) ordering then there exists an optimal allocation where the server allocated to the first stage has a larger mean service time than that assigned to the second stage.
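For context, the two orderings mentioned can be stated as follows (standard definitions from stochastic-ordering theory, not taken from the abstract). For nonnegative service times $X$ and $Y$ with distribution functions $F$ and $G$,

$$X \le_{\mathrm{rh}} Y \;\Longleftrightarrow\; \frac{G(t)}{F(t)} \text{ is nondecreasing in } t, \qquad X \le_{\mathrm{st}} Y \;\Longleftrightarrow\; 1 - F(t) \le 1 - G(t) \text{ for all } t,$$

where $\le_{\mathrm{rh}}$ is the reversed hazard rate order (equivalently, when densities exist, the reversed hazard rate $f(t)/F(t)$ of $X$ is pointwise no larger than that of $Y$) and $\le_{\mathrm{st}}$ is the usual stochastic order.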

Journal ArticleDOI
TL;DR: In this paper, a new set of decision models for siting ambulances and fire companies has been tested and the models place explicit constraints on the availability of service within the time standards.
Abstract: Over the past twenty years, decision models have evolved for siting ambulances and fire companies. The early models assumed that, once positioned, ambulances or fire companies would almost always be available when a call arrived. The recognition that congestion can frequently make servers unavailable led to the development of redundant coverage models. Recently, a new set of models has been tested. These models place explicit constraints on the availability of service within the time standards.