
Showing papers on "Distributed algorithm published in 1989"


Book
01 Jan 1989
TL;DR: This work discusses parallel and distributed architectures, complexity measures, and communication and synchronization issues, and it presents both Jacobi and Gauss-Seidel iterations, which serve as algorithms of reference for many of the computational approaches addressed later.
Abstract: The book draws on engineering, computer science, operations research, and applied mathematics. It is essentially a self-contained work, with the development of the material occurring in the main body of the text and excellent appendices on linear algebra and analysis, graph theory, duality theory, and probability theory and Markov chains supporting it. The introduction discusses parallel and distributed architectures, complexity measures, and communication and synchronization issues, and it presents both Jacobi and Gauss-Seidel iterations, which serve as algorithms of reference for many of the computational approaches addressed later. After the introduction, the text is organized in two parts: synchronous algorithms and asynchronous algorithms. The discussion of synchronous algorithms comprises four chapters, with Chapter 2 presenting both direct methods (converging to the exact solution within a finite number of steps) and iterative methods for linear systems.
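As a concrete anchor for the Jacobi and Gauss-Seidel iterations that the introduction uses as reference algorithms, here is a minimal serial sketch on a small diagonally dominant system (the system itself is an illustrative assumption): Jacobi updates every component from the previous iterate, while Gauss-Seidel reuses components already updated in the current sweep.

```python
# Jacobi vs. Gauss-Seidel for Ax = b; the 2x2 system below is an assumption.

def jacobi_step(A, b, x):
    """One Jacobi sweep: every component computed from the old iterate."""
    n = len(b)
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

def gauss_seidel_step(A, b, x):
    """One Gauss-Seidel sweep: later components see earlier updates."""
    n = len(b)
    x = x[:]
    for i in range(n):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]     # diagonally dominant, so both iterations converge
b = [9.0, 8.0]                   # exact solution: x = [19/11, 23/11]
x_j = x_gs = [0.0, 0.0]
for _ in range(50):
    x_j = jacobi_step(A, b, x_j)
    x_gs = gauss_seidel_step(A, b, x_gs)
```

Diagonal dominance guarantees convergence of both sweeps here; Gauss-Seidel typically converges in fewer iterations, but Jacobi's independent component updates are the natural starting point for the parallel and asynchronous variants the book develops.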

5,597 citations


Proceedings Article
01 Jun 1989

756 citations


Journal ArticleDOI
TL;DR: An algorithm for distributed mutual exclusion in a computer network of N nodes that communicate by messages rather than shared memory that does not require sequence numbers as it operates correctly despite message overtaking is presented.
Abstract: We present an algorithm for distributed mutual exclusion in a computer network of N nodes that communicate by messages rather than shared memory. The algorithm uses a spanning tree of the computer network, and the number of messages exchanged per critical section depends on the topology of this tree. However, typically the number of messages exchanged is O(log N) under light demand, and reduces to approximately four messages under saturated demand.Each node holds information only about its immediate neighbors in the spanning tree rather than information about all nodes, and failed nodes can recover necessary information from their neighbors. The algorithm does not require sequence numbers as it operates correctly despite message overtaking.
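The tree-based token scheme described above can be sketched as a small simulation: a request walks the holder pointers toward the current token holder, and the token walks back, reversing every pointer it crosses. The four-node chain, the single outstanding request, and the omission of request queues are simplifying assumptions, not the paper's full protocol.

```python
def request_token(holder, requester):
    """Walk holder pointers from `requester` to the token holder, then move
    the token back along the same path, reversing the pointers.  Returns
    the number of messages exchanged (one per tree edge crossed)."""
    # Phase 1: a REQUEST travels toward the current token holder.
    path = [requester]
    while holder[path[-1]] != path[-1]:      # holder[x] == x: x has the token
        path.append(holder[path[-1]])
    messages = len(path) - 1
    # Phase 2: the TOKEN travels back; each pointer now faces the requester.
    for i in range(len(path) - 1, 0, -1):
        holder[path[i]] = path[i - 1]
        messages += 1
    holder[requester] = requester
    return messages

# Token at node 3 of the chain 0-1-2-3; node 0 requests entry.
holder = {0: 1, 1: 2, 2: 3, 3: 3}
msgs = request_token(holder, 0)
```

In this worst case the request crosses the whole chain, so the cost is twice the path length; on a balanced spanning tree the analogous path is O(log N), which is the intuition behind the abstract's message bound.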

473 citations


Proceedings ArticleDOI
30 Oct 1989
TL;DR: The authors introduce a concept of network decomposition, a partitioning of an arbitrary graph into small-diameter connected components, such that the graph created by contracting each component into a single node has low chromatic number.
Abstract: The authors introduce a concept of network decomposition: a partitioning of an arbitrary graph into small-diameter connected components, such that the graph created by contracting each component into a single node has low chromatic number. They present an efficient distributed algorithm for constructing such a decomposition and demonstrate its use in the design of efficient distributed algorithms. The method yields new deterministic distributed algorithms for finding a maximal independent set in an arbitrary graph and for (Δ+1)-coloring of graphs with maximum degree Δ. These algorithms run in O(n^ε) time for ε = O((log log n / log n)^(1/2)), whereas the best previously known deterministic algorithms required Ω(n) time. The techniques can also be used to remove randomness from the best previously known distributed breadth-first search algorithm.
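Since one payoff of the decomposition is fast (Δ+1)-coloring, the sequential greedy primitive it parallelizes is worth seeing in isolation: each vertex simply takes the smallest color unused by its already-colored neighbors. The 5-cycle below is an illustrative assumption; the distributed decomposition machinery itself is not modeled.

```python
def greedy_coloring(adj):
    """Assign each vertex the smallest color unused by its neighbors.
    With maximum degree Delta, at most Delta+1 colors are ever needed."""
    color = {}
    for v in adj:
        taken = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    return color

# 5-cycle: maximum degree 2, so at most 3 colors are used.
ring = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
coloring = greedy_coloring(ring)
```

The distributed difficulty lies entirely in ordering these local choices without a global sequence; the decomposition supplies that ordering component by component.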

383 citations


Journal ArticleDOI
TL;DR: An up-to-date and comprehensive survey of deadlock detection algorithms is presented, their merits and drawbacks are discussed, and their performances are compared.
Abstract: The author describes a series of deadlock detection techniques based on centralized, hierarchical, and distributed control organizations. The point of view is that of practical implications. An up-to-date and comprehensive survey of deadlock detection algorithms is presented, their merits and drawbacks are discussed, and their performances (delays as well as message complexity) are compared. Related issues that require further research, such as correctness of the algorithms, their performance, and deadlock resolution, are examined.

296 citations


01 Jan 1989
TL;DR: The architecture must be designed to securely support systems that do not implement or use any of the security services, while providing extensive additional security capabilities for those systems that choose to implement the architecture.
Abstract: The Digital Distributed System Security Architecture is a comprehensive specification for security in a distributed system that employs state-of-the-art concepts to address the needs of both commercial and government environments. The architecture covers user and system authentication, mandatory and discretionary security, secure initialization and loading, and delegation in a general-purpose computing environment of heterogeneous systems where there are no central authorities, no global trust, and no central controls. The architecture prescribes a framework for all applications and operating systems currently available or to be developed. Because the distributed system is an open OSI environment, where functional interoperability only requires compliance with selected protocols needed by a given application, the architecture must be designed to securely support systems that do not implement or use any of the security services, while providing extensive additional security capabilities for those systems that choose to implement the architecture.

260 citations


Proceedings ArticleDOI
23 Apr 1989
TL;DR: A distributed algorithm is presented for obtaining an efficient and conflict-free broadcasting schedule in a multi-hop packet radio network and a distributed implementation of this algorithm is proposed, which is based on circulating a token through the nodes in the network.
Abstract: A distributed algorithm is presented for obtaining an efficient and conflict-free broadcasting schedule in a multi-hop packet radio network. The inherent broadcast nature of the radio channel enables a node's transmission to be received by all other nodes within range. Multiple transmissions can be scheduled simultaneously because of the multi-hop nature of the network. It is first shown that the construction of a broadcasting schedule of minimum length is NP-complete, and then a centralized algorithm based on a sequential graph-coloring heuristic is presented to construct minimal-length schedules. A distributed implementation of this algorithm is then proposed, which is based on circulating a token through the nodes in the network.
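A minimal sketch of the sequential graph-coloring heuristic at the core of the centralized algorithm, under the usual packet-radio conflict rule that two nodes within two hops of each other (adjacent, or sharing a common neighbor) must transmit in different slots. The line topology is an assumption, and the paper's token-based distributed implementation is not modeled.

```python
from itertools import combinations

def broadcast_schedule(adj):
    """Greedy sketch: build the two-hop conflict graph, then color it
    sequentially; each color class is one conflict-free broadcast slot."""
    conflict = {v: set(adj[v]) for v in adj}     # primary conflicts (adjacent)
    for v in adj:                                # secondary: share a neighbor v
        for u, w in combinations(adj[v], 2):
            conflict[u].add(w)
            conflict[w].add(u)
    slot = {}
    for v in conflict:                           # sequential greedy coloring
        taken = {slot[u] for u in conflict[v] if u in slot}
        s = 0
        while s in taken:
            s += 1
        slot[v] = s
    return slot

# Four-node line network 0-1-2-3.
line = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
sched = broadcast_schedule(line)
```

Here the two ends of the line can safely share a slot, so the schedule length is three rather than four; a minimum-length schedule is NP-complete to compute, which is why a greedy heuristic is used.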

241 citations


Proceedings ArticleDOI
01 Aug 1989
TL;DR: A distributed algorithm that provides loop-free paths at every instant and extends or improves algorithms introduced previously by Chandy and Misra, Jaffe and Moss, Merlin and Segall, and the author is described.
Abstract: We present a unified approach for the dynamic computation of shortest paths in a computer network using either distance vectors or link states. We describe a distributed algorithm that provides loop-free paths at every instant and extends or improves algorithms introduced previously by Chandy and Misra, Jaffe and Moss, Merlin and Segall, and the author. Our approach treats the problem of distributed shortest-path routing as one of diffusing computations, which was first proposed by Dijkstra and Scholten. We verify the loop-freedom of the new algorithm, and also demonstrate that it converges to the correct routing entries a finite time after an arbitrary sequence of topological changes. We analyze the complexity of the new algorithm when distance vectors and link states are used, and show that using distance vectors is better insofar as routing overhead is concerned.

227 citations


Proceedings ArticleDOI
01 Aug 1989
TL;DR: A protocol that maintains the shortest-path routes in a dynamic topology, that is, in an environment where links and nodes can fail and recover at arbitrary times, and avoids the bouncing effect and the looping problem that occur in the previous approaches of the distributed implementation of Bellman-Ford algorithm.
Abstract: Distributed algorithms for shortest-path problems are important in the context of routing in computer communication networks. We present a protocol that maintains the shortest-path routes in a dynamic topology, that is, in an environment where links and nodes can fail and recover at arbitrary times. The novelty of this protocol is that it avoids the bouncing effect and the looping problem that occur in previous distributed implementations of the Bellman-Ford algorithm. The bouncing effect refers to the very long convergence time when failures happen or weights increase, and to the nonterminating exchanges of messages, or counting-to-infinity behavior, in components of the network disconnected by failures. The looping problem causes data packets to circulate and thus waste bandwidth. These undesirable effects are avoided without any increase over the overall message complexity that previous approaches require in the connected part of the network. The time complexity is better than that of the distributed Bellman-Ford algorithm in the presence of failures. The key idea in the implementation is to maintain only loop-free paths, and to search for the shortest path only within this set.

226 citations


Proceedings ArticleDOI
01 Jun 1989
TL;DR: Algorithms are presented for recovery on networks in which each process communicates only with its neighbors, and it is shown how to decompose large networks into smaller ones so that each can use a different recovery procedure.
Abstract: Various distributed algorithms are presented that allow nodes in a distributed system to recover from crash failures efficiently. The algorithms are independent of the application programs running on the nodes. The algorithms log messages and checkpoint states of the processes to stable storage at each node. Both logging of messages and checkpointing of process states can be done asynchronously with the execution of the application. Upon restarting after a failure, a node initiates a procedure in which the nodes use the logs and checkpoints on stable storage to roll back to earlier local states, such that the resulting global state is maximal and consistent. The first algorithm requires adding extra information of size O(n) to each application message (where n is the number of nodes); for each failure, O(n²) messages are exchanged, but no node rolls back more than once. The second algorithm requires extra information of only size O(1) on each application message, but requires O(n³) messages per failure. Both of the above algorithms require that each process be able to send messages to each of the other processes. We also present algorithms for recovery on networks in which each process communicates only with its neighbors. Finally, we show how to decompose large networks into smaller networks so that each of the smaller networks can use a different recovery procedure.

191 citations


Book ChapterDOI
26 Sep 1989
TL;DR: The implementation of causal ordering proposed in this paper uses logical clocks of Mattern-Fidge and presents two advantages over the implementation in ISIS: the information added to messages to ensure causal ordering is bounded by the number of sites in the system, and no special protocol is needed to dispose of this added information when it has become useless.
Abstract: This paper presents a new algorithm to implement causal ordering. Causal ordering was first proposed in the ISIS system developed at Cornell University. The interest of causal ordering in a distributed system is that it is cheaper to realize than total ordering. The implementation of causal ordering proposed in this paper uses the logical clocks of Mattern and Fidge (which define a partial order between events in a distributed system) and presents two advantages over the implementation in ISIS: (1) the information added to messages to ensure causal ordering is bounded by the number of sites in the system, and (2) no special protocol is needed to dispose of this added information when it has become useless. The ISIS implementation, however, has advantages in the case of site failures.
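The delivery test at the heart of vector-clock causal ordering of this kind can be stated compactly: a message from site j is deliverable once it is the next message expected from j and no causally earlier message from any other site is still missing. The fixed site numbering below is an assumption, and the buffering of not-yet-deliverable messages is left out.

```python
def deliverable(local, msg_vc, sender):
    """Return True iff a message stamped `msg_vc`, sent by site `sender`,
    may be delivered at a site whose vector clock is `local`."""
    if msg_vc[sender] != local[sender] + 1:      # next message from sender?
        return False
    # No message causally preceding this one may still be undelivered.
    return all(msg_vc[k] <= local[k] for k in range(len(local)) if k != sender)
```

A receiver that buffers messages failing this test and retries them after each delivery obtains causal order; the per-message overhead is one integer per site, which is the bounded-size property claimed in the abstract.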

Dissertation
01 Jan 1989
TL;DR: This dissertation proposes a parallelized version of the genetic algorithm, called the distributed genetic algorithm, which can achieve near-linear speedup over the traditional version. It also discusses balancing exploration against exploitation in the distributed genetic algorithm by allowing different subpopulations to run with different parameters, so that some subpopulations emphasize exploration while others emphasize exploitation.
Abstract: The genetic algorithm is a general purpose, population-based search algorithm in which the individuals in the population represent samples from the set of all possibilities, whether they are solutions in a problem space, strategies for a game, rules in classifier systems, or arguments for problems in function optimization. The individuals evolve over time to form even better individuals by sharing and mixing their information about the space. This dissertation proposes a parallelized version of a genetic algorithm called the distributed genetic algorithm, which can achieve near-linear speedup over the traditional version of the algorithm. This algorithm divides the large population into many equal-sized small subpopulations and runs the genetic algorithm on each subpopulation independently. Each subpopulation periodically selects some individuals and exchanges them with other subpopulations; this process is known as migration. The functions used to evaluate the performance of the distributed genetic algorithm and the traditional algorithm are called Walsh polynomials, which are based on Walsh functions. Walsh polynomials can be categorized into classes of functions, with each class having a different degree of difficulty. By generating a large number of instances of the various classes, the performance difference between the distributed and traditional genetic algorithms can be analyzed statistically. The first part of this dissertation examines the partitioned genetic algorithm, a version of the distributed genetic algorithm with no migration. The experiments on four different classes of Walsh polynomials show that the partitioned algorithm consistently outperforms the traditional algorithm as a function optimizer. This is because the physical subdivision of the population allows each subpopulation to explore the space independently.
Also, good individuals are more likely to be recognized in a smaller subpopulation than they would be in a large, diverse population. The second part of this research examines the effects of migration on the performance of the distributed genetic algorithm. The experiments show that with a moderate migration rate the distributed genetic algorithm finds better individuals than the traditional algorithm while maintaining a high overall fitness of the population. Finally, this dissertation also discusses the issue of balancing exploration against exploitation in the distributed genetic algorithm, by allowing different subpopulations to run with different parameters, so that some subpopulations can emphasize exploration while others emphasize exploitation. The distributed algorithm is shown to be more robust than the traditional version: even when each subpopulation runs with different combinations of crossover and mutation rates, the distributed algorithm performs better than the traditional one.
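The island-model structure described above can be sketched in a few lines: equal-sized subpopulations evolve independently and periodically pass their best individual around a ring. The OneMax fitness function, the rates, and the ring migration pattern are illustrative assumptions, not the dissertation's Walsh-polynomial testbed.

```python
import random

def evolve(pop, bits, rate=0.05):
    """One generation: tournament selection, one-point crossover, mutation."""
    def pick():
        a, b = random.sample(pop, 2)
        return max(a, b, key=sum)            # OneMax fitness = number of 1s
    nxt = []
    for _ in range(len(pop)):
        p, q = pick(), pick()
        cut = random.randrange(1, bits)
        child = p[:cut] + q[cut:]            # one-point crossover
        nxt.append([b ^ (random.random() < rate) for b in child])
    return nxt

def distributed_ga(islands=4, size=20, bits=24, gens=40, interval=10):
    random.seed(1)
    pops = [[[random.randint(0, 1) for _ in range(bits)] for _ in range(size)]
            for _ in range(islands)]
    for g in range(1, gens + 1):
        pops = [evolve(p, bits) for p in pops]    # islands run independently
        if g % interval == 0:                     # ring migration of each best
            best = [max(p, key=sum) for p in pops]
            for i, p in enumerate(pops):
                p[0] = best[(i - 1) % islands]
    return max(sum(ind) for p in pops for ind in p)

best = distributed_ga()
```

Because each island's generation is independent between migrations, the islands can run on separate processors with communication only every `interval` generations, which is the source of the near-linear speedup claimed above.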

Proceedings ArticleDOI
30 Oct 1989
TL;DR: Three protocols that take further steps in this direction are presented, one of which is asymptotically optimal in all three quality parameters while using the optimal number of processors.
Abstract: In a distributed consensus protocol all processors (of which t may be faulty) are given (binary) initial values; after exchanging messages all correct processors must agree on one of them. The quality of a protocol is measured here using as parameters the total number of processors n, the number of rounds of message exchange r, and the maximal message length m, with optima, respectively, of 3t+1, t+1, and 1. No known protocol is optimal in all three of these aspects simultaneously; protocols that take further steps in this direction are presented here. The first protocol has n>4t, r=t+1, and polynomial message size. The second protocol has n>3t, r=3t+3, and m=2, and it is asymptotically optimal in all three quality parameters while using the optimal number of processors. Using these protocols as building blocks, families of protocols with intermediate quality parameters, offering better tradeoffs than previous results, are obtained. All the protocols work in polynomial time and have succinct descriptions.

Journal ArticleDOI
TL;DR: A heuristically-aided algorithm to achieve mutual exclusion in distributed systems is presented which has better performance characteristics than previously proposed algorithms and is free from deadlock and starvation.
Abstract: A heuristically-aided algorithm to achieve mutual exclusion in distributed systems is presented which has better performance characteristics than previously proposed algorithms. The algorithm makes use of state information, which is defined as the set of states of mutual exclusion processes in the system. Each site maintains information about the state of other sites and uses it to deduce a subset of sites likely to have the token. Consequently, the number of messages exchanged for a critical section invocation is a random variable between 0 and n (n is the number of sites in the system). It is shown that the algorithm achieves mutual exclusion and is free from deadlock and starvation. The effects of a site crash are discussed, as are those of a communication-medium failure on the proposed algorithm. Methods of recovery from these failures are suggested. The performance of the algorithm is studied using a simulation technique (and an analytic technique for light and heavy traffic of requests for critical section execution).

Journal ArticleDOI
TL;DR: The author presents a simple solution for the committee coordination problem, which encompasses the synchronization and exclusion problems associated with implementing multiway rendezvous, and shows how it can be implemented to develop a family of algorithms.
Abstract: The author presents a simple solution for the committee coordination problem, which encompasses the synchronization and exclusion problems associated with implementing multiway rendezvous, and shows how it can be implemented to develop a family of algorithms. The algorithms use message counts to solve the synchronization problem, and they solve the exclusion problem by using a circulating token or by using auxiliary resources as in the solutions for the dining or drinking philosophers' problems. Results of a simulation study of the performance of the algorithms are presented. The experiments measured the response time and message complexity of each algorithm as a function of variations in the model parameters, including network topology and level of conflict in the system. The results show that the response time for algorithms proposed is significantly better than for existing algorithms, whereas the message complexity is considerably worse.

Journal ArticleDOI
TL;DR: The algorithm for distributed mutual exclusion is extended to enable up to K nodes to be within the critical section simultaneously and requires at most 2 * (N-1) messages per entry to the critical section, and occasionally fewer.

Journal ArticleDOI
TL;DR: There is a need for algorithms with a stronger internodal coordination, such as those reported previously in the literature by Jaffe and Moss and the author, to help solve the counting-to-infinity problem.
Abstract: A new, distributed algorithm for the dynamic computation of minimum-hop paths in a computer network is presented. The new algorithm is an extension of the Bellman-Ford algorithm for the shortest-path computation. According to the new algorithm, each node maintains the successor (next hop) and shortest distance in number of hops of the paths to each network destination; update messages from a node are sent only to its neighbors, and these messages contain the length in hops of the selected path to network destinations. A node always chooses new successors that are equidistant or closer to the destinations than the node's current successor to the same destination. No update is sent before all replies are received for the previous one. The new algorithm is shown to converge in a finite time after an arbitrary sequence of topological changes and to be loop free at every instant. Its performance is compared with the performance of various other extensions of the Bellman-Ford algorithm. We show that none of those extensions eliminates the counting-to-infinity problem, and we identify the cases in which the new algorithm provides an improvement over the basic Bellman-Ford algorithm. We conclude that there is a need for algorithms with stronger internodal coordination, such as those reported previously in the literature by Jaffe and Moss and the author.
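The basic distance-vector computation that all of these extensions start from can be simulated synchronously: each node keeps a distance and successor (next hop) per destination and relaxes them against its neighbors' advertised distances. The weighted triangle below is an assumption, and the paper's loop-free coordination machinery is deliberately not modeled.

```python
# Synchronous simulation of the plain distributed Bellman-Ford iteration.
INF = float("inf")

def distance_vector(adj, dest):
    """adj[v] maps each neighbor u of v to the link weight w(v, u).
    Returns per-node shortest distances and successors toward `dest`."""
    dist = {v: (0 if v == dest else INF) for v in adj}
    succ = {v: None for v in adj}
    changed = True
    while changed:                         # one pass = one synchronous exchange
        changed = False
        for v in adj:
            for u, w in adj[v].items():    # relax against neighbor u's distance
                if w + dist[u] < dist[v]:
                    dist[v], succ[v] = w + dist[u], u
                    changed = True
    return dist, succ

net = {"a": {"b": 1, "c": 4}, "b": {"a": 1, "c": 2}, "c": {"a": 4, "b": 2}}
dist, succ = distance_vector(net, "c")
```

With positive weights and a static topology this converges; the counting-to-infinity pathology discussed above appears only once links fail or weights increase, which is precisely what the extensions address.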

Proceedings ArticleDOI
01 Jun 1989
TL;DR: This work considers iterative algorithms of the form x := ƒ(x), executed by a parallel or distributed computing system, and provides results on the convergence rate of such algorithms and a comparison with the convergence rates of the corresponding synchronous methods.
Abstract: We consider iterative algorithms of the form x := ƒ(x), executed by a parallel or distributed computing system. We focus on asynchronous implementations whereby each processor iterates on a different component of x, at its own pace, using the most recently received (but possibly outdated) information on the remaining components of x. We provide results on the convergence rate of such algorithms and make a comparison with the convergence rate of the corresponding synchronous methods in which the computation proceeds in phases. We also present results on how to terminate asynchronous iterations in finite time with an approximate solution of the computational problem under consideration.
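A toy simulation of such an asynchronous iteration x := ƒ(x): each "processor" event updates one component using a possibly outdated snapshot of the others. The mapping ƒ(x)_i = 0.5 · mean(x) + 1 is an assumed contraction, chosen so that the totally asynchronous iteration provably converges (its unique fixed point sets every component to 2); the bounded-staleness model is also an assumption.

```python
import random

def async_iterate(n=4, steps=400, delay=3):
    """Asynchronous fixed-point iteration with bounded staleness < `delay`."""
    random.seed(0)
    history = [[1.0] * n]                  # snapshots of x over "time"
    x = history[0][:]
    for _ in range(steps):
        i = random.randrange(n)            # an arbitrary processor fires next
        back = min(random.randrange(delay), len(history) - 1)
        stale = history[-1 - back]         # possibly outdated read of x
        x[i] = 0.5 * sum(stale) / n + 1.0  # f(x)_i = 0.5 * mean(x) + 1
        history.append(x[:])
    return x

x = async_iterate()
```

Because the update is a contraction in the maximum norm, the error shrinks geometrically despite the stale reads, illustrating why no phase synchronization is needed for convergence (only the rate changes).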

Proceedings ArticleDOI
05 Jun 1989
TL;DR: In this article, an algorithm is presented by which termination of distributed computations can be detected by an auxiliary controlling agent, which assigns a weight W, 0 < W ≤ 1, to each active process and each message in transit.
Abstract: An algorithm is presented that detects termination of distributed computations with the help of an auxiliary controlling agent. The algorithm assigns a weight W, 0 < W ≤ 1, to each active process and each message in transit; idle processes return their weight to the agent, which announces termination once it has collected the total weight back.
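A sketch of this weight-throwing idea: the controlling agent starts with weight 1, every activation and message carries a fraction of it, and termination is detected exactly when the agent's weight returns to 1. The halving policy, the class names, and the two-process scenario are illustrative assumptions; exact rationals avoid floating-point loss of weight.

```python
from fractions import Fraction

class Controller:
    """The auxiliary controlling agent: it owns all the weight initially."""
    def __init__(self):
        self.weight = Fraction(1)
    def send(self):
        """Activate a process by handing it half of the agent's weight."""
        half = self.weight / 2
        self.weight -= half
        return half
    def receive(self, w):
        """An idle process (or consumed message) returns its weight."""
        self.weight += w
    def terminated(self):
        return self.weight == 1

def split(w):
    """An active process halves its weight to stamp an outgoing message."""
    return w / 2, w / 2

ctl = Controller()
w1 = ctl.send()        # process 1 becomes active with weight 1/2
w1, m = split(w1)      # process 1 sends a message carrying weight 1/4
w2 = m                 # process 2 becomes active on receiving the message
busy = ctl.terminated()
ctl.receive(w1)        # process 1 turns idle and returns its weight
ctl.receive(w2)        # process 2 turns idle and returns its weight
```

The invariant is that the weights held by the agent, the active processes, and the in-transit messages always sum to 1, so the agent's weight equals 1 only when no process is active and no message is in flight.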

Journal ArticleDOI
TL;DR: The hybrid algorithm is the fastest algorithm overall, since its arithmetic cost is lower than that of the Householder algorithms and its communication cost does not increase with the column length of the matrix.
Abstract: Several algorithms for orthogonal factorization on distributed memory multiprocessors are designed and implemented. Two of the algorithms employ Householder transformations, a third is based on Givens rotations, and a fourth hybrid algorithm uses Householder transformations and Givens rotations in different phases.The arithmetic and communication complexities of the algorithms are analyzed. The analyses show that the sequential arithmetic terms play a more important role than the communication terms in determining the running times and efficiencies of these algorithms. The hybrid algorithm is the fastest algorithm overall, since its arithmetic cost is lower than that of the Householder algorithms and its communication cost does not increase with the column length of the matrix. The observed execution times of the implementations on an iPSC-286 agree quite well with the complexity analyses. It is also shown that the efficiencies can be approximated using only the arithmetic costs of the algorithms.
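A serial sketch of the Givens-rotation phase used by two of these algorithms: each 2x2 rotation zeroes one subdiagonal entry, and applying them column by column reduces the matrix to upper triangular form. The 3x2 matrix is an assumption, and the distribution of rotations across processors analyzed in the paper is not modeled.

```python
import math

def givens_qr(A):
    """Return the upper-triangular R factor of A via Givens rotations."""
    m, n = len(A), len(A[0])
    R = [row[:] for row in A]
    for j in range(n):
        for i in range(m - 1, j, -1):        # zero R[i][j] against row i-1
            a, b = R[i - 1][j], R[i][j]
            r = math.hypot(a, b)
            if r == 0:
                continue
            c, s = a / r, b / r              # rotation: [c s; -s c]
            for k in range(j, n):
                hi, lo = R[i - 1][k], R[i][k]
                R[i - 1][k] = c * hi + s * lo
                R[i][k] = -s * hi + c * lo
    return R

R = givens_qr([[1, 2], [3, 4], [5, 6]])
```

Each rotation touches only two rows, which is what makes Givens rotations attractive on distributed-memory machines: independent row pairs can be rotated concurrently with only pairwise communication.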

Proceedings ArticleDOI
05 Jun 1989
TL;DR: A dynamic information-structure mutual-exclusion algorithm is presented for distributed systems whose information structure evolves with time as sites learn about the state of the system through messages and it is shown that the algorithm achieves mutual exclusion and is free from starvation.
Abstract: A dynamic information-structure mutual-exclusion algorithm is presented for distributed systems whose information structure evolves with time as sites learn about the state of the system through messages. It is shown that the algorithm achieves mutual exclusion and is free from starvation. Unlike Maekawa-type algorithms, the proposed algorithm is not prone to deadlocks. This is because its information structure forms a staircase-like structure which, in conjunction with timestamp ordering, eliminates the possibility of deadlock. Besides having the flavor of a dynamic information structure, the algorithm adapts itself to heterogeneous or fluctuating traffic conditions to optimize performance. An asymptotic analysis of the performance of the algorithm for light and heavy traffic of requests for critical section execution is carried out.

Journal ArticleDOI
TL;DR: Lower bounds for distributed algorithms on complete networks of processors (i.e., networks where each pair of processors is connected by a communication line) are discussed, and an Ω(n log n) lower bound is shown for the number of messages required by any algorithm in a given class of distributed algorithms for such networks.

DOI
01 Jan 1989
TL;DR: A new methodology for resource sharing algorithms in distributed systems is presented, proposing Pareto-optimality as a definition of optimality and fairness for the flow control problem and proving that the resource allocations computed by the economy are Pare to-optimal.
Abstract: In this thesis, we present a new methodology for resource sharing algorithms in distributed systems. We propose that a distributed computing system should be composed of a decentralized community of microeconomic agents. We show that this approach decreases complexity and can substantially improve performance. We compare the performance, generality and complexity of our algorithms with non-economic algorithms. To validate the usefulness of our approach, we present economies that solve three distinct resource management problems encountered in large, distributed systems. The first economy performs CPU load balancing and demonstrates how our approach limits complexity and effectively allocates resources when compared to non-economic algorithms. We show that the economy achieves better performance than a representative non-economic algorithm. The load balancing economy spans a broad spectrum of possible load balancing strategies, making it possible to adapt the load balancing strategy to the relative power of CPU vs. communication. The second economy implements flow control in virtual circuit based computer networks. This economy implements a general model of VC throughput and delay goals that more accurately describes the goals of a diverse set of users. We propose Pareto-optimality as a definition of optimality and fairness for the flow control problem and prove that the resource allocations computed by the economy are Pareto-optimal. Finally, we present a set of distributed algorithms that rapidly compute a Pareto-optimal allocation of resources. The final economy manages replicated, distributed data in a distributed computer system. This economy substantially decreases mean transaction response time by adapting to the transactions' reference patterns. The economy reacts to localities in the data access pattern by dynamically assigning copies of data objects to nodes in the system. 
The number of copies of each object is adjusted based on the write frequency versus the read frequency for the object. Unlike previous work, the data management economy's algorithms are completely decentralized and have low computational overhead. Finally, this economy demonstrates how an economy can allocate logical resources in addition to physical resources.

Proceedings ArticleDOI
23 Apr 1989
TL;DR: A distributed asynchronous algorithm is presented for finding two disjoint paths of minimum total length from each possible source node to a destination, and it is shown that a synchronous implementation of the algorithm has communication and time complexities O( mod E mod +D mod N mod ) and O(D/sub 2/), respectively.
Abstract: A distributed asynchronous algorithm is presented for finding two disjoint paths of minimum total length from each possible source node to a destination. The algorithm has both node-disjoint and link-disjoint versions, and provides each node with information sufficient to make incremental routing decisions for forwarding packets over the disjoint paths. For a network in which all links have unit length, it is shown that a synchronous implementation of the algorithm has communication and time complexities O(|E| + D|N|) and O(D₂), respectively, where D is the network's diameter and D₂ is the maximum, over all nodes i, of the total number of links in the shortest pair of disjoint paths from i to the destination.

Proceedings Article
01 Jul 1989
TL;DR: This work proposes a novel algorithm that exhibits complete parallelism during the sort, merge, and return-to-host phases, and decreases the amount of inter-processor communication compared to existing parallel sort algorithms.
Abstract: The paper considers the problem of sorting a file in a distributed system. The file is originally distributed over many sites, and the result of the sort is needed at another site called the “host”. The particular environment that we assume is a backend parallel database machine, but the work is applicable to distributed database systems as well. After discussing the drawbacks of several existing algorithms, we propose a novel algorithm that exhibits complete parallelism during the sort, merge, and return-to-host phases. In addition, this algorithm decreases the amount of inter-processor communication compared to existing parallel sort algorithms. We describe an implementation of the algorithm, present performance measurements, and use a validated model to demonstrate its scalability. We also discuss the effect of an uneven distribution of data among the various processors.
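The sort-then-merge structure described above can be sketched compactly: each "site" sorts its fragment in parallel, and the host performs a k-way merge of the sorted runs. A thread pool stands in for the sites, and the data placement is an assumption; the paper's pipelined return-to-host phase is not modeled.

```python
import heapq
from concurrent.futures import ThreadPoolExecutor

def distributed_sort(fragments):
    """Sort each site's fragment in parallel, then merge at the host."""
    with ThreadPoolExecutor() as pool:
        runs = list(pool.map(sorted, fragments))   # parallel local sorts
    return list(heapq.merge(*runs))                # host-side k-way merge

result = distributed_sort([[3, 1], [2, 5], [4, 0]])
```

The local sorts are fully independent, so only the sorted runs cross the network; skew in fragment sizes (the uneven-distribution effect studied above) directly determines how balanced the parallel phase is.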

Journal ArticleDOI
TL;DR: The sequential depth-first-search algorithm, distributed over the processor nodes of a graph, yields a distributed depth-first-search algorithm that uses exactly 2|V| messages and 2|V| units of time.

Journal ArticleDOI
TL;DR: A general election algorithm for chordal rings is presented, and it is shown that O(log n) chords at each processor suffice to obtain an algorithm that uses at most O(n) messages.
Abstract: We study the message complexity of the problem of distributively electing a leader in chordal rings. Such networks consist of a basic ring with additional links, the extreme cases being the oriented ring and the complete graph with a full sense of direction. We present a general election algorithm for these networks, and prove its optimality. As a corollary, we show that O(log n) chords at each processor suffice to obtain an algorithm that uses at most O(n) messages; this improves and extends previous work, where an algorithm, also using O(n) messages, was suggested for the case where all n-1 chords exist (the oriented complete network).
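For contrast with the chordal-ring algorithm, the baseline case of the basic oriented ring can be sketched with the classic Chang-Roberts scheme: every identifier travels clockwise and is swallowed by the first larger identifier it meets, so only the maximum identifier returns home. The identifier assignment below is an assumption, and the chords that reduce the message count are not modeled.

```python
def elect_on_ring(ids):
    """Return (leader, message count) for Chang-Roberts-style election on an
    oriented ring where ids[i]'s clockwise neighbor is ids[(i+1) % n]."""
    n = len(ids)
    messages = 0
    leader = None
    for start in range(n):                # every node initiates a token
        token = ids[start]
        pos = (start + 1) % n
        while True:
            messages += 1                 # one hop = one message
            if pos == start:              # token returned home: its owner wins
                leader = token
                break
            if ids[pos] > token:          # a larger identifier swallows it
                break
            pos = (pos + 1) % n
    return leader, messages

leader, msgs = elect_on_ring([3, 1, 4, 2])
```

On the plain ring this costs O(n log n) messages on average and O(n²) in the worst case; the chords in a chordal ring let tokens skip ahead, which is how the abstract's O(n) message bound becomes reachable.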

Journal ArticleDOI
TL;DR: A distributed algorithm for detection and resolution of resource deadlocks in object-oriented distributed systems that can be used in conjunction with concurrency control algorithms which are based on the semantic lock model is proposed and proved.
Abstract: The authors propose and prove a distributed algorithm for detection and resolution of resource deadlocks in object-oriented distributed systems. In particular, the algorithm can be used in conjunction with concurrency control algorithms which are based on the semantic lock model. The algorithm greatly reduces message traffic by properly identifying and eliminating redundant messages. It is shown that both its worst and average time complexities are O(n*e), where n is the number of nodes and e is the number of edges in the waits-for graph. After deadlock resolution, the algorithm leaves information in the system concerning dependence relations of currently running transactions. This information will preclude the wasteful retransmission of messages and reduce the delay in detecting future deadlocks.

Proceedings ArticleDOI
05 Jun 1989
TL;DR: Results obtained in a study of algorithms to implement a distributed-shared memory in a distributed (loosely coupled) environment are described.
Abstract: Results obtained in a study of algorithms to implement a distributed-shared memory in a distributed (loosely coupled) environment are described. Distributed-shared memory is the implementation of shared memory across multiple nodes in a distributed system. This is accomplished using only the private memories of the nodes by controlling access to the pages of the shared memory and transferring data to and from the private memories when necessary. Alternative algorithms are analyzed to implement distributed-shared memory. The algorithms are analyzed and compared over a wide range of conditions. Application characteristics are identified which can be exploited by the algorithms. The conditions under which the algorithms analyzed perform better or worse than the other alternatives are shown. Results are obtained via simulation using a synthetic reference generator.

Proceedings ArticleDOI
16 Oct 1989
TL;DR: A method is described for extending an existing updating system to a distributed environment that allows distributed programs written using the remote procedure call paradigm to be dynamically updated.
Abstract: A dynamic program updating system is a tool that replaces a running computer program with a new version, without stopping the currently running program. A method is described for extending an existing updating system to a distributed environment. These extensions allow distributed programs written using the remote procedure call paradigm to be dynamically updated. The approach scales to a geographically distributed computing environment and supports computer systems that contain heterogeneous hardware and software. Programs that execute on computer systems owned by multiple administrative entities can also be updated using this approach.