
Showing papers on "Distributed algorithm published in 1997"


Journal ArticleDOI
Gerard J. Holzmann1
01 May 1997
TL;DR: An overview of the verifier's design and structure, a review of its theoretical foundation, and a survey of significant practical applications are given.
Abstract: SPIN is an efficient verification system for models of distributed software systems. It has been used to detect design errors in applications ranging from high-level descriptions of distributed algorithms to detailed code for controlling telephone exchanges. The paper gives an overview of the design and structure of the verifier, reviews its theoretical foundation, and gives an overview of significant practical applications.

4,159 citations


Proceedings ArticleDOI
09 Apr 1997
TL;DR: The proposed protocol is a new distributed routing protocol for mobile, multihop, wireless networks that is highly adaptive, efficient and scalable; being best-suited for use in large, dense, mobile networks.
Abstract: We present a new distributed routing protocol for mobile, multihop, wireless networks. The protocol is one of a family of protocols which we term "link reversal" algorithms. The protocol's reaction is structured as a temporally-ordered sequence of diffusing computations; each computation consisting of a sequence of directed link reversals. The protocol is highly adaptive, efficient and scalable; being best-suited for use in large, dense, mobile networks. In these networks, the protocol's reaction to link failures typically involves only a localized "single pass" of the distributed algorithm. This capability is unique among protocols which are stable in the face of network partitions, and results in the protocol's high degree of adaptivity. This desirable behavior is achieved through the novel use of a "physical or logical clock" to establish the "temporal order" of topological change events which is used to structure (or order) the algorithm's reaction to topological changes. We refer to the protocol as the temporally-ordered routing algorithm (TORA).

2,211 citations
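TORA's full mechanism orders reversals using timestamped quintuple heights; the underlying "link reversal" idea of the family it belongs to (which traces to Gafni and Bertsekas's full reversal method) can be sketched with scalar heights. This is an illustrative sketch, not the TORA protocol itself, and all names are hypothetical:

```python
# Full link reversal, sketched: an edge points from the higher node to the
# lower one, so routes follow decreasing height toward the destination.
# A non-destination node with no outgoing link (a local minimum) reverses
# all of its incident links by raising its height above every neighbor.

def full_reversal(neighbors, heights, dest):
    """Iterate full link reversals until every non-destination node has an
    outgoing link, i.e. a neighbor with strictly lower height.
    neighbors: dict mapping node -> set of adjacent nodes."""
    heights = dict(heights)
    changed = True
    while changed:
        changed = False
        for v in neighbors:
            if v == dest:
                continue
            if not any(heights[u] < heights[v] for u in neighbors[v]):
                # v is a local minimum: reverse all incident links.
                heights[v] = max(heights[u] for u in neighbors[v]) + 1
                changed = True
    return heights
```

On a connected network this terminates with every node holding a downhill path to the destination; TORA's contribution is making each reaction to a link failure a localized, temporally ordered pass rather than a network-wide recomputation.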


Proceedings ArticleDOI
05 Aug 1997
TL;DR: This work proposes a Metacomputing Directory Service that provides efficient and scalable access to diverse, dynamic, and distributed information about resource structure and state and defines an extensible data model to represent required information and presents a scalable, high-performance, distributed implementation.
Abstract: High-performance execution in distributed computing environments often requires careful selection and configuration not only of computers, networks, and other resources but also of the protocols and algorithms used by applications. Selection and configuration in turn require access to accurate, up-to-date information on the structure and state of available resources. Unfortunately no standard mechanism exists for organizing or accessing such information. Consequently different tools and applications adopt ad hoc mechanisms, or they compromise their portability and performance by using default configurations. We propose a Metacomputing Directory Service that provides efficient and scalable access to diverse, dynamic, and distributed information about resource structure and state. We define an extensible data model to represent required information and present a scalable, high-performance, distributed implementation. The data representation and application programming interface are adopted from the Lightweight Directory Access Protocol; the data model and implementation are new. We use the Globus distributed computing toolkit to illustrate how this directory service enables the development of more flexible and efficient distributed computing services and applications.

467 citations


Journal ArticleDOI
TL;DR: An approach to parallelizing optimal power flow (OPF) that is suitable for coarse-grained distributed implementation and is applicable to very large interconnected power systems is presented.
Abstract: We present an approach to parallelizing optimal power flow (OPF) that is suitable for coarse-grained distributed implementation and is applicable to very large interconnected power systems. We demonstrate the approach on several medium size systems, including IEEE Test Systems and parts of the ERCOT system. Our simulations demonstrate the feasibility of distributed implementation of OPF. Rough estimates are made of parallel efficiencies and speed-ups.

400 citations


Journal ArticleDOI
01 Jan 1997
TL;DR: Architectures for distributed fusion, whereby each node processes the data from its own set of sensors and communicates with other nodes to improve on the estimates, are discussed, and the information graph is introduced as a way of modeling information flow in distributed fusion systems and for developing algorithms.
Abstract: Modern surveillance systems often utilize multiple physically distributed sensors of different types to provide complementary and overlapping coverage on targets. In order to generate target tracks and estimates, the sensor data need to be fused. While a centralized processing approach is theoretically optimal, there are significant advantages in distributing the fusion operations over multiple processing nodes. This paper discusses architectures for distributed fusion, whereby each node processes the data from its own set of sensors and communicates with other nodes to improve on the estimates. The information graph is introduced as a way of modeling information flow in distributed fusion systems and for developing algorithms. Fusion for target tracking involves two main operations: estimation and association. Distributed estimation algorithms based on the information graph are presented for arbitrary fusion architectures and related to linear and nonlinear distributed estimation results. The distributed data association problem is discussed in terms of track-to-track association likelihoods. Distributed versions of two popular tracking approaches (joint probabilistic data association and multiple hypothesis tracking) are then presented, and examples of applications are given.

384 citations



Book
12 Mar 1997
TL;DR: Distributed Simulation brings together the many complex technologies for distributed simulation, including object-oriented, multilevel, and multi-resolution simulation, with strong emphasis on emerging simulation methodologies.
Abstract: From the Publisher: Simulation is a multi-disciplinary field, and significantsimulation research is dispersed across multiple fields of study. Distributed computer systems, software design methods, and new simulation techniques offer synergistic multipliers when joined together in a distributed simulation. Systems of most interest to the simulation practitioner are often the most difficult to model and implement. Distributed Simulation brings together the many complex technologies for distributed simulation. There is strong emphasis on emerging simulation methodologies, including object-oriented, multilevel, and multi-resolution simulation. Finally, one concise text provides a strong foundation for the development of high fidelity simulations in heterogeneous distributed computing environments!

208 citations


Journal ArticleDOI
TL;DR: This work introduces self-stabilizing protocols for synchronization that are used as building blocks by the leader-election algorithm and presents a simple, uniform, self-stabilizing ranking protocol.
Abstract: A distributed system is self-stabilizing if it can be started in any possible global state. Once started the system regains its consistency by itself, without any kind of outside intervention. The self-stabilization property makes the system tolerant to faults in which processors exhibit a faulty behavior for a while and then recover spontaneously in an arbitrary state. When the intermediate period in between one recovery and the next faulty period is long enough, the system stabilizes. A distributed system is uniform if all processors with the same number of neighbors are identical. A distributed system is dynamic if it can tolerate addition or deletion of processors and links without reinitialization. In this work, we study uniform dynamic self-stabilizing protocols for leader election under read/write atomicity. Our protocols use randomization to break symmetry. The leader election protocol stabilizes in O(ΔD log n) time when the number of processors is unknown and O(ΔD) otherwise. Here Δ denotes the maximal degree of a node, D denotes the diameter of the graph and n denotes the number of processors in the graph. We introduce self-stabilizing protocols for synchronization that are used as building blocks by the leader-election algorithm. We conclude this work by presenting a simple, uniform, self-stabilizing ranking protocol.

208 citations


Proceedings ArticleDOI
09 Apr 1997
TL;DR: This work proves the DCUR's correctness by showing that it is always capable of constructing a loop-free delay-constrained path within finite time, if such a path exists.
Abstract: We study the NP-hard delay-constrained least-cost path problem, and propose a simple, distributed heuristic solution: the delay-constrained unicast routing (DCUR) algorithm. The DCUR requires limited network state information to be kept at each node: a cost vector and a delay vector. We prove the DCUR's correctness by showing that it is always capable of constructing a loop-free delay-constrained path within finite time, if such a path exists. The worst case message complexity of the DCUR is O(|V|³) messages, where |V| is the number of nodes. However, simulation results show that, on average, the DCUR requires far fewer messages. Therefore, the DCUR scales well to large networks. We also use simulation to compare the DCUR to the optimal algorithm, and to the least-delay path algorithm. Our results show that the DCUR's path costs are within 10% of those of the optimal solution.

203 citations
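DCUR's per-node decision uses only the cost and delay vectors described above. A minimal sketch of the core rule follows (names are hypothetical; the real algorithm also handles loop detection and the message exchange that builds the path):

```python
def dcur_next_hop(node, residual_delay, least_cost_next, least_delay_next,
                  link_delay, min_delay_to_dest):
    """Choose a next hop toward the destination: prefer the least-cost
    direction when the remaining delay budget can still be met by
    following the least-delay path from there onward; otherwise fall
    back to the least-delay direction."""
    lc = least_cost_next[node]   # next hop on the least-cost path
    ld = least_delay_next[node]  # next hop on the least-delay path
    if link_delay[(node, lc)] + min_delay_to_dest[lc] <= residual_delay:
        return lc
    return ld
```

Because the fallback is always the least-delay direction, a feasible delay bound can never be lost by a greedy cost choice, which is the intuition behind the correctness proof mentioned in the abstract.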


Proceedings ArticleDOI
27 May 1997
TL;DR: Two novel suboptimal algorithms for mutual exclusion in distributed systems are presented. One is based on a modification of Maekawa's grid-based quorum scheme, and the resulting scheme is very close to optimal in terms of quorum size.
Abstract: Two novel suboptimal algorithms for mutual exclusion in distributed systems are presented. One is based on the modification of Maekawa's (1985) grid-based quorum scheme. The size of quorums is approximately √2·√N, where N is the number of sites in a network, as compared to 2√N of the original method. The method is simple and geometrically evident. The second one is based on the idea of difference sets in combinatorial theory. The resulting scheme is very close to optimal in terms of quorum size.

162 citations
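For reference, the original grid scheme that the first algorithm modifies can be sketched as follows (illustrative names; the paper's √2·√N construction refines how the row and column segments are chosen so that quorums shrink while still pairwise intersecting):

```python
import math

def grid_quorum(site, n):
    """Quorum of `site` in Maekawa's grid scheme: arrange sites 0..n-1 in a
    sqrt(n) x sqrt(n) grid (n assumed a perfect square for simplicity); the
    quorum is the site's full row plus full column, size 2*sqrt(n) - 1."""
    k = math.isqrt(n)
    r, c = divmod(site, k)
    row = {r * k + j for j in range(k)}
    col = {i * k + c for i in range(k)}
    return row | col
```

Any two quorums intersect because the row of one site always crosses the column of the other, which is what guarantees mutual exclusion.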


Proceedings ArticleDOI
02 Nov 1997
TL;DR: A distributed algorithm for assigning codes in a dynamic, multihop wireless radio network that does not require any form of synchronization and is completely distributed is described and analyzed.
Abstract: This paper describes and analyzes a distributed algorithm for assigning codes in a dynamic, multihop wireless radio network. The algorithm does not require any form of synchronization and is completely distributed. The algorithm can be used for both the transmitter oriented and receiver oriented code assignment. The algorithm is proven to be correct and its complexity is analyzed. The implementation of the code assignment algorithm as part of the medium access control (MAC) and routing protocols of a multihop packet-radio network is discussed.

Journal ArticleDOI
TL;DR: It is proved that, even assuming that the processors know the network topology, Ω(n) rounds are required for solving the problem on a complete network (D=1) with n processors.
Abstract: In this paper, we prove a lower bound on the number of rounds required by a deterministic distributed protocol for broadcasting a message in radio networks whose processors do not know the identities of their neighbors. Such an assumption captures the main characteristic of mobile and wireless environments [3], i.e., the instability of the network topology. For any distributed broadcast protocol Π, for any n and for any D ≤ n/2, we exhibit a network G with n nodes and diameter D such that the number of rounds needed by Π for broadcasting a message in G is Ω(D log n). The result still holds even if the processors in the network use a different program and know n and D. We also consider the version of the broadcast problem in which an arbitrary number of processors issue at the same time an identical message that has to be delivered to the other processors. In such a case we prove that, even assuming that the processors know the network topology, Ω(n) rounds are required for solving the problem on a complete network (D=1) with n processors.

Proceedings ArticleDOI
19 Oct 1997
TL;DR: A distributed algorithm that obtains a (1+ε) approximation to the global optimum solution and runs in a polylogarithmic number of distributed rounds, which is considerably simpler than previous approximation algorithms for positive linear programs, and thus may have practical value in both centralized and distributed settings.
Abstract: Flow control in high speed networks requires distributed routers to make fast decisions based only on local information in allocating bandwidth to connections. While most previous work on this problem focuses on achieving local objective functions, in many cases it may be necessary to achieve global objectives such as maximizing the total flow. This problem illustrates one of the basic aspects of distributed computing: achieving global objectives using local information. Papadimitriou and Yannakakis (1993) initiated the study of such problems in a framework of solving positive linear programs by distributed agents. We take their model further, by allowing the distributed agents to acquire more information over time. We therefore turn attention to the tradeoff between the running time and the quality of the solution to the linear program. We give a distributed algorithm that obtains a (1+ε) approximation to the global optimum solution and runs in a polylogarithmic number of distributed rounds. While comparable in running time, our results exhibit a significant improvement on the logarithmic ratio previously obtained by Awerbuch and Azar (1994). Our algorithm, which draws from techniques developed by Luby and Nisan (1993), is considerably simpler than previous approximation algorithms for positive linear programs, and thus may have practical value in both centralized and distributed settings.

Journal ArticleDOI
TL;DR: Deterministic and randomized self-stabilizing algorithms that maintain a rooted spanning tree in a general network whose topology changes dynamically, which provide for the easy construction of self-stabilizing protocols for numerous tasks.

Book
28 Mar 1997
TL;DR: This innovative book provides the reader with knowledge of the important algorithms necessary for an in-depth understanding of distributed systems and motivates the study of these algorithms by presenting a systems framework for their practical application.
Abstract: From the Publisher: Distributed Operating Systems and Algorithms integrates into one text both the theory and implementation aspects of distributed operating systems for the first time. This innovative book provides the reader with knowledge of the important algorithms necessary for an in-depth understanding of distributed systems; at the same time it motivates the study of these algorithms by presenting a systems framework for their practical application. The first part of the book is intended for use in an advanced course on operating systems and concentrates on parallel systems, distributed systems, real-time systems, and computer networks. The second part of the text is written for a course on distributed algorithms with a focus on algorithms for asynchronous distributed systems. While each of the two parts is self-contained, extensive cross-referencing allows the reader to emphasize either theory or implementation or to cover both elements of selected topics. Features: Integrates and balances coverage of the advanced aspects of operating systems with the distributed algorithms used by these systems. Includes extensive references to commercial and experimental systems to illustrate the concepts and implementation issues. Provides precise algorithm description and explanation of why these algorithms were developed. Structures the coverage of algorithms around the creation of a framework for implementing a replicated server, a prototype for implementing a fault-tolerant and highly available distributed system. Contains programming projects on such topics as sockets, RPC, threads, and implementation of distributed algorithms using these tools. Includes an extensive annotated bibliography for each chapter, pointing the reader to recent developments.
Solutions to selected exercises, templates to programming problems, a simulator for algorithms for distributed synchronization, and teaching tips for selected topics are available to qualified instructors from Addison Wesley.

Proceedings ArticleDOI
16 Nov 1997
TL;DR: The principal objective of this paper is to present an algorithm that overcomes the problems of inconsistent copies, non-respect of users' intentions, and the need to undo and redo certain operations.
Abstract: In a distributed groupware system, objects shared by users are subject to concurrency and real-time constraints. In order to satisfy these, various concurrency control algorithms have been proposed that exploit the semantic properties of operations. By ordering concurrent operations, they guarantee consistency of the different copies of each object. The drawback of these algorithms is that in some situations they can result in inconsistent copies, a non-respect of users' intentions, and the need to undo and redo certain operations. The principal objective of this paper is to present an algorithm that overcomes these problems. The algorithm is based on the notion of user's intention, and also on the construction of equivalent histories by exploiting and combining some general semantic properties such as forward/backward transposition. Keywords: groupware systems, concurrency control, distributed systems, multi-user editors, operation transposition.

Journal ArticleDOI
TL;DR: This work studies the mobile admission control problem in a cellular PCS network where transmitter powers are constrained and controlled by a distributed constrained power control (DCPC) algorithm and derives a soft and safe (SAS) admission algorithm, which is type I and type II error free, and protects the CIR's of all active links at any moment of time.
Abstract: We study the mobile admission control problem in a cellular PCS network where transmitter powers are constrained and controlled by a distributed constrained power control (DCPC) algorithm. Receivers are subject to nonnegligible noise, and the DCPC attempts to bring each receiver's CIR (carrier-to-interference ratio) above a given quality target. Two classes of distributed admission control are considered. One is a noninteractive admission control (N-IAC), where an admission decision is instantaneously made based on the system state. The other is an interactive admission control (IAC), under which the new mobile is permitted to interact with one or more potential channels before a decision is made. The algorithms are evaluated with respect to their execution time and their decision errors. Two types of errors are examined: type I error, where a new mobile is erroneously accepted and results in outage; and type II error, where a new mobile is erroneously rejected and results in blocking. The algorithms in the N-IAC class accept a new mobile if and only if the uplink and the downlink interferences are below certain corresponding thresholds. These algorithms are subject to errors of type I and type II. In the IAC class, we derive a soft and safe (SAS) admission algorithm, which is type I and type II error free, and protects the CIR's of all active links at any moment of time. A fast-SAS version, which is only type I error-free, is proposed for practical implementation, and is evaluated in several case studies.
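The DCPC update that each link runs between admission decisions is simple to state: scale transmit power by the ratio of the CIR target to the measured CIR, capped at the power constraint. A minimal synchronous sketch (all names and numbers here are illustrative assumptions, not the paper's notation):

```python
def dcpc_step(p, gain, noise, target, p_max):
    """One round of distributed constrained power control: each link i
    measures its CIR and scales its power by target/CIR, capped at p_max.
    gain[i][j] is the path gain from transmitter j to receiver i."""
    n = len(p)
    new_p = []
    for i in range(n):
        interference = noise[i] + sum(gain[i][j] * p[j]
                                      for j in range(n) if j != i)
        cir = gain[i][i] * p[i] / interference
        new_p.append(min(p_max, p[i] * target / cir))
    return new_p
```

For a feasible system the iteration converges to the minimal power vector meeting the target; when the p_max cap binds, some links settle below target, which is exactly the outage condition the admission-control algorithms above must predict or avoid.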

Journal ArticleDOI
01 Jan 1997
TL;DR: The features of complex information-carrying environments and the information-gathering task are examined, demonstrating both the utility of viewing information-Gathering as distributed problem-solving and difficulties with viewing it as distributed processing.
Abstract: Two approaches to the problem of information-gathering, that may be characterised as distributed processing and distributed problem-solving, are contrasted. The former is characteristic of most existing information-gathering systems, and the latter is central to research in multi-agent systems. The features of complex information-carrying environments and the information-gathering task are examined, demonstrating both the utility of viewing information-gathering as distributed problem-solving and difficulties with viewing it as distributed processing. A new approach is proposed to information-gathering based on the distributed problem-solving paradigm and its attendant body of research in multi-agent systems and distributed artificial intelligence. This approach, called cooperative information-gathering, involves concurrent, asynchronous discovery and composition of information spread across a network of information servers. Top-level queries drive the creation of partially elaborated information-gathering plans, resulting in the employment of multiple semi-autonomous cooperative agents for the purpose of achieving goals and subgoals within those plans. The system as a whole satisfices, trading off solution quality and search cost while respecting user-imposed deadlines. Current work on distributed and agent-based approaches to information-gathering is also surveyed.

Journal ArticleDOI
Wayne Wolf1
TL;DR: A new, heuristic algorithm which simultaneously synthesizes the hardware and software architectures of a distributed system to meet a performance goal and minimize cost is described.
Abstract: Many embedded computers are distributed systems, composed of several heterogeneous processors and communication links of varying speeds and topologies. This paper describes a new, heuristic algorithm which simultaneously synthesizes the hardware and software architectures of a distributed system to meet a performance goal and minimize cost. The hardware architecture of the synthesized system consists of a network of processors of multiple types and arbitrary communication topology; the software architecture consists of an allocation of processes to processors and a schedule for the processes. Most previous work in co-synthesis targets an architectural template, whereas this algorithm can synthesize a distributed system of arbitrary topology. The algorithm works from a technology database which describes the available processors, communication links, I/O devices, and implementations of processes on processors. Previous work had proposed solving this problem by integer linear programming (ILP); our algorithm is much faster than ILP and produces high-quality results.

Journal ArticleDOI
TL;DR: The main novelty of the IDAMN architecture is its ability to perform intrusion detection in the visited location and within the duration of a typical call, as opposed to existing designs that require the reporting of all call data to the home location in order to perform the actual detection.
Abstract: We present IDAMN (intrusion detection architecture for mobile networks), a distributed system whose main functionality is to track and detect mobile intruders in real time. IDAMN includes two algorithms which model the behavior of users in terms of both telephony activity and migration pattern. The main novelty of our architecture is its ability to perform intrusion detection in the visited location and within the duration of a typical call, as opposed to existing designs that require the reporting of all call data to the home location in order to perform the actual detection. The algorithms and the components of IDAMN have been designed in order to minimize the overhead incurred in the fixed part of the cellular network.

Proceedings ArticleDOI
27 May 1997
TL;DR: It is shown that large scale dynamic caching can be employed to globally minimize server idle time, and hence maximize the aggregate server throughput of the whole service.
Abstract: Document publication service over such a large network as the Internet challenges us to harness available server and network resources to meet fast growing demand. We show that large scale dynamic caching can be employed to globally minimize server idle time, and hence maximize the aggregate server throughput of the whole service. To be efficient, scalable and robust, a successful caching mechanism must have three properties: (1) maximize the global throughput of the system; (2) find cache copies without recourse to a directory service, or to a discovery protocol; and (3) be completely distributed in the sense of operating only on the basis of local information. We develop a precise definition, which we call tree load balance (TLB), of what it means for a mechanism to satisfy these three goals. We present an algorithm that computes TLB offline, and a distributed protocol that induces a load distribution that converges quickly to a TLB one. Both algorithms place cache copies of immutable documents on the routing tree that connects the cached document's home server to its clients, thus enabling requests to stumble on cache copies en route to the home server.


Proceedings ArticleDOI
TL;DR: It is proved that the algorithm is valid as it is stated and that it effectively obtains an upper bound for the worst case response times to external events in distributed systems, since the longest response always occurs within the cases that are currently tested by this algorithm.
Abstract: We investigate the validity of the rate monotonic analysis techniques for distributed hard real time systems. A recent paper has shown that the algorithm developed by K. Tindell and J. Clark (1994) for the analysis of this kind of system was incomplete because it did not test all the possible cases. We prove that the algorithm is valid as it is stated and that it effectively obtains an upper bound for the worst case response times to external events in distributed systems, since the longest response always occurs within the cases that are currently tested by this algorithm. In addition, we extend the analysis technique to determine an upper bound for the local response times of particular actions in a response to an event, thus allowing the definition and verification of local deadlines for elementary actions in distributed systems.

Book ChapterDOI
24 Sep 1997
TL;DR: In this article, an enhanced mechanism to enable reconstruction in the presence of malicious faults, which can intentionally modify their shares of the information, was later presented by Krawczyk.
Abstract: In his well-known Information Dispersal Algorithm paper, Rabin showed a way to distribute information among n processors in such a way that recovery of the information is possible in the presence of up to t inactive processors. An enhanced mechanism to enable reconstruction in the presence of malicious faults, which can intentionally modify their shares of the information, was later presented by Krawczyk. Yet, this method assumed that the malicious faults occur only at reconstruction time.

Proceedings ArticleDOI
27 May 1997
TL;DR: Traditional scheduling algorithms are adapted to the DNS, new policies are proposed, and the advantage of using strategies that schedule requests on the basis of the origin of the clients and very limited state information, such as whether a server is overloaded or not are shown.
Abstract: A distributed Web system, consisting of multiple servers for data retrieval and a Domain Name Server (DNS) for address resolution, can provide the scalability necessary to keep up with growing client demand at popular sites. However, balancing the requests among these atypical distributed servers opens interesting new challenges. Unlike traditional distributed systems in which a centralized scheduler has full control of the system, the DNS controls only a small fraction of the requests reaching the Web site. This makes it very difficult to avoid overloading situations among the multiple Web servers. We adapt traditional scheduling algorithms to the DNS, propose new policies, and examine their impact. Extensive simulation results show the advantage of using strategies that schedule requests on the basis of the origin of the clients and very limited state information, such as whether a server is overloaded or not. Conversely, algorithms that use detailed state information often exhibit the worst performance.
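One way to read "origin of the clients plus very limited state" is the following toy policy, which keeps per-domain assignments and consults only a binary overload flag per server. This is purely illustrative (the paper evaluates several concrete DNS scheduling variants, not this one):

```python
def dns_pick_server(client_domain, servers, overloaded, assignment):
    """Resolve a request using only the client's origin domain and a
    binary overloaded flag per server: stay with the domain's current
    server unless it is flagged overloaded, then move the whole domain
    to some non-overloaded server."""
    s = assignment.get(client_domain)
    if s is not None and not overloaded[s]:
        return s
    for candidate in servers:
        if not overloaded[candidate]:
            assignment[client_domain] = candidate
            return candidate
    return servers[0]  # every server overloaded: degrade gracefully
```

Keeping whole client domains pinned to one server matters because DNS answers are cached by intermediate name servers, so the DNS only sees (and can only redirect) a fraction of the request stream, as the abstract notes.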

Book ChapterDOI
24 Jul 1997
TL;DR: The problems that arise when attempting to utilize the theoretical coalition formation algorithms for a real-world system are presented, how some of their restrictive assumptions can be relaxed are demonstrated, and the resulting benefits are discussed.
Abstract: Incorporating coalition formation algorithms into agent systems shall be advantageous due to the consequent increase in the overall quality of task performance. Coalition formation was addressed in game theory, however the game theoretic approach is centralized and computationally intractable. Recent work in DAI has resulted in distributed algorithms with computational tractability. This paper addresses the implementation of distributed coalition formation algorithms within a real-world multi-agent system. We present the problems that arise when attempting to utilize the theoretical coalition formation algorithms for a real-world system, demonstrate how some of their restrictive assumptions can be relaxed, and discuss the resulting benefits. In addition, we analyze the modifications, the complexity and the quality of the cooperation mechanisms. The task domain of our multi-agent system is information gathering, filtering and decision support within the WWW.

Journal ArticleDOI
TL;DR: The cooperative navigation system (CNS) algorithm described here is based on a Kalman filter which uses inter-robot position sensing to update the collective position estimates of the group.
Abstract: The navigation capability of a group of robots can be improved by sensing of relative inter-robot positions and intercommunication of position estimates and planned trajectories. The cooperative navigation system (CNS) algorithm described here is based on a Kalman filter which uses inter-robot position sensing to update the collective position estimates of the group. Assuming independence of sensing and positioning errors, the CNS algorithm always improves individual robot estimates and the collective navigation performance improves as the number of robots increases. The CNS algorithm computation may be distributed among the robot group. Simulation results and experimental measurements on two Yamabico robots are described.
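The full CNS filter tracks poses and planned trajectories; the heart of the inter-robot update can be illustrated with a 1-D inverse-variance fusion, assuming independent errors as the abstract does (scalar state, hypothetical names):

```python
def fuse_relative(x1, var1, x2, var2, z, var_z):
    """Update robot 2's 1-D position estimate from robot 1's estimate and
    a relative-position measurement z of (x2 - x1) with variance var_z,
    by inverse-variance (Kalman-style) weighting."""
    # Robot 1's estimate plus the relative sighting yields a second,
    # independent estimate of robot 2's position.
    x2_alt = x1 + z
    var_alt = var1 + var_z
    w = var2 / (var2 + var_alt)              # Kalman-style gain
    x2_new = x2 + w * (x2_alt - x2)
    var_new = var2 * var_alt / (var2 + var_alt)
    return x2_new, var_new
```

Since var_new is always below var2 when var_alt is finite, every relative sighting strictly tightens the estimate, which matches the abstract's claim that collective performance improves as the number of robots grows.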

Proceedings Article
01 Jan 1997
TL;DR: An algorithm is presented that uses a fault detector in ◇S(Byz) to solve the consensus problem in an asynchronous distributed system with at most ⌊(n-1)/3⌋ Byzantine faults.
Abstract: Unreliable fault detectors can be used to solve the consensus problem in asynchronous distributed systems that are subject to crash faults. We extend this result to asynchronous distributed systems that are subject to Byzantine faults. We define the class ◇S(Byz) of eventually strong Byzantine fault detectors and the class ◇W(Byz) of eventually weak Byzantine fault detectors and show that any Byzantine fault detector in ◇W(Byz) can be transformed into a Byzantine fault detector in ◇S(Byz). We present an algorithm that uses a fault detector in ◇S(Byz) to solve the consensus problem in an asynchronous distributed system with at most ⌊(n-1)/3⌋ Byzantine faults. The class ◇W(Byz) of Byzantine fault detectors is the weakest class of fault detectors that can be used to solve consensus in such an asynchronous distributed system.


01 Jan 1997
TL;DR: In this paper, the authors present an object-based framework for developing wide-area distributed applications based on the concept of a distributed shared object, which has the characteristic feature that its state can be physically distributed across multiple machines at the same time.
Abstract: Developing large-scale wide-area applications requires an infrastructure that is presently lacking. Currently, most Internet applications have to be built on top of raw communication services, such as TCP connections. All additional services, including those for naming, replication, migration, persistence, fault tolerance, and security, have to be implemented for each application anew. Not only is this a waste of effort, it also makes interoperability between different applications difficult or even impossible. The authors present a novel, object-based framework for developing wide-area distributed applications. The framework is based on the concept of a distributed shared object, which has the characteristic feature that its state can be physically distributed across multiple machines at the same time. All implementation aspects, including communication protocols, replication strategies, and distribution and migration of state, are part of each object and are hidden behind its interface. The current performance problems of the World-Wide Web are taken as an example to illustrate the benefit of encapsulating state, operations, and implementation strategies on a per-object basis. The authors describe how distributed objects can be used to implement worldwide scalable Web documents.