
Showing papers on "Distributed algorithm published in 1993"


Journal ArticleDOI
TL;DR: The algorithm can be used as a building block for solving other distributed graph problems, and can be slightly modified to run on a strongly connected digraph to generate an Euler trail or to report that no Euler trail exists.

13,828 citations


Journal ArticleDOI
TL;DR: For wireless cellular communication systems, one seeks a simple effective means of power control of signals associated with randomly dispersed users that are reusing a single channel in different cells, and the authors demonstrate exponentially fast convergence to these settings whenever power settings exist for which all users meet the rho requirement.
Abstract: For wireless cellular communication systems, one seeks a simple effective means of power control of signals associated with randomly dispersed users that are reusing a single channel in different cells. By effecting the lowest interference environment, in meeting a required minimum signal-to-interference ratio of rho per user, channel reuse is maximized. Distributed procedures for doing this are of special interest, since the centrally administered alternative entails added infrastructure, latency, and network vulnerability. Successful distributed powering entails guiding the evolution of the transmitted power level of each of the signals, using only local measurements, so that eventually all users meet the rho requirement. The local per channel power measurements include that of the intended signal as well as the undesired interference from other users (plus receiver noise). For a certain simple distributed type of algorithm, whenever power settings exist for which all users meet the rho requirement, the authors demonstrate exponentially fast convergence to these settings.
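The simple distributed algorithm alluded to here is widely known as the Foschini-Miljanic iteration: each user multiplies its transmit power by the ratio of the target SIR rho to its currently measured SIR, using only that local measurement. A minimal sketch, with the gain matrix, noise levels, and user count all invented for illustration:

```python
# Hypothetical 3-user, single-channel instance: G[i][j] is the link gain
# from transmitter j to receiver i; rho is the target SIR. All values invented.
G = [[1.0, 0.1, 0.1],
     [0.1, 1.0, 0.1],
     [0.1, 0.1, 1.0]]
noise = [0.01, 0.01, 0.01]
rho = 3.0

def sir(p, i):
    # Signal-to-interference ratio seen by user i under power vector p.
    interference = sum(G[i][j] * p[j] for j in range(len(p)) if j != i) + noise[i]
    return G[i][i] * p[i] / interference

def dpc_step(p):
    # Each user rescales its own power using only its locally measured SIR.
    return [rho / sir(p, i) * p[i] for i in range(len(p))]

p = [1.0, 1.0, 1.0]
for _ in range(50):
    p = dpc_step(p)

print(all(sir(p, i) >= rho - 1e-6 for i in range(3)))  # True (targets met)
```

When a feasible power vector exists, this iteration converges geometrically to the minimal one, matching the exponentially fast convergence claimed in the abstract.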

1,831 citations


Journal ArticleDOI
TL;DR: This paper presents snapshot algorithms for determining a consistent global state of a distributed system without significantly affecting the underlying computation, drawing on distributed termination detection principles.
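The canonical marker-based construction in this family is the Chandy-Lamport snapshot. Below is a single-threaded toy simulation of its rules, assuming FIFO channels; the two processes, the channels, and the "bank balance" state are all invented for illustration:

```python
from collections import deque

MARKER = "MARKER"

class Proc:
    def __init__(self, name, balance):
        self.name, self.balance = name, balance
        self.snapshot = None   # recorded local state
        self.recording = {}    # in-flight messages recorded per incoming channel
        self.closed = set()    # incoming channels whose recorded state is final

procs = {"p": Proc("p", 100), "q": Proc("q", 50)}
chans = {("p", "q"): deque(), ("q", "p"): deque()}

def take_snapshot(p):
    # Record local state, then send a marker on every outgoing channel.
    p.snapshot = p.balance
    for c in chans:
        if c[0] == p.name:
            chans[c].append(MARKER)

def deliver(chan):
    msg = chans[chan].popleft()
    p = procs[chan[1]]
    if msg == MARKER:
        if p.snapshot is None:            # first marker: record local state
            take_snapshot(p)
        p.closed.add(chan)                # stop recording this channel
    else:
        p.balance += msg                  # apply a money transfer
        if p.snapshot is not None and chan not in p.closed:
            p.recording.setdefault(chan, []).append(msg)

# p transfers 10 to q, snapshots, then q transfers 5 to p.
procs["p"].balance -= 10; chans[("p", "q")].append(10)
take_snapshot(procs["p"])
procs["q"].balance -= 5; chans[("q", "p")].append(5)
while any(chans.values()):
    for c in list(chans):
        if chans[c]:
            deliver(c)

in_flight = sum(sum(r) for p in procs.values() for r in p.recording.values())
total = procs["p"].snapshot + procs["q"].snapshot + in_flight
print(total)  # 150: equals the real total, so the recorded cut is consistent
```

The recorded states plus the messages recorded in flight form a consistent cut: the snapshot conserves the total balance even though neither process is ever paused.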

368 citations


Book
01 May 1993
TL;DR: This paper considers global predicate evaluation as a canonical problem in order to survey concepts and mechanisms that are useful in coping with uncertainty in distributed computation, and illustrates the utility of the developed techniques by examining distributed deadlock detection and distributed debugging as two instances of global predicate evaluation.
Abstract: Distributed systems that span large geographic distances or interconnect large numbers of components are adequately modeled as asynchronous systems. Given the uncertainties in such systems that arise from communication delays and relative speeds of computations, reasoning about global states has to be carried out using local, and often, imperfect information. In this paper, we consider global predicate evaluation as a canonical problem in order to survey concepts and mechanisms that are useful in coping with uncertainty in distributed computation. We illustrate the utility of the developed techniques by examining distributed deadlock detection and distributed debugging as two instances of global predicate evaluation.

319 citations


Journal ArticleDOI
TL;DR: A family of distributed algorithms for the dynamic computation of the shortest paths in a computer network or internet is presented, validated, and analyzed, and these algorithms are shown to converge in finite time after an arbitrary sequence of link cost or topological changes.
Abstract: A family of distributed algorithms for the dynamic computation of the shortest paths in a computer network or internet is presented, validated, and analyzed. According to these algorithms, each node maintains a vector with its distance to every other node. Update messages from a node are sent only to its neighbors; each such message contains a distance vector of one or more entries, and each entry specifies the length of the selected path to a network destination, as well as an indication of whether the entry constitutes an update, a query, or a reply to a previous query. The new algorithms treat the problem of distributed shortest-path routing as one of diffusing computations, which was first proposed by Dijkstra and Scholten (1980). They improve on a number of algorithms introduced previously. The new algorithms are shown to converge in finite time after an arbitrary sequence of link cost or topological changes, to be loop-free at every instant, and to outperform all other loop-free routing algorithms previously proposed from the standpoint of the combined temporal, message, and storage complexities.
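The per-node state these algorithms maintain is the classic distance vector. The sketch below shows only a synchronous Bellman-Ford-style relaxation of that state, not the loop-free query/reply (diffusing computation) machinery that is the paper's actual contribution; the topology and link costs are invented:

```python
# Minimal synchronous distance-vector relaxation: each node keeps a vector of
# distance estimates and updates it only from its neighbors' vectors.
INF = float("inf")
links = {("a", "b"): 1, ("b", "c"): 2, ("a", "c"): 10}  # undirected costs

nodes = {"a", "b", "c"}
nbrs = {n: {} for n in nodes}
for (u, v), c in links.items():
    nbrs[u][v] = c
    nbrs[v][u] = c

# dist[n][d]: node n's current estimate of its distance to destination d.
dist = {n: {d: (0 if n == d else INF) for d in nodes} for n in nodes}

def sync_round():
    # One synchronous exchange: every node relaxes against its neighbors.
    changed = False
    new = {n: dict(dist[n]) for n in nodes}
    for n in nodes:
        for d in nodes:
            if d == n:
                continue
            best = min(c + dist[m][d] for m, c in nbrs[n].items())
            if best != dist[n][d]:
                new[n][d] = best
                changed = True
    dist.update(new)
    return changed

while sync_round():
    pass
print(dist["a"]["c"])  # 3, via b (cheaper than the direct cost-10 link)
```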

302 citations


Journal ArticleDOI
TL;DR: A control structure called a superimposition is proposed; it captures a kind of modularity natural for distributed programming that previously has been treated using a macro-like implantation of code.
Abstract: A control structure called a superimposition is proposed. The structure contains schematic abstractions of processes called roletypes in its declaration. Each roletype may be bound to processes from a basic distributed algorithm, and the operations of the roletype will then execute interleaved with those of the basic processes, over the same state space. This structure captures a kind of modularity natural for distributed programming, which previously has been treated using a macro-like implantation of code. The elements of a superimposition are identified, a syntax is suggested, correctness criteria are defined, and examples are presented.

193 citations


Proceedings ArticleDOI
01 May 1993
TL;DR: TickerTAIP is described, a parallel architecture for disk arrays that distributes the controller functions across several loosely coupled processors; the approach is shown to be feasible, useful, and effective.
Abstract: Traditional disk arrays have a centralized architecture, with a single controller through which all requests flow. Such a controller is a single point of failure, and its performance limits the maximum size to which the array can grow. We describe here TickerTAIP, a parallel architecture for disk arrays that distributes the controller functions across several loosely coupled processors. The result is better scalability, fault tolerance, and flexibility. This paper presents the TickerTAIP architecture and an evaluation of its behavior. We demonstrate its feasibility by an existence proof; describe a family of distributed algorithms for calculating RAID parity; discuss techniques for establishing request atomicity, sequencing, and recovery; and evaluate the performance of the TickerTAIP design both in absolute terms and by comparison to a centralized RAID implementation. We conclude that the TickerTAIP architectural approach is feasible, useful, and effective.
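RAID parity itself is a plain XOR across the data blocks, which is what makes distributing its calculation natural: partial XORs can be computed wherever the data lives and combined in any order. A toy sketch with invented block contents:

```python
from functools import reduce

def xor_blocks(a, b):
    # XOR two equal-length blocks byte by byte.
    return bytes(x ^ y for x, y in zip(a, b))

data = [b"\x01\x02", b"\x10\x20", b"\x0f\xf0"]   # three data blocks

# Each node could compute a partial XOR over its local blocks; combining the
# partials (in any order) yields the parity block.
parity = reduce(xor_blocks, data)

# Lose block 1 and rebuild it from the parity plus the surviving blocks.
rebuilt = reduce(xor_blocks, [parity, data[0], data[2]])
print(rebuilt == data[1])  # True
```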

179 citations


Journal ArticleDOI
Avraham Leff1, Joel L. Wolf1, Philip S. Yu1
TL;DR: Performance of the distributed algorithms is found to be close to optimal, while that of the greedy algorithms is far from optimal.
Abstract: Studies the cache performance in a remote caching architecture. The authors develop a set of distributed object replication policies that are designed to implement different optimization goals. Each site is responsible for local cache decisions, and modifies cache contents in response to decisions made by other sites. The authors use the optimal and greedy policies as upper and lower bounds, respectively, for performance in this environment. Critical system parameters are identified, and their effect on system performance studied. Performance of the distributed algorithms is found to be close to optimal, while that of the greedy algorithms is far from optimal.

135 citations


Journal ArticleDOI
TL;DR: The integration of mobile/portable computers within existing static networks introduces a new set of issues to distributed computations, and the unique physical features of a mobile computing environment need to be considered as well when designing distributed algorithms for such systems.
Abstract: The integration of mobile/portable computers within existing static networks introduces a new set of issues to distributed computations. A mobile host can connect to the network from different locations at different times; distributed algorithms, therefore, cannot rely on the assumption that a participant maintains a fixed and universally known location in the network at all times. Mobile hosts communicate with the rest of the network via a wireless broadcast medium, and portable computers such as laptops and palmtops often operate in a "disconnected" or "doze" mode to conserve battery power. These unique physical features of a mobile computing environment need to be considered as well when designing distributed algorithms for such systems.

122 citations


Journal ArticleDOI
TL;DR: It is argued that object-oriented distributed computing is a natural step forward from client-server systems; to support this claim, the differing levels of object-oriented support already found in commercially available distributed systems are discussed.
Abstract: The basic properties of object orientation and their application to heterogeneous, autonomous, and distributed systems to increase interoperability are examined. It is argued that object-oriented distributed computing is a natural step forward from client-server systems. To support this claim, the differing levels of object-oriented support already found in commercially available distributed systems, in particular the Distributed Computing Environment of the Open Software Foundation and the Cronus system of Bolt Beranek and Newman (BBN), are discussed. Emerging object-oriented systems and standards are described, focusing on the convergence toward a least-common-denominator approach to object-oriented distributed computing embodied by the Object Management Group's Common Object Request Broker Architecture.

120 citations


Proceedings ArticleDOI
25 May 1993
TL;DR: The authors begin by introducing the notion of a purely replicated architecture and then present GroupDesign, a shared drawing tool implemented with this architecture that gives the best response time for the interface and reduces the number of undo and redo operations when conflicts occur.
Abstract: Computer supported cooperative work (CSCW) is a rapidly growing field. Real-time groupware systems are addressed that allow a group of users to edit a shared document. The architecture and concurrency control algorithm used in this system are described. The algorithm is based on the semantics of the application and can be used by the developers of other groupware systems. The authors begin by introducing the notion of a purely replicated architecture and then present GroupDesign, a shared drawing tool implemented with this architecture. They then present the main parts of the algorithm that implement the distribution. The algorithm gives the best response time for the interface and reduces the number of undo and redo operations when conflicts occur.

Proceedings ArticleDOI
Juan A. Garay1, Shay Kutten1, David Peleg
03 Nov 1993
TL;DR: This paper proposes that a more sensitive parameter is the network's diameter Diam, and provides a distributed minimum-weight spanning tree algorithm whose time complexity is sub-linear in n, but linear in Diam (specifically, O(Diam + n^0.614)).
Abstract: This paper considers the question of identifying the parameters governing the behavior of fundamental global network problems. Many papers on distributed network algorithms consider the task of optimizing the running time successful when an O(n) bound is achieved on an n-vertex network. We propose that a more sensitive parameter is the network's diameter Diam. This is demonstrated in the paper by providing a distributed minimum-weight spanning tree algorithm whose time complexity is sub-linear in n, but linear in Diam (specifically, O(Diam + n^0.614)). Our result is achieved through the application of graph decomposition and edge elimination techniques that may be of independent interest.

Journal ArticleDOI
TL;DR: It is shown that this problem can be reduced to the problem of finding a minimal shortest path from each node to the destination in a modified network, and a distributed algorithm on the original network that simulates a shortest-paths algorithm running on the modified network is presented.
Abstract: Distributed algorithms for finding two disjoint paths of minimum total length from each node to a destination are presented. The algorithms have both node-disjoint and link-disjoint versions and provide each node with information sufficient to forward data on the disjoint paths. It is shown that this problem can be reduced to the problem of finding a minimal shortest path from each node to the destination in a modified network, and a distributed algorithm on the original network that simulates a shortest-paths algorithm running on the modified network is presented. The algorithm has a smaller space complexity than any previous distributed algorithm for the same problem, and a method for forwarding packets is presented that does not require any additional space complexity. A synchronous implementation of the algorithm is also presented and studied.

Journal ArticleDOI
TL;DR: It is shown that the termination detection problem for distributed computations can be modeled as an instance of the garbage collection problem and algorithms for thetermination detection problem are obtained by applying transformations to garbage collection algorithms.
Abstract: It is shown that the termination detection problem for distributed computations can be modeled as an instance of the garbage collection problem. Consequently, algorithms for the termination detection problem are obtained by applying transformations to garbage collection algorithms. The transformation can be applied to collectors of the "mark-and-sweep" type as well as to reference-counting collectors; it transforms the reference-counting protocol of Lermen and Maurer, the weighted-reference-counting protocol, the local-reference-counting protocol, and Ben-Ari's mark-and-sweep collector into termination detection algorithms. Known termination detection algorithms as well as new variants are obtained.
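One of the named collectors, weighted reference counting, turns into a termination detector roughly as follows: every active task carries a share of a fixed total weight, spawning a subtask splits the share, and finishing returns it to the detector; the computation has terminated exactly when the detector has recovered the full weight. The task tree and detector interface below are invented for illustration:

```python
from fractions import Fraction  # exact arithmetic, so shares sum back to 1

class Detector:
    def __init__(self):
        self.returned = Fraction(0)
    def terminated(self):
        # Termination <=> all weight handed out has come back.
        return self.returned == 1

det = Detector()
pending = [(Fraction(1), 2)]   # (weight, remaining spawn depth) of the root task

while pending:
    weight, depth = pending.pop()
    if depth > 0:
        # Spawn two subtasks, each carrying half of this task's weight.
        pending.append((weight / 2, depth - 1))
        pending.append((weight / 2, depth - 1))
    else:
        det.returned += weight     # task finished: return its weight
print(det.terminated())  # True: every leaf returned its share
```

Because no weight can be lost or duplicated, the detector never declares termination early, regardless of message or scheduling order.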

Journal ArticleDOI
TL;DR: A number of distributed algorithms that make use of synchronized clocks are discussed, and how clocks are used in these algorithms is analyzed.
Abstract: Synchronized clocks are interesting because they can be used to improve performance of a distributed system by reducing communication. Since they have only recently become a reality in distributed systems, their use in distributed algorithms has received relatively little attention. This paper discusses a number of distributed algorithms that make use of synchronized clocks and analyzes how clocks are used in these algorithms.

Journal ArticleDOI
TL;DR: The authors present a routing algorithm that uses the depth first search approach combined with a backtracking technique to route messages on the star graph in the presence of faulty links and provides a performance analysis for the case where an optimal path does not exist.
Abstract: The authors present a routing algorithm that uses the depth-first search approach combined with a backtracking technique to route messages on the star graph in the presence of faulty links. The algorithm is distributed and requires no global knowledge of faults. The only knowledge required at a node is the state of its incident links. The routed message carries information about the followed path and the visited nodes. The algorithm routes messages along the optimal, i.e., the shortest, path if no faults are encountered or if the faults are such that an optimal path still exists. In the absence of an optimal path, the algorithm always finds a path between two nodes within a bounded number of hops if the two nodes are connected. Otherwise, it returns the message to the originating node. The authors provide a performance analysis for the case where an optimal path does not exist. They prove that for a maximum of n-2 faults on a graph with N = n! nodes, at most 2i+2 steps are added to the path, where i is O(√n). Finally, they use the routing algorithm to present an efficient broadcast algorithm on the star graph in the presence of faults.

Book ChapterDOI
01 Jan 1993
TL;DR: It is shown that the column-oriented approach to sparse Cholesky for distributed-memory machines is not scalable. By considering message volume, node contention, and bisection width, one may obtain lower bounds on the time required for communication in a distributed algorithm.
Abstract: We shall say that a scalable algorithm achieves efficiency that is bounded away from zero as the number of processors and the problem size increase in such a way that the size of the data structures increases linearly with the number of processors. In this paper we show that the column-oriented approach to sparse Cholesky for distributed-memory machines is not scalable. By considering message volume, node contention, and bisection width, one may obtain lower bounds on the time required for communication in a distributed algorithm. Applying this technique to distributed, column-oriented, dense Cholesky leads to the conclusion that N (the order of the matrix) must scale with P (the number of processors), so that storage grows like P^2. So the algorithm is not scalable. Identical conclusions have previously been obtained by consideration of communication and computation latency on the critical path in the algorithm; these results complement and reinforce that conclusion.

Proceedings ArticleDOI
01 Dec 1993
TL;DR: This paper formally defines a class of unstable non-monotonic global predicates, proposes a distributed algorithm to detect their occurrences, and gives a sketch of a proof of correctness of this algorithm.
Abstract: This paper deals with a class of unstable non-monotonic global predicates, called herein atomic sequences of predicates. Such global predicates are defined for distributed programs built with message-passing communication only (no shared memory), and they describe global properties by causal composition of local predicates augmented with atomicity constraints. These constraints specify forbidden properties whose occurrence invalidates causal sequences. This paper formally defines these atomic sequences of predicates, proposes a distributed algorithm to detect their occurrences, and gives a sketch of a proof of correctness of this algorithm.

Journal ArticleDOI
J.W. Stamos1, H.C. Young1
TL;DR: It is claimed that SFR improves the worst-case cost for a distributed join, but it will not displace specialized distributed join algorithms when the latter are applicable.
Abstract: It is shown that the fragment and replicate (FR) distributed join algorithm is a special case of the symmetric fragment and replicate (SFR) algorithm, which improves the FR algorithm by reducing its communication. The SFR algorithm, like the FR algorithm, is applicable to N-way joins and nonequijoins and does tuple balancing automatically. The authors derive formulae that show how to minimize the communication in the SFR algorithm, discuss its performance on a parallel database prototype, and evaluate its practicality under various conditions. It is claimed that SFR improves the worst-case cost for a distributed join, but it will not displace specialized distributed join algorithms when the latter are applicable.

01 Jan 1993
TL;DR: This presentation focuses on the class of Conflict Resolution Algorithms, which exhibits very good performance characteristics for "bursty" computer communications traffic, including high capacity, low delay under light traffic conditions, and inherent stability.
Abstract: Multiple Access protocols are distributed algorithms that enable a set of geographically dispersed stations to communicate using a single, common, broadcast channel. We concentrate on the class of Conflict Resolution Algorithms. This class exhibits very good performance characteristics for "bursty" computer communications traffic, including high capacity, low delay under light traffic conditions, and inherent stability. One algorithm in this class achieves the highest capacity among all known multiple-access protocols for the infinite-population Poisson model. Indeed, this capacity is not far from a theoretical upper bound. After surveying the most important and influential Conflict Resolution Algorithms, the emphasis in our presentation is shifted to methods for their analysis and results of their performance evaluation. We also discuss some extensions of the basic protocols and performance results for non-standard environments, such as Local Area Networks, satellite channels, channels with errors, etc., providing a comprehensive bibliography.

1. Conflict Resolution Based Random Access Protocols

The ALOHA protocols were a breakthrough in the area of multiple access communications [1]. They delivered, more or less, what they advertised, i.e., low delay for bursty, computer-generated traffic. They suffer, however, from stability problems and low capacity [2]. The next major breakthrough in the area of multiple access communications was the development of random access protocols that resolve conflicts algorithmically. The invention of Conflict Resolution Algorithms (CRAs) is usually attributed to Capetanakis [Capet78, Capet79, Capet79b] and, independently, to Tsybakov and Mikhailov [Tsyba78]. The same idea, in a slightly different context, was presented earlier by Hayes [Hayes78]. Later, it was recognized [Berge84, Wolf85] that the underlying idea had been known for a long time in the context of Group Testing [Dorfm43, Sobel59, Ungar60].

Group Testing was developed during World War II to speed up the processing of syphilis blood tests. Since the administered test had high sensitivity, it was suggested [Dorfm43] that many blood samples could be pooled together. The result of the test would then be positive if, and only if, there was at least one diseased sample in the pool, in which case individual tests were administered to isolate the diseased samples. Later, it was suggested that, after the first diseased sample was isolated, the remaining samples could again be pooled for further testing. The beginnings of a general theory of Group Testing can be found in [Sobel59], where, as pointed out in [Wolf85], a tree search algorithm is suggested, similar to the ones we present in section 1.2.

The first application of Group Testing to communications arose when Hayes proposed a new, and more efficient, polling algorithm that he named probing [Hayes78]. Standard polling schemes are unacceptable for large sets of bursty stations because the overhead is proportional to the number of stations in the system and independent of the amount of traffic. Hayes' main idea was to shorten the polling cycle by having the central controller query subsets of the total population to discover whether these subsets contain stations with waiting packets. If the response is negative, the whole subset is "eliminated" in a single query. If the response is positive, the group is split into two subgroups and the process continues, recursively, until a single active station is polled. This station is then allowed to transmit some data, which does not have to be in the form of constant-size packets. Clearly, this is a reservation protocol. In subsequent papers Hayes has also considered direct transmission systems. Notice that the controller receives feedback in the form something/nothing (at least one station, or no station, with waiting packets).

1.1. Basic Assumptions

The protocols presented in this section have been developed and analyzed on the basis of a set of common assumptions [3] that describe a standard environment usually called an ALOHA-type channel.

1. Synchronous (slotted) operation: The common-receiver model of a broadcast channel is usually implicitly assumed. Furthermore, messages are split into packets of fixed size. All transmitters are (and remain) synchronized, and may initiate transmissions only at predetermined times, one packet transmission time apart. The time between two successive allowable packet transmission times is called a slot and is usually taken as the time unit. Thus, if more than one packet is transmitted during the same slot, they are "seen" by the receiver simultaneously, and therefore overlap completely.

2. Errorless channel: If a given slot contains a single packet transmission, then the packet will be received correctly (by the common receiver).

[1] For an introduction to the area of multiple access communications see the books by Bertsekas and Gallager [Berts92, chapter 4] and [Rom90]. Chapter 4 of [Berts92] and chapter 5 of [Rom90] also present good expositions of Conflict Resolution Algorithms.
[2] If no special control is exercised to stabilize the protocols, the term capacity must be taken in the "broader" sense of the maximum throughput maintained during considerable periods of time, since the true capacity is zero [Fergu75, Fayol77, Aldou87]. However, having to stabilize the protocols detracts from their initial appeal, which was mainly due to their simplicity.
[3] Some of the protocols can operate with some of the assumptions weakened. When this is the case we point it out during their presentation. In section 6 we discuss protocols and analysis techniques that weaken or modify some of these assumptions.
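The basic tree (splitting) algorithm attributed above to Capetanakis and to Tsybakov and Mikhailov can be sketched in a few lines: after a collision, each involved station flips a coin to join one of two subgroups, and the subgroups are resolved recursively, depth-first. A toy slot-count simulation with invented station IDs:

```python
import random

def resolve(stations, rng):
    """Return the number of slots used until every station's packet gets through."""
    slots = 1                      # everyone in `stations` transmits in this slot
    if len(stations) <= 1:
        return slots               # an idle slot or a successful transmission
    # Collision: each station flips a fair coin to pick a subgroup, and the
    # two subgroups are resolved recursively (left first, then right).
    left = [s for s in stations if rng.random() < 0.5]
    right = [s for s in stations if s not in left]
    return slots + resolve(left, rng) + resolve(right, rng)

rng = random.Random(1)
slots = resolve(["s1", "s2", "s3"], rng)
print(slots)   # total slots needed to resolve the 3-way collision
```

For three colliding stations the count is always an odd number of at least five slots: three successes plus at least two collision slots spent splitting (idle slots may add more).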

Book ChapterDOI
28 Jun 1993
TL;DR: This paper uses results about I/O automata to extract a set of proof obligations for showing that the behaviors of one algorithm are among those of another, and uses the Larch tools for specification and deduction to discharge these obligations in a natural and easy-to-read fashion.
Abstract: This paper presents a scalable approach to reasoning formally about distributed algorithms. It uses results about I/O automata to extract a set of proof obligations for showing that the behaviors of one algorithm are among those of another, and it uses the Larch tools for specification and deduction to discharge these obligations in a natural and easy-to-read fashion. The approach is demonstrated by proving the behavior equivalence of two high-level specifications for a communication protocol.

Journal ArticleDOI
TL;DR: In this paper, a large class of problems that can be solved using logical clocks as if they were perfectly synchronized clocks is formally characterized, and a broadcast primitive is also proposed to simplify the task of designing and verifying distributed algorithms.
Abstract: Time and knowledge are studied in synchronous and asynchronous distributed systems. A large class of problems that can be solved using logical clocks as if they were perfectly synchronized clocks is formally characterized. For the same class of problems, a broadcast primitive that can be used as if it achieves common knowledge is also proposed. Thus, logical clocks and the broadcast primitive simplify the task of designing and verifying distributed algorithms: The designer can assume that processors have access to perfectly synchronized clocks and the ability to achieve common knowledge.
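The logical clocks referred to here follow Lamport's two update rules: increment on every local event, and on receipt advance past the incoming timestamp. A minimal sketch with an invented two-process event sequence:

```python
class Clock:
    def __init__(self):
        self.t = 0
    def tick(self):
        # Local event (including a send): advance the clock by one.
        self.t += 1
        return self.t
    def recv(self, msg_t):
        # Receive: jump past the message timestamp, then count the event.
        self.t = max(self.t, msg_t) + 1
        return self.t

a, b = Clock(), Clock()
t_send = a.tick()        # a: send event, timestamp 1
t_recv = b.recv(t_send)  # b: receive, clock jumps to 2
t_next = b.tick()        # b: later local event, timestamp 3
print(t_send < t_recv < t_next)  # True: timestamps respect happened-before
```

This is the sense in which logical clocks can stand in for perfectly synchronized ones: any algorithm that only needs timestamps consistent with causal order cannot tell the difference.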

Book ChapterDOI
27 Sep 1993
TL;DR: It is proved that, in the asynchronous model of distributed systems, no protocols exist that are both self-stabilizing and fault-tolerant for a wide range of problems, including determining (even approximately) the size of the distributed system and leader election.
Abstract: We investigate the possibility of designing protocols that are both self-stabilizing and fault-tolerant in the asynchronous model of distributed systems. We prove that no such protocols exist for a wide range of problems, including determining (even approximately) the size of the distributed system and leader election. All of these problems are solvable in asynchronous systems using (randomized) protocols that are only fault-tolerant (but not self-stabilizing), or only self-stabilizing (but not fault-tolerant). We then focus on the problem of computing distinct names for the processors in a ring. We give three (randomized) protocols that solve this problem in three settings satisfying increasingly weak assumptions: when processors know the exact size n of the ring; when they only know an upper bound on n; and when they have no information about n.

Journal ArticleDOI
TL;DR: A distributed algorithm is presented for constructing a nearly optimal Steiner tree in an asynchronous network represented by a weighted communication graph G = (V, E, c), connecting a given subset of the nodes of G.

Book ChapterDOI
26 Jul 1993
TL;DR: This paper proposes to hide low-level functions behind object-oriented abstractions such as object-groups, Remote Method Calling, and Smart Proxies, and describes how the Electra toolkit provides such object- oriented abstractions in a portable and highly machine-independent way.
Abstract: Under many circumstances, the development of distributed applications greatly benefits from mechanisms like process groups, reliable ordered multicast, and message passing. However, toolkits offering these capabilities are often low-level and therefore difficult to program. To ease the development of distributed applications, in this paper we propose to hide these low-level functions behind object-oriented abstractions such as object-groups, Remote Method Calling, and Smart Proxies. Furthermore, we describe how the Electra toolkit provides such object-oriented abstractions in a portable and highly machine-independent way.

Book ChapterDOI
26 Jul 1993
TL;DR: GARF, as discussed by the authors, is an object-oriented programming environment aimed at supporting the design of reliable distributed applications; its computational model is based on two programming levels: the functional level and the behavioral level.
Abstract: GARF is an object-oriented programming environment aimed at supporting the design of reliable distributed applications. Its computational model is based on two programming levels: the functional level and the behavioral level. At the functional level, software functionalities are described using passive objects, named data objects, in a centralized, volatile, and failure-free environment. At the behavioral level, data objects are dynamically bound to encapsulators and mailers, which support distribution, concurrency, persistence, and fault tolerance. Encapsulators wrap data objects by controlling how the latter send and receive messages, while mailers perform communications between encapsulators. This paper describes how the GARF computational model makes it possible to build flexible and highly modular abstractions for the design of reliable distributed applications.

Proceedings ArticleDOI
J.F. Whitehead1
23 May 1993
TL;DR: The author presents a comprehensive study of dynamic channel assignment and power control algorithms which introduces a simple method to estimate the potential capacity of any such system, indicating that interference adaptation is the dominant source of gain in typical settings.
Abstract: The author presents a comprehensive study of dynamic channel assignment and power control (DCA/PC) algorithms which introduces a simple method to estimate the potential capacity of any such system. It is indicated that interference adaptation is the dominant source of gain in typical settings, compared to trunking efficiency. Simulation studies show that distributed DCA/PC algorithms can achieve much of the potential gain (capacity gains of 2-4 are observed where the potential gain is estimated to be about 5).

Journal ArticleDOI
TL;DR: A centralized 'Traffic' algorithm that can be used as a performance benchmark is presented, along with a distributed 'Degree' algorithm that is a traffic-sensitized version of an algorithm developed by A. Ephremides and T. Truong (1990).
Abstract: The generation of transmission schedules for self-organizing radio networks by traffic-sensitive algorithms is described. A centralized 'Traffic' algorithm that can be used as a performance benchmark is presented. Also described is a distributed 'Degree' algorithm that is a traffic-sensitized version of an algorithm developed by A. Ephremides and T. Truong (1990). Two performance measures for comparing schedules and simulation results are also presented.

Proceedings ArticleDOI
19 Apr 1993
TL;DR: A distributed algorithm for dynamic data replication of an object in a distributed system is presented and it is shown that the cost of the algorithm is within a constant factor of the lower bound.
Abstract: A distributed algorithm for dynamic data replication of an object in a distributed system is presented. The algorithm changes the number of replicas and their location in the distributed system to optimize the amount of communication. The algorithm dynamically adapts the replication scheme of an object to the pattern of read-write requests in the distributed system. It is shown that the cost of the algorithm is within a constant factor of the lower bound.
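As a flavor of such adaptivity, the sketch below uses an invented heuristic (not the paper's algorithm, whose cost is provably within a constant factor of the lower bound): a site counts local reads against the remote writes it must apply, and holds a replica only while reads dominate:

```python
# Invented adaptive-replication heuristic, for illustration only: holding a
# replica saves remote reads but costs one update message per remote write.
class Site:
    def __init__(self, name, replica=False):
        self.name, self.replica = name, replica
        self.reads, self.remote_writes = 0, 0

    def observe(self, op, local):
        if op == "read" and local:
            self.reads += 1
        elif op == "write" and not local:
            self.remote_writes += 1

    def adapt(self):
        # Holding a replica pays off when saved reads outweigh write updates.
        self.replica = self.reads > self.remote_writes

s = Site("cache-1")            # name is hypothetical
for _ in range(5):
    s.observe("read", local=True)
s.observe("write", local=False)
s.adapt()
print(s.replica)  # True: a read-heavy site acquires a replica
```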

Journal ArticleDOI
TL;DR: A transformer takes a distributed algorithm whose message complexity is O(f · m) and produces a new distributed algorithm to solve the same problem with O(n log n + m log n) message complexity, where n and m are the total number of nodes and links in the network.