Topic

Distributed algorithm

About: Distributed algorithm is a research topic. Over its lifetime, 20,416 publications have been published within this topic, receiving 548,109 citations.


Papers
Book Chapter
09 Jul 2000
TL;DR: Three new deterministic distributed algorithms are developed for broadcasting in radio networks, where one node knows a message that needs to be learned by all the remaining nodes; the fastest of them improves the performance for general networks, running in time O(n^{3/2}).
Abstract: We consider broadcasting in radio networks: one node of the network knows a message that needs to be learned by all the remaining nodes. We seek distributed deterministic algorithms to perform this task. Radio networks are modeled as directed graphs. They are unknown, in the sense that nodes are not assumed to know their neighbors, nor the size of the network; they are aware only of their individual identifying numbers. If more than one message is delivered to a node in a step then the node cannot hear any of them. Nodes cannot distinguish between such collisions and the case when no messages have been delivered in a step. The fastest previously known deterministic algorithm for deterministic distributed broadcasting in unknown radio networks was presented in [6]; it worked in time O(n^{11/6}). We develop three new deterministic distributed algorithms. Algorithm A develops further the ideas of [6] and operates in time O(n^{1.77291}) = O(n^{9/5}) for general networks, and in time O(n^{1+a+H(a)+o(1)}) for sparse networks with in-degrees O(n^a) for a < 1/2; here H is the entropy function. Algorithm B uses a new approach and works in time O(n^{3/2} log^{1/2} n) for general networks, or O(n^{1+a+o(1)}) for sparse networks. Algorithm C further improves the performance for general networks, running in time O(n^{3/2}).

134 citations
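
The collision rule described in this abstract is easy to make concrete. Below is a minimal Python sketch of one synchronous round in the radio model, not of the paper's Algorithms A, B, or C: a node receives a message only if exactly one of its in-neighbors transmits, and it cannot tell a collision from silence. All names here are invented for the illustration.

```python
def radio_round(graph, transmitters, messages):
    """One synchronous round in the radio model.

    graph: dict mapping node -> list of out-neighbors (directed).
    transmitters: set of nodes transmitting this round.
    messages: dict mapping node -> message it holds (or None).

    A node hears a message only if exactly one in-neighbor
    transmits; a collision (two or more transmitters) is
    indistinguishable from silence.
    """
    heard = {}
    for u in transmitters:
        for v in graph.get(u, []):
            heard.setdefault(v, []).append(messages[u])
    delivered = {}
    for v, msgs in heard.items():
        if len(msgs) == 1:  # exactly one transmitter: reception succeeds
            delivered[v] = msgs[0]
        # len(msgs) >= 2: collision, the node hears nothing
    return delivered

# Toy run showing why naive flooding can fail: nodes 1 and 2 always
# transmit together, so node 3 experiences a collision forever.
graph = {0: [1, 2], 1: [3], 2: [3], 3: []}
messages = {0: "m", 1: None, 2: None, 3: None}
for _ in range(10):
    informed = {u for u, m in messages.items() if m is not None}
    for v, m in radio_round(graph, informed, messages).items():
        messages[v] = m
print(messages)  # node 3 never learns "m"
```

The point of deterministic broadcasting algorithms in this model is precisely to schedule transmissions so that such collisions are broken without randomness.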

Posted Content
TL;DR: In this article, the authors give a poly-logarithmic lower bound on the complexity of local computation for a large class of optimization problems including minimum vertex cover, minimum dominating set, maximum matching, maximal independent set, and maximal matching.
Abstract: The question of what can be computed, and how efficiently, is at the core of computer science. Not surprisingly, in distributed systems and networking research, an equally fundamental question is what can be computed in a distributed fashion. More precisely, if nodes of a network must base their decision on information in their local neighborhood only, how well can they compute or approximate a global (optimization) problem? In this paper we give the first poly-logarithmic lower bound on such local computation for (optimization) problems including minimum vertex cover, minimum (connected) dominating set, maximum matching, maximal independent set, and maximal matching. In addition we present a new distributed algorithm for solving general covering and packing linear programs. For some problems this algorithm is tight with the lower bounds, for others it is a distributed approximation scheme. Together, our lower and upper bounds establish the local computability and approximability of a large class of problems, characterizing how much local information is required to solve these tasks.

134 citations
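
To make "basing decisions on the local neighborhood only" concrete, here is a Luby-style randomized maximal independent set in a synchronous message-passing model. This is a classic local algorithm for one of the problems the lower bound covers, not the covering/packing LP algorithm from the paper; it is included only to illustrate the model.

```python
import random

def luby_mis(adj, seed=0):
    """Luby-style maximal independent set.

    adj: dict node -> set of neighbors (undirected graph).
    Each round, every live node draws a random value and joins the
    MIS if its value beats those of all live neighbors; winners and
    their neighbors then leave. Each node acts only on information
    from its immediate neighborhood, one hop per round.
    """
    rng = random.Random(seed)
    live = set(adj)
    mis = set()
    while live:
        r = {v: rng.random() for v in live}
        winners = {v for v in live
                   if all(r[v] > r[u] for u in adj[v] if u in live)}
        mis |= winners
        removed = set(winners)
        for v in winners:
            removed |= adj[v] & live
        live -= removed
    return mis

# 5-cycle example: any maximal independent set has two vertices.
adj = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
mis = luby_mis(adj)
assert all(u not in adj[v] for v in mis for u in mis)  # independence
print(sorted(mis))
```

Algorithms of this kind finish in few rounds with high probability; the paper's lower bounds show that for the listed problems some non-trivial amount of local information is unavoidable.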

Journal Article
TL;DR: A new distributed framework to achieve minimum energy data gathering while considering all of the following factors: distributed implementation; capacity and interference associated with the shared medium; and realistic data correlation model is proposed.
Abstract: We consider the problem of correlated data gathering in sensor networks with multiple sink nodes. The problem has two objectives. First, we would like to find a rate allocation on the correlated sensor nodes such that the data gathered by the sink nodes can reproduce the field of observation. Second, we would like to find a transmission structure on the network graph such that the total transmission energy consumed by the network is minimized. The existing solutions to this problem are impractical for deployment because they have not considered all of the following factors: (1) distributed implementation; (2) capacity and interference associated with the shared medium; and (3) realistic data correlation model. In this paper, we propose a new distributed framework to achieve minimum energy data gathering while considering these three factors. Based on a localized version of Slepian-Wolf coding, the problem is modeled as an optimization formulation with a distributed solution. The formulation is first relaxed with Lagrangian dualization and then solved with the subgradient algorithm. The algorithm is amenable to fully distributed implementations, which corresponds to the decentralized nature of sensor networks. To evaluate its effectiveness, we have conducted extensive simulations under a variety of network environments. The results indicate that the algorithm supports asynchronous network settings, sink mobility, and duty schedules.

134 citations
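
The subgradient algorithm mentioned here is standard and compact enough to sketch. The following is a minimal projected-subgradient ascent on the Lagrangian dual of a generic covering LP; this is a hypothetical stand-in problem, not the paper's data-gathering formulation. The detail worth noticing is that the inner minimisation decomposes per variable, which is what makes such schemes amenable to distributed implementation.

```python
import numpy as np

def dual_subgradient(c, A, b, x_max=1.0, steps=500):
    """Projected subgradient ascent on the Lagrangian dual of
        min c.x  s.t.  A x >= b,  0 <= x <= x_max.

    For fixed multipliers lam >= 0, minimising the Lagrangian
    L(x, lam) = c.x + lam.(b - A x) splits into independent
    per-coordinate choices, so each node could solve its own
    coordinate given the current multipliers.
    """
    lam = np.zeros(A.shape[0])
    for t in range(1, steps + 1):
        reduced_cost = c - A.T @ lam
        x = np.where(reduced_cost < 0, x_max, 0.0)  # inner minimiser
        g = b - A @ x              # a subgradient of the dual at lam
        lam = np.maximum(0.0, lam + g / t)  # diminishing step, project
    return x, lam

# Tiny covering example: min x1 + 2*x2  s.t.  x1 + x2 >= 1.
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x, lam = dual_subgradient(c, A, b)
print(x, lam)  # x -> [1, 0], the cheaper way to satisfy the constraint
```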

Proceedings Article
06 Jun 2011
TL;DR: It is shown that in the absence of a good initial upper bound on the size of the network, eventual consensus is as hard as computing deterministic functions of the input, e.g., the minimum or maximum of inputs to the nodes.
Abstract: We study several variants of coordinated consensus in dynamic networks. We assume a synchronous model, where the communication graph for each round is chosen by a worst-case adversary. The network topology is always connected, but can change completely from one round to the next. The model captures mobile and wireless networks, where communication can be unpredictable. In this setting we study the fundamental problems of eventual, simultaneous, and Δ-coordinated consensus, as well as their relationship to other distributed problems, such as determining the size of the network. We show that in the absence of a good initial upper bound on the size of the network, eventual consensus is as hard as computing deterministic functions of the input, e.g., the minimum or maximum of inputs to the nodes. We also give an algorithm for computing such functions that is optimal in every execution. Next, we show that simultaneous consensus can never be achieved in less than n - 1 rounds in any execution, where n is the size of the network; consequently, simultaneous consensus is as hard as computing an upper bound on the number of nodes in the network. For Δ-coordinated consensus, we show that if the ratio between nodes with input 0 and input 1 is bounded away from 1, it is possible to decide in time n - Θ(√(nΔ)), where Δ bounds the time from the first decision until all nodes decide. If the dynamic graph has diameter D, the time to decide is min{O(nD/Δ), n - Ω(nΔ/D)}, even if D is not known in advance. Finally, we show that (a) there is a dynamic graph such that for every input, no node can decide before time n - O(Δ^{0.28} n^{0.72}); and (b) for any diameter D = O(Δ), there is an execution with diameter D where no node can decide before time Ω(nD/Δ). To our knowledge, our work constitutes the first study of Δ-coordinated consensus in general graphs.

134 citations
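
The n - 1 round bound for simultaneous consensus has a constructive counterpart for eventually agreeing on the minimum input: in an always-connected dynamic graph, each round of flooding grows the set of nodes that hold the global minimum by at least one. A minimal Python sketch of this folklore flooding argument (not the paper's execution-optimal algorithm) follows; names are invented for the example.

```python
import random

def flood_min(inputs, dynamic_edges):
    """Flood the minimum input through a dynamic network.

    inputs: dict node -> input value.
    dynamic_edges: one edge set per round; the adversary may rewire
    the graph each round, but every round's graph must be connected.
    With n nodes, n - 1 rounds suffice: while some node lacks the
    global minimum, connectivity guarantees an edge from the
    informed set to the rest, so the informed set grows each round.
    """
    est = dict(inputs)
    for edges in dynamic_edges:
        new_est = dict(est)
        for u, v in edges:  # neighbors exchange current estimates
            new_est[u] = min(new_est[u], est[v])
            new_est[v] = min(new_est[v], est[u])
        est = new_est
    return est

# n = 4 nodes; a different random connected graph (a spanning path)
# in each of the n - 1 = 3 rounds.
rng = random.Random(1)
nodes = [0, 1, 2, 3]
rounds = []
for _ in range(3):
    order = nodes[:]
    rng.shuffle(order)
    rounds.append({(order[i], order[i + 1]) for i in range(3)})
print(flood_min({0: 7, 1: 3, 2: 9, 3: 5}, rounds))  # all nodes end at 3
```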

Proceedings Article
08 Dec 2014
TL;DR: A novel re-parametrisation of variational inference for sparse GP regression and latent variable models that allows for an efficient distributed algorithm and shows that GPs perform better than many common models often used for big data.
Abstract: Gaussian processes (GPs) are a powerful tool for probabilistic inference over functions. They have been applied to both regression and non-linear dimensionality reduction, and offer desirable properties such as uncertainty estimates, robustness to over-fitting, and principled ways for tuning hyper-parameters. However, the scalability of these models to big datasets remains an active topic of research. We introduce a novel re-parametrisation of variational inference for sparse GP regression and latent variable models that allows for an efficient distributed algorithm. This is done by exploiting the decoupling of the data given the inducing points to re-formulate the evidence lower bound in a Map-Reduce setting. We show that the inference scales well with data and computational resources, while preserving a balanced distribution of the load among the nodes. We further demonstrate the utility in scaling Gaussian processes to big data. We show that GP performance improves with increasing amounts of data in regression (on flight data with 2 million records) and latent variable modelling (on MNIST). The results show that GPs perform better than many common models often used for big data.

133 citations
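
The decoupling the abstract refers to is visible in code: once the inducing inputs are fixed, the expensive kernel statistics are plain sums over data points, so each worker computes a partial sum over its shard and a single reduce step combines them. The sketch below uses the simpler subset-of-regressors predictive mean rather than the paper's variational re-parametrisation, purely to show the map-reduce structure; every name in it is invented for the illustration.

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    """Squared-exponential kernel matrix k(a, b)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def map_stats(x_chunk, y_chunk, z):
    """Map step: per-shard statistics for inducing inputs z.
    Given z, the data decouple, so these are sums over points."""
    kzx = rbf(z, x_chunk)          # m x n_chunk cross-kernel
    return kzx @ kzx.T, kzx @ y_chunk

def reduce_weights(stats, z, noise=0.1):
    """Reduce step: add the partial sums, then solve for the weight
    vector w of the subset-of-regressors mean k(x*, z) @ w."""
    kzz = rbf(z, z)
    a = sum(s[0] for s in stats)   # sum_i k(z, x_i) k(x_i, z)
    b = sum(s[1] for s in stats)   # sum_i k(z, x_i) y_i
    return np.linalg.solve(noise**2 * kzz + a, b)

# Toy run: two "workers", each holding half of a 1-D dataset.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(x[:, 0]) + 0.1 * rng.standard_normal(200)
z = np.linspace(-3, 3, 10)[:, None]        # inducing inputs
stats = [map_stats(x[:100], y[:100], z), map_stats(x[100:], y[100:], z)]
w = reduce_weights(stats, z)
x_test = np.array([[0.5]])
print((rbf(x_test, z) @ w).item())  # roughly sin(0.5) ~ 0.48 on this toy data
```

Because the map outputs are fixed-size matrices (m x m and m x 1, independent of shard size), the reduce step stays cheap no matter how many workers hold the data.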


Network Information
Related Topics (5)
Server: 79.5K papers, 1.4M citations, 94% related
Scheduling (computing): 78.6K papers, 1.3M citations, 91% related
Network packet: 159.7K papers, 2.2M citations, 91% related
Wireless network: 122.5K papers, 2.1M citations, 91% related
Wireless sensor network: 142K papers, 2.4M citations, 89% related
Performance
Metrics
No. of papers in the topic in previous years:
Year  Papers
2023  81
2022  135
2021  583
2020  759
2019  876
2018  845