scispace - formally typeset
Topic

Distributed algorithm

About: Distributed algorithm is a research topic. Over the lifetime, 20,416 publications have been published within this topic, receiving 548,109 citations.


Papers
Journal ArticleDOI
TL;DR: This paper presents a distributed algorithm that outperforms the existing algorithms for minimum CDS and establishes an Ω(n log n) lower bound on the message complexity of any distributed algorithm for nontrivial CDS, making the proposed algorithm message-optimal.
Abstract: Connected dominating set (CDS) has been proposed as virtual backbone or spine of wireless ad hoc networks. Three distributed approximation algorithms have been proposed in the literature for minimum CDS. In this paper, we first reinvestigate their performances. None of these algorithms have constant approximation factors. Thus these algorithms cannot guarantee to generate a CDS of small size. Their message complexities can be as high as O(n^2), and their time complexities may also be as large as O(n^2) and O(n^3). We then present our own distributed algorithm that outperforms the existing algorithms. This algorithm has an approximation factor of at most 8, O(n) time complexity and O(n log n) message complexity. By establishing the Ω(n log n) lower bound on the message complexity of any distributed algorithm for nontrivial CDS, our algorithm is thus message-optimal.
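The MIS-based construction underlying this line of work can be illustrated with a small, centralized sketch: pick a maximal independent set as dominators, then add connector nodes so the dominators form one connected backbone. The graph and helper functions below are hypothetical examples, not the paper's distributed, message-optimal algorithm.

```python
def maximal_independent_set(adj):
    """Greedily pick pairwise non-adjacent nodes; every node ends up dominated."""
    mis, covered = set(), set()
    for v in sorted(adj):                 # deterministic order for the demo
        if v not in covered:
            mis.add(v)
            covered.add(v)
            covered.update(adj[v])        # neighbors are now dominated
    return mis

def connect_dominators(adj, mis):
    """Add connector nodes between dominators two hops apart.

    (A complete algorithm must also join dominators three hops apart;
    this sketch keeps only the two-hop case for brevity.)
    """
    cds = set(mis)
    for u in mis:
        for w in adj[u]:                  # w is a candidate connector
            if any(x in mis and x != u for x in adj[w]):
                cds.add(w)
    return cds

# Small example graph (adjacency sets): two "stars" joined by the edge 2-3.
adj = {0: {1, 2}, 1: {0}, 2: {0, 3}, 3: {2, 4, 5}, 4: {3}, 5: {3}}
mis = maximal_independent_set(adj)        # dominators: {0, 3}
cds = connect_dominators(adj, mis)        # backbone: {0, 2, 3}
```

Here every node is either in the CDS or adjacent to it, and the backbone nodes form a connected subgraph, which is exactly the property that makes a CDS usable as a routing spine.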

652 citations

Journal ArticleDOI
TL;DR: In this paper, the authors proposed two fast distributed gradient algorithms based on the centralized Nesterov gradient algorithm and established their convergence rates in terms of the per-node communications and the per-node gradient evaluations.
Abstract: We study distributed optimization problems when N nodes minimize the sum of their individual costs subject to a common vector variable. The costs are convex, have Lipschitz continuous gradient (with constant L), and bounded gradient. We propose two fast distributed gradient algorithms based on the centralized Nesterov gradient algorithm and establish their convergence rates in terms of the per-node communications K and the per-node gradient evaluations k. Our first method, Distributed Nesterov Gradient, achieves rates O(log K / K) and O(log k / k). Our second method, Distributed Nesterov gradient with Consensus iterations, assumes at all nodes knowledge of L and μ(W), the second-largest singular value of the N × N doubly stochastic weight matrix W. It achieves rates O(1/K^(2-ξ)) and O(1/k^2) (ξ > 0 arbitrarily small). Further, we give for both methods explicit dependence of the convergence constants on N and W. Simulation examples illustrate our findings.
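The first method's iteration structure can be sketched on a toy problem: each node mixes its neighbors' estimates through a doubly stochastic weight matrix W, then takes a Nesterov-accelerated gradient step on its local cost with a diminishing step size. The quadratic costs f_i(x) = (x - a_i)^2, the ring weights, and the step-size constant below are illustrative choices, not parameters from the paper.

```python
import numpy as np

N = 4
a = np.array([1.0, 2.0, 3.0, 4.0])        # local targets; global optimum = mean(a) = 2.5
grad = lambda y: 2.0 * (y - a)            # per-node gradients, elementwise

# Doubly stochastic mixing matrix for a 4-node ring:
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

x = np.zeros(N)                           # current iterates, one per node
y = x.copy()                              # momentum ("lookahead") sequence
for k in range(500):
    alpha = 0.3 / (k + 1)                 # diminishing step size
    x_next = W @ y - alpha * grad(y)      # consensus mixing + local gradient step
    y = x_next + (k / (k + 3)) * (x_next - x)   # Nesterov momentum
    x = x_next

# The nodes' estimates approach consensus on the global minimizer 2.5:
print(np.round(x, 3))
```

The interplay visible here is the one the convergence rates quantify: the W-mixing shrinks disagreement between nodes while the accelerated gradient step drives the network average toward the minimizer of the summed cost.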

649 citations

Journal ArticleDOI
TL;DR: This paper discusses elections and reorganizations of active nodes in a distributed computing system after a failure, and two types of reasonable failure environments are studied.
Abstract: After a failure occurs in a distributed computing system, it is often necessary to reorganize the active nodes so that they can continue to perform a useful task. The first step in such a reorganization or reconfiguration is to elect a coordinator node to manage the operation. This paper discusses such elections and reorganizations. Two types of reasonable failure environments are studied. For each environment, assertions that define the meaning of an election are presented, along with an election algorithm that satisfies them.
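The core election idea can be shown in a minimal simulation, assuming the classic bully-style rule that the highest-numbered responsive node wins: any live higher-numbered node takes over the election in turn. Message passing is modeled as direct function calls; a real system detects non-responding nodes with timeouts, and the node numbering here is a hypothetical example.

```python
def elect(nodes, alive, initiator):
    """Node `initiator` starts an election; returns the new coordinator."""
    higher = [n for n in nodes if n > initiator and n in alive]
    if not higher:
        return initiator                  # nobody outranks us: we win
    # The lowest live higher-numbered node takes over the election in turn.
    return elect(nodes, alive, min(higher))

nodes = [1, 2, 3, 4, 5]
alive = {1, 2, 3}                          # nodes 4 and 5 have failed
print(elect(nodes, alive, 1))              # -> 3
```

The assertions the paper formalizes correspond to properties one can check on this toy model: exactly one coordinator is elected, and every live node would agree on it regardless of who initiates.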

647 citations

Journal ArticleDOI
TL;DR: Theoretical analysis and simulation results show that 85-95 percent of faults can be corrected using this algorithm, even when as many as 10 percent of the nodes are faulty.
Abstract: We propose a distributed solution for a canonical task in wireless sensor networks - the binary detection of interesting environmental events. We explicitly take into account the possibility of sensor measurement faults and develop a distributed Bayesian algorithm for detecting and correcting such faults. Theoretical analysis and simulation results show that 85-95 percent of faults can be corrected using this algorithm, even when as many as 10 percent of the nodes are faulty.
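The fault-correction step can be sketched as a neighborhood vote, assuming the simplified rule that a node flips its binary reading when most neighbors disagree. The line topology, threshold, and fault pattern below are illustrative; the paper derives the decision rule from Bayesian reasoning about the sensor fault probability.

```python
def correct(readings, neighbors, threshold=0.5):
    """Return fault-corrected binary readings via neighborhood majority vote."""
    corrected = {}
    for node, value in readings.items():
        votes = [readings[n] for n in neighbors[node]]
        agree = sum(v == value for v in votes)
        # Keep the reading only if enough neighbors report the same value.
        corrected[node] = value if agree >= threshold * len(votes) else 1 - value
    return corrected

# 5 nodes in a line; the true event value is 1 everywhere, but node 2 is faulty:
neighbors = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3, 4], 3: [1, 2, 4], 4: [2, 3]}
readings = {0: 1, 1: 1, 2: 0, 3: 1, 4: 1}
print(correct(readings, neighbors))        # node 2's reading flips back to 1
```

This captures the paper's key assumption: environmental events are spatially correlated while sensor faults are not, so disagreement with many neighbors is evidence of a fault rather than of a different event.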

635 citations

Proceedings ArticleDOI
01 Dec 2009
TL;DR: The main contributions of this paper are (i) finding the optimal decentralized Kalman-Consensus filter and showing that its computational and communication costs are not scalable in n, and (ii) introducing a scalable suboptimal Kalman-Consensus Filter.
Abstract: One of the fundamental problems in sensor networks is to estimate and track the state of targets (or dynamic processes) of interest that evolve in the sensing field. Kalman filtering has been an effective algorithm for tracking dynamic processes for over four decades. Distributed Kalman Filtering (DKF) involves the design of the information processing algorithm of a network of estimator agents with a two-fold objective: 1) estimate the state of the target of interest and 2) reach a consensus with neighboring estimator agents on the state estimate. We refer to this DKF algorithm as Kalman-Consensus Filter (KCF). The main contributions of this paper are as follows: i) finding the optimal decentralized Kalman-Consensus filter and showing that its computational and communication costs are not scalable in n and ii) introducing a scalable suboptimal Kalman-Consensus Filter and providing a formal stability and performance analysis of this distributed and cooperative filtering algorithm. The Kalman-Consensus Filtering algorithm is applicable to sensor networks with variable topology, including mobile sensor networks and networks with packet loss.
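The two-fold objective can be made concrete with a scalar sketch of the KCF structure: each sensor applies a local Kalman-style correction from its own measurement plus a consensus term pulling its estimate toward its neighbors'. The scalar state, fixed gains, and ring topology are illustrative choices, not the paper's exact filter equations.

```python
import numpy as np

rng = np.random.default_rng(0)
N, steps = 4, 100
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}   # ring topology

x_true = 0.0                      # target state (a slow random walk)
est = np.zeros(N)                 # each node's state estimate
K, eps = 0.3, 0.1                 # Kalman-style gain, consensus gain

for _ in range(steps):
    x_true += 0.01 * rng.standard_normal()          # process noise
    z = x_true + 0.5 * rng.standard_normal(N)       # noisy local measurements
    new = est.copy()
    for i in range(N):
        innovation = z[i] - est[i]                  # local measurement update
        consensus = sum(est[j] - est[i] for j in neighbors[i])
        new[i] = est[i] + K * innovation + eps * consensus
    est = new

# Estimates track x_true while staying close to each other (consensus):
print(np.round(est - x_true, 2))
```

Each node only talks to its neighbors, so per-node cost stays constant as the network grows, which is the scalability property the suboptimal filter is designed around.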

623 citations


Network Information
Related Topics (5)
Server: 79.5K papers, 1.4M citations (94% related)
Scheduling (computing): 78.6K papers, 1.3M citations (91% related)
Network packet: 159.7K papers, 2.2M citations (91% related)
Wireless network: 122.5K papers, 2.1M citations (91% related)
Wireless sensor network: 142K papers, 2.4M citations (89% related)
Performance
Metrics: No. of papers in the topic in previous years

Year  Papers
2023  81
2022  135
2021  583
2020  759
2019  876
2018  845