Distributed algorithm

About: Distributed algorithm is a research topic. Over its lifetime, 20,416 publications have been published on this topic, receiving 548,109 citations.


Papers
Journal Article
TL;DR: For distributed static parameter (vector) estimation in sensor networks with nonlinear observation models and noisy intersensor communication, this paper proves consistency (all sensors reach consensus almost surely and converge to the true parameter value), asymptotic unbiasedness, and efficiency, and provides convergence rate guarantees.
Abstract: The paper studies distributed static parameter (vector) estimation in sensor networks with nonlinear observation models and noisy intersensor communication. It introduces separably estimable observation models that generalize the observability condition of linear centralized estimation to nonlinear distributed estimation. It studies three distributed estimation algorithms for separably estimable models: NU, its linear counterpart LU, and NLU. Their update rule combines a consensus step (where each sensor updates its state by weight-averaging it with its neighbors' states) and an innovation step (where each sensor processes its current local observation), making all three algorithms of the consensus + innovations type, quite different from traditional consensus. The paper proves consistency (all sensors reach consensus almost surely and converge to the true parameter value), efficiency, and asymptotic unbiasedness. For LU and NU, it proves asymptotic normality and provides convergence rate guarantees. The three algorithms are characterized by appropriately chosen decaying weight sequences. Algorithms LU and NU are analyzed in the framework of stochastic approximation theory; algorithm NLU exhibits mixed time-scale behavior and biased perturbations, and its analysis requires a different approach, which is developed in this paper.

447 citations
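Below is a minimal, illustrative sketch of a consensus + innovations update of the LU flavor described in the paper above, assuming a linear observation model z_i = h_i · θ + noise. The graph, gain sequences, and dimensions are made up for the sketch and are not the paper's tuned weight sequences.

```python
import numpy as np

# Illustrative consensus + innovations update (LU flavor): each sensor i keeps
# an estimate x[i] of the unknown parameter theta and sees noisy scalar
# observations z_i(t) = h_i . theta + noise; neighbors exchange estimates over
# an undirected graph. Graph, gains, and dimensions are made up for this sketch.
rng = np.random.default_rng(0)
n, d = 10, 3
theta = rng.normal(size=d)            # true parameter (unknown to the sensors)
Hs = rng.normal(size=(n, d))          # row i: sensor i's observation vector h_i
A = rng.random((n, n)) < 0.4          # random undirected topology (assumed connected)
A = np.triu(A, 1)
A = (A | A.T).astype(float)

x = np.zeros((n, d))                  # local estimates
for t in range(1, 5001):
    alpha = 1.0 / (t + 5)             # decaying innovation gain (illustrative)
    beta = 0.5 / t ** 0.6             # decaying consensus weight (illustrative)
    z = Hs @ theta + 0.1 * rng.normal(size=n)          # noisy local observations
    consensus = A @ x - A.sum(1, keepdims=True) * x    # sum_j a_ij (x_j - x_i)
    innovation = (z - (Hs * x).sum(1))[:, None] * Hs   # h_i (z_i - h_i . x_i)
    x = x + beta * consensus + alpha * innovation

print(np.linalg.norm(x - theta, axis=1))  # per-sensor error, small for all sensors
```

The decaying gains mirror the stochastic-approximation flavor of the analysis: the consensus weight enforces agreement while the innovation gain averages out observation noise over time.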

Journal Article
Wei Ren
TL;DR: It is shown that consensus is reached on the generalised coordinates of the networked Euler–Lagrange systems and on their derivatives as long as the undirected communication topology is connected.

Abstract: This article proposes and analyses distributed, leaderless, model-independent consensus algorithms for networked Euler–Lagrange systems. We propose a fundamental consensus algorithm, a consensus algorithm that accounts for actuator saturation, and a consensus algorithm that accounts for the unavailability of measurements of the generalised coordinate derivatives, all for systems modelled by Euler–Lagrange equations. Because the closed-loop interconnected Euler–Lagrange equations under these algorithms are non-autonomous, Matrosov's theorem is used for the convergence analysis. It is shown that consensus is reached on the generalised coordinates of the networked Euler–Lagrange systems and on their derivatives as long as the undirected communication topology is connected. Simulation results show the effectiveness of the proposed algorithms.

445 citations
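As a hedged illustration of the flavor of algorithm analyzed above, the sketch below applies a model-independent consensus torque of the general form τ_i = -Σ_j a_ij(q_i - q_j) - d·q̇_i to unit-inertia double integrators standing in for full Euler–Lagrange dynamics; the paper's actual control laws and Matrosov-based convergence analysis are more general than this toy.

```python
import numpy as np

# Hedged stand-in: model-independent consensus torque
#   tau_i = -sum_j a_ij (q_i - q_j) - damping * qdot_i
# applied to unit-inertia double integrators instead of full Euler-Lagrange
# dynamics; gains, topology, and dynamics here are illustrative only.
n, dt, damping = 5, 0.01, 2.0
A = np.ones((n, n)) - np.eye(n)       # complete graph: connected undirected topology
rng = np.random.default_rng(1)
q = rng.normal(size=n)                # generalized coordinates (scalar per agent)
qd = np.zeros(n)                      # their derivatives

for _ in range(5000):
    tau = -(A * (q[:, None] - q[None, :])).sum(axis=1) - damping * qd
    qd += dt * tau                    # unit inertia: qddot = tau
    q += dt * qd

print(q, qd)  # coordinates agree on a common value; derivatives go to zero
```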

Journal Article
01 Jan 2005
TL;DR: This paper proposes distributed energy-efficient deployment algorithms for the mobile sensors and intelligent devices that form an Ambient Intelligent network; the algorithms employ a synergistic combination of cluster structuring and a peer-to-peer deployment scheme.
Abstract: Many visions of the future include people immersed in an environment surrounded by sensors and intelligent devices, which use smart infrastructures to improve the quality of life and safety in emergency situations. Ubiquitous communication enables these sensors or intelligent devices to communicate with each other and the user or a decision maker by means of ad hoc wireless networking. Organization and optimization of network resources are essential to provide ubiquitous communication for a longer duration in large-scale networks and are helpful to migrate intelligence from higher and remote levels to lower and local levels. In this paper, distributed energy-efficient deployment algorithms for mobile sensors and intelligent devices that form an Ambient Intelligent network are proposed. These algorithms employ a synergistic combination of cluster structuring and a peer-to-peer deployment scheme. An energy-efficient deployment algorithm based on Voronoi diagrams is also proposed here. Performance of our algorithms is evaluated in terms of coverage, uniformity, and time and distance traveled until the algorithm converges. Our algorithms are shown to exhibit excellent performance.

442 citations
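The Voronoi-based deployment idea mentioned above can be sketched as a Lloyd-style iteration: each mobile sensor repeatedly moves toward the centroid of its Voronoi cell. The grid resolution, step size, and sensor count below are illustrative, and the paper's cluster structuring and peer-to-peer scheme are not modeled here.

```python
import numpy as np

# Lloyd-style sketch of Voronoi-based deployment: on a discretized unit square,
# each mobile sensor moves toward the centroid of its Voronoi cell. The grid
# resolution, step size, and sensor count are illustrative; the paper's cluster
# structuring and peer-to-peer deployment scheme are not modeled here.
rng = np.random.default_rng(2)
sensors = 0.1 * rng.random((8, 2))    # start clustered in one corner
gx, gy = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
grid = np.column_stack([gx.ravel(), gy.ravel()])

for _ in range(50):
    # assign every grid point to its nearest sensor: discrete Voronoi cells
    d2 = ((grid[:, None, :] - sensors[None, :, :]) ** 2).sum(axis=2)
    owner = d2.argmin(axis=1)
    for i in range(len(sensors)):
        cell = grid[owner == i]
        if len(cell):                 # half-step toward the cell centroid
            sensors[i] += 0.5 * (cell.mean(axis=0) - sensors[i])

print(sensors)  # sensors spread out, improving coverage and uniformity
```

Iterating the assign-then-move step spreads the sensors toward a centroidal Voronoi configuration, which is what drives the coverage and uniformity metrics the paper evaluates.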

Book Chapter
08 Jul 1996
TL;DR: In this paper, the authors look at a number of distributed systems that have attempted to paper over the distinction between local and remote objects, and show that such systems fail to support basic requirements of robustness and reliability.
Abstract: We argue that objects that interact in a distributed system need to be dealt with in ways that are intrinsically different from objects that interact in a single address space. These differences are required because distributed systems require that the programmer be aware of latency, have a different model of memory access, and take into account issues of concurrency and partial failure. We look at a number of distributed systems that have attempted to paper over the distinction between local and remote objects, and show that such systems fail to support basic requirements of robustness and reliability. These failures have been masked in the past by the small size of the distributed systems that have been built. In the enterprise-wide distributed systems foreseen in the near future, however, such a masking will be impossible. We conclude by discussing what is required of both systems-level and application-level programmers and designers if one is to take distribution seriously.

440 citations
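To make the paper's argument concrete, the hypothetical sketch below contrasts a local lookup with a remote one: the remote call must surface latency (timeouts) and partial failure (retries with unknown server state) in its interface rather than hide them behind a local-looking call. The host, port, and line-oriented wire format are made up for illustration.

```python
import socket

# Hypothetical illustration of the paper's point: a remote call cannot share a
# local call's interface, because latency and partial failure must be surfaced.
# The host, port, and wire format below are made up.

def local_lookup(table: dict, key: str) -> str:
    return table[key]                 # fails only in well-defined, local ways

def remote_lookup(host: str, key: str, timeout: float = 2.0, retries: int = 3) -> str:
    for attempt in range(retries):
        try:
            with socket.create_connection((host, 9999), timeout=timeout) as s:
                s.sendall(key.encode() + b"\n")
                return s.makefile().readline().strip()
        except OSError:
            # Partial failure: we cannot know whether the server processed the
            # request before the error; the caller must decide how to recover.
            if attempt == retries - 1:
                raise
    raise RuntimeError("no attempts made")
```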

Journal Article
Guannan Qu, Na Li
TL;DR: It is shown that a class of distributed algorithms like DGD cannot achieve a linear convergence rate without using history information, even if the objective function is strongly convex and smooth; a novel gradient estimation scheme is proposed that uses history information to achieve fast and accurate estimation of the average gradient.
Abstract: There has been a growing effort in studying the distributed optimization problem over a network. The objective is to optimize a global function formed by a sum of local functions, using only local computation and communication. The literature has developed consensus-based distributed (sub)gradient descent (DGD) methods and has shown that they have the same convergence rate $O(\frac{\log t}{\sqrt{t}})$ as the centralized (sub)gradient methods (CGD) when the function is convex but possibly nonsmooth. However, when the function is convex and smooth, it is unclear how to harness the smoothness within the DGD framework to obtain a convergence rate comparable to CGD's. In this paper, we propose a distributed algorithm that, despite using the same amount of communication per iteration as DGD, effectively harnesses the function's smoothness and converges to the optimum at a rate of $O(\frac{1}{t})$. If the objective function is further strongly convex, our algorithm has a linear convergence rate. Both rates match the convergence rate of CGD. The key step in our algorithm is a novel gradient estimation scheme that uses history information to achieve fast and accurate estimation of the average gradient. To motivate the necessity of history information, we also show that it is impossible for a class of distributed algorithms like DGD to achieve a linear convergence rate without using history information, even if the objective function is strongly convex and smooth.

440 citations
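The gradient-estimation idea described above can be sketched as a gradient-tracking iteration: x_{t+1} = W x_t - η y_t and y_{t+1} = W y_t + ∇f(x_{t+1}) - ∇f(x_t), where y_t tracks the network-average gradient using each agent's gradient history. The least-squares data, mixing matrix W, and step size below are illustrative toys, not the paper's setup.

```python
import numpy as np

# Sketch of gradient tracking, assuming toy local least-squares objectives:
#   x_{t+1} = W x_t - eta * y_t
#   y_{t+1} = W y_t + grad(x_{t+1}) - grad(x_t)
# y_i tracks the network-average gradient using each agent's gradient history.
# Data, mixing matrix W, and step size are illustrative, not the paper's setup.
rng = np.random.default_rng(3)
n, d = 5, 4
As = rng.normal(size=(n, 8, d))       # agent i's local data (A_i, b_i)
bs = rng.normal(size=(n, 8))

def grads(x):                         # stack of local gradients, one row per agent
    return np.stack([As[i].T @ (As[i] @ x[i] - bs[i]) for i in range(n)])

W = np.full((n, n), 0.1) + 0.5 * np.eye(n)   # doubly stochastic mixing, complete graph
x = np.zeros((n, d))
y = grads(x)                          # tracker initialized at the local gradients
eta = 0.02

for _ in range(2000):
    x_new = W @ x - eta * y
    y = W @ y + grads(x_new) - grads(x)
    x = x_new

print(np.abs(x - x.mean(axis=0)).max())  # rows agree: consensus on the minimizer
```

Because y_t converges to the true average gradient instead of a noisy subgradient estimate, a constant step size can be used, which is what yields the linear rate in the strongly convex case.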


Network Information
Related Topics (5)
Server: 79.5K papers, 1.4M citations, 94% related
Scheduling (computing): 78.6K papers, 1.3M citations, 91% related
Network packet: 159.7K papers, 2.2M citations, 91% related
Wireless network: 122.5K papers, 2.1M citations, 91% related
Wireless sensor network: 142K papers, 2.4M citations, 89% related
Performance Metrics
No. of papers in the topic in previous years:
Year: Papers
2023: 81
2022: 135
2021: 583
2020: 759
2019: 876
2018: 845