Topic

Distributed algorithm

About: Distributed algorithm is a research topic. Over the lifetime, 20,416 publications have been published within this topic, receiving 548,109 citations.


Papers
Proceedings ArticleDOI
01 Mar 2007
TL;DR: This paper reports on the design and experimental study of a distributed, self-stabilizing mechanism that assigns channels to multi-radio nodes in wireless mesh networks; the design takes a modular approach, decoupling the channel selection decision from the data forwarding mechanism.
Abstract: To increase the utilization of the available frequency channel space in 802.11-based wireless mesh networks, recent work has explored solutions based on multi-radio stations. This paper reports on our design and experimental study of a distributed, self-stabilizing mechanism that assigns channels to multi-radio nodes in wireless mesh networks. We take a modular approach by decoupling the channel selection decision from the data forwarding mechanism, which makes our solution readily applicable to real-world operation when used with emerging multi-radio routing solutions. We demonstrate the efficacy of our protocol on a real-world, 14-node testbed in which each node is equipped with an 802.11a card and an 802.11g card. We show via extensive measurements on our testbed that our channel assignment algorithm improves the network capacity by 50% in comparison to a homogeneous channel assignment and by 20% in comparison to a random assignment.
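The abstract does not spell out the assignment rule, so the sketch below is only a minimal illustration of the general idea, not the authors' protocol: each node greedily re-tunes to the channel least used among its neighbors, with rounds repeated until the assignment stabilizes. The topology, channel list, one-radio-per-node simplification, and sequential update order are all assumptions made for the example.

```python
import random

# Minimal sketch of a distributed, greedy channel-selection loop (NOT the
# paper's protocol): each node repeatedly switches to the channel least
# used by its neighbors. Channels, topology, and one radio per node are
# illustrative assumptions.

CHANNELS = [1, 6, 11, 36, 40]           # hypothetical 802.11g + 802.11a channels
NEIGHBORS = {                           # hypothetical interference graph
    "A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"], "D": ["C"],
}
assignment = {n: random.choice(CHANNELS) for n in NEIGHBORS}

def least_used_channel(node):
    """Pick the channel with the fewest neighbors currently on it."""
    usage = {ch: 0 for ch in CHANNELS}
    for nb in NEIGHBORS[node]:
        usage[assignment[nb]] += 1
    return min(CHANNELS, key=lambda ch: usage[ch])

for _ in range(10):                     # iterate until the assignment stabilizes
    for node in NEIGHBORS:              # nodes update one at a time
        assignment[node] = least_used_channel(node)

print(assignment)
```

Because each node only reacts to its neighbors' current channels, the loop needs no central coordinator, which is the property the paper's self-stabilizing design also relies on.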

283 citations

Journal ArticleDOI
TL;DR: This work proposes policies for distributed learning and access that achieve order-optimal cognitive system throughput under self play, i.e., when implemented at all the secondary users, and a policy whose sum regret grows only slightly faster than logarithmically in the number of transmission slots.
Abstract: The problem of distributed learning and channel access is considered in a cognitive network with multiple secondary users. The availability statistics of the channels are initially unknown to the secondary users and are estimated using sensing decisions. There is no explicit information exchange or prior agreement among the secondary users. We propose policies for distributed learning and access which achieve order-optimal cognitive system throughput (number of successful secondary transmissions) under self play, i.e., when implemented at all the secondary users. Equivalently, our policies minimize the regret in distributed learning and access. We first consider the scenario when the number of secondary users is known to the policy, and prove that the total regret is logarithmic in the number of transmission slots. Our distributed learning and access policy achieves order-optimal regret with respect to an asymptotic lower bound for regret under any uniformly good learning and access policy. We then consider the case when the number of secondary users is fixed but unknown, and is estimated through feedback. We propose a policy for this scenario whose asymptotic sum regret grows only slightly faster than logarithmically in the number of transmission slots.
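Policies of this kind build on multi-armed bandit indices. As a hedged illustration of the learning component only, here is a minimal single-user UCB1 sketch for estimating channel availability; it omits the distributed coordination among secondary users that the paper actually contributes, and the availability probabilities are made up.

```python
import math, random

# Minimal UCB1-style sketch for learning channel availability (NOT the
# paper's distributed policy): one secondary user balances exploring and
# exploiting channels whose idle probabilities are unknown.

TRUE_AVAILABILITY = [0.9, 0.5, 0.3]     # hypothetical per-channel idle probabilities
counts = [0] * len(TRUE_AVAILABILITY)   # times each channel was sensed
rewards = [0.0] * len(TRUE_AVAILABILITY)

def pick_channel(t):
    for ch, c in enumerate(counts):
        if c == 0:                      # sense every channel once first
            return ch
    # UCB1 index: empirical mean plus exploration bonus
    return max(range(len(counts)),
               key=lambda ch: rewards[ch] / counts[ch]
                              + math.sqrt(2 * math.log(t) / counts[ch]))

for t in range(1, 10001):
    ch = pick_channel(t)
    reward = 1.0 if random.random() < TRUE_AVAILABILITY[ch] else 0.0
    counts[ch] += 1
    rewards[ch] += reward

print(counts)   # the best channel dominates; regret grows roughly as log(t)
```

The logarithmic regret claimed in the abstract is exactly the behavior of such index policies: suboptimal channels are sensed only O(log t) times.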

280 citations

Posted Content
Tal Ben-Nun, Torsten Hoefler
TL;DR: This survey describes the problem of accelerating DNN training from a theoretical perspective, reviews approaches for its parallelization, and extrapolates potential directions for parallelism in deep learning.
Abstract: Deep Neural Networks (DNNs) are becoming an important tool in modern computing applications. Accelerating their training is a major challenge and techniques range from distributed algorithms to low-level circuit design. In this survey, we describe the problem from a theoretical perspective, followed by approaches for its parallelization. We present trends in DNN architectures and the resulting implications on parallelization strategies. We then review and model the different types of concurrency in DNNs: from the single operator, through parallelism in network inference and training, to distributed deep learning. We discuss asynchronous stochastic optimization, distributed system architectures, communication schemes, and neural architecture search. Based on those approaches, we extrapolate potential directions for parallelism in deep learning.
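One of the concurrency patterns such a survey covers is synchronous data parallelism. The sketch below simulates it sequentially, with assumed synthetic data and a linear least-squares model standing in for a DNN: each "worker" computes a gradient on its own shard, and the gradients are averaged (the all-reduce step) before a shared parameter update.

```python
import numpy as np

# Minimal sketch of synchronous data-parallel SGD: workers compute
# gradients on disjoint shards, gradients are averaged (all-reduce), and
# the replicated parameters are updated. Model and data are illustrative
# assumptions; real systems run the workers concurrently.

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))                 # synthetic features
w_true = rng.normal(size=10)
y = X @ w_true + 0.1 * rng.normal(size=1000)    # synthetic linear targets

NUM_WORKERS, LR = 4, 0.1
shards = np.array_split(np.arange(1000), NUM_WORKERS)
w = np.zeros(10)                                # replicated parameters

def local_gradient(w, idx):
    """Least-squares gradient on one worker's shard."""
    Xi, yi = X[idx], y[idx]
    return 2 * Xi.T @ (Xi @ w - yi) / len(idx)

for step in range(100):
    grads = [local_gradient(w, idx) for idx in shards]  # parallel in practice
    w -= LR * np.mean(grads, axis=0)                    # all-reduce average

print(np.linalg.norm(w - w_true))               # should be close to zero
```

Asynchronous variants, which the survey also discusses, drop the barrier implied by the averaging step and let workers apply possibly stale gradients.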

279 citations

Journal ArticleDOI
TL;DR: The paper establishes a distributed observability condition under which the distributed estimates are consistent and asymptotically normal, and introduces a distributed notion equivalent to the (centralized) Fisher information rate, which bounds the mean square error reduction rate of any distributed estimator.
Abstract: This paper considers gossip distributed estimation of a (static) distributed random field (a.k.a. large-scale unknown parameter vector) observed by sparsely interconnected sensors, each of which only observes a small fraction of the field. We consider linear distributed estimators whose structure combines the information flow among sensors (the consensus term resulting from the local gossiping exchange among sensors when they are able to communicate) and the information gathering measured by the sensors (the sensing or innovations term). This leads to mixed time-scale algorithms: one time scale is associated with the consensus and the other with the innovations. The paper establishes a distributed observability condition (global observability plus mean connectedness) under which the distributed estimates are consistent and asymptotically normal. We introduce a distributed notion equivalent to the (centralized) Fisher information rate, which is a bound on the mean square error reduction rate of any distributed estimator; we show that under the appropriate modeling and structural network communication conditions (gossip protocol) the distributed gossip estimator attains this distributed Fisher information rate, asymptotically achieving the performance of the optimal centralized estimator. Finally, we study the behavior of the distributed gossip estimator when the measurements fade (noise variance grows) with time; in particular, we determine the maximum rate at which the noise variance can grow while the distributed estimator remains consistent, showing that, as long as the centralized estimator is consistent, the distributed estimator is consistent as well.
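As a rough illustration of the consensus + innovations structure described above (not the paper's exact algorithm, which uses random gossip communication and time-varying gains), each sensor in the sketch below mixes its neighbors' estimates and corrects with its own local measurement. The three-sensor network, the component-wise observation model, and the fixed gains are assumptions made for the example.

```python
import numpy as np

# Minimal consensus + innovations sketch: sensor i updates
#   x_i <- x_i - BETA * sum_j (x_i - x_j) + ALPHA * H_i^T (y_i - H_i x_i).
# Each sensor sees only one component of the field, so no sensor is locally
# observable, but the network is globally observable.

rng = np.random.default_rng(1)
theta = np.array([1.0, -2.0, 0.5])              # unknown 3-dim field
H = [np.eye(3)[i:i + 1] for i in range(3)]      # sensor i observes component i
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}   # fully connected 3-node network
x = [np.zeros(3) for _ in range(3)]             # local estimates
ALPHA, BETA = 0.05, 0.2                         # innovation and consensus gains

for t in range(2000):
    y = [H[i] @ theta + 0.1 * rng.normal(size=1) for i in range(3)]
    x_new = []
    for i in range(3):
        consensus = sum(x[i] - x[j] for j in neighbors[i])
        innovation = H[i].T @ (y[i] - H[i] @ x[i])
        x_new.append(x[i] - BETA * consensus + ALPHA * innovation)
    x = x_new

print([np.round(xi, 2) for xi in x])            # all estimates near theta
```

The two terms operate on the mixed time scales the abstract mentions: consensus spreads each sensor's partial view through the network, while the innovations anchor the shared estimate to the measurements.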

277 citations


Network Information
Related Topics (5)
Server: 79.5K papers, 1.4M citations (94% related)
Scheduling (computing): 78.6K papers, 1.3M citations (91% related)
Network packet: 159.7K papers, 2.2M citations (91% related)
Wireless network: 122.5K papers, 2.1M citations (91% related)
Wireless sensor network: 142K papers, 2.4M citations (89% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    81
2022    135
2021    583
2020    759
2019    876
2018    845