Topic

Distributed algorithm

About: Distributed algorithm is a research topic. Over its lifetime, 20,416 publications have been published within this topic, receiving 548,109 citations.


Papers
Posted Content
TL;DR: The initial results show that this interface can be used to build distributed implementations of a wide variety of common Machine Learning algorithms with minimal complexity and highly competitive performance and scalability.
Abstract: MLI is an Application Programming Interface designed to address the challenges of building Machine Learning algorithms in a distributed setting based on data-centric computing. Its primary goal is to simplify the development of high-performance, scalable, distributed algorithms. Our initial results show that, relative to existing systems, this interface can be used to build distributed implementations of a wide variety of common Machine Learning algorithms with minimal complexity and highly competitive performance and scalability.

166 citations

Journal ArticleDOI
TL;DR: A resource-aware hybrid scheduling algorithm suitable for Heterogeneous Distributed Computing, especially for modern High-Performance Computing (HPC) systems in which applications are modeled with various requirements (both I/O- and compute-intensive), with an emphasis on data from multimedia applications.

166 citations

Proceedings ArticleDOI
14 Apr 2013
TL;DR: This paper develops a distributed algorithm based on the alternating direction method of multipliers (ADMM) that allows for a parallel implementation in a datacenter where each server solves a small sub-problem and converges to near optimum within tens of iterations.
Abstract: Many cloud services are running on geographically distributed datacenters for better reliability and performance. We consider the emerging problem of joint request mapping and response routing with distributed datacenters in this paper. We formulate the problem as a general workload management optimization. A utility function is used to capture various performance goals, and the location diversity of electricity and bandwidth costs is realistically modeled. To solve the large-scale optimization, we develop a distributed algorithm based on the alternating direction method of multipliers (ADMM). Following a decomposition-coordination approach, our algorithm allows for a parallel implementation in a datacenter where each server solves a small sub-problem. The solutions are coordinated to find an optimal solution to the global problem. Our algorithm converges to near optimum within tens of iterations, and is insensitive to step sizes. We empirically evaluate our algorithm based on real-world workload traces and latency measurements, and demonstrate its effectiveness compared to conventional methods.

166 citations
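
The decomposition-coordination structure described in the abstract above can be illustrated with a generic consensus-ADMM sketch. This is not the paper's formulation: the local costs are stand-in quadratics, and the penalty parameter rho, the averaging coordinator, and the toy inputs are assumptions made purely for illustration.

    # Minimal consensus-ADMM sketch (generic, not the paper's model):
    # each "datacenter" i holds a local quadratic cost f_i(x) = 0.5*a_i*(x - b_i)^2
    # and all servers agree on a shared decision z via the standard updates
    #   x_i <- argmin f_i(x) + (rho/2)(x - z + u_i)^2
    #   z   <- mean_i(x_i + u_i)
    #   u_i <- u_i + x_i - z
    import numpy as np

    def consensus_admm(a, b, rho=1.0, iters=50):
        n = len(a)
        x = np.zeros(n)   # local decisions, one per datacenter
        u = np.zeros(n)   # scaled dual variables
        z = 0.0           # global consensus variable
        for _ in range(iters):
            # local step: closed-form minimizer of each quadratic sub-problem
            x = (a * b + rho * (z - u)) / (a + rho)
            # coordination step: gather-and-average, then dual update
            z = np.mean(x + u)
            u = u + x - z
        return z

    # toy run: three datacenters with different cost curvatures and targets
    print(consensus_admm(np.array([1.0, 2.0, 4.0]), np.array([3.0, 1.0, 2.0])))

Each x-update is independent of the others, which is what makes the per-server parallel implementation possible; only the averaging step requires coordination.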

Journal ArticleDOI
01 Sep 2007
TL;DR: An asynchronous, probabilistic neighbor discovery algorithm is presented that permits each node in the network to develop a list of its neighbors, which may be incomplete, and parameter settings are derived which maximize the fraction of neighbors discovered in a fixed running time.
Abstract: We consider the problem of determining, in a distributed, asynchronous and scalable manner, what nodes are "neighbors" in a wireless network. Neighbor discovery is an important enabler of network connectivity and energy conservation. An asynchronous, probabilistic neighbor discovery algorithm is presented that permits each node in the network to develop a list of its neighbors, which may be incomplete. The algorithm is analyzed and parameter settings are derived which maximize the fraction of neighbors discovered in a fixed running time. A companion distributed algorithm is also described which allows all the nodes in the network to execute that neighbor discovery algorithm without the need to agree on a common start time.

165 citations
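
As a rough illustration of probabilistic neighbor discovery, the following simulation sketch has each node transmit in a slot with some probability and listen otherwise; a listener learns a neighbor only when exactly one node transmits. The slotted, single-collision-domain model and the parameters n_nodes, p_tx, and slots are simplifying assumptions; the paper's algorithm is asynchronous and derives its parameter settings analytically.

    # Idealized simulation sketch of slotted, probabilistic neighbor discovery
    # (assumes one collision domain and synchronized slots for simplicity).
    import random

    def discover(n_nodes=20, p_tx=0.1, slots=500, seed=0):
        rng = random.Random(seed)
        discovered = {i: set() for i in range(n_nodes)}
        for _ in range(slots):
            transmitting = [i for i in range(n_nodes) if rng.random() < p_tx]
            # a listening node decodes a transmission only if exactly one
            # neighbor transmitted in this slot; otherwise a collision occurs
            if len(transmitting) == 1:
                sender = transmitting[0]
                for i in range(n_nodes):
                    if i != sender:
                        discovered[i].add(sender)
        # fraction of neighbor relations discovered within the time budget
        total = sum(len(s) for s in discovered.values())
        return total / (n_nodes * (n_nodes - 1))

    print(discover())

In this toy model the fraction discovered depends on the transmit probability and the time budget, which mirrors the parameter-tuning question the paper analyzes.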

Proceedings ArticleDOI
25 Jul 2005
TL;DR: It is shown that LA-DCOP convincingly outperforms competing distributed task allocation algorithms while using orders of magnitude fewer messages, allowing a dramatic scale-up in extreme teams, up to a fully distributed, proxy-based team of 200 agents.
Abstract: Extreme teams, large-scale agent teams operating in dynamic environments, are on the horizon. Such environments are problematic for current task allocation algorithms due to the lack of locality in agent interactions. We propose a novel distributed task allocation algorithm for extreme teams, called LA-DCOP, that incorporates three key ideas. First, LA-DCOP's task allocation is based on a dynamically computed minimum capability threshold which uses approximate knowledge of overall task load. Second, LA-DCOP uses tokens to represent tasks and further minimize communication. Third, it creates potential tokens to deal with inter-task constraints of simultaneous execution. We show that LA-DCOP convincingly outperforms competing distributed task allocation algorithms while using orders of magnitude fewer messages, allowing a dramatic scale-up in extreme teams, up to a fully distributed, proxy-based team of 200 agents. Varying thresholds are seen as a key to outperforming competing distributed algorithms in the domain of simulated disaster rescue.

165 citations
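
The token-and-threshold idea from the abstract above can be sketched as follows. This is a simplified illustration, not LA-DCOP itself: the threshold is fixed rather than dynamically computed from task load, potential tokens and inter-task constraints are omitted, and the agents, capacities, and capability scores are invented for the example.

    # Simplified token-passing sketch in the spirit of threshold-based task
    # allocation: each task is a token that circulates among agents; an agent
    # keeps a token only if its capability exceeds the threshold and it still
    # has spare capacity, otherwise it forwards the token to another agent.
    import random

    random.seed(1)

    AGENTS = [{"id": i, "capability": random.random(), "capacity": 2, "tasks": []}
              for i in range(10)]

    def allocate(tasks, threshold=0.5, max_hops=50):
        for task in tasks:
            holder = random.choice(AGENTS)         # token enters at a random agent
            for _ in range(max_hops):
                fits = (holder["capability"] >= threshold
                        and len(holder["tasks"]) < holder["capacity"])
                if fits:
                    holder["tasks"].append(task)   # agent retains the token
                    break
                holder = random.choice(AGENTS)     # otherwise pass the token on

    allocate([f"task-{k}" for k in range(8)])
    for a in AGENTS:
        print(a["id"], a["tasks"])

Because decisions are made locally against a threshold and tasks move as tokens, no agent needs a global view of the team, which is the source of the low message overhead highlighted in the paper.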


Network Information
Related Topics (5)
Server: 79.5K papers, 1.4M citations, 94% related
Scheduling (computing): 78.6K papers, 1.3M citations, 91% related
Network packet: 159.7K papers, 2.2M citations, 91% related
Wireless network: 122.5K papers, 2.1M citations, 91% related
Wireless sensor network: 142K papers, 2.4M citations, 89% related
Performance
Metrics
No. of papers in the topic in previous years:
Year    Papers
2023    81
2022    135
2021    583
2020    759
2019    876
2018    845