
Showing papers on "Distributed algorithm published in 2020"


Journal ArticleDOI
TL;DR: This paper studies secure cooperative event-triggered control of linear multiagent systems under denial-of-service (DoS) attacks and shows that based on the proposed distributed algorithms, all the agents can achieve secure consensus exponentially.
Abstract: This paper studies secure cooperative event-triggered control of linear multiagent systems under denial-of-service (DoS) attacks. The DoS attacks refer to interruptions of communication channels carried out by an intelligent adversary. We consider a class of time-sequence-based DoS attacks allowed to occur aperiodically. The frequency and duration of DoS attacks are then explicitly analyzed for both secure leaderless and leader-following consensus problems. A resilient cooperative event-triggered control scheme is developed, and the scheduling of controller updating times is determined in the presence of DoS attacks. It is shown that, based on the proposed distributed algorithms, all the agents achieve secure consensus exponentially. The effectiveness of the developed methods is illustrated through three case studies: 1) multiple robot coordination; 2) distributed voltage regulation of power networks; and 3) distributed cooperative control of unstable dynamic systems.

181 citations


Proceedings ArticleDOI
22 Jun 2020
TL;DR: A simple polylogarithmic-time deterministic distributed algorithm for network decomposition is presented, implying that any polylogarithmic-time randomized distributed algorithm for a problem whose solution can be checked deterministically in polylogarithmic time can be derandomized into a polylogarithmic-time deterministic one.
Abstract: We present a simple polylogarithmic-time deterministic distributed algorithm for network decomposition. This improves on a celebrated $2^{O(\sqrt{\log n})}$-time algorithm of Panconesi and Srinivasan [STOC’92] and settles a central and long-standing question in distributed graph algorithms. It also leads to the first polylogarithmic-time deterministic distributed algorithms for numerous other problems, hence resolving several well-known and decades-old open problems, including Linial’s question about the deterministic complexity of maximal independent set [FOCS’87; SICOMP’92], which had been called the most outstanding problem in the area. The main implication is a more general distributed derandomization theorem: Put together with the results of Ghaffari, Kuhn, and Maus [STOC’17] and Ghaffari, Harris, and Kuhn [FOCS’18], our network decomposition implies that P-RLOCAL = P-LOCAL. That is, for any problem whose solution can be checked deterministically in polylogarithmic-time, any polylogarithmic-time randomized algorithm can be derandomized to a polylogarithmic-time deterministic algorithm. Informally, for the standard first-order interpretation of efficiency as polylogarithmic-time, distributed algorithms do not need randomness for efficiency. By known connections, our result leads also to substantially faster randomized distributed algorithms for a number of well-studied problems including (Δ+1)-coloring, maximal independent set, and Lovász Local Lemma, as well as massively parallel algorithms for (Δ+1)-coloring.

129 citations


Proceedings ArticleDOI
06 Jul 2020
TL;DR: Results obtained using a self-driving car dataset and several DNN benchmarks show that the proposed solution significantly reduces the total latency of DNN inference compared to other distributed approaches and is 2.6 to 4.2 times faster than the state of the art.
Abstract: Deep neural networks (DNN) are the de-facto solution behind many intelligent applications of today, ranging from machine translation to autonomous driving. DNNs are accurate but resource-intensive, especially for embedded devices such as mobile phones and smart objects in the Internet of Things. To overcome the related resource constraints, DNN inference is generally offloaded to the edge or to the cloud. This is accomplished by partitioning the DNN and distributing computations at the two different ends. However, most existing solutions simply split the DNN into two parts, one running locally or at the edge, and the other one in the cloud. In contrast, this article proposes a technique to divide a DNN into multiple partitions that can be processed locally by end devices or offloaded to one or multiple powerful nodes, such as in fog networks. The proposed scheme includes both an adaptive DNN partitioning scheme and a distributed algorithm to offload computations based on a matching game approach. Results obtained using a self-driving car dataset and several DNN benchmarks show that the proposed solution significantly reduces the total latency of DNN inference compared to other distributed approaches and is 2.6 to 4.2 times faster than the state of the art.
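The matching-game component can be illustrated with a standard deferred-acceptance (Gale-Shapley) sketch, a common building block for such offloading schemes; the partition and node names, preference lists, and unit capacities below are illustrative stand-ins rather than the paper's actual utility model.

```python
# Deferred-acceptance matching between DNN partitions (proposers) and fog
# nodes (capacity 1 each here); unmatched partitions run locally.
part_pref = {"p0": ["n0", "n1"], "p1": ["n0", "n1"], "p2": ["n1", "n0"]}
node_pref = {"n0": ["p1", "p0", "p2"], "n1": ["p0", "p2", "p1"]}

match = {}                                  # node -> partition
free = list(part_pref)                      # partitions still proposing
nxt = {p: 0 for p in part_pref}             # index of the next node to propose to
while free:
    p = free.pop(0)
    if nxt[p] >= len(part_pref[p]):
        continue                            # preference list exhausted: run locally
    n = part_pref[p][nxt[p]]
    nxt[p] += 1
    cur = match.get(n)
    if cur is None:
        match[n] = p                        # node accepts its first proposal
    elif node_pref[n].index(p) < node_pref[n].index(cur):
        match[n] = p                        # node upgrades to a preferred partition
        free.append(cur)                    # the displaced partition proposes again
    else:
        free.append(p)                      # rejected; p tries its next node
print(match)   # stable matching: {'n0': 'p1', 'n1': 'p0'}; p2 stays local
```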

108 citations


Journal ArticleDOI
TL;DR: An overview of distributed gradient methods for solving convex machine learning problems; these methods typically involve two update steps: a gradient step based on the agent's local objective function and a mixing step that essentially diffuses relevant information from one to all other agents in the network.
Abstract: This article provides an overview of distributed gradient methods for solving convex machine learning problems of the form $\min_{x \in \mathbb{R}^n} (1/m) \sum_{i=1}^{m} f_i(x)$ in a system consisting of $m$ agents that are embedded in a communication network. Each agent $i$ has a collection of data captured by its privately known objective function $f_i(x)$. The distributed algorithms considered here obey two simple rules: privately known agent functions $f_i(x)$ cannot be disclosed to any other agent in the network, and every agent is aware of the local connectivity structure of the network, i.e., it knows its one-hop neighbors only. While obeying these two rules, the distributed algorithms that agents execute should find a solution to the overall system problem with the limited knowledge of the objective function and limited local communications. This article gives an overview of such algorithms, which typically involve two update steps: a gradient step based on the agent's local objective function and a mixing step that essentially diffuses relevant information from one to all other agents in the network.
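A minimal sketch of this two-step template, assuming quadratic local objectives $f_i(x) = 0.5\|x - b_i\|^2$ and a doubly stochastic mixing matrix on a four-agent ring; all problem data are illustrative.

```python
import numpy as np

# Four agents on a ring; W is doubly stochastic (Metropolis-style weights).
m, n = 4, 2
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

rng = np.random.default_rng(0)
b = rng.normal(size=(m, n))     # f_i(x) = 0.5*||x - b_i||^2; optimum of the sum: mean(b)

x = np.zeros((m, n))            # row i is agent i's estimate; raw data is never shared
alpha = 0.1                     # fixed step-size
for _ in range(300):
    grad = x - b                # local gradients, computed privately
    x = W @ x - alpha * grad    # mixing step, then gradient step

# With a fixed step-size, each agent reaches an O(alpha) neighborhood of mean(b).
print(np.linalg.norm(x - b.mean(axis=0), axis=1))
```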

105 citations


Journal ArticleDOI
TL;DR: Simulation results show that the proposed model-free deep reinforcement learning-based distributed algorithm can better exploit the processing capacities of the edge nodes and significantly reduce the ratio of dropped tasks and average delay when compared with several existing algorithms.
Abstract: In mobile edge computing systems, an edge node may have a high load when a large number of mobile devices offload their tasks to it. Those offloaded tasks may experience large processing delay or even be dropped when their deadlines expire. Due to the uncertain load dynamics at the edge nodes, it is challenging for each device to determine its offloading decision (i.e., whether to offload or not, and which edge node it should offload its task to) in a decentralized manner. In this work, we consider non-divisible and delay-sensitive tasks as well as edge load dynamics, and formulate a task offloading problem to minimize the expected long-term cost. We propose a model-free deep reinforcement learning-based distributed algorithm, where each device can determine its offloading decision without knowing the task models and offloading decision of other devices. To improve the estimation of the long-term cost in the algorithm, we incorporate the long short-term memory (LSTM), dueling deep Q-network (DQN), and double-DQN techniques. Simulation results show that our proposed algorithm can better exploit the processing capacities of the edge nodes and significantly reduce the ratio of dropped tasks and average delay when compared with several existing algorithms.
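One ingredient named above, the double-DQN target, is easy to isolate: the online network selects the next action while the target network evaluates it, which curbs the overestimation bias of vanilla Q-learning. The Q-values below are random stand-ins for trained networks, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_actions = 3                           # e.g., process locally, or offload to edge node 1 or 2
q_online = rng.normal(size=n_actions)   # stand-in for the online network's Q(s', .)
q_target = rng.normal(size=n_actions)   # stand-in for the target network's Q(s', .)

reward, gamma, done = -0.5, 0.99, False

# Double DQN: the online network picks the action, the target network scores it.
a_star = int(np.argmax(q_online))
y = reward + (0.0 if done else gamma * q_target[a_star])
print(f"TD target: {y:.3f}")
```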

88 citations


Journal ArticleDOI
TL;DR: A novel distributed dynamic event-triggered Newton-Raphson algorithm is proposed to solve the double-mode energy management problem in a fully distributed fashion and it is proved that each participant can asymptotically converge to the global optimal point.
Abstract: The islanded and network-connected modes are expected to be modeled in a unified form as well as in a distributed fashion for multi-energy systems. In this way, the adaptability and flexibility of a multi-energy system can be enhanced. To this aim, this paper establishes a double-mode energy management model for the multi-energy system, which is formed by many energy bodies. With such a model, each participant is able to adaptively respond to mode switching. Furthermore, a novel distributed dynamic event-triggered Newton-Raphson algorithm is proposed to solve the double-mode energy management problem in a fully distributed fashion. In this method, the idea of Newton descent along with the dynamic event-triggered communication strategy is introduced and embedded in the execution of the proposed algorithm. With this effort, each participant can adequately utilize second-order information to speed up convergence, without affecting optimality. Meanwhile, the proposed algorithm can be implemented with asynchronous communication and without needing special initialization conditions. It exhibits better flexibility and adaptability, especially when the system modes are changed. In addition, the continuous-time algorithm is executed with discrete-time communication driven by the proposed dynamic event-triggered mechanism, which reduces communication interaction and avoids the need for continuous-time information transmission. It is also proved that each participant asymptotically converges to the global optimal point. Finally, simulation results show the effectiveness of the proposed model and illustrate the faster convergence of the proposed algorithm.
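The communication-saving pattern can be conveyed with a toy static event-trigger on plain average consensus: an agent rebroadcasts only when its state has drifted from its last broadcast by more than a threshold. This is only a sketch of the triggering idea; the paper's mechanism is richer (a dynamic trigger with an internal auxiliary variable, embedded in a Newton-Raphson iteration).

```python
import numpy as np

# Toy event-triggered average consensus on a 3-agent path graph.
x = np.array([1.0, 4.0, 7.0])       # local states; the average is 4
x_hat = x.copy()                    # last broadcast value of each agent
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], float)    # adjacency matrix
deg = A.sum(axis=1)
eps, alpha, events = 0.05, 0.2, 0

for _ in range(300):
    fire = np.abs(x - x_hat) > eps  # trigger: rebroadcast only on sufficient drift
    x_hat[fire] = x[fire]
    events += int(fire.sum())
    # The update uses only broadcast (possibly stale) values, including one's own.
    x = x + alpha * (A @ x_hat - deg * x_hat)

print(np.round(x, 2), "broadcast events:", events)   # near-consensus at ~4, few events
```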

86 citations


Journal ArticleDOI
08 Mar 2020
TL;DR: In this paper, the authors review state-of-the-art algorithms in the context of federated learning, two widely used learning models, namely the deep neural network model and the Gaussian process model, and various distributed model hyper-parameter optimization schemes.
Abstract: In this overview paper, data-driven learning model-based cooperative localization and location data processing are considered, in line with the emerging machine learning and big data methods. We first review (1) state-of-the-art algorithms in the context of federated learning, (2) two widely used learning models, namely the deep neural network model and the Gaussian process model, and (3) various distributed model hyper-parameter optimization schemes. Then, we demonstrate various practical use cases that are summarized from a mixture of standard, newly published, and unpublished works, which cover a broad range of location services, including collaborative static localization/fingerprinting, indoor target tracking, outdoor navigation using low-sampling GPS, and spatio-temporal wireless traffic data modeling and prediction. Experimental results show that near-centralized data-fitting and prediction performance can be achieved by a set of collaborative mobile users running distributed algorithms. All the surveyed use cases fall under our newly proposed Federated Localization (FedLoc) framework, which aims to collaboratively build accurate location services without sacrificing user privacy, in particular, sensitive information related to their geographical trajectories. Future research directions are also discussed at the end of this paper.
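A minimal sketch of the federated-averaging pattern underlying these use cases, assuming each user privately fits a one-parameter linear model and only model parameters ever leave the device; all names and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n_users, rounds, local_steps, lr = 5, 20, 10, 0.1

# Each user holds private noisy samples of the same linear model y = 2.0 * t.
datasets = []
for _ in range(n_users):
    t = rng.uniform(0, 1, 30)
    datasets.append((t, 2.0 * t + 0.1 * rng.normal(size=t.size)))

w_global = 0.0
for _ in range(rounds):
    w_locals = []
    for t, y in datasets:                # local training: raw data never leaves the user
        w = w_global
        for _ in range(local_steps):
            w -= lr * 2 * np.mean((w * t - y) * t)   # gradient of the local MSE
        w_locals.append(w)
    w_global = float(np.mean(w_locals))  # the server only averages returned models

print(f"federated estimate of the slope: {w_global:.3f}")   # close to 2.0
```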

85 citations


Posted Content
TL;DR: In this paper, a model-free deep reinforcement learning-based distributed algorithm was proposed to minimize the expected long-term cost of task offloading in mobile edge computing systems, where each device determines its offloading decision without knowing the task models and offloading decisions of other devices.
Abstract: In mobile edge computing systems, an edge node may have a high load when a large number of mobile devices offload their tasks to it. Those offloaded tasks may experience large processing delay or even be dropped when their deadlines expire. Due to the uncertain load dynamics at the edge nodes, it is challenging for each device to determine its offloading decision (i.e., whether to offload or not, and which edge node it should offload its task to) in a decentralized manner. In this work, we consider non-divisible and delay-sensitive tasks as well as edge load dynamics, and formulate a task offloading problem to minimize the expected long-term cost. We propose a model-free deep reinforcement learning-based distributed algorithm, where each device can determine its offloading decision without knowing the task models and offloading decision of other devices. To improve the estimation of the long-term cost in the algorithm, we incorporate the long short-term memory (LSTM), dueling deep Q-network (DQN), and double-DQN techniques. Simulation results with 50 mobile devices and five edge nodes show that the proposed algorithm can reduce the ratio of dropped tasks and average task delay by 86.4%-95.4% and 18.0%-30.1%, respectively, when compared with several existing algorithms.

85 citations


Journal ArticleDOI
TL;DR: An alternating optimization procedure integrating a column-and-constraint generation algorithm and an alternating direction method of multipliers to solve the DAR-VVC problem is developed.
Abstract: This paper proposes a distributed adaptive robust voltage/var control (DAR-VVC) method in active distribution networks to minimize power loss while respecting operating constraints under uncertainty. The DAR-VVC aims to coordinate on-load tap changers, capacitor banks and PV inverters in multiple operation stages through a distributed algorithm. To improve the efficiency of the distributed algorithm, an affinity propagation clustering algorithm is employed to divide the distribution network by aggregating “the close nodes” together and setting “the far nodes” apart, leading to a network partition in which the information exchange between adjacent sub-networks is reduced. Moreover, the virtual load, which describes load characteristics of the sub-networks, is applied to enhance the boundary conditions. To fully deal with the uncertainties, the proposed DAR-VVC is formulated as a robust optimization model which considers the worst case to guarantee solution robustness against uncertainty realization. Besides, this paper develops an alternating optimization procedure integrating a column-and-constraint generation algorithm and an alternating direction method of multipliers to solve the DAR-VVC problem. The proposed approach is tested on the IEEE 33-bus and IEEE 123-bus distribution test systems, and numerical simulations verify the high efficiency and full solution robustness of the DAR-VVC.

80 citations


Journal ArticleDOI
TL;DR: The proposed distributed algorithm not only ensures Nash equilibrium seeking for aggregative games but also achieves rejection of external disturbances.
Abstract: In this paper, we study the distributed Nash equilibrium (NE) seeking problem for a class of aggregative games with players described by uncertain perturbed nonlinear dynamics. To seek the NE, each player needs to construct a distributed algorithm based on information of its cost function and the information exchanged with its neighbors. By combining the internal model principle and the average consensus technique, we propose a distributed gradient-based algorithm for the players. The proposed design not only ensures NE seeking for the aggregative game but also achieves rejection of external disturbances.
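A sketch of gradient play with consensus-based tracking of the aggregate, for a toy quadratic aggregative game with cost $J_i(x_i, s) = (x_i - a_i)^2 + x_i s$ and aggregate $s = \text{mean}(x)$; the paper's handling of uncertain perturbed nonlinear player dynamics via the internal model principle is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4
a = rng.uniform(1, 2, N)              # private cost parameters
W = np.full((N, N), 1.0 / N)          # complete-graph consensus weights (doubly stochastic)

x = np.zeros(N)                       # players' actions
s = x.copy()                          # local estimates of the aggregate mean(x)
alpha = 0.05
for _ in range(500):
    grad = 2 * (x - a) + s + x / N    # partial gradient, using the local aggregate estimate
    x_new = x - alpha * grad
    s = W @ s + (x_new - x)           # consensus step + tracking of the shifting aggregate
    x = x_new

s_star = 2 * a.mean() / (3 + 1 / N)   # closed-form NE aggregate for this toy game
print(np.round(s, 3), round(s_star, 3))   # local estimates match the NE aggregate
```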

76 citations


Proceedings Article
01 Jan 2020
TL;DR: This paper is the first to solve distributionally robust federated learning with reduced communication, and to analyze the efficiency of local descent methods on distributed minimax problems.
Abstract: In this paper, we study communication-efficient distributed algorithms for distributionally robust federated learning via periodic averaging with adaptive sampling. In contrast to standard empirical risk minimization, due to the minimax structure of the underlying optimization problem, a key difficulty arises from the fact that the global parameter that controls the mixture of local losses can only be updated infrequently on the global stage. To compensate for this, we propose a Distributionally Robust Federated Averaging (DRFA) algorithm that employs a novel snapshotting scheme to approximate the accumulation of history gradients of the mixing parameter. We analyze the convergence rate of DRFA in both convex-linear and nonconvex-linear settings. We also generalize the proposed idea to objectives with regularization on the mixture parameter and propose a proximal variant, dubbed DRFA-Prox, with provable convergence rates. We also analyze an alternative optimization method for regularized cases in strongly-convex-strongly-concave and nonconvex (under the PL condition)-strongly-concave settings. To the best of our knowledge, this paper is the first to solve distributionally robust federated learning with reduced communication, and to analyze the efficiency of local descent methods on distributed minimax problems. We give corroborating experimental evidence for our theoretical results in federated learning settings.

Journal ArticleDOI
TL;DR: This paper studies a class of distributed convex optimization problems over a set of agents, where each agent has access only to its own local convex objective function and each agent's estimate is subject to both a coupling linear constraint and individual box constraints.
Abstract: This paper studies a class of distributed convex optimization problems over a set of agents, where each agent has access only to its own local convex objective function and the estimate of each agent is restricted by both a coupling linear constraint and individual box constraints. Our focus is to devise a distributed primal-dual gradient algorithm for working out the problem over a sequence of time-varying general directed graphs. The communications among agents are assumed to be uniformly strongly connected. A column-stochastic mixing matrix and a fixed step-size are applied in the algorithm, which steers all the agents to asymptotically converge to a global optimal solution. Based on the standard strong convexity and smoothness assumptions on the objective functions, we show that the distributed algorithm drives the whole network to converge geometrically to an optimal solution of the convex optimization problem, provided the step-size does not exceed an explicit upper bound. We also give an explicit analysis of the convergence rate of the proposed optimization algorithm. Simulations on economic dispatch problems and demand response problems in power systems are performed to illustrate the effectiveness of the proposed optimization algorithm.

Journal ArticleDOI
15 May 2020-Energy
TL;DR: The EH concept is studied in a networked-microgrids structure to exploit the potential capabilities of microgrids in satisfying various types of energy demand, and the solutions obtained by Gurobi are better than those of heuristic algorithms.

Journal ArticleDOI
TL;DR: Distributed iterative algorithms that enable the components of a multicomponent system, each with some integer initial value, to asymptotically compute the average of their initial values, without having to reveal to other components the specific value they contribute to the average calculation.
Abstract: In this article, we develop distributed iterative algorithms that enable the components of a multicomponent system, each with some integer initial value, to asymptotically compute the average of their initial values, without having to reveal to other components the specific value they contribute to the average calculation. We assume a communication topology captured by an arbitrary strongly connected digraph, in which certain nodes (components) might be curious but not malicious (i.e., they execute the distributed protocol correctly, but try to identify the initial values of other nodes). We first develop a variation of the so-called ratio consensus algorithm that operates exclusively on integer values and can be used by the nodes to asymptotically obtain the average of their initial (integer) values, by taking the ratio of two integer values they maintain and iteratively update. Assuming the presence of a trusted node (i.e., a node that is not curious and can be trusted to set up a cryptosystem and not reveal any decrypted values of messages it receives), we describe how this algorithm can be adjusted using homomorphic encryption to allow the nodes to obtain the average of their initial values while ensuring their privacy (i.e., without having to reveal their initial value). We also extend the algorithm to handle situations where multiple nodes set up cryptosystems and privacy is preserved as long as one of these nodes can be trusted (i.e., the ratio of trusted nodes over the nodes that set up cryptosystems decreases).
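A sketch of the underlying (non-private) ratio-consensus iteration on a small strongly connected digraph: each node runs two mass-preserving linear iterations with a column-stochastic matrix and takes their ratio. The homomorphic-encryption layer that provides the paper's privacy guarantee is not reproduced here.

```python
import numpy as np

# Ratio consensus (push-sum) on a directed 3-cycle with an extra edge 2 -> 1.
# Column-stochastic P: node j splits its mass evenly over {itself} U out-neighbors.
out = {0: [0, 1], 1: [1, 2], 2: [2, 0, 1]}
n = 3
P = np.zeros((n, n))
for j, dests in out.items():
    for i in dests:
        P[i, j] = 1.0 / len(dests)

v0 = np.array([5.0, 9.0, 13.0])   # integer initial values; the true average is 9
y, z = v0.copy(), np.ones(n)      # numerator and denominator iterates
for _ in range(100):
    y, z = P @ y, P @ z           # both iterations preserve total mass
print(np.round(y / z, 4))         # every node's ratio converges to 9.0
```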

Journal ArticleDOI
TL;DR: This article divides statistical inference and learning algorithms into two broad categories, namely, distributed algorithms and decentralized algorithms (see "Is It Distributed or Is It Decentralized?").
Abstract: Statistical inference and machine-learning algorithms have traditionally been developed for data available at a single location. Unlike this centralized setting, modern data sets are increasingly being distributed across multiple physical entities (sensors, devices, machines, data centers, and so on) for a multitude of reasons that range from storage, memory, and computational constraints to privacy concerns and engineering needs. This has necessitated the development of inference and learning algorithms capable of operating on noncolocated data. For this article, we divide such algorithms into two broad categories, namely, distributed algorithms and decentralized algorithms (see "Is It Distributed or Is It Decentralized?").

Journal ArticleDOI
TL;DR: This article proposes a general distributed asynchronous algorithmic framework whereby agents can update their local variables as well as communicate with their neighbors at any time, without any form of coordination, and proves that this is the first distributed algorithm with provable geometric convergence rate in such a general asynchronous setting.
Abstract: This article studies multiagent (convex and nonconvex) optimization over static digraphs. We propose a general distributed asynchronous algorithmic framework whereby 1) agents can update their local variables as well as communicate with their neighbors at any time, without any form of coordination; and 2) they can perform their local computations using (possibly) delayed, out-of-sync information from the other agents. Delays need not be known to the agent or obey any specific profile, and can also be time-varying (but bounded). The algorithm builds on a tracking mechanism that is robust against asynchrony (in the above sense), whose goal is to estimate locally the average of agents’ gradients. When applied to strongly convex functions, we prove that it converges at an R-linear (geometric) rate as long as the step-size is sufficiently small. A sublinear convergence rate is proved, when nonconvex problems and/or diminishing, uncoordinated step-sizes are considered. To the best of our knowledge, this is the first distributed algorithm with provable geometric convergence rate in such a general asynchronous setting. Preliminary numerical results demonstrate the efficacy of the proposed algorithm and validate our theoretical findings.
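The tracking mechanism has a well-known synchronous special case, sketched below: each agent pairs its estimate with a tracker of the network-average gradient, which (unlike plain fixed-step diffusion) converges to the exact optimum at a geometric rate. The asynchrony, delays, and nonconvex analysis of the paper are omitted, and the quadratic objectives are illustrative.

```python
import numpy as np

# Synchronous gradient tracking on a 4-agent ring (asynchrony omitted).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
b = np.array([1.0, 2.0, 3.0, 6.0])   # f_i(x) = 0.5*(x - b_i)^2, optimum = mean(b) = 3

x = np.zeros(4)
g = x - b                  # local gradients at the initial point
y = g.copy()               # tracker of the network-average gradient
alpha = 0.2
for _ in range(100):
    x = W @ x - alpha * y
    g_new = x - b
    y = W @ y + g_new - g  # tracking update: mean(y) stays equal to the mean gradient
    g = g_new

print(np.round(x, 4))      # all agents reach the exact optimum 3.0 (geometric rate)
```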

Journal ArticleDOI
TL;DR: A distributed optimization algorithm, combined with a continuous integral sliding-mode control scheme, is proposed to solve this finite-time optimization problem of multiagent systems in the presence of disturbances, while rejecting local disturbance signals.
Abstract: This paper presents continuous distributed algorithms to solve the finite-time distributed convex optimization problems of multiagent systems in the presence of disturbances. The objective is to design distributed algorithms such that a team of agents seeks to minimize the sum of local objective functions in a finite-time and robust manner. Specifically, a distributed optimization algorithm, combined with a continuous integral sliding-mode control scheme, is proposed to solve this finite-time optimization problem while rejecting local disturbance signals. The developed algorithm is further applied to solve economic dispatch and resource allocation problems, and it is proven that, under the proposed schemes, the optimal solution is achieved in finite time while satisfying both global equality and local inequality constraints. Examples and numerical simulations are provided to show the effectiveness of the proposed methods.

Journal ArticleDOI
TL;DR: A novel fully distributed algorithm is proposed to address the DEDP, based on the alternating direction method of multipliers (ADMM) and distributed consensus theory for multiagent systems, handling supply-demand constraints, capacity limit constraints, and ramp rate constraints.
Abstract: This paper proposes a novel distributed approach to solve a new dynamic economic dispatch problem (DEDP) in which an environmental cost function and ramp rate constraints are taken into consideration in an islanded microgrid. In our proposed optimization model, the environmental cost function with an E-exponential term and ramp rate constraints are considered to make the optimization problem more practical. A novel fully distributed algorithm is then proposed to address the DEDP, based on the alternating direction method of multipliers (ADMM) and distributed consensus theory for multiagent systems. A Lambert $W$ function is employed to tackle the E-exponential term in the environmental cost function, which differs from most existing papers that discuss the DEDP only with quadratic cost functions. A parallel projection method based on ADMM is used to deal with the ramp rate constraints. In addition, power balance is guaranteed at every iteration provided the sum of the initial power outputs equals the total demand. Therefore, the proposed algorithm can deal with the supply-demand constraints, capacity limit constraints, and ramp rate constraints. Finally, simulations on the IEEE 14-bus system are introduced to further illustrate the effectiveness of the proposed algorithm.
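To see how a Lambert W function can tackle an exponential term, consider a first-order condition of the illustrative form $A p + B e^{cp} = K$ (a stand-in chosen for simplicity, not the paper's actual cost model); substituting $u = K - A p$ turns it into $u e^{(c/A)u} = B e^{cK/A}$, which $W$ solves in closed form.

```python
import numpy as np
from scipy.special import lambertw

# Solve A*p + B*exp(c*p) = K in closed form via the Lambert W function.
A, B, c, K = 2.0, 0.5, 0.8, 6.0

# From u * exp((c/A)*u) = B * exp(c*K/A):  (c/A)*u = W((c*B/A) * exp(c*K/A)).
u = (A / c) * lambertw((c * B / A) * np.exp(c * K / A)).real
p = (K - u) / A
print(f"p = {p:.4f}, residual = {A * p + B * np.exp(c * p) - K:.2e}")
```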

Journal ArticleDOI
TL;DR: In this article, a novel fully distributed algorithm based on a relaxation of the primal problem and an elegant exploration of duality theory is proposed for minimizing the sum of local cost functions, each one depending on a local variable, subject to local and coupling constraints.
Abstract: In this paper, we consider a general challenging distributed optimization setup arising in several important network control applications. Agents of a network want to minimize the sum of local cost functions, each one depending on a local variable, subject to local and coupling constraints, with the latter involving all the decision variables. We propose a novel fully distributed algorithm based on a relaxation of the primal problem and an elegant exploration of duality theory. Despite its complex derivation, based on several duality steps, the distributed algorithm has a very simple and intuitive structure. That is, each node finds a primal-dual optimal solution pair of a local relaxed version of the original problem and then updates suitable auxiliary local variables. We prove that agents asymptotically compute their portion of an optimal (feasible) solution of the original problem. This primal recovery property is obtained without any averaging mechanism typically used in dual decomposition methods. To corroborate the theoretical results, we show how the methodology applies to an instance of a distributed model-predictive control scheme in a microgrid control scenario.

Journal ArticleDOI
TL;DR: It is shown that the leaders in each closed and strongly connected component of the network topology will reach a common state and the followers will gradually enter the dynamic convex hull constructed by the leaders, and it is proved that the system matrix can be strictly unstable, and the upper bound of the system Matrix’s spectral radius is explicitly stated.
Abstract: In this contribution, we propose and investigate the containment control issue for general linear multiagent systems (MASs) under the asynchronous setting, where the network topology is not subjected to any structural restrictions and the roles of the leaders and the followers are entirely determined by the network topology. It is assumed that the interaction time instants of each agent, at which this agent interacts with its neighbors, are independent of the other agents’ and can be unevenly distributed. An asynchronous distributed algorithm is proposed to implement the control strategy of linear MASs. The non-negative matrix theory and the composition of binary relations are utilized to handle the asynchronous containment control issue. It is shown that the leaders in each closed and strongly connected component of the network topology will reach a common state and the followers will gradually enter the dynamic convex hull constructed by the leaders. Moreover, it is also proved that the system matrix can be strictly unstable, and the upper bound of the system matrix’s spectral radius is explicitly stated. Finally, two simulation examples are also provided to verify the efficacy of our theoretical results.

Journal ArticleDOI
TL;DR: This article proposes a novel resilient distributed optimization algorithm which exploits the trusted agents which cannot be compromised by adversarial attacks and form a connected dominating set in the original graph to constrain effects of adversarial attack.
Abstract: As the cyber-attack is becoming one of the most challenging threats faced by cyber-physical systems, investigating the effect of cyber-attacks on distributed optimization and designing resilient algorithms are of both theoretical merit and practical value. Most existing works are established on the assumption that the maximum tolerable number of attacks, which depends on the network connectivity, is known by all normal agents. All normal agents will use the maximum number of attacks to decide whether the received information will be used for iterations. In this article, we relax this assumption and propose a novel resilient distributed optimization algorithm. The proposed algorithm exploits the trusted agents, which cannot be compromised by adversarial attacks and form a connected dominating set in the original graph, to constrain the effects of adversarial attacks. It is shown that local variables of all normal and trusted agents converge to the same value under the proposed algorithm. Further, the final solution belongs to the convex set of minimizers of the weighted average of local cost functions of all trusted agents. The upper bound of the distance between the final solution and the optimal one has also been provided. Numerical results are presented to demonstrate the effectiveness of the proposed algorithm.
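For contrast with the trusted-agent design, the classic filtering step that the maximum-attack-number approach relies on (W-MSR-style trimming of the F largest and F smallest received values) can be sketched in simplified form; the states, the single constant adversary, and the complete graph below are all made up for illustration.

```python
import numpy as np

# Simplified W-MSR-style resilient consensus with F = 1: each normal agent
# trims the F largest and F smallest received values before averaging.
F = 1
attack = 100.0                        # value broadcast by the adversarial agent
x = np.array([1.0, 2.0, 4.0, 8.0])   # states of the four normal agents
for _ in range(60):
    new = x.copy()
    for i in range(len(x)):
        recv = np.sort(np.concatenate([np.delete(x, i), [attack]]))
        kept = recv[F:-F]             # the outlier 100.0 is always trimmed away
        new[i] = 0.5 * x[i] + 0.5 * kept.mean()
    x = new
print(np.round(x, 3))   # normal agents agree on a value inside [1, 8]
```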

Journal ArticleDOI
TL;DR: The intuitions and connections behind a core set of popular distributed algorithms are described, emphasizing how to balance computation and communication costs.
Abstract: Distributed learning has become a critical enabler of the massively connected world that many people envision. This article discusses four key elements of scalable distributed processing and real-time intelligence: problems, data, communication, and computation. Our aim is to provide a unique perspective of how these elements should work together in an effective and coherent manner. In particular, we selectively review recent techniques developed for optimizing nonconvex models (i.e., problem classes) that process batch and streaming data (data types) across networks in a distributed manner (communication and computation paradigm). We describe the intuitions and connections behind a core set of popular distributed algorithms, emphasizing how to balance computation and communication costs. Practical issues and future research directions will also be discussed.

Journal ArticleDOI
TL;DR: Two asynchronous algorithms for distributedly seeking generalized Nash equilibria with delayed information in multiagent networks are investigated by preconditioned forward–backward operator splitting, and their convergence is shown by relating them to asynchronous fixed-point iterations, under proper assumptions and fixed and nondiminishing step-size choices.
Abstract: This paper investigates asynchronous algorithms for distributedly seeking generalized Nash equilibria with delayed information in multiagent networks. In the game model, a shared affine constraint couples all players’ local decisions. Each player is assumed to only access its private objective function, private feasible set, and a local block matrix of the affine constraint. We first give an algorithm for the case when each agent is able to fully access all other players’ decisions. By using auxiliary variables related to communication links and the edge Laplacian matrix, each player can carry on its iteration asynchronously with only private data and possibly delayed information from its neighbors. Then, we consider the case when agents cannot know all other players’ decisions, called a partial-decision information case. We introduce a local estimation of the overall agents’ decisions and incorporate consensus dynamics on these local estimations. The two algorithms do not need any centralized clock coordination, fully exploit the local computation resource, and remove the idle time due to waiting for the “slowest” agent. Both algorithms are developed by preconditioned forward–backward operator splitting, and their convergence is shown by relating them to asynchronous fixed-point iterations, under proper assumptions and fixed and nondiminishing step-size choices. Numerical studies verify the algorithms’ convergence and efficiency.

Journal ArticleDOI
TL;DR: This note develops a distributed algorithm to solve a convex optimization problem with coupled constraints, where both coupled equality and inequality constraints are considered, and the algorithm focuses on smooth problems and uses a fixed stepsize to find the exact optimal solution.
Abstract: This note develops a distributed algorithm to solve a convex optimization problem with coupled constraints. Both coupled equality and inequality constraints are considered, where functions in the equality constraints are affine and functions in the inequality constraints are convex. Different from primal-dual subgradient methods with decreasing stepsizes for nonsmooth optimizations, our algorithm focuses on smooth problems and uses a fixed stepsize to find the exact optimal solution. Convergence analysis is derived with rigorous proofs. Our result is also illustrated by simulations.
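The fixed-stepsize primal-dual gradient iteration can be sketched on a toy problem with one coupling equality constraint; for compactness this version keeps a single multiplier, whereas a distributed implementation such as the paper's would maintain local copies of it.

```python
import numpy as np

# min sum_i 0.5*(x_i - a_i)^2   subject to   sum_i x_i = c
a, c = np.array([1.0, 2.0, 6.0]), 6.0
x, lam = np.zeros(3), 0.0
eta = 0.1                              # fixed stepsize, small enough for stability
for _ in range(2000):
    x -= eta * (x - a + lam)           # primal gradient step on the Lagrangian
    lam += eta * (x.sum() - c)         # dual ascent step on the multiplier
lam_star = (a.sum() - c) / len(a)      # closed-form multiplier for this toy problem
print(np.round(x, 3), round(lam, 3), lam_star)   # x = a - lam_star = [0, 1, 5], lam = 1.0
```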

Journal ArticleDOI
TL;DR: This article proposes a fully distributed algorithm to address the EDP over directed networks and takes into account communication delays and noisy gradient observations, which proves that the optimal dispatch can be achieved under the assumptions that the nonidentical constant communication delays inflicting on each link are uniformly bounded.
Abstract: The increased complexity of modern energy networks raises the necessity of flexible and reliable methods for smart grid operation. To this end, this article is centered on the economic dispatch problem (EDP) in smart grids, which aims at scheduling generators to meet the total demand at the minimized cost. This article proposes a fully distributed algorithm to address the EDP over directed networks, taking into account communication delays and noisy gradient observations. In particular, the rescaling gradient technique is introduced in the algorithm design, and the implementation of the distributed algorithm resorts only to row-stochastic weight matrices, which allows each generator to locally allocate the weights on the messages received from its in-neighbors. It is proved that the optimal dispatch is achieved under the assumptions that the nonidentical constant communication delays on each link are uniformly bounded and that the noise in each generator's gradient observations has zero mean and bounded variance. Simulations are provided to validate the effectiveness of the presented algorithm.

Journal ArticleDOI
TL;DR: In this paper, an exact asynchronous subgradient-push algorithm (AsySPA) is proposed to solve an additive cost optimization problem over digraphs where each node only has access to a local convex function and updates asynchronously with an arbitrary rate.
Abstract: This paper proposes a novel exact asynchronous subgradient-push algorithm (AsySPA) to solve an additive cost optimization problem over digraphs where each node only has access to a local convex function and updates asynchronously with an arbitrary rate. Specifically, each node of a strongly connected digraph does not wait for updates from other nodes but simply starts a new update within any bounded time interval by using local information available from its in-neighbors. “Exact” means that every node of the AsySPA can asymptotically converge to the same optimal solution, even under different update rates among nodes and bounded communication delays. To address uneven update rates, we design a simple mechanism to adaptively adjust stepsizes per update in each node, which is substantially different from the existing works. Then, we construct a delay-free augmented system to address asynchrony and delays, and study its convergence by proposing a generalized subgradient algorithm, which clearly has its own significance and helps us to explicitly evaluate the convergence rate of the AsySPA. Finally, we demonstrate advantages of the AsySPA in both theory and simulation.

Journal ArticleDOI
TL;DR: The solution of the distributed time-varying convex optimization problem is converted into that of a tracking problem for a class of nonlinear systems and a tracking control scheme is proposed based on the back-stepping strategy.
Abstract: This paper studies the distributed time-varying convex optimization problem for a class of nonlinear multiagent systems. An auxiliary system is first designed to estimate the global optimal state and its high-order derivatives by means of a distributed algorithm. Relying on this auxiliary system, the solution of the distributed time-varying convex optimization problem is converted into that of a tracking problem for a class of nonlinear systems. A tracking control scheme is then proposed based on the back-stepping strategy. Simulations are provided to validate the theoretical results.

Journal ArticleDOI
TL;DR: This paper considers a general class of linearly convergent parallel/distributed algorithms and illustrates how to design quantizers compressing the communicated information to a few bits while still preserving the linear convergence.
Abstract: In distributed optimization and machine learning, multiple nodes coordinate to solve large problems. To do this, the nodes need to compress important algorithm information to bits so that it can be communicated over a digital channel. The communication time of these algorithms follows a complex interplay between a) the algorithm's convergence properties, b) the compression scheme, and c) the transmission rate offered by the digital channel. We explore these relationships for a general class of linearly convergent distributed algorithms. In particular, we illustrate how to design quantizers for these algorithms that compress the communicated information to a few bits while still preserving the linear convergence. Moreover, we characterize the communication time of these algorithms as a function of the available transmission rate. We illustrate our results on learning algorithms using different communication structures, such as decentralized algorithms where a single master coordinates information from many workers and fully distributed algorithms where only neighbours in a communication graph can communicate. We conclude that a co-design of machine learning and communication protocols is mandatory for machine learning over networks to flourish.
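The quantizer-design idea can be sketched in one dimension: transmit the gradient with a few bits over a dynamic range that contracts geometrically in step with the iterates, so the linear rate survives quantization. The bit-width, ranges, and contraction factor below are illustrative choices, not taken from the paper.

```python
import numpy as np

def quantize(v, scale, bits=4):
    """Uniform quantizer with dynamic range [-scale, scale] and 2**bits - 1 levels."""
    levels = 2 ** bits - 1
    return scale * np.round(np.clip(v / scale, -1.0, 1.0) * levels) / levels

x, scale, alpha = 10.0, 10.0, 0.5
for _ in range(60):
    g_hat = quantize(x, scale)   # transmit the gradient of f(x) = 0.5*x^2 in ~4 bits
    x -= alpha * g_hat
    scale *= 0.7                 # contract the quantizer range geometrically
print(f"|x| after 60 quantized steps: {abs(x):.2e}")   # still linearly convergent
```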

Proceedings ArticleDOI
02 Feb 2020
TL;DR: This work proposes distributed algorithms that achieve the same optimal rates as their centralized counterparts (up to constant and logarithmic factors), with an additional optimal cost related to the spectral properties of the network.
Abstract: We study dual-based algorithms for distributed convex optimization problems over networks, where the objective is to minimize a sum $\sum_{i=1}^{m} f_i(z)$ of functions over a network. We provide complexity bounds for four different cases, namely: when each function $f_i$ is strongly convex and smooth, when each function is either strongly convex or smooth, and when it is convex but neither strongly convex nor smooth. Our approach is based on the dual of an appropriately formulated primal problem, which includes a graph that models the communication restrictions. We propose distributed algorithms that achieve the same optimal rates as their centralized counterparts (up to constant and logarithmic factors), with an additional optimal cost related to the spectral properties of the network. Initially, we focus on functions for which we can explicitly minimize the Legendre–Fenchel conjugate, i.e., admissible or dual-friendly functions. Then, we study distributed optimization algorithms for non-dual-friendly functions, as well as a method to improve the dependency on the parameters of the functions involved. Numerical analysis of the proposed algorithms is also provided.

Journal ArticleDOI
TL;DR: A Fog Computing aided Swarm of Drones (FCSD) architecture is proposed, and a fast Proximal Jacobi Alternating Direction Method of Multipliers (ADMM) based distributed algorithm is developed to speed up solving the optimization problem and improve practicality.
Abstract: Swarms of drones, as an intensely significant category of swarm robots, are widely used in various fields, e.g., search and rescue, detection missions, military, etc. Because of the limited computing resources of drones, dealing with computation-intensive tasks locally is difficult. Hence, cloud-based computation offloading is widely adopted; nevertheless, for some latency-sensitive tasks, e.g., object recognition, path planning, etc., the cloud-based manner is inappropriate due to excessive delay. In some harsh environments, e.g., disaster areas, battlefields, etc., there is even no wireless infrastructure to connect the drones with a cloud center. Thus, to solve the problems encountered by cloud-based computation offloading, this paper proposes the Fog Computing aided Swarm of Drones (FCSD) architecture. Considering the uncertainty factors in harsh environments which may threaten the success of FCSD processing tasks, not only the latency model but also the reliability model of the FCSD is constructed to guarantee high reliability of task completion. Moreover, in view of the limited battery life of drones, we formulate the problem as a task allocation problem that minimizes the energy consumption of the FCSD under latency and reliability constraints. Furthermore, to speed up solving the optimization problem and improve practicality, relying on recent advances in distributed convex optimization, we develop a fast Proximal Jacobi Alternating Direction Method of Multipliers (ADMM) based distributed algorithm. Finally, simulation results validate the effectiveness of our proposed scheme.