
Showing papers on "Distributed algorithm published in 2011"


Book
20 Oct 2011
TL;DR: This unified treatment of game theory focuses on finding state-of-the-art solutions to issues surrounding the next generation of wireless and communications networks and covers a wide range of techniques for modeling, designing and analysing communication networks using game theory, as well as state-of-the-art distributed design techniques.
Abstract: This unified treatment of game theory focuses on finding state-of-the-art solutions to issues surrounding the next generation of wireless and communications networks. Future networks will rely on autonomous and distributed architectures to improve the efficiency and flexibility of mobile applications, and game theory provides the ideal framework for designing efficient and robust distributed algorithms. This book enables readers to develop a solid understanding of game theory, its applications and its use as an effective tool for addressing wireless communication and networking problems. The key results and tools of game theory are covered, as are various real-world technologies including 3G networks, wireless LANs, sensor networks, dynamic spectrum access and cognitive networks. The book also covers a wide range of techniques for modeling, designing and analysing communication networks using game theory, as well as state-of-the-art distributed design techniques. This is an ideal resource for communications engineers, researchers, and graduate and undergraduate students.

808 citations


Journal ArticleDOI
TL;DR: It is proved that, for networks of interconnected second-order linear time-invariant systems, one can construct a bank of unknown input observers and use them to detect and isolate faults in the network by exploiting the system structure.

399 citations


Book
23 Feb 2011
TL;DR: The authors follow an incremental approach by first introducing basic abstractions in simple distributed environments, before moving to more sophisticated abstractions and more challenging environments, and each core chapter is devoted to one topic, covering reliable broadcast, shared memory, consensus, and extensions of consensus.
Abstract: In modern computing a program is usually distributed among several processes. The fundamental challenge when developing reliable and secure distributed programs is to support the cooperation of processes required to execute a common task, even when some of these processes fail. Failures may range from crashes to adversarial attacks by malicious processes. Cachin, Guerraoui, and Rodrigues present an introductory description of fundamental distributed programming abstractions together with algorithms to implement them in distributed systems, where processes are subject to crashes and malicious attacks. The authors follow an incremental approach by first introducing basic abstractions in simple distributed environments, before moving to more sophisticated abstractions and more challenging environments. Each core chapter is devoted to one topic, covering reliable broadcast, shared memory, consensus, and extensions of consensus. For every topic, many exercises and their solutions enhance the understanding. This book represents the second edition of "Introduction to Reliable Distributed Programming". Its scope has been extended to include security against malicious actions by non-cooperating processes. This important domain has become widely known under the name "Byzantine fault-tolerance".

346 citations


Journal ArticleDOI
TL;DR: In this article, the authors considered the problem of distributed learning and channel access in a cognitive network with multiple secondary users, where the availability statistics of the channels are initially unknown to the secondary users and are estimated using sensing decisions.
Abstract: The problem of distributed learning and channel access is considered in a cognitive network with multiple secondary users. The availability statistics of the channels are initially unknown to the secondary users and are estimated using sensing decisions. There is no explicit information exchange or prior agreement among the secondary users, and sensing and access decisions are undertaken by them in a completely distributed manner. We propose policies for distributed learning and access which achieve order-optimal cognitive system throughput (number of successful secondary transmissions) under self play, i.e., when implemented at all the secondary users. Equivalently, our policies minimize the sum regret in distributed learning and access, which is the loss in secondary throughput due to learning and distributed access. For the scenario when the number of secondary users is known to the policy, we prove that the total regret is logarithmic in the number of transmission slots. This policy achieves order-optimal regret based on a logarithmic lower bound for regret under any uniformly-good learning and access policy. We then consider the case when the number of secondary users is fixed but unknown, and is estimated at each user through feedback. We propose a policy whose sum regret grows only slightly faster than logarithmic in the number of transmission slots.
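The logarithmic-regret guarantee above rests on upper-confidence-bound index policies. As a rough illustration only (a single-user UCB1 learner with made-up availability probabilities, not the paper's distributed multi-user policy), channel selection under unknown availability statistics might look like:

```python
import math
import random

def ucb1_channel_access(avail_prob, horizon, seed=0):
    """Single-user UCB1 sketch: repeatedly sense the channel with the highest
    upper confidence bound on its estimated availability."""
    rng = random.Random(seed)
    n = len(avail_prob)
    counts = [0] * n      # number of times each channel was sensed
    means = [0.0] * n     # empirical availability estimates
    total_reward = 0
    for t in range(1, horizon + 1):
        if t <= n:        # initialization: sense each channel once
            k = t - 1
        else:
            k = max(range(n),
                    key=lambda i: means[i] + math.sqrt(2 * math.log(t) / counts[i]))
        reward = 1 if rng.random() < avail_prob[k] else 0  # was the channel idle?
        counts[k] += 1
        means[k] += (reward - means[k]) / counts[k]
        total_reward += reward
    return counts, means, total_reward
```

Under such index policies each suboptimal channel is sensed only O(log t) times, which is what makes the cumulative regret logarithmic in the number of transmission slots.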

335 citations


Journal ArticleDOI
TL;DR: A novel fully distributed multiagent-based load restoration algorithm is proposed that can be applied to systems of any size and structure, and is compared against existing algorithms and a particle swarm optimization based algorithm.
Abstract: Once a fault in microgrids has been cleared, it is necessary to restore the unfaulted but out-of-service loads as much as possible in a timely manner. This paper proposes a novel fully distributed multiagent-based load restoration algorithm. According to the algorithm, each agent makes synchronized load restoration decisions according to discovered information. During the information discovery process, agents only communicate with their direct neighbors, and the global information is discovered based on the Average-Consensus Theorem. In this way, the total net power, and the indexes and demands of loads that are ready for restoration, can be obtained. Then the load restoration problem can be modeled and solved using existing algorithms for the 0-1 Knapsack problem. To achieve adaptivity and stability, a distributed algorithm for coefficient setting is proposed and compared against existing algorithms and a particle swarm optimization based algorithm. Theoretically, the proposed load restoration algorithm can be applied to systems of any size and structure. Simulation studies with power systems of different scales demonstrate the effectiveness of the proposed algorithm.
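The two building blocks named above, average consensus for information discovery and a 0-1 knapsack solver for the restoration decision, can be sketched as follows (a simplified illustration with assumed example data, not the paper's full algorithm):

```python
def average_consensus(values, neighbors, rounds=200, eps=0.2):
    """Average-consensus sketch: each agent repeatedly moves toward its direct
    neighbors, x_i <- x_i + eps * sum_j (x_j - x_i); on a connected graph with
    eps < 1/(max degree) all states converge to the global average."""
    x = list(values)
    for _ in range(rounds):
        x = [xi + eps * sum(x[j] - xi for j in neighbors[i])
             for i, xi in enumerate(x)]
    return x

def knapsack_restore(demands, priorities, capacity):
    """0-1 knapsack sketch: restore the subset of loads with maximum total
    priority whose total demand fits within the available net power."""
    best = [0] * (capacity + 1)
    chosen = [[] for _ in range(capacity + 1)]
    for i, (d, p) in enumerate(zip(demands, priorities)):
        for c in range(capacity, d - 1, -1):
            if best[c - d] + p > best[c]:
                best[c] = best[c - d] + p
                chosen[c] = chosen[c - d] + [i]
    return best[capacity], chosen[capacity]
```

Here each agent would run `average_consensus` on its local net-power reading to discover the system total, then solve the knapsack locally; since all agents reach identical inputs, their restoration decisions are synchronized.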

308 citations


Journal ArticleDOI
TL;DR: Simulations testify to the effectiveness of the proposed cooperative sensing approach in multi-hop CR networks, and a decentralized consensus optimization algorithm is derived to attain high sensing performance at a reasonable computational cost and power overhead.
Abstract: In wideband cognitive radio (CR) networks, spectrum sensing is an essential task for enabling dynamic spectrum sharing, but entails several major technical challenges: very high sampling rates required for wideband processing, limited power and computing resources per CR, frequency-selective wireless fading, and interference due to signal leakage from other coexisting CRs. In this paper, a cooperative approach to wideband spectrum sensing is developed to overcome these challenges. To effectively reduce the data acquisition costs, a compressive sampling mechanism is utilized which exploits the signal sparsity induced by network spectrum under-utilization. To collect spatial diversity against wireless fading, multiple CRs collaborate during the sensing task by enforcing consensus among local spectral estimates; accordingly, a decentralized consensus optimization algorithm is derived to attain high sensing performance at a reasonable computational cost and power overhead. To identify spurious spectral estimates due to interfering CRs, the orthogonality between the spectrum of primary users and that of CRs is imposed as constraints for consensus optimization during distributed collaborative sensing. These decentralized techniques are developed both with and without channel knowledge. Simulations testify to the effectiveness of the proposed cooperative sensing approach in multi-hop CR networks.

297 citations


Journal ArticleDOI
TL;DR: The paper establishes a distributed observability condition under which the distributed estimates are consistent and asymptotically normal, and introduces the distributed notion equivalent to the (centralized) Fisher information rate, which is a bound on the mean square error reduction rate of any distributed estimator.
Abstract: This paper considers gossip distributed estimation of a (static) distributed random field (a.k.a., large-scale unknown parameter vector) observed by sparsely interconnected sensors, each of which only observes a small fraction of the field. We consider linear distributed estimators whose structure combines the information flow among sensors (the consensus term resulting from the local gossiping exchange among sensors when they are able to communicate) and the information gathering measured by the sensors (the sensing or innovations term). This leads to mixed time scale algorithms-one time scale associated with the consensus and the other with the innovations. The paper establishes a distributed observability condition (global observability plus mean connectedness) under which the distributed estimates are consistent and asymptotically normal. We introduce the distributed notion equivalent to the (centralized) Fisher information rate, which is a bound on the mean square error reduction rate of any distributed estimator; we show that under the appropriate modeling and structural network communication conditions (gossip protocol) the distributed gossip estimator attains this distributed Fisher information rate, asymptotically achieving the performance of the optimal centralized estimator. Finally, we study the behavior of the distributed gossip estimator when the measurements fade (noise variance grows) with time; in particular, we determine the maximum rate at which the noise variance can grow while the distributed estimator remains consistent, showing that, as long as the centralized estimator is consistent, the distributed estimator is also consistent.
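The mixed time-scale structure can be sketched in a few lines: each sensor combines a consensus term over its neighbors with an innovations term driven by its own (partial) measurement. This is a toy deterministic-topology sketch with illustrative gain sequences; the paper's actual gain conditions and gossip (random link) model are more delicate.

```python
import random

def consensus_plus_innovations(theta, H, neighbors, steps=5000, noise=0.05, seed=1):
    """Mixed time-scale sketch: sensor i sees z_i = H_i . theta + noise and runs
    x_i <- x_i - b_t * sum_j (x_i - x_j) + a_t * H_i * (z_i - H_i . x_i),
    with the consensus gain b_t decaying more slowly than the innovations gain a_t."""
    rng = random.Random(seed)
    n, d = len(H), len(theta)
    x = [[0.0] * d for _ in range(n)]
    for t in range(1, steps + 1):
        a_t = 1.0 / t              # innovations gain
        b_t = 0.3 / t ** 0.6       # consensus gain (slower decay)
        x_new = []
        for i in range(n):
            z = sum(H[i][k] * theta[k] for k in range(d)) + rng.gauss(0, noise)
            resid = z - sum(H[i][k] * x[i][k] for k in range(d))
            x_new.append([x[i][k]
                          - b_t * sum(x[i][k] - x[j][k] for j in neighbors[i])
                          + a_t * H[i][k] * resid
                          for k in range(d)])
        x = x_new
    return x
```

No single sensor observes the whole field, but global observability (the H_i jointly spanning the parameter space) plus connectivity is what makes every local estimate consistent.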

277 citations


Proceedings ArticleDOI
04 Jul 2011
TL;DR: An upper bound on the total profit is provided and an algorithm based on force-directed search is proposed to solve the resource allocation problem for multi-tier applications in cloud computing.
Abstract: With increasing demand for computing and memory, distributed computing systems have attracted a lot of attention. Resource allocation is one of the most important challenges in distributed systems, especially when the clients have Service Level Agreements (SLAs) and the total profit in the system depends on how the system can meet these SLAs. In this paper, an SLA-based resource allocation problem for multi-tier applications in cloud computing is considered. An upper bound on the total profit is provided and an algorithm based on force-directed search is proposed to solve the problem. The processing, memory requirement, and communication resources are considered as three dimensions in which optimization is performed. Simulation results demonstrate the effectiveness of the proposed heuristic algorithm.

233 citations


Journal ArticleDOI
TL;DR: An efficient distributed algorithm is proposed that produces a collision-free schedule for data aggregation in WSNs and it is theoretically proved that the delay of the aggregation schedule generated by the algorithm is at most 16R + Δ - 14 time slots.
Abstract: Data aggregation is a key functionality in wireless sensor networks (WSNs). This paper focuses on data aggregation scheduling problem to minimize the delay (or latency). We propose an efficient distributed algorithm that produces a collision-free schedule for data aggregation in WSNs. We theoretically prove that the delay of the aggregation schedule generated by our algorithm is at most 16R + Δ - 14 time slots. Here, R is the network radius and Δ is the maximum node degree in the communication graph of the original network. Our algorithm significantly improves the previously known best data aggregation algorithm with an upper bound of delay of 24D + 6Δ + 16 time slots, where D is the network diameter (note that D can be as large as 2R). We conduct extensive simulations to study the practical performances of our proposed data aggregation algorithm. Our simulation results corroborate our theoretical results and show that our algorithms perform better in practice. We prove that the overall lower bound of delay for data aggregation under any interference model is max{log n, R}, where n is the network size. We provide an example to show that the lower bound is (approximately) tight under the protocol interference model when rI = r, where rI is the interference range and r is the transmission range. We also derive the lower bound of delay under the protocol interference model when r < rI < 3r and rI ≥ 3r.
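For intuition, a collision-free aggregation schedule on a BFS tree can be produced greedily: children transmit before their parents, and a slot is reused only when no receiver is interfered with. This is a simplified centralized sketch under the protocol interference model with rI = r; the paper's distributed algorithm achieving the 16R + Δ - 14 bound is more refined.

```python
from collections import deque

def aggregation_schedule(adj, sink):
    """Greedy sketch: build a BFS tree rooted at the sink, schedule deeper
    senders first, and give each sender the earliest slot that (a) follows all
    of its children's slots and (b) causes no collision at any receiver."""
    parent, depth = {sink: None}, {sink: 0}
    q = deque([sink])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v], depth[v] = u, depth[u] + 1
                q.append(v)

    def conflicts(u, v):
        # same receiver, a sender doubling as a receiver, or a sender
        # within range of the other transmission's receiver
        return (parent[u] == parent[v] or u == parent[v] or v == parent[u]
                or u in adj[parent[v]] or v in adj[parent[u]])

    slot = {}
    for v in sorted((w for w in parent if parent[w] is not None),
                    key=lambda w: -depth[w]):
        t = max((slot[c] for c in adj[v] if parent.get(c) == v), default=-1) + 1
        while any(s == t and conflicts(u, v) for u, s in slot.items()):
            t += 1
        slot[v] = t
    return slot
```

On a star all leaves share the sink as receiver, so they are forced into Δ distinct slots; on a path each node simply waits for its child, giving a delay on the order of R. Both effects appear additively in bounds of the 16R + Δ - 14 form.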

224 citations


Journal ArticleDOI
TL;DR: The main challenges inherent to the resource allocation process particular to distributed clouds are highlighted and categorized, offering a stepwise view of this process that covers the initial modeling phase through to the optimization phase.
Abstract: In a cloud computing environment, dynamic resource allocation and reallocation are key to accommodating unpredictable demands and, ultimately, contribute to investment return. This article discusses this process in the context of distributed clouds, which are seen as systems where application developers can selectively lease geographically distributed resources. This article highlights and categorizes the main challenges inherent to the resource allocation process particular to distributed clouds, offering a stepwise view of this process that covers the initial modeling phase through to the optimization phase.

214 citations


Journal ArticleDOI
01 Dec 2011
TL;DR: A distributed algorithm is proposed, named the distributed primal-dual subgradient method, to provide approximate saddle points of the Lagrangian function, based on the distributed average consensus algorithms, and bounds on the convergence properties of the proposed method are obtained.
Abstract: This paper studies the problem of optimizing the sum of multiple agents' local convex objective functions, subject to global convex inequality constraints and a convex state constraint set over a network. Through characterizing the primal and dual optimal solutions as the saddle points of the Lagrangian function associated with the problem, we propose a distributed algorithm, named the distributed primal-dual subgradient method, to provide approximate saddle points of the Lagrangian function, based on the distributed average consensus algorithms. Under Slater's condition, we obtain bounds on the convergence properties of the proposed method for a constant step size. Simulation examples are provided to demonstrate the effectiveness of the proposed method.

Journal ArticleDOI
TL;DR: This paper considers the application of gradient-based distributed algorithms on an approximation of the multiuser problem and considers instances where user decisions are coupled, both in the objective and through nonlinear coupling constraints.
Abstract: Traditionally, a multiuser problem is a constrained optimization problem characterized by a set of users, an objective given by a sum of user-specific utility functions, and a collection of linear constraints that couple the user decisions. The users do not share the information about their utilities, but do communicate values of their decision variables. The multiuser problem is to maximize the sum of the user-specific utility functions subject to the coupling constraints, while abiding by the informational requirements of each user. In this paper, we focus on generalizations of convex multiuser optimization problems where the objective and constraints are not separable by user and instead consider instances where user decisions are coupled, both in the objective and through nonlinear coupling constraints. To solve this problem, we consider the application of gradient-based distributed algorithms on an approximation of the multiuser problem. Such an approximation is obtained through a Tikhonov regularization...

Journal ArticleDOI
TL;DR: The proposed distributed detection algorithms are inherently adaptive and can track changes in the active hypothesis, and are applied to the problem of spectrum sensing in cognitive radios.
Abstract: We study the problem of distributed detection, where a set of nodes is required to decide between two hypotheses based on available measurements. We seek fully distributed and adaptive implementations, where all nodes make individual real-time decisions by communicating with their immediate neighbors only, and no fusion center is necessary. The proposed distributed detection algorithms are based on diffusion strategies [C. G. Lopes and A. H. Sayed, "Diffusion Least-Mean Squares Over Adaptive Networks: Formulation and Performance Analysis," IEEE Trans. Signal Process., vol. 56, no. 7, pp. 3122-3136, July 2008; F. S. Cattivelli and A. H. Sayed, "Diffusion LMS Strategies for Distributed Estimation," IEEE Trans. Signal Process., vol. 58, no. 3, pp. 1035-1048, March 2010; F. S. Cattivelli, C. G. Lopes, and A. H. Sayed, "Diffusion Recursive Least-Squares for Distributed Estimation Over Adaptive Networks," IEEE Trans. Signal Process., vol. 56, no. 5, pp. 1865-1877, May 2008] for distributed estimation. Diffusion detection schemes are attractive in the context of wireless and sensor networks due to their scalability, improved robustness to node and link failure as compared to centralized schemes, and their potential to save energy and communication resources. The proposed algorithms are inherently adaptive and can track changes in the active hypothesis. We analyze the performance of the proposed algorithms in terms of their probabilities of detection and false alarm, and provide simulation results comparing with other cooperation schemes, including centralized processing and the case where there is no cooperation. Finally, we apply the proposed algorithms to the problem of spectrum sensing in cognitive radios.
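The diffusion strategies cited above follow an adapt-then-combine (ATC) pattern: each node first adapts with its own data, then averages the intermediate estimates of its neighborhood. A minimal diffusion LMS estimation sketch with uniform combination weights follows (the detection statistics the paper builds on top of this recursion are omitted):

```python
import random

def diffusion_lms(neighbors, w_true, steps=2000, mu=0.05, noise=0.1, seed=3):
    """Adapt-then-combine diffusion LMS sketch: each node runs a local LMS
    update on its own regressor/measurement pair, then averages the
    intermediate estimates over its closed neighborhood."""
    rng = random.Random(seed)
    n, d = len(neighbors), len(w_true)
    w = [[0.0] * d for _ in range(n)]
    for _ in range(steps):
        psi = []
        for i in range(n):
            u = [rng.gauss(0, 1) for _ in range(d)]               # regressor
            meas = sum(uk * wk for uk, wk in zip(u, w_true)) + rng.gauss(0, noise)
            err = meas - sum(uk * wk for uk, wk in zip(u, w[i]))  # innovation
            psi.append([wk + mu * err * uk for wk, uk in zip(w[i], u)])  # adapt
        w = [[sum(psi[j][k] for j in [i] + neighbors[i]) / (1 + len(neighbors[i]))
              for k in range(d)] for i in range(n)]               # combine
    return w
```

Because every node keeps running its own recursion, the scheme stays adaptive: if the underlying model (here `w_true`) changes, the estimates track it without any fusion center.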

Journal ArticleDOI
TL;DR: Distributed clustering schemes are developed in this paper for both deterministic and probabilistic approaches to unsupervised learning that can exhibit improved robustness to initialization compared with their centralized counterparts.
Abstract: Clustering spatially distributed data is well motivated and especially challenging when communication to a central processing unit is discouraged, e.g., due to power constraints. Distributed clustering schemes are developed in this paper for both deterministic and probabilistic approaches to unsupervised learning. The centralized problem is solved in a distributed fashion by recasting it to a set of smaller local clustering problems with consensus constraints on the cluster parameters. The resulting iterative schemes do not exchange local data among nodes, and rely only on single-hop communications. Performance of the novel algorithms is illustrated with simulated tests on synthetic and real sensor data. Surprisingly, these tests reveal that the distributed algorithms can exhibit improved robustness to initialization compared with their centralized counterparts.

Journal ArticleDOI
TL;DR: A macrolevel model is discussed to show the relation between the amount of computation and the total power consumption of multiple peer computers performing Web types of application processes, together with algorithms for allocating a process to a computer so that the deadline constraint is satisfied and the total power consumption is reduced.
Abstract: Information systems are composed of various types of computers interconnected in networks. In addition, information systems are being shifted from the traditional client-server model to the peer-to-peer (P2P) model. P2P systems are scalable and fully distributed without any centralized coordinator. Here, it is getting more significant to discuss how to reduce the total electric power consumption of computers in addition to developing distributed algorithms to minimize the computation time and memory space. In this paper, we do not discuss microlevel models such as the hardware specifications of computers, e.g., low-energy CPUs. We rather discuss a macrolevel model to show the relation between the amount of computation and the total power consumption of multiple peer computers performing Web types of application processes. We also discuss algorithms for allocating a process to a computer so that the deadline constraint is satisfied and the total power consumption is reduced.

Journal ArticleDOI
TL;DR: This work proposes a novel low-complexity and fully distributed IM scheme, called REFIM (REFerence based Interference Management), in the downlink of heterogeneous multi-cell networks, and shows that, as long as interference is managed well, the spectrum sharing policy can outperform the best spectrum splitting policy, where the number of subchannels is optimally divided between macro and femto cells.
Abstract: Due to the increasing demand of capacity in wireless cellular networks, the small cells such as pico and femto cells are becoming more popular to enjoy a spatial reuse gain, and thus cells with different sizes are expected to coexist in a complex manner. In such a heterogeneous environment, the role of interference management (IM) becomes more important, but technical challenges also increase, since the number of cell-edge users, suffering from severe interference from the neighboring cells, will naturally grow. In order to overcome low performance and/or high complexity of existing static and other dynamic IM algorithms, we propose a novel low-complexity and fully distributed IM scheme, called REFIM (REFerence based Interference Management), in the downlink of heterogeneous multi-cell networks. We first formulate a general optimization problem that turns out to require intractable computation complexity for global optimality. To have a practical solution with low computational and signaling overhead, which is crucial for low-cost small-cell solutions, e.g., femto cells, in REFIM, we decompose it into per-BS (base station) problems based on the notion of reference user and reduce feedback overhead over backhauls both temporally and spatially. We evaluate REFIM through extensive simulations under various configurations, including the scenarios from a real deployment of BSs. We show that, compared to the schemes without IM, REFIM can yield more than 40% throughput improvement of cell-edge users while increasing the overall performance by 10~107%. This corresponds to about 95% of the performance of the existing centralized IM algorithm (MC-IIWF), which is known to be near-optimal but hard to implement in practice due to prohibitive complexity. We also show that, as long as interference is managed well, the spectrum sharing policy can outperform the best spectrum splitting policy, where the number of subchannels is optimally divided between macro and femto cells.

Journal ArticleDOI
TL;DR: This paper proposes an adaptive and cross-layer framework for reliable and energy-efficient data collection in WSNs based on the IEEE 802.15.4/ZigBee standards, and proposes a low-complexity distributed algorithm, called ADaptive Access Parameters Tuning (ADAPT), that can effectively meet the application-specific reliability under a wide range of operating conditions.
Abstract: A major concern in wireless sensor networks (WSNs) is energy conservation, since battery-powered sensor nodes are expected to operate autonomously for a long time, e.g., for months or even years. Another critical aspect of WSNs is reliability, which is highly application-dependent. In most cases it is possible to trade off energy consumption and reliability in order to prolong the network lifetime, while satisfying the application requirements. In this paper we propose an adaptive and cross-layer framework for reliable and energy-efficient data collection in WSNs based on the IEEE 802.15.4/ZigBee standards. The framework involves an energy-aware adaptation module that captures the application's reliability requirements, and autonomously configures the MAC layer based on the network topology and the traffic conditions in order to minimize the power consumption. Specifically, we propose a low-complexity distributed algorithm, called ADaptive Access Parameters Tuning (ADAPT), that can effectively meet the application-specific reliability under a wide range of operating conditions, for both single-hop and multi-hop networking scenarios. Our solution can be integrated into WSNs based on IEEE 802.15.4/ZigBee without requiring any modification to the standards. Simulation results show that ADAPT is very energy-efficient, with near-optimal performance.

Journal ArticleDOI
TL;DR: It is proved both analytically and via simulations that the proposed ADMM approach exhibits convergence speed among the best in the literature (classical and optimized solutions), while providing the most powerful resilience to noise.
Abstract: The alternating direction method of multipliers (ADMM) has been recently proposed as a practical and efficient algorithm for distributed computing. We discuss its applicability to the average consensus problem in this paper. By carefully relaxing ADMM augmentation coefficients we are able to analytically investigate its properties, and to propose simple and strict analytical bounds. These provide a clear indication on how to choose system parameters for optimized performance. We prove both analytically and via simulations that the proposed approach exhibits convergence speed among the best in the literature (classical and optimized solutions), while providing the most powerful resilience to noise.
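As context, a standard decentralized-ADMM recursion for average consensus minimizes Σ_i (x_i − a_i)²/2 subject to agreement on every edge; each node keeps a primal state and a dual variable and talks only to its neighbors. This is the textbook form under an assumed uniform penalty ρ; the paper's contribution, relaxing the augmentation coefficients and bounding the resulting convergence speed, is not reproduced here.

```python
def admm_consensus(a, neighbors, rho=0.5, iters=500):
    """Decentralized ADMM sketch for average consensus: node i holds a_i,
    updates its primal state by a local quadratic minimization, then updates
    its dual variable with the disagreement on incident edges."""
    n = len(a)
    x = list(a)
    alpha = [0.0] * n  # per-node dual variables (sum stays zero by symmetry)
    for _ in range(iters):
        x_new = [(a[i] - alpha[i] + rho * sum(x[i] + x[j] for j in neighbors[i]))
                 / (1 + 2 * rho * len(neighbors[i]))
                 for i in range(n)]
        alpha = [alpha[i] + rho * sum(x_new[i] - x_new[j] for j in neighbors[i])
                 for i in range(n)]
        x = x_new
    return x
```

At a fixed point α_i = a_i − x̄ with Σ_i α_i = 0, which forces x̄ to equal the network average; on connected graphs this recursion is known to converge linearly, which is why ADMM-based consensus is typically faster than plain averaging iterations.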

Journal ArticleDOI
TL;DR: This work addresses the problem of designing intelligent intersections, where traffic lights and stop signs are removed, and cars negotiate the intersection through an interaction of centralized and distributed decision making.
Abstract: The automation of driving tasks is of increasing interest for highway traffic management. The emerging technologies of global positioning and intervehicular wireless communications, combined with in-vehicle computation and sensing capabilities, can potentially provide remarkable improvements in safety and efficiency. We address the problem of designing intelligent intersections, where traffic lights and stop signs are removed, and cars negotiate the intersection through an interaction of centralized and distributed decision making. Intelligent intersections are representative of complex hybrid systems that are increasingly of interest, where the challenge is to design tractable distributed algorithms that guarantee safety and provide good performance. Systems of automatically driven vehicles will need an underlying collision avoidance system with provable safety properties to be acceptable. This condition raises several challenges. We need to ensure perpetual collision avoidance so that cars do not get into future problematic positions to avoid an immediate collision. The architecture needs to allow distributed freedom of action to cars yet should guard against worst-case behavior of other cars to guarantee collision avoidance. The algorithms should be tractable both computationally and in information requirements and robust to uncertainties in sensing and communication. To address these challenges, we propose a hybrid architecture with an appropriate interplay between centralized coordination and distributed freedom of action. The approach is built around a core where each car has an infinite horizon contingency plan, which is updated at each sampling instant and distributed by the cars, in a computationally tractable manner. We also define a dynamically changing partial-order relation between cars, which specifies, for each car, a set of cars whose worst-case behaviors it should guard against.
The architecture is hybrid, involving a centralized component that coordinates intersection traversals. We prove the safety and liveness of the overall scheme. Accurately quantifying performance mathematically remains a difficult challenge; therefore, we conduct a simulation study that shows the benefits over stop signs and traffic lights. It is hoped that our effort can provide methodologies for the design of tractable solutions for complex distributed systems that require safety and liveness guarantees.

Journal ArticleDOI
TL;DR: This work considers the problem of maximizing the weighted sum-rate of a wireless cellular network via coordinated scheduling and discrete power control and presents two distributed iterative algorithms which require limited information exchange and data processing at each base station.
Abstract: Inter-cell interference mitigation is a key challenge in the next generation wireless networks which are expected to use an aggressive frequency reuse factor and a high-density base station deployment to improve coverage and spectral efficiency. In this work, we consider the problem of maximizing the weighted sum-rate of a wireless cellular network via coordinated scheduling and discrete power control. We present two distributed iterative algorithms which require limited information exchange and data processing at each base station. Both algorithms provably converge to a solution where no base station can unilaterally modify its status (i.e., transmit power and user selection) to improve the weighted sum-rate of the network. Numerical studies are carried out to assess the performance of the proposed schemes in a realistic system based on the IEEE 802.16m specifications. Simulation results show that the proposed algorithms achieve a significant rate gain over uncoordinated transmission strategies for both cell-edge and inner users.

Proceedings ArticleDOI
24 Jul 2011
TL;DR: In this paper, the incremental cost of each generation unit is selected as the consensus variable to solve the conventional centralized control problem in a distributed manner; row-stochastic matrices are used to indicate the different topologies of distribution systems and their configuration properties, such as convergence speeds.
Abstract: In a next generation power system, effective distributed control algorithms could be embedded in distributed controllers to properly allocate electrical power among connected buses autonomously. In this paper, we present a novel approach to solve the economic dispatch problem. By selecting the incremental cost of each generation unit as the consensus variable, the algorithm is able to solve the conventional centralized control problem in a distributed manner. Row-stochastic matrices are used to indicate the different topologies of distribution systems and their configuration properties, such as convergence speeds. The simulation results of several case studies are provided to verify the algorithm.
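The idea of using the incremental cost as the consensus variable can be sketched as follows. Assume quadratic costs C_i(P) = a_i P² + b_i P, so each unit maps a common incremental cost λ to an output P_i = (λ − b_i)/(2a_i). This simplified sketch also assumes a designated leader that observes the total demand mismatch; the paper's fully distributed treatment and its row-stochastic-matrix analysis are not reproduced.

```python
def consensus_dispatch(a, b, demand, neighbors, eps=0.02, iters=3000):
    """Incremental-cost consensus sketch: generators agree on a common
    incremental cost lambda via row-stochastic averaging, while a leader
    (node 0, assumed to observe total mismatch) nudges lambda toward
    balancing generation and demand."""
    n = len(a)
    lam = list(b)  # start each unit at its no-load incremental cost
    for _ in range(iters):
        # row-stochastic (uniform) averaging over the closed neighborhood
        lam_new = [sum(lam[j] for j in [i] + neighbors[i]) / (1 + len(neighbors[i]))
                   for i in range(n)]
        power = [(lam_new[i] - b[i]) / (2 * a[i]) for i in range(n)]
        mismatch = demand - sum(power)
        lam_new[0] += eps * mismatch       # leader's correction term
        lam = lam_new
    power = [(lam[i] - b[i]) / (2 * a[i]) for i in range(n)]
    return lam, power
```

At convergence all units share one incremental cost and total generation meets demand, which is exactly the equal-incremental-cost optimality condition of economic dispatch.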

Journal ArticleDOI
TL;DR: A spline-based approach to field estimation, which relies on a basis expansion model of the field of interest, and induces a group-Lasso estimator for the coefficients of the thin-plate spline expansions per basis.
Abstract: The unceasing demand for continuous situational awareness calls for innovative and large-scale signal processing algorithms, complemented by collaborative and adaptive sensing platforms to accomplish the objectives of layered sensing and control. Towards this goal, the present paper develops a spline-based approach to field estimation, which relies on a basis expansion model of the field of interest. The model entails known bases, weighted by generic functions estimated from the field's noisy samples. A novel field estimator is developed based on a regularized variational least-squares (LS) criterion that yields finite-dimensional (function) estimates spanned by thin-plate splines. Robustness considerations motivate well the adoption of an overcomplete set of (possibly overlapping) basis functions, while a sparsifying regularizer augmenting the LS cost endows the estimator with the ability to select a few of these bases that “better” explain the data. This parsimonious field representation becomes possible, because the sparsity-aware spline-based method of this paper induces a group-Lasso estimator for the coefficients of the thin-plate spline expansions per basis. A distributed algorithm is also developed to obtain the group-Lasso estimator using a network of wireless sensors, or, using multiple processors to balance the load of a single computational unit. The novel spline-based approach is motivated by a spectrum cartography application, in which a set of sensing cognitive radios collaborate to estimate the distribution of RF power in space and frequency. Computer simulations and tests on real data corroborate that the estimated power spectrum density atlas yields the desired RF state awareness, since the maps reveal spatial locations where idle frequency bands can be reused for transmission, even when fading and shadowing effects are pronounced.

Journal ArticleDOI
TL;DR: In this paper, the authors study a projected multi-agent subgradient algorithm under state-dependent communication and show that the algorithm converges to the same optimal solution with probability one under different assumptions on the local constraint sets and the stepsize sequence.
Abstract: We study distributed algorithms for solving global optimization problems in which the objective function is the sum of local objective functions of agents and the constraint set is given by the intersection of local constraint sets of agents. We assume that each agent knows only his own local objective function and constraint set, and exchanges information with the other agents over a randomly varying network topology to update his information state. We assume a state-dependent communication model over this topology: communication is Markovian with respect to the states of the agents and the probability with which the links are available depends on the states of the agents. We study a projected multi-agent subgradient algorithm under state-dependent communication. The state-dependence of the communication introduces significant challenges and couples the study of information exchange with the analysis of subgradient steps and projection errors. We first show that the multi-agent subgradient algorithm when used with a constant stepsize may result in the agent estimates to diverge with probability one. Under some assumptions on the stepsize sequence, we provide convergence rate bounds on a “disagreement metric” between the agent estimates. Our bounds are time-nonhomogeneous in the sense that they depend on the initial starting time. Despite this, we show that agent estimates reach an almost sure consensus and converge to the same optimal solution of the global optimization problem with probability one under different assumptions on the local constraint sets and the stepsize sequence.
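The projected multi-agent subgradient update combines three ingredients per round: averaging the estimates exchanged with neighbors, a subgradient step on the local objective, and a projection onto the local constraint set. A minimal deterministic sketch of one such update, on a two-agent toy problem that is an assumption of this example rather than the paper's setting:

```python
import numpy as np

def projected_subgradient_step(x_i, neighbor_xs, subgrad, project, step):
    """One round for a single agent: average own and neighbors' estimates,
    take a subgradient step on the local objective, then project the
    result back onto the local constraint set."""
    avg = np.mean(np.vstack([x_i] + neighbor_xs), axis=0)
    return project(avg - step * subgrad(avg))

# Toy problem (illustrative): two agents minimize f1(x) = (x - 1)^2 and
# f2(x) = (x + 1)^2 over the shared constraint set [-0.5, 0.5]; the
# minimizer of f1 + f2 is x* = 0.
project = lambda z: np.clip(z, -0.5, 0.5)
grads = [lambda z: 2 * (z - 1), lambda z: 2 * (z + 1)]
xs = [np.array([0.5]), np.array([-0.5])]
for k in range(1, 200):  # diminishing stepsize 1/k
    xs = [projected_subgradient_step(xs[i], [xs[1 - i]], grads[i],
                                     project, step=1.0 / k)
          for i in range(2)]
```

With the diminishing stepsize both agents' estimates approach the common minimizer x* = 0, mirroring the consensus-plus-optimality behavior the abstract describes; a constant stepsize would not give this guarantee.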

Journal ArticleDOI
TL;DR: This work solves the problem for generic connected network topologies with asymmetric random link failures via a novel distributed, decentralized algorithm, whose primal updates rely on a novel Gauss-Seidel-type randomized algorithm running at a fast time scale.
Abstract: We study distributed optimization in networked systems, where nodes cooperate to find the optimal quantity of common interest, x = x*. The objective function of the corresponding optimization problem is the sum of private (known only by a node), convex, nodes' objectives and each node imposes a private convex constraint on the allowed values of x. We solve this problem for generic connected network topologies with asymmetric random link failures with a novel distributed, decentralized algorithm. We refer to this algorithm as AL-G (augmented Lagrangian gossiping), and to its variants as AL-MG (augmented Lagrangian multi neighbor gossiping) and AL-BG (augmented Lagrangian broadcast gossiping). The AL-G algorithm is based on the augmented Lagrangian dual function. Dual variables are updated by the standard method of multipliers, at a slow time scale. To update the primal variables, we propose a novel, Gauss-Seidel type, randomized algorithm, at a fast time scale. AL-G uses unidirectional gossip communication, only between immediate neighbors in the network, and is resilient to random link failures. For networks with reliable communication (i.e., no failures), the simplified AL-BG algorithm reduces communication, computation and data storage cost. We prove convergence for all proposed algorithms and demonstrate by simulations their effectiveness on two applications: l1-regularized logistic regression for classification and cooperative spectrum sensing for cognitive radio networks.
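The communication layer underlying this family of algorithms is gossip between immediate neighbors. The sketch below illustrates only a standard randomized pairwise-gossip averaging primitive, not the augmented Lagrangian machinery of AL-G itself; the function name and the complete-graph toy network are assumptions of this example:

```python
import random

def gossip_average(values, rounds, seed=0):
    """Randomized pairwise gossip: in each round one random pair of
    nodes exchanges values and both keep the pair average; every value
    converges to the network-wide mean (the sum is preserved)."""
    rng = random.Random(seed)
    x = list(values)
    for _ in range(rounds):
        i, j = rng.sample(range(len(x)), 2)
        x[i] = x[j] = (x[i] + x[j]) / 2.0
    return x

# Four nodes on a complete graph; the true average is 2.5.
x = gossip_average([1.0, 2.0, 3.0, 4.0], rounds=500)
```

Each pairwise exchange reduces the disagreement among nodes while preserving the sum, which is the property gossip-based distributed optimizers exploit to agree on shared quantities without a fusion center.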

Journal ArticleDOI
TL;DR: Based on how cameras share estimates and fuse information, this article classifies these trackers as distributed, decentralized, and centralized algorithms, and highlights the challenges to be addressed in the design of decentralized and distributed tracking algorithms.
Abstract: We discussed emerging multicamera tracking algorithms that find their roots in signal processing, wireless sensor networks, and computer vision. Based on how cameras share estimates and fuse information, we classified these trackers as distributed, decentralized, and centralized algorithms. We also highlighted the challenges to be addressed in the design of decentralized and distributed tracking algorithms. In particular, we showed how the constraints derived from the topology of the networks and the nature of the task have favored so far decentralized architectures with multiple local fusion centers. Because of the availability of fewer fusion centers compared to distributed algorithms, decentralized algorithms can share larger amounts of data (e.g., occupancy maps) and can back-project estimates among views and fusion centers to validate results. Distributed tracking uses algorithms that can operate with smaller amounts of data at any particular node and obtain state estimates through iterative fusion. Despite recent advances, there are important issues to be addressed to achieve efficient multitarget multicamera tracking. Current algorithms either assume the track-to-measurement association information to be available for the tracker or operate on a small (known) number of targets. Algorithms performing track-to-measurement association for a time-varying number of targets with higher accuracy usually incur much higher costs, whose reduction is an important open problem to be addressed in multicamera networks.

Journal ArticleDOI
TL;DR: Based on belief propagation, a fully distributed algorithm with low overhead that achieves scalable synchronization for wireless sensor networks is proposed; simulation results show that it attains better accuracy than consensus algorithms.
Abstract: In this paper, we study the global clock synchronization problem for wireless sensor networks. Based on belief propagation, we propose a fully distributed algorithm which has low overhead and can achieve scalable synchronization. It is also shown analytically that the proposed algorithm always converges for strongly connected networks. Simulation results show that the proposed algorithm achieves better accuracy than consensus algorithms. Furthermore, the belief obtained at each sensor provides an accurate prediction on the algorithm's performance in terms of MSE.
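As a rough illustration of distributed clock synchronization from pairwise offset measurements, here is a simplified consensus-style stand-in, not the paper's belief-propagation algorithm; the toy graph, noiseless measurements, and the choice of node 0 as reference are all assumptions of this sketch:

```python
import numpy as np

def sync_clocks(n, meas, iters=200):
    """Iterative clock-offset estimation on a graph: meas maps an edge
    (i, j) to a (noiseless, for this toy) measurement of t_j - t_i.
    Each node repeatedly re-estimates its clock as the average of its
    neighbors' estimates corrected by the measured offsets; node 0 is
    held fixed as the reference clock."""
    est = np.zeros(n)
    nbrs = {i: [] for i in range(n)}
    for (i, j), m in meas.items():
        nbrs[i].append((j, -m))  # t_i = t_j - m
        nbrs[j].append((i, m))   # t_j = t_i + m
    for _ in range(iters):
        new = est.copy()
        for i in range(1, n):    # node 0 stays at 0
            new[i] = np.mean([est[j] + d for j, d in nbrs[i]])
        est = new
    return est

# True clocks t = [0, 1, 3]; three consistent pairwise measurements.
est = sync_clocks(3, {(0, 1): 1.0, (1, 2): 2.0, (0, 2): 3.0})
```

Belief propagation generalizes this idea by passing full beliefs (estimates plus uncertainties) rather than point values, which is what gives each sensor the MSE prediction mentioned in the abstract.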

Proceedings ArticleDOI
06 Jun 2011
TL;DR: It is shown that in the absence of a good initial upper bound on the size of the network, eventual consensus is as hard as computing deterministic functions of the input, e.g., the minimum or maximum of inputs to the nodes.
Abstract: We study several variants of coordinated consensus in dynamic networks. We assume a synchronous model, where the communication graph for each round is chosen by a worst-case adversary. The network topology is always connected, but can change completely from one round to the next. The model captures mobile and wireless networks, where communication can be unpredictable. In this setting we study the fundamental problems of eventual, simultaneous, and Δ-coordinated consensus, as well as their relationship to other distributed problems, such as determining the size of the network. We show that in the absence of a good initial upper bound on the size of the network, eventual consensus is as hard as computing deterministic functions of the input, e.g., the minimum or maximum of inputs to the nodes. We also give an algorithm for computing such functions that is optimal in every execution. Next, we show that simultaneous consensus can never be achieved in less than n - 1 rounds in any execution, where n is the size of the network; consequently, simultaneous consensus is as hard as computing an upper bound on the number of nodes in the network. For Δ-coordinated consensus, we show that if the ratio between nodes with input 0 and input 1 is bounded away from 1, it is possible to decide in time n - Θ(√(nΔ)), where Δ bounds the time from the first decision until all nodes decide. If the dynamic graph has diameter D, the time to decide is min{O(nD/Δ), n - Ω(nΔ/D)}, even if D is not known in advance. Finally, we show that (a) there is a dynamic graph such that for every input, no node can decide before time n - O(Δ^0.28 n^0.72); and (b) for any diameter D = O(Δ), there is an execution with diameter D where no node can decide before time Ω(nD/Δ). To our knowledge, our work constitutes the first study of Δ-coordinated consensus in general graphs.
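The connection between consensus and computing the minimum of the inputs can be pictured with a simple flooding scheme over a dynamic graph: as long as every round's graph is connected, n - 1 rounds of forwarding the smallest value seen suffice. A sketch, where the round-by-round edge sets stand in for an adversary's choices (the specific schedule is an assumption of this example):

```python
def min_consensus(inputs, round_edges):
    """Flooding the minimum over a dynamic network: in each round every
    node exchanges its current estimate over that round's edges and
    keeps the smallest value seen.  If each round's graph is connected,
    n - 1 rounds suffice for all nodes to hold the global minimum."""
    est = list(inputs)
    for edges in round_edges:
        new = list(est)
        for i, j in edges:
            new[i] = min(new[i], est[j])
            new[j] = min(new[j], est[i])
        est = new
    return est

# Four nodes; the communication graph changes every round but is
# always connected, as in the adversarial model described above.
rounds = [[(0, 1), (2, 3), (1, 2)],
          [(0, 3), (1, 3), (0, 2)],
          [(0, 1), (1, 2), (2, 3)]]
est = min_consensus([7, 3, 9, 5], rounds)
```

The hard part in the dynamic setting is not spreading the value but knowing when to stop: without a good bound on n, no node can tell whether a smaller input is still hidden elsewhere, which is exactly why eventual consensus is as hard as computing such functions.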

Journal ArticleDOI
TL;DR: In this paper, the authors present a long-term dynamic multi-objective planning model for distribution network expansion with distributed energy options. The model optimizes two objectives, namely costs and emissions, and determines the optimal schemes of sizing, placement and, especially, the dynamics (i.e., timing) of investments in distributed generation units and network reinforcements over the planning period.

Journal ArticleDOI
TL;DR: This work studies the performance of the consensus-based multi-agent distributed subgradient method and shows how it depends on the probability distribution of the random graph.
Abstract: We investigate collaborative optimization of an objective function expressed as a sum of local convex functions, when the agents make decisions in a distributed manner using local information, while the communication topology used to exchange messages and information is modeled by a graph-valued random process, assumed independent and identically distributed. Specifically, we study the performance of the consensus-based multi-agent distributed subgradient method and show how it depends on the probability distribution of the random graph. For the case of a constant stepsize, we first give an upper bound on the difference between the objective function, evaluated at the agents' estimates of the optimal decision vector, and the optimal value. Second, for a particular class of convex functions, we give an upper bound on the distances between the agents' estimates of the optimal decision vector and the minimizer. In addition, we provide the rate of convergence to zero of the time varying component of the aforementioned upper bound. The addressed metrics are evaluated via their expected values. As an application, we show how the distributed optimization algorithm can be used to perform collaborative system identification and provide numerical experiments under the randomized and broadcast gossip protocols.

Journal ArticleDOI
TL;DR: Adaptive and distributed algorithms for motion coordination of a group of m vehicles that must service demands whose time of arrival, spatial location, and service requirement are stochastic, with the objective of minimizing the average time demands spend in the system.
Abstract: In this paper, we present adaptive and distributed algorithms for motion coordination of a group of m vehicles. The vehicles must service demands whose time of arrival, spatial location, and service requirement are stochastic; the objective is to minimize the average time demands spend in the system. The general problem is known as the m-vehicle Dynamic Traveling Repairman Problem (m-DTRP). The best previously known control algorithms rely on centralized task assignment and are not robust against changes in the environment. In this paper, we first devise new control policies for the 1-DTRP that: i) are provably optimal both in light-load conditions (i.e., when the arrival rate for the demands is small) and in heavy-load conditions (i.e., when the arrival rate for the demands is large), and ii) are adaptive, in particular, they are robust against changes in load conditions. Then, we show that specific partitioning policies, whereby the environment is partitioned among the vehicles and each vehicle follows a certain set of rules within its own region, are optimal in heavy-load conditions. Building upon the previous results, we finally design control policies for the m-DTRP that i) are adaptive and distributed, and ii) have strong performance guarantees in heavy-load conditions and stabilize the system in any load condition.
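A partitioning policy of the kind shown optimal in heavy load can be sketched as a Voronoi-style assignment: each vehicle owns the part of the environment closest to its reference point and handles only the demands that arrive there. The depot locations and demand points below are illustrative assumptions, not data from the paper:

```python
import math

def assign_demands(demands, depots):
    """Partitioning-policy sketch: each vehicle owns the Voronoi region
    of its depot, and every arriving demand is handled by the vehicle
    whose region contains it (i.e., the nearest depot)."""
    def nearest(p):
        return min(range(len(depots)),
                   key=lambda k: math.dist(p, depots[k]))
    regions = {k: [] for k in range(len(depots))}
    for d in demands:
        regions[nearest(d)].append(d)
    return regions

depots = [(0.0, 0.0), (10.0, 0.0)]
regions = assign_demands([(1.0, 1.0), (9.0, 2.0), (4.0, 0.0)], depots)
```

Within its own region each vehicle then runs a single-vehicle policy, which is what makes the scheme distributed: no centralized task assignment is needed once the partition is fixed.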