
Showing papers on "Distributed algorithm published in 2006"


Journal ArticleDOI
TL;DR: A novel scheme is proposed that first selects the best relay from a set of M available relays and then uses this "best" relay for cooperation between the source and the destination, achieving the same diversity-multiplexing tradeoff as more complex protocols.
Abstract: Cooperative diversity has been recently proposed as a way to form virtual antenna arrays that provide dramatic gains in slow fading wireless environments. However, most of the proposed solutions require distributed space-time coding algorithms, the careful design of which is left for future investigation if there is more than one cooperative relay. We propose a novel scheme that alleviates these problems and provides diversity gains on the order of the number of relays in the network. Our scheme first selects the best relay from a set of M available relays and then uses this "best" relay for cooperation between the source and the destination. We develop and analyze a distributed method to select the best relay that requires no topology information and is based on local measurements of the instantaneous channel conditions. This method also requires no explicit communication among the relays. The success (or failure) of selecting the best available path depends on the statistics of the wireless channel, and a methodology to evaluate performance for any kind of wireless channel statistics is provided. Information theoretic analysis of outage probability shows that our scheme achieves the same diversity-multiplexing tradeoff as achieved by more complex protocols, where coordination and distributed space-time coding for M relay nodes is required, such as those proposed by Laneman and Wornell (2003). The simplicity of the technique allows for immediate implementation in existing radio hardware, and its adoption could provide improved flexibility, reliability, and efficiency in future 4G wireless systems.

3,153 citations
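To make the distributed selection step concrete, here is a minimal Python sketch (not the paper's exact protocol; the channel values, the min-of-two-hops metric, and the timer rule are illustrative assumptions): each relay computes a local metric from its own source-relay and relay-destination channel estimates and starts a timer inversely proportional to that metric, so the relay with the best instantaneous path announces itself first, with no topology information or inter-relay messaging.

```python
import random

def relay_metric(h_sr, h_rd):
    """Bottleneck of the two hops (the harmonic mean is another common choice)."""
    return min(abs(h_sr) ** 2, abs(h_rd) ** 2)

def select_best_relay(channels, timer_scale=1.0):
    """Each relay sets a timer inversely proportional to its local metric; the
    relay whose timer expires first claims the role of 'best' relay."""
    timers = {r: timer_scale / relay_metric(h_sr, h_rd)
              for r, (h_sr, h_rd) in channels.items()}
    return min(timers, key=timers.get)

# Hypothetical instantaneous Rayleigh-fading channel gains for M = 4 relays.
channels = {r: (complex(random.gauss(0, 1), random.gauss(0, 1)),
                complex(random.gauss(0, 1), random.gauss(0, 1)))
            for r in range(4)}
print("selected relay:", select_best_relay(channels))
```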


Journal ArticleDOI
TL;DR: This work analyzes the averaging problem under the gossip constraint for an arbitrary network graph, and finds that the averaging time of a gossip algorithm depends on the second largest eigenvalue of a doubly stochastic matrix characterizing the algorithm.
Abstract: Motivated by applications to sensor, peer-to-peer, and ad hoc networks, we study distributed algorithms, also known as gossip algorithms, for exchanging information and for computing in an arbitrarily connected network of nodes. The topology of such networks changes continuously as new nodes join and old nodes leave the network. Algorithms for such networks need to be robust against changes in topology. Additionally, nodes in sensor networks operate under limited computational, communication, and energy resources. These constraints have motivated the design of "gossip" algorithms: schemes which distribute the computational burden and in which a node communicates with a randomly chosen neighbor. We analyze the averaging problem under the gossip constraint for an arbitrary network graph, and find that the averaging time of a gossip algorithm depends on the second largest eigenvalue of a doubly stochastic matrix characterizing the algorithm. Designing the fastest gossip algorithm corresponds to minimizing this eigenvalue, which is a semidefinite program (SDP). In general, SDPs cannot be solved in a distributed fashion; however, exploiting problem structure, we propose a distributed subgradient method that solves the optimization problem over the network. The relation of averaging time to the second largest eigenvalue naturally relates it to the mixing time of a random walk with transition probabilities derived from the gossip algorithm. We use this connection to study the performance and scaling of gossip algorithms on two popular networks: Wireless Sensor Networks, which are modeled as Geometric Random Graphs, and the Internet graph under the so-called Preferential Connectivity (PC) model.

2,634 citations
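As a minimal sketch of the gossip primitive analyzed above (the eigenvalue and SDP optimization of the algorithm are not reproduced), the following Python snippet runs randomized pairwise averaging on a hypothetical four-node ring; every node's value converges to the network-wide average.

```python
import random

def gossip_average(values, neighbors, rounds=5000, seed=0):
    """Randomized pairwise gossip: at each step a random node averages its
    value with a randomly chosen neighbor; all values converge to the mean."""
    rng = random.Random(seed)
    x = list(values)
    for _ in range(rounds):
        i = rng.randrange(len(x))
        j = rng.choice(neighbors[i])
        x[i] = x[j] = (x[i] + x[j]) / 2.0
    return x

# Hypothetical 4-node ring; the true average is 2.5.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(gossip_average([1.0, 2.0, 3.0, 4.0], neighbors))
```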


Journal ArticleDOI
TL;DR: This tutorial paper first reviews the basics of convexity, Lagrange duality, the distributed subgradient method, Jacobi and Gauss-Seidel iterations, and the implications of different time scales of variable updates, and then introduces primal, dual, indirect, partial, and hierarchical decompositions, focusing on network utility maximization problem formulations.
Abstract: A systematic understanding of the decomposability structures in network utility maximization is key to both resource allocation and functionality allocation. It helps us obtain the most appropriate distributed algorithm for a given network resource allocation problem, and quantifies the comparison across architectural alternatives of modularized network design. Decomposition theory naturally provides the mathematical language to build an analytic foundation for the design of modularized and distributed control of networks. In this tutorial paper, we first review the basics of convexity, Lagrange duality, the distributed subgradient method, Jacobi and Gauss-Seidel iterations, and the implications of different time scales of variable updates. Then, we introduce primal, dual, indirect, partial, and hierarchical decompositions, focusing on network utility maximization problem formulations and the meanings of primal and dual decompositions in terms of network architectures. Finally, we present recent examples on: systematic search for alternative decompositions; decoupling techniques for coupled objective functions; and decoupling techniques for coupled constraint sets that are not readily decomposable.

1,725 citations
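To illustrate the flavor of a dual decomposition (a toy sketch under assumed conditions: a single bottleneck link, logarithmic utilities, and a constant step size, not one of the paper's worked examples), each source below solves its own utility subproblem given a link price, while the link adjusts the price with a subgradient step based only on its local load.

```python
def dual_decomposition_num(capacity, n_sources, step=0.1, iters=500):
    """Toy NUM: maximize sum_s log(x_s) subject to sum_s x_s <= capacity,
    solved by dual decomposition with a subgradient price update."""
    price = 1.0
    for _ in range(iters):
        # Source subproblem: argmax_x log(x) - price * x  =>  x = 1 / price.
        rates = [1.0 / price] * n_sources
        # Link (dual) update: raise the price if demand exceeds capacity.
        price = max(1e-6, price + step * (sum(rates) - capacity))
    return rates, price

rates, price = dual_decomposition_num(capacity=10.0, n_sources=5)
print(rates, price)   # each rate approaches capacity / n_sources = 2.0
```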


Journal ArticleDOI
TL;DR: A new distributed energy-efficient clustering scheme for heterogeneous wireless sensor networks, called DEEC, is proposed and evaluated; it achieves longer lifetime and delivers more effective messages than current important clustering protocols in heterogeneous environments.

1,131 citations


Journal ArticleDOI
TL;DR: It is shown that a clean-slate optimization-based approach to the multihop resource allocation problem naturally results in a "loosely coupled" cross-layer solution, and how to use imperfect scheduling in the cross-layer framework is demonstrated.
Abstract: This tutorial paper overviews recent developments in optimization-based approaches for resource allocation problems in wireless systems. We begin by overviewing important results in the area of opportunistic (channel-aware) scheduling for cellular (single-hop) networks, where easily implementable myopic policies are shown to optimize system performance. We then describe key lessons learned and the main obstacles in extending the work to general resource allocation problems for multihop wireless networks. Towards this end, we show that a clean-slate optimization-based approach to the multihop resource allocation problem naturally results in a "loosely coupled" cross-layer solution. That is, the algorithms obtained map to different layers [transport, network, and medium access control/physical (MAC/PHY)] of the protocol stack, and are coupled through a limited amount of information being passed back and forth. It turns out that the optimal scheduling component at the MAC layer is very complex, and thus needs simpler (potentially imperfect) distributed solutions. We demonstrate how to use imperfect scheduling in the cross-layer framework and describe recently developed distributed algorithms along these lines. We conclude by describing a set of open research problems

899 citations


Journal ArticleDOI
10 Jan 2006
TL;DR: This paper explains what network coding does and how it does it, discusses the implications of theoretical results on network coding for realistic settings, and shows how network coding can be used in practice.
Abstract: Network coding is a new research area that may have interesting applications in practical networking systems. With network coding, intermediate nodes may send out packets that are linear combinations of previously received information. There are two main benefits of this approach: potential throughput improvements and a high degree of robustness. Robustness translates into loss resilience and facilitates the design of simple distributed algorithms that perform well, even if decisions are based only on partial information. This paper is an instant primer on network coding: we explain what network coding does and how it does it. We also discuss the implications of theoretical results on network coding for realistic settings and show how network coding can be used in practice

858 citations
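A minimal sketch of the linear-combination idea over GF(2) (the packets and topology are hypothetical, and practical schemes use random linear combinations over larger fields): an intermediate node forwards the XOR of two packets, and any receiver that already holds one of the originals recovers the other from the single coded transmission.

```python
def xor_packets(a, b):
    """Bitwise XOR of two equal-length packets, i.e. their sum over GF(2)."""
    return bytes(x ^ y for x, y in zip(a, b))

# Butterfly-style illustration with hypothetical packets: the intermediate
# node sends one coded packet instead of relaying a and b separately.
a = b"hello wrld"
b = b"net coding"
coded = xor_packets(a, b)            # transmitted on the bottleneck link

# A receiver that overheard `a` recovers `b`, and vice versa.
assert xor_packets(coded, a) == b
assert xor_packets(coded, b) == a
print("decoded:", xor_packets(coded, a))
```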


Journal ArticleDOI
01 Sep 2006
TL;DR: An asynchronous distributed algorithm for updating power levels and prices is presented; by relating this algorithm to myopic best response updates in a fictitious game, convergence is characterized using supermodular game theory.
Abstract: We consider a distributed power control scheme for wireless ad hoc networks, in which each user announces a price that reflects compensation paid by other users for their interference. We present an asynchronous distributed algorithm for updating power levels and prices. By relating this algorithm to myopic best response updates in a fictitious game, we are able to characterize convergence using supermodular game theory. Extensions of this algorithm to a multichannel network are also presented, in which users can allocate their power across multiple frequency bands.

782 citations


Journal ArticleDOI
TL;DR: A dynamic control strategy for minimizing energy expenditure in a time-varying wireless network with adaptive transmission rates and a similar algorithm that solves the related problem of maximizing network throughput subject to peak and average power constraints are developed.
Abstract: We develop a dynamic control strategy for minimizing energy expenditure in a time-varying wireless network with adaptive transmission rates. The algorithm operates without knowledge of traffic rates or channel statistics, and yields average power that is arbitrarily close to the minimum possible value achieved by an algorithm optimized with complete knowledge of future events. Proximity to this optimal solution is shown to be inversely proportional to network delay. We then present a similar algorithm that solves the related problem of maximizing network throughput subject to peak and average power constraints. The techniques used in this paper are novel and establish a foundation for stochastic network optimization

559 citations


Journal ArticleDOI
TL;DR: A general framework for the spectrum access problem under several definitions of overall system utility is defined; the global optimization problem is shown to be NP-hard, and a general approximation methodology through vertex labeling is provided.
Abstract: The Open Spectrum approach to spectrum access can achieve near-optimal utilization by allowing devices to sense and utilize available spectrum opportunistically. However, a naive distributed spectrum assignment can lead to significant interference between devices. In this paper, we define a general framework that formalizes the spectrum access problem for several definitions of overall system utility. By reducing the allocation problem to a variant of the graph coloring problem, we show that the global optimization problem is NP-hard, and provide a general approximation methodology through vertex labeling. We examine both a centralized strategy, where a central server calculates an allocation assignment based on global knowledge, and a distributed approach, where devices collaborate to negotiate local channel assignments towards global optimization. Our experimental results show that our allocation algorithms can dramatically reduce interference and improve throughput (as much as 12-fold). Further simulations show that our distributed algorithms generate allocation assignments similar in quality to our centralized algorithms using global knowledge, while incurring substantially less computational complexity in the process.

547 citations
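The vertex-labeling approximation itself is not reproduced here; the sketch below only shows the underlying graph-coloring view of the problem on a hypothetical conflict graph, greedily giving each device the lowest-indexed channel not used by an interfering neighbor.

```python
def greedy_channel_assignment(conflicts, n_channels):
    """Greedy coloring of a conflict graph: process devices in order and give
    each the lowest channel not already taken by a conflicting neighbor.
    Returns None for a device if no interference-free channel remains."""
    assignment = {}
    for device in sorted(conflicts):
        used = {assignment[n] for n in conflicts[device] if n in assignment}
        free = [c for c in range(n_channels) if c not in used]
        assignment[device] = free[0] if free else None
    return assignment

# Hypothetical conflict graph: an edge means two devices interfere.
conflicts = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(greedy_channel_assignment(conflicts, n_channels=3))
# e.g. {0: 0, 1: 1, 2: 2, 3: 0}
```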


Journal ArticleDOI
TL;DR: The notion of a connected sensor cover is developed, and a centralized approximation algorithm that constructs a topology involving a near-optimal connected sensor cover is designed; the size of the constructed topology is proven to be within an O(log n) factor of the optimal size.
Abstract: Spatial query execution is an essential functionality of a sensor network, where a query gathers sensor data within a specific geographic region. Redundancy within a sensor network can be exploited to reduce the communication cost incurred in execution of such queries. Any reduction in communication cost would result in an efficient use of the battery energy, which is very limited in sensors. One approach to reduce the communication cost of a query is to self-organize the network, in response to a query, into a topology that involves only a small subset of the sensors sufficient to process the query. The query is then executed using only the sensors in the constructed topology. The self-organization technique is beneficial for queries that run sufficiently long to amortize the communication cost incurred in self-organization. In this paper, we design and analyze algorithms for such self-organization of a sensor network to reduce energy consumption. In particular, we develop the notion of a connected sensor cover and design a centralized approximation algorithm that constructs a topology involving a near-optimal connected sensor cover. We prove that the size of the constructed topology is within an O(log n) factor of the optimal size, where n is the network size. We develop a distributed self-organization version of the approximation algorithm, and propose several optimizations to reduce the communication overhead of the algorithm. We also design another distributed algorithm based on node priorities that has an even lower communication overhead, but does not provide any guarantee on the size of the connected sensor cover constructed. Finally, we evaluate the distributed algorithms using simulations and show that our approaches result in significant communication cost reductions.

417 citations


Proceedings ArticleDOI
29 Sep 2006
TL;DR: It is shown that under a setting with single-hop traffic and no rate control, the maximal scheduling policy can achieve a constant fraction of the capacity region for networks whose connectivity graph can be represented using one of the above classes of graphs.
Abstract: We consider the problem of throughput-optimal scheduling in wireless networks subject to interference constraints. We model the interference using a family of K-hop interference models. We define a K-hop interference model as one for which no two links within K hops can successfully transmit at the same time (Note that IEEE 802.11 DCF corresponds to a 2-hop interference model.) For a given K, a throughput-optimal scheduler needs to solve a maximum weighted matching problem subject to the K-hop interference constraints. For K = 1, the resulting problem is the classical Maximum Weighted Matching problem, which can be solved in polynomial time. However, we show that for K > 1, the resulting problems are NP-hard and cannot be approximated within a factor that grows polynomially with the number of nodes. Interestingly, we show that for specific kinds of graphs, which can be used to model the underlying connectivity graph of a wide range of wireless networks, the resulting problems admit polynomial time approximation schemes. We also show that a simple greedy matching algorithm provides a constant factor approximation to the scheduling problem for all K in this case. We then show that under a setting with single-hop traffic and no rate control, the maximal scheduling policy considered in recent related works can achieve a constant fraction of the capacity region for networks whose connectivity graph can be represented using one of the above classes of graphs. These results are encouraging as they suggest that one can develop distributed algorithms to achieve near optimal throughput in the case of a wide range of wireless networks.
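As a hedged sketch of the greedy matching heuristic mentioned above (with a hypothetical line network, queue lengths as link weights, and one particular reading of the K-hop constraint, namely that two links conflict when some pair of their endpoints is fewer than K hops apart):

```python
from collections import deque

def hop_distance(graph, src):
    """BFS hop distances from src in an undirected connectivity graph."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def greedy_k_hop_schedule(graph, link_weights, k):
    """Greedy scheduling under a K-hop interference model: consider links in
    decreasing weight order and keep a link only if every already scheduled
    link is at least k hops away (endpoint-to-endpoint).  For k = 1 this
    reduces to greedy maximal weighted matching."""
    dist = {u: hop_distance(graph, u) for u in graph}
    scheduled = []
    for (u, v), _ in sorted(link_weights.items(), key=lambda kv: -kv[1]):
        ok = all(min(dist[a].get(b, k) for a in (u, v) for b in (x, y)) >= k
                 for (x, y) in scheduled)
        if ok:
            scheduled.append((u, v))
    return scheduled

# Hypothetical 5-node line network with per-link queue lengths as weights.
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
weights = {(0, 1): 5, (1, 2): 3, (2, 3): 4, (3, 4): 2}
print(greedy_k_hop_schedule(graph, weights, k=1))   # [(0, 1), (2, 3)]
```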

Journal ArticleDOI
TL;DR: A novel “coverage by directional sensors” problem with tunable orientations on a set of discrete targets is studied, and a distributed greedy algorithm (DGA) solution is provided; a measure of the sensors' residual energy is further incorporated into DGA.
Abstract: We study a novel “coverage by directional sensors” problem with tunable orientations on a set of discrete targets. We propose a Maximum Coverage with Minimum Sensors (MCMS) problem in which coverage in terms of the number of targets to be covered is maximized whereas the number of sensors to be activated is minimized. We present its exact Integer Linear Programming (ILP) formulation and an approximate (but computationally efficient) centralized greedy algorithm (CGA) solution. These centralized solutions are used as baselines for comparison. Then we provide a distributed greedy algorithm (DGA) solution. By incorporating a measure of the sensors' residual energy into DGA, we further develop a Sensing Neighborhood Cooperative Sleeping (SNCS) protocol which performs adaptive scheduling on a larger time scale. Finally, we evaluate the properties of the proposed solutions and protocols in terms of providing coverage and maximizing network lifetime through extensive simulations. Moreover, for the case of circular coverage, we compare against the best known existing coverage algorithm.
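The following is a hedged sketch of the greedy idea behind CGA/DGA (not the paper's exact message protocol; the sensors, orientations, and targets are hypothetical): repeatedly activate the sensor orientation that covers the most still-uncovered targets, using each sensor in at most one orientation.

```python
def greedy_directional_cover(coverage):
    """coverage maps (sensor, orientation) -> set of targets covered by that
    choice. Greedily pick the (sensor, orientation) covering the most
    uncovered targets; each sensor is used in at most one orientation."""
    covered, active = set(), {}
    candidates = dict(coverage)
    while True:
        best = max(candidates, key=lambda so: len(candidates[so] - covered),
                   default=None)
        if best is None or not (candidates[best] - covered):
            break
        sensor, orientation = best
        active[sensor] = orientation
        covered |= candidates[best]
        # Remove all remaining orientations of the chosen sensor.
        candidates = {so: t for so, t in candidates.items() if so[0] != sensor}
    return active, covered

# Hypothetical sensors s1..s3, each with two orientations, and targets t1..t4.
coverage = {("s1", 0): {"t1", "t2"}, ("s1", 1): {"t3"},
            ("s2", 0): {"t2"},       ("s2", 1): {"t3", "t4"},
            ("s3", 0): {"t4"},       ("s3", 1): {"t1"}}
print(greedy_directional_cover(coverage))
```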

Journal ArticleDOI
TL;DR: The proposed centralized algorithm uses the dual decomposition method to optimize spectra in an efficient and computationally tractable way and shows significant performance gains over existing dynamic spectrum management techniques.
Abstract: Crosstalk is a major issue in modern digital subscriber line (DSL) systems such as ADSL and VDSL. Static spectrum management, which is the traditional way of ensuring spectral compatibility, employs spectral masks that can be overly conservative and lead to poor performance. This paper presents a centralized algorithm for optimal spectrum balancing in DSL. The algorithm uses the dual decomposition method to optimize spectra in an efficient and computationally tractable way. The algorithm shows significant performance gains over existing dynamic spectrum management (DSM) techniques, e.g., in one of the cases studied, the proposed centralized algorithm leads to a factor-of-four increase in data rate over the distributed DSM algorithm, iterative water-filling.

Journal ArticleDOI
TL;DR: Distributed algorithms to compute an optimal routing scheme that maximizes the time at which the first node in the network drains out of energy are proposed.
Abstract: A sensor network of nodes with wireless transceiver capabilities and limited energy is considered. We propose distributed algorithms to compute an optimal routing scheme that maximizes the time at which the first node in the network drains out of energy. The problem is formulated as a linear programming problem and subgradient algorithms are used to solve it in a distributed manner. The resulting algorithms have low computational complexity and are guaranteed to converge to an optimal routing scheme that maximizes the network lifetime. The algorithms are illustrated by an example in which an optimal flow is computed for a network of randomly distributed nodes. We also show how our approach can be used to obtain distributed algorithms for many different extensions to the problem. Finally, we extend our problem formulation to more general definitions of network lifetime to model realistic scenarios in sensor networks

Proceedings ArticleDOI
26 Sep 2006
TL;DR: A localized fault detection algorithm is proposed and evaluated that can identify faulty sensors in wireless sensor networks with high accuracy; the probability of correct diagnosis remains high even in the presence of large fault sets.
Abstract: Wireless Sensor Networks (WSNs) have become a new information collection and monitoring solution for a variety of applications. Faults occurring at sensor nodes are common due to the sensor device itself and the harsh environment where the sensor nodes are deployed. In order to ensure the network quality of service, it is necessary for the WSN to be able to detect the faults and take actions to avoid further degradation of the service. The goal of this paper is to locate the faulty sensors in the wireless sensor networks. We propose and evaluate a localized fault detection algorithm to identify the faulty sensors. The implementation complexity of the algorithm is low, and the probability of correct diagnosis is very high even in the presence of large fault sets. Simulation results show that the algorithm can clearly identify the faulty sensors with high accuracy.
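The paper's exact test is not reproduced here; below is a generic sketch of the localized neighbor-comparison idea under assumed thresholds and readings: a sensor is suspected faulty when its measurement disagrees with a majority of its neighbors by more than a tolerance.

```python
def detect_faulty(readings, neighbors, threshold=5.0):
    """Localized majority test: sensor i is suspected faulty if its reading
    differs from more than half of its neighbors' readings by > threshold.
    This is a generic neighbor-comparison sketch, not the paper's exact test."""
    suspected = set()
    for i, nbrs in neighbors.items():
        disagreements = sum(abs(readings[i] - readings[j]) > threshold
                            for j in nbrs)
        if nbrs and disagreements > len(nbrs) / 2:
            suspected.add(i)
    return suspected

# Hypothetical neighborhood; node 2 reports an outlier value.
readings = {0: 20.1, 1: 20.4, 2: 35.0, 3: 19.8, 4: 20.2}
neighbors = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3, 4], 3: [1, 2, 4], 4: [2, 3]}
print(detect_faulty(readings, neighbors))  # {2}
```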

Journal ArticleDOI
TL;DR: A class of weighted gradient methods for distributed resource allocation over a network is considered; sufficient conditions on the edge weights for the algorithm to converge monotonically to the optimal solution are given in the form of a linear matrix inequality.
Abstract: We consider a class of weighted gradient methods for distributed resource allocation over a network. Each node of the network is associated with a local variable and a convex cost function; the sum of the variables (resources) across the network is fixed. Starting with a feasible allocation, each node updates its local variable in proportion to the differences between the marginal costs of itself and its neighbors. We focus on how to choose the proportional weights on the edges (scaling factors for the gradient method) to make this distributed algorithm converge and on how to make the convergence as fast as possible. We give sufficient conditions on the edge weights for the algorithm to converge monotonically to the optimal solution; these conditions have the form of a linear matrix inequality. We give some simple, explicit methods to choose the weights that satisfy these conditions. We derive a guaranteed convergence rate for the algorithm and find the weights that minimize this rate by solving a semidefinite program. Finally, we extend the main results to problems with general equality constraints and problems with block separable objective function.
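A scalar sketch of the weighted-gradient update, under assumed quadratic local costs and hand-picked (not LMI-optimized) edge weights: each node shifts resource to or from each neighbor in proportion to the difference of their marginal costs, so the total amount of resource is preserved and the marginal costs equalize at the optimum.

```python
def weighted_gradient_allocation(x, grads, neighbors, weights, iters=200):
    """x: initial feasible allocation (list). grads: per-node marginal-cost
    functions. Each iteration, node i exchanges W_ij * (f_i'(x_i) - f_j'(x_j))
    units of resource with neighbor j, which preserves sum(x)."""
    x = list(x)
    for _ in range(iters):
        g = [grads[i](x[i]) for i in range(len(x))]
        new_x = list(x)
        for i in range(len(x)):
            for j in neighbors[i]:
                new_x[i] -= weights[(min(i, j), max(i, j))] * (g[i] - g[j])
        x = new_x
    return x

# Hypothetical 3-node path, quadratic costs f_i(x) = a_i * (x - c_i)^2.
a, c = [1.0, 2.0, 1.0], [0.0, 1.0, 4.0]
grads = [lambda v, i=i: 2 * a[i] * (v - c[i]) for i in range(3)]
neighbors = {0: [1], 1: [0, 2], 2: [1]}
weights = {(0, 1): 0.1, (1, 2): 0.1}
print(weighted_gradient_allocation([2.0, 2.0, 2.0], grads, neighbors, weights))
# converges to roughly [0.4, 1.2, 4.4], which equalizes the marginal costs
```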

Journal ArticleDOI
26 Jun 2006
TL;DR: This work presents the first distributed scheduling framework that guarantees maximum throughput, based on a combination of a distributed matching algorithm and an algorithm that compares and merges successive matching solutions.
Abstract: A major challenge in the design of wireless networks is the need for distributed scheduling algorithms that will efficiently share the common spectrum. Recently, a few distributed algorithms for networks in which a node can converse with at most a single neighbor at a time have been presented. These algorithms guarantee 50% of the maximum possible throughput. We present the first distributed scheduling framework that guarantees maximum throughput. It is based on a combination of a distributed matching algorithm and an algorithm that compares and merges successive matching solutions. The comparison can be done by a deterministic algorithm or by randomized gossip algorithms. In the latter case, the comparison may be inaccurate. Yet, we show that if the matching and gossip algorithms satisfy simple conditions related to their performance and to the inaccuracy of the comparison (respectively), the framework attains the desired throughput. It is shown that the complexities of our algorithms, which achieve nearly 100% throughput, are comparable to those of the algorithms that achieve 50% throughput. Finally, we discuss extensions to general interference models. Even for such models, the framework provides a simple distributed throughput-optimal algorithm.

Journal ArticleDOI
TL;DR: It is shown that the proposed approach results in significant improvements in the total utility achieved at equilibrium compared with a single-carrier system and also with a multicarrier system in which each user maximizes its utility over each carrier independently.
Abstract: A game-theoretic model for studying power control in multicarrier code-division multiple-access systems is proposed. Power control is modeled as a noncooperative game in which each user decides how much power to transmit over each carrier to maximize its own utility. The utility function considered here measures the number of reliable bits transmitted over all the carriers per joule of energy consumed and is particularly suitable for networks where energy efficiency is important. The multidimensional nature of users' strategies and the nonquasi-concavity of the utility function make the multicarrier problem much more challenging than the single-carrier or throughput-based-utility case. It is shown that, for all linear receivers including the matched filter, the decorrelator, and the minimum-mean-square-error detector, a user's utility is maximized when the user transmits only on its "best" carrier. This is the carrier that requires the least amount of power to achieve a particular target signal-to-interference-plus-noise ratio at the output of the receiver. The existence and uniqueness of Nash equilibrium for the proposed power control game are studied. In particular, conditions are given that must be satisfied by the channel gains for a Nash equilibrium to exist, and the distribution of the users among the carriers at equilibrium is characterized. In addition, an iterative and distributed algorithm for reaching the equilibrium (when it exists) is presented. It is shown that the proposed approach results in significant improvements in the total utility achieved at equilibrium compared with a single-carrier system and also to a multicarrier system in which each user maximizes its utility over each carrier independently

Proceedings ArticleDOI
29 Sep 2006
TL;DR: Using a mathematical formulation, synchronized TDMA link schedules that optimize network throughput are developed, together with efficient centralized and distributed algorithms that use time slots within a constant factor of the optimum.
Abstract: We study efficient link scheduling for a multihop wireless network to maximize its throughput. Efficient link scheduling can greatly reduce the interference effect of close-by transmissions. Unlike the previous studies that often assume a unit disk graph model, we assume that different terminals could have different transmission ranges and different interference ranges. In our model, it is also possible that a communication link may not exist due to barriers or is not used by a predetermined routing protocol, while the transmission of a node always results in interference at all non-intended receivers within its interference range. Using a mathematical formulation, we develop synchronized TDMA link schedules that optimize the network throughput. Specifically, by assuming known link capacities and link traffic loads, we study link scheduling under the RTS/CTS interference model and the protocol interference model with fixed transmission power. For both models, we present both efficient centralized and distributed algorithms that use time slots within a constant factor of the optimum. We also present efficient distributed algorithms whose performances are still comparable with the optimum, but with much less communication. Our theoretical results are corroborated by extensive simulation studies.

Proceedings ArticleDOI
11 Dec 2006
TL;DR: A novel algorithm called SCALE is derived that provides a significant performance improvement over the existing iterative water-filling (IWF) algorithm in multi-user DSL networks, doing so with comparably low complexity.
Abstract: Dynamic Spectrum Management of Digital Subscriber Lines (DSL) has the potential to dramatically increase the capacity of the aging last-mile copper access network. This paper takes an important step toward fulfilling this potential through power spectrum balancing. We derive a novel algorithm called SCALE that provides a significant performance improvement over the existing iterative water-filling (IWF) algorithm in multi-user DSL networks, doing so with comparably low complexity. The algorithm is easily distributed through measurement and limited message-passing with the use of a Spectrum Management Center. We outline how overhead can be managed, and show that in the limit of zero message-passing, performance reduces to IWF. Numerical convergence of SCALE was found to be extremely fast when applied to VDSL, with performance exceeding that of iterative water-filling in just a few iterations, and to over 90% of the final rate in under 5 iterations. Lastly, we return to the problem of iterative water-filling and derive a new algorithm named SCAWF that is shown to be a very simple way to water-fill, particularly suited to the multi-user context.
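SCALE itself is not reproduced here; for context, the sketch below implements the iterative water-filling baseline it is compared against, with hypothetical two-user, four-tone channel, crosstalk, and noise values: each user repeatedly water-fills its power budget against the interference it currently observes from the other.

```python
def waterfill(noise, budget, iters=60):
    """Single-user water-filling: find a water level mu so that
    sum_k max(0, mu - noise[k]) == budget (bisection on mu)."""
    lo, hi = min(noise), max(noise) + budget
    for _ in range(iters):
        mu = (lo + hi) / 2
        if sum(max(0.0, mu - n) for n in noise) > budget:
            hi = mu
        else:
            lo = mu
    return [max(0.0, mu - n) for n in noise]

def iterative_waterfilling(direct, cross, noise, budgets, rounds=20):
    """Two-user IWF sketch: user u water-fills against the background noise
    plus the crosstalk cross[u][k] * p_other[k] it currently sees on tone k."""
    tones = len(noise)
    p = [[budgets[u] / tones] * tones for u in range(2)]
    for _ in range(rounds):
        for u in range(2):
            o = 1 - u
            eff = [(noise[k] + cross[u][k] * p[o][k]) / direct[u][k]
                   for k in range(tones)]
            p[u] = waterfill(eff, budgets[u])
    return p

# Hypothetical 4-tone, 2-user line: direct gains, crosstalk gains, noise.
direct = [[1.0, 0.8, 0.5, 0.3], [0.4, 0.6, 0.9, 1.0]]
cross = [[0.05, 0.05, 0.1, 0.1], [0.1, 0.1, 0.05, 0.05]]
noise = [0.01] * 4
print(iterative_waterfilling(direct, cross, noise, budgets=[1.0, 1.0]))
```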

Journal ArticleDOI
TL;DR: A technique for efficiently storing user reputation information in a completely decentralized manner is described, and it is shown how this information can be used to efficiently identify non-cooperative users in NICE.

Journal ArticleDOI
TL;DR: This paper derives and analyzes distributed state estimators of dynamical stochastic processes, whereby the low communication cost is effected by requiring the transmission of a single bit per observation.
Abstract: When dealing with decentralized estimation, it is important to reduce the cost of communicating the distributed observations-a problem receiving revived interest in the context of wireless sensor networks. In this paper, we derive and analyze distributed state estimators of dynamical stochastic processes, whereby the low communication cost is effected by requiring the transmission of a single bit per observation. Following a Kalman filtering (KF) approach, we develop recursive algorithms for distributed state estimation based on the sign of innovations (SOI). Even though SOI-KF can afford minimal communication overhead, we prove that in terms of performance and complexity it comes very close to the clairvoyant KF which is based on the analog-amplitude observations. Reinforcing our conclusions, we show that the SOI-KF applied to distributed target tracking based on distance-only observations yields accurate estimates at low communication cost
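As a rough scalar illustration of the sign-of-innovations idea (an assumption-laden sketch, not the paper's exact filter): the analog innovation in the Kalman correction is replaced by its conditional mean given only its sign, which for a zero-mean Gaussian innovation with variance s is sqrt(2*s/pi) times the transmitted bit.

```python
import math
import random

def soi_kf(q=0.01, r=0.25, steps=500, seed=1):
    """Scalar sketch: random walk x[t+1] = x[t] + w observed as y[t] = x[t] + v.
    The sensor transmits only b[t] = sign(y[t] - x_pred); the filter replaces
    the analog innovation by sqrt(2*s/pi) * b, where s is the predicted
    innovation variance (this scaling is an assumption of the sketch)."""
    rng = random.Random(seed)
    x_true, x_hat, p = 0.0, 0.0, 1.0
    sq_err = []
    for _ in range(steps):
        x_true += rng.gauss(0, math.sqrt(q))        # true state evolves
        y = x_true + rng.gauss(0, math.sqrt(r))     # local analog observation
        p_pred = p + q                              # prediction step
        s = p_pred + r                              # innovation variance
        b = 1.0 if y >= x_hat else -1.0             # the single transmitted bit
        x_hat += math.sqrt(2.0 / math.pi) * (p_pred / math.sqrt(s)) * b
        p = p_pred - (2.0 / math.pi) * p_pred ** 2 / s
        sq_err.append((x_true - x_hat) ** 2)
    return sum(sq_err) / len(sq_err)

print("mean squared tracking error:", soi_kf())
```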

Proceedings ArticleDOI
22 Jan 2006
TL;DR: This paper provides an almost tight classification of the possible trade-off between the amount of local information and the quality of the global solution for general covering and packing problems and gives a distributed algorithm using only small messages which obtains a (ρΔ)^(1/k)-approximation in time O(k^2).
Abstract: Achieving a global goal based on local information is challenging, especially in complex and large-scale networks such as the Internet or even the human brain. In this paper, we provide an almost tight classification of the possible trade-off between the amount of local information and the quality of the global solution for general covering and packing problems. Specifically, we give a distributed algorithm using only small messages which obtains a (ρΔ)^(1/k)-approximation for general covering and packing problems in time O(k^2), where ρ depends on the LP's coefficients. If message size is unbounded, we present a second algorithm that achieves an O(n^(1/k)) approximation in O(k) rounds. Finally, we prove that these algorithms are close to optimal by giving a lower bound on the approximability of packing problems given that each node has to base its decision on information from its k-neighborhood.

Proceedings ArticleDOI
01 Dec 2006
TL;DR: For a model of a random wireless network, it is shown that with high probability the error variance is O(1) as the number of nodes in the network increases, which provides support for the feasibility of time-based computing in large wireless networks.
Abstract: We analyze the spatial smoothing algorithm of Solis, Borkar and Kumar (2005) for clock synchronization over multi-hop wireless networks. In particular, for a model of a random wireless network we show that with high probability the error variance is O(1) as the number of nodes in the network increases. This provides support for the feasibility of time-based computing in large wireless networks. We also provide bounds on the settling time of a distributed algorithm.

Proceedings ArticleDOI
01 Dec 2006
TL;DR: A fully distributed and asynchronous algorithm which functions by simple local broadcasts is designed, and changing the time reference node for synchronization is also easy, consisting simply of one node switching on adaptation, and another switching it off.
Abstract: A distributed algorithm to achieve accurate time synchronization in large multihop wireless networks is presented. The central idea is to exploit the large number of global constraints that have to be satisfied by a common notion of time in a multihop network. If, at a certain instant, O_ij is the clock offset between two neighboring nodes i and j, then for any loop i_1, i_2, i_3, ..., i_n, i_{n+1} = i_1 in the multihop network, these offsets must satisfy the global constraint O_{i_1,i_2} + O_{i_2,i_3} + ... + O_{i_n,i_{n+1}} = 0. Noisy estimates Ô_ij of O_ij are usually arrived at by bilateral exchanges of timestamped messages or local broadcasts. By imposing the large number of global constraints for all the loops in the multihop network, these estimates can be smoothed and made more accurate. A fully distributed and asynchronous algorithm which functions by simple local broadcasts is designed. Changing the time reference node for synchronization is also easy, consisting simply of one node switching on adaptation, and another switching it off. Implementation results on a forty node network, and comparative evaluation against a leading algorithm, are presented.
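One way to read the loop-constraint smoothing is as a least-squares fit of per-node clock corrections to the noisy pairwise estimates; the smoothed offsets, being differences of node values, then satisfy every loop constraint exactly. Below is a hedged synchronous Jacobi-style sketch of that fit (not the paper's asynchronous broadcast protocol), with a hypothetical four-node ring and the sign convention that est[(i, j)] estimates node j's clock minus node i's clock.

```python
def smooth_offsets(est, nodes, reference=0, iters=300):
    """est[(i, j)] is a noisy local estimate of node j's clock minus node i's
    clock (so est[(j, i)] = -est[(i, j)]).  Jacobi-style least-squares
    smoothing: each node repeatedly resets its value theta[u] to the average
    of theta[v] - est[(u, v)] over its neighbors v, with the time reference
    node pinned to zero.  The smoothed pairwise offsets theta[j] - theta[i]
    satisfy every loop constraint exactly."""
    nbrs = {u: [] for u in nodes}
    for (i, j) in est:
        nbrs[i].append(j)
    theta = {u: 0.0 for u in nodes}
    for _ in range(iters):
        theta = {u: sum(theta[v] - est[(u, v)] for v in nbrs[u]) / len(nbrs[u])
                 for u in nodes}
        theta[reference] = 0.0          # pin the time-reference node
    return theta

# Hypothetical 4-node ring with true clock values 0.0, 2.0, 3.5, 1.0 and
# slightly inconsistent pairwise estimates (their sum around the loop is 0.2).
raw = {(0, 1): 2.1, (1, 2): 1.4, (2, 3): -2.4, (3, 0): -0.9}
est = dict(raw)
est.update({(j, i): -v for (i, j), v in raw.items()})
print(smooth_offsets(est, nodes=[0, 1, 2, 3]))
```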

Proceedings ArticleDOI
14 May 2006
TL;DR: It is shown that many power control problems, with coupled constraints among the users, can be naturally formulated as potential games and, hence, efficiently solved.
Abstract: In this paper we propose a unified framework, based on potential games, to deal with a variety of network resource allocation problems. We generalize the existing results on potential games to the cases where there exists coupling among the (possibly vector) strategies of all players. We derive sufficient conditions for the existence and uniqueness of the Nash Equilibrium, and provide different distributed algorithms along with their convergence properties. Using this new framework, we then show that many power control problems (standard and non-standard) with coupled constraints among the users can be naturally formulated as potential games and, hence, efficiently solved. Finally, we point out an interesting interplay between potential games, classical optimization theory, and Lyapunov stability theory.

Proceedings ArticleDOI
23 Jul 2006
TL;DR: In this article, the authors studied the problem of computing functions of values at the nodes in a network in a totally distributed manner, and proposed a distributed randomized algorithm for computing separable functions based on properties of exponential random variables.
Abstract: Motivated by applications to sensor, peer-to-peer, and ad-hoc networks, we study the problem of computing functions of values at the nodes in a network in a totally distributed manner. In particular, we consider separable functions, which can be written as linear combinations of functions of individual variables. Known iterative algorithms for averaging can be used to compute the normalized values of such functions, but these algorithms do not extend in general to the computation of the actual values of separable functions. The main contribution of this paper is the design of a distributed randomized algorithm for computing separable functions based on properties of exponential random variables. We bound the running time of our algorithm in terms of the running time of an information spreading algorithm used as a subroutine by the algorithm. Since we are interested in totally distributed algorithms, we consider a randomized gossip mechanism for information spreading as the subroutine. Combining these algorithms yields a complete and simple distributed algorithm for computing separable functions. The second contribution of this paper is an analysis of the information spreading time of the gossip algorithm. This analysis yields an upper bound on the information spreading time, and therefore a corresponding upper bound on the running time of the algorithm for computing separable functions, in terms of the conductance of an appropriate stochastic matrix. These bounds imply that, for a class of graphs with small spectral gap (such as grid graphs), the time used by our algorithm to compute averages is of a smaller order than the time required for the computation of averages by a known iterative gossip scheme [5].
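A hedged, centralized simulation of the core estimator (the gossip-based computation of minima and the information-spreading analysis are omitted): each node draws r exponential samples with rate equal to its local value, the network takes coordinate-wise minima, and the sum of the values is estimated from those minima.

```python
import random

def estimate_sum(values, r=1000, seed=0):
    """Each node i draws r independent Exp(values[i]) samples; the network's
    coordinate-wise minima are Exp(sum(values)) samples, since the minimum of
    independent exponentials is exponential with the summed rate.  The sum is
    then estimated as (r - 1) / (total of the r minima), the standard unbiased
    estimator of an exponential rate.  The minima themselves are easy to
    compute by gossip or flooding; here they are computed centrally."""
    rng = random.Random(seed)
    minima = [min(rng.expovariate(v) for v in values) for _ in range(r)]
    return (r - 1) / sum(minima)

values = [3.0, 1.5, 2.5, 4.0, 9.0]          # hypothetical node values, sum 20
print(estimate_sum(values))                  # close to 20
```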

Book
27 Oct 2006
TL;DR: This monograph provides a systematic treatment of the design and analysis of distributed algorithms, covering basic problems and protocols, election, routing and shortest paths, distributed set operations, synchronous computations, fault tolerance, stable-property detection, and continuous computations.
Abstract: Preface. 1. Distributed Computing Environments. 1.1 Entities. 1.2 Communication. 1.3 Axioms and Restrictions. 1.3.1 Axioms. 1.3.2 Restrictions. 1.4 Cost and Complexity. 1.4.1 Amount of Communication Activities. 1.4.2 Time. 1.5 An Example: Broadcasting. 1.6 States and Events. 1.6.1 Time and Events. 1.6.2 States and Configurations. 1.7 Problems and Solutions (). 1.8 Knowledge. 1.8.1 Levels of Knowledge. 1.8.2 Types of Knowledge. 1.9 Technical Considerations. 1.9.1 Messages. 1.9.2 Protocol. 1.9.3 Communication Mechanism. 1.10 Summary of Definitions. 1.11 Bibliographical Notes. 1.12 Exercises, Problems, and Answers. 1.12.1 Exercises and Problems. 1.12.2 Answers to Exercises. 2. Basic Problems And Protocols. 2.1 Broadcast. 2.1.1 The Problem. 2.1.2 Cost of Broadcasting. 2.1.3 Broadcasting in Special Networks. 2.2 Wake-Up. 2.2.1 Generic Wake-Up. 2.2.2 Wake-Up in Special Networks. 2.3 Traversal. 2.3.1 Depth-First Traversal. 2.3.2 Hacking (). 2.3.3 Traversal in Special Networks. 2.3.4 Considerations on Traversal. 2.4 Practical Implications: Use a Subnet. 2.5 Constructing a Spanning Tree. 2.5.1 SPT Construction with a Single Initiator: Shout. 2.5.2 Other SPT Constructions with Single Initiator. 2.5.3 Considerations on the Constructed Tree. 2.5.4 Application: Better Traversal. 2.5.5 Spanning-Tree Construction with Multiple Initiators. 2.5.6 Impossibility Result. 2.5.7 SPT with Initial Distinct Values. 2.6 Computations in Trees. 2.6.1 Saturation: A Basic Technique. 2.6.2 Minimum Finding. 2.6.3 Distributed Function Evaluation. 2.6.4 Finding Eccentricities. 2.6.5 Center Finding. 2.6.6 Other Computations. 2.6.7 Computing in Rooted Trees. 2.7 Summary. 2.7.1 Summary of Problems. 2.7.2 Summary of Techniques. 2.8 Bibliographical Notes. 2.9 Exercises, Problems, and Answers. 2.9.1 Exercises. 2.9.2 Problems. 2.9.3 Answers to Exercises. 3. Election. 3.1 Introduction. 3.1.1 Impossibility Result. 3.1.2 Additional Restrictions. 3.1.3 Solution Strategies. 3.2 Election in Trees. 3.3 Election in Rings. 3.3.1 All the Way. 3.3.2 As Far As It Can. 3.3.3 Controlled Distance. 3.3.4 Electoral Stages. 3.3.5 Stages with Feedback. 3.3.6 Alternating Steps. 3.3.7 Unidirectional Protocols. 3.3.8 Limits to Improvements (). 3.3.9 Summary and Lessons. 3.4 Election in Mesh Networks. 3.4.1 Meshes. 3.4.2 Tori. 3.5 Election in Cube Networks. 3.5.1 Oriented Hypercubes. 3.5.2 Unoriented Hypercubes. 3.6 Election in Complete Networks. 3.6.1 Stages and Territory. 3.6.2 Surprising Limitation. 3.6.3 Harvesting the Communication Power. 3.7 Election in Chordal Rings (). 3.7.1 Chordal Rings. 3.7.2 Lower Bounds. 3.8 Universal Election Protocols. 3.8.1 Mega-Merger. 3.8.2 Analysis of Mega-Merger. 3.8.3 YO-YO. 3.8.4 Lower Bounds and Equivalences. 3.9 Bibliographical Notes. 3.10 Exercises, Problems, and Answers. 3.10.1 Exercises. 3.10.2 Problems. 3.10.3 Answers to Exercises. 4. Message Routing and Shortest Paths. 4.1 Introduction. 4.2 Shortest Path Routing. 4.2.1 Gossiping the Network Maps. 4.2.2 Iterative Construction of Routing Tables. 4.2.3 Constructing Shortest-Path Spanning Tree. 4.2.4 Constructing All-Pairs Shortest Paths. 4.2.5 Min-Hop Routing. 4.2.6 Suboptimal Solutions: Routing Trees. 4.3 Coping with Changes. 4.3.1 Adaptive Routing. 4.3.2 Fault-Tolerant Tables. 4.3.3 On Correctness and Guarantees. 4.4 Routing in Static Systems: Compact Tables. 4.4.1 The Size of Routing Tables. 4.4.2 Interval Routing. 4.5 Bibliographical Notes. 4.6 Exercises, Problems, and Answers. 4.6.1 Exercises. 4.6.2 Problems. 4.6.3 Answers to Exercises. 5. 
Distributed Set Operations. 5.1 Introduction. 5.2 Distributed Selection. 5.2.1 Order Statistics. 5.2.2 Selection in a Small Data Set. 5.2.3 Simple Case: Selection Among Two Sites. 5.2.4 General Selection Strategy: RankSelect. 5.2.5 Reducing the Worst Case: ReduceSelect. 5.3 Sorting a Distributed Set. 5.3.1 Distributed Sorting. 5.3.2 Special Case: Sorting on a Ordered Line. 5.3.3 Removing the Topological Constraints: Complete Graph. 5.3.4 Basic Limitations. 5.3.5 Efficient Sorting: SelectSort. 5.3.6 Unrestricted Sorting. 5.4 Distributed Sets Operations. 5.4.1 Operations on Distributed Sets. 5.4.2 Local Structure. 5.4.3 Local Evaluation (). 5.4.4 Global Evaluation. 5.4.5 Operational Costs. 5.5 Bibliographical Notes. 5.6 Exercises, Problems, and Answers. 5.6.1 Exercises. 5.6.2 Problems. 5.6.3 Answers to Exercises. 6. Synchronous Computations. 6.1 Synchronous Distributed Computing. 6.1.1 Fully Synchronous Systems. 6.1.2 Clocks and Unit of Time. 6.1.3 Communication Delays and Size of Messages. 6.1.4 On the Unique Nature of Synchronous Computations. 6.1.5 The Cost of Synchronous Protocols. 6.2 Communicators, Pipeline, and Transformers. 6.2.1 Two-Party Communication. 6.2.2 Pipeline. 6.2.3 Transformers. 6.3 Min-Finding and Election: Waiting and Guessing. 6.3.1 Waiting. 6.3.2 Guessing. 6.3.3 Double Wait: Integrating Waiting and Guessing. 6.4 Synchronization Problems: Reset, Unison, and Firing Squad. 6.4.1 Reset /Wake-up. 6.4.2 Unison. 6.4.3 Firing Squad. 6.5 Bibliographical Notes. 6.6 Exercises, Problems, and Answers. 6.6.1 Exercises. 6.6.2 Problems. 6.6.3 Answers to Exercises. 7. Computing in Presence of Faults. 7.1 Introduction. 7.1.1 Faults and Failures. 7.1.2 Modelling Faults. 7.1.3 Topological Factors. 7.1.4 Fault Tolerance, Agreement, and Common Knowledge. 7.2 The Crushing Impact of Failures. 7.2.1 Node Failures: Single-Fault Disaster. 7.2.2 Consequences of the Single Fault Disaster. 7.3 Localized Entity Failures: Using Synchrony. 7.3.1 Synchronous Consensus with Crash Failures. 7.3.2 Synchronous Consensus with Byzantine Failures. 7.3.3 Limit to Number of Byzantine Entities for Agreement. 7.3.4 From Boolean to General Byzantine Agreement. 7.3.5 Byzantine Agreement in Arbitrary Graphs. 7.4 Localized Entity Failures: Using Randomization. 7.4.1 Random Actions and Coin Flips. 7.4.2 Randomized Asynchronous Consensus: Crash Failures. 7.4.3 Concluding Remarks. 7.5 Localized Entity Failures: Using Fault Detection. 7.5.1 Failure Detectors and Their Properties. 7.5.2 The Weakest Failure Detector. 7.6 Localized Entity Failures: Pre-Execution Failures. 7.6.1 Partial Reliability. 7.6.2 Example: Election in Complete Network. 7.7 Localized Link Failures. 7.7.1 A Tale of Two Synchronous Generals. 7.7.2 Computing With Faulty Links. 7.7.3 Concluding Remarks. 7.7.4 Considerations on Localized Entity Failures. 7.8 Ubiquitous Faults. 7.8.1 Communication Faults and Agreement. 7.8.2 Limits to Number of Ubiquitous Faults for Majority. 7.8.3 Unanimity in Spite of Ubiquitous Faults. 7.8.4 Tightness. 7.9 Bibliographical Notes. 7.10 Exercises, Problems, and Answers. 7.10.1 Exercises. 7.10.2 Problems. 7.10.3 Answers to Exercises. 8. Detecting Stable Properties. 8.1 Introduction. 8.2 Deadlock Detection. 8.2.1 Deadlock. 8.2.2 Detecting Deadlock: Wait-for Graph. 8.2.3 Single-Request Systems. 8.2.4 Multiple-Requests Systems. 8.2.5 Dynamic Wait-for Graphs. 8.2.6 Other Requests Systems. 8.3 Global Termination Detection. 8.3.1 A Simple Solution: Repeated Termination Queries. 8.3.2 Improved Protocols: Shrink. 
8.3.3 Concluding Remarks. 8.4 Global Stable Property Detection. 8.4.1 General Strategy. 8.4.2 Time Cuts and Consistent Snapshots. 8.4.3 Computing A Consistent Snapshot. 8.4.4 Summary: Putting All Together. 8.5 Bibliographical Notes. 8.6 Exercises, Problems, and Answers. 8.6.1 Exercises. 8.6.2 Problems. 8.6.3 Answers to Exercises. 9. Continuous Computations. 9.1 Introduction. 9.2 Keeping Virtual Time. 9.2.1 Virtual Time and Causal Order. 9.2.2 Causal Order: Counter Clocks. 9.2.3 Complete Causal Order: Vector Clocks. 9.2.4 Concluding Remarks. 9.3 Distributed Mutual Exclusion. 9.3.1 The Problem. 9.3.2 A Simple And Efficient Solution. 9.3.3 Traversing the Network. 9.3.4 Managing a Distributed Queue. 9.3.5 Decentralized Permissions. 9.3.6 Mutual Exclusion in Complete Graphs: Quorum. 9.3.7 Concluding Remarks. 9.4 Deadlock: System Detection and Resolution. 9.4.1 System Detection and Resolution. 9.4.2 Detection and Resolution in Single-Request Systems. 9.4.3 Detection and Resolution in Multiple-Requests Systems. 9.5 Bibliographical Notes. 9.6 Exercises, Problems, and Answers. 9.6.1 Exercises. 9.6.2 Problems. 9.6.3 Answers to Exercises. Index.

Journal ArticleDOI
TL;DR: A cooperative coevolutionary algorithm (CCEA) for multiobjective optimization, which applies the divide-and-conquer approach to decompose decision vectors into smaller components and evolves multiple solutions in the form of cooperative subpopulations is presented.
Abstract: Recent advances in evolutionary algorithms show that coevolutionary architectures are effective ways to broaden the use of traditional evolutionary algorithms. This paper presents a cooperative coevolutionary algorithm (CCEA) for multiobjective optimization, which applies the divide-and-conquer approach to decompose decision vectors into smaller components and evolves multiple solutions in the form of cooperative subpopulations. Incorporated with various features like archiving, dynamic sharing, and extending operator, the CCEA is capable of maintaining archive diversity in the evolution and distributing the solutions uniformly along the Pareto front. Exploiting the inherent parallelism of cooperative coevolution, the CCEA can be formulated into a distributed cooperative coevolutionary algorithm (DCCEA) suitable for concurrent processing that allows inter-communication of subpopulations residing in networked computers, and hence expedites the computational speed by sharing the workload among multiple computers. Simulation results show that the CCEA is competitive in finding the tradeoff solutions, and the DCCEA can effectively reduce the simulation runtime without sacrificing the performance of CCEA as the number of peers is increased

Posted Content
TL;DR: A new class of simple, distributed algorithms for scheduling in multihop wireless networks under the primary interference model, parameterized by integers k ≥ 1, is proposed; these are the first algorithms guaranteed to achieve any fixed fraction of the capacity region while using small and constant overheads that do not scale with network size.
Abstract: This paper proposes a new class of simple, distributed algorithms for scheduling in wireless networks. The algorithms generate new schedules in a distributed manner via simple local changes to existing schedules. The class is parameterized by integers k ≥ 1. We show that algorithm k of our class achieves k/(k+2) of the capacity region, for every k ≥ 1. The algorithms have small and constant worst-case overheads: in particular, algorithm k generates a new schedule using (a) time less than 4k+2 round-trip times between neighboring nodes in the network, and (b) at most three control transmissions by any given node, for any k. The control signals are explicitly specified, and face the same interference effects as normal data transmissions. Our class of distributed wireless scheduling algorithms is the first guaranteed to achieve any fixed fraction of the capacity region while using small and constant overheads that do not scale with network size. The parameter k explicitly captures the tradeoff between control overhead and scheduler throughput performance and provides a tuning knob that protocol designers can use to harness this tradeoff in practice.