
Showing papers on "Distributed algorithm published in 2005"


Proceedings ArticleDOI
13 Mar 2005
TL;DR: It is shown that intelligent channel assignment is critical to Hyacinth's performance, and distributed algorithms that utilize only local traffic load information to dynamically assign channels and to route packets are presented, and their performance is compared against a centralized algorithm that performs the same functions.
Abstract: Even though multiple non-overlapped channels exist in the 2.4 GHz and 5 GHz spectrum, most IEEE 802.11-based multi-hop ad hoc networks today use only a single channel. As a result, these networks can rarely fully exploit the aggregate bandwidth available in the radio spectrum provisioned by the standards. This prevents them from being used as an ISP's wireless last-mile access network or as a wireless enterprise backbone network. In this paper, we propose a multi-channel wireless mesh network (WMN) architecture (called Hyacinth) that equips each mesh network node with multiple 802.11 network interface cards (NICs). The central design issues of this multi-channel WMN architecture are channel assignment and routing. We show that intelligent channel assignment is critical to Hyacinth's performance, present distributed algorithms that utilize only local traffic load information to dynamically assign channels and to route packets, and compare their performance against a centralized algorithm that performs the same functions. Through an extensive simulation study, we show that even with just 2 NICs on each node, it is possible to improve the network throughput by a factor of 6 to 7 when compared with the conventional single-channel ad hoc network architecture. We also describe and evaluate a 9-node Hyacinth prototype that is built using commodity PCs each equipped with two 802.11a NICs.

1,636 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present the operation of a multiagent system (MAS) for the control of a microgrid and a classical distributed algorithm based on the symmetrical assignment problem for the optimal energy exchange between the production units of the microgrid and the local loads, as well as the main grid.
Abstract: This paper presents the operation of a multiagent system (MAS) for the control of a Microgrid. The approach presented utilizes the advantages of using the MAS technology for controlling a Microgrid and a classical distributed algorithm based on the symmetrical assignment problem for the optimal energy exchange between the production units of the Microgrid and the local loads, as well as the main grid.

1,035 citations


Proceedings ArticleDOI
12 Dec 2005
TL;DR: This paper shows that a central Kalman filter for sensor networks can be decomposed into n micro-Kalman filters with inputs that are provided by two types of consensus filters, and demonstrates that these filters can approximate these sums and give an approximate distributed Kalman filtering algorithm.
Abstract: The problem of distributed Kalman filtering (DKF) for sensor networks is one of the most fundamental distributed estimation problems for scalable sensor fusion. This paper addresses the DKF problem by reducing it to two separate dynamic consensus problems in terms of weighted measurements and inverse-covariance matrices. These two data fusion problems are solved in a distributed way using low-pass and band-pass consensus filters. Consensus filters are distributed algorithms that allow calculation of average-consensus of time-varying signals. The stability properties of consensus filters are discussed in a companion CDC ’05 paper [24]. We show that a central Kalman filter for sensor networks can be decomposed into n micro-Kalman filters with inputs that are provided by two types of consensus filters. This network of micro-Kalman filters is collectively capable of providing an estimate of the state of the process (under observation) that is identical to the estimate obtained by a central Kalman filter, given that all nodes agree on two central sums. Later, we demonstrate that our consensus filters can approximate these sums, which gives an approximate distributed Kalman filtering algorithm. A detailed account of the computational and communication architecture of the algorithm is provided. Simulation results are presented for a sensor network with 200 nodes and more than 1000 links.
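The role of the consensus filters can be illustrated with a small discrete-time sketch: each node repeatedly fuses its neighbors' filter states with the local inputs, and all states converge to the network-wide average of the inputs, which is the kind of central sum each micro-Kalman filter needs. The complete 4-node graph, unit weights, constant inputs, and step size below are illustrative assumptions, not the paper's design.

```python
# Discrete-time sketch of a low-pass dynamic consensus filter in the spirit
# of the abstract above. Each node i updates its state using only its
# neighbors' states and the local inputs u_j; all states converge to the
# average of the inputs sum(u)/n.
n = 4
u = [1.0, 2.0, 3.0, 4.0]   # local measurements (held constant here)
x = [0.0] * n              # filter states, one per node
eps = 0.05                 # small integration step size (illustrative)

for _ in range(300):
    x_new = []
    for i in range(n):
        neighbors = [j for j in range(n) if j != i]           # complete graph
        flow = sum(x[j] - x[i] for j in neighbors)            # consensus term
        inputs = sum(u[j] - x[i] for j in neighbors + [i])    # input injection
        x_new.append(x[i] + eps * (flow + inputs))
    x = x_new
# each x[i] is now close to sum(u)/n = 2.5
```

With time-varying inputs the same iteration tracks the moving average, which is the "dynamic consensus" behavior the paper relies on.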

1,021 citations


Proceedings ArticleDOI
01 Dec 2005
TL;DR: An old distributed algorithm for reaching consensus that has received a fair amount of recent attention is discussed, in which a number of agents exchange their values asynchronously and form weighted averages with (possibly outdated) values possessed by their neighbors.
Abstract: We discuss an old distributed algorithm for reaching consensus that has received a fair amount of recent attention. In this algorithm, a number of agents exchange their values asynchronously and form weighted averages with (possibly outdated) values possessed by their neighbors. We overview existing convergence results, and establish some new ones, for the case of unbounded intercommunication intervals.
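The averaging iteration described above can be sketched in its synchronous special case (the paper's focus is the harder asynchronous setting with possibly outdated values). The 4-agent ring and the doubly stochastic weight matrix below are illustrative choices, not taken from the paper; with doubly stochastic weights the agents converge to the exact average of their initial values.

```python
# One synchronous round: each agent replaces its value with a weighted
# average of its own and its neighbors' values, x_i <- sum_j W[i][j] * x_j.
W = [
    [0.5,  0.25, 0.0,  0.25],
    [0.25, 0.5,  0.25, 0.0],
    [0.0,  0.25, 0.5,  0.25],
    [0.25, 0.0,  0.25, 0.5],
]  # doubly stochastic weights for a 4-agent ring (illustrative)

def consensus_step(x, W):
    n = len(x)
    return [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]

x = [1.0, 2.0, 3.0, 4.0]
for _ in range(200):
    x = consensus_step(x, W)
# x is now numerically [2.5, 2.5, 2.5, 2.5], the average of the initial values
```

If W is only row-stochastic the agents still reach consensus under the connectivity conditions the paper studies, but on a weighted (not necessarily arithmetic) average.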

964 citations


Patent
Galen C. Hunt1, Bassam Tabbara1, Kevin Grealish1, Geoffrey Outhred, Rob Mensching 
29 Dec 2005
TL;DR: An architecture and methodology for designing, deploying, and managing a distributed application onto a distributed computing system is described in this article.
Abstract: An architecture and methodology for designing, deploying, and managing a distributed application onto a distributed computing system is described.

606 citations


Book
Mung Chiang1
06 Jun 2005
TL;DR: This text provides both an in-depth tutorial on the theory, algorithms, and modeling methods of GP, and a comprehensive survey on the applications of GP to the study of communication systems.
Abstract: Geometric Programming (GP) is a class of nonlinear optimization with many useful theoretical and computational properties. Over the last few years, GP has been used to solve a variety of problems in the analysis and design of communication systems in several 'layers' in the communication network architecture, including information theory problems, signal processing algorithms, basic queuing system optimization, many network resource allocation problems such as power control and congestion control, and cross-layer design. We also start to understand why, in addition to how, GP can be applied to a surprisingly wide range of problems in communication systems. These applications have in turn spurred new research activities on GP, especially generalizations of GP formulations and development of distributed algorithms to solve GP in a network. This text provides both an in-depth tutorial on the theory, algorithms, and modeling methods of GP, and a comprehensive survey on the applications of GP to the study of communication systems.
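As standard background (not specific to this text), a GP in standard form and the change of variables that makes it convex can be sketched as:

```latex
\[
\begin{aligned}
\text{minimize}   \quad & f_0(x) \\
\text{subject to} \quad & f_i(x) \le 1, \qquad i = 1, \dots, m, \\
                        & g_j(x) = 1, \qquad j = 1, \dots, p,
\end{aligned}
\]
```

where each $f_i(x) = \sum_k c_k\, x_1^{a_{1k}} \cdots x_n^{a_{nk}}$ with $c_k > 0$ is a posynomial and each $g_j$ is a monomial. The substitution $y_i = \log x_i$, together with taking logs of the objective and constraints, yields a convex problem, which is what makes the efficient centralized and distributed algorithms surveyed above possible.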

510 citations


Proceedings ArticleDOI
13 Mar 2005
TL;DR: This paper studies how the performance of cross-layer rate control can be impacted if the network can only use an imperfect scheduling component that is easier to implement, and designs a fully distributed cross-layered rate control and scheduling algorithm for a restrictive interference model.
Abstract: In this paper, we study cross-layer design for rate control in multihop wireless networks. In our previous work, we have developed an optimal cross-layered rate control scheme that jointly computes both the rate allocation and the stabilizing schedule that controls the resources at the underlying layers. However, the scheduling component in this optimal cross-layered rate control scheme has to solve a complex global optimization problem at each time, and hence is too computationally expensive for online implementation. In this paper, we study how the performance of cross-layer rate control can be impacted if the network can only use an imperfect (and potentially distributed) scheduling component that is easier to implement. We study both the case when the number of users in the system is fixed and the case with dynamic arrivals and departures of the users, and we establish desirable results on the performance bounds of cross-layered rate control with imperfect scheduling. Compared with a layered approach that does not design rate control and scheduling together, our cross-layered approach has provably better performance bounds, and substantially outperforms the layered approach. The insights drawn from our analyses also enable us to design a fully distributed cross-layered rate control and scheduling algorithm for a restrictive interference model.

454 citations


Journal ArticleDOI
01 Jan 2005
TL;DR: This paper proposes distributed energy-efficient deployment algorithms for mobile sensors and intelligent devices that form an Ambient Intelligent network; the algorithms employ a synergistic combination of cluster structuring and a peer-to-peer deployment scheme.
Abstract: Many visions of the future include people immersed in an environment surrounded by sensors and intelligent devices, which use smart infrastructures to improve the quality of life and safety in emergency situations. Ubiquitous communication enables these sensors or intelligent devices to communicate with each other and the user or a decision maker by means of ad hoc wireless networking. Organization and optimization of network resources are essential to provide ubiquitous communication for a longer duration in large-scale networks and are helpful to migrate intelligence from higher and remote levels to lower and local levels. In this paper, distributed energy-efficient deployment algorithms for mobile sensors and intelligent devices that form an Ambient Intelligent network are proposed. These algorithms employ a synergistic combination of cluster structuring and a peer-to-peer deployment scheme. An energy-efficient deployment algorithm based on Voronoi diagrams is also proposed here. Performance of our algorithms is evaluated in terms of coverage, uniformity, and time and distance traveled until the algorithm converges. Our algorithms are shown to exhibit excellent performance.

442 citations


Journal ArticleDOI
TL;DR: The main conclusion is that as the number of sensors in the network grows, in-network processing will always use less energy than a centralized algorithm, while maintaining a desired level of accuracy.
Abstract: Wireless sensor networks are capable of collecting an enormous amount of data. Often, the ultimate objective is to estimate a parameter or function from these data, and such estimators are typically the solution of an optimization problem (e.g., maximum likelihood, minimum mean-squared error, or maximum a posteriori). This paper investigates a general class of distributed optimization algorithms for "in-network" data processing, aimed at reducing the amount of energy and bandwidth used for communication. Our intuition tells us that processing the data in-network should, in general, require less energy than transmitting all of the data to a fusion center. In this paper, we address the questions: When, in fact, does in-network processing use less energy, and how much energy is saved? The proposed distributed algorithms are based on incremental optimization methods. A parameter estimate is circulated through the network, and along the way each node makes a small gradient descent-like adjustment to the estimate based only on its local data. Applying results from the theory of incremental subgradient optimization, we find that the distributed algorithms converge to an approximate solution for a broad class of problems. We extend these results to the case where the optimization variable is quantized before being transmitted to the next node and find that quantization does not affect the rate of convergence. Bounds on the number of incremental steps required for a certain level of accuracy provide insight into the tradeoff between estimation performance and communication overhead. Our main conclusion is that as the number of sensors in the network grows, in-network processing will always use less energy than a centralized algorithm, while maintaining a desired level of accuracy.
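The circulated-estimate idea described above can be sketched for the simplest case: each node holds one measurement and a local squared-error cost, and the estimate is passed around a cycle, with each node taking a small gradient step using only its own data. The data values, step size, and number of cycles are illustrative; with a constant step the estimate settles near (not exactly at) the centralized solution, as the incremental subgradient analysis predicts.

```python
# In-network incremental gradient descent: minimize sum_i (theta - y_i)^2
# by circulating theta through the nodes. The centralized minimizer is
# mean(y); each node only ever sees its own measurement y_i.
y = [1.0, 2.0, 3.0, 4.0, 10.0]   # one local measurement per node
theta = 0.0                       # estimate circulated through the network
step = 0.01                       # small constant step size (illustrative)

for _ in range(500):              # passes of the estimate around the cycle
    for y_i in y:                 # node i updates using only its local data
        grad_i = 2.0 * (theta - y_i)
        theta -= step * grad_i
# theta is now close to the centralized estimate mean(y) = 4.0; a constant
# step leaves a small residual bias, traded off against convergence speed
```

Quantizing theta before forwarding it, as in the paper, would correspond to rounding the transmitted value between the inner-loop steps.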

419 citations


Journal ArticleDOI
TL;DR: The proposed algorithms improve the scalability of the filter architectures affected by the resampling process; communication through the interconnection network is reduced and made deterministic, which results in a simpler network structure and an increased sampling frequency.
Abstract: In this paper, we propose novel resampling algorithms with architectures for efficient distributed implementation of particle filters. The proposed algorithms improve the scalability of the filter architectures affected by the resampling process. Problems in the particle filter implementation due to resampling are described, and appropriate modifications of the resampling algorithms are proposed so that distributed implementations are developed and studied. Distributed resampling algorithms with proportional allocation (RPA) and nonproportional allocation (RNA) of particles are considered. The components of the filter architectures are the processing elements (PEs), a central unit (CU), and an interconnection network. One of the main advantages of the new resampling algorithms is that communication through the interconnection network is reduced and made deterministic, which results in simpler network structure and increased sampling frequency. Particle filter performances are estimated for the bearings-only tracking applications. In the architectural part of the analysis, the area and speed of the particle filter implementation are estimated for a different number of particles and a different level of parallelism with field programmable gate array (FPGA) implementation. In this paper, only sampling importance resampling (SIR) particle filters are considered, but the analysis can be extended to any particle filters with resampling.
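For context, the local building block that the distributed RPA/RNA schemes parallelize across processing elements is ordinary resampling within a SIR filter. The sketch below is the textbook systematic resampling algorithm, not the paper's distributed variant; the example weights are illustrative.

```python
import random

def systematic_resample(weights):
    """Return particle indices resampled in proportion to the weights.

    Systematic resampling: a single random offset in [0, 1/n) generates n
    evenly spaced positions over the cumulative weight distribution, so a
    particle with normalized weight w receives floor(n*w) or ceil(n*w) copies.
    """
    n = len(weights)
    total = sum(weights)
    offset = random.random() / n
    positions = [offset + i / n for i in range(n)]
    indices, j = [], 0
    cum = weights[0] / total
    for p in positions:
        while p > cum and j < n - 1:
            j += 1
            cum += weights[j] / total
        indices.append(j)
    return indices

random.seed(0)  # fixed seed so the example is reproducible
idx = systematic_resample([0.1, 0.1, 0.1, 0.1, 0.1, 0.5])
# the heavy particle (index 5, normalized weight 0.5) gets 6 * 0.5 = 3 copies
```

The deterministic, evenly spaced positions are one reason this resampler maps well onto the fixed communication patterns the paper seeks for FPGA implementation.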

360 citations


Journal ArticleDOI
TL;DR: Simulation results show that the proposed algorithm is able to schedule transmissions such that the bandwidth allocated to different flows is proportional to their weights.
Abstract: Fairness is an important issue when accessing a shared wireless channel. With fair scheduling, it is possible to allocate bandwidth in proportion to weights of the packet flows sharing the channel. This paper presents a fully distributed algorithm for fair scheduling in a wireless LAN. The algorithm can be implemented without using a centralized coordinator to arbitrate medium access. The proposed protocol is derived from the Distributed Coordination Function in the IEEE 802.11 standard. Simulation results show that the proposed algorithm is able to schedule transmissions such that the bandwidth allocated to different flows is proportional to their weights. An attractive feature of the proposed approach is that it can be implemented with simple modifications to the IEEE 802.11 standard.
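The core idea, picking contention backoffs roughly inversely proportional to flow weights while losing flows keep counting down, can be illustrated with a toy simulation. This is a hypothetical sketch of the mechanism, not the paper's exact protocol; the weights, scaling constant, and jitter range are assumptions.

```python
import random

def simulate(weights, rounds=10000, scale=100.0):
    """Toy weighted-backoff channel contention.

    Each flow holds a backoff ~ scale/weight (with small random jitter to
    break ties); the smallest backoff wins the slot, all flows subtract the
    elapsed time (802.11-style countdown), and the winner redraws. Over many
    slots each flow's share of transmissions is proportional to its weight.
    """
    rng = random.Random(1)
    backoff = [scale / w * rng.uniform(0.9, 1.1) for w in weights]
    wins = [0] * len(weights)
    for _ in range(rounds):
        i = min(range(len(weights)), key=lambda k: backoff[k])
        elapsed = backoff[i]
        backoff = [b - elapsed for b in backoff]                  # countdown
        wins[i] += 1
        backoff[i] = scale / weights[i] * rng.uniform(0.9, 1.1)   # redraw
    return wins

wins = simulate([1.0, 2.0])
# the weight-2 flow transmits about twice as often as the weight-1 flow
```

Because losers retain their remaining backoff, a flow transmits on average once per scale/weight units of virtual time, which is what yields the weight-proportional bandwidth shares reported in the paper's simulations.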

Journal ArticleDOI
TL;DR: The results show that DSA is superior to DBA when controlled properly, having better or competitive solution quality and significantly lower communication cost than DBA, and is the algorithm of choice for distributed scheduling problems and other distributed problems of similar properties.

Proceedings ArticleDOI
13 Mar 2005
TL;DR: This work considers the joint optimal design of the physical, medium access control (MAC), and routing layers to maximize the lifetime of energy-constrained wireless sensor networks and proposes an iterative algorithm that alternates between adaptive link scheduling and computation of optimal link rates and transmission powers for a fixed link schedule.
Abstract: We consider the joint optimal design of physical, medium access control (MAC), and routing layers to maximize the lifetime of energy-constrained wireless sensor networks. The problem of computing a lifetime-optimal routing flow, link schedule, and link transmission powers is formulated as a non-linear optimization problem. We first restrict the link schedules to the class of interference-free time division multiple access (TDMA) schedules. In this special case we formulate the optimization problem as a mixed integer-convex program, which can be solved using standard techniques. For general non-orthogonal link schedules, we propose an iterative algorithm that alternates between adaptive link scheduling and computation of optimal link rates and transmission powers for a fixed link schedule. The performance of this algorithm is compared to other design approaches for several network topologies. The results illustrate the advantages of load balancing, multihop routing, frequency reuse, and interference mitigation in increasing the lifetime of energy-constrained networks. We also describe a partially distributed algorithm to compute optimal rates and transmission powers for a given link schedule.

Book ChapterDOI
30 Jun 2005
TL;DR: This paper describes Kairos’ programming model, and demonstrates its suitability, through actual implementation, for a variety of distributed programs—both infrastructure services and signal processing tasks—typically encountered in sensor network literature: routing tree construction, localization, and object tracking.
Abstract: The literature on programming sensor networks has focused so far on providing higher-level abstractions for expressing local node behavior. Kairos is a natural next step in sensor network programming in that it allows the programmer to express, in a centralized fashion, the desired global behavior of a distributed computation on the entire sensor network. Kairos’ compile-time and runtime subsystems expose a small set of programming primitives, while hiding from the programmer the details of distributed-code generation and instantiation, remote data access and management, and inter-node program flow coordination. In this paper, we describe Kairos’ programming model, and demonstrate its suitability, through actual implementation, for a variety of distributed programs—both infrastructure services and signal processing tasks—typically encountered in sensor network literature: routing tree construction, localization, and object tracking. Our experimental results suggest that Kairos does not adversely affect the performance or accuracy of distributed programs, while our implementation experiences suggest that it greatly raises the level of abstraction presented to the programmer.

Proceedings ArticleDOI
Qing Fang1, Jie Gao1, Leonidas J. Guibas1, V. de Silva1, Li Zhang2 
13 Mar 2005
TL;DR: This work develops a protocol which in a preprocessing phase discovers the global topology of the sensor field and partitions the nodes into routable tiles - regions where the node placement is sufficiently dense and regular that local greedy methods can work well.
Abstract: We present gradient landmark-based distributed routing (GLIDER), a novel naming/addressing scheme and associated routing algorithm, for a network of wireless communicating nodes. We assume that the nodes are fixed (though their geographic locations are not necessarily known), and that each node can communicate wirelessly with some of its geographic neighbors - a common scenario in sensor networks. We develop a protocol which in a preprocessing phase discovers the global topology of the sensor field and, as a byproduct, partitions the nodes into routable tiles - regions where the node placement is sufficiently dense and regular that local greedy methods can work well. Such global topology includes not just connectivity but also higher order topological features, such as the presence of holes. We address each node by the name of the tile containing it and a set of local coordinates derived from connectivity graph distances between the node and certain landmark nodes associated with its own and neighboring tiles. We use the tile adjacency graph for global route planning and the local coordinates for realizing actual inter- and intra-tile routes. We show that efficient load-balanced global routing can be implemented quite simply using such a scheme.

Journal ArticleDOI
TL;DR: This article shows that SeRLoc is robust against known attacks on WSNs such as the wormhole attack, the Sybil attack, and compromise of network entities, and analytically computes the probability of success for each attack.
Abstract: Many distributed monitoring applications of Wireless Sensor Networks (WSNs) require the location information of a sensor node. In this article, we address the problem of enabling nodes of Wireless Sensor Networks to determine their location in an untrusted environment, known as the secure localization problem. We propose a novel range-independent localization algorithm called SeRLoc that is well suited to a resource constrained environment such as a WSN. SeRLoc is a distributed algorithm based on a two-tier network architecture that allows sensors to passively determine their location without interacting with other sensors. We show that SeRLoc is robust against known attacks on WSNs such as the wormhole attack, the Sybil attack, and compromise of network entities, and analytically compute the probability of success for each attack. We also compare the performance of SeRLoc with state-of-the-art range-independent localization schemes and show that SeRLoc has better performance.

Proceedings ArticleDOI
06 Jun 2005
TL;DR: Analytical performance evaluation models and distributed algorithms for routing and scheduling which incorporate fairness, energy and dilation (path-length) requirements and provide a unified framework for utilizing the network close to its maximum throughput capacity are developed.
Abstract: This paper considers two inter-related questions: (i) Given a wireless ad-hoc network and a collection of source-destination pairs {(s_i, t_i)}, what is the maximum throughput capacity of the network, i.e. the rate at which data from the sources to their corresponding destinations can be transferred in the network? (ii) Can network protocols be designed that jointly route the packets and schedule transmissions at rates close to the maximum throughput capacity? Much of the earlier work focused on random instances and proved analytical lower and upper bounds on the maximum throughput capacity. Here, in contrast, we consider arbitrary wireless networks. Further, we study the algorithmic aspects of the above questions: the goal is to design provably good algorithms for arbitrary instances. We develop analytical performance evaluation models and distributed algorithms for routing and scheduling which incorporate fairness, energy and dilation (path-length) requirements and provide a unified framework for utilizing the network close to its maximum throughput capacity. Motivated by certain popular wireless protocols used in practice, we also explore "shortest-path like" path selection strategies which maximize the network throughput. The theoretical results naturally suggest an interesting class of congestion aware link metrics which can be directly plugged into several existing routing protocols such as AODV, DSR, etc. We complement the theoretical analysis with extensive simulations. The results indicate that routes obtained using our congestion aware link metrics consistently yield higher throughput than hop-count based shortest path metrics.

Proceedings ArticleDOI
13 Mar 2005
TL;DR: This work considers the problem of link scheduling in a sensor network employing a TDMA MAC protocol and proposes a distributed edge coloring algorithm that needs at most (Δ+1) colors, which is the first distributed algorithm that can edge color a graph with at most (Δ+1) colors.
Abstract: We consider the problem of link scheduling in a sensor network employing a TDMA MAC protocol. Our link scheduling algorithm involves two phases. In the first phase, we assign a color to each edge in the network such that no two edges incident on the same node are assigned the same color. We propose a distributed edge coloring algorithm that needs at most (Δ+1) colors, where Δ is the maximum degree of the graph. To the best of our knowledge, this is the first distributed algorithm that can edge color a graph with at most (Δ+1) colors. In the second phase, we map each color to a unique timeslot and attempt to identify a direction of transmission along each edge such that the hidden terminal and the exposed terminal problems are avoided. Next, considering topologies for which a feasible solution does not exist, we obtain a direction of transmission for each edge using additional timeslots, if necessary. Finally, we show that reversing the direction of transmission along every edge leads to another feasible direction of transmission. Using both the transmission assignments, we obtain a TDMA MAC schedule, which enables two-way communication between every pair of neighbors. For acyclic topologies, we show that at most 2(Δ+1) timeslots are required. Through simulations, we show that for sparse graphs with cycles the number of timeslots assigned is close to 2(Δ+1).
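The first phase's goal, a proper edge coloring whose color classes become collision-free timeslots, can be sketched with a centralized greedy pass. Note this is not the paper's distributed algorithm: plain greedy may need up to 2Δ-1 colors in the worst case, whereas the paper guarantees (Δ+1). The example graph is illustrative.

```python
def greedy_edge_coloring(edges):
    """Assign each edge the smallest color unused by incident edges.

    The result is a proper edge coloring: no two edges sharing an endpoint
    get the same color, so all edges of one color can transmit in the same
    TDMA timeslot without conflicts at any node.
    """
    color = {}
    for (u, v) in edges:
        used = {c for e, c in color.items() if u in e or v in e}
        c = 0
        while c in used:
            c += 1
        color[(u, v)] = c
    return color

# a small example graph: a 4-cycle with one chord; maximum degree is 3
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
coloring = greedy_edge_coloring(edges)
# each color then maps to a unique timeslot in the TDMA schedule
```

The paper's second phase would then orient each edge within its timeslot to avoid the hidden and exposed terminal problems.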

Journal ArticleDOI
TL;DR: A game theoretic framework for obtaining a user-optimal load balancing scheme in heterogeneous distributed systems, formulated as a noncooperative game among users, is presented, and a new distributed load balancing algorithm is derived.

Journal ArticleDOI
TL;DR: This paper addresses the problem of distributed routing of restoration paths and introduces the concept of "backtracking" to bound the restoration latency, using a link cost model that captures bandwidth sharing among links using various types of aggregate link-state information.
Abstract: The emerging multiprotocol label switching (MPLS) networks enable network service providers to route bandwidth guaranteed paths between customer sites. This basic label switched path (LSP) routing is often enhanced using restoration routing which sets up alternate LSPs to guarantee uninterrupted connectivity in case network links or nodes along primary path fail. We address the problem of distributed routing of restoration paths, which can be defined as follows: given a request for a bandwidth guaranteed LSP between two nodes, find a primary LSP, and a set of backup LSPs that protect the links along the primary LSP. A routing algorithm that computes these paths must optimize the restoration latency and the amount of bandwidth used. We introduce the concept of "backtracking" to bound the restoration latency. We consider three different cases characterized by a parameter called backtracking distance D: 1) no backtracking (D=0); 2) limited backtracking (D=k); and 3) unlimited backtracking (D=∞). We use a link cost model that captures bandwidth sharing among links using various types of aggregate link-state information. We first show that joint optimization of primary and backup paths is NP-hard in all cases. We then consider algorithms that compute primary and backup paths in two separate steps. Using link cost metrics that capture bandwidth sharing, we devise heuristics for each case. Our simulation study shows that these algorithms offer a way to tradeoff bandwidth to meet a range of restoration latency requirements.

Proceedings ArticleDOI
26 Jun 2005
TL;DR: Information forks, a comprehensive criterion that characterizes all architectures with an undecidable synthesis problem, is defined, and it can be determined in O(n²·v) time whether the synthesis problem is decidable.
Abstract: We provide a uniform solution to the problem of synthesizing a finite-state distributed system. An instance of the synthesis problem consists of a system architecture and a temporal specification. The architecture is given as a directed graph, where the nodes represent processes (including the environment as a special process) that communicate synchronously through shared variables attached to the edges. The same variable may occur on multiple outgoing edges of a single node, allowing for the broadcast of data. A solution to the synthesis problem is a collection of finite-state programs for the processes in the architecture, such that the joint behavior of the programs satisfies the specification in an unrestricted environment. We define information forks, a comprehensive criterion that characterizes all architectures with an undecidable synthesis problem. The criterion is effective: for a given architecture with n processes and v variables, it can be determined in O(n²·v) time whether the synthesis problem is decidable. We give a uniform synthesis algorithm for all decidable cases. Our algorithm works for all ω-regular tree specification languages, including the μ-calculus. The undecidability proof, on the other hand, uses only LTL or, alternatively, CTL as the specification language. Our results therefore hold for the entire range of specification languages from LTL/CTL to the μ-calculus.

01 Jan 2005
TL;DR: In this paper, the authors present a new framework for the crucial challenge of self-organization of a large sensor network, where the objective is to develop algorithms and protocols that allow selforganisation of the swarm into large-scale structures that reflect the structure of the street network, setting the stage for global routing, tracking and guiding algorithms.
Abstract: We present a new framework for the crucial challenge of self-organization of a large sensor network. The basic scenario can be described as follows: a large swarm of immobile sensor nodes has been scattered in a polygonal region, such as a street network. Nodes have no knowledge of size or shape of the environment or the position of other nodes. Moreover, they have no way of measuring coordinates, geometric distances to other nodes, or their direction. Their only way of interacting with other nodes is to send or to receive messages from any node that is within communication range. The objective is to develop algorithms and protocols that allow self-organization of the swarm into large-scale structures that reflect the structure of the street network, setting the stage for global routing, tracking and guiding algorithms. Our algorithms work in two stages: boundary recognition and topology extraction. All steps are strictly deterministic, yield fast distributed algorithms, and make no assumption on the distribution of nodes in the environment, other than sufficient density.

Book ChapterDOI
01 Nov 2005
TL;DR: An overview of PADRES is presented, highlighting some of its novel features, including the composite subscription language, the coordination patterns, the composite event detection algorithms, the rule-based router design, and a detailed case study illustrating the decentralized processing of workflows.
Abstract: Distributed publish/subscribe systems are naturally suited for processing events in distributed systems. However, support for expressing patterns about distributed events and algorithms for detecting correlations among these events are still largely unexplored. Inspired from the requirements of decentralized, event-driven workflow processing, we design a subscription language for expressing correlations among distributed events. We illustrate the potential of our approach with a workflow management case study. The language is validated and implemented in PADRES. In this paper we present an overview of PADRES, highlighting some of its novel features, including the composite subscription language, the coordination patterns, the composite event detection algorithms, the rule-based router design, and a detailed case study illustrating the decentralized processing of workflows. Our experimental evaluation shows that rule-based brokers are a viable and powerful alternative to existing, special-purpose, content-based routing algorithms. The experiments also show that the use of composite subscriptions in PADRES significantly reduces the load on the network. Complex workflows can be processed in a decentralized fashion with a gain of 40% in message dissemination cost. All processing is realized entirely in the publish/subscribe paradigm.

Proceedings ArticleDOI
13 Mar 2005
TL;DR: The paper develops two new algorithms to control and exploit the presence of multiple schedules to reduce energy consumption and latency in large sensor networks, including the global schedule algorithm (GSA) and the fast path algorithm (FPA).
Abstract: Recently, several MAC protocols, such as S-MAC and T-MAC, have exploited scheduled sleep/wakeup cycles to conserve energy in sensor networks. Until now, most protocols have assumed all nodes in the network were configured to follow the same schedule, or have assumed border nodes would follow multiple schedules, but those cases have not been evaluated. The paper develops two new algorithms to control and exploit the presence of multiple schedules to reduce energy consumption and latency. The first one is the global schedule algorithm (GSA). Through experiments, we demonstrate that, because of radio propagation vagaries, large sensor networks have very ragged, overlapping borders where many nodes listen to two or more schedules. GSA is a fully distributed algorithm that allows a large network to converge on a single global schedule to conserve energy. Secondly, we demonstrate that strict schedules incur a latency penalty in a multi-hop network when packets must wait for the next schedule for transmission. To reduce latency in multi-hop paths, we develop the fast path algorithm (FPA). FPA provides fast data forwarding paths by adding additional wake-up periods on the nodes along paths from sources to sinks. We evaluate both algorithms through experiments on Berkeley motes and demonstrate that the protocols accomplish their goals of reducing energy consumption and latency in large sensor networks.
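GSA's convergence can be illustrated with a deliberately simplified rule: assume each schedule carries a comparable ID and every node repeatedly adopts the smallest ID it hears from a neighbor. The real protocol's discovery and tie-breaking details differ; this only shows how a purely local adoption rule drives a connected network onto one global schedule.

```python
def global_schedule(adj, schedule):
    """Simplified convergence rule: every node repeatedly adopts the
    smallest schedule ID among itself and its neighbors; on a connected
    graph all nodes end up following one schedule."""
    changed = True
    while changed:
        changed = False
        for u in adj:
            best = min([schedule[u]] + [schedule[v] for v in adj[u]])
            if best != schedule[u]:
                schedule[u] = best
                changed = True
    return schedule

# Two clusters initially on different schedules (IDs 1 and 2),
# joined at a ragged border around node 2 that hears both.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(global_schedule(adj, {0: 1, 1: 1, 2: 2, 3: 2, 4: 2}))
# {0: 1, 1: 1, 2: 1, 3: 1, 4: 1}
```

Once every node follows a single schedule, border nodes no longer wake up for multiple listen periods, which is the energy saving the paper measures.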

Journal ArticleDOI
TL;DR: This paper generalizes Yates' result and establishes a new framework, which is applicable to systems supporting opportunistic communications and with heterogeneous service requirements, and shows that the proposed algorithm yields significant improvement in throughput when compared with the conventional target tracking approach.
Abstract: Most power control algorithms that aim at hitting a signal-to-interference ratio (SIR) target fall within Yates' framework. However, for delay-tolerant applications, it is unnecessary to maintain the SIR at a certain level all the time. To maximize throughput, a user should increase its power when the interference level is low, with the information transmission rate adjusted accordingly by adaptive modulation and coding techniques. This approach is called opportunistic communications. In this paper, we generalize Yates' result and establish a new framework that is applicable to systems supporting opportunistic communications and heterogeneous service requirements. Simulation results show that our proposed algorithm yields significant improvement in throughput when compared with the conventional target-tracking approach.
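For context, the conventional target-tracking baseline that the paper generalizes can be sketched as the classic Foschini-Miljanic update, a standard interference function within Yates' framework: each user scales its power so that, given the current interference, it would exactly hit its SIR target. The gains, noise levels, and targets below are made-up numbers for a two-user example.

```python
def target_tracking(G, noise, gamma, iters=200):
    """Fixed-point iteration p_i <- gamma_i * I_i(p) / G_ii, where
    I_i(p) = noise_i + sum of cross-gain interference. When the targets
    are feasible, this converges to the minimal power vector meeting
    every SIR target (Yates' standard interference function property)."""
    n = len(G)
    p = [1.0] * n
    for _ in range(iters):
        p = [gamma[i] * (noise[i] + sum(G[i][j] * p[j]
                                        for j in range(n) if j != i)) / G[i][i]
             for i in range(n)]
    return p

# Two symmetric users: direct gains 1.0, cross gains 0.1, SIR target 2.0.
G = [[1.0, 0.1], [0.1, 1.0]]
p = target_tracking(G, noise=[0.1, 0.1], gamma=[2.0, 2.0])
print([round(x, 4) for x in p])   # [0.25, 0.25]
```

An opportunistic scheme, by contrast, would let the update depend on the current interference level rather than pin the SIR to gamma at all times; the paper's contribution is showing which such updates still converge.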

Proceedings ArticleDOI
17 Oct 2005
TL;DR: A distributed algorithm for gathering global network topology information for MAC-layer configuration and efficient routing of packets for a cognitive radio (CR) network is proposed.
Abstract: In a cognitive radio (CR) network, MAC-layer configuration involves determining a common set of channels to facilitate communication among participating nodes. Further, the availability of multiple channels and frequent channel switches add to the complexity of route selection. Knowledge of the global network topology can be used to solve the above-described problems. In this paper, we propose a distributed algorithm for gathering global network topology information for a CR network. We outline approaches that utilize the gathered topology information for MAC-layer configuration and efficient routing of packets. In addition, situation awareness is achieved by sharing physical location information among the nodes in the network. The proposed algorithm determines the global network topology in O(N^2) timeslots, where N is the maximum number of nodes deployed. With 80 available channels for communication, the algorithm terminates within 0.8 seconds.
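A toy version of slotted topology gathering (not the paper's actual protocol) makes the O(N^2) bound concrete: in each timeslot exactly one node broadcasts every edge it knows and its neighbors merge that set, so N rounds of N slots each, O(N^2) slots in total, suffice for every node to hold the full topology.

```python
def gather_topology(adj):
    """Slotted gossip sketch: per slot, one node broadcasts its known
    edge set to its neighbors. N rounds of N slots guarantee the
    information crosses the network diameter."""
    know = {u: {(u, v) for v in adj[u]} for u in adj}
    for _ in range(len(adj)):        # N rounds ...
        for u in adj:                # ... of one broadcast slot per node
            for v in adj[u]:
                know[v] |= know[u]   # neighbors merge the broadcast
    return know

# Four nodes in a line: after the sweep, everyone knows every edge.
line = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
know = gather_topology(line)
full = {(a, b) for a in line for b in line[a]}
print(all(know[u] == full for u in line))   # True
```

With the full edge set in hand, each node can locally compute routes and a common channel assignment, which is how the paper uses the gathered topology.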

Proceedings ArticleDOI
25 Jul 2005
TL;DR: It is shown that LA-DCOP convincingly outperforms competing distributed task allocation algorithms while using orders of magnitude fewer messages, allowing a dramatic scale-up in extreme teams, up to a fully distributed, proxy-based team of 200 agents.
Abstract: Extreme teams, large-scale agent teams operating in dynamic environments, are on the horizon. Such environments are problematic for current task allocation algorithms due to the lack of locality in agent interactions. We propose a novel distributed task allocation algorithm for extreme teams, called LA-DCOP, that incorporates three key ideas. First, LA-DCOP's task allocation is based on a dynamically computed minimum capability threshold which uses approximate knowledge of overall task load. Second, LA-DCOP uses tokens to represent tasks and further minimize communication. Third, it creates potential tokens to deal with inter-task constraints of simultaneous execution. We show that LA-DCOP convincingly outperforms competing distributed task allocation algorithms while using orders of magnitude fewer messages, allowing a dramatic scale-up in extreme teams, up to a fully distributed, proxy-based team of 200 agents. Varying thresholds are seen as key to outperforming competing distributed algorithms in the domain of simulated disaster rescue.
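The token-and-threshold idea can be caricatured in a few lines: treat each task as a token that circulates among agents until one whose capability clears the threshold retains it. LA-DCOP's dynamic thresholds, potential tokens, and probabilistic passing are omitted here, and the agent names and capability values are invented.

```python
def allocate(agents, tasks, threshold):
    """Token-passing sketch: each task token circulates a ring of agents
    until one whose capability for that task clears the threshold keeps
    it. Assumes at least one sufficiently capable agent exists per task,
    otherwise the token would circulate forever."""
    names = list(agents)
    assigned = {}
    for task in tasks:
        i = 0
        while agents[names[i]].get(task, 0.0) < threshold:
            i = (i + 1) % len(names)   # pass the token onward
        assigned[task] = names[i]
    return assigned

# Hypothetical disaster-rescue capabilities per agent.
caps = {'a': {'rescue': 0.9}, 'b': {'extinguish': 0.8}}
print(allocate(caps, ['rescue', 'extinguish'], 0.5))
# {'rescue': 'a', 'extinguish': 'b'}
```

The point of the threshold is visible even here: with it set too low, the first agent reached would hoard tasks it performs poorly; raising it forces tokens onward to better-suited agents at the cost of more passing.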

Proceedings ArticleDOI
25 May 2005
TL;DR: This paper presents two centralized algorithms, BAMER and GAMER, that optimally solve the minimum-energy reliable communication problem in the presence of unreliable links, and a distributed algorithm, DAMER, that approximates the performance of the centralized algorithms and leads to significant performance improvement over existing single-path or multi-path based techniques.
Abstract: We address the problem of energy-efficient reliable wireless communication in the presence of unreliable or lossy wireless link layers in multi-hop wireless networks. Prior work [1] has provided an optimal energy-efficient solution to this problem for the case where link layers implement perfect reliability. However, the more common scenario of a link layer that is not perfectly reliable was left as an open problem. In this paper we first present two centralized algorithms, BAMER and GAMER, that optimally solve the minimum-energy reliable communication problem in the presence of unreliable links. Subsequently, we present a distributed algorithm, DAMER, that approximates the performance of the centralized algorithms and leads to significant performance improvement over existing single-path or multi-path based techniques.
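The cost model behind this problem is easy to make concrete: under hop-by-hop retransmission, a link with per-transmission energy e and loss probability p costs e/(1-p) in expectation, and a minimum-expected-energy route is then a shortest path under that metric. The sketch below runs Dijkstra on such an expected-cost graph; it illustrates the metric only, not the BAMER, GAMER, or DAMER algorithms themselves, and the link numbers are invented.

```python
import heapq

def min_expected_energy(links, src, dst):
    """Shortest path where each link (u, v, energy, loss_prob) is
    weighted by its expected retransmission cost energy / (1 - loss_prob)."""
    adj = {}
    for u, v, e, p in links:
        adj.setdefault(u, []).append((v, e / (1.0 - p)))
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float('inf')):
            continue                     # stale queue entry
        for v, w in adj.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float('inf')

# Relaying over two fairly reliable hops beats one very lossy direct link:
# s->a->t costs 1/0.5 + 1/1.0 = 3.0, while s->t costs 1/0.1 = 10.0.
links = [('s', 'a', 1.0, 0.5), ('a', 't', 1.0, 0.0), ('s', 't', 1.0, 0.9)]
print(min_expected_energy(links, 's', 't'))   # 3.0
```

End-to-end retransmission (rather than hop-by-hop) makes the expected cost of a path non-additive, which is what makes the general problem harder and motivates the paper's dedicated algorithms.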

Book ChapterDOI
30 Jun 2005
TL;DR: This paper develops a set of distributed algorithms for processing multiple queries for aggregate queries on sensor networks that incur minimum communication while observing the computational limitations of the sensor nodes.
Abstract: The widespread dissemination of small-scale sensor nodes has sparked interest in a powerful new database abstraction for sensor networks: Clients “program” the sensors through queries in a high-level declarative language permitting the system to perform the low-level optimizations necessary for energy-efficient query processing. In this paper we consider multi-query optimization for aggregate queries on sensor networks. We develop a set of distributed algorithms for processing multiple queries that incur minimum communication while observing the computational limitations of the sensor nodes. Our algorithms support incremental changes to the set of active queries and allow for local repairs to routes in response to node failures. A thorough experimental analysis shows that our approach results in significant energy savings, compared to previous work.
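The sharing idea behind multi-query aggregate optimization can be illustrated with a centralized sketch: partition nodes into fragments according to the set of queries covering them, compute each fragment's partial SUM once, and let every covering query reuse it. The function and query names below are invented, and a real deployment would compute the partials in-network along the routing tree.

```python
def shared_sums(readings, queries):
    """Multi-query sharing sketch: each fragment (a set of nodes covered
    by the same set of queries) contributes one partial SUM, which is
    then reused by every query covering that fragment."""
    fragments = {}
    for node, value in readings.items():
        key = frozenset(q for q, region in queries.items() if node in region)
        if key:                                   # skip uncovered nodes
            fragments[key] = fragments.get(key, 0) + value
    results = {q: 0 for q in queries}
    for key, partial in fragments.items():
        for q in key:
            results[q] += partial                 # reuse, don't recompute
    return results

# Node 2 lies in both query regions; its reading is aggregated once.
readings = {1: 10, 2: 20, 3: 30}
queries = {'Q1': {1, 2}, 'Q2': {2, 3}}
print(shared_sums(readings, queries))   # {'Q1': 30, 'Q2': 50}
```

Here three partials serve two queries instead of the four a naive per-query plan would send, and the saving grows with the overlap among query regions, which is where the reported energy savings come from.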

01 Jan 2005
TL;DR: This paper presents a very simple distributed algorithm for computing a small CDS, improving upon the previous best known approximation factor of 8 and implying improved approximation factors for many existing algorithms.
Abstract: Several routing schemes in ad hoc networks first establish a virtual backbone and then route messages via backbone nodes. One common way of constructing such a backbone is based on the construction of a minimum connected dominating set (CDS). In this paper we present a very simple distributed algorithm for computing a small CDS. Our algorithm has an approximation factor of at most 6.91, improving upon the previous best known approximation factor of 8 due to Wan et al. [INFOCOM'02]. The improvement relies on a refined analysis of the relationship between the size of a maximal independent set and a minimum CDS in a unit disk graph. This subresult also implies improved approximation factors for many existing algorithms.
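The MIS-to-CDS construction the analysis refers to can be sketched as a two-phase greedy: pick a maximal independent set as dominators, then add connector nodes between dominators that are two hops apart. This centralized toy assumes two-hop connectors suffice (true on the path example below, but a full construction must also join three-hop dominator pairs) and is not the paper's distributed algorithm.

```python
def connected_dominating_set(adj):
    """Phase 1: greedy maximal independent set (the dominators).
    Phase 2: promote a neighbor to connector whenever it links two
    distinct dominators, joining dominators that are two hops apart."""
    mis, covered = set(), set()
    for u in sorted(adj):              # fixed order stands in for node IDs
        if u not in covered:
            mis.add(u)
            covered.add(u)
            covered.update(adj[u])
    cds = set(mis)
    for u in mis:
        for w in adj[u]:               # w is a candidate connector
            if w in cds:
                continue
            if any(v in mis and v != u for v in adj[w]):
                cds.add(w)
    return cds

# Path 0-1-2-3-4-5: dominators {0, 2, 4}, connectors {1, 3}.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(connected_dominating_set(path))   # {0, 1, 2, 3, 4}
```

The paper's 6.91 bound comes precisely from tightening how large such an MIS can be relative to an optimal CDS in a unit disk graph, so any MIS-based construction of this shape inherits the improved factor.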