
Showing papers in "IEEE Journal on Selected Areas in Communications in 2005"


Journal ArticleDOI
Simon Haykin
TL;DR: Following the discussion of interference temperature as a new metric for the quantification and management of interference, the paper addresses three fundamental cognitive tasks: radio-scene analysis, channel-state estimation and predictive modeling, and the emergent behavior of cognitive radio.
Abstract: Cognitive radio is viewed as a novel approach for improving the utilization of a precious natural resource: the radio electromagnetic spectrum. The cognitive radio, built on a software-defined radio, is defined as an intelligent wireless communication system that is aware of its environment and uses the methodology of understanding-by-building to learn from the environment and adapt to statistical variations in the input stimuli, with two primary objectives in mind: 1) highly reliable communication whenever and wherever needed; 2) efficient utilization of the radio spectrum. Following the discussion of interference temperature as a new metric for the quantification and management of interference, the paper addresses three fundamental cognitive tasks. 1) Radio-scene analysis. 2) Channel-state estimation and predictive modeling. 3) Transmit-power control and dynamic spectrum management. This work also discusses the emergent behavior of cognitive radio.

12,172 citations


Journal ArticleDOI
TL;DR: A joint routing and power allocation policy is developed that stabilizes the system and provides bounded average delay guarantees whenever the input rates are within this capacity region, and is applied to an ad hoc wireless network where channel variations are due to user mobility.
Abstract: We consider dynamic routing and power allocation for a wireless network with time-varying channels. The network consists of power constrained nodes that transmit over wireless links with adaptive transmission rates. Packets randomly enter the system at each node and wait in output queues to be transmitted through the network to their destinations. We establish the capacity region of all rate matrices (λ_ij) that the system can stably support, where λ_ij represents the rate of traffic originating at node i and destined for node j. A joint routing and power allocation policy is developed that stabilizes the system and provides bounded average delay guarantees whenever the input rates are within this capacity region. Such performance holds for general arrival and channel state processes, even if these processes are unknown to the network controller. We then apply this control algorithm to an ad hoc wireless network, where channel variations are due to user mobility. Centralized and decentralized implementations are compared, and the stability region of the decentralized algorithm is shown to contain that of the mobile relay strategy developed by Grossglauser and Tse (2002).

751 citations


Journal ArticleDOI
Mung Chiang
TL;DR: This work presents a step toward a systematic understanding of "layering" as "optimization decomposition," where the overall communication network is modeled by a generalized network utility maximization problem, each layer corresponds to a decomposed subproblem, and the interfaces among layers are quantified as the optimization variables coordinating the subproblems.
Abstract: In a wireless network with multihop transmissions and interference-limited link rates, can we balance power control in the physical layer and congestion control in the transport layer to enhance the overall network performance while maintaining the architectural modularity between the layers? We answer this question by presenting a distributed power control algorithm that couples with existing transmission control protocols (TCPs) to increase end-to-end throughput and energy efficiency of the network. Under the rigorous framework of nonlinearly constrained utility maximization, we prove the convergence of this coupled algorithm to the global optimum of joint power control and congestion control, for both synchronized and asynchronous implementations. The rate of convergence is geometric and a desirable modularity between the transport and physical layers is maintained. In particular, when congestion control uses TCP Vegas, a simple utilization in the physical layer of the queueing delay information suffices to achieve the joint optimum. Analytic results and simulations illustrate other desirable properties of the proposed algorithm, including robustness to channel outage and to path loss estimation errors, and flexibility in trading off performance optimality for implementation simplicity. This work presents a step toward a systematic understanding of "layering" as "optimization decomposition," where the overall communication network is modeled by a generalized network utility maximization problem, each layer corresponds to a decomposed subproblem, and the interfaces among layers are quantified as the optimization variables coordinating the subproblems. In the case of the transport and physical layers, link congestion prices turn out to be the optimal "layering prices."

695 citations


Journal ArticleDOI
TL;DR: It is demonstrated that the information used for sensor localization is fundamentally local with regard to the network topology, and this observation is used to reformulate the problem within a graphical model framework; judicious message construction can result in better estimates.
Abstract: Automatic self-localization is a critical need for the effective use of ad hoc sensor networks in military or civilian applications. In general, self-localization involves the combination of absolute location information (e.g., from a global positioning system) with relative calibration information (e.g., distance measurements between sensors) over regions of the network. Furthermore, it is generally desirable to distribute the computational burden across the network and minimize the amount of intersensor communication. We demonstrate that the information used for sensor localization is fundamentally local with regard to the network topology and use this observation to reformulate the problem within a graphical model framework. We then present and demonstrate the utility of nonparametric belief propagation (NBP), a recent generalization of particle filtering, for both estimating sensor locations and representing location uncertainties. NBP has the advantage that it is easily implemented in a distributed fashion, admits a wide variety of statistical models, and can represent multimodal uncertainty. Using simulations of small to moderately sized sensor networks, we show that NBP may be made robust to outlier measurement errors by a simple model augmentation, and that judicious message construction can result in better estimates. Furthermore, we provide an analysis of NBP's communications requirements, showing that typically only a few messages per sensor are required, and that even low bit-rate approximations of these messages can be used with little or no performance impact.

586 citations


Journal ArticleDOI
TL;DR: A practical approach to networks comprising multiple relays operating over orthogonal time slots is proposed based on a generalization of hybrid-automatic repeat request (ARQ), indicating a significant improvement in the energy-latency tradeoff when compared with conventional multihop protocols implemented as a cascade of point-to-point links.
Abstract: Wireless networks contain an inherent distributed spatial diversity that can be exploited by the use of relaying. Relay networks take advantage of the broadcast-oriented nature of radio and require node-based, rather than link-based, protocols. Prior work on relay networks has studied performance limits either with unrealistic assumptions, complicated protocols, or only a single relay. In this paper, a practical approach to networks comprising multiple relays operating over orthogonal time slots is proposed based on a generalization of hybrid automatic repeat request (ARQ). In contrast with conventional hybrid-ARQ, retransmitted packets do not need to come from the original source radio but could instead be sent by relays that overhear the transmission. An information-theoretic framework is presented that establishes the performance limits of such systems in a block fading environment, and numerical results are presented for some representative topologies and protocols. The results indicate a significant improvement in the energy-latency tradeoff when compared with conventional multihop protocols implemented as a cascade of point-to-point links.

548 citations


Journal ArticleDOI
TL;DR: The results show that SEF can be implemented efficiently in sensor nodes as small as Mica2, can drop up to 70% of bogus reports injected by a compromised node within five hops, and can reduce energy consumption by 65% or more in many cases.
Abstract: In a large-scale sensor network, individual sensors are subject to security compromises. A compromised node can be used to inject bogus sensing reports. If undetected, these bogus reports would be forwarded to the data collection point (i.e., the sink). Such attacks by compromised nodes can result in not only false alarms but also the depletion of the finite amount of energy in a battery-powered network. In this paper, we present a statistical en-route filtering (SEF) mechanism to detect and drop false reports during the forwarding process. Assuming that the same event can be detected by multiple sensors, in SEF each of the detecting sensors generates a keyed message authentication code (MAC) and multiple MACs are attached to the event report. As the report is forwarded, each node along the way verifies the correctness of the MACs probabilistically and drops those with invalid MACs. SEF exploits the network scale to filter out false reports through collective decision-making by multiple detecting nodes and collective false detection by multiple forwarding nodes. We have evaluated SEF's feasibility and performance through analysis, simulation, and implementation. Our results show that SEF can be implemented efficiently in sensor nodes as small as Mica2. It can drop up to 70% of bogus reports injected by a compromised node within five hops, and reduce energy consumption by 65% or more in many cases.
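The en-route verification step lends itself to a short sketch. The key pool, report format, and verification rule below are hypothetical simplifications of SEF (the paper's key partitioning and probabilistic details are richer); the sketch only shows how attached keyed MACs let a forwarding node drop a forged report.

```python
import hmac
import hashlib

# Toy global key pool; in SEF each sensor holds only a few keys from a
# partitioned pool, so verification along the route is probabilistic.
KEY_POOL = {i: bytes([i]) * 16 for i in range(10)}

def make_report(event: bytes, detecting_keys):
    """Detecting sensors each attach a keyed MAC to the event report."""
    macs = {k: hmac.new(KEY_POOL[k], event, hashlib.sha256).digest()
            for k in detecting_keys}
    return {"event": event, "macs": macs}

def forward_check(report, node_keys) -> bool:
    """A forwarding node verifies any attached MAC whose key it happens
    to share; a report with an invalid MAC is dropped (returns False)."""
    for k, mac in report["macs"].items():
        if k in node_keys:
            expected = hmac.new(KEY_POOL[k], report["event"],
                                hashlib.sha256).digest()
            if not hmac.compare_digest(mac, expected):
                return False
    return True
```

A node that shares none of the report's keys simply forwards it, which is why filtering is collective: over several hops, some node along the way is likely to share a key and catch a forgery.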

535 citations


Journal ArticleDOI
TL;DR: This work proposes a QoS-aware routing protocol that incorporates an admission control scheme and a feedback scheme to meet the QoS requirements of real-time applications and implements these schemes by using two bandwidth estimation methods to find the residual bandwidth available at each node to support new streams.
Abstract: Routing protocols for mobile ad hoc networks (MANETs) have been explored extensively in recent years. Much of this work is targeted at finding a feasible route from a source to a destination without considering current network traffic or application requirements. Therefore, the network may easily become overloaded with too much traffic and the application has no way to improve its performance under a given network traffic condition. While this may be acceptable for data transfer, many real-time applications require quality-of-service (QoS) support from the network. We believe that such QoS support can be achieved by either finding a route to satisfy the application requirements or offering network feedback to the application when the requirements cannot be met. We propose a QoS-aware routing protocol that incorporates an admission control scheme and a feedback scheme to meet the QoS requirements of real-time applications. The novel part of this QoS-aware routing protocol is the use of the approximate bandwidth estimation to react to network traffic. Our approach implements these schemes by using two bandwidth estimation methods to find the residual bandwidth available at each node to support new streams. We simulate our QoS-aware routing protocol for nodes running the IEEE 802.11 medium access control. Results of our experiments show that the packet delivery ratio increases greatly, and packet delay and energy dissipation decrease significantly, while the overall end-to-end throughput is not impacted, compared with routing protocols that do not provide QoS support.

510 citations


Journal ArticleDOI
TL;DR: UDAAN is an interacting suite of modular network- and medium access control (MAC)-layer mechanisms for adaptive control of steered or switched antenna systems in an ad hoc network that can produce a very significant improvement in throughput over omnidirectional communications.
Abstract: Directional antennas offer tremendous potential for improving the performance of ad hoc networks. Harnessing this potential, however, requires new mechanisms at the medium access and network layers for intelligently and adaptively exploiting the antenna system. While recent years have seen a surge of research into such mechanisms, the problem of developing a complete ad hoc networking system, including the unique challenge of real-life prototype development and experimentation, has not been addressed. In this paper, we present UDAAN (utilizing directional antennas for ad hoc networking). UDAAN is an interacting suite of modular network- and medium access control (MAC)-layer mechanisms for adaptive control of steered or switched antenna systems in an ad hoc network. UDAAN consists of several new mechanisms: a directional power-controlled MAC, neighbor discovery with beamforming, link characterization for directional antennas, and proactive routing and forwarding, all working cohesively to provide the first complete systems solution. We also describe the development of a real-life ad hoc network testbed using UDAAN with switched directional antennas, and we discuss the lessons learned during field trials. High fidelity simulation results, using the same networking code as in the prototype, are also presented both for a specific scenario and using random mobility models. For the range of parameters studied, our results show that UDAAN can produce a very significant improvement in throughput over omnidirectional communications.

497 citations


Journal ArticleDOI
TL;DR: The main conclusion is that as the number of sensors in the network grows, in-network processing will always use less energy than a centralized algorithm, while maintaining a desired level of accuracy.
Abstract: Wireless sensor networks are capable of collecting an enormous amount of data. Often, the ultimate objective is to estimate a parameter or function from these data, and such estimators are typically the solution of an optimization problem (e.g., maximum likelihood, minimum mean-squared error, or maximum a posteriori). This paper investigates a general class of distributed optimization algorithms for "in-network" data processing, aimed at reducing the amount of energy and bandwidth used for communication. Our intuition tells us that processing the data in-network should, in general, require less energy than transmitting all of the data to a fusion center. In this paper, we address the questions: When, in fact, does in-network processing use less energy, and how much energy is saved? The proposed distributed algorithms are based on incremental optimization methods. A parameter estimate is circulated through the network, and along the way each node makes a small gradient descent-like adjustment to the estimate based only on its local data. Applying results from the theory of incremental subgradient optimization, we find that the distributed algorithms converge to an approximate solution for a broad class of problems. We extend these results to the case where the optimization variable is quantized before being transmitted to the next node and find that quantization does not affect the rate of convergence. Bounds on the number of incremental steps required for a certain level of accuracy provide insight into the tradeoff between estimation performance and communication overhead. Our main conclusion is that as the number of sensors in the network grows, in-network processing will always use less energy than a centralized algorithm, while maintaining a desired level of accuracy.
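The circulate-and-adjust idea can be sketched in a few lines. The least-squares mean-estimation task, step size, and cycle count below are illustrative assumptions rather than the paper's setup; the point is that each node updates the circulating estimate using only its own local measurement.

```python
# A minimal sketch of incremental in-network optimization: a parameter
# estimate is passed around the network, and each node nudges it with a
# small gradient step computed from its local data only.
def in_network_estimate(measurements, cycles=50, step=0.1):
    theta = 0.0
    for _ in range(cycles):            # repeated passes through the ring
        for y in measurements:         # one hop per sensor node
            grad = theta - y           # d/dtheta of 0.5 * (theta - y)^2
            theta -= step * grad       # small local adjustment
    return theta

data = [3.9, 4.1, 4.0, 3.8, 4.2]       # local sensor readings
est = in_network_estimate(data)        # converges near the sample mean
```

With a constant step size the estimate settles into a small neighborhood of the least-squares solution, mirroring the approximate-convergence guarantees the paper draws from incremental subgradient theory.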

419 citations


Journal ArticleDOI
TL;DR: The maximum rate at which functions of sensor measurements can be computed and communicated to the sink node is studied, focusing on symmetric functions, where only the data from a sensor is important, not its identity.
Abstract: In wireless sensor networks, one is not interested in downloading all the data from all the sensors. Rather, one is only interested in collecting at a sink node a relevant function of the sensor measurements. This paper studies the maximum rate at which functions of sensor measurements can be computed and communicated to the sink node. It focuses on symmetric functions, where only the data from a sensor is important, not its identity. The results include the following. The maximum rate of downloading the frequency histogram in a random planar multihop network with n nodes is O(1/log n). A subclass of functions, called type-sensitive functions, is maximally difficult to compute: in a collocated network, they can be computed at rate O(1/n), and in a random planar multihop network at rate O(1/log n); this class includes the mean, mode, median, etc. Another subclass, called type-threshold functions, is exponentially easier to compute: in a collocated network, they can be computed at rate O(1/log n), and in a random planar multihop network at rate O(1/log log n); this class includes the max, min, range, etc. The results also suggest an architecture for processing information across sensor networks.
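The distinction between the two subclasses can be illustrated with toy aggregation code. The state-size framing below is an informal analogy of my own (the paper's results concern communication rates, not memory): a type-threshold function such as the max merges readings with constant-size running state, while the frequency histogram's state grows with the number of distinct values.

```python
from collections import Counter
from functools import reduce

# Both functions are symmetric: they depend only on the multiset of
# readings, never on which sensor produced which value.
def merge_max(a, b):
    # type-threshold building block: running state is a single number
    return max(a, b)

def merge_hist(a, b):
    # type-sensitive building block: state grows with distinct values
    return a + b

readings = [4, 7, 7, 2, 9, 4]
maximum = reduce(merge_max, readings)
histogram = reduce(merge_hist, (Counter([r]) for r in readings))
```

Either merge can be applied along any aggregation tree toward the sink, which is the in-network processing architecture the rate results are about.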

406 citations


Journal ArticleDOI
TL;DR: This paper describes and evaluates ARAN and shows that it is able to effectively and efficiently discover secure routes within an ad hoc network, and details how ARAN can secure routing in environments where nodes are authorized to participate but untrusted to cooperate, as well as environments where participants do not need to be authorized to participate.
Abstract: Initial work in ad hoc routing has considered only the problem of providing efficient mechanisms for finding paths in very dynamic networks, without considering security. Because of this, there are a number of attacks that can be used to manipulate the routing in an ad hoc network. In this paper, we describe these threats, specifically showing their effects on ad hoc on-demand distance vector and dynamic source routing. Our protocol, named authenticated routing for ad hoc networks (ARAN), uses public-key cryptographic mechanisms to defeat all identified attacks. We detail how ARAN can secure routing in environments where nodes are authorized to participate but untrusted to cooperate, as well as environments where participants do not need to be authorized to participate. Through both simulation and experimentation with our publicly available implementation, we characterize and evaluate ARAN and show that it is able to effectively and efficiently discover secure routes within an ad hoc network.

Journal ArticleDOI
TL;DR: A bio-inspired network synchronization protocol for large scale sensor networks that emulates the simple strategies adopted by the biological agents, and obtains an optimum operating point at the local minimum of the energy consumption curve that is associated to the phase transition phenomenon.
Abstract: Synchronization is considered a particularly difficult task in wireless sensor networks due to their decentralized structure. Interestingly, synchrony has often been observed in networks of biological agents (e.g., synchronously flashing fireflies, or spiking neurons). In this paper, we propose a bio-inspired network synchronization protocol for large scale sensor networks that emulates the simple strategies adopted by the biological agents. The strategy synchronizes pulsing devices that are led to emit their pulses periodically and simultaneously. The convergence to synchrony of our strategy follows from the theory of Mirollo and Strogatz (1990), while the scalability is evident from the many examples existing in the natural world. When the nodes are within a single broadcast range, our key observation is that the dependence of the synchronization time on the number of nodes N is subject to a phase transition: for values of N beyond a specific threshold, the synchronization is nearly immediate, while for smaller N, the synchronization time decreases smoothly as N increases. Interestingly, a tradeoff is observed between the total energy consumption and the time necessary to reach synchrony. We obtain an optimum operating point at the local minimum of the energy consumption curve that is associated with the phase transition phenomenon mentioned before. The proposed synchronization protocol is directly applied to the cooperative reach-back communications problem. The main advantages of the proposed method are its scalability and low complexity.
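The firefly-style convergence is easy to reproduce in simulation. The multiplicative phase jump and absorption rule below are a common textbook simplification of the Mirollo-Strogatz model, with illustrative values for the coupling strength and node count; they are not the protocol's exact parameters.

```python
import random

def simulate(n=8, eps=0.3, rounds=2000, seed=1):
    """Pulse-coupled oscillators within one broadcast range: phases
    advance at unit rate, a node fires on reaching phase 1, and every
    pulse it emits nudges all other phases upward."""
    rng = random.Random(seed)
    phase = [rng.random() for _ in range(n)]
    for _ in range(rounds):                 # one firing event per round
        dt = 1.0 - max(phase)               # time until the next firing
        phase = [p + dt for p in phase]
        updated = []
        for p in phase:
            if p >= 1.0 - 1e-9:
                updated.append(0.0)         # fire and reset
            else:
                q = (1.0 + eps) * p         # excitatory jump from the pulse
                updated.append(0.0 if q >= 1.0 else q)  # absorbed nodes fire too
        phase = updated
    return phase

final = simulate()
```

Nodes absorbed into a firing group share its phase exactly from then on, so clusters only ever merge; after enough firing events all phases coincide, which is the synchrony the protocol exploits.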

Journal ArticleDOI
TL;DR: The paper details the design of a sequence of increasingly complex protocols that address the multidimensional ramifications of the power control problem; many of these protocols have been implemented and may be among the only implementations of power control in a real system.
Abstract: Transmit power control is a prototypical example of a cross-layer design problem. The transmit power level affects signal quality and, thus, impacts the physical layer; determines which neighboring nodes can hear the packet and, thus, impacts the network layer; and affects interference, which causes congestion and, thus, affects the transport layer. It is also key to several performance measures such as throughput, delay, and energy consumption. The challenge is to determine where in the architecture the power control problem is to be situated, to determine the appropriate power level by studying its impact on several performance issues, to provide a solution which deals properly with the multiple effects of transmit power control, and finally, to provide a software architecture for realizing the solution. We distill some basic principles on power control, which inform the subsequent design process. We then detail the design of a sequence of increasingly complex protocols, which address the multidimensional ramifications of the power control problem. Many of these protocols have been implemented, and may be among the only implementations of power control in a real system. It is hoped that the approach in this paper may also be of use in other topical problems in cross-layer design.

Journal ArticleDOI
TL;DR: Two new iterative decoding algorithms for channels affected by strong phase noise are presented, and the results show that they achieve near-coherent performance at very low complexity without requiring any change to the existing DVB-S2 standard.
Abstract: We present two new iterative decoding algorithms for channels affected by strong phase noise and compare them with the best existing algorithms proposed in the literature. The proposed algorithms are obtained as an application of the sum-product algorithm to the factor graph representing the joint a posteriori probability mass function of the information bits given the channel output. In order to overcome the problems due to the presence in the factor graph of continuous random variables, we apply the method of canonical distributions. Several choices of canonical distributions have been considered in the literature. Well-known approaches consist of discretizing continuous variables or treating them as jointly Gaussian, thus obtaining a Kalman estimator. Our first new approach, based on the Fourier series expansion of the phase probability density function, yields better complexity/performance tradeoff with respect to the usual discretized-phase method. Our second new approach, based on the Tikhonov canonical distribution, yields near-optimal performance at very low complexity and is shown to be much more robust than the Kalman method to the placement of pilot symbols in the coded frame. We present numerical results for binary LDPC codes and LDPC-coded modulation, with particular reference to some phase-noise models and coded-modulation formats standardized in the next-generation satellite Digital Video Broadcasting (DVB-S2). These results show that our algorithms achieve near-coherent performance at very low complexity without requiring any change to the existing DVB-S2 standard.

Journal ArticleDOI
TL;DR: All anomalies that could exist in a single- or multifirewall environment are identified and a set of techniques and algorithms to automatically discover policy anomalies in centralized and distributed firewalls are presented.
Abstract: Firewalls are core elements in network security. However, managing firewall rules, particularly in multifirewall enterprise networks, has become a complex and error-prone task. Firewall filtering rules have to be written, ordered, and distributed carefully in order to avoid firewall policy anomalies that might cause network vulnerability. Therefore, inserting or modifying filtering rules in any firewall requires thorough intrafirewall and interfirewall analysis to determine the proper rule placement and ordering in the firewalls. In this paper, we identify all anomalies that could exist in a single- or multifirewall environment. We also present a set of techniques and algorithms to automatically discover policy anomalies in centralized and distributed firewalls. These techniques are implemented in a software tool called the "Firewall Policy Advisor" that simplifies the management of filtering rules and maintains the security of next-generation firewalls.
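One intrafirewall anomaly of the kind the paper classifies, shadowing, can be sketched directly: an earlier rule matches every packet a later rule matches but takes the opposite action, so the later rule can never fire. The integer-range rule model below is a hypothetical simplification (real rules also match protocols and ports, and the paper distinguishes further anomaly types).

```python
from dataclasses import dataclass

@dataclass
class Rule:
    src: range     # matched source addresses (toy integer ranges)
    dst: range     # matched destination addresses
    action: str    # "accept" or "deny"

def covers(a: range, b: range) -> bool:
    """True if range a contains every value in range b."""
    return a.start <= b.start and b.stop <= a.stop

def shadowed(rules):
    """Return (i, j) pairs where rule j is shadowed by earlier rule i:
    rule i matches a superset of rule j's packets with the opposite action."""
    out = []
    for j, rj in enumerate(rules):
        for i in range(j):
            ri = rules[i]
            if covers(ri.src, rj.src) and covers(ri.dst, rj.dst) \
               and ri.action != rj.action:
                out.append((i, j))
                break
    return out
```

A tool in the spirit of the Firewall Policy Advisor would run pairwise checks like this one (plus the other anomaly relations) over each firewall's ordered rule list and across firewalls on a path.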

Journal ArticleDOI
TL;DR: The asymptotic scaling for the per user throughput in a large hybrid ad hoc network, i.e., a network where ad hoc nodes are randomly spatially distributed and choose to communicate with a random destination, is determined, and it is shown that beyond a threshold, further investments in infrastructure nodes will not lead to improvement in throughput.
Abstract: We determine the asymptotic scaling for the per user throughput in a large hybrid ad hoc network, i.e., a network with both ad hoc nodes, which communicate with each other via shared wireless links of capacity W bits/s, and infrastructure nodes, which in addition are interconnected with each other via high capacity links. Specifically, we consider a network model where ad hoc nodes are randomly spatially distributed and choose to communicate with a random destination. We identify three scaling regimes, depending on the growth of the number of infrastructure nodes m relative to the number of ad hoc nodes n, and show the asymptotic scaling for the per user throughput as n becomes large. We show that when m ≲ √(n/log n), the per user throughput is of order W/√(n log n) and could be realized by allowing only ad hoc communications, i.e., not deploying the infrastructure nodes at all. Whenever √(n/log n) ≲ m ≲ n/log n, the order of the per user throughput is Wm/n and, thus, the total additional bandwidth provided by the m infrastructure nodes is effectively shared among the ad hoc nodes. Finally, whenever m ≳ n/log n, the order of the per user throughput is only W/log n, suggesting that further investments in infrastructure nodes will not lead to improvement in throughput. The results are shown through an upper bound which is independent of the routing strategy, and by constructing scenarios showing that the upper bound is asymptotically tight.
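The three regimes can be written as a toy order-of-magnitude calculator. Constants are deliberately ignored (the result is asymptotic, so only the orders are meaningful), and the regime boundaries follow the thresholds quoted above.

```python
import math

def throughput_order(n, m, W=1.0):
    """Order of the per user throughput in a hybrid network with n ad hoc
    nodes and m infrastructure nodes (constants dropped)."""
    if m <= math.sqrt(n / math.log(n)):
        return W / math.sqrt(n * math.log(n))   # pure ad hoc regime
    elif m <= n / math.log(n):
        return W * m / n                        # infrastructure shared
    else:
        return W / math.log(n)                  # saturated regime
```

Note the middle regime grows linearly in m and matches the neighboring regimes at both thresholds, which is why adding infrastructure helps only until m reaches order n/log n.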

Journal ArticleDOI
TL;DR: An analytical framework based on the sampling expansion approach is developed that derives a closed-form expression for the bit-error probability (BEP) of TR signaling with AcR that can be used to exploit the multipath diversity inherent in wideband channels.
Abstract: Transmitted-reference (TR) signaling, in conjunction with an autocorrelation receiver (AcR), offers a low-complexity alternative to Rake reception. Due to its simplicity, there is renewed interest in TR signaling for ultrawide bandwidth (UWB) systems. To assess the performance of these systems, we develop an analytical framework based on the sampling expansion approach. In particular, we derive a closed-form expression for the bit-error probability (BEP) of TR signaling with AcR that can be used to exploit the multipath diversity inherent in wideband channels. We further extend our analysis to the BEP derivation of modified AcR with noise averaging. Our methodology does not require the Gaussian approximation and is applicable for any fading scenario, provided that the correlator output signal-to-noise ratio (SNR) can be characterized in terms of a characteristic function. We show that the validity of the conventional Gaussian approximation depends on the time-bandwidth product and the number of transmitted pulses per symbol. Our results enable the derivation of a computationally simple lower bound on the BEP of TR signaling with AcR. This lower bound allows us to obtain the SNR penalty associated with an AcR, as compared with All-Rake and Partial-Rake receivers.

Journal ArticleDOI
TL;DR: This work focuses upon the use of multiple-pulse-position-modulation as a power-efficient transmission format, with signal repetition across the laser array, for atmospheric, line-of-sight optical communication, and on capacity for coded transmission.
Abstract: We study the use of multiple laser transmitters combined with multiple photodetectors for atmospheric, line-of-sight optical communication, and focus upon the use of multiple-pulse-position-modulation as a power-efficient transmission format, with signal repetition across the laser array. Ideal (photon counting) photodetectors are assumed, with and without background radiation. The resulting multiple-input/multiple-output channel has the potential for combating fading effects on turbulent optical channels, for which both log-normal and Rayleigh-fading models are treated. Our focus is upon symbol error probability for uncoded transmission, and on capacity for coded transmission. Full spatial diversity is obtained naturally in this application.

Journal ArticleDOI
TL;DR: It is shown by simulation that the RDG outperforms previously proposed routing graphs in the context of the Greedy perimeter stateless routing (GPSR) protocol, and theoretical bounds on the quality of paths discovered using GPSR are investigated.
Abstract: We propose a new routing graph, the restricted Delaunay graph (RDG), for mobile ad hoc networks. Combined with a node clustering algorithm, the RDG can be used as an underlying graph for geographic routing protocols. This graph has the following attractive properties: 1) it is planar; 2) between any two graph nodes there exists a path whose length, whether measured in terms of topological or Euclidean distance, is only a constant times the minimum length possible; and 3) the graph can be maintained efficiently in a distributed manner when the nodes move around. Furthermore, each node only needs constant time to make routing decisions. We show by simulation that the RDG outperforms previously proposed routing graphs in the context of the Greedy perimeter stateless routing (GPSR) protocol. Finally, we investigate theoretical bounds on the quality of paths discovered using GPSR.

Journal ArticleDOI
TL;DR: For a given resource constraint, randomization over the choice of measurement and over the choice of when to transmit achieves the best performance (in a Bayesian, Neyman-Pearson, and Ali-Silvey sense).
Abstract: There is significant interest in battery-powered sensor networks to be used for detection in a wide variety of applications, from surveillance and security to health and environmental monitoring. Severe energy and bandwidth constraints at each sensor node demand system-level approaches to design that consider detection performance jointly with system-resource constraints. Our approach is to formulate detection problems with constraints on the expected cost arising from transmission (sensor nodes to a fusion node) and measurement (at each sensor node) to address some of the system-level costs in a sensor network. For a given resource constraint, we find that randomization over the choice of measurement and over the choice of when to transmit achieves the best performance (in a Bayesian, Neyman-Pearson, and Ali-Silvey sense). To facilitate design, we describe performance criteria in the send/no-send transmission scenario, where the joint optimization over the sensor nodes decouples into optimization at each sensor node.

Journal ArticleDOI
TL;DR: The design and implementation of pump slowly, fetch quickly (PSFQ), a simple, scalable, and robust transport protocol that is customizable to meet the needs of emerging reliable data applications in sensor networks, are presented.
Abstract: There is a growing need to support reliable data communications in sensor networks that are capable of supporting new applications, such as assured delivery of high-priority events to sinks, reliable control and management of sensor networks, and remotely programming/retasking sensor nodes over-the-air. We present the design, implementation, and evaluation of pump slowly, fetch quickly (PSFQ), a simple, scalable, and robust transport protocol that is customizable to meet the needs of emerging reliable data applications in sensor networks. PSFQ is simple because it makes minimal assumptions about the underlying routing infrastructure; it is scalable and energy-efficient because it requires minimal signaling, reducing the communication cost of data reliability; and, importantly, it is robust because it responds to a wide range of operational error conditions found in sensor networks, allowing the protocol to operate successfully even under highly error-prone conditions. The key idea that underpins the design of PSFQ is to distribute data from a source node by pacing data at a relatively slow speed ("pump slowly"), while allowing nodes that experience data loss to fetch (i.e., recover) any missing segments from their immediate neighbors aggressively ("fetch quickly"). We present the design and implementation of PSFQ, and evaluate the protocol using the ns-2 simulator and an experimental wireless sensor testbed based on Berkeley motes and the TinyOS operating system. We show that PSFQ can outperform existing related techniques and is highly responsive to the various error conditions experienced in sensor networks. The source code for PSFQ is freely available for experimentation.
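The pump/fetch interplay can be sketched compactly. The toy relay below (a simplification for illustration, not the authors' TinyOS implementation) caches segments as they arrive paced from upstream and, on detecting a sequence gap, immediately fetches the missing segments from a neighbor's cache instead of waiting for end-to-end recovery from the source:

```python
class PsfqRelay:
    """Toy model of a PSFQ relay node: segments arrive slowly paced
    ("pump slowly"); a gap in sequence numbers triggers immediate local
    recovery from a neighbor's cache ("fetch quickly")."""

    def __init__(self):
        self.cache = {}     # seq -> payload
        self.highest = -1   # highest sequence number seen so far

    def receive(self, seq, payload, neighbor_cache):
        # Gap detected: try to fetch the missing segments from a local
        # neighbor rather than waiting for the source to retransmit.
        for missing in range(self.highest + 1, seq):
            if missing not in self.cache and missing in neighbor_cache:
                self.cache[missing] = neighbor_cache[missing]
        self.cache[seq] = payload
        self.highest = max(self.highest, seq)
```

Because recovery is hop-by-hop and local, loss on one link does not multiply across the whole path, which is the source of PSFQ's robustness under high error rates.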

Journal ArticleDOI
TL;DR: The average response time is used as the performance metric for a performance management system for cluster-based web services that supports multiple classes of web services traffic and allocates server resources dynamically so to maximize the expected value of a given cluster utility function in the face of fluctuating loads.
Abstract: We present an architecture and prototype implementation of a performance management system for cluster-based web services. The system supports multiple classes of web services traffic and allocates server resources dynamically so as to maximize the expected value of a given cluster utility function in the face of fluctuating loads. The cluster utility is a function of the performance delivered to the various classes, and this leads to differentiated service. In this paper, we use the average response time as the performance metric. The management system is transparent: it requires no changes in the client code, the server code, or the network interface between them. The system performs three performance management tasks: resource allocation, load balancing, and server overload protection. We use two nested levels of management. The inner level centers on queuing and scheduling of request messages. The outer level is a feedback control loop that periodically adjusts the scheduling weights and server allocations of the inner level. The feedback controller is based on an approximate first-principles model of the system, with parameters derived from continuous monitoring. We focus on SOAP-based web services. We report experimental results that show the dynamic behavior of the system.

Journal ArticleDOI
TL;DR: This paper presents a novel power controlled MAC protocol called POWMAC, which enjoys the same single-channel, single-transceiver design of the IEEE 802.11 ad hoc MAC protocol but which achieves a significant throughput improvement over the802.11 protocol.
Abstract: Transmission power control (TPC) has great potential to increase the throughput of a mobile ad hoc network (MANET). Existing TPC schemes achieve this goal by using additional hardware (e.g., multiple transceivers), by compromising the collision avoidance property of the channel access scheme, by making impractical assumptions on the operation of the medium access control (MAC) protocol, or by overlooking the protection of link-layer acknowledgment packets. In this paper, we present a novel power controlled MAC protocol called POWMAC, which enjoys the same single-channel, single-transceiver design of the IEEE 802.11 ad hoc MAC protocol but which achieves a significant throughput improvement over the 802.11 protocol. Instead of alternating between the transmission of control (RTS/CTS) and data packets, as done in the 802.11 scheme, POWMAC uses an access window (AW) to allow for a series of request-to-send/clear-to-send (RTS/CTS) exchanges to take place before several concurrent data packet transmissions can commence. The length of the AW is dynamically adjusted based on localized information to allow for multiple interference-limited concurrent transmissions to take place in the vicinity of a receiving terminal. Collision avoidance information is inserted into the CTS packet and is used to bound the transmission power of potentially interfering terminals in the vicinity of the receiver, rather than silencing such terminals. Simulation results are used to demonstrate the significant throughput and energy gains that can be obtained under the POWMAC protocol.
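The power-bounding idea can be illustrated as follows: a receiver computes how much additional interference it can tolerate while still meeting its SINR target, advertises that budget in the CTS, and a would-be interferer caps its transmit power so its contribution stays within the budget. The linear-scale formulation below is an illustrative sketch, not the exact POWMAC computation:

```python
def tolerable_interference(signal_power, sinr_target, noise, current_interference):
    """Extra interference power the receiver can absorb while keeping
    SINR = signal / (noise + interference) >= sinr_target."""
    return max(0.0, signal_power / sinr_target - noise - current_interference)

def bounded_tx_power(extra_interference_budget, gain_to_receiver, desired_power):
    """A nearby terminal is allowed to transmit concurrently, but caps its
    power so that, after channel attenuation, its interference at the
    receiver stays within the advertised budget."""
    cap = extra_interference_budget / gain_to_receiver
    return min(desired_power, cap)
```

This is the key contrast with 802.11: instead of silencing all terminals that overhear the CTS, each one derives a power cap and may still transmit, enabling the interference-limited concurrency described above.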

Journal ArticleDOI
TL;DR: This paper addresses the problem of distributed routing of restoration paths and introduces the concept of "backtracking" to bound the restoration latency, using a link cost model that captures bandwidth sharing among links using various types of aggregate link-state information.
Abstract: The emerging multiprotocol label switching (MPLS) networks enable network service providers to route bandwidth guaranteed paths between customer sites. This basic label switched path (LSP) routing is often enhanced using restoration routing, which sets up alternate LSPs to guarantee uninterrupted connectivity in case network links or nodes along the primary path fail. We address the problem of distributed routing of restoration paths, which can be defined as follows: given a request for a bandwidth guaranteed LSP between two nodes, find a primary LSP, and a set of backup LSPs that protect the links along the primary LSP. A routing algorithm that computes these paths must optimize the restoration latency and the amount of bandwidth used. We introduce the concept of "backtracking" to bound the restoration latency. We consider three different cases characterized by a parameter called backtracking distance D: 1) no backtracking (D=0); 2) limited backtracking (D=k); and 3) unlimited backtracking (D=∞). We use a link cost model that captures bandwidth sharing among links using various types of aggregate link-state information. We first show that joint optimization of primary and backup paths is NP-hard in all cases. We then consider algorithms that compute primary and backup paths in two separate steps. Using link cost metrics that capture bandwidth sharing, we devise heuristics for each case. Our simulation study shows that these algorithms offer a way to tradeoff bandwidth to meet a range of restoration latency requirements.
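The bandwidth-sharing link cost can be illustrated with a simplified aggregate model: a backup link only needs enough reserved bandwidth to cover the worst-case primary load it protects against a given failure, so a new demand can often ride on existing reservations for free. The function below is a rough sketch of that idea under aggregate link-state information, not the paper's exact cost metric:

```python
def extra_backup_bandwidth(protected_load, already_reserved, demand):
    """Additional reservation needed on a backup link to protect a new
    primary demand of size `demand`.  `protected_load` is the aggregate
    primary bandwidth this link already protects against the same failure;
    because backup reservations are shared, only the shortfall beyond the
    existing reservation must be newly allocated."""
    return max(0.0, protected_load + demand - already_reserved)
```

A link whose existing reservation already covers the new worst-case load contributes zero incremental cost, which is exactly what steers the backup-path computation toward bandwidth-efficient, shareable links.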

Journal ArticleDOI
TL;DR: Through simulation, it is shown that the proposed mobility model has a significant impact on network performance, especially when compared with other mobility models, and that the performance of ad hoc network protocols is affected when different mobility scenarios are utilized.
Abstract: Simulation environments are an important tool for the evaluation of new concepts in networking. The study of mobile ad hoc networks depends on understanding protocols from simulations before these protocols are implemented in a real-world setting. To produce a real-world environment within which an ad hoc network can be formed among a set of nodes, there is a need for the development of realistic, generic, and comprehensive mobility and signal propagation models. In this paper, we propose the design of a mobility and signal propagation model that can be used in simulations to produce realistic network scenarios. Our model allows the placement of obstacles that restrict movement and signal propagation. Movement paths are constructed as Voronoi tessellations with the corner points of these obstacles as Voronoi sites. Our model also includes a signal propagation component that emulates properties of fading in the presence of obstacles. As a result, we have developed a complete environment in which network protocols can be studied on the basis of numerous performance metrics. Through simulation, we show that the proposed mobility model has a significant impact on network performance, especially when compared with other mobility models. In addition, we observe that the performance of ad hoc network protocols is affected when different mobility scenarios are utilized.

Journal ArticleDOI
TL;DR: A lower bound on the best achievable end-to-end distortion as a function of the number of sensors, their total transmit power, the numberof degrees of freedom of the underlying source process, and the spatio-temporal communication bandwidth is presented.
Abstract: For a class of sensor networks, the task is to monitor an underlying physical phenomenon over space and time through an imperfect observation process. The sensors can communicate back to a central data collector over a noisy channel. The key parameters in such a setting are the fidelity (or distortion) at which the underlying physical phenomenon can be estimated by the data collector, and the cost of operating the sensor network. This is a network joint source-channel communication problem, involving both compression and communication. It is well known that these two tasks may not be addressed separately without sacrificing optimality, and the optimal performance is generally unknown. This paper presents a lower bound on the best achievable end-to-end distortion as a function of the number of sensors, their total transmit power, the number of degrees of freedom of the underlying source process, and the spatio-temporal communication bandwidth. Particular coding schemes are studied, and it is shown that in some cases, the lower bound is tight in a scaling-law sense. By contrast, it is shown that the standard practice of separating source from channel coding may incur an exponential penalty in terms of communication resources, as a function of the number of sensors. Hence, such code designs effectively prevent scalability. Finally, it is outlined how the results extend to cases involving missing synchronization and channel fading.

Journal ArticleDOI
TL;DR: Passive autoconfiguration for mobile ad hoc networks (PACMAN), a novel approach for the efficient distributed address autoconfiguration of mobile ad hoc networks, is presented.
Abstract: Mobile ad hoc networks (MANETs) enable the communication between mobile nodes via multihop wireless routes without depending on a communication infrastructure. In contrast to infrastructure-based networks, MANETs support autonomous and spontaneous networking and, thus, should be capable of self-organization and self-configuration. This paper presents passive autoconfiguration for mobile ad hoc networks (PACMAN), a novel approach for the efficient distributed address autoconfiguration of mobile ad hoc networks. Special features of PACMAN are the support for frequent network partitioning and merging, and very low protocol overhead. This is accomplished by using cross-layer information derived from ongoing routing protocol traffic, e.g., address conflicts are detected in a passive manner based on anomalies in routing protocol traffic. Furthermore, PACMAN assigns Internet protocol (IP) addresses in a way that enables their compression, which can significantly reduce the routing protocol overhead. The performance of PACMAN is analyzed in detail based on various simulation results.
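Passive conflict detection can be illustrated with a simple heuristic of the kind such cross-layer approaches build on: routing updates for each address are tracked, and state that could not have originated from a single node, such as a sequence number jumping backwards, is flagged as a suspected duplicate. The rule and threshold below are a hypothetical illustration, not PACMAN's actual detectors:

```python
def observe(seq_table, addr, seq, backstep_threshold=2):
    """Record a passively overheard routing update for `addr` carrying
    sequence number `seq`.  Returns True when the update hints at an
    address conflict: a single originator's sequence numbers only move
    forward, so a large backward jump suggests two nodes are originating
    routing state under the same address."""
    last = seq_table.get(addr)
    seq_table[addr] = max(seq, last) if last is not None else seq
    return last is not None and seq < last - backstep_threshold
```

The appeal of this style of detection is that it costs no extra packets: the evidence is extracted from routing traffic that the network sends anyway.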

Journal ArticleDOI
TL;DR: This paper proposes a simple probabilistic local quantization rule that allows sensors in the network to operate identically and autonomously even when the network undergoes changes in size or topology.
Abstract: Consider a decentralized estimation problem whereby an ad hoc network of K distributed sensors wish to cooperate to estimate an unknown parameter over a bounded interval [-U,U]. Each sensor collects one noise-corrupted sample, performs a local data quantization according to a fixed (but possibly probabilistic) rule, and transmits the resulting discrete message to its neighbors. These discrete messages are then percolated in the network and used by each sensor to form its own minimum mean squared error (MMSE) estimate of the unknown parameter according to a fixed fusion rule. In this paper, we propose a simple probabilistic local quantization rule: each sensor quantizes its observation to the first most significant bit (MSB) with probability 1/2, the second MSB with probability 1/4, and so on. Assuming the noises are uncorrelated and identically distributed across sensors and are bounded to [-U,U], we show that this local quantization strategy together with a fusion rule can guarantee an MSE of 4U²/K, and that the average length of local messages is bounded (no more than 2.5 bits). Compared with the worst case Cramér-Rao lower bound of U²/K (even for the centralized counterpart), this is within a factor of at most 4 of the minimum achievable MSE. Moreover, the proposed scheme is isotropic and universal in the sense that the local quantization rules and the final fusion rules are independent of sensor index, noise distribution, network size, or topology. In fact, the proposed scheme allows sensors in the network to operate identically and autonomously even when the network undergoes changes in size or topology.
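The quantization rule lends itself to a compact simulation. Each sensor normalizes its observation to [0, 1], draws a bit position k with probability 2^-k, and reports that binary digit; since the expected value of the reported bit equals the normalized observation (y = Σ_k b_k 2^-k), averaging the one-bit messages across sensors recovers the parameter. The noise model and parameters below are illustrative assumptions used only to demonstrate this unbiasedness numerically:

```python
import random

def quantize(y, rng):
    """Report the k-th most significant bit of y in [0, 1), with k drawn
    geometrically: P(k) = 2^-k.  Then E[reported bit] = sum_k 2^-k b_k = y."""
    k = 1
    while rng.random() >= 0.5:   # geometric draw with success prob 1/2
        k += 1
    return int(y * 2**k) % 2     # k-th binary digit of y

def fuse(bits, U):
    """Average the one-bit messages and map back from [0, 1] to [-U, U]."""
    ybar = sum(bits) / len(bits)
    return 2.0 * U * ybar - U

def estimate(theta, U, K, noise_amp, seed=0):
    """Simulate K sensors observing theta with bounded noise (illustrative)."""
    rng = random.Random(seed)
    bits = []
    for _ in range(K):
        x = theta + rng.uniform(-noise_amp, noise_amp)  # bounded sensor noise
        y = (x + U) / (2.0 * U)                         # normalize to [0, 1]
        bits.append(quantize(y, rng))
    return fuse(bits, U)
```

With K = 20000 sensors the empirical error shrinks like 1/√K, consistent with the O(U²/K) MSE scaling stated in the abstract.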

Journal ArticleDOI
TL;DR: This work develops a cross-layer design for multiuser scheduling at the data link layer, with each user employing adaptive modulation and coding at the physical layer, which enables prescribed QoS guarantees and efficient bandwidth utilization simultaneously.
Abstract: Providing guaranteed quality-of-service (QoS) for multimedia applications over wireless fading channels is challenging. To this end, we develop a cross-layer design for multiuser scheduling at the data link layer, with each user employing adaptive modulation and coding (AMC) at the physical layer. By classifying users into QoS-guaranteed and best-effort users, the proposed scheduler enables prescribed QoS guarantees and efficient bandwidth utilization simultaneously. Furthermore, our cross-layer scheduler enjoys low-complexity implementation and analysis, provides service isolation and scalability, decouples delay from dynamically-scheduled bandwidth, and is backward compatible with existing separate-layer designs. Accuracy of the performance analysis is verified by simulations and pertinent robustness issues are briefly discussed. Numerical examples illustrate the steady-state statistical performance for a single and multiple users, as well as the asymptotic behavior for a large number of users.
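At the physical layer, AMC amounts to picking the highest-rate transmission mode whose SNR threshold the current channel satisfies. The threshold table below is hypothetical (real thresholds are derived from a target packet error rate for each constellation/code pair), but the selection logic is the standard one:

```python
# Hypothetical (SNR threshold in dB, spectral efficiency in bits/symbol)
# pairs, ordered by increasing threshold; rate 0 means no transmission.
AMC_MODES = [(-1e9, 0.0), (5.0, 1.0), (10.0, 2.0), (15.0, 3.0), (20.0, 4.0)]

def select_mode(snr_db):
    """Return the highest rate whose SNR threshold the channel meets."""
    rate = 0.0
    for threshold, r in AMC_MODES:
        if snr_db >= threshold:
            rate = r
    return rate
```

The cross-layer coupling then comes from the data link layer scheduler knowing, per user and per slot, which rate the physical layer will actually deliver.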

Journal ArticleDOI
TL;DR: A failure location algorithm that aims to locate single and multiple failures in transparent optical networks and can cope with ideal scenarios, as well as with nonideal scenarios having false and/or lost alarms.
Abstract: Fault and attack management has become a very important issue for network operators, who are interested in offering a secure and resilient network capable of preventing and localizing, as accurately as possible, any failure (fault or attack) that may occur. Hence, an efficient failure location method is needed. To locate failures in opaque optical networks, existing methods which allow monitoring of the optical signal at every regeneration site can be used. However, to the best of our knowledge, no method exists today that performs failure location for transparent optical networks. Such networks are more vulnerable to failures than opaque networks, since failures propagate without being isolated by optoelectronic conversions. In this paper, we present a failure location algorithm that aims to locate single and multiple failures in transparent optical networks. The failure location algorithm developed in this paper can cope with ideal scenarios (i.e., no false and/or lost alarms), as well as with nonideal scenarios having false and/or lost alarms.
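The core of such an algorithm can be sketched as matching the observed alarms against per-failure alarm signatures, with slack for lost and false alarms. The set-based signature representation below is an illustrative simplification, not the paper's actual algorithm:

```python
def locate_failures(observed, signatures, max_lost=0, max_false=0):
    """Return candidate failures whose expected alarm set matches the
    observed alarms, tolerating up to `max_lost` expected-but-missing
    alarms (lost) and up to `max_false` unexpected ones (false).
    `signatures` maps each potential failure to the set of alarms it
    should raise in the ideal (no-loss, no-false-alarm) scenario."""
    candidates = []
    for failure, expected in signatures.items():
        lost = len(expected - observed)    # expected alarms that never arrived
        false = len(observed - expected)   # alarms the failure cannot explain
        if lost <= max_lost and false <= max_false:
            candidates.append(failure)
    return candidates
```

Setting max_lost = max_false = 0 recovers the ideal scenario; raising the slack parameters handles the nonideal scenarios with false and/or lost alarms mentioned above, at the cost of a larger candidate set.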