Showing papers in "IEEE/ACM Transactions on Networking" in 2006


Journal ArticleDOI
TL;DR: FAST TCP is described, a new TCP congestion control algorithm for high-speed long-latency networks, from design to implementation, and its equilibrium and stability properties are characterized.
Abstract: We describe FAST TCP, a new TCP congestion control algorithm for high-speed long-latency networks, from design to implementation. We highlight the approach taken by FAST TCP to address the four difficulties which the current TCP implementation has at large windows. We describe the architecture and summarize some of the algorithms implemented in our prototype. We characterize its equilibrium and stability properties. We evaluate it experimentally in terms of throughput, fairness, stability, and responsiveness.

1,214 citations
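
As a rough illustration of FAST's delay-based approach, the sketch below implements a FAST-style periodic window update in Python. The update has the form reported for FAST, w ← min{2w, (1−γ)w + γ((baseRTT/RTT)·w + α)}, but the parameter values and the toy queueing model are illustrative, not taken from the paper's experiments.

```python
# Sketch of a FAST-style delay-based window update (simplified; alpha and gamma
# follow the paper's notation, but the values and queue model are illustrative).

def fast_window_update(w, base_rtt, rtt, alpha=100.0, gamma=0.5):
    """One periodic update of the congestion window.

    w        -- current window (packets)
    base_rtt -- minimum observed RTT (propagation delay estimate)
    rtt      -- current smoothed RTT (propagation + queueing delay)
    alpha    -- target number of packets buffered in the network
    gamma    -- smoothing factor in (0, 1]
    """
    target = (base_rtt / rtt) * w + alpha      # equilibrium keeps ~alpha packets queued
    return min(2 * w, (1 - gamma) * w + gamma * target)

# Example: as queueing delay grows, the window converges instead of growing without bound.
w, base_rtt = 10.0, 0.05
for _ in range(30):
    rtt = base_rtt + w * 0.0002                # toy model: queueing delay proportional to w
    w = fast_window_update(w, base_rtt, rtt)
print(round(w, 1))
```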


Journal ArticleDOI
TL;DR: Simulations show that adding gossiping to AODV results in significant performance improvement, even in networks as small as 150 nodes, and suggest that the improvement should be even more significant in larger networks.
Abstract: Many ad hoc routing protocols are based on some variant of flooding. Despite various optimizations of flooding, many routing messages are propagated unnecessarily. We propose a gossiping-based approach, where each node forwards a message with some probability, to reduce the overhead of the routing protocols. Gossiping exhibits bimodal behavior in sufficiently large networks: in some executions, the gossip dies out quickly and hardly any node gets the message; in the remaining executions, a substantial fraction of the nodes gets the message. The fraction of executions in which most nodes get the message depends on the gossiping probability and the topology of the network. In the networks we have considered, using gossiping probability between 0.6 and 0.8 suffices to ensure that almost every node gets the message in almost every execution. For large networks, this simple gossiping protocol uses up to 35% fewer messages than flooding, with improved performance. Gossiping can also be combined with various optimizations of flooding to yield further benefits. Simulations show that adding gossiping to AODV results in significant performance improvement, even in networks as small as 150 nodes. Our results suggest that the improvement should be even more significant in larger networks.

828 citations
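
To make the gossiping mechanism concrete, here is a minimal Python simulation of probabilistic forwarding on a random geometric graph. The node count, radio range, gossip probability, and seed are arbitrary choices for illustration; the paper's experiments use much larger networks and combine gossiping with AODV.

```python
# Minimal simulation of probabilistic gossip on a random geometric graph.
import random

def make_network(n=300, radius=0.1):
    pos = [(random.random(), random.random()) for _ in range(n)]
    nbrs = [[j for j in range(n) if j != i and
             (pos[i][0] - pos[j][0]) ** 2 + (pos[i][1] - pos[j][1]) ** 2 <= radius ** 2]
            for i in range(n)]
    return nbrs

def gossip(nbrs, p=0.7, src=0):
    """Each node that receives the message forwards it with probability p (source always forwards)."""
    got, frontier, sent = {src}, [src], 0
    while frontier:
        nxt = []
        for u in frontier:
            if u == src or random.random() <= p:
                sent += 1
                for v in nbrs[u]:
                    if v not in got:
                        got.add(v)
                        nxt.append(v)
        frontier = nxt
    return len(got), sent

random.seed(1)
nbrs = make_network()
reached, transmissions = gossip(nbrs, p=0.7)
print(f"reached {reached} nodes with {transmissions} transmissions")
```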


Journal ArticleDOI
TL;DR: This foundation work identifies three negative side-effects of reordering introduced by CMT that must be managed before efficient parallel transfer can be achieved and proposes three algorithms which augment and/or modify current SCTP to counter these side-effects.
Abstract: Concurrent multipath transfer (CMT) uses the Stream Control Transmission Protocol's (SCTP) multihoming feature to distribute data across multiple end-to-end paths in a multihomed SCTP association. We identify three negative side-effects of reordering introduced by CMT that must be managed before efficient parallel transfer can be achieved: (1) unnecessary fast retransmissions by a sender; (2) overly conservative congestion window (cwnd) growth at a sender; and (3) increased ack traffic due to fewer delayed acks by a receiver. We propose three algorithms which augment and/or modify current SCTP to counter these side-effects. Presented with several choices as to where a sender should direct retransmissions of lost data, we propose five retransmission policies for CMT. We demonstrate spurious retransmissions in CMT with all five policies and propose changes to CMT to allow the different policies. CMT is evaluated against AppStripe, which is an idealized application that stripes data over multiple paths using multiple SCTP associations. The different CMT retransmission policies are then evaluated with varied constrained receive buffer sizes. In this foundation work, we operate under the strong assumption that the bottleneck queues on the end-to-end paths used in CMT are independent.

615 citations


Journal ArticleDOI
TL;DR: It is shown that the theory of nonnegative matrices may be employed to model communication networks that employ drop-tail queueing and Additive-Increase Multiplicative-Decrease (AIMD) congestion control algorithms and these results can be used to develop tools for analyzing the behavior of AIMD communication networks.
Abstract: We study communication networks that employ drop-tail queueing and Additive-Increase Multiplicative-Decrease (AIMD) congestion control algorithms. It is shown that the theory of nonnegative matrices may be employed to model such networks. In particular, important network properties, such as: 1) fairness; 2) rate of convergence; and 3) throughput, can be characterized by certain nonnegative matrices. We demonstrate that these results can be used to develop tools for analyzing the behavior of AIMD communication networks. The accuracy of the models is demonstrated by several NS studies.

480 citations
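
A small sketch of the matrix viewpoint: under the usual synchronized-drop assumption, the flows' windows at successive congestion events evolve as w(k+1) = A w(k), where A is a column-stochastic nonnegative matrix built from the AIMD increase and decrease parameters. The parameter values below are illustrative only.

```python
# Synchronized AIMD congestion-event model as a nonnegative matrix map w(k+1) = A w(k).
import numpy as np

alpha = np.array([1.0, 1.0, 2.0])    # additive increase rates (illustrative)
beta  = np.array([0.5, 0.7, 0.5])    # multiplicative decrease factors (illustrative)

# At each synchronized congestion event, every flow backs off to beta_i * w_i and the
# freed capacity sum((1-beta_j) * w_j) is re-acquired in proportion to alpha_i.
A = np.diag(beta) + np.outer(alpha / alpha.sum(), 1.0 - beta)

w = np.array([30.0, 10.0, 5.0])      # windows at the first congestion event
for _ in range(50):
    w = A @ w
print(np.round(w, 2))                # converges to the Perron eigenvector of A
```

Because A is column-stochastic, the total window (the pipe size) is preserved from event to event, and the long-run shares are given by A's Perron eigenvector, which is how fairness and convergence rate become spectral questions.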


Journal ArticleDOI
TL;DR: This paper presents a class of algorithms that can be implemented at the sources to stably and optimally split the flow between each source-destination pair and shows that the connection-level throughput region of such multi-path routing/congestion control algorithms can be larger than that of a single-path congestion control scheme.
Abstract: We consider the problem of congestion-aware multi-path routing in the Internet. Currently, Internet routing protocols select only a single path between a source and a destination. However, due to many policy routing decisions, single-path routing may limit the achievable throughput. In this paper, we envision a scenario where multi-path routing is enabled in the Internet to take advantage of path diversity. Using minimal congestion feedback signals from the routers, we present a class of algorithms that can be implemented at the sources to stably and optimally split the flow between each source-destination pair. We then show that the connection-level throughput region of such multi-path routing/congestion control algorithms can be larger than that of a single-path congestion control scheme.

449 citations
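
The flavour of such source-based splitting can be illustrated with a toy iterative rule that shifts traffic from the most expensive path toward the cheapest one using congestion prices fed back from the network. The step size and the linear price functions below are invented for illustration and are not the specific provably stable algorithms of the paper.

```python
# Toy congestion-feedback multipath splitting at a single source.

def shift_split(split, prices, step=0.05):
    """Move 'step' of the traffic fraction from the costliest path toward the cheapest."""
    worst = max(range(len(prices)), key=lambda i: prices[i])
    best = min(range(len(prices)), key=lambda i: prices[i])
    moved = min(step, split[worst])
    split[worst] -= moved
    split[best] += moved
    return split

# Toy network: each path's price grows with the load placed on it plus a fixed base cost.
base = [1.0, 2.0, 4.0]
demand = 10.0
split = [1 / 3, 1 / 3, 1 / 3]
for _ in range(200):
    prices = [base[i] + demand * split[i] for i in range(3)]
    split = shift_split(split, prices)
print([round(s, 2) for s in split],
      [round(base[i] + demand * split[i], 2) for i in range(3)])  # prices roughly equalize
```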


Journal ArticleDOI
TL;DR: The notion of a connected sensor cover is developed and a centralized approximation algorithm that constructs a topology involving a near-optimal connected sensor cover is designed; it is proved that the size of the constructed topology is within an O(log n) factor of the optimal size.
Abstract: Spatial query execution is an essential functionality of a sensor network, where a query gathers sensor data within a specific geographic region. Redundancy within a sensor network can be exploited to reduce the communication cost incurred in execution of such queries. Any reduction in communication cost would result in an efficient use of the battery energy, which is very limited in sensors. One approach to reduce the communication cost of a query is to self-organize the network, in response to a query, into a topology that involves only a small subset of the sensors sufficient to process the query. The query is then executed using only the sensors in the constructed topology. The self-organization technique is beneficial for queries that run sufficiently long to amortize the communication cost incurred in self-organization. In this paper, we design and analyze algorithms for such self-organization of a sensor network to reduce energy consumption. In particular, we develop the notion of a connected sensor cover and design a centralized approximation algorithm that constructs a topology involving a near-optimal connected sensor cover. We prove that the size of the constructed topology is within an O(log n) factor of the optimal size, where n is the network size. We develop a distributed self-organization version of the approximation algorithm, and propose several optimizations to reduce the communication overhead of the algorithm. We also design another distributed algorithm based on node priorities that has an even lower communication overhead, but does not provide any guarantee on the size of the connected sensor cover constructed. Finally, we evaluate the distributed algorithms using simulations and show that our approaches result in significant communication cost reductions.

417 citations
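
A crude greedy sketch of the connected-sensor-cover idea: grow a connected set of sensors, at each step adding a communication-neighbor of the current set that covers the most still-uncovered query points. The paper's approximation algorithm selects entire candidate paths per step and carries the O(log n) guarantee; this simplified variant does not, and all geometry parameters below are made up.

```python
# Simplified greedy construction of a connected set of sensors covering a query region.
import random, math

random.seed(2)
SENSE_R, COMM_R = 0.15, 0.30
sensors = [(random.random(), random.random()) for _ in range(200)]
query_pts = [(x / 20, y / 20) for x in range(21) for y in range(21)]  # region to cover

def covers(s, p):    return math.dist(s, p) <= SENSE_R
def connected(a, b): return math.dist(a, b) <= COMM_R

uncovered = set(range(len(query_pts)))
chosen = [0]                                   # seed with an arbitrary sensor
uncovered -= {i for i in uncovered if covers(sensors[0], query_pts[i])}

while uncovered:
    # candidates: sensors communication-connected to the chosen set
    cand = [j for j in range(len(sensors)) if j not in chosen
            and any(connected(sensors[j], sensors[c]) for c in chosen)]
    best = max(cand, default=None,
               key=lambda j: sum(covers(sensors[j], query_pts[i]) for i in uncovered))
    if best is None or not any(covers(sensors[best], query_pts[i]) for i in uncovered):
        break                                  # remaining points cannot be covered this way
    chosen.append(best)
    uncovered -= {i for i in uncovered if covers(sensors[best], query_pts[i])}

print(f"{len(chosen)} sensors selected, {len(uncovered)} query points left uncovered")
```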


Journal ArticleDOI
TL;DR: This paper shows how the spatial correlation can be exploited on the Medium Access Control (MAC) layer of WSN; to the best of the authors' knowledge, it is the first effort to exploit spatial correlation in WSN on the MAC layer.
Abstract: Wireless Sensor Networks (WSN) are mainly characterized by dense deployment of sensor nodes which collectively transmit information about sensed events to the sink. Due to the spatial correlation between sensor nodes subject to observed events, it may not be necessary for every sensor node to transmit its data. This paper shows how the spatial correlation can be exploited on the Medium Access Control (MAC) layer. To the best of our knowledge, this is the first effort which exploits spatial correlation in WSN on the MAC layer. A theoretical framework is developed for transmission regulation of sensor nodes under a distortion constraint. It is shown that a sensor node can act as a representative node for several other sensor nodes observing the correlated data. Based on the theoretical framework, a distributed, spatial Correlation-based Collaborative Medium Access Control (CC-MAC) protocol is then designed which has two components: Event MAC (E-MAC) and Network MAC (N-MAC). E-MAC filters out the correlation in sensor records while N-MAC prioritizes the transmission of route-thru packets. Simulation results show that CC-MAC achieves high performance in terms of energy, packet drop rate, and latency.

405 citations


Journal ArticleDOI
TL;DR: This paper studies both the case when the number of users in the system is fixed and the case with dynamic arrivals and departures of the users, and establishes performance bounds of cross-layer congestion control with imperfect scheduling.
Abstract: In this paper, we study cross-layer design for congestion control in multihop wireless networks. In previous work, we have developed an optimal cross-layer congestion control scheme that jointly computes both the rate allocation and the stabilizing schedule that controls the resources at the underlying layers. However, the scheduling component in this optimal cross-layer congestion control scheme has to solve a complex global optimization problem at each time, and is hence too computationally expensive for online implementation. In this paper, we study how the performance of cross-layer congestion control will be impacted if the network can only use an imperfect (and potentially distributed) scheduling component that is easier to implement. We study both the case when the number of users in the system is fixed and the case with dynamic arrivals and departures of the users, and we establish performance bounds of cross-layer congestion control with imperfect scheduling. Compared with a layered approach that does not design congestion control and scheduling together, our cross-layer approach has provably better performance bounds, and substantially outperforms the layered approach. The insights drawn from our analyses also enable us to design a fully distributed cross-layer congestion control and scheduling algorithm for a restrictive interference model.

359 citations


Journal ArticleDOI
TL;DR: This work introduces the first algorithm the authors are aware of to employ Bloom filters for longest prefix matching (LPM), and shows that use of this algorithm for Internet Protocol (IP) routing lookups results in a search engine providing better performance and scalability than TCAM-based approaches.
Abstract: We introduce the first algorithm that we are aware of to employ Bloom filters for longest prefix matching (LPM). The algorithm performs parallel queries on Bloom filters, an efficient data structure for membership queries, in order to determine address prefix membership in sets of prefixes sorted by prefix length. We show that use of this algorithm for Internet Protocol (IP) routing lookups results in a search engine providing better performance and scalability than TCAM-based approaches. The key feature of our technique is that the performance, as determined by the number of dependent memory accesses per lookup, can be held constant for longer address lengths or additional unique address prefix lengths in the forwarding table given that memory resources scale linearly with the number of prefixes in the forwarding table. Our approach is equally attractive for Internet Protocol Version 6 (IPv6) which uses 128-bit destination addresses, four times longer than IPv4. We present a basic version of our approach along with optimizations leveraging previous advances in LPM algorithms. We also report results of performance simulations of our system using snapshots of IPv4 BGP tables and extend the results to IPv6. Using less than 2 Mb of embedded RAM and a commodity SRAM device, our technique achieves average performance of one hash probe per lookup and a worst case of two hash probes and one array access per lookup.

290 citations
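
The core idea translates into a short sketch: keep one Bloom filter per prefix length, query all of them for the address's prefixes, and verify the candidate lengths longest-first against an exact table (off-chip in the paper). The hash construction, filter sizes, and example routes below are illustrative only.

```python
# Bloom-filter-assisted longest prefix matching, minimal sketch.
import hashlib

class Bloom:
    def __init__(self, m=4096, k=3):
        self.m, self.k, self.bits = m, k, bytearray(m)
    def _hashes(self, key):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m
    def add(self, key):
        for h in self._hashes(key): self.bits[h] = 1
    def maybe(self, key):
        return all(self.bits[h] for h in self._hashes(key))

prefixes = {"10.0.0.0/8": "ifaceA", "10.1.0.0/16": "ifaceB", "10.1.2.0/24": "ifaceC"}
filters, exact = {}, {}
for cidr, nexthop in prefixes.items():
    net, plen = cidr.split("/")
    plen = int(plen)
    key = (int.from_bytes(bytes(map(int, net.split("."))), "big") >> (32 - plen), plen)
    filters.setdefault(plen, Bloom()).add(key)
    exact[key] = nexthop

def lookup(addr):
    a = int.from_bytes(bytes(map(int, addr.split("."))), "big")
    # query all per-length filters "in parallel", then verify the longest candidates first
    candidates = [plen for plen, bf in filters.items() if bf.maybe((a >> (32 - plen), plen))]
    for plen in sorted(candidates, reverse=True):
        hit = exact.get((a >> (32 - plen), plen))
        if hit:
            return hit
    return "default"

print(lookup("10.1.2.3"), lookup("10.9.9.9"), lookup("192.0.2.1"))
```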


Journal ArticleDOI
TL;DR: The Shared Wireless Infostation Model (SWIM) introduced here reduces the delay of packet delivery at the expense of increased storage at the network nodes, and improves the overall capacity-delay tradeoff while only moderately increasing the storage requirements.
Abstract: In this paper, we introduce the Shared Wireless Infostation Model (SWIM), which extends the Infostation model by incorporating information replication, storage, and diffusion into a mobile ad hoc network architecture with intermittent connectivity. SWIM is able to reduce the delay of packet delivery at the expense of increased storage at the network nodes. Furthermore, SWIM improves the overall capacity-delay tradeoff by only moderately increasing the storage requirements. This tradeoff is examined here in the context of a practical application: acquisition of telemetry data from radio-tagged whales. To reduce the storage requirements, without affecting the network delay, we propose and study a number of schemes for deletion of obsolete information from the network nodes. In particular, through the use of Markov chains, we compare the performance of five such storage deletion schemes, which, by increasing the computational complexity of the routing algorithm, mitigate the storage requirements. The results of our study will allow a network designer to implement such a system and to tune its performance in a delay-tolerant environment with intermittent connectivity, so as to ensure with some chosen level of confidence that the information is successfully carried through the mobile network and delivered within some time period.

282 citations


Journal ArticleDOI
TL;DR: Constraint-Based Geolocation (CBG), which infers the geographic location of Internet hosts using multilateration with distance constraints to establish a continuous space of answers instead of a discrete one, is proposed.
Abstract: Geolocation of Internet hosts enables a new class of location-aware applications. Previous measurement-based approaches use reference hosts, called landmarks, with a well-known geographic location to provide the location estimation of a target host. This leads to a discrete space of answers, limiting the number of possible location estimates to the number of adopted landmarks. In contrast, we propose Constraint-Based Geolocation (CBG), which infers the geographic location of Internet hosts using multilateration with distance constraints to establish a continuous space of answers instead of a discrete one. However, to use multilateration in the Internet, the geographic distances from the landmarks to the target host have to be estimated based on delay measurements between these hosts. This is a challenging problem because the relationship between network delay and geographic distance in the Internet is perturbed by many factors, including queueing delays and the absence of great-circle paths between hosts. CBG accurately transforms delay measurements to geographic distance constraints, and then uses multilateration to infer the geolocation of the target host. Our experimental results show that CBG outperforms previous geolocation techniques. Moreover, in contrast to previous approaches, our method is able to assign a confidence region to each given location estimate. This allows a location-aware application to assess whether the location estimate is sufficiently accurate for its needs.
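
A toy version of the constraint intersection: each landmark converts its delay measurement into an upper bound on distance, and the target must lie in the intersection of the resulting disks, which also serves as the confidence region. Planar coordinates and a single delay-to-distance factor are used here for simplicity; the paper calibrates per-landmark delay-to-distance conversions on the real Earth surface.

```python
# Sketch of constraint-based geolocation: intersect per-landmark distance bounds.
import math

# (x_km, y_km, measured_delay_ms, km_per_ms upper-bound conversion) -- illustrative values
landmarks = [(0, 0, 6.0, 100.0), (800, 0, 4.0, 100.0), (400, 600, 5.0, 100.0)]

def feasible(x, y):
    """Point satisfies every landmark's distance constraint."""
    return all(math.hypot(x - lx, y - ly) <= d_ms * kmpms
               for lx, ly, d_ms, kmpms in landmarks)

# Grid-scan the plane for the feasible (confidence) region and report its centroid.
pts = [(x, y) for x in range(0, 1001, 10) for y in range(0, 1001, 10) if feasible(x, y)]
if pts:
    cx = sum(p[0] for p in pts) / len(pts)
    cy = sum(p[1] for p in pts) / len(pts)
    print(f"region of {len(pts)} grid cells, centroid ~({cx:.0f}, {cy:.0f}) km")
else:
    print("constraints are inconsistent (no feasible region)")
```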

Journal ArticleDOI
TL;DR: This paper examines how transit and customer prices and quality of service are set in a network consisting of multiple ISPs, and shows how positive profit can be achieved using threat strategies with multiple qualities of service.
Abstract: In this paper, we examine how transit and customer prices and quality of service are set in a network consisting of multiple ISPs. Some ISPs may face an identical set of circumstances in terms of potential customer pool and running costs. We examine the existence of equilibrium strategies in this situation and show how positive profit can be achieved using threat strategies with multiple qualities of service. It is shown that if the number of ISPs competing for the same customers is large, price wars can result. ISPs that are not co-located may not directly compete for users, but are nevertheless involved in a non-cooperative game of setting access and transit prices for each other. They are linked economically through a sequence of providers forming a hierarchy, and we study their interaction by considering a multi-stage game. We also consider the economics of private exchange points and show that their viability depends on fundamental limits on the demand and cost.

Journal ArticleDOI
TL;DR: It is shown that maliciously chosen low-rate DoS traffic patterns that exploit TCP's retransmission timeout mechanism can throttle TCP flows to a small fraction of their ideal rate while eluding detection.
Abstract: Denial of Service attacks are presenting an increasing threat to the global inter-networking infrastructure. While TCP's congestion control algorithm is highly robust to diverse network conditions, its implicit assumption of end-system cooperation results in a well-known vulnerability to attack by high-rate non-responsive flows. In this paper, we investigate a class of low-rate denial of service attacks which, unlike high-rate attacks, are difficult for routers and counter-DoS mechanisms to detect. Using a combination of analytical modeling, simulations, and Internet experiments, we show that maliciously chosen low-rate DoS traffic patterns that exploit TCP's retransmission timeout mechanism can throttle TCP flows to a small fraction of their ideal rate while eluding detection. Moreover, as such attacks exploit protocol homogeneity, we study fundamental limits of the ability of a class of randomized timeout mechanisms to thwart such low-rate DoS attacks.
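
The attack's timing logic fits in a few lines: bursts just long enough to cause loss, repeated with a period near the victim's minimum RTO, keep the victim's TCP flows in timeout almost continuously while the attacker's average rate stays low. The throughput estimate below is a deliberately crude back-of-the-envelope model with invented numbers, not the paper's analytical result.

```python
# Toy model of the low-rate ("shrew") attack pattern and its effect on a TCP victim.

MIN_RTO = 1.0      # victim's minimum retransmission timeout (s), illustrative
BURST = 0.15       # burst duration assumed sufficient to fill the bottleneck queue (s)

def normalized_throughput(period):
    """Crude estimate: each burst triggers a timeout of MIN_RTO; the victim can only
    send in whatever remains of the attack period after the timeout expires."""
    if period <= BURST:
        return 0.0
    return max(0.0, period - MIN_RTO) / period

for T in (0.5, 1.0, 1.5, 2.0, 5.0):
    print(f"attack period {T:>4} s  ->  victim throughput ~{normalized_throughput(T):.0%}")
# With period == MIN_RTO the victim is starved, yet the attacker's duty cycle is only BURST/T.
```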

Journal ArticleDOI
TL;DR: The aim of this paper is to provide service differentiation in a P2P network based on the amount of services each node has provided to the network community, and to present a generalized incentive mechanism for nodes having heterogeneous utility functions.
Abstract: Conventional peer-to-peer (P2P) networks do not provide service differentiation and incentive for users. Therefore, users can easily obtain information without themselves contributing any information or service to a P2P community. This leads to the well known free-riding problem. Consequently, most of the information requests are directed towards a small number of P2P nodes which are willing to share information or provide service, causing the "tragedy of the commons." The aim of this paper is to provide service differentiation in a P2P network based on the amount of services each node has provided to the network community. Since the differentiation is based on nodes' prior contributions, the nodes are encouraged to share information/services with each other. We first introduce a resource distribution mechanism for all the information sharing nodes. The mechanism is distributed in nature, has linear time complexity, and guarantees Pareto-optimal resource allocation. Second, we model the whole resource request/distribution process as a competition game between the competing nodes. We show that this game has a Nash equilibrium. To realize the game, we propose a protocol in which the competing nodes can interact with the information providing node to reach Nash equilibrium efficiently and dynamically. We also present a generalized incentive mechanism for nodes having heterogeneous utility functions. Convergence analysis of the competition game is carried out. Examples are used to illustrate that the incentive protocol provides service differentiation and can induce productive resource sharing by rational network nodes. Lastly, the incentive protocol is adaptive to node arrival and departure events, and to different forms of network congestion.
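
The simplest way to see the effect of contribution-based differentiation is a weighted split of the serving peer's upload capacity, sketched below. This plain proportional rule is only a stand-in: the paper's actual mechanism is the Pareto-optimal distributed allocation and the associated competition game, which this sketch does not implement.

```python
# Contribution-weighted sharing of a serving peer's upload capacity (illustrative only).

def allocate(capacity_kbps, contributions):
    """contributions: dict peer -> amount of service previously provided (e.g., MB uploaded)."""
    total = sum(contributions.values())
    if total == 0:                       # nobody has contributed: fall back to an equal split
        return {p: capacity_kbps / len(contributions) for p in contributions}
    return {p: capacity_kbps * c / total for p, c in contributions.items()}

print(allocate(1000, {"free_rider": 0, "light_sharer": 50, "heavy_sharer": 450}))
# -> free riders get (almost) nothing, heavy contributors most of the bandwidth
```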

Journal ArticleDOI
TL;DR: It is proved that even in this simple case the optimization problem is NP-hard; efficient, scalable, and distributed heuristic approximation algorithms are proposed for solving it, and numerical simulations show that the total transmission cost can be significantly improved over direct transmission or the shortest path tree.
Abstract: We consider the problem of correlated data gathering by a network with a sink node and a tree-based communication structure, where the goal is to minimize the total transmission cost of transporting the information collected by the nodes, to the sink node. For source coding of correlated data, we consider a joint entropy-based coding model with explicit communication where coding is simple and the transmission structure optimization is difficult. We first formulate the optimization problem definition in the general case and then we study further a network setting where the entropy conditioning at nodes does not depend on the amount of side information, but only on its availability. We prove that even in this simple case, the optimization problem is NP-hard. We propose some efficient, scalable, and distributed heuristic approximation algorithms for solving this problem and show by numerical simulations that the total transmission cost can be significantly improved over direct transmission or the shortest path tree. We also present an approximation algorithm that provides a tree transmission structure with total cost within a constant factor from the optimal.

Journal ArticleDOI
Nicolas Hohn, Darryl Veitch
TL;DR: This paper studies both theoretically and practically what information about the original traffic can be inferred when sampling, or "thinning", is performed at the packet level, and introduces an alternative flow-based thinning, where practical inversion is possible even at arbitrarily low sampling rate.
Abstract: Routers have the ability to output statistics about packets and flows of packets that traverse them. Since, however, the generation of detailed traffic statistics does not scale well with link speed, increasingly routers and measurement boxes implement sampling strategies at the packet level. In this paper, we study both theoretically and practically what information about the original traffic can be inferred when sampling, or "thinning", is performed at the packet level. While basic packet level characteristics such as first order statistics can be fairly directly recovered, other aspects require more attention. We focus mainly on the spectral density, a second-order statistic, and the distribution of the number of packets per flow, showing how both can be exactly recovered, in theory. We then show in detail why in practice this cannot be done using the traditional packet based sampling, even for high sampling rate. We introduce an alternative flow-based thinning, where practical inversion is possible even at arbitrarily low sampling rate. We also investigate the theory and practice of fitting the parameters of a Poisson cluster process, modeling the full packet traffic, from sampled data.

Journal ArticleDOI
TL;DR: A family of bitmap algorithms that address the problem of counting the number of distinct header patterns (flows) seen on a high-speed link and can be used to detect DoS attacks and port scans and to solve measurement problems.
Abstract: This paper presents a family of bitmap algorithms that address the problem of counting the number of distinct header patterns (flows) seen on a high-speed link. Such counting can be used to detect DoS attacks and port scans and to solve measurement problems. Counting is especially hard when processing must be done within a packet arrival time (8 ns at OC-768 speeds) and, hence, may perform only a small number of accesses to limited, fast memory. A naive solution that maintains a hash table requires several megabytes because the number of flows can be above a million. By contrast, our new probabilistic algorithms use little memory and are fast. The reduction in memory is particularly important for applications that run multiple concurrent counting instances. For example, we replaced the port-scan detection component of the popular intrusion detection system Snort with one of our new algorithms. This reduced memory usage on a ten minute trace from 50 to 5.6 MB while maintaining a 99.77% probability of alarming on a scan within 6 s of when the large-memory algorithm would. The best known prior algorithm (probabilistic counting) takes four times more memory on port scan detection and eight times more on a measurement application. This is possible because our algorithms can be customized to take advantage of special features such as a large number of instances that have very small counts or prior knowledge of the likely range of the count.
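
For intuition, here is the simplest member of the bitmap family, a direct bitmap with the linear-counting estimator: hash each flow identifier to one bit and estimate the number of distinct flows from the fraction of bits still zero. The paper's virtual/multiresolution variants build on this to handle large counts in small memory; the sizes and CRC hash below are illustrative.

```python
# Direct bitmap with the linear-counting estimator for distinct-flow counting.
import math, random, zlib

B = 4096                                  # bitmap size in bits (fits easily in fast memory)
bitmap = bytearray(B)

def record(flow_id: str):
    bitmap[zlib.crc32(flow_id.encode()) % B] = 1

def estimate_flows():
    zeros = B - sum(bitmap)
    if zeros == 0:
        return float("inf")               # bitmap saturated; a larger/virtual bitmap is needed
    return B * math.log(B / zeros)        # linear-counting estimate

random.seed(3)
true_flows = {f"10.0.{random.randrange(256)}.{random.randrange(256)}:{random.randrange(65536)}"
              for _ in range(3000)}
for flow in true_flows:
    for _ in range(random.randint(1, 5)): # duplicate packets of the same flow do not matter
        record(flow)
print(len(true_flows), "actual vs", round(estimate_flows()), "estimated")
```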

Journal ArticleDOI
TL;DR: This paper uses a game-theoretic approach to investigate the performance of selfish routing in Internet-like environments, based on realistic topologies and traffic demands in simulations, and shows that in contrast to theoretical worst cases, selfish routing achieves close to optimal average latency in such environments.
Abstract: A recent trend in routing research is to avoid inefficiencies in network-level routing by allowing hosts to either choose routes themselves (e.g., source routing) or use overlay routing networks (e.g., Detour or RON). Such approaches result in selfish routing, because routing decisions are no longer based on system-wide criteria but are instead designed to optimize host-based or overlay-based metrics. A series of theoretical results showing that selfish routing can result in suboptimal system behavior have cast doubts on this approach. In this paper, we use a game-theoretic approach to investigate the performance of selfish routing in Internet-like environments based on realistic topologies and traffic demands in our simulations. We show that in contrast to theoretical worst cases, selfish routing achieves close to optimal average latency in such environments. However, such performance benefits come at the expense of significantly increased congestion on certain links. Moreover, the adaptive nature of selfish overlays can significantly reduce the effectiveness of traffic engineering by making network traffic less predictable.

Journal ArticleDOI
TL;DR: A thorough characterization of what is believed to be the first significant live Internet streaming media workload in the scientific literature is presented, and a model for live media workload generation is presented that incorporates many of the findings, and which is implemented in GISMO.
Abstract: We present a thorough characterization of what we believe to be the first significant live Internet streaming media workload in the scientific literature. Our characterization of over 3.5 million requests spanning a 28-day period is done at three increasingly granular levels, corresponding to clients, sessions, and transfers. Our findings support two important conclusions. First, we show that the nature of interactions between users and objects is fundamentally different for live versus stored objects. Access to stored objects is user driven, whereas access to live objects is object driven. This reversal of active/passive roles of users and objects leads to interesting dualities. For instance, our analysis underscores a Zipf-like profile for user interest in a given object, which is in contrast to the classic Zipf-like popularity of objects for a given user. Also, our analysis reveals that transfer lengths are highly variable and that this variability is due to client stickiness to a particular live object, as opposed to structural (size) properties of objects. Second, by contrasting two live streaming workloads from two radically different applications, we conjecture that some characteristics of live media access workloads are likely to be highly dependent on the nature of the live content being accessed. This dependence is clear from the strong temporal correlation observed in the traces, which we attribute to the impact of synchronous access to live content. Based on our analysis, we present a model for live media workload generation that incorporates many of our findings, and which we implement in GISMO.

Journal ArticleDOI
TL;DR: It is shown that random walk on torus and billiards belong to the random trip class of models, and it is established that the time-limit distribution of node location for these two models is uniform, for any initial distribution, even in cases where the speed vector does not have circular symmetry.
Abstract: We define "random trip", a generic mobility model for random, independent node motions, which contains as special cases: the random waypoint on convex or nonconvex domains, random walk on torus, billiards, city section, space graph, intercity and other models. We show that, for this model, a necessary and sufficient condition for a time-stationary regime to exist is that the mean trip duration (sampled at trip endpoints) is finite. When this holds, we show that the distribution of node mobility state converges to the time-stationary distribution, starting from the origin of an arbitrary trip. For the special case of random waypoint, we provide for the first time a proof and a sufficient and necessary condition of the existence of a stationary regime, thus closing a long standing issue. We show that random walk on torus and billiards belong to the random trip class of models, and establish that the time-limit distribution of node location for these two models is uniform, for any initial distribution, even in cases where the speed vector does not have circular symmetry. Using Palm calculus, we establish properties of the time-stationary regime, when the condition for its existence holds. We provide an algorithm to sample the simulation state from a time-stationary distribution at time 0 ("perfect simulation"), without computing geometric constants. For random waypoint on the sphere, random walk on torus and billiards, we show that, in the time-stationary regime, the node location is uniform. Our perfect sampling algorithm is implemented to use with ns-2, and is available to download from http://ica1www.epfl.ch/RandomTrip.

Journal ArticleDOI
TL;DR: A distributed algorithm based on dynamic pricing that provides a power and rate allocation that is asymptotically optimal in the number of mobiles and a pricing-based base-station assignment algorithm that results in an overall joint resource allocation and base-station assignment.
Abstract: In this paper, we jointly consider the resource allocation and base-station assignment problems for the downlink in CDMA networks that could carry heterogeneous data services. We first study a joint power and rate allocation problem that attempts to maximize the expected throughput of the system. This problem is inherently difficult because it is in fact a nonconvex optimization problem. To solve this problem, we develop a distributed algorithm based on dynamic pricing. This algorithm provides a power and rate allocation that is asymptotically optimal in the number of mobiles. We also study the effect of various factors on the development of efficient resource allocation strategies. Finally, using the outcome of the power and rate allocation algorithm, we develop a pricing-based base-station assignment algorithm that results in an overall joint resource allocation and base-station assignment. In this algorithm, a base-station is assigned to each mobile taking into account the congestion level of the base-station as well as the transmission environment of the mobile.

Journal ArticleDOI
TL;DR: This paper develops TCP Low Priority (TCP-LP), a distributed algorithm whose goal is to utilize only the excess network bandwidth as compared to the "fair share" of bandwidth as targeted by TCP.
Abstract: Service prioritization among different traffic classes is an important goal for the Internet. Conventional approaches to solving this problem consider the existing best-effort class as the low-priority class, and attempt to develop mechanisms that provide "better-than-best-effort" service. In this paper, we explore the opposite approach, and devise a new distributed algorithm to realize a low-priority service (as compared to the existing best effort) from the network endpoints. To this end, we develop TCP Low Priority (TCP-LP), a distributed algorithm whose goal is to utilize only the excess network bandwidth as compared to the "fair share" of bandwidth as targeted by TCP. The key mechanisms unique to TCP-LP congestion control are the use of one-way packet delays for early congestion indications and a TCP-transparent congestion avoidance policy. The results of our simulation and Internet experiments show that: 1) TCP-LP is largely non-intrusive to TCP traffic; 2) both single and aggregate TCP-LP flows are able to successfully utilize excess network bandwidth; moreover, multiple TCP-LP flows share excess bandwidth fairly; 3) substantial amounts of excess bandwidth are available to the low-priority class, even in the presence of "greedy" TCP flows; 4) the response times of web connections in the best-effort class decrease by up to 90% when long-lived bulk data transfers use TCP-LP rather than TCP; 5) despite their low-priority nature, TCP-LP flows are able to utilize significant amounts of available bandwidth in a wide-area network environment.
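
The early-congestion test that drives TCP-LP can be sketched compactly: smooth the measured one-way delays and flag congestion once the smoothed value exceeds a threshold fraction of the way from the minimum to the maximum delay observed. The threshold and smoothing values below are illustrative, and TCP-LP's subsequent inference-phase and backoff behavior is omitted.

```python
# One-way-delay-based early congestion detection, in the spirit of TCP-LP (simplified).

class LpDelayDetector:
    def __init__(self, delta=0.15, gamma=1.0 / 8):
        self.delta, self.gamma = delta, gamma        # threshold fraction, EWMA gain (illustrative)
        self.d_min = self.d_max = self.smoothed = None

    def on_owd_sample(self, owd):
        """Feed one one-way-delay measurement; returns True if early congestion is indicated."""
        if self.smoothed is None:
            self.d_min = self.d_max = self.smoothed = owd
        self.d_min, self.d_max = min(self.d_min, owd), max(self.d_max, owd)
        self.smoothed = (1 - self.gamma) * self.smoothed + self.gamma * owd
        threshold = self.d_min + self.delta * (self.d_max - self.d_min)
        return self.smoothed > threshold

det = LpDelayDetector()
for owd in [10, 10, 11, 10, 12, 18, 25, 30, 33]:     # ms; a queue starts building halfway through
    print(owd, det.on_owd_sample(owd))
```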

Journal ArticleDOI
TL;DR: This work considers a network of rechargeable sensors, deployed redundantly in a random sensing environment, and addresses the problem of how sensor nodes should be activated dynamically so as to maximize a generalized system performance objective.
Abstract: We consider a network of rechargeable sensors, deployed redundantly in a random sensing environment, and address the problem of how sensor nodes should be activated dynamically so as to maximize a generalized system performance objective. The optimal sensor activation problem is a very difficult decision question, and under Markovian assumptions on the sensor discharge/recharge periods, it represents a complex semi-Markov decision problem. With the goal of developing a practical, distributed but efficient solution to this complex, global optimization problem, we first consider the activation question for a set of sensor nodes whose coverage areas overlap completely. For this scenario, we show analytically that there exists a simple threshold activation policy that achieves a performance of at least 3/4 of the optimum over all possible policies. We extend this threshold policy to a general network setting where the coverage areas of different sensors could have partial or no overlap with each other, and show by simulations that the performance of our policy is very close to that of the globally optimal policy. Our policy is fully distributed, and requires the sensor nodes to only keep track of the node activation states in its immediate neighborhood. We also consider the effects of spatial correlation on the performance of the threshold activation policy, and the choice of the optimal threshold.

Journal ArticleDOI
TL;DR: An address-light, integrated MAC and routing protocol for wireless sensor networks (WSNs), AIMRP provides a power-saving algorithm which requires absolutely no synchronization or information exchange and outperforms S-MAC for event-detection applications.
Abstract: We propose an address-light, integrated MAC and routing protocol (abbreviated AIMRP) for wireless sensor networks (WSNs). Due to the broad spectrum of WSN applications, there is a need for protocol solutions optimized for specific application classes. AIMRP is proposed for WSNs deployed for detecting rare events which require prompt detection and response. AIMRP organizes the network into concentric tiers around the sink(s), and routes event reports by forwarding them from one tier to another, in the direction of (one of) the sink(s). AIMRP is address-light in that it does not employ unique per-node addressing, and integrated since the MAC control packets are also responsible for finding the next-hop node to relay the data, via an anycast query. For reducing the energy expenditure due to idle-listening, AIMRP provides a power-saving algorithm which requires absolutely no synchronization or information exchange. We evaluate AIMRP through analysis and simulations, and compare it with another MAC protocol proposed for WSNs, S-MAC. AIMRP outperforms S-MAC for event-detection applications, in terms of total average power consumption, while satisfying identical sensor-to-sink latency constraints.

Journal ArticleDOI
TL;DR: Probabilistic Resilient Multicast can be used to improve the performance of application-layer multicast protocols especially when there are high packet losses and host failures, and through detailed analysis it is shown that this loss recovery technique has efficient scaling properties.
Abstract: We introduce Probabilistic Resilient Multicast (PRM): a multicast data recovery scheme that improves data delivery ratios while maintaining low end-to-end latencies. PRM has both proactive and reactive components; in this paper we describe how PRM can be used to improve the performance of application-layer multicast protocols especially when there are high packet losses and host failures. Through detailed analysis in this paper, we show that this loss recovery technique has efficient scaling properties: the overheads at each overlay node asymptotically decrease to zero with increasing group sizes. As a detailed case study, we show how PRM can be applied to the NICE application-layer multicast protocol. We present detailed simulations of the PRM-enhanced NICE protocol for 10,000-node Internet-like topologies. Simulations show that PRM achieves a high delivery ratio (>97%) with a low latency bound (600 ms) for environments with high end-to-end network losses (1%-5%) and high topology change rates (5 changes per second) while incurring very low overheads (<5%).
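
PRM's proactive component (randomized forwarding) is easy to sketch: in addition to sending each packet to its overlay children, every node forwards it to a few randomly chosen overlay nodes with a small probability, so data can route around failed subtrees. The tree shape, failure rate, and the (r, beta) values below are invented for illustration and far smaller than the 10,000-node simulations in the paper.

```python
# Randomized forwarding on an overlay multicast tree, in the spirit of PRM's proactive component.
import random

random.seed(4)
N, FANOUT = 200, 4
children = {u: [v for v in range(N) if v != u and (v - 1) // FANOUT == u] for u in range(N)}
alive = [True] + [random.random() > 0.05 for _ in range(N - 1)]   # ~5% of hosts have failed

def deliver(use_prm, r=3, beta=0.02):
    """Return the fraction of alive nodes that receive one packet sent from the root."""
    got, stack = {0}, [0]
    while stack:
        u = stack.pop()
        targets = list(children[u])
        if use_prm:   # forward to each of r randomly chosen nodes with small probability beta
            targets += [random.randrange(N) for _ in range(r) if random.random() < beta]
        for v in targets:
            if alive[v] and v not in got:
                got.add(v)
                stack.append(v)
    return sum(1 for v in range(N) if alive[v] and v in got) / sum(alive)

print(f"tree only: {deliver(False):.2%}   with randomized forwarding: {deliver(True):.2%}")
```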

Journal ArticleDOI
TL;DR: A new distributed flow control algorithm for multiservice networks, where the application's utility is only assumed to be continuously increasing over the available bandwidth, and it is shown that the algorithm converges, and that at convergence, the utility achieved by each application is well balanced in a proportionally (or max-min) fair manner.
Abstract: This paper is concerned with flow control and resource allocation problems in computer networks in which real-time applications may have hard quality of service (QoS) requirements. Recent optimal flow control approaches are unable to deal with these problems since QoS utility functions generally do not satisfy the strict concavity condition in real-time applications. For elastic traffic, we show that bandwidth allocations using the existing optimal flow control strategy can be quite unfair. If we consider different QoS requirements among network users, it may be undesirable to allocate bandwidth simply according to the traditional max-min fairness or proportional fairness. Instead, a network should have the ability to allocate bandwidth resources to various users, addressing their real utility requirements. For these reasons, this paper proposes a new distributed flow control algorithm for multiservice networks, where the application's utility is only assumed to be continuously increasing over the available bandwidth. We show that the algorithm converges, and that at convergence, the utility achieved by each application is well balanced in a proportionally (or max-min) fair manner.

Journal ArticleDOI
TL;DR: An original TCAM-based IP lookup scheme that achieves both ultra-high lookup throughput and optimal utilization of the memory while being power-efficient is proposed.
Abstract: Using ternary content addressable memory (TCAM) for high-speed IP address lookup has been gaining popularity due to its deterministic high performance. However, restricted by the slow improvement of memory accessing speed, the route lookup engines for next-generation terabit routers demand exploiting parallelism among multiple TCAM chips. Traditional parallel methods always incur excessive redundancy and high power consumption. We propose in this paper an original TCAM-based IP lookup scheme that achieves both ultra-high lookup throughput and optimal utilization of the memory while being power-efficient. In our multi-chip scheme, we devise a load-balanced TCAM table construction algorithm together with an adaptive load balancing mechanism. The power efficiency is well controlled by decreasing the number of TCAM entries triggered in each lookup operation. Using four 133 MHz TCAM chips and given 25% more TCAM entries than the original route table, the proposed scheme achieves a lookup throughput of up to 533 MPPS while remaining simple for ASIC implementation.

Journal ArticleDOI
TL;DR: The paper investigates the distribution of bandwidth among anonymous network stations, some of which are selfish, and argues that a desirable station strategy should yield a fair, Pareto efficient, and subgame perfect Nash equilibrium.
Abstract: CSMA/CA, the contention mechanism of the IEEE 802.11 DCF medium access protocol, has recently been found vulnerable to selfish backoff attacks consisting in nonstandard configuration of the constituent backoff scheme. Such attacks can greatly increase a selfish station's bandwidth share at the expense of honest stations applying a standard configuration. The paper investigates the distribution of bandwidth among anonymous network stations, some of which are selfish. A station's obtained bandwidth share is regarded as a payoff in a noncooperative CSMA/CA game. Regardless of the IEEE 802.11 parameter setting, the payoff function is found similar to a multiplayer Prisoners' Dilemma; moreover, the number (though not the identities) of selfish stations can be inferred by observation of successful transmission attempts. Further, a repeated CSMA/CA game is defined, where a station can toggle between standard and nonstandard backoff configurations with a view of maximizing a long-term utility. It is argued that a desirable station strategy should yield a fair, Pareto efficient, and subgame perfect Nash equilibrium. One such strategy, called CRISP, is described and evaluated.

Journal ArticleDOI
TL;DR: The economic interests of a wireless access point owner and his paying client are studied, and it is found that if a client has a "web browser" utility function, it is a Nash equilibrium for the provider to charge the client a constant price per unit time.
Abstract: We study the economic interests of a wireless access point owner and his paying client, and model their interaction as a dynamic game. The key feature of this game is that the players have asymmetric information: the client knows more than the access provider. We find that if a client has a "web browser" utility function (a temporal utility function that grows linearly), it is a Nash equilibrium for the provider to charge the client a constant price per unit time. On the other hand, if the client has a "file transferor" utility function (a utility function that is a step function), the client would be unwilling to pay until the final time slot of the file transfer. We also study an expanded game where an access point sells to a reseller, which in turn sells to a mobile client and show that if the client has a web browser utility function, that constant price is a Nash equilibrium of the three player game. Finally, we study a two player game in which the access point does not know whether he faces a web browser or file transferor type client, and show conditions for which it is not a Nash equilibrium for the access point to maintain a constant price.

Journal ArticleDOI
TL;DR: This work provides a fundamental understanding about establishing a group key via a distributed and collaborative approach for a dynamic peer group by considering three interval-based distributed rekeying algorithms for updating the group key.
Abstract: We consider several distributed collaborative key agreement and authentication protocols for dynamic peer groups. There are several important characteristics which make this problem different from traditional secure group communication. They are: 1) distributed nature in which there is no centralized key server; 2) collaborative nature in which the group key is contributory (i.e., each group member will collaboratively contribute its part to the global group key); and 3) dynamic nature in which existing members may leave the group while new members may join. Instead of performing individual rekeying operations, i.e., recomputing the group key after every join or leave request, we discuss an interval-based approach of rekeying. We consider three interval-based distributed rekeying algorithms, or interval-based algorithms for short, for updating the group key: 1) the Rebuild algorithm; 2) the Batch algorithm; and 3) the Queue-batch algorithm. Performance of these three interval-based algorithms under different settings, such as different join and leave probabilities, is analyzed. We show that the interval-based algorithms significantly outperform the individual rekeying approach and that the Queue-batch algorithm performs the best among the three interval-based algorithms. More importantly, the Queue-batch algorithm can substantially reduce the computation and communication workload in a highly dynamic environment. We further enhance the interval-based algorithms in two aspects: authentication and implementation. Authentication focuses on the security improvement, while implementation realizes the interval-based algorithms in real network settings. Our work provides a fundamental understanding about establishing a group key via a distributed and collaborative approach for a dynamic peer group.
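
The interval-based idea, stripped of the key-tree details, is sketched below: membership changes are queued and a single rekeying operation is performed per interval, instead of one per join or leave. The "rekey" here is a placeholder hash over the member set; the real Rebuild, Batch, and Queue-batch algorithms recompute keys collaboratively over a logical key tree, which this sketch does not model.

```python
# Interval-based batch rekeying skeleton (placeholder key derivation, not the paper's protocols).
import hashlib, secrets

class IntervalRekeyGroup:
    def __init__(self, members):
        self.members = set(members)
        self.pending_joins, self.pending_leaves = set(), set()
        self.group_key = self._rekey()

    def request_join(self, m):  self.pending_joins.add(m)
    def request_leave(self, m): self.pending_leaves.add(m)

    def _rekey(self):
        # placeholder: real protocols derive the key contributorily from all members
        salt = secrets.token_hex(8)
        return hashlib.sha256((salt + "|".join(sorted(self.members))).encode()).hexdigest()

    def end_of_interval(self):
        """Apply all queued joins/leaves, then perform a single rekeying operation."""
        self.members |= self.pending_joins
        self.members -= self.pending_leaves
        self.pending_joins.clear(); self.pending_leaves.clear()
        self.group_key = self._rekey()
        return self.group_key

g = IntervalRekeyGroup({"alice", "bob"})
g.request_join("carol"); g.request_leave("bob"); g.request_join("dave")
print(g.end_of_interval()[:16], sorted(g.members))   # one rekey covers three membership changes
```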