
Showing papers in "IEEE/ACM Transactions on Networking" in 2007


Journal ArticleDOI
TL;DR: The model allows stations to have different traffic arrival rates, enabling the question of fairness between competing flows to be addressed, and accurately captures many interesting features of nonsaturated operation.
Abstract: Analysis of the 802.11 CSMA/CA mechanism has received considerable attention recently. Bianchi presented an analytic model under a saturated traffic assumption. Bianchi's model is accurate, but typical network conditions are nonsaturated and heterogeneous. We present an extension of his model to a nonsaturated environment. The model's predictions, validated against simulation, accurately capture many interesting features of nonsaturated operation. For example, the model predicts that peak throughput occurs prior to saturation. Our model allows stations to have different traffic arrival rates, enabling us to address the question of fairness between competing flows. Although we use a specific arrival process, it encompasses a wide range of interesting traffic types including, in particular, VoIP.

660 citations


Journal ArticleDOI
TL;DR: This work presents ClassBench, a suite of tools for benchmarking packet classification algorithms and devices and seeks to eliminate the significant access barriers to realistic test vectors for researchers and initiate a broader discussion to guide the refinement of the tools and codification of a formal benchmarking methodology.
Abstract: Packet classification is an enabling technology for next generation network services and often a performance bottleneck in high-performance routers. The performance and capacity of many classification algorithms and devices, including TCAMs, depend upon properties of filter sets and query patterns. Despite the pressing need, no standard performance evaluation tools or filter sets are publicly available. In response to this problem, we present ClassBench, a suite of tools for benchmarking packet classification algorithms and devices. ClassBench includes a filter set generator that produces synthetic filter sets that accurately model the characteristics of real filter sets. Along with varying the size of the filter sets, we provide high-level control over the composition of the filters in the resulting filter set. The tool suite also includes a trace generator that produces a sequence of packet headers to exercise packet classification algorithms with respect to a given filter set. Along with specifying the relative size of the trace, we provide a simple mechanism for controlling locality of reference. While we have already found ClassBench to be very useful in our own research, we seek to eliminate the significant access barriers to realistic test vectors for researchers and initiate a broader discussion to guide the refinement of the tools and codification of a formal benchmarking methodology. (The ClassBench tools are publicly available at the following site: http://www.arl.wustl.edu/~det3/ClassBench/.)

478 citations
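A toy sketch of the generator's core idea, sampling filter fields from a configurable distribution (this is not the ClassBench tool itself; the prefix-length weights below are invented purely for illustration):

```python
import random

# Toy filter-set generator in the spirit of ClassBench (not the actual
# tool): each rule is a 5-tuple whose source/destination prefix lengths
# are drawn from a configurable distribution, giving high-level control
# over the composition of the synthetic filter set.
PREFIX_LEN_WEIGHTS = {0: 1, 8: 2, 16: 5, 24: 10, 32: 4}   # invented weights

def random_prefix():
    length = random.choices(list(PREFIX_LEN_WEIGHTS),
                            weights=list(PREFIX_LEN_WEIGHTS.values()))[0]
    mask = (0xFFFFFFFF << (32 - length)) & 0xFFFFFFFF if length else 0
    return random.getrandbits(32) & mask, length

def make_filter():
    src, src_len = random_prefix()
    dst, dst_len = random_prefix()
    proto = random.choice([6, 17, None])             # TCP, UDP, wildcard
    dport = random.choice([(80, 80), (0, 65535)])    # exact port or range
    return (src, src_len, dst, dst_len, proto, dport)

filters = [make_filter() for _ in range(1000)]       # synthetic filter set
```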


Journal ArticleDOI
TL;DR: A fixed-point formalization of the well-known analysis of Bianchi is studied, and it is shown how the saturated network analysis can be used to obtain TCP transfer throughputs in some cases.
Abstract: We study a fixed-point formalization of the well-known analysis of Bianchi. We provide a significant simplification and generalization of the analysis. In this more general framework, the fixed-point solution and performance measures resulting from it are studied. Uniqueness of the fixed point is established. Simple and general throughput formulas are provided. It is shown that the throughput of any flow will be bounded by the one with the smallest transmission rate. The aggregate throughput is bounded by the reciprocal of the harmonic mean of the transmission rates. In an asymptotic regime with a large number of nodes, explicit formulas for the collision probability, the aggregate attempt rate, and the aggregate throughput are provided. The results from the analysis are compared with ns2 simulations and also with an exact Markov model of the backoff process. It is shown how the saturated network analysis can be used to obtain TCP transfer throughputs in some cases.

375 citations
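A minimal sketch of the fixed-point computation at the heart of such analyses, using Bianchi's saturated model (W is the minimum contention window and m the number of backoff stages, as in Bianchi's paper; the damped iteration is an implementation choice, not from the paper):

```python
# Fixed-point iteration for Bianchi's saturated 802.11 model: each of n
# stations attempts transmission in a slot with probability tau, and an
# attempt collides with probability p.
def bianchi_fixed_point(n, W=32, m=5, iters=5000, damp=0.1):
    p = 0.3                                  # initial guess
    for _ in range(iters):
        tau = 2 * (1 - 2*p) / ((1 - 2*p) * (W + 1) + p * W * (1 - (2*p)**m))
        p = (1 - damp) * p + damp * (1 - (1 - tau) ** (n - 1))
    return tau, p

tau, p = bianchi_fixed_point(n=20)
print(f"attempt rate tau = {tau:.4f}, collision probability p = {p:.4f}")
```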


Journal ArticleDOI
TL;DR: Novel deterministic and hybrid approaches based on Combinatorial Design are presented for deciding how many and which keys to assign to each key-chain before the sensor network deployment to obtain efficient key distribution schemes.
Abstract: Secure communications in wireless sensor networks operating under adversarial conditions require providing pairwise (symmetric) keys to sensor nodes. In large scale deployment scenarios, there is no a priori knowledge of the post-deployment network configuration, since nodes may be randomly scattered over a hostile territory. Thus, shared keys must be distributed before deployment to provide each node a key-chain. For large sensor networks it is infeasible to store a unique key for every other node in the key-chain of a sensor node. Consequently, for secure communication, either two nodes share a key in their key-chains and have a wireless link between them, or there is a path, called a key-path, between the two nodes along which each pair of neighboring nodes has a key in common. The length of the key-path is the key factor in the efficiency of the design. This paper presents novel deterministic and hybrid approaches based on Combinatorial Design for deciding how many and which keys to assign to each key-chain before the sensor network deployment. In particular, Balanced Incomplete Block Designs (BIBD) and Generalized Quadrangles (GQ) are mapped to obtain efficient key distribution schemes. Performance and security properties of the proposed schemes are studied both analytically and computationally. Comparison to related work shows that the combinatorial approach produces better connectivity with smaller key-chain sizes.

371 citations
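A hedged illustration of the combinatorial idea: a BIBD obtained from a projective plane of prime order q gives every pair of key-chains exactly one common key while each chain stores only q + 1 keys. This Python sketch uses the generic projective-plane construction, not the paper's exact mapping:

```python
# Keys are the plane's q^2 + q + 1 points and each key-chain is a line,
# so every pair of chains shares exactly one key.
def projective_points(q):
    pts = [(1, y, z) for y in range(q) for z in range(q)]
    return pts + [(0, 1, z) for z in range(q)] + [(0, 0, 1)]

def key_chains(q):
    pts = projective_points(q)
    chains = []
    for line in projective_points(q):        # lines are dual to points
        chains.append({i for i, p in enumerate(pts)
                       if sum(a * b for a, b in zip(line, p)) % q == 0})
    return chains

chains = key_chains(5)               # 31 keys, 31 chains of 6 keys each
assert all(len(a & b) == 1 for a in chains for b in chains if a is not b)
```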


Journal ArticleDOI
TL;DR: This paper shows that a combination of queue-length-based scheduling at the base station and congestion control implemented either at the base station or at the end users can lead to fair resource allocation and queue-length stability.
Abstract: We consider the problem of allocating resources (time slots, frequency, power, etc.) at a base station to many competing flows, where each flow is intended for a different receiver. The channel conditions may be time-varying and different for different receivers. It is well-known that appropriately chosen queue-length based policies are throughput-optimal while other policies based on the estimation of channel statistics can be used to allocate resources fairly (such as proportional fairness) among competing users. In this paper, we show that a combination of queue-length-based scheduling at the base station and congestion control implemented either at the base station or at the end users can lead to fair resource allocation and queue-length stability.

363 citations
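A minimal simulation sketch of the combination studied here (my illustration, not the paper's exact algorithm): a MaxWeight scheduler serves the user maximizing the queue-length times rate product, while each source sets its rate by maximizing a log utility minus a queue-based price, so long queues throttle their own arrivals:

```python
import random

# Each source injects x = argmax log(x) - q*x = 1/q (capped); the base
# station then serves the MaxWeight user over a time-varying channel.
NUM_USERS, SLOTS = 4, 50_000
queues = [1.0] * NUM_USERS

for _ in range(SLOTS):
    for i in range(NUM_USERS):               # congestion control step
        queues[i] += min(2.0, 1.0 / max(queues[i], 0.05))
    rates = [random.choice([0.0, 1.0, 2.0])  # time-varying channels
             for _ in range(NUM_USERS)]
    best = max(range(NUM_USERS), key=lambda i: queues[i] * rates[i])
    queues[best] = max(0.0, queues[best] - rates[best])

print([round(q, 2) for q in queues])          # queues stay bounded
```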


Journal ArticleDOI
TL;DR: A novel filtering technique, called Hop-Count Filtering (HCF), is presented--which builds an accurate IP-to-hop-count (IP2HC) mapping table--to detect and discard spoofed IP packets.
Abstract: IP spoofing has often been exploited by Distributed Denial of Service (DDoS) attacks to: 1) conceal flooding sources and dilute localities in flooding traffic, and 2) coax legitimate hosts into becoming reflectors, redirecting and amplifying flooding traffic. Thus, the ability to filter spoofed IP packets near victim servers is essential to their own protection and prevention of becoming involuntary DoS reflectors. Although an attacker can forge any field in the IP header, he cannot falsify the number of hops an IP packet takes to reach its destination. More importantly, since the hop-count values are diverse, an attacker cannot randomly spoof IP addresses while maintaining consistent hop-counts. On the other hand, an Internet server can easily infer the hop-count information from the Time-to-Live (TTL) field of the IP header. Using a mapping between IP addresses and their hop-counts, the server can distinguish spoofed IP packets from legitimate ones. Based on this observation, we present a novel filtering technique, called Hop-Count Filtering (HCF)--which builds an accurate IP-to-hop-count (IP2HC) mapping table--to detect and discard spoofed IP packets. HCF is easy to deploy, as it does not require any support from the underlying network. Through analysis using network measurement data, we show that HCF can identify close to 90% of spoofed IP packets, and then discard them with little collateral damage. We implement and evaluate HCF in the Linux kernel, demonstrating its effectiveness with experimental measurements.

350 citations
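The TTL-to-hop-count inference can be sketched in a few lines (the initial-TTL set below is simplified; real operating systems use several defaults, and HCF itself handles ambiguity and legitimate hop-count changes, which this toy ignores):

```python
# Infer the hop count as the gap between the received TTL and the
# smallest plausible initial TTL above it, then compare with the table.
INITIAL_TTLS = (32, 64, 128, 255)

def infer_hop_count(received_ttl):
    initial = min(t for t in INITIAL_TTLS if t >= received_ttl)
    return initial - received_ttl

def looks_spoofed(src_ip, received_ttl, ip2hc):
    expected = ip2hc.get(src_ip)              # learned from prior traffic
    return expected is not None and infer_hop_count(received_ttl) != expected

ip2hc = {"192.0.2.7": 14}
print(looks_spoofed("192.0.2.7", 64 - 14, ip2hc))    # False: consistent
print(looks_spoofed("192.0.2.7", 128 - 3, ip2hc))    # True: mismatch
```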


Journal ArticleDOI
TL;DR: The nature of the delay-capacity trade-off is related to the nature of node motion, thereby providing a better understanding of the delay-capacity relationship in ad hoc networks in comparison to earlier works.
Abstract: Since the original work of Grossglauser and Tse, which showed that mobility can increase the capacity of an ad hoc network, there has been a lot of interest in characterizing the delay-capacity relationship in ad hoc networks. Various mobility models have been studied in the literature, and the delay-capacity relationships under those models have been characterized. The results indicate that there are trade-offs between the delay and capacity, and that the nature of these trade-offs is strongly influenced by the choice of the mobility model. Some questions that arise are: (i) How representative are these mobility models studied in the literature? (ii) Can the delay-capacity relationship be significantly different under some other "reasonable" mobility model? (iii) What sort of delay-capacity trade-off are we likely to see in a real world scenario? In this paper, we take the first step toward answering some of these questions. In particular, we analyze, among others, the mobility models studied in recent related works, under a unified framework. We relate the nature of delay-capacity trade-off to the nature of node motion, thereby providing a better understanding of the delay-capacity relationship in ad hoc networks in comparison to earlier works.

344 citations


Journal ArticleDOI
TL;DR: This paper presents an efficient solution to determine the user-AP associations for max-min fair bandwidth allocation, and shows the strong correlation between fairness and load balancing, which enables the use of load balancing techniques for obtaining optimal max-min fair bandwidth allocation.
Abstract: The traffic load of wireless LANs is often unevenly distributed among the access points (APs), which results in unfair bandwidth allocation among users. We argue that the load imbalance and consequent unfair bandwidth allocation can be greatly reduced by intelligent association control. In this paper, we present an efficient solution to determine the user-AP associations for max-min fair bandwidth allocation. We show the strong correlation between fairness and load balancing, which enables us to use load balancing techniques for obtaining optimal max-min fair bandwidth allocation. As this problem is NP-hard, we devise algorithms that achieve constant-factor approximation. In our algorithms, we first compute a fractional association solution, in which users can be associated with multiple APs simultaneously. This solution guarantees the fairest bandwidth allocation in terms of max-min fairness. Then, by utilizing a rounding method, we obtain the integral solution from the fractional solution. We also consider time fairness and present a polynomial-time algorithm for optimal integral solution. We further extend our schemes for the on-line case where users may join and leave dynamically. Our simulations demonstrate that the proposed algorithms achieve close to optimal load balancing (i.e., max-min fairness) and they outperform commonly used heuristics.

285 citations


Journal ArticleDOI
TL;DR: This paper develops a fair hop-by-hop congestion control algorithm with the MAC constraint being imposed in the form of a channel access time constraint, using an optimization-based framework, and shows that this algorithm is globally stable using a Lyapunov-function-based approach.
Abstract: This paper focuses on congestion control over multi-hop, wireless networks. In a wireless network, an important constraint that arises is that due to the MAC (Media Access Control) layer. Many wireless MACs use a time-division strategy for channel access, where, at any point in space, the physical channel can be accessed by a single user at each instant of time. In this paper, we develop a fair hop-by-hop congestion control algorithm with the MAC constraint being imposed in the form of a channel access time constraint, using an optimization-based framework. In the absence of delay, we show that this algorithm is globally stable using a Lyapunov-function-based approach. Next, in the presence of delay, we show that the hop-by-hop control algorithm has the property of spatial spreading. In other words, focused loads at a particular spatial location in the network get "smoothed" over space. We derive bounds on the "peak load" at a node, both with hop-by-hop control and with end-to-end control, show that significant gains are to be had with the hop-by-hop scheme, and validate the analytical results with simulation.

266 citations


Journal ArticleDOI
TL;DR: A unifying treatment of max-min fairness is given, which encompasses all existing results in a simplifying framework and extends its applicability to new examples; it is shown that, if the set of feasible allocations has the free disposal property, then Max-min Programming reduces to a simpler algorithm, called Water Filling, whose complexity is much lower.
Abstract: Max-min fairness is widely used in various areas of networking. In every case where it is used, there is a proof of existence and one or several algorithms for computing it; in most, but not all cases, they are based on the notion of bottlenecks. In spite of this wide applicability, there are still examples, arising in the context of wireless or peer-to-peer networks, where the existing theories do not seem to apply directly. In this paper, we give a unifying treatment of max-min fairness, which encompasses all existing results in a simplifying framework, and extend its applicability to new examples. First, we observe that the existence of max-min fairness is actually a geometric property of the set of feasible allocations. There exist sets on which max-min fairness does not exist, and we describe a large class of sets on which a max-min fair allocation does exist. This class contains, but is not limited to, the compact, convex sets of R^N. Second, we give a general purpose centralized algorithm, called Max-min Programming, for computing the max-min fair allocation in all cases where it exists (whether the set of feasible allocations is in our class or not). Its complexity is of the order of N linear programming steps in R^N, in the case where the feasible set is defined by linear constraints. We show that, if the set of feasible allocations has the free disposal property, then Max-min Programming reduces to a simpler algorithm, called Water Filling, whose complexity is much lower. Free disposal corresponds to the cases where a bottleneck argument can be made, and Water Filling is the general form of all previously known centralized algorithms for such cases. All our results apply mutatis mutandis to min-max fairness. Our results apply to weighted, unweighted, util-max-min and min-max fairness. Distributed algorithms for the computation of max-min fair allocations are outside the scope of this paper.

265 citations
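A sketch of the Water Filling idea for the free-disposal case (a generic progressive-filling implementation, not the paper's pseudocode): the rates of all unfrozen flows rise together, and when a link saturates, the flows crossing it are frozen at the common bottleneck level:

```python
# links maps a link name to (capacity, set of flows crossing it).
def water_filling(links, flows):
    rate = {f: 0.0 for f in flows}
    caps = {l: c for l, (c, _) in links.items()}
    frozen = set()
    while frozen != flows:
        active = {l: fs - frozen for l, (_, fs) in links.items()}
        inc = min(caps[l] / len(a) for l, a in active.items() if a)
        for f in flows - frozen:              # all unfrozen rates rise equally
            rate[f] += inc
        for l, a in active.items():
            caps[l] -= inc * len(a)
            if a and caps[l] <= 1e-12:        # link saturated: it is the
                frozen |= a                   # bottleneck of its active flows
    return rate

links = {"A": (1.0, {1, 2}), "B": (2.0, {2, 3})}
print(water_filling(links, {1, 2, 3}))        # {1: 0.5, 2: 0.5, 3: 1.5}
```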


Journal ArticleDOI
TL;DR: A model to characterize the performance of multihop radio networks in the presence of energy constraints and design routing algorithms to optimally utilize the available energy is developed.
Abstract: In this paper, we develop a model to characterize the performance of multihop radio networks in the presence of energy constraints and design routing algorithms to optimally utilize the available energy. The energy model allows us to consider different types of energy sources in heterogeneous environments. The proposed algorithm is shown to achieve a competitive ratio (i.e., the ratio of the performance of any offline algorithm that has knowledge of all past and future packet arrivals to the performance of our online algorithm) that is asymptotically optimal with respect to the number of nodes in the network. The algorithm assumes no statistical information on packet arrivals and can easily be incorporated into existing routing schemes (e.g., proactive or on-demand methodologies) in a distributed fashion. Simulation results confirm that the algorithm performs very well in terms of maximizing the throughput of an energy-constrained network. Further, a new threshold-based scheme is proposed to reduce the routing overhead while incurring only minimum performance degradation.

Journal ArticleDOI
TL;DR: It is shown that controlling the offered load at the sources can eliminate problems in multi-hop ad hoc networks, and a quantitative analysis for the impact of hidden nodes and signal capture on sustainable throughput is provided.
Abstract: In multi-hop ad hoc networks, stations may pump more traffic into the networks than can be supported, resulting in high packet-loss rate, re-routing instability and unfairness problems. This paper shows that controlling the offered load at the sources can eliminate these problems. To verify the simulation results, we set up a real 6-node multi-hop network. The experimental measurements confirm the existence of the optimal offered load. In addition, we provide an analysis to estimate the optimal offered load that maximizes the throughput of a multi-hop traffic flow. We believe this is the first paper in the literature to provide a quantitative analysis (as opposed to simulation) for the impact of hidden nodes and signal capture on sustainable throughput. The analysis is based on the observation that a large-scale 802.11 network with hidden nodes is a network in which the carrier-sensing capability breaks down partially. Its performance is therefore somewhere between that of a carrier-sensing network and that of an Aloha network. Indeed, our analytical closed-form solution has the appearance of the throughput equation of the Aloha network. Our approach allows one to identify whether the performance of an 802.11 network is hidden-node limited or spatial-reuse limited.

Journal ArticleDOI
TL;DR: It is found that both Scalable-TCP and FAST-TCP consistently exhibit substantial unfairness, even when competing flows share identical network path characteristics.
Abstract: In this paper, we present experimental results evaluating the performance of the Scalable-TCP, HS-TCP, BIC-TCP, FAST-TCP, and H-TCP proposals in a series of benchmark tests. In summary, we find that both Scalable-TCP and FAST-TCP consistently exhibit substantial unfairness, even when competing flows share identical network path characteristics. Scalable-TCP, HS-TCP, FAST-TCP, and BIC-TCP all exhibit much greater RTT unfairness than does standard TCP, to the extent that long RTT flows may be completely starved of bandwidth. Scalable-TCP, HS-TCP, and BIC-TCP all exhibit slow convergence and sustained unfairness following changes in network conditions such as the start-up of a new flow. FAST-TCP exhibits complex convergence behavior.

Journal ArticleDOI
TL;DR: The design and evaluation of a new Internet routing architecture (NIRA) that gives a user the ability to choose the sequence of providers his packets take and shows that NIRA supports user choice with low overhead are presented.
Abstract: In today's Internet, users can choose their local Internet service providers (ISPs), but once their packets have entered the network, they have little control over the overall routes their packets take. Giving a user the ability to choose between provider-level routes has the potential of fostering ISP competition to offer enhanced service and improving end-to-end performance and reliability. This paper presents the design and evaluation of a new Internet routing architecture (NIRA) that gives a user the ability to choose the sequence of providers his packets take. NIRA addresses a broad range of issues, including practical provider compensation, scalable route discovery, efficient route representation, fast route fail-over, and security. NIRA supports user choice without running a global link-state routing protocol. It breaks an end-to-end route into a sender part and a receiver part and uses address assignment to represent each part. A user can specify a route with only a source and a destination address, and switch routes by switching addresses. We evaluate NIRA using a combination of network measurement, simulation, and analysis. Our evaluation shows that NIRA supports user choice with low overhead.

Journal ArticleDOI
TL;DR: It is shown that a general version of effective bandwidth can be expressed within the framework of a probabilistic version of the network calculus, where both arrivals and service are specified in terms of probabilistic bounds.
Abstract: This paper establishes a link between two principal tools for the analysis of network traffic, namely, effective bandwidth and network calculus. It is shown that a general version of effective bandwidth can be expressed within the framework of a probabilistic version of the network calculus, where both arrivals and service are specified in terms of probabilistic bounds. By formulating well-known effective bandwidth expressions in terms of probabilistic envelope functions, the developed network calculus can be applied to a wide range of traffic types, including traffic that has self-similar characteristics. As applications, probabilistic lower bounds are presented on the service given by three different scheduling algorithms: static priority, earliest deadline first, and generalized processor sharing. Numerical examples show the impact of specific traffic models and scheduling algorithms on the multiplexing gain in a network.
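For reference, a standard form of the effective bandwidth that such frameworks build on (Kelly's definition, with A(τ) denoting the traffic arriving in a window of length τ; the paper expresses a general version of this quantity within the probabilistic calculus):

```latex
\alpha(s,\tau) = \frac{1}{s\tau}\,\log \mathbb{E}\left[e^{\,s A(\tau)}\right],
\qquad s,\tau > 0
```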

Journal ArticleDOI
TL;DR: This paper proposes a local rerouting based approach called failure insensitive routing, which ensures that when no more than one link failure notification is suppressed, a packet is guaranteed to be forwarded along a loop-free path to its destination if such a path exists.
Abstract: Link failures are part of the day-to-day operation of a network due to many causes such as maintenance, faulty interfaces, and accidental fiber cuts. Commonly deployed link state routing protocols such as OSPF react to link failures through global link state advertisements and routing table recomputations causing significant forwarding discontinuity after a failure. Careful tuning of various parameters to accelerate routing convergence may cause instability when the majority of failures are transient. To enhance failure resiliency without jeopardizing routing stability, we propose a local rerouting based approach called failure insensitive routing. The proposed approach prepares for failures using interface-specific forwarding, and upon a failure, suppresses the link state advertisement and instead triggers local rerouting using a backwarding table. With this approach, when no more than one link failure notification is suppressed, a packet is guaranteed to be forwarded along a loop-free path to its destination if such a path exists. This paper demonstrates the feasibility, reliability, and stability of our approach.

Journal ArticleDOI
TL;DR: It is shown that opportunistic band skipping is most beneficial in low signal-to-noise scenarios, which are typically the cases when the node throughput in a single-band (no opportunism) system is at its minimum.
Abstract: In this paper, we study the gains from opportunistic spectrum usage when neither the sender nor the receiver is aware of the current channel conditions in different frequency bands. Hence, to select the best band for sending data, nodes first need to measure the channel in different bands, which takes time away from sending actual data. We analyze the gains from opportunistic band selection by deriving an optimal skipping rule, which balances the throughput gain from finding a good quality band with the overhead of measuring multiple bands. We show that opportunistic band skipping is most beneficial in low signal-to-noise scenarios, which are typically the cases when the node throughput in a single-band (no opportunism) system is at its minimum. To study the impact of opportunism on network throughput, we devise a CSMA/CA protocol, multi-band opportunistic auto rate (MOAR), which implements the proposed skipping rule on a per node pair basis. The proposed protocol exploits both time and frequency diversity, and is shown to result in typical throughput gains of 20% or more over a protocol which only exploits time diversity, opportunistic auto rate (OAR).

Journal ArticleDOI
TL;DR: This work formalizes problems that incorporate two major requirements of multipath routing, establishes the intractability of these problems in terms of computational complexity, and provides efficient solutions with proven performance guarantees.
Abstract: Unlike traditional routing schemes that route all traffic along a single path, multipath routing strategies split the traffic among several paths in order to ease congestion. It has been widely recognized that multipath routing can be fundamentally more efficient than the traditional approach of routing along single paths. Yet, in contrast to the single-path routing approach, most studies in the context of multipath routing focused on heuristic methods. We demonstrate the significant advantage of optimal (or near optimal) solutions. Hence, we investigate multipath routing adopting a rigorous (theoretical) approach. We formalize problems that incorporate two major requirements of multipath routing. Then, we establish the intractability of these problems in terms of computational complexity. Finally, we establish efficient solutions with proven performance guarantees.

Journal ArticleDOI
TL;DR: This paper considers nonconcave utility functions, which turn utility maximization into difficult, nonconvex optimization problems, and presents conditions under which the standard price-based distributed algorithm can still converge to the globally optimal rate allocation despite nonconcavity of utility functions.
Abstract: A common assumption behind most of the recent research on network rate allocation is that traffic flows are elastic, which means that their utility functions are concave and continuous and that there is no hard limit on the rate allocated to each flow. These critical assumptions lead to the tractability of the analytic models for rate allocation based on network utility maximization, but also limit the applicability of the resulting rate allocation protocols. This paper focuses on inelastic flows and removes these restrictive and often invalid assumptions. First, we consider nonconcave utility functions, which turn utility maximization into difficult, nonconvex optimization problems. We present conditions under which the standard price-based distributed algorithm can still converge to the globally optimal rate allocation despite nonconcavity of utility functions. In particular, continuity of price-based rate allocation at all the optimal prices is a sufficient condition for global convergence of rate allocation by the standard algorithm, and continuity at at least one optimal price is a necessary condition. We then show how to provision link capacity to guarantee convergence of the standard distributed algorithm. Second, we model real-time flow utilities as discontinuous functions. We show how link capacity can be provisioned to allow admission of all real-time flows, then propose a price-based admission control heuristic when such link capacity provisioning is impossible, and finally develop an optimal distributed algorithm to allocate rates between elastic and real-time flows.
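A sketch of the standard price-based (dual) algorithm under a sigmoidal, hence nonconcave, utility (all parameters are invented for illustration). With such utilities the source's best response can jump discontinuously as the price varies, so the iteration may converge or oscillate, which is precisely the behavior the paper characterizes:

```python
import math

# Each source maximizes U(x) - p*x at the current price p; the link
# adjusts p toward matching total demand to capacity c.
def U(x, a, b):                       # sigmoidal "inelastic" utility
    return 1.0 / (1.0 + math.exp(-a * (x - b)))

def demand(p, a, b, xmax=10.0, n=2001):
    xs = [xmax * i / (n - 1) for i in range(n)]
    return max(xs, key=lambda x: U(x, a, b) - p * x)   # can jump with p

sources, c, p, gamma = [(2.0, 3.0), (1.0, 5.0)], 6.0, 0.1, 0.01
for _ in range(500):
    x = [demand(p, a, b) for a, b in sources]
    p = max(0.0, p + gamma * (sum(x) - c))    # price update at the link
print(p, x)   # may settle or oscillate, depending on the parameters
```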

Journal ArticleDOI
TL;DR: It is argued that the use of dynamic addressing can enable scalable routing in ad hoc networks, and an initial design of a routing layer based on dynamic addressing is provided, and its performance is evaluated.
Abstract: It is well known that the current ad hoc protocol suites do not scale to work efficiently in networks of more than a few hundred nodes. Most current ad hoc routing architectures use flat static addressing and thus, need to keep track of each node individually, creating a massive overhead problem as the network grows. Could dynamic addressing alleviate this problem? In this paper, we argue that the use of dynamic addressing can enable scalable routing in ad hoc networks. We provide an initial design of a routing layer based on dynamic addressing, and evaluate its performance. Each node has a unique permanent identifier and a transient routing address, which indicates its location in the network at any given time. The main challenge is dynamic address allocation in the face of node mobility. We propose mechanisms to implement dynamic addressing efficiently. Our initial evaluation suggests that dynamic addressing is a promising approach for achieving scalable routing in large ad hoc and mesh networks.

Journal ArticleDOI
TL;DR: LESOP serves as the first example in demonstrating the migration from the OSI paradigm to the Embedded Wireless Interconnect (EWI) architecture platform, a two-layer efficient architecture proposed here for wireless sensor networks.
Abstract: We propose the Low Energy Self-Organizing Protocol (LESOP) for target tracking in dense wireless sensor networks. A cross-layer design perspective is adopted in LESOP for high protocol efficiency, where direct interactions between the Application layer and the Medium Access Control (MAC) layer are exploited. Unlike the classical Open Systems Interconnect (OSI) paradigm of communication networks, the Transport and Network layers are excluded in LESOP to simplify the protocol stack. A lightweight yet efficient target localization algorithm is proposed and implemented, and a Quality of Service (QoS) knob is found to control the tradeoff between the tracking error and the network energy consumption. Furthermore, LESOP serves as the first example in demonstrating the migration from the OSI paradigm to the Embedded Wireless Interconnect (EWI) architecture platform, a two-layer efficient architecture proposed here for wireless sensor networks.

Journal ArticleDOI
TL;DR: This paper presents several implementable algorithms that are robust to asynchronism and dynamic topology changes, and can be proven to converge under very general asynchronous timing assumptions.
Abstract: Distributed algorithms for averaging have attracted interest in the control and sensing literature. However, previous works have not addressed some practical concerns that will arise in actual implementations on packet-switched communication networks such as the Internet. In this paper, we present several implementable algorithms that are robust to asynchronism and dynamic topology changes. The algorithms are completely distributed and do not require any global coordination. In addition, they can be proven to converge under very general asynchronous timing assumptions. Our results are verified by both simulation and experiments on PlanetLab, a real-world TCP/IP network. We also present some extensions that are likely to be useful in applications.
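The core averaging primitive that such algorithms build on can be sketched as randomized pairwise gossip (a simplified version; the paper's contribution is making this class of algorithms robust to asynchronism, message delays, and topology changes):

```python
import random

# Each step replaces the values of a random connected pair by their
# average; the sum is conserved, so values converge to the global mean.
values = [10.0, 2.0, 7.0, 1.0]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]      # assumed topology

for _ in range(10_000):
    i, j = random.choice(edges)
    values[i] = values[j] = (values[i] + values[j]) / 2

print(values)    # every entry approaches 5.0, the average of the inputs
```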

Journal ArticleDOI
TL;DR: Both the analytical and experimental results show that the proposed reversible sketch data structure along with reverse hashing algorithms are able to achieve online traffic monitoring and accurate change/intrusion detection over massive data streams on high speed links, all in a manner that scales to large key space size.
Abstract: A key function for network traffic monitoring and analysis is the ability to perform aggregate queries over multiple data streams. Change detection is an important primitive which can be extended to construct many aggregate queries. The recently proposed sketches are among the very few that can detect heavy changes online for high speed links, and thus support various aggregate queries in both temporal and spatial domains. However, they do not preserve the keys (e.g., source IP addresses) of flows, making it difficult to reconstruct the desired set of anomalous keys. To address this challenge, we propose the reversible sketch data structure along with reverse hashing algorithms to infer the keys of culprit flows. There are two phases. The first operates online, recording the packet stream in a compact representation with negligible extra memory and few extra memory accesses. Our prototype single FPGA board implementation can achieve a throughput of over 16 Gb/s for 40-byte packet streams (the worst case). The second phase identifies heavy changes and their keys from the representation in nearly real time. We evaluate our scheme using traces from large edge routers with OC-12 or higher links. Both the analytical and experimental results show that we are able to achieve online traffic monitoring and accurate change/intrusion detection over massive data streams on high speed links, all in a manner that scales to large key space size. To the best of our knowledge, our system is the first to achieve these properties simultaneously.
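A simplified sketch of change detection with a count-min-style structure. This plain version is not reversible: recovering culprit keys without storing them is exactly what the paper's modular/reverse hashing adds:

```python
import random

# Record per-epoch counts in a small 2-D array of hashed buckets, then
# compare estimates across epochs to flag heavy changes.
H, W = 4, 1024
SEEDS = [random.randrange(2**32) for _ in range(H)]

def buckets(key):
    return [hash((s, key)) % W for s in SEEDS]

def record(sketch, key, count=1):
    for row, b in enumerate(buckets(key)):
        sketch[row][b] += count

def estimate(sketch, key):
    return min(sketch[row][b] for row, b in enumerate(buckets(key)))

epoch1 = [[0] * W for _ in range(H)]
epoch2 = [[0] * W for _ in range(H)]
record(epoch1, "10.0.0.1", 100)
record(epoch2, "10.0.0.1", 5000)      # heavy change between the epochs
print(estimate(epoch2, "10.0.0.1") - estimate(epoch1, "10.0.0.1"))  # ~4900
```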

Journal ArticleDOI
TL;DR: This paper presents a mechanism that lets the network converge to its optimal forwarding state without risking any transient loops and the related packet loss, and shows by simulations that sub-second loop-free convergence is possible on a large Tier-1 ISP network.
Abstract: When using link-state protocols such as OSPF or IS-IS, forwarding loops can occur transiently when the routers adapt their forwarding tables as a response to a topological change. In this paper, we present a mechanism that lets the network converge to its optimal forwarding state without risking any transient loops and the related packet loss. The mechanism is based on an ordering of the updates of the forwarding tables of the routers. Our solution can be used in the case of a planned change in the state of a set of links and in the case of unpredictable changes when combined with a local protection scheme. The supported topology changes are link transitions from up to down, down to up, and updates of link metrics. Finally, we show by simulations that sub-second loop-free convergence is possible on a large Tier-1 ISP network.

Journal ArticleDOI
TL;DR: It is shown that, using the techniques described, it is possible to classify almost all out-of-sequence packets in the authors' traces and to quantify the uncertainty in the classification; out-of-sequence behavior is an indicator of the performance of a TCP connection and the quality of its end-end path.
Abstract: We present a classification methodology and a measurement study for out-of-sequence packets in TCP connections going over the Sprint IP backbone. Out-of-sequence packets can result from many events including loss, looping, reordering, or duplication in the network. It is important to quantify and understand the causes of such out-of-sequence packets since it is an indicator of the performance of a TCP connection, and the quality of its end-end path. Our study is based on passively observed packets from a point inside a large backbone network--as opposed to actively sending and measuring end-end probe traffic at the sender or receiver. A new methodology is thus required to infer the causes of a connection's out-of-sequence packets using only measurements taken in the "middle" of the connection's end-end path. We describe techniques that classify observed out-of-sequence behavior based only on the previously- and subsequently-observed packets within a connection and knowledge of how TCP behaves. We analyze numerous several-hour packet-level traces from a set of OC-12 and OC-48 links for tens of millions of connections generated in nearly 7600 unique ASes. We show that using our techniques, it is possible to classify almost all out-of-sequence packets in our traces and that we can quantify the uncertainty in our classification. Our measurements show a relatively consistent rate of out-of-sequence packets of approximately 4%. We observe that a majority of out-of-sequence packets are retransmissions, with a smaller percentage resulting from in-network reordering.

Journal ArticleDOI
TL;DR: A very simple O(Km + n log n) time K-approximation algorithm that can be used in hop-by-hop routing protocols, with results that compare favorably with existing algorithms.
Abstract: A fundamental problem in quality-of-service (QoS) routing is to find a path between a source-destination node pair that satisfies two or more end-to-end QoS constraints. We model this problem using a graph with n vertices and m edges with K additive QoS parameters associated with each edge, for any constant K ≥ 2. This problem is known to be NP-hard. Fully polynomial time approximation schemes (FPTAS) for the case of K = 2 have been reported in the literature. We concentrate on the general case and make the following contributions. 1) We present a very simple O(Km + n log n) time K-approximation algorithm that can be used in hop-by-hop routing protocols. 2) We present an FPTAS for one optimization version of the QoS routing problem with a time complexity of O(m(n/ε)^(K-1)). 3) We present an FPTAS for another optimization version of the QoS routing problem with a time complexity of O(n log n + m(H/ε)^(K-1)) when there exists an H-hop path satisfying all QoS constraints. When K is reduced to 2, our results compare favorably with existing algorithms. The results of this paper hold for both directed and undirected graphs. For ease of presentation, undirected graphs are used.
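A sketch of the combined-metric K-approximation argument (a standard reconstruction, not the paper's pseudocode): scale each additive weight by its constraint and run one shortest-path computation. If a feasible path exists, its combined length is at most K, so the returned shortest path violates each individual constraint by a factor of at most K:

```python
import heapq

def k_approx_path(n, edges, constraints, src, dst):
    adj = [[] for _ in range(n)]
    for u, v, w in edges:                     # w = (w_1, ..., w_K)
        cost = sum(wk / ck for wk, ck in zip(w, constraints))
        adj[u].append((v, cost))
        adj[v].append((u, cost))              # undirected, as in the paper
    dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
    while heap:                               # plain Dijkstra on the
        d, u = heapq.heappop(heap)            # combined metric
        if d > dist.get(u, float("inf")):
            continue
        for v, c in adj[u]:
            if d + c < dist.get(v, float("inf")):
                dist[v], prev[v] = d + c, u
                heapq.heappush(heap, (d + c, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

edges = [(0, 1, (1.0, 4.0)), (1, 2, (1.0, 4.0)), (0, 2, (5.0, 1.0))]
print(k_approx_path(3, edges, constraints=(4.0, 9.0), src=0, dst=2))
```

Here the returned path [0, 2] exceeds the first constraint by a factor 1.25, within the guaranteed factor K = 2.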

Journal ArticleDOI
TL;DR: It is shown that systems with heavy-tailed lifetime distributions are more resilient than those with light-tailed (e.g., exponential) distributions and that for a given average degree, k-regular graphs exhibit the highest level of fault tolerance.
Abstract: To model P2P networks that are commonly faced with high rates of churn and random departure decisions by end-users, this paper investigates the resilience of random graphs to lifetime-based node failure and derives the expected delay before a user is forcefully isolated from the graph and the probability that this occurs within his/her lifetime. Using these metrics, we show that systems with heavy-tailed lifetime distributions are more resilient than those with light-tailed (e.g., exponential) distributions and that for a given average degree, k-regular graphs exhibit the highest level of fault tolerance. As a practical illustration of our results, each user in a system with n = 100 billion peers, 30-minute average lifetime, and 1-minute node-replacement delay can stay connected to the graph with probability 1 - 1/n using only 9 neighbors. This is in contrast to 37 neighbors required under previous modeling efforts. We finish the paper by observing that many P2P networks are almost surely (i.e., with probability 1 - o(1)) connected if they have no isolated nodes and derive a simple model for the probability that a P2P system partitions under churn.

Journal ArticleDOI
TL;DR: Two feedback-based bandwidth allocation algorithms to be used within the HCCA, referred to as feedback based dynamic scheduler (FBDS) and proportional-integral (PI)-FBDS, are proposed; both have been designed with the objective of providing services with bounded delays.
Abstract: The 802.11e working group has recently proposed the hybrid coordination function (HCF) to provide service differentiation for supporting real-time transmissions over 802.11 WLANs. The HCF is made of a contention-based channel access, known as enhanced distributed coordination access, and of an HCF controlled channel access (HCCA), which requires a Hybrid Coordinator for bandwidth allocation to nodes hosting applications with QoS requirements. The 802.11e proposal includes a simple scheduler providing a Constant Bit Rate service, which is not well suited for bursty media flows. This paper proposes two feedback-based bandwidth allocation algorithms to be used within the HCCA, referred to as feedback based dynamic scheduler (FBDS) and proportional-integral (PI)-FBDS. These algorithms have been designed with the objective of providing services with bounded delays. Given that the 802.11e standard allows queue lengths to be fed back, a control theoretic approach has been employed to design the FBDS, which exploits a simple proportional controller, and the PI-FBDS, which implements a proportional-integral controller. The proposed algorithms can be easily implemented since their computational complexities scale linearly with the number of traffic streams. Moreover, a call admission control scheme has been proposed as an extension of the one described in the 802.11e draft. Performance of the proposed algorithms has been theoretically analyzed, and computer simulations, using the ns-2 simulator, have been carried out to compare their behaviors in realistic scenarios where video, voice, and FTP flows coexist at various network loads. Simulation results have shown that, unlike the simple scheduler of the 802.11e draft, both FBDS and PI-FBDS are able to provide services with real-time constraints. However, while the FBDS admits a smaller quota of traffic streams than the simple scheduler, PI-FBDS allows the same quota of traffic that would be admitted using the simple scheduler, but still providing delay bound guarantees.
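A toy sketch of the control-theoretic idea (the gains, service interval, and PHY rate below are invented, not the paper's design): a PI controller maps each stream's fed-back queue level to a channel-time grant, draining backlog each service interval:

```python
# Hypothetical PI allocator: grant = f(Kp * queue + Ki * integral of queue),
# capped at the service interval.
KP, KI, SI = 0.6, 0.1, 0.05         # P gain, I gain, service interval (s)

class PIAllocator:
    def __init__(self):
        self.integral = 0.0
    def txop(self, queue_bytes, phy_rate_bps):
        self.integral += queue_bytes              # integral of queue error
        drain = KP * queue_bytes + KI * self.integral
        return min(SI, 8 * drain / phy_rate_bps)  # grant, capped at SI

alloc = PIAllocator()
for q in (12_000, 9_000, 4_000, 500):             # fed-back queue sizes
    print(round(alloc.txop(q, 11e6), 4))          # seconds of channel time
```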

Journal ArticleDOI
TL;DR: This paper develops a mathematical model to analyze the availabilities of connections with different protection modes, and develops provisioning strategies for a given set of connection demands in which an appropriate, possibly different, level of protection is provided to each connection according to its predefined availability requirement.
Abstract: In an optical WDM mesh network, different protection schemes (such as dedicated or shared protection) can be used to improve the service availability against network failures. However, in order to satisfy a connection's service-availability requirement in a cost-effective and resource-efficient manner, we need a systematic mechanism to select a proper protection scheme for each connection request while provisioning the connection. In this paper, we propose to use connection availability as a metric to provide differentiated protection services in a wavelength-convertible WDM mesh network. We develop a mathematical model to analyze the availabilities of connections with different protection modes (i.e., unprotected, dedicated protected, or shared protected). In the shared-protection case, we investigate how a connection's availability is affected by backup resource sharing. The sharing might cause backup resource contention between several connections when multiple simultaneous (or overlapping) failures occur in the network. Using a continuous-time Markov model, we derive the conditional probability for a connection to acquire backup resources in the presence of backup resource contention. Through this model, we show how the availability of a shared-protected connection can be quantitatively computed. Based on the analytical model, we develop provisioning strategies for a given set of connection demands in which an appropriate, possibly different, level of protection is provided to each connection according to its predefined availability requirement, e.g., 0.999, 0.997. We propose integer linear programming (ILP) and heuristic approaches to provision the connections cost effectively while satisfying the connections' availability requirements. The effectiveness of our provisioning approaches is demonstrated through numerical examples. The proposed provisioning strategies inherently facilitate the service differentiation in optical WDM mesh networks.
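The basic availability arithmetic behind such models is standard series/parallel composition, assuming independent link failures (the shared-protection case additionally needs the paper's Markov analysis of backup contention, omitted here):

```python
def path_availability(link_avails):
    a = 1.0
    for x in link_avails:                # series: every link must be up
        a *= x
    return a

def dedicated_protected(work, backup):
    aw, ab = path_availability(work), path_availability(backup)
    return 1 - (1 - aw) * (1 - ab)       # up unless both paths are down

work, backup = [0.999, 0.998, 0.999], [0.997, 0.996]
print(path_availability(work))           # unprotected: ~0.9960
print(dedicated_protected(work, backup)) # dedicated-protected: ~0.99997
```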

Journal ArticleDOI
TL;DR: It is proved that O(log N) delay is achievable with a simple frame-based algorithm that uses queue backlog information; this is the first analytical proof that sublinear delay is achievable in a packet switch with random inputs.
Abstract: We consider the fundamental delay bounds for scheduling packets in an N × N packet switch operating under the crossbar constraint. Algorithms that make scheduling decisions without considering queue backlog are shown to incur an average delay of at least O(N). We then prove that O(log N) delay is achievable with a simple frame-based algorithm that uses queue backlog information. This is the best known delay bound for packet switches, and is the first analytical proof that sublinear delay is achievable in a packet switch with random inputs.