
Showing papers in "IEEE/ACM Transactions on Networking in 2008"


Journal ArticleDOI
TL;DR: The results show that using COPE at the forwarding layer, without modifying routing and higher layers, increases network throughput, and the gains vary from a few percent to several folds depending on the traffic pattern, congestion level, and transport protocol.
Abstract: This paper proposes COPE, a new architecture for wireless mesh networks. In addition to forwarding packets, routers mix (i.e., code) packets from different sources to increase the information content of each transmission. We show that intelligently mixing packets increases network throughput. Our design is rooted in the theory of network coding. Prior work on network coding is mainly theoretical and focuses on multicast traffic. This paper aims to bridge theory with practice; it addresses the common case of unicast traffic, dynamic and potentially bursty flows, and practical issues facing the integration of network coding in the current network stack. We evaluate our design on a 20-node wireless network, and discuss the results of the first testbed deployment of wireless network coding. The results show that using COPE at the forwarding layer, without modifying routing and higher layers, increases network throughput. The gains vary from a few percent to several folds depending on the traffic pattern, congestion level, and transport protocol.
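
As a toy illustration of the packet-mixing idea (the classic two-flow relay example; the payloads and variable names below are hypothetical, and real coding decisions depend on which packets neighbors have overheard):

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

pkt_from_alice = b"hello-bob!"   # packet the relay holds for Bob
pkt_from_bob   = b"hi-alice!!"   # packet the relay holds for Alice (same length here)

coded = xor_bytes(pkt_from_alice, pkt_from_bob)   # one broadcast replaces two forwards

# Each endpoint decodes with the packet it already knows (the one it sent itself).
assert xor_bytes(coded, pkt_from_alice) == pkt_from_bob    # Alice recovers Bob's packet
assert xor_bytes(coded, pkt_from_bob) == pkt_from_alice    # Bob recovers Alice's packet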

2,190 citations


Journal ArticleDOI
TL;DR: A detailed exploration of the single-copy routing space is performed in order to identify efficient single- copy solutions that can be employed when low resource usage is critical, and can help improve the design of general routing schemes that use multiple copies.
Abstract: Intermittently connected mobile networks are wireless networks where, most of the time, there does not exist a complete path from the source to the destination. Many real networks follow this model, for example, wildlife tracking sensor networks, military networks, and vehicular ad hoc networks. In this context, conventional routing schemes fail because they try to establish complete end-to-end paths before any data is sent. To deal with such networks, researchers have suggested the use of flooding-based routing schemes. While flooding-based schemes have a high probability of delivery, they waste a lot of energy and suffer from severe contention, which can significantly degrade their performance. Furthermore, proposed efforts to reduce the overhead of flooding-based schemes have often been plagued by large delays. With this in mind, we introduce a new family of routing schemes that "spray" a few message copies into the network, and then route each copy independently towards the destination. We show that, if carefully designed, spray routing not only requires significantly fewer transmissions per message, but also has lower average delivery delays than existing schemes; furthermore, it is highly scalable and retains good performance under a large range of scenarios. Finally, we use the theoretical framework proposed in our 2004 paper to analyze the performance of spray routing, and we use this theory to show how to choose the number of copies to be sprayed and how to optimally distribute these copies to relays.
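
As a rough illustration of the spraying idea described in the abstract (a minimal sketch; the binary-splitting rule is one spraying variant from this line of work, and the token counts and contact model are hypothetical):

# A message starts with L copy-tokens at its source. On each contact, a node holding
# more than one token hands half of them to the encountered relay; a node left with a
# single token keeps it and only delivers directly to the destination ("wait" phase).

def on_contact(my_tokens: int, peer_is_destination: bool):
    """Return (tokens I keep, tokens handed to the peer) for one encounter."""
    if peer_is_destination:
        return 0, my_tokens           # hand everything over: the message is delivered
    if my_tokens > 1:
        give = my_tokens // 2         # binary spray: split the remaining copies
        return my_tokens - give, give
    return my_tokens, 0               # wait phase: hold the single remaining copy

print(on_contact(8, False))   # -> (4, 4)
print(on_contact(1, False))   # -> (1, 0): direct delivery only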

1,162 citations


Journal ArticleDOI
TL;DR: A hybrid MAC protocol for wireless sensor networks, called Z-MAC, that combines the strengths of TDMA and CSMA while offsetting their weaknesses: it achieves high channel utilization and low latency under low contention and reduces collisions among two-hop neighbors at a low cost.
Abstract: This paper presents the design, implementation and performance evaluation of a hybrid MAC protocol, called Z-MAC, for wireless sensor networks that combines the strengths of TDMA and CSMA while offsetting their weaknesses. Like CSMA, Z-MAC achieves high channel utilization and low latency under low contention and like TDMA, achieves high channel utilization under high contention and reduces collision among two-hop neighbors at a low cost. A distinctive feature of Z-MAC is that its performance is robust to synchronization errors, slot assignment failures, and time-varying channel conditions; in the worst case, its performance always falls back to that of CSMA. Z-MAC is implemented in TinyOS.
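
One concrete way to picture the hybrid behavior (a minimal sketch; the owner/non-owner contention windows mirror the priority idea described for Z-MAC, but the window sizes here are illustrative, not the implementation's constants):

import random

T_OWNER = 8        # owner of the current TDMA slot draws its backoff from [0, T_OWNER)
T_NONOWNER = 32    # non-owners draw from [T_OWNER, T_NONOWNER), so the owner tends to win

def initial_backoff(is_slot_owner: bool) -> int:
    """CSMA-style backoff whose range depends on TDMA slot ownership."""
    if is_slot_owner:
        return random.randrange(0, T_OWNER)
    return random.randrange(T_OWNER, T_NONOWNER)

print(initial_backoff(True), initial_backoff(False))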

762 citations


Journal ArticleDOI
TL;DR: The combined strategy is shown to yield data rates that are arbitrarily close to the optimal operating point achieved when all network controllers are coordinated and have perfect knowledge of future events.
Abstract: We consider optimal control for general networks with both wireless and wireline components and time varying channels. A dynamic strategy is developed to support all traffic whenever possible, and to make optimally fair decisions about which data to serve when inputs exceed network capacity. The strategy is decoupled into separate algorithms for flow control, routing, and resource allocation, and allows each user to make decisions independent of the actions of others. The combined strategy is shown to yield data rates that are arbitrarily close to the optimal operating point achieved when all network controllers are coordinated and have perfect knowledge of future events. The cost of approaching this fair operating point is an end-to-end delay increase for data that is served by the network.
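
For intuition, the routing/scheduling component of such strategies is typically a differential-backlog (backpressure) rule; the sketch below illustrates that general rule with made-up queue sizes, not the paper's full algorithm:

def link_weight(q_tx, q_rx):
    """q_tx/q_rx map commodity -> backlog at the link's transmitter/receiver."""
    best_w, best_c = 0, None
    for c in q_tx:
        w = q_tx[c] - q_rx.get(c, 0)    # differential backlog for commodity c
        if w > best_w:
            best_w, best_c = w, c
    return best_w, best_c               # serve best_c over this link if it is activated

print(link_weight({"a": 9, "b": 4}, {"a": 5, "b": 6}))   # -> (4, 'a')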

552 citations


Journal ArticleDOI
TL;DR: The authors' classification of failures reveals the nature and extent of failures in the Sprint IP backbone and provides a probabilistic failure model, which can be used to generate realistic failure scenarios, as input to various network design and traffic engineering problems.
Abstract: As the Internet evolves into a ubiquitous communication infrastructure and supports increasingly important services, its dependability in the presence of various failures becomes critical. In this paper, we analyze IS-IS routing updates from the Sprint IP backbone network to characterize failures that affect IP connectivity. Failures are first classified based on patterns observed at the IP layer; in some cases, it is possible to further infer their probable causes, such as maintenance activities, router-related problems, and optical-layer problems. Key temporal and spatial characteristics of each class are analyzed and, when appropriate, parameterized using well-known distributions. Our results indicate that 20% of all failures happen during a period of scheduled maintenance activities. Of the unplanned failures, almost 30% are shared by multiple links and are most likely due to router-related and optical equipment-related problems, while 70% affect a single link at a time. Our classification of failures reveals the nature and extent of failures in the Sprint IP backbone. Furthermore, our characterization of the different classes provides a probabilistic failure model, which can be used to generate realistic failure scenarios, as input to various network design and traffic engineering problems.

383 citations


Journal ArticleDOI
TL;DR: This paper presents SybilGuard, a novel protocol for limiting the corruptive influences of sybil attacks, based on the "social network" among user identities, where an edge between two identities indicates a human-established trust relationship.
Abstract: Peer-to-peer and other decentralized, distributed systems are known to be particularly vulnerable to sybil attacks. In a sybil attack, a malicious user obtains multiple fake identities and pretends to be multiple, distinct nodes in the system. By controlling a large fraction of the nodes in the system, the malicious user is able to "out vote" the honest users in collaborative tasks such as Byzantine failure defenses. This paper presents SybilGuard, a novel protocol for limiting the corruptive influences of sybil attacks. Our protocol is based on the "social network" among user identities, where an edge between two identities indicates a human-established trust relationship. Malicious users can create many identities but few trust relationships. Thus, there is a disproportionately small "cut" in the graph between the sybil nodes and the honest nodes. SybilGuard exploits this property to bound the number of identities a malicious user can create. We show the effectiveness of SybilGuard both analytically and experimentally.
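
The protocol's core primitive is the random route over the trust graph; the sketch below illustrates it on a toy four-node graph (the graph, route length, and intersection test are simplified placeholders, not the full protocol):

import random

# Toy, symmetric trust graph: an edge means a human-established trust relationship.
graph = {"v": ["a", "b"], "a": ["v", "b", "s"], "b": ["v", "a"], "s": ["a"]}

# Each node fixes a random one-to-one mapping from incoming neighbor to outgoing neighbor,
# which makes the routes convergent and back-traceable.
perm = {n: dict(zip(nbrs, random.sample(nbrs, len(nbrs)))) for n, nbrs in graph.items()}

def random_route(start, length):
    route = [start]
    prev, cur = start, random.choice(graph[start])   # first hop chosen at random
    route.append(cur)
    for _ in range(length - 1):
        prev, cur = cur, perm[cur][prev]             # exit edge determined by entry edge
        route.append(cur)
    return route

verifier_route = random_route("v", 4)
suspect_route = random_route("s", 4)
print(verifier_route, suspect_route, bool(set(verifier_route) & set(suspect_route)))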

311 citations


Journal ArticleDOI
TL;DR: A novel caching algorithm for P2P traffic, based on object segmentation and proportional partial admission and eviction of objects, is developed; its byte hit rate is close to that achieved by an off-line optimal algorithm with complete knowledge of future requests.
Abstract: Peer-to-peer (P2P) file sharing systems generate a major portion of the Internet traffic, and this portion is expected to increase in the future. We explore the potential of deploying proxy caches in different autonomous systems (ASes) with the goal of reducing the cost incurred by Internet service providers and alleviating the load on the Internet backbone. We conduct an eight-month measurement study to analyze the P2P traffic characteristics that are relevant to caching, such as object popularity, popularity dynamics, and object size. Our study shows that the popularity of P2P objects can be modeled by a Mandelbrot-Zipf distribution, and that several workloads exist in P2P traffic. Guided by our findings, we develop a novel caching algorithm for P2P traffic that is based on object segmentation, and proportional partial admission and eviction of objects. Our trace-based simulations show that with a relatively small cache size, a byte hit rate of up to 35% can be achieved by our algorithm, which is close to the byte hit rate achieved by an off-line optimal algorithm with complete knowledge of future requests. Our results also show that our algorithm achieves a byte hit rate that is at least 40% more, and at most triple, the byte hit rate of the common Web caching algorithms. Furthermore, our algorithm is robust in the face of aborted downloads, which is a common case in P2P systems.
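
For reference, the Mandelbrot-Zipf popularity model mentioned above assigns rank i a probability proportional to 1/(i + q)^alpha; a small sketch follows (the parameter values are illustrative, not the measured ones):

def mandelbrot_zipf(n_objects: int, alpha: float, q: float):
    """Return p(i) for ranks i = 1..n_objects under a Mandelbrot-Zipf popularity model."""
    weights = [1.0 / (i + q) ** alpha for i in range(1, n_objects + 1)]
    total = sum(weights)
    return [w / total for w in weights]

popularity = mandelbrot_zipf(n_objects=1000, alpha=0.8, q=20.0)
print(round(popularity[0], 4), round(popularity[99], 4))   # 1st vs. 100th most popular object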

268 citations


Journal ArticleDOI
TL;DR: A new analytical model is developed that incorporates this lack of coordination of CSMA-based random access in a multi-hop environment, identifies dominating and starving flows and accurately predicts per-flow throughput in a large-scale network and proposes metrics that quantify throughput imbalances due to the MAC protocol operation.
Abstract: Multi-hop wireless networks employing random access protocols have been shown to incur large discrepancies in the throughputs achieved by the flows sharing the network. Indeed, flow throughputs can span orders of magnitude from near starvation to many times greater than the mean. In this paper, we address the foundations of this disparity. We show that the fundamental cause is not merely differences in the number of contending neighbors, but a generic coordination problem of CSMA-based random access in a multi-hop environment. We develop a new analytical model that incorporates this lack of coordination, identifies dominating and starving flows and accurately predicts per-flow throughput in a large-scale network. We then propose metrics that quantify throughput imbalances due to the MAC protocol operation. Our model and metrics provide a deeper understanding of the behavior of CSMA protocols in arbitrary topologies and can aid the design of effective protocol solutions to the starvation problem.

243 citations


Journal ArticleDOI
TL;DR: This paper presents Cruiser, a fast and accurate P2P crawler, which can capture a complete snapshot of the Gnutella network of more than one million peers in just a few minutes, and shows how inaccuracy in snapshots can lead to erroneous conclusions--such as a power-law degree distribution.
Abstract: In recent years, peer-to-peer (P2P) file-sharing systems have evolved to accommodate growing numbers of participating peers. In particular, new features have changed the properties of the unstructured overlay topologies formed by these peers. Little is known about the characteristics of these topologies and their dynamics in modern file-sharing applications, despite their importance. This paper presents a detailed characterization of P2P overlay topologies and their dynamics, focusing on the modern Gnutella network. We present Cruiser, a fast and accurate P2P crawler, which can capture a complete snapshot of the Gnutella network of more than one million peers in just a few minutes, and show how inaccuracy in snapshots can lead to erroneous conclusions, such as a power-law degree distribution. Leveraging recent overlay snapshots captured with Cruiser, we characterize the graph-related properties of individual overlay snapshots and overlay dynamics across slices of back-to-back snapshots. Our results reveal that while the Gnutella network has dramatically grown and changed in many ways, it still exhibits the clustering and short path lengths of a small world network. Furthermore, its overlay topology is highly resilient to random peer departure and even systematic attacks. More interestingly, overlay dynamics lead to an "onion-like" biased connectivity among peers where each peer is more likely connected to peers with higher uptime. Therefore, long-lived peers form a stable core that ensures reachability among peers despite overlay dynamics.

214 citations


Journal ArticleDOI
TL;DR: This paper models the CTC problem as a maximum cover tree (MCT) problem, determines an upper bound on the network lifetime for the MCT problem, and develops a (1+w)H(M̂)-approximation algorithm to solve it, showing that the lifetime obtained is close to the upper bound.
Abstract: In this paper, we consider the connected target coverage (CTC) problem with the objective of maximizing the network lifetime by scheduling sensors into multiple sets, each of which can maintain both target coverage and connectivity among all the active sensors and the sink. We model the CTC problem as a maximum cover tree (MCT) problem and prove that the MCT problem is NP-complete. We determine an upper bound on the network lifetime for the MCT problem and then develop a (1+w)H(M̂)-approximation algorithm to solve it, where w is an arbitrarily small number, H(M̂) = 1 + 1/2 + ... + 1/M̂, and M̂ is the maximum number of targets in the sensing area of any sensor. As the protocol cost of the approximation algorithm may be high in practice, we develop a faster heuristic algorithm based on the approximation algorithm, called the Communication Weighted Greedy Cover (CWGC) algorithm, and present a distributed implementation of this heuristic. We study the performance of the approximation algorithm and the CWGC algorithm by comparing them with the lifetime upper bound and with other basic algorithms that consider the coverage and connectivity problems independently. Simulation results show that the approximation algorithm and the CWGC algorithm perform much better than the others in terms of network lifetime, and the improvement can be up to 45% over the best-known basic algorithm. The lifetime obtained by our algorithms is close to the upper bound. Compared with the approximation algorithm, the CWGC algorithm can achieve similar performance in terms of network lifetime with a lower protocol cost.
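
The bound's growth is easy to compute directly from the definition given above (a small sketch; the example value of M̂ is arbitrary):

def harmonic(m_hat: int) -> float:
    """H(M̂) = 1 + 1/2 + ... + 1/M̂, the harmonic number in the approximation ratio."""
    return sum(1.0 / i for i in range(1, m_hat + 1))

def approximation_ratio(m_hat: int, w: float = 0.01) -> float:
    """(1 + w) * H(M̂): worst-case gap between the schedule's lifetime and the optimum."""
    return (1 + w) * harmonic(m_hat)

print(round(approximation_ratio(5), 3))   # ratio when no sensor covers more than 5 targets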

213 citations


Journal ArticleDOI
TL;DR: It is proved that applying ideas from network coding allows significant benefits to be realized in terms of energy efficiency for the problem of broadcasting, and very simple algorithms are proposed that allow these benefits to be realized in practice.
Abstract: We consider the problem of broadcasting in an ad hoc wireless network, where all nodes of the network are sources that want to transmit information to all other nodes. Our figure of merit is energy efficiency, a critical design parameter for wireless networks since it directly affects battery life and thus network lifetime. We prove that applying ideas from network coding allows us to realize significant benefits in terms of energy efficiency for the problem of broadcasting, and we propose very simple algorithms that allow these benefits to be realized in practice. In particular, our theoretical analysis shows that network coding improves performance by a constant factor in fixed networks. We calculate this factor exactly for some canonical configurations. We then show that in networks where the topology dynamically changes, for example due to mobility, and where operations are restricted to simple distributed algorithms, network coding can offer improvements of a factor of log n, where n is the number of nodes in the network. We use the insights gained from the theoretical analysis to propose low-complexity distributed algorithms for realistic wireless ad hoc scenarios, discuss a number of practical considerations, and evaluate our algorithms through packet level simulation.

Journal ArticleDOI
TL;DR: A simple, low-complexity protocol, called variable-structure congestion control protocol (VCP), that leverages only the existing two ECN bits for network congestion feedback, and yet achieves comparable performance to XCP, i.e., high utilization, negligible packet loss rate, low persistent queue length, and reasonable fairness.
Abstract: Achieving efficient and fair bandwidth allocation while minimizing packet loss and bottleneck queue in high bandwidth-delay product networks has long been a daunting challenge. Existing end-to-end congestion control (e.g., TCP) and traditional congestion notification schemes (e.g., TCP+AQM/ECN) have significant limitations in achieving this goal. While the XCP protocol addresses this challenge, it requires multiple bits to encode the congestion-related information exchanged between routers and end-hosts. Unfortunately, there is no space in the IP header for these bits, and solving this problem involves a non-trivial and time-consuming standardization process. In this paper, we design and implement a simple, low-complexity protocol, called variable-structure congestion control protocol (VCP), that leverages only the existing two ECN bits for network congestion feedback, and yet achieves comparable performance to XCP, i.e., high utilization, negligible packet loss rate, low persistent queue length, and reasonable fairness. On the downside, VCP converges significantly slower to a fair allocation than XCP. We evaluate the performance of VCP using extensive ns2 simulations over a wide range of network scenarios and find that it significantly outperforms many recently-proposed TCP variants, such as HSTCP, FAST, CUBIC, etc. To gain insight into the behavior of VCP, we analyze a simplified fluid model and prove its global stability for the case of a single bottleneck shared by synchronous flows with identical round-trip times.
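
To make the two-bit feedback concrete, a minimal sketch of the load-factor quantization and the corresponding MI/AI/MD reaction (the thresholds and gain constants below are illustrative placeholders, not necessarily the values used in the paper):

def encode_load_region(load_factor: float) -> str:
    """Router side: quantize the measured link load factor into one of three regions,
    which is all that fits into the two ECN bits."""
    if load_factor < 0.8:
        return "LOW"
    if load_factor < 1.0:
        return "HIGH"
    return "OVERLOAD"

def update_cwnd(cwnd: float, region: str) -> float:
    """End-host side: react to the echoed region with multiplicative increase,
    additive increase, or multiplicative decrease."""
    if region == "LOW":
        return cwnd * 1.0625     # MI
    if region == "HIGH":
        return cwnd + 1.0        # AI
    return cwnd * 0.875          # MD

print(update_cwnd(10.0, encode_load_region(0.5)))   # -> 10.625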

Journal ArticleDOI
TL;DR: This paper designs and studies DoS attacks in order to assess the damage that difficult-to-detect attackers can cause, and quantifies via simulations and analytical modeling the scalability of DoS attacks as a function of key performance parameters such as mobility, system size, node density, and counter-DoS strategy.
Abstract: Significant progress has been made towards making ad hoc networks secure and DoS resilient. However, little attention has been focused on quantifying DoS resilience: Do ad hoc networks have sufficiently redundant paths and counter-DoS mechanisms to make DoS attacks largely ineffective? Or are there attack and system factors that can lead to devastating effects? In this paper, we design and study DoS attacks in order to assess the damage that difficult-to-detect attackers can cause. The first attack we study, called the JellyFish attack, is targeted against closed-loop flows such as TCP; although protocol compliant, it has devastating effects. The second is the black hole attack, which has effects similar to the JellyFish, but on open-loop flows. We quantify via simulations and analytical modeling the scalability of DoS attacks as a function of key performance parameters such as mobility, system size, node density, and counter-DoS strategy. One perhaps surprising result is that such DoS attacks can increase the capacity of ad hoc networks, as they starve multi-hop flows and only allow one-hop communication, a capacity-maximizing, yet clearly undesirable situation.

Journal ArticleDOI
TL;DR: It is shown that minimum latency broadcasting is NP-complete for ad hoc networks and a simple distributed collision-free broadcasting algorithm for broadcasting a message is presented.
Abstract: Network-wide broadcasting is a fundamental operation in ad hoc networks. In broadcasting, a source node sends a message to all the other nodes in the network. In this paper, we consider the problem of collision-free broadcasting in ad hoc networks. Our objective is to minimize the latency and the number of transmissions in the broadcast. We show that minimum latency broadcasting is NP-complete for ad hoc networks. We also present a simple distributed collision-free broadcasting algorithm for broadcasting a message. For networks with bounded node transmission ranges, our algorithm simultaneously guarantees that the latency and the number of transmissions are within O(1) times their respective optimal values. Our algorithm and analysis extend to the case when multiple messages are broadcast from multiple sources. Experimental studies indicate that our algorithms perform much better in practice than the analytical guarantees provided for the worst case.

Journal ArticleDOI
TL;DR: It is shown that the proposed scheme can significantly reduce the data traffic and improve the network lifetime and a distributed gradient algorithm designed accordingly can converge to the optimal value efficiently under all network configurations.
Abstract: An optimal routing and data aggregation scheme for wireless sensor networks is proposed in this paper. The objective is to maximize the network lifetime by jointly optimizing data aggregation and routing. We adopt a model to integrate data aggregation with the underlying routing scheme and present a smoothing approximation function for the optimization problem. The necessary and sufficient conditions for achieving the optimality are derived and a distributed gradient algorithm is designed accordingly. We show that the proposed scheme can significantly reduce the data traffic and improve the network lifetime. The distributed algorithm can converge to the optimal value efficiently under all network configurations.

Journal ArticleDOI
TL;DR: A Real-Time and Reliable Transport ((RT)²) protocol is presented for WSANs to reliably and collaboratively transport event features from the sensor field to the actor nodes with minimum energy dissipation and to react to sensor information in a timely manner with the right action.
Abstract: Wireless Sensor and Actor Networks (WSANs) are characterized by the collective effort of heterogeneous nodes called sensors and actors. Sensor nodes collect information about the physical world, while actor nodes take action decisions and perform appropriate actions upon the environment. The collaborative operation of sensors and actors brings significant advantages over traditional sensing, including improved accuracy, larger coverage area and timely actions upon the sensed phenomena. However, to realize these potential gains, there is a need for an efficient transport layer protocol that can address the unique communication challenges introduced by the coexistence of sensors and actors. In this paper, a Real-Time and Reliable Transport ((RT)²) protocol is presented for WSANs. The objective of the (RT)² protocol is to reliably and collaboratively transport event features from the sensor field to the actor nodes with minimum energy dissipation and to react to sensor information in a timely manner with the right action. In this respect, the (RT)² protocol simultaneously addresses congestion control and timely event transport reliability objectives in WSANs. To the best of our knowledge, this is the first research effort focusing on a real-time and reliable transport protocol for WSANs. Performance evaluations via simulation experiments show that the (RT)² protocol achieves high performance in terms of reliable event detection, communication latency and energy consumption in WSANs.

Journal ArticleDOI
TL;DR: This work studies the performance of a large dense network with one mobile relay and shows that network lifetime improves over that of a purely static network by up to a factor of four and constructs a joint mobility and routing algorithm which can yield a network lifetime close to the upper bound.
Abstract: We investigate the benefits of a heterogeneous architecture for wireless sensor networks (WSNs) composed of a few resource rich mobile relay nodes and a large number of simple static nodes. The mobile relays have more energy than the static sensors. They can dynamically move around the network and help relieve sensors that are heavily burdened by high network traffic, thus extending the latter's lifetime. We first study the performance of a large dense network with one mobile relay and show that network lifetime improves over that of a purely static network by up to a factor of four. Also, the mobile relay needs to stay only within a two-hop radius of the sink. We then construct a joint mobility and routing algorithm which can yield a network lifetime close to the upper bound. The advantage of this algorithm is that it only requires a limited number of nodes in the network to be aware of the location of the mobile relay. Our simulation results show that one mobile relay can at least double the network lifetime in a randomly deployed WSN. By comparing the mobile relay approach with various static energy-provisioning methods, we demonstrate the importance of node mobility for resource provisioning in a WSN.

Journal ArticleDOI
TL;DR: An analytic model is presented for evaluating the queueing delays and channel access times at nodes in wireless networks using the IEEE 802.11 Distributed Coordination Function (DCF) as the MAC protocol and gives closed form expressions for obtaining the delay and queue length characteristics.
Abstract: In this paper, we present an analytic model for evaluating the queueing delays and channel access times at nodes in wireless networks using the IEEE 802.11 Distributed Coordination Function (DCF) as the MAC protocol. The model can account for arbitrary arrival patterns, packet size distributions and number of nodes. Our model gives closed form expressions for obtaining the delay and queue length characteristics and models each node as a discrete time G/G/1 queue. The service time distribution for the queues is derived by accounting for a number of factors including the channel access delay due to the shared medium, impact of packet collisions, the resulting backoffs as well as the packet size distribution. The model is also extended for ongoing proposals under consideration for 802.11e wherein a number of packets may be transmitted in a burst once the channel is accessed. Our analytical results are verified through extensive simulations. The results of our model can also be used for providing probabilistic quality of service guarantees and determining the number of nodes that can be accommodated while satisfying a given delay constraint.

Journal ArticleDOI
TL;DR: This paper solves the joint power control and SIR assignment problem through distributed algorithms in the uplink of multi-cellular wireless networks, using a re-parametrization via the left Perron-Frobenius eigenvectors and a distributed algorithm that picks out a particular Pareto-optimal SIR assignment and the associated powers through utility maximization.
Abstract: This paper solves the joint power control and SIR assignment problem through distributed algorithms in the uplink of multi-cellular wireless networks. The 1993 Foschini-Miljanic distributed power control can attain a given fixed and feasible SIR target. In wireless data networks, however, SIR needs to be jointly optimized with transmit powers. In the vast research literature since the mid-1990s, solutions to this joint optimization problem are either distributed but suboptimal, or optimal but centralized. For convex formulations of this problem, we report a distributed and optimal algorithm. The main issue that has been the research bottleneck is the complicated, coupled constraint set, and we resolve it through a re-parametrization via the left Perron-Frobenius eigenvectors, followed by development of a locally computable ascent direction. A key step is a new characterization of the feasible SIR region in terms of the loads on the base stations, and an indication of the potential interference from mobile stations, which we term spillage. Based on this load-spillage characterization, we first develop a distributed algorithm that can achieve any Pareto-optimal SIR assignment, then a distributed algorithm that picks out a particular Pareto-optimal SIR assignment and the associated powers through utility maximization. Extensions to power-constrained and interference-constrained cases are carried out. The algorithms are theoretically sound and practically implementable: we present convergence and optimality proofs as well as simulations using 3GPP network and path loss models.
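
For context, the Foschini-Miljanic update the abstract refers to is easy to state; a minimal sketch follows (the two-user gain matrix, noise, and targets are made-up numbers chosen to be feasible):

# gains[i][j] is the channel gain from transmitter j to receiver i.
gains = [[1.0, 0.1], [0.2, 1.0]]
noise = [0.1, 0.1]
target = [2.0, 2.0]    # per-user SIR targets, feasible for this gain matrix
power = [1.0, 1.0]

def sir(i, power):
    interference = sum(gains[i][j] * power[j] for j in range(len(power)) if j != i)
    return gains[i][i] * power[i] / (interference + noise[i])

for _ in range(50):
    # Foschini-Miljanic: scale each power by (target SIR / currently measured SIR).
    power = [power[i] * target[i] / sir(i, power) for i in range(len(power))]

print([round(sir(i, power), 3) for i in range(2)])   # converges to the targets [2.0, 2.0]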

Journal ArticleDOI
TL;DR: The capability approach to network denial-of-service (DoS) attacks is motivated, the Traffic Validation Architecture (TVA), which builds on capabilities, is presented, and simulations together with an implementation on the Click router are used to evaluate the effectiveness and computational costs of TVA.
Abstract: We motivate the capability approach to network denial-of-service (DoS) attacks, and evaluate the Traffic Validation Architecture (TVA), which builds on capabilities. With our approach, rather than send packets to any destination at any time, senders must first obtain "permission to send" from the receiver, which provides the permission in the form of capabilities to those senders whose traffic it agrees to accept. The senders then include these capabilities in packets. This enables verification points distributed around the network to check that traffic has been authorized by the receiver and the path in between, and hence to cleanly discard unauthorized traffic. To evaluate this approach, and to understand the detailed operation of capabilities, we developed a network architecture called TVA. TVA addresses a wide range of possible attacks against communication between pairs of hosts, including spoofed packet floods, network and host bottlenecks, and router state exhaustion. We use simulations to show the effectiveness of TVA at limiting DoS floods, and an implementation on the Click router to evaluate the computational costs of TVA. We also discuss how to incrementally deploy TVA into practice.
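
A minimal sketch of a generic capability-token check in this spirit (this is not TVA's actual capability format, pre-capability exchange, or per-router state; names, fields, and lifetimes are hypothetical):

import hashlib, hmac, time

ROUTER_SECRET = b"per-router-secret"     # hypothetical secret held at a verification point

def grant_capability(src: str, dst: str, lifetime_s: int = 10):
    """Receiver side: bind a token to the sender/receiver pair and an expiry time."""
    expiry = int(time.time()) + lifetime_s
    mac = hmac.new(ROUTER_SECRET, f"{src}|{dst}|{expiry}".encode(), hashlib.sha256).digest()
    return expiry, mac

def verify(src: str, dst: str, expiry: int, mac: bytes) -> bool:
    """Verification point: drop packets whose token is missing, expired, or forged."""
    if time.time() > expiry:
        return False
    expected = hmac.new(ROUTER_SECRET, f"{src}|{dst}|{expiry}".encode(), hashlib.sha256).digest()
    return hmac.compare_digest(mac, expected)

expiry, mac = grant_capability("10.0.0.1", "10.0.0.2")
print(verify("10.0.0.1", "10.0.0.2", expiry, mac))   # -> True for authorized traffic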

Journal ArticleDOI
TL;DR: This paper studies optimization and decision versions of the multi-constrained quality-of-service (QoS) routing problem, where one seeks to find a path from a source to a destination in the presence of additive end-to-end QoS constraints, and presents algorithms whose time complexity is better than that of the best-known algorithm designed for the special case of two constraints.
Abstract: We study the multi-constrained quality-of-service (QoS) routing problem where one seeks to find a path from a source to a destination in the presence of K ≥ 2 additive end-to-end QoS constraints. This problem is NP-hard and is commonly modeled using a graph with n vertices and m edges with K additive QoS parameters associated with each edge. For the case of K = 2, the problem has been well studied, with several provably good polynomial time-approximation algorithms reported in the literature, which enforce one constraint while approximating the other. We first focus on an optimization version of the problem where we enforce the first constraint and approximate the other K - 1 constraints. We present an O(mn log log log n + mn/ε) time (1 + ε)(K - 1)-approximation algorithm and an O(mn log log log n + m(n/ε)^(K-1)) time (1 + ε)-approximation algorithm, for any ε > 0. When K is reduced to 2, both algorithms produce a (1 + ε)-approximation with a time complexity better than that of the best-known algorithm designed for this special case. We then study the decision version of the problem and present an O(m(n/ε)^(K-1)) time algorithm which either finds a feasible solution or confirms that there does not exist a source-destination path whose first weight is bounded by the first constraint and whose every other weight is bounded by (1 - ε) times the corresponding constraint. If there exists an H-hop source-destination path whose first weight is bounded by the first constraint and whose every other weight is bounded by (1 - ε) times the corresponding constraint, our algorithm finds a feasible path in O(m(H/ε)^(K-1)) time. This algorithm improves previous best-known algorithms with O((m + n log n)n/ε) time for K = 2 and O(mn(n/ε)^(K-1)) time for K ≥ 2.

Journal ArticleDOI
TL;DR: The results suggest that, under the assumption that the bootstrap server is not a bottleneck, performance does not depend critically on either altruistic user behavior or load-balancing strategies such as rarest first.
Abstract: Motivated by the study of peer-to-peer file swarming systems a la BitTorrent, we introduce a probabilistic model of coupon replication systems. These systems consist of users aiming to complete a collection of distinct coupons. Users enter the system with an initial coupon provided by a bootstrap server, acquire other coupons from other users, and leave once they complete their coupon collection. For open systems, with exogenous user arrivals, we derive a stability condition for a layered scenario, where encounters are between users holding the same number of coupons. We also consider a system where encounters are between users chosen uniformly at random from the whole population. We show that sojourn time in both systems is asymptotically optimal as the number of coupon types becomes large. We also consider closed systems with no exogenous user arrivals. In a special scenario where users have only one missing coupon, we evaluate the size of the population ultimately remaining in the system, as the initial number of users N goes to infinity. We show that this size decreases geometrically with the number of coupons K. In particular, when the ratio K/log(N) is above a critical threshold, we prove that this number of leftovers is of order log(log(N)). These results suggest that, under the assumption that the bootstrap server is not a bottleneck, the performance does not depend critically on either altruistic user behavior or load-balancing strategies such as rarest first.

Journal ArticleDOI
TL;DR: New convexity results surrounding the Shannon capacity formula are provided, allowing us to abandon suboptimal high-SIR approximations that have almost become entrenched in the literature and can be back-substituted into many existing problems for similar benefit.
Abstract: We seek distributed protocols that attain the global optimum allocation of link transmitter powers and source rates in a cross-layer design of a mobile ad hoc network. Although the underlying network utility maximization is nonconvex, convexity plays a major role in our development. We provide new convexity results surrounding the Shannon capacity formula, allowing us to abandon suboptimal high-SIR approximations that have almost become entrenched in the literature. More broadly, these new results can be back-substituted into many existing problems for similar benefit. Three protocols are developed. The first is based on a convexification of the underlying problem, relying heavily on our new convexity results. We provide conditions under which it produces a globally optimum resource allocation. We show how it may be distributed through message passing for both rate- and power-allocation. Our second protocol relaxes this requirement and involves a novel sequence of convex approximations, each exploiting existing TCP protocols for source rate allocation. Message passing is only used for power control. Our convexity results again provide sufficient conditions for global optimality. Our last protocol, motivated by a desire of power control devoid of message passing, is a near optimal scheme that makes use of noise measurements and enjoys a convergence rate that is orders of magnitude faster than existing methods.

Journal ArticleDOI
TL;DR: An approach to IP traceback based on the probabilistic packet marking paradigm, which is called randomize-and-link, uses large checksum cords to "link" message fragments in a way that is highly scalable, for the checksums serve both as associative addresses and data integrity verifiers.
Abstract: This paper presents an approach to IP traceback based on the probabilistic packet marking paradigm. Our approach, which we call randomize-and-link, uses large checksum cords to "link" message fragments in a way that is highly scalable, for the checksums serve both as associative addresses and data integrity verifiers. The main advantage of these checksum cords is that they spread the addresses of possible router messages across a spectrum that is too large for the attacker to easily create messages that collide with legitimate messages.
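
A simplified sketch of the fragment-linking idea (fragment sizes, the checksum function, and the field layout are placeholders, not the paper's exact marking encoding):

import hashlib, random

def make_marks(router_msg: bytes, n_frags: int = 4):
    """Split a router message into fragments that all carry the same checksum 'cord'."""
    cord = hashlib.sha256(router_msg).digest()[:4]      # links fragments of one message
    size = len(router_msg) // n_frags
    return [(cord, i, router_msg[i * size:(i + 1) * size]) for i in range(n_frags)]

def reassemble(marks):
    """Victim side: group fragments by cord, reorder by index, and verify the checksum."""
    by_cord = {}
    for cord, idx, frag in marks:
        by_cord.setdefault(cord, {})[idx] = frag
    out = []
    for cord, frags in by_cord.items():
        msg = b"".join(frags[i] for i in sorted(frags))
        if hashlib.sha256(msg).digest()[:4] == cord:     # integrity check doubles as the link
            out.append(msg)
    return out

marks = make_marks(b"router-10.0.0.1+edge-id!")   # 24 bytes -> 4 fragments of 6 bytes
random.shuffle(marks)                              # fragments arrive in arbitrary packets
print(reassemble(marks))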

Journal ArticleDOI
TL;DR: It is found that if the curvature, which defines the extent of the bending, is selected in an adequate range, the accuracy of Internet distance embedding can be improved, and a new efficient centralized embedding algorithm is presented that enables the accurate embedding of short distances.
Abstract: Estimating distances in the Internet has been studied in the recent years due to its ability to improve the performance of many applications, e.g., in the peer-to-peer realm. One scalable approach to estimate distances between nodes is to embed the nodes in some d dimensional geometric space and to use the pair distances in this space as the estimate for the real distances. Several algorithms were suggested in the past to do this in low dimensional Euclidean spaces. It was noted in recent years that the Internet structure has a highly connected core and long stretched tendrils, and that most of the routing paths between nodes in the tendrils pass through the core. Therefore, we suggest in this work to embed the Internet distance metric in a hyperbolic space where routes are bent toward the center. We found that if the curvature, which defines the extent of the bending, is selected in an adequate range, the accuracy of Internet distance embedding can be improved. We demonstrate the strength of our hyperbolic embedding with two applications: selecting the closest server and building an application level multicast tree. For the latter, we present a distributed algorithm for building geometric multicast trees that achieve good trade-offs between delay (stretch) and load (stress). We also present a new efficient centralized embedding algorithm that enables the accurate embedding of short distances, something that has not been done before.
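
As one concrete instance of a hyperbolic metric such an embedding could use (a sketch using the standard Poincare-disk distance; the paper's exact space, dimension, and curvature parametrization may differ):

import math

def poincare_distance(u, v):
    """Hyperbolic distance between two points inside the unit disk/ball."""
    du = sum((a - b) ** 2 for a, b in zip(u, v))
    nu = 1.0 - sum(a * a for a in u)
    nv = 1.0 - sum(b * b for b in v)
    return math.acosh(1.0 + 2.0 * du / (nu * nv))

# Hypothetical coordinates produced by an embedding; their distance is the estimate.
print(round(poincare_distance((0.1, 0.2), (0.7, -0.3)), 3))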

Journal ArticleDOI
TL;DR: This paper proposes two probabilistic power adaptation algorithms and analyzes their theoretical properties along with their numerical behavior, and approximates the discrete power control iterations by an equivalent ordinary differential equation to prove that the proposed stochastic learning power control algorithm converges to a stable Nash equilibrium.
Abstract: Distributed power control is an important issue in wireless networks. Recently, noncooperative game theory has been applied to investigate interesting solutions to this problem. The majority of these studies assumes that the transmitter power level can take values in a continuous domain. However, recent trends such as the GSM standard and Qualcomm's proposal to the IS-95 standard use a finite number of discretized power levels. This motivates the need to investigate solutions for distributed discrete power control, which is the primary objective of this paper. We first note that, by simply discretizing, the previously proposed continuous power adaptation techniques will not suffice. This is because a simple discretization does not guarantee convergence and uniqueness. We propose two probabilistic power adaptation algorithms and analyze their theoretical properties along with their numerical behavior. The distributed discrete power control problem is formulated as an N-person, nonzero sum game. In this game, each user evaluates a power strategy by computing a utility value. This evaluation is performed using a stochastic iterative procedure. We approximate the discrete power control iterations by an equivalent ordinary differential equation to prove that the proposed stochastic learning power control algorithm converges to a stable Nash equilibrium. Conditions when more than one stable Nash equilibrium or even only mixed equilibria may exist are also studied. Experimental results are presented for several cases and compared with the continuous power level adaptation solutions.

Journal ArticleDOI
TL;DR: This paper contains an introduction to the problem field of geographic routing, a specific routing algorithm based on a synthesis of the greedy forwarding and face routing approaches, and an algorithmic analysis of the presented algorithm from both a worst-case and an average-case perspective.
Abstract: The one type of routing in ad hoc and sensor networks that currently appears to be most amenable to algorithmic analysis is geographic routing. This paper contains an introduction to the problem field of geographic routing, presents a specific routing algorithm based on a synthesis of the greedy forwarding and face routing approaches, and provides an algorithmic analysis of the presented algorithm from both a worst-case and an average-case perspective.
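
A minimal sketch of the greedy-forwarding half of such a combined algorithm (coordinates and neighbor tables are hypothetical; the face-routing recovery step is only indicated by a comment):

import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def greedy_next_hop(pos, neighbors, dest):
    """Forward to the neighbor geographically closest to the destination, if it makes progress."""
    best = min(neighbors, key=lambda n: dist(neighbors[n], dest))
    if dist(neighbors[best], dest) < dist(pos, dest):
        return best
    return None   # local minimum: a full algorithm would switch to face routing here

nbrs = {"a": (1.0, 1.0), "b": (2.0, 0.5)}
print(greedy_next_hop((0.0, 0.0), nbrs, dest=(3.0, 0.0)))   # -> 'b'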

Journal ArticleDOI
TL;DR: A "relative quality" metric (rPSNR) is introduced that bypasses the problem of assessing the quality of video transmitted over IP networks by measuring video quality against a quality benchmark that the network is expected to provide.
Abstract: This paper investigates the problem of assessing the quality of video transmitted over IP networks. Our goal is to develop a methodology that is both reasonably accurate and simple enough to support the large-scale deployments that the increasing use of video over IP are likely to demand. For that purpose, we focus on developing an approach that is capable of mapping network statistics, e.g., packet losses, available from simple measurements, to the quality of video sequences reconstructed by receivers. A first step in that direction is a loss-distortion model that accounts for the impact of network losses on video quality, as a function of application-specific parameters such as video codec, loss recovery technique, coded bit rate, packetization, video characteristics, etc. The model, although accurate, is poorly suited to large-scale, on-line monitoring, because of its dependency on parameters that are difficult to estimate in real-time. As a result, we introduce a "relative quality" metric (rPSNR) that bypasses this problem by measuring video quality against a quality benchmark that the network is expected to provide. The approach offers a lightweight video quality monitoring solution that is suitable for large-scale deployments. We assess its feasibility and accuracy through extensive simulations and experiments.

Journal ArticleDOI
TL;DR: A dynamic queue-length aware algorithm is constructed that maximizes throughput and achieves an average delay that is independent of N, the first order-optimal delay result for opportunistic scheduling with asymmetric links.
Abstract: We consider a one-hop wireless network with independent time varying ON/OFF channels and N users, such as a multi-user uplink or downlink. We first show that general classes of scheduling algorithms that do not consider queue backlog must incur average delay that grows at least linearly with N. We then construct a dynamic queue-length aware algorithm that maximizes throughput and achieves an average delay that is independent of N. This is the first order-optimal delay result for opportunistic scheduling with asymmetric links. The delay bounds are achieved via a technique of queue grouping together with Lyapunov drift and statistical multiplexing concepts.

Journal ArticleDOI
TL;DR: This paper presents a general methodology for building comprehensive behavior profiles of Internet backbone traffic in terms of communication patterns of end-hosts and services, relying on data mining and entropy-based techniques.
Abstract: Recent spates of cyber-attacks and frequent emergence of applications affecting Internet traffic dynamics have made it imperative to develop effective techniques that can extract, and make sense of, significant communication patterns from Internet traffic data for use in network operations and security management. In this paper, we present a general methodology for building comprehensive behavior profiles of Internet backbone traffic in terms of communication patterns of end-hosts and services. Relying on data mining and entropy-based techniques, the methodology consists of significant cluster extraction, automatic behavior classification and structural modeling for in-depth interpretive analyses. We validate the methodology using data sets from the core of the Internet.
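
As a small illustration of the entropy-based ingredient (a sketch with made-up flow records; the methodology's actual features, cluster extraction, and structural modeling steps are not reproduced here):

import math
from collections import Counter

dst_ports = [80, 80, 80, 443, 22, 80, 443, 8080]   # hypothetical flows from one source host

counts = Counter(dst_ports)
total = sum(counts.values())
probs = [c / total for c in counts.values()]
entropy = -sum(p * math.log2(p) for p in probs)
normalized = entropy / math.log2(len(counts)) if len(counts) > 1 else 0.0
print(round(normalized, 3))   # near 0 = concentrated on one port, near 1 = spread uniformly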