
Showing papers presented at "International Conference on Computer Communications and Networks" in 2004


Proceedings Article•DOI•
11 Oct 2004
TL;DR: This article designs a centralized approximation algorithm that delivers a near-optimal (within a factor of O(lg n)) solution, and presents a distributed version of the algorithm.
Abstract: In overdeployed sensor networks, one approach to conserve energy is to keep only a small subset of sensors active at any instant. In this article, we consider the problem of selecting a minimum size connected K-cover, which is defined as a set of sensors M such that each point in the sensor network is "covered" by at least K different sensors in M, and the communication graph induced by M is connected. For the above optimization problem, we design a centralized approximation algorithm that delivers a near-optimal (within a factor of O(lg n)) solution, and present a distributed version of the algorithm. We also present a communication-efficient localized distributed algorithm which is empirically shown to perform well

323 citations
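The abstract names only the greedy O(lg n) framework; the toy sketch below illustrates a greedy K-cover selection step of that flavor, with the connectivity requirement deliberately omitted and all positions, sensing ranges and the value of K chosen purely for illustration.

```python
# Toy greedy K-cover selection (connectivity constraint omitted for brevity).
# Coordinates, sensing range and K are illustrative assumptions, not the paper's.
import random, math

def greedy_k_cover(sensors, points, sensing_range, k):
    """Pick sensors until every point is covered by at least k chosen sensors."""
    def covers(s, p):
        return math.dist(s, p) <= sensing_range
    need = {p: k for p in points}            # remaining coverage each point needs
    chosen = []
    remaining = set(range(len(sensors)))
    while any(v > 0 for v in need.values()) and remaining:
        # Greedy rule: pick the sensor that reduces total uncovered demand most.
        best = max(remaining,
                   key=lambda i: sum(1 for p in points
                                     if need[p] > 0 and covers(sensors[i], p)))
        gain = sum(1 for p in points if need[p] > 0 and covers(sensors[best], p))
        if gain == 0:
            break                            # no remaining sensor helps any further
        chosen.append(best)
        remaining.discard(best)
        for p in points:
            if covers(sensors[best], p) and need[p] > 0:
                need[p] -= 1
    return chosen

random.seed(1)
sensors = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(60)]
points = [(x + 0.5, y + 0.5) for x in range(10) for y in range(10)]
print(len(greedy_k_cover(sensors, points, sensing_range=2.5, k=2)), "sensors selected")
```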


Proceedings Article•DOI•
11 Oct 2004
TL;DR: An email worm model is presented that accounts for the behaviors of email users by considering email checking time and the probability of opening email attachments, and suggests that the node degrees of an email network are heavy-tailed distributed.
Abstract: Email worms constitute one of the major Internet security problems. In this paper, we present an email worm model that accounts for the behaviors of email users by considering email checking time and the probability of opening email attachments. Email worms spread over a logical network defined by email address relationship, which plays an important role in determining the spreading dynamics of an email worm. Our observations suggest that the node degrees of an email network are heavy-tailed distributed. We compare email worm propagation on three topologies: power law, small world and random graph topologies; and then study how the topology affects immunization defense on email worms. The impact of the power law topology on the spread of email worms is mixed: email worms spread more quickly on a power law topology than on a small world topology or a random graph topology, but immunization defense is more effective on a power law topology than on the other two

140 citations


Proceedings Article•DOI•
11 Oct 2004
TL;DR: This study applies the classical SIS model and a modification of the SIR model to simulate worm propagation in two different network topologies, and shows that the time to infect a large portion of the network varies significantly depending on where the infection begins.
Abstract: There has been a constant barrage of worms over the Internet during the recent past. Besides threatening network security, these worms cause an enormous economic burden in terms of loss of productivity at the victim hosts. In addition, these worms create unnecessary network data traffic that causes network congestion, thereby hurting all users. To develop appropriate tools for thwarting quick spread of worms, researchers are trying to understand the behavior of worm propagation with the aid of epidemiological models. In this study, we apply the classical SIS model and a modification of the SIR model to simulate worm propagation in two different network topologies. Whereas in the SIR model a node becomes permanently immune once it is cured after infection, our modification allows this immunity to be temporary, since the cured nodes may again become infected, possibly with a different strain of the same worm. The simulation study also shows that the time to infect a large portion of the network varies significantly depending on where the infection begins. This information could be usefully employed to choose hosts for quarantine to delay worm propagation to the rest of the network.

102 citations
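For readers unfamiliar with these epidemic models, the following minimal SIS-style simulation on a synthetic random graph shows the kind of experiment the abstract describes; the graph, infection probability `beta`, cure probability `delta` and starting node are placeholder assumptions, not the paper's settings.

```python
# Minimal SIS-style worm spread simulation on a synthetic random graph.
# Topology and probabilities are illustrative assumptions only.
import random

def make_random_graph(n, p, rng):
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j); adj[j].add(i)
    return adj

def simulate_sis(adj, beta, delta, start, steps, rng):
    infected = {start}
    history = []
    for _ in range(steps):
        new_inf, cured = set(), set()
        for u in infected:
            for v in adj[u]:
                if v not in infected and rng.random() < beta:
                    new_inf.add(v)
            if rng.random() < delta:          # cured nodes stay susceptible (SIS)
                cured.add(u)
        infected = (infected | new_inf) - cured
        history.append(len(infected))
    return history

rng = random.Random(42)
g = make_random_graph(n=500, p=0.02, rng=rng)
print(simulate_sis(g, beta=0.05, delta=0.02, start=0, steps=30, rng=rng))
```

Re-running with different `start` nodes gives a rough feel for how strongly the initial infection point influences the time to infect a large portion of the network, which is the effect the abstract highlights.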


Proceedings Article•DOI•
11 Oct 2004
TL;DR: An analytical model is presented to predict energy consumption in saturated IEEE 802.11 single-hop ad hoc networks under ideal channel conditions, finding that the energy cost to transmit useful data increases almost linearly with the network size; and transmitting large payloads is more energy efficient under saturation conditions.
Abstract: This paper presents an analytical model to predict energy consumption in saturated IEEE 802.11 single-hop ad hoc networks under ideal channel conditions. The model we introduce takes into account the different operational modes of the IEEE 802.11 DCF MAC, and is validated against packet-level simulations. In contrast to previous works that attempted to characterize the energy consumption of IEEE 802.11 cards in isolated, contention-free channels (i.e., single sender/receiver pair), this paper investigates the extreme opposite case, i.e., when nodes need to contend for channel access under saturation conditions. In such scenarios, our main findings include: (1) contrary to what most previous results indicate, the radio's transmit mode has marginal impact on overall energy consumption, while other modes (receive, idle, etc.) are responsible for most of the energy consumed; (2) the energy cost to transmit useful data increases almost linearly with the network size; and (3) transmitting large payloads is more energy efficient under saturation conditions

86 citations
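The headline findings rest on per-mode energy accounting; the back-of-the-envelope sketch below illustrates that bookkeeping only, with made-up power draws and time shares rather than the paper's model parameters or measurements.

```python
# Back-of-the-envelope per-mode energy accounting for a wireless interface.
# Power draws and the time breakdown are made-up example numbers.
POWER_W = {"transmit": 1.65, "receive": 1.40, "idle": 1.15, "sleep": 0.045}

def energy_joules(time_in_mode_s):
    """Total energy given seconds spent in each radio mode."""
    return sum(POWER_W[mode] * t for mode, t in time_in_mode_s.items())

# Example: a saturated node that transmits only 5% of the time but listens
# to (or idles through) everyone else's traffic the rest of the time.
breakdown = {"transmit": 0.05, "receive": 0.60, "idle": 0.35, "sleep": 0.0}
total = energy_joules(breakdown)
tx_share = POWER_W["transmit"] * breakdown["transmit"] / total
print(f"total {total:.3f} J per second, transmit share {tx_share:.1%}")
```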


Proceedings Article•DOI•
11 Oct 2004
TL;DR: This paper proposes using a mobile collector, such as an airplane or a vehicle, to collect sensor data from remote fields, and presents three different schedules for the collector: round-robin, rate-based, and min movement.
Abstract: This paper proposes using a mobile collector, such as an airplane or a vehicle, to collect sensor data from remote fields. We present three different schedules for the collector: round-robin, rate-based, and min movement. The data are not immediately transmitted to the base station after being sensed but are buffered at a cluster head; hence, it is important to ensure the latency is within an acceptable range. We compare the latency and energy expenditure of the three schedules. We use the ns-2 network simulator to study the scenarios and illustrate conditions under which rate-based outperforms round-robin in latency, and vice versa. The benefit of min movement is in minimizing the energy expended.

84 citations


Proceedings Article•DOI•
11 Oct 2004
TL;DR: End-to-end measurements of UDP/IP flows across an Internet backbone network demonstrate the high prevalence of packet reordering relative to packet loss, and show a strong correlation between packet rate and reordering on the network the authors studied.
Abstract: We performed end-to-end measurements of UDP/IP flows across an Internet backbone network. Using this data, we characterized the packet reordering processes seen in the network. Our results demonstrate the high prevalence of packet reordering relative to packet loss, and show a strong correlation between packet rate and reordering on the network we studied. We conclude that, given the increased parallelism in modern networks and the demands of high performance applications, new application and protocol designs should treat packet reordering on an equal footing to packet loss, and must be robust and resilient to both in order to achieve high performance

67 citations
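As a concrete illustration of one way reordering can be quantified from a received sequence-number stream (the paper's exact metric may differ), consider:

```python
# Simple reordering check over a received sequence-number stream.
# Counts a packet as reordered if it arrives with a smaller sequence number
# than some packet already seen (one of several metrics in use).
def reordering_stats(arrived_seq_numbers):
    highest_seen = -1
    reordered = 0
    for seq in arrived_seq_numbers:
        if seq < highest_seen:
            reordered += 1
        else:
            highest_seen = seq
    n = len(arrived_seq_numbers)
    return reordered, (reordered / n if n else 0.0)

# Example: packets 3 and 6 arrive after later-sent packets.
print(reordering_stats([0, 1, 2, 4, 3, 5, 7, 6, 8]))   # -> (2, 0.222...)
```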


Proceedings Article•DOI•
11 Oct 2004
TL;DR: An iterative heuristic (IMSH) is proposed that uses a modified Suurballe's heuristic to compute the least cost SRLG diverse routes between a node pair, and a 1/2-cost-improvement optimality check criterion for dedicated protection is presented.
Abstract: Survivable routing of a connection involves computation of a pair of diverse routes such that at most one route fails when failures occur in the network topology. A subset of links in the network that share the risk of failure at the same time are said to belong to a shared risk link group (SRLG) [J. Strand et al., Feb 2001]. A network with shared risk link groups defined over its links is an SRLG network. A failure of an SRLG is equivalent to the failure of all the links in the SRLG. For a connection to be survivable in an SRLG network, its working and protection paths must be routed on SRLG diverse paths. The SRLG diverse routing problem has been proved to be NP-complete by J.Q. Hu (2003). According to the quality of service requirement of a survivable connection request, dedicated protection or shared protection can be used to establish the connection request. With dedicated protection, the connection is established on both the SRLG diverse working and protection paths. The simplest heuristic for computing an SRLG diverse path pair is the two-step approach, but it suffers from the trap topology problem. In a previous study by Pin-Han Ho, an iterative heuristic (ITSH) using the two-step approach was proposed to compute the least cost SRLG diverse path pair. Suurballe's algorithm computes a pair of least cost link-disjoint paths between a node pair. In this work, we present a modified Suurballe's heuristic for computing the SRLG diverse routes between a node pair. We then propose an iterative heuristic (IMSH) which uses the modified Suurballe's heuristic for computing the least cost SRLG diverse routes. We also present a 1/2-cost-improvement optimality check criterion for dedicated protection.

59 citations
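The two-step approach criticized in the abstract can be sketched compactly; the snippet below (using networkx for brevity) finds a working path, prunes every link sharing an SRLG with it, and retries, returning `None` for the protection path on a trap topology. The topology, weights and SRLG assignments are illustrative assumptions.

```python
# Two-step heuristic for an SRLG-diverse working/protection path pair.
import networkx as nx

def two_step_srlg_diverse(g, src, dst):
    """g: networkx Graph whose edges carry 'weight' and a set 'srlgs'."""
    work = nx.shortest_path(g, src, dst, weight="weight")
    used_srlgs = set()
    for u, v in zip(work, work[1:]):
        used_srlgs |= g[u][v]["srlgs"]
    pruned = g.copy()
    pruned.remove_edges_from(
        [(u, v) for u, v, d in g.edges(data=True) if d["srlgs"] & used_srlgs])
    try:
        prot = nx.shortest_path(pruned, src, dst, weight="weight")
    except nx.NetworkXNoPath:
        prot = None                          # trap topology: two-step got stuck
    return work, prot

g = nx.Graph()
edges = [("A", "B", 1, {1}), ("B", "D", 1, {2}), ("A", "C", 2, {3}),
         ("C", "D", 2, {4}), ("B", "C", 1, {1})]
for u, v, w, srlgs in edges:
    g.add_edge(u, v, weight=w, srlgs=srlgs)
print(two_step_srlg_diverse(g, "A", "D"))
```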


Proceedings Article•DOI•
11 Oct 2004
TL;DR: A new version of a multiobjective multicast routing algorithm for traffic-engineering, based on the strength Pareto evolutionary algorithm (SPEA), which simultaneously optimizes the maximum link utilization, the cost of the tree, the maximum end-to-end delay and the average delay is presented.
Abstract: This paper presents a new version of a multiobjective multicast routing algorithm (MMA) for traffic-engineering, based on the strength Pareto evolutionary algorithm (SPEA), which simultaneously optimizes the maximum link utilization, the cost of the tree, the maximum end-to-end delay and the average delay. In this way, a set of optimal solutions, known as Pareto set, is calculated in only one run, without a priori restrictions. Simulation results show that MMA is able to find Pareto optimal solutions. They also show that for dynamic multicast routing, where the traffic requests arrive one after another, MMA outperforms other known algorithms

55 citations


Proceedings Article•DOI•
Chang Liu, Lu Ruan
11 Oct 2004
TL;DR: Simulation study showed that the cycles generated by the algorithm can lead to near optimal solutions when used by either ILP or the heuristic algorithm.
Abstract: An important problem in p-cycle network design is to find a set of p-cycles to protect a given working capacity distribution so that the total spare capacity used by the p-cycles is minimized. Existing approaches for solving the problem include ILP and heuristic algorithm. Both require a set of candidate p-cycles to be precomputed. In this paper, we propose an algorithm to compute a small set of candidate p-cycles that can lead to good performance when used by ILP or the heuristic algorithm. The key idea of the algorithm is to generate a combination of high efficiency cycles and short cycles so that both densely distributed and sparsely distributed working capacities can be efficiently protected by the candidate cycles. The algorithm is also flexible in that the number of cycles generated is controlled by an input parameter. Simulation study showed that the cycles generated by our algorithm can lead to near optimal solutions when used by either ILP or the heuristic algorithm

53 citations


Proceedings Article•DOI•
11 Oct 2004
TL;DR: A genetic-algorithm-based neighbor-selection strategy for hybrid peer-to-peer networks that enhances the decision process performed at the tracker for transfer coordination and can significantly improve the efficiency of distribution, especially for low-connectivity peers.
Abstract: BitTorrent is a popular, open-source, hybrid peer-to-peer content distribution system that is conducive for distribution of large-volume contents. In this paper, we propose a genetic-algorithm-based neighbor-selection strategy for hybrid peer-to-peer networks, which enhances the decision process performed at the tracker for transfer coordination. We also investigate how the strategy affects system throughput and distribution efficiency as well as peer contributions. We show through computer simulations that by increasing content availability to the clients from their immediate neighbors, it can significantly improve the system performance without trading off users' satisfaction. The proposed strategy can significantly improve the efficiency of distribution, especially for low-connectivity peers, and it is suitable to deploy for online decisions.

51 citations


Proceedings Article•DOI•
11 Oct 2004
TL;DR: Two techniques to estimate the user location in the continuous physical space, namely the center of mass technique and time averaging technique are presented and can be applied to any of the current WLAN location determination systems to enhance their accuracy.
Abstract: WLAN location determination systems add to the value of a wireless network by providing the user location without using any extra hardware. Current systems return the estimated user location from a set of discrete locations in the area of interest, which limits the accuracy of such systems to how far apart the selected points are. In this paper, we present two techniques to estimate the user location in the continuous physical space, namely the center of mass technique and time averaging technique. We test the performance of the two techniques in the context of the Horus WLAN location determination system under two different testbeds. Using the center of mass technique, the performance of the Horus system is enhanced by more than 13% for the first testbed and more than 6% for the second testbed. The time-averaging technique enhances the performance of the Horus system by more than 24% for the first testbed and more than 15% for the second testbed. The techniques are general and can be applied to any of the current WLAN location determination systems to enhance their accuracy. Moreover, the two techniques are independent and can be applied together to further enhance the accuracy of the current WLAN location determination systems
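A minimal sketch of the two post-processing ideas, assuming a discrete estimator that returns weighted candidate locations (the weights and coordinates below are made up, not Horus outputs):

```python
# (1) Center of mass over the top candidate locations, weighted by their
#     estimated likelihood; (2) sliding-window time average of per-sample estimates.
def center_of_mass(candidates):
    """candidates: list of ((x, y), weight) pairs from a discrete estimator."""
    total = sum(w for _, w in candidates)
    x = sum(p[0] * w for p, w in candidates) / total
    y = sum(p[1] * w for p, w in candidates) / total
    return (x, y)

def time_average(estimates, window):
    """Average the last `window` location estimates."""
    recent = estimates[-window:]
    n = len(recent)
    return (sum(p[0] for p in recent) / n, sum(p[1] for p in recent) / n)

top3 = [((2.0, 4.0), 0.6), ((3.0, 4.0), 0.3), ((2.0, 5.0), 0.1)]
print(center_of_mass(top3))                     # continuous-space estimate
history = [(2.3, 4.1), (2.5, 4.0), (2.4, 4.3)]
print(time_average(history, window=3))
```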

Proceedings Article•DOI•
11 Oct 2004
TL;DR: A comparison study on MPEG vs wavelet encoded video traces for one hour movie excerpts with rate control allows for the evaluation of long range dependency and self similarity of the generated video traffic, which has not been studied before in the context of comparing DCT and wavelet-based encoding.
Abstract: Wavelet-based encoding is now emerging as an efficient way to encode video for streaming over the Internet and for wireless applications. Wavelet-based video coding has recently been added to the JPEG-2000 video standards. As wavelet encoded video emerges as the next generation video encoding method, it is vital to compare the efficiency of wavelet encoded video against the widely used DCT-based MPEG encoded video. However, due to the lack of long wavelet encoded video streams, most research has so far been based on short video traces. This paper presents a comparison study of MPEG vs. wavelet encoded video traces for one-hour movie excerpts with rate control. These long video sequences allow for the evaluation of long range dependency and self similarity of the generated video traffic, which has not been studied before in the context of comparing DCT and wavelet-based encoding. We focus on the elementary as well as self-similar traffic characteristics of the encoded video. A hump behavior in the variability of frame sizes is observed for increasing video bit rates for both wavelet and MPEG encoded video. In addition, the quality characteristics of the encoded video are examined and related to the traffic. Our results indicate that wavelet encoded video results in higher video quality than MPEG encoded video. For the frame size variability we find different characteristics depending on the aggregation level for a given data rate. The results also indicate that the variation of quality resulting from the wavelet encoding is lower than for the MPEG encoded video.
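Self-similarity of such traces is commonly gauged with the aggregated-variance (variance-time) method, sketched below on a synthetic stand-in trace; whether the paper uses this particular estimator is not stated in the abstract.

```python
# Aggregated-variance estimate of the Hurst parameter: Var of the m-aggregated
# series scales roughly like m^(2H-2), so the log-log slope gives H.
import math, random

def hurst_aggregated_variance(series, levels):
    xs, ys = [], []
    for m in levels:
        blocks = [sum(series[i:i + m]) / m
                  for i in range(0, len(series) - m + 1, m)]
        mean = sum(blocks) / len(blocks)
        var = sum((b - mean) ** 2 for b in blocks) / len(blocks)
        xs.append(math.log(m)); ys.append(math.log(var))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 1 + slope / 2          # slope = 2H - 2

rng = random.Random(0)
trace = [rng.gauss(1000, 200) for _ in range(20000)]   # i.i.d. -> H ~ 0.5
print(round(hurst_aggregated_variance(trace, [1, 2, 4, 8, 16, 32, 64]), 2))
```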

Proceedings Article•DOI•
01 Dec 2004
TL;DR: This work considers the problem of minimizing the differential delay in a virtually concatenated Ethernet over SONET (EoS) system by suitable path selection by proposing two algorithms based on a modified link metric that linearly combines the original link weight and the inverse of that weight.
Abstract: We consider the problem of minimizing the differential delay in a virtually concatenated Ethernet over SONET (EoS) system by suitable path selection. The link capacity adjustment scheme (LCAS) enables network service providers to dynamically add STS-n channels to or drop them from a virtually concatenated group (VCG). A new STS-n channel can be added to the VCG provided that the differential delay between the new STS-n channel and the existing STS-n channels in the VCG is within a certain bound that reflects the available memory buffer supported by the EoS system. We model the problem of finding such an STS-n channel as a constrained path selection problem where the cost of the required (feasible) path is constrained not only by an upper bound but also by a lower bound. We propose two algorithms to find such a path. Algorithm I uses the well-known k-shortest-path algorithm. Algorithm II is based on a modified link metric that linearly combines the original link weight (the link delay) and the inverse of that weight. The theoretical properties of such a metric are studied and used to develop a highly efficient heuristic for path selection. Simulations are conducted to evaluate the performance of both algorithms in terms of the miss rate and the execution time (average computational complexity).
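One plausible reading of Algorithm II's metric is a weighted sum alpha*delay + beta/delay; the sketch below applies such a metric with networkx and then checks the differential-delay constraint. The coefficients, topology, delays and bounds are illustrative assumptions only.

```python
# Path selection under a combined metric alpha*delay + beta/delay, followed by
# a differential-delay feasibility check against the VCG's existing members.
import networkx as nx

def path_with_combined_metric(g, src, dst, alpha, beta):
    for u, v, d in g.edges(data=True):
        d["combined"] = alpha * d["delay"] + beta / d["delay"]
    return nx.shortest_path(g, src, dst, weight="combined")

def differential_delay_ok(g, path, existing_delay, bound):
    """Check the new member's delay stays within `bound` of the VCG's delay."""
    delay = sum(g[u][v]["delay"] for u, v in zip(path, path[1:]))
    return abs(delay - existing_delay) <= bound, delay

g = nx.Graph()
for u, v, delay in [("A", "B", 2.0), ("B", "D", 2.5), ("A", "C", 0.5),
                    ("C", "D", 0.5), ("B", "C", 1.0)]:
    g.add_edge(u, v, delay=delay)
p = path_with_combined_metric(g, "A", "D", alpha=1.0, beta=1.0)
print(p, differential_delay_ok(g, p, existing_delay=4.0, bound=1.5))
```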

Proceedings Article•DOI•
11 Oct 2004
TL;DR: This work performs four studies designed to highlight the various strengths and weaknesses of the protocols: node density, traffic/congestion, mobility, and a combination study examining all three parameters together.
Abstract: This work classifies the current geocast routing protocols of a mobile ad hoc network (MANET) into three categories. We then simulate a typical geocast routing protocol in each category. We performed four studies designed to highlight the various strengths and weaknesses of the protocols: node density, traffic/congestion, mobility, and a combination study examining all three parameters together.

Proceedings Article•DOI•
Yong Liu, A.L.N. Reddy
11 Oct 2004
TL;DR: A simple extension to the current link state protocols, OSPF or IS-IS, is proposed that can route around failures faster and involves a minimal number of routers for rerouting.
Abstract: Most current backbone networks use a link-state protocol, OSPF or IS-IS, as their intra-domain routing protocol. Link-state protocols perform a global routing table update to route around failures, which usually takes seconds. As real-time applications like VoIP have emerged in recent years, there is a requirement for a fast rerouting mechanism to route around failures before all routers on the network update their routing tables. In addition, fast rerouting is more appropriate than a global routing table update when failures are transient. We propose such a fast rerouting extension for link-state protocols. In our approach, when a link fails, the affected traffic is rerouted along a pre-computed rerouting path. In case rerouting cannot be done locally, the local router signals the minimal number of upstream routers needed to set up the rerouting path. We propose algorithms that simplify the rerouting operation and the rerouting path setup. With a simple extension to the current link-state protocols, our scheme can route around failures faster and involves a minimal number of routers for rerouting.

Proceedings Article•DOI•
11 Oct 2004
TL;DR: Simulation-based performance analysis indicates that the proposed proxy-RED scheme improves overall performance of the network; it significantly reduces the packet loss rate and improves goodput for a small buffer, and minimizes delay for a large buffer size.
Abstract: Wireless access points act as bridges between wired and wireless networks. Since the actually available bandwidth in wireless networks is much smaller than the bandwidth in wired networks, there is a disparity in channel capacity which makes the access point a significant network congestion point in the downstream direction. A current architectural trend in wireless local area networks (WLAN) is to move functionality from access points to a centralized gateway in order to reduce cost and improve features. In this paper, we study the use of RED, a well known active queue management (AQM) scheme, and explicit congestion notification (ECN) to handle the bandwidth disparity between the wired and the wireless interface of an access point. Then, we propose the proxy-RED scheme as a solution for reducing the AQM overhead at the access point. Simulation-based performance analysis indicates that the proposed proxy-RED scheme improves overall performance of the network. In particular, the proxy-RED scheme significantly reduces the packet loss rate and improves goodput for a small buffer, and minimizes delay for a large buffer size.

Proceedings Article•DOI•
11 Oct 2004
TL;DR: This paper proposes an idealized deterministic edge append scheme in which it is assumed that the IP header can be modified to include the marking option field of fixed size and a deterministic pipelined packet marking scheme that is backward compatible with IPv4.
Abstract: Recently, several schemes have been proposed for IP traffic source identification for tracing attacks that employ source address spoofing, such as denial of service (DoS) attacks. Most of these schemes are based on packet marking (i.e., augmenting IP packets with partial path information). A major challenge to packet marking schemes is the limited space available in the IP header for marking purposes. In this paper, we focus on this issue and propose topology-based encoding schemes supported by real Internet measurements. In particular, we propose an idealized deterministic edge append scheme in which we assume that the IP header can be modified to include a marking option field of fixed size. Also, we propose a deterministic pipelined packet marking scheme that is backward compatible with IPv4 (i.e., no IP header modification). The validity of both schemes depends directly on the statistical information that we extract from large data sets that represent Internet maps. Our studies show that it is possible to encode an entire path using 52 bits.

Proceedings Article•DOI•
11 Oct 2004
TL;DR: The authors' admission control scheme is extended by using a variable service interval to improve efficiency by about 20%-30% and avoid over-guaranteeing packet delay, while the packet loss rate of VBR traffic can still be guaranteed.
Abstract: The IEEE 802.11 working group is currently working on the IEEE 802.11e standard and introduces the hybrid coordination function (HCF) to provide better QoS support to real-time traffic. A reference design of a simple scheduling and admission control algorithm is proposed in a TGe consensus proposal. However, this scheduling and admission control unit only considers the mean data rate and mean packet size. The rate and packet size variation are not taken into account. Thus, it is only efficient for CBR traffic, and the packet loss rate of VBR traffic may be very high. In W. F. Fan et al. (Aug. 2004), we analyzed the packet loss rate of the reference scheme and proposed a new method to determine the effective TXOP duration for admission control so that the packet loss rate of VBR flows can be guaranteed. In both the reference and our proposed scheme, all stations use a fixed scheduled service interval (SI), which is the minimum of the maximum service intervals of all admitted flows. Thus, the maximum packet delay of all stations is limited by the most stringent SI, and some traffic with larger delay bounds may be over-guaranteed. Also, the efficiency of the admission control scheme in W. F. Fan et al. (Aug. 2004) becomes lower than that of the reference one. In this paper, we extend our admission control scheme by using a variable service interval to improve efficiency by about 20%-30% and avoid over-guaranteeing packet delay. Also, the packet loss rate of VBR traffic can be guaranteed.
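For context, a simplified version of the TGe reference admission test (as it is commonly described) is sketched below: each flow's TXOP is sized from its mean rate and nominal packet size only, which is exactly why VBR traffic suffers. Overheads, rates and the HCCA budget are placeholder numbers, not values from the standard or the paper.

```python
# Simplified reference-style HCCA admission test based on mean rates only.
import math

def txop(mean_rate_bps, nominal_msdu_bits, phy_rate_bps, si_s, overhead_s):
    n = math.ceil(si_s * mean_rate_bps / nominal_msdu_bits)   # packets per SI
    return n * (nominal_msdu_bits / phy_rate_bps + overhead_s)

def admit(new_flow, admitted, si_s, controlled_access_fraction):
    """Admit if total scheduled time per SI stays within the HCCA budget."""
    used = sum(txop(*f, si_s=si_s, overhead_s=1e-4) for f in admitted)
    want = txop(*new_flow, si_s=si_s, overhead_s=1e-4)
    return (used + want) / si_s <= controlled_access_fraction

# flow = (mean rate in b/s, nominal MSDU size in bits, PHY rate in b/s)
voice = (96_000, 1280, 11_000_000)
video = (3_200_000, 12_000, 11_000_000)
print(admit(video, admitted=[voice] * 5, si_s=0.05,
            controlled_access_fraction=0.6))
```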

Proceedings Article•DOI•
11 Oct 2004
TL;DR: By using autonomous system information effectively, structured peer-to-peer networks can achieve lookup performance approaching that based on proximity neighbor selection, but with much less network traffic.
Abstract: With the rise of peer-to-peer networks, two problems have become prominent: (1) significant network traffic among peers, to probe the latency among those peers to improve lookup performance; (2) the need for increased information flow across protocol layer boundaries, to allow for cross-layer adaptations. The particular work here focuses on improvements in structured peer-to-peer networks based on autonomous system information. We find that by using autonomous system information effectively we can achieve lookup performance approaching that based on proximity neighbor selection, but with much less network traffic. We also demonstrate improvements in replication in structured peer-to-peer networks using AS topology and scoping information. Finally, we review this approach in the context of network architecture

Proceedings Article•DOI•
11 Oct 2004
TL;DR: ADCA is proposed, a high-performance MAC that works with high-capacity physical layer that exploits two ideas of adaptive batch transmission and opportunistic selection of high-rate hosts to simultaneously reduce the overhead and improve the aggregate throughput.
Abstract: The next-generation wireless technologies, e.g., 802.11n and 802.15.3a, offer a physical-layer speed at least an order of magnitude higher than the current standards. However, direct application of current MACs leads to high protocol overhead and significant throughput degradation. In this paper, we propose ADCA, a high-performance MAC that works with a high-capacity physical layer. ADCA exploits two ideas, adaptive batch transmission and opportunistic selection of high-rate hosts, to simultaneously reduce the overhead and improve the aggregate throughput. It opportunistically favors high-rate hosts by providing higher access probability and more access time, while ensuring each low-rate host a certain minimum amount of channel access time. Simulations show that the ADCA design increases the throughput by 112% and reduces the average delay by 55% compared with the legacy DCF. It delivers more than 100 Mbps MAC-layer throughput as compared with 35 Mbps offered by the legacy MAC.

Proceedings Article•DOI•
Le Zou, Mi Lu, Zixiang Xiong
11 Oct 2004
TL;DR: A novel algorithm, named partial-partition avoiding geographic routing (PAGER), is proposed to solve the dead-end problem of location-based routing in sensor networks; it generates considerably shorter paths, a higher delivery ratio and lower energy consumption than the greedy perimeter stateless routing protocol.
Abstract: The dead-end problem is an important issue in location-based routing for sensor networks; it occurs when a message falls into a local minimum using greedy forwarding. Current methods for this problem are insufficient, either in eliminating traffic/path memorization or in finding satisfactorily short paths. In this paper, we propose a novel algorithm, named partial-partition avoiding geographic routing (PAGER), to solve the problem. The basic idea of PAGER is to divide a sensor network graph into functional sub-graphs and provide each sensor node with message forwarding directions based on these sub-graphs. This results in loop-free short paths without memorization of traffic or paths in sensor nodes. We implement our algorithm in a protocol and evaluate it in sensor networks with different parameters. Results show that PAGER generates considerably shorter paths, a higher delivery ratio and lower energy consumption than the greedy perimeter stateless routing protocol. At the same time, PAGER achieves better performance in handling large-scale networks than the ad-hoc on-demand distance vector protocol.

Proceedings Article•DOI•
11 Oct 2004
TL;DR: This paper proposes a minimal adjustment to the classic random early detection (RED) algorithm, called hyperbola RED (HRED), that uses thehyperbola as the drop probability curve, and concludes that HRED is insensitive to the network conditions and parameter settings, and can achieve higher network utilization than the other RED schemes.
Abstract: Active queue management (AQM) is an area of critical importance for the operation of networks. In this paper, we propose a minimal adjustment to the classic random early detection (RED) algorithm, called hyperbola RED (HRED), that uses the hyperbola as the drop probability curve. The control law of HRED can regulate the queue size close to the reference queue size which is settable by the user. As a result, it is expected that HRED is no longer sensitive to the level of network load, its behavior shows low dependence on the parameter settings, and it can achieve higher network utilization. Additionally, very little work needs to be done to migrate from RED to HRED on Internet routers because only the drop profile is adjusted. We implemented HRED on a real Internet router to examine and compare its performance with the classic RED and parabola RED that are currently deployed on Internet routers. From experiments on the real network, we conclude that HRED is insensitive to the network conditions and parameter settings, and can achieve higher network utilization than the other RED schemes
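The abstract does not give the exact hyperbola, so the sketch below only illustrates the general idea of swapping RED's linear drop profile for a hyperbolic one while leaving everything else unchanged; the curve shape and parameters are assumptions made for illustration.

```python
# Generic RED-style drop profiles: the classic linear curve vs. a hyperbolic
# curve that rises slowly at first and saturates near max_th.
def hyperbolic_drop_prob(avg_q, min_th, max_th, gain=2.0):
    """0 below min_th, rises along a hyperbola, saturates at 1 near max_th."""
    if avg_q <= min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    p = gain * (1.0 / (max_th - avg_q) - 1.0 / (max_th - min_th))
    return min(p, 1.0)

def linear_drop_prob(avg_q, min_th, max_th, max_p=0.1):
    """Classic RED profile, for comparison."""
    if avg_q <= min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)

for q in (10, 30, 50, 70, 90):
    print(q, round(linear_drop_prob(q, 20, 100), 3),
          round(hyperbolic_drop_prob(q, 20, 100), 3))
```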

Proceedings Article•DOI•
11 Oct 2004
TL;DR: A location-based routing algorithm with cluster-based flooding for vehicle to vehicle communication and a limiting function for flood mechanisms in reactive ad-hoc protocols is presented.
Abstract: We present a location-based routing algorithm with cluster-based flooding for vehicle-to-vehicle communication. We consider a motorway environment with associated high mobility, and contrast and compare position-based and non-position-based routing strategies, along with a limiting function for flood mechanisms in reactive ad-hoc protocols. The performance of dynamic source routing (DSR) and ad-hoc on-demand distance vector routing (AODV) as non-positional algorithms, and of the location routing algorithm with cluster-based flooding (LORA_CBF) as a positional algorithm, is considered. Our proposed flood-limiting technique is compared with AODV and DSR by simulation. The mobility of the vehicles on a motorway, using a microscopic traffic model developed in OPNET, has been used to evaluate average route discovery (RD) time, end-to-end delay (EED), routing load, routing overhead, and delivery ratio.

Proceedings Article•DOI•
11 Oct 2004
TL;DR: In this paper, networks are designed to achieve 100% restorability under single link failures, while maximizing coverage against any second link failure in the network.
Abstract: Double link failure models, in which any two links in the network fail in an arbitrary order, are becoming critical in survivable optical network design. A significant finding is that designs offering complete dual-failure restorability require almost triple the amount of spare capacity. In this paper, networks are designed to achieve 100% restorability under single link failures, while maximizing coverage against any second link failure in the network. In the event of a single link failure, the restoration model attempts to dynamically find a second alternate link-disjoint end-to-end path to provide coverage against a sequential overlapping link failure. Sub-graph routing (M. T. Frederick et al., Feb. 2003) is extended to provide dual-failure restorability for a network provisioned to tolerate all single-link failures. This strategy is compared with shared-mesh protection. The results indicate that sub-graph routing can achieve overlapping second link failure restorability for 95-99% of connections. It is also observed that sub-graph routing can inherently provide complete dual-failure coverage for ~72-81% of the connections

Proceedings Article•DOI•
Pei Zheng, Chen Wang
11 Oct 2004
TL;DR: This work proposes a scalable and highly available P2P multi-source content distribution system called SODON, and the sub-seeding and sub-tracking schemes effectively reduce the tracker load while offering a comparatively high availability of shared files.
Abstract: Multi-source content distribution and file download systems such as BitTorrent and eDonkey have emerged as a better scheme than traditional FTP systems for distributing files to very large scale systems. However, a major drawback of those systems is that they largely rely on a tracker server to maintain state information of a horde. Furthermore, if the peer holding all the pieces of a file leaves the system, there is a high probability that the other peers cannot download the entire file. We propose a scalable and highly available P2P multi-source content distribution system called SODON to solve those problems. SODON distributes all the pieces of a file from a seed to a number of carefully selected sub-seeds in order to guarantee that a complete set of pieces is available for a long time. The sub-seeds also work as sub-trackers to maintain the horde state information. As a result, the original tracker server is not overwhelmed by a large population of download requests. Our BitTorrent traces and simulation results show that SODON is feasible to build, and the sub-seeding and sub-tracking schemes effectively reduce the tracker load while offering a comparatively high availability of shared files.

Proceedings Article•DOI•
11 Oct 2004
TL;DR: It is proved that finding optimal solutions is NP-hard for both variations of the problem, and a theoretically best possible heuristic for the A-problem and three different heuristics for the E-problem are proposed, one of them being also theoreticallybest possible.
Abstract: Various network monitoring and performance evaluation schemes generate a considerable amount of traffic, which affects network performance. In this paper, we describe a method for minimizing network monitoring overhead based on the shortest path tree (SPT) protocol. We describe two different variations of the problem: the A-problem and the E-problem, and show that there is a significant difference between them. We prove that finding optimal solutions is NP-hard for both variations, and propose a theoretically best possible heuristic for the A-problem and three different heuristics for the E-problem, one of them also being theoretically best possible. We show that one can compute in polynomial time an O(ln|V|)-approximate solution for each of these problems. Then, we analyze the performance of our heuristics on large graphs generated using Waxman and power-law models as well as on real ISP topology maps. Experimental results show more than 80% improvement when using our heuristics on real topologies over the naive approaches.

Proceedings Article•DOI•
11 Oct 2004
TL;DR: Simulation results confirm the stability of the proposed AQM algorithm under various network environments and show its performance advantages over other competitive AQM algorithms.
Abstract: In this paper, we propose a new AQM algorithm that considers both the average queue length and the estimated packet arrival rate together in order to detect and control incipient congestion. It predicts the average queue length and controls it to maintain a certain reference value to achieve high link utilization and low queueing delay. Simulation results confirm the stability of our proposed algorithm under various network environments and show its performance advantages over other competitive AQM algorithms

Proceedings Article•DOI•
11 Oct 2004
TL;DR: The mobility model, which can be used to estimate the probability of a cellular user entering a WLAN while moving within the "location area" of the cellular network, is the main contribution of this work.
Abstract: In this article, we propose a model that can be used to study the costs and benefits of integrating cellular and wireless LANs, from a vendor and a user perspective. We explain the model and the approach to calculate the costs for a simple handoff topology that supports a voice call across a cellular and a WLAN. The mobility model, which can be used to estimate the probability of a cellular user entering a WLAN while moving within the "location area" of the cellular network, is the main contribution of this work. The mobility model takes into account a normal user movement and a biased user movement based on a 2D random walk model. We show how this model could be used to study the costs and benefits to the vendor and the user, especially if a user were to be supported with call continuity across the two networks and the vendor had to bear the extra costs of handing over an active call. We have restricted our analysis to a scenario where the user with an active voice call moves from the cellular network to the WLAN. Graphs and discussions are provided to demonstrate the use of the model to study the integration benefits, from a vendor and a user perspective
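The kind of quantity such a mobility model produces can also be approximated by simulation; the Monte Carlo sketch below estimates the probability that a (possibly biased) 2D random walk inside a location area enters a circular WLAN hotspot within a given number of steps. The geometry, step length and bias values are illustrative assumptions, not the paper's analytical model.

```python
# Monte Carlo estimate of the probability that a 2D random walk enters a
# circular WLAN hotspot within `steps` moves. All parameters are placeholders.
import math, random

def prob_enter_wlan(trials, steps, area_half_width, hotspot_center,
                    hotspot_radius, step_len, bias, rng):
    hits = 0
    for _ in range(trials):
        x = rng.uniform(-area_half_width, area_half_width)
        y = rng.uniform(-area_half_width, area_half_width)
        for _ in range(steps):
            # Biased walk: with probability `bias`, step toward the hotspot.
            if rng.random() < bias:
                ang = math.atan2(hotspot_center[1] - y, hotspot_center[0] - x)
            else:
                ang = rng.uniform(0, 2 * math.pi)
            x += step_len * math.cos(ang)
            y += step_len * math.sin(ang)
            if math.dist((x, y), hotspot_center) <= hotspot_radius:
                hits += 1
                break
    return hits / trials

rng = random.Random(7)
print(prob_enter_wlan(trials=2000, steps=200, area_half_width=500,
                      hotspot_center=(100, 100), hotspot_radius=50,
                      step_len=10, bias=0.1, rng=rng))
```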

Proceedings Article•DOI•
Dongli Zhang1, Dan Ionescu1•
11 Oct 2004
TL;DR: This paper explores the application of large deviation theory (LDT) asymptotic loss estimators to core links that aggregate many VPN services; the estimators are derived based on LDT and a Gaussian traffic model.
Abstract: When provisioning quantitative QoS-based VPN services over packet switched networks, parameters such as packet loss, delay and delay jitter, besides the required bandwidth, have to be guaranteed. While the bandwidth is easier to guarantee during the lifetime of a VPN service, maintaining the loss parameter below a preset value presents serious difficulties. Thus one of the main issues to be solved is to estimate the packet loss parameter effectively and accurately. Linking the stochastic characteristics of the traffic process to the loss probability has been examined as a solution in various contexts. Based on large deviation theory (LDT), two types of asymptotic loss estimators have been studied extensively: the large buffer asymptotic and the large number of sources asymptotic. This paper explores the application of these estimators to core links that aggregate many VPN services. To reduce the computational complexity, an aggregate traffic approximation estimator is employed for the latter. First, the two practical estimators are derived based on LDT and a Gaussian traffic model. Then their performance is evaluated in a live MPLS VPN network. Conclusions and suggestions for future research are given at the end of this paper.
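One standard Gaussian/LDT estimator of this flavor is the maximum-variance form sketched below; whether it matches the paper's exact derivation is an assumption, and the traffic parameters (mean rate, variance function, capacity, buffer) are illustrative.

```python
# Large-deviations-style overflow estimate for Gaussian aggregate traffic:
# P(overflow) ~ exp(-inf_t (B + (C - m) t)^2 / (2 v(t))), where m is the mean
# rate, v(t) the variance of arrivals over interval t, C the link rate, B the buffer.
import math

def gaussian_overflow_estimate(mean_rate, var_fn, capacity, buffer,
                               horizon=200, step=0.05):
    best_exponent = float("inf")
    t = step
    while t <= horizon:
        surplus = buffer + (capacity - mean_rate) * t
        exponent = surplus * surplus / (2.0 * var_fn(t))
        best_exponent = min(best_exponent, exponent)
        t += step
    return math.exp(-best_exponent)

# Example: fractional-Brownian-like variance v(t) = a * t^(2H) with H > 0.5,
# i.e. long-range-dependent aggregate traffic (all numbers made up).
a, H = 4.0e11, 0.8
estimate = gaussian_overflow_estimate(mean_rate=8.0e6,            # 8 Mb/s mean
                                      var_fn=lambda t: a * t ** (2 * H),
                                      capacity=1.0e7,             # 10 Mb/s link
                                      buffer=5.0e5)               # buffer in bits
print(f"estimated loss/overflow probability ~ {estimate:.2e}")
```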

Proceedings Article•DOI•
11 Oct 2004
TL;DR: This paper presents an in-the-middle hybrid architecture which uses a mix of both topologies to create a decentralized P2P infrastructure that provides scalable and guaranteed lookups in addition to mutual anonymity and also allows hosting content with the content-owner.
Abstract: A core area of P2P systems research is the topology of the overlay network. It has ranged from random unstructured networks like Gnutella, to Super-Peer architectures, to the recent trend of structured overlays based on distributed hash tables (DHTs) (I. Stoica et al., 2001), (A. Rowstron et al., 2001), (S. Ratnasamy et al., 2001). While the unstructured networks have excessive lookup costs and unguaranteed lookups, the structured systems offer no anonymity and delegate control over data items to unrelated peers. In this paper, we present an in-the-middle hybrid architecture which uses a mix of both topologies to create a decentralized P2P infrastructure. The system provides scalable and guaranteed lookups in addition to mutual anonymity, and also allows hosting content with the content-owner. We validate our architecture through a thorough analytical and performance analysis of the system.