
Showing papers presented at "International Workshop on Quality of Service in 2008"


Proceedings Article•DOI•
02 Jun 2008
TL;DR: The social networking in YouTube videos is investigated, finding that the links to related videos generated by uploaders' choices have clear small-world characteristics, indicating that the videos have strong correlations with each other, and creates opportunities for developing novel techniques to enhance the service quality.
Abstract: YouTube has become the most successful Internet website providing a new generation of short video sharing service since its establishment in early 2005. YouTube has a great impact on Internet traffic nowadays, yet itself is suffering from a severe problem of scalability. Therefore, understanding the characteristics of YouTube and similar sites is essential to network traffic engineering and to their sustainable development. To this end, we have crawled the YouTube site for four months, collecting more than 3 million YouTube videos' data. In this paper, we present a systematic and in-depth measurement study on the statistics of YouTube videos. We have found that YouTube videos have noticeably different statistics compared to traditional streaming videos, ranging from length and access pattern, to their growth trend and active life span. We investigate the social networking in YouTube videos, as this is a key driving force toward its success. In particular, we find that the links to related videos generated by uploaders' choices have clear small-world characteristics. This indicates that the videos have strong correlations with each other, and creates opportunities for developing novel techniques to enhance the service quality.

773 citations
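
As an aside on the "small-world" claim above: such a property is typically checked by computing the average clustering coefficient and the average shortest-path length of the related-video graph. The following Python sketch (stdlib only, on a tiny made-up graph rather than the paper's crawled dataset) shows how these two metrics can be computed.

```python
# Illustrative sketch: small-world metrics on a toy undirected
# "related video" graph (hypothetical links, not the paper's data).
from collections import deque

graph = {
    "a": {"b", "c"}, "b": {"a", "c", "d"}, "c": {"a", "b"},
    "d": {"b", "e"}, "e": {"d"},
}

def clustering(g):
    coeffs = []
    for node, nbrs in g.items():
        k = len(nbrs)
        if k < 2:
            continue
        # count links among the node's neighbours
        links = sum(1 for u in nbrs for v in nbrs if u < v and v in g[u])
        coeffs.append(2 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)

def avg_path_length(g):
    total, pairs = 0, 0
    for src in g:                       # BFS from every node
        dist, q = {src: 0}, deque([src])
        while q:
            u = q.popleft()
            for v in g[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for n, d in dist.items() if n != src)
        pairs += len(dist) - 1
    return total / pairs

print("avg clustering coefficient:", round(clustering(graph), 3))
print("avg shortest path length  :", round(avg_path_length(graph), 3))
```

A graph is usually called small-world when its clustering coefficient is much higher than that of a comparable random graph while its average path length remains short.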


Proceedings Article•DOI•
02 Jun 2008
TL;DR: This paper presents a bilateral protocol for SLA negotiation using the alternate offers mechanism wherein a party is able to respond to an offer by modifying some of its terms to generate a counter offer.
Abstract: Service level agreements (SLAs) between grid users and providers have been proposed as mechanisms for ensuring that the users' quality of service (QoS) requirements are met, and that the provider is able to realise utility from its infrastructure. This paper presents a bilateral protocol for SLA negotiation using the alternate offers mechanism wherein a party is able to respond to an offer by modifying some of its terms to generate a counter offer. We apply this protocol to the negotiation between a resource broker and a provider for advance reservation of compute nodes, and implement and evaluate it on a real grid system.

115 citations
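
To illustrate the alternate-offers idea in the abstract above, here is a minimal Python sketch of a bilateral negotiation over a single term (price). The opening positions, concession step, and acceptance rule are hypothetical and not taken from the paper, which negotiates advance-reservation terms between a broker and a provider.

```python
# Illustrative sketch: bilateral alternate-offers negotiation on price only.
# Each party may accept the opponent's latest offer or concede and counter.
def negotiate(buyer_max=100.0, seller_min=80.0, concession=5.0, max_rounds=20):
    buyer_offer, seller_offer = 60.0, 120.0   # hypothetical opening positions
    for _ in range(max_rounds):
        # seller responds to the buyer's offer: accept, or concede and counter
        if buyer_offer >= seller_min:
            return ("deal", buyer_offer)
        seller_offer = max(seller_offer - concession, seller_min)
        # buyer responds to the seller's counter: accept, or concede and counter
        if seller_offer <= buyer_max:
            return ("deal", seller_offer)
        buyer_offer = min(buyer_offer + concession, buyer_max)
    return ("no deal", None)

print(negotiate())
```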


Proceedings Article•DOI•
02 Jun 2008
TL;DR: The second order statistics of the number of packet losses in finite Markov models over several relevant time scales are derived and adapted to loss processes visible in wired and wireless transmission channels.
Abstract: Real-time Internet services are gaining in popularity due to rapid provisioning of broadband access technologies. Delivery of high quality of experience (QoE) is important for consumer acceptance of multimedia applications. IP packet level errors affect QoE and the resulting quality degradations have to be taken into account in network operation. We derive the second order statistics of the number of packet losses in finite Markov models over several relevant time scales and adapt them to loss processes visible in wired and wireless transmission channels. Higher order Markov chains offer a large set of parameters to be exploited by complex fitting procedures. We find that the 2-state Gilbert-Elliott model already captures a wide range of observed loss patterns appropriately and discuss how such models can be used to examine the quality degradations caused by packet losses.

68 citations
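
For readers unfamiliar with the 2-state Gilbert-Elliott model mentioned above, the following Python sketch simulates such a loss channel and reports the overall loss rate and mean loss-burst length. The transition and per-state loss probabilities are made-up example values, not parameters fitted in the paper.

```python
# Illustrative sketch: 2-state Gilbert-Elliott packet-loss channel.
import random

def gilbert_elliott(n_packets, p=0.01, r=0.3, loss_good=0.001, loss_bad=0.5, seed=1):
    random.seed(seed)
    state, losses = "good", []
    for _ in range(n_packets):
        if state == "good":
            losses.append(random.random() < loss_good)
            if random.random() < p:       # good -> bad transition
                state = "bad"
        else:
            losses.append(random.random() < loss_bad)
            if random.random() < r:       # bad -> good transition
                state = "good"
    return losses

def burst_lengths(losses):
    bursts, run = [], 0
    for lost in losses:
        if lost:
            run += 1
        elif run:
            bursts.append(run)
            run = 0
    if run:
        bursts.append(run)
    return bursts

losses = gilbert_elliott(100_000)
bursts = burst_lengths(losses)
print("loss rate            :", sum(losses) / len(losses))
print("mean loss-burst length:", sum(bursts) / len(bursts))
```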


Proceedings Article•DOI•
02 Jun 2008
TL;DR: It is demonstrated that the networks exhibit fundamental differences during different stages of a swarm, suggesting that the initial stage is not predictive of the overall performance, and an interesting avenue for improving BitTorrent's performance is identified.
Abstract: This paper describes an experimental study that closely examines the underlying topologies of multiple complex networks formed in BitTorrent swarms. Our results demonstrate that the networks exhibit fundamental differences during different stages of a swarm, suggesting that the initial stage is not predictive of the overall performance. We also find a power-law degree distribution in the network of peers that are unchoked by others, which indicates the presence of a robust scale-free network. However, unlike previous studies, we find no clear evidence of persistent clustering in any of the networks, precluding the presence of a small-world that is potentially efficient for peer-to-peer downloading. These results suggest an interesting avenue for improving BitTorrent's performance. We present a first attempt to introduce clustering into BitTorrent. Our approach is theoretically proven and makes minimal changes to the tracker only. Its effectiveness is verified through a series of simulations and experiments.

41 citations


Proceedings Article•DOI•
02 Jun 2008
TL;DR: This analysis highlights P2P video multicast characteristics such as high bandwidth requirements, high peer churn, low peer persistence in the P2P multicast system, significant variance in the media stream quality delivered to peers, relatively large channel start times, and flash crowd effects of popular video content.
Abstract: We evaluate the performance of a large-scale live P2P video multicast session comprising more than 120,000 peers on the Internet. Our analysis highlights P2P video multicast characteristics such as high bandwidth requirements, high peer churn, low peer persistence in the P2P multicast system, significant variance in the media stream quality delivered to peers, relatively large channel start times, and flash crowd effects of popular video content. Our analysis also indicates that peers are widely spread across the IP address space, spanning dozens of countries and hundreds of ISPs and Internet ASes. As part of the P2P multicast evaluation, we examine several QoS measures such as the fraction of stream blocks correctly received, the number of consecutive stream blocks lost, and the channel startup time across peers. We correlate the observed quality with the underlying network and with peer behavior, suggesting several avenues for optimization and research in P2P video multicast systems.

39 citations


Proceedings Article•DOI•
02 Jun 2008
TL;DR: A sensor movement scheduling algorithm is developed that achieves near-optimal system detection performance within a given detection delay bound and is validated by extensive simulations using the real data traces collected by 23 sensor nodes.
Abstract: Recent years have witnessed the deployments of wireless sensor networks in a class of mission-critical applications such as object detection and tracking. These applications often impose stringent QoS requirements including high detection probability, low false alarm rate and bounded detection delay. Although a dense all-static network may initially meet these QoS requirements, it does not adapt to unpredictable dynamics in network conditions (e.g., coverage holes caused by death of nodes) or physical environments (e.g., changed spatial distribution of events). This paper exploits reactive mobility to improve the target detection performance of wireless sensor networks. In our approach, mobile sensors collaborate with static sensors and move reactively to achieve the required detection performance. Specifically, mobile sensors initially remain stationary and are directed to move toward a possible target only when a detection consensus is reached by a group of sensors. The accuracy of the final detection result is then improved, as the measurements of mobile sensors have higher signal-to-noise ratios after the movement. We develop a sensor movement scheduling algorithm that achieves near-optimal system detection performance within a given detection delay bound. The effectiveness of our approach is validated by extensive simulations using real data traces collected by 23 sensor nodes.

35 citations


Proceedings Article•DOI•
02 Jun 2008
TL;DR: This study informs all-optical packet router designs and network service providers who operate their buffer sizes in this regime, of the negative impact investment in larger buffers can have on the quality of service performance.
Abstract: The past few years have seen researchers debate the size of buffers required at core Internet routers. Much of this debate has focused on TCP throughput, and recent arguments supported by theory and experimentation suggest that few tens of packets of buffering suffice at bottleneck routers for TCP traffic to realise acceptable link utilisation. This paper introduces a small fraction of real-time (i.e. open-loop) traffic into the mix, and discovers an anomalous behaviour: In this specific regime of very small buffers, losses for real-time traffic do not fall monotonically with buffer size, but instead exhibit a region where larger buffers cause higher losses. Our contributions pertaining to this phenomenon are threefold: First, we demonstrate this anomalous loss performance for real-time traffic via extensive simulations including real video traces. Second, we provide qualitative explanations for the anomaly and develop a simple analytical model that reveals the dynamics of buffer sharing between TCP and real-time traffic leading to this behaviour. Third, we show how various factors such as traffic characteristics and link rates impact the severity of this anomaly. Our study particularly informs all-optical packet router designs (envisaged to have buffer sizes in the few tens of packets) and network service providers who operate their buffer sizes in this regime, of the negative impact investment in larger buffers can have on the quality of service performance.

27 citations


Proceedings Article•DOI•
02 Jun 2008
TL;DR: This paper proposes a novel profile-based approach to identify traffic flows belonging to the target application, which shows that one can identify the popular P2P applications with very high accuracy.
Abstract: Accurate identification of network applications is important to many network activities. Traditional port-based technique has become much less effective since many new applications no longer use well-known port numbers. In this paper, we propose a novel profile-based approach to identify traffic flows belonging to the target application. In contrast to classifying traffic based on statistics of individual flows in previous studies, we build behavioral profiles of the target application, which describe dominant patterns of the application. Based on the behavioral profiles, a two-level matching is used in identifying new traffic. We first determine if a host participates in the application by comparing its behavior with the profiles. Subsequently, for each flow of the host we compare if it matches with the patterns in the profiles to determine which flows belong to this application. We demonstrate the effectiveness of our method on campus traffic traces. Our results show that one can identify the popular P2P applications with very high accuracy.

27 citations
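
The two-level matching described above can be illustrated with a small Python sketch. The profile format, thresholds, and flow patterns below are hypothetical stand-ins, not the behavioral profiles built in the paper: a host is first checked against the application's host-level profile, and only then are its individual flows matched against the profile's dominant patterns.

```python
# Illustrative sketch: two-level (host, then flow) matching against a
# hypothetical behavioral profile of a target application.
APP_PROFILE = {
    "min_distinct_peers": 20,          # host-level behaviour thresholds
    "min_udp_fraction": 0.3,
    "flow_patterns": [                 # dominant flow-level patterns
        {"proto": "udp", "min_bytes": 50, "max_bytes": 400},
        {"proto": "tcp", "min_bytes": 10_000, "max_bytes": None},
    ],
}

def host_matches(host_stats, profile=APP_PROFILE):
    return (host_stats["distinct_peers"] >= profile["min_distinct_peers"]
            and host_stats["udp_fraction"] >= profile["min_udp_fraction"])

def classify_flows(host_stats, flows, profile=APP_PROFILE):
    if not host_matches(host_stats, profile):
        return []                       # host does not appear to run the application
    matched = []
    for flow in flows:
        for pat in profile["flow_patterns"]:
            if (flow["proto"] == pat["proto"]
                    and flow["bytes"] >= pat["min_bytes"]
                    and (pat["max_bytes"] is None or flow["bytes"] <= pat["max_bytes"])):
                matched.append(flow)
                break
    return matched

host = {"distinct_peers": 35, "udp_fraction": 0.6}
flows = [{"proto": "udp", "bytes": 120}, {"proto": "tcp", "bytes": 800}]
print(classify_flows(host, flows))
```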


Proceedings Article•DOI•
02 Jun 2008
TL;DR: A novel framework for a QoS-constrained resource provisioning problem is presented, and a capacity planning approach to optimizing computer resources for all service sites owned by service providers subject to multiple QoS metrics defined in the SLA and their violation penalties is proposed.
Abstract: The composition of services has been a useful approach to integrating business applications within and across organizational boundaries. In this approach, individual services are federated into composite services which are able to execute a given task subject to a service level agreement (SLA). An SLA is a contract agreed between a customer and a service provider that defines a set of quality of service (QoS) metrics. An SLA violation penalty is a way to ensure the credibility of an SLA advertised by a service provider. In this paper, we consider a set of computer resources used by a service broker who represents service providers to host enterprise applications for differentiated customer services subject to an SLA and its violation penalty. We present a novel framework for a QoS-constrained resource provisioning problem, and propose a capacity planning approach to optimizing computer resources for all service sites owned by service providers subject to multiple QoS metrics defined in the SLA and their violation penalties. Simulation results show that the proposed approach is efficient for reliable resource planning in service composition.

23 citations


Proceedings Article•DOI•
02 Jun 2008
TL;DR: A two-layer architecture is proposed that makes the coexistence of various algorithms explicit; novel control algorithms are proposed, their behavior is investigated under various conditions, and they are compared with existing approaches.
Abstract: Pre-congestion notification (PCN) marks packets when the PCN traffic rate exceeds an admissible link rate and this marking information is used as feedback from the network to take admission decisions for new flows. This idea is currently under standardization in the IETF. Different marking algorithms are discussed and various admission control algorithms are proposed that decide based on the packet markings whether further flows should be accepted or blocked. In this paper, we propose a two-layer architecture that makes the coexistence of various algorithms explicit. We propose novel control algorithms, investigate their behavior under various conditions, and compare them with existing approaches.

21 citations
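
As a toy illustration of marking-based admission control (not one of the algorithms proposed in the paper), the Python sketch below blocks new flows whenever the fraction of PCN-marked packets observed in recent measurement intervals exceeds a configured threshold; the threshold and window length are hypothetical.

```python
# Illustrative sketch: observation-based admission control driven by
# the fraction of PCN-marked packets per measurement interval.
from collections import deque

class PcnAdmission:
    def __init__(self, mark_threshold=0.02, window=5):
        self.mark_threshold = mark_threshold
        self.history = deque(maxlen=window)   # recent per-interval marked fractions

    def report_interval(self, marked_pkts, total_pkts):
        self.history.append(marked_pkts / max(total_pkts, 1))

    def admit_new_flow(self):
        if not self.history:
            return True                        # no congestion feedback yet
        return max(self.history) < self.mark_threshold

ac = PcnAdmission()
ac.report_interval(marked_pkts=0, total_pkts=10_000)
print(ac.admit_new_flow())    # True: no recent marking
ac.report_interval(marked_pkts=600, total_pkts=10_000)
print(ac.admit_new_flow())    # False: marking above threshold
```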


Proceedings Article•DOI•
02 Jun 2008
TL;DR: This paper proves that, for tree-structured networks with paced ingress traffic, all routers can get by with very small buffers, proposes a simple active queue management policy called bounded jitter policy (BJP) for general topologies, and shows that under the proposed policy each flow preserves its smooth pattern across the network.
Abstract: In this paper we explore whether a general topology network built up of routers with very small buffers, can maintain high throughput under TCP's congestion control mechanism. Recent results on buffer sizing challenged the widely used assumption that routers should buffer millions of packets. These new results suggest that when smooth TCP traffic goes through a single tiny buffer of size O(log W), then close-to-peak throughput can be achieved; W is the maximum window size of TCP flows. In this work, we want to know if a network of many routers can perform well when all buffers in the network are made very small, independent of the structure of the network. This scenario represents a real network where packets go through several buffering stages on their routes. Assuming the ingress TCP traffic to a network is paced, we first prove that all routers can get by with very small buffers, if the network has a tree structure. For networks with general topology, we propose a simple active queue management policy called bounded jitter policy (BJP), and show that under the proposed policy each flow will preserve its smooth pattern across the network. Logarithmic size buffers would therefore be enough in every router of the network.

Proceedings Article•DOI•
02 Jun 2008
TL;DR: In a network with free-riding nodes, the main result shows that a O(log(N)) mean broadcast time can be achieved if nodes remain connected to the network for the duration of at least one more contact after downloading the file, otherwise a significantly worse O(N) time is required to broadcast the file.
Abstract: In this paper we obtain the scaling law for the mean broadcast time of a file in a P2P network with an initial population of N nodes. In the model, at Poisson rate lambda a node initiates a contact with another node chosen uniformly at random. This contact is said to be successful if the contacted node possesses the file, in which case the initiator downloads the file and can later upload it to other nodes. In a network with altruistic nodes (i.e., nodes do not leave the network) we show that the mean broadcast time is O(log(N)). In a network with free-riding nodes, our main result shows that a O(log(N)) mean broadcast time can be achieved if nodes remain connected to the network for the duration of at least one more contact after downloading the file, otherwise a significantly worse O(N) time is required to broadcast the file.
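
A quick way to see the logarithmic scaling described above is a discrete-round approximation of the random-contact process: in each round, every node without the file contacts a uniformly random node and succeeds if that node already holds it. The Python sketch below (an approximation of the Poisson-contact model, with altruistic nodes only) estimates the mean number of rounds and compares it with log2(N).

```python
# Illustrative sketch: discrete-round approximation of random-contact
# broadcast with altruistic nodes (every served node keeps uploading).
import math
import random

def broadcast_rounds(n, seed):
    rng = random.Random(seed)
    have = 1                      # one seed node initially holds the file
    rounds = 0
    while have < n:
        rounds += 1
        p = have / (n - 1)        # chance a random contact finds a node with the file
        new = sum(1 for _ in range(n - have) if rng.random() < p)
        have = min(n, have + new)
    return rounds

for n in (100, 1000, 10000):
    mean = sum(broadcast_rounds(n, s) for s in range(20)) / 20
    print(f"N={n:6d}  mean rounds={mean:5.1f}  log2(N)={math.log2(n):5.1f}")
```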

Proceedings Article•DOI•
K. Kotla, A.L.N. Reddy•
02 Jun 2008
TL;DR: It is shown that PERT experiences lower drop rates than SACK and leads to lower overall drop rates with different mixes of PERT and SACK protocols, and that a single PERT flow can fully utilize a high-speed, high-delay link.
Abstract: This paper investigates the issues in making a delay-based protocol adaptive to heterogeneous environments. We address how a delay-based protocol can compete with a loss-based protocol such as TCP. We investigate if potential noise and variability in delay measurements in environments such as cable and ADSL access networks impact the protocol behavior significantly. We investigate these issues in the context of incremental deployment of a new delay-based protocol, PERT. We propose design modifications to PERT to compete with SACK. First, we show that PERT experiences lower drop rates than SACK and leads to lower overall drop rates with different mixes of PERT and SACK protocols. Second, we show that a single PERT flow can fully utilize a high-speed, high-delay link. The results from ns-2 simulations indicate that PERT can adapt to heterogeneous networks and can operate well in an environment of heterogeneous protocols. We also show that the proposed changes retain the desirable properties of PERT, such as low loss rates and fairness, when operating alone. The protocol has also been implemented in the Linux kernel and tested through experiments on live networks, by measuring the throughput and losses between nodes in our lab at TAMU and different machines on PlanetLab.
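
For intuition, the sketch below shows a generic delay-based early-response sender in the spirit of PERT as summarized above: it maintains a smoothed queueing-delay estimate from RTT samples and backs off probabilistically as that estimate approaches a threshold, instead of waiting for packet loss. All constants and the update rule are hypothetical, not the paper's design.

```python
# Illustrative sketch: generic delay-based probabilistic early response
# (hypothetical constants; not PERT's actual algorithm).
import random

class DelayBasedSender:
    def __init__(self, thresh_ms=10.0, max_ms=30.0, alpha=0.125):
        self.thresh_ms, self.max_ms, self.alpha = thresh_ms, max_ms, alpha
        self.rtt_min = None      # propagation-delay estimate
        self.sdelay = 0.0        # smoothed queueing-delay estimate
        self.cwnd = 10.0

    def on_ack(self, rtt_ms):
        self.rtt_min = rtt_ms if self.rtt_min is None else min(self.rtt_min, rtt_ms)
        qdelay = rtt_ms - self.rtt_min
        self.sdelay = (1 - self.alpha) * self.sdelay + self.alpha * qdelay
        if self.sdelay <= self.thresh_ms:
            self.cwnd += 1.0 / self.cwnd                 # additive increase
        else:
            # back off probabilistically as delay grows toward max_ms
            p = min(1.0, (self.sdelay - self.thresh_ms) / (self.max_ms - self.thresh_ms))
            if random.random() < p:
                self.cwnd = max(2.0, self.cwnd / 2)

s = DelayBasedSender()
for rtt in [50, 50, 52, 60, 75, 90, 90, 60, 52, 50]:
    s.on_ack(rtt)
print("cwnd:", round(s.cwnd, 2))
```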

Proceedings Article•DOI•
02 Jun 2008
TL;DR: This paper investigates an approximation that can quickly find a 'good' path subject to flexible objectives and constraints, without amalgamating link metrics, and the application of simulated annealing to improve path quality.
Abstract: Many applications need routing paths with diverse Quality of Service (QoS) objectives and constraints based on multiple metrics, including delay, bandwidth, and error rate. Since most multi-metric path selection problems are NP-complete, heuristics are used. However, current heuristics tend to be specific to a problem, computationally complex, or amalgamate metrics into one cost (which limits the allowable solution space). This paper investigates: (a) an approximation that can quickly find a 'good' path subject to flexible objectives and constraints, without amalgamating link metrics, and (b) the application of simulated annealing to improve path quality. Results for networks of 100 to 1000 nodes show the new approximation reduces min-max path cost by 28% over existing multi-metric heuristics (and by 40% over single metric heuristics); simulated annealing, however, could only provide a further 3% improvement.
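
The min-max objective used above can be made concrete with a tiny example: for each candidate path, normalise every metric by its constraint, take the worst of the normalised values, and pick the path whose worst value is smallest. The Python sketch below enumerates simple paths exhaustively on a made-up four-edge graph (it is not the paper's approximation, and it treats loss as additive for simplicity).

```python
# Illustrative sketch: min-max multi-metric path selection on a toy graph.
GRAPH = {   # edge -> (delay_ms, loss_rate); hypothetical values
    ("s", "a"): (10, 0.01), ("a", "t"): (10, 0.03),
    ("s", "b"): (30, 0.001), ("b", "t"): (25, 0.001),
}
CONSTRAINTS = {"delay_ms": 60, "loss_rate": 0.05}

def neighbours(node):
    return [v for (u, v) in GRAPH if u == node]

def simple_paths(src, dst, path=None):
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for v in neighbours(src):
        if v not in path:
            yield from simple_paths(v, dst, path)

def minmax_cost(path):
    delay = sum(GRAPH[(u, v)][0] for u, v in zip(path, path[1:]))
    loss = sum(GRAPH[(u, v)][1] for u, v in zip(path, path[1:]))   # simplification
    return max(delay / CONSTRAINTS["delay_ms"], loss / CONSTRAINTS["loss_rate"])

best = min(simple_paths("s", "t"), key=minmax_cost)
print(best, round(minmax_cost(best), 3))
```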

Proceedings Article•DOI•
02 Jun 2008
TL;DR: This paper designs a mechanism called Adaptive Buffer Sizing (ABS), which is composed of two Integral controllers for dynamic buffer adjustment and two gradient-based components for intelligent parameter training, and demonstrates that ABS successfully stabilizes the buffer size at its minimum value under given constraints.
Abstract: Most existing criteria [3], [5], [8] for sizing router buffers rely on explicit formulation of the relationship between buffer size and characteristics of Internet traffic. However, this is a non-trivial, if not impossible, task given that the number of flows, their individual RTTs, and congestion control methods, as well as flow responsiveness, are unknown. In this paper, we undertake a completely different approach that uses control-theoretic buffer-size tuning in response to traffic dynamics. Motivated by the monotonic relationship between buffer size and loss rate and utilization, we design a mechanism called Adaptive Buffer Sizing (ABS), which is composed of two Integral controllers for dynamic buffer adjustment and two gradient-based components for intelligent parameter training. We demonstrate via ns2 simulations that ABS successfully stabilizes the buffer size at its minimum value under given constraints, scales to a wide spectrum of flow populations and link capacities, exhibits fast convergence rate and stable dynamics in various network settings, and is robust to load changes and generic Internet traffic (including FTP, HTTP, and non-TCP flows). All of these demonstrate that ABS is a promising mechanism for tomorrow's router infrastructure and may be of significant interest for the ongoing collaborative research and development efforts (e.g., GENI and FIND) in reinventing the Internet.
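
As a toy illustration of integral-control buffer tuning (not the ABS implementation, and without the gradient-based parameter training), the Python sketch below nudges a buffer limit so that a measured loss rate tracks a target, relying on the monotonic buffer-size/loss-rate relationship mentioned above; the plant model and gains are hypothetical.

```python
# Illustrative sketch: integral controller that adapts a buffer limit
# until the measured loss rate matches a target (all values hypothetical).
def integral_buffer_controller(measure_loss, target_loss=0.01, gain=5000.0,
                               b0=100, b_min=10, b_max=2000, steps=200):
    b = b0
    for _ in range(steps):
        loss = measure_loss(b)            # observed loss rate at current size
        b += gain * (loss - target_loss)  # integral action: error accumulates into b
        b = max(b_min, min(b_max, int(round(b))))
    return b

def plant(b):
    # toy plant: loss rate falls monotonically as the buffer grows
    return 0.5 / (1 + 0.05 * b)

b = integral_buffer_controller(plant)
print("buffer size chosen:", b, " loss at that size:", round(plant(b), 4))
```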

Proceedings Article•DOI•
02 Jun 2008
TL;DR: This work proposes a single-domain edge-to-edge (g2g) dynamic capacity contracting mechanism, where a network customer can enter into a bandwidth contract on a g2g path at a future time, at a predetermined price, and computes the risk-neutral prices for these g2g bailout forward contracts (BFCs).
Abstract: Despite the huge success of the Internet in providing basic communication services, the Internet architecture needs to be upgraded so as to provide end-to-end QoS services to its customers. Currently, a user or an enterprise that needs end-to-end bandwidth guarantees between two arbitrary points in the Internet for a short period of time has no way of expressing its needs. To allow these much needed basic QoS services, we propose a single-domain edge-to-edge (g2g) dynamic capacity contracting mechanism, where a network customer can enter into a bandwidth contract on a g2g path at a future time, at a predetermined price. For practical and economic viability, such forward contracts must involve a bailout option to account for bandwidth becoming unavailable at service delivery time, and must be priced appropriately to enable ISPs to manage risks in their contracting and investments. Our design allows ISPs to advertise different point-to-point prices for each of their g2g paths instead of the current point-to-anywhere prices, allowing for better end-to-end paths, temporal flexibility and efficiency of bandwidth usage. We compute the risk-neutral prices for these g2g bailout forward contracts (BFCs), taking into account correlations between different contracts due to correlated demand patterns and overlapping paths. We implement this multiple g2g BFC framework on a realistic network model with Rocketfuel topologies, and evaluate our contract switching mechanism in terms of key network performance metrics like fraction of bailouts, revenue earned by the provider, and adaptability to link failures.

Proceedings Article•DOI•
02 Jun 2008
TL;DR: Four recent protocols in explicit congestion control based on multi-byte router feedback are implemented in the existing Linux TCP/IP stack (Linux) in a manner that is transparent to applications and their previously undocumented properties are demonstrated.
Abstract: Innovative efforts to provide a clean-slate design of congestion control for future high-speed heterogeneous networks have recently led to the development of explicit congestion control. These methods (N. Dukkipati, et al., Jun. 2005), (S. Jain, et al., June 2007), (D. Katabi, et al., Aug. 2002), (Y. Zhang, et al., April 2006) rely on multi-byte router feedback and aim to contribute to the design of a more scalable Internet of tomorrow. However, experimental evaluation and deployment experience with these approaches, especially in high-capacity networks and multi-link settings, is still missing from the literature. This paper aims to fill this void and investigate the behavior of these methods in single and multi-link topologies involving real systems and gigabit networks. We implement four recent protocols XCP (D. Katabi, et al., Aug. 2002), JetMax (Y. Zhang, et al., April 2006), RCP (N. Dukkipati, et al., Jun. 2005) and PIQI-RCP (S. Jain, et al., June 2007) in the existing Linux TCP/IP stack (Linux) in a manner that is transparent to applications and conduct experiments in Emulab using a variety of network configurations. Our experiments not only confirm the known behavior of these methods, but also demonstrate their previously undocumented properties.

Proceedings Article•DOI•
02 Jun 2008
TL;DR: A simple model is presented that captures the cause for the inefficiency of TCP over autorate links and several techniques at both the TCP level and the link layer to alleviate contention are examined.
Abstract: Wireless multi-hop access networks are an increasingly popular option to provide cost-efficient last-mile Internet access. However, despite extensive research, performance of even basic communication services, such as TCP, is still problematic. Measurements collected on a wireless testbed indicate that the poor performance of multi-hop access networks is caused by poor interactions between TCP congestion control and link-layer bit-rate adaptation resulting in severely reduced network efficiency even over short wireless paths (< 6 hops). However, bit-rate adaptation improves fairness across TCP flows. The same principal observations hold for hybrid wireless/wireline paths. To investigate approaches to improve TCP performance, we present a simple model that captures the cause for the inefficiency of TCP over autorate links. We then examine several techniques at both the TCP level and the link layer (TCP Vegas, clamping, limiting the buffer size at the wireless routers) to alleviate contention. None of these techniques works for all scenarios, but the simple approach to limit the buffer size is attractive in many settings that include four or more wireless hops.

Proceedings Article•DOI•
02 Jun 2008
TL;DR: This work contributes an original self-overload control (SOC) policy that allows the system to self-configure a dynamic constraint on the rate of admitted sessions in order to respect service level agreements and maximize the resource utilization at the same time.
Abstract: Unexpected increases in demand and most of all flash crowds are considered the bane of every Web application as they may cause intolerable delays or even service unavailability. Proper quality of service policies must guarantee rapid reactivity and responsiveness even in such critical situations. Previous solutions fail to meet common performance requirements when the system has to face sudden and unpredictable surges of traffic. Indeed they often rely on a proper setting of key parameters which requires laborious manual tuning, preventing a fast adaptation of the control policies. We contribute an original self-overload control (SOC) policy. This allows the system to self-configure a dynamic constraint on the rate of admitted sessions in order to respect service level agreements and maximize the resource utilization at the same time. Our policy does not require any prior information on the incoming traffic or manual configuration of key parameters. We ran extensive simulations under a wide range of operating conditions, showing that SOC rapidly adapts to time varying traffic and self-optimizes the resource utilization. It admits as many new sessions as possible in observance of the agreements, even under intense workload variations. We compared our algorithm to previously proposed approaches highlighting a more stable behavior and a better performance.

Proceedings Article•DOI•
02 Jun 2008
TL;DR: This paper proposes a light-weight rule distribution scheme that quickly balances workloads among all firewalls, and shows that using a commodity PC, this approach can reduce the peak firewall workload in distributed firewall systems by 40% within less than five minutes, compared against alternative solutions that only optimize rule ordering on individualFirewalls.
Abstract: Firewalls are widely deployed nowadays to enforce security policies of enterprise networks. While having played crucial roles in securing these networks, firewalls themselves are subject to performance limitations. An overloaded firewall can cause severe damage to the protected enterprise network, because any legitimate communication through it is either degraded or even completely severed. In this paper, we address how to dynamically balance packet filtering workloads on distributed firewalls efficiently in large enterprise networks. We model dynamic load balancing on distributed firewalls as a minimax optimization problem, and show that it is strongly NP-complete even if we eliminate all precedence relationships among policy rules by rule rewriting. Accordingly, we propose a light-weight rule distribution scheme that quickly balances workloads among all firewalls. Our scheme is adaptive to incoming traffic. Moreover, dynamically placing and ordering policy rules on distributed firewalls reduces the probability that attackers successfully infer the rule distribution. Experimental results show that using a commodity PC, our approach can reduce the peak firewall workload in distributed firewall systems by 40% within less than five minutes, compared against alternative solutions that only optimize rule ordering on individual firewalls.
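
As a simple point of comparison for the minimax objective above, the Python sketch below applies a greedy longest-processing-time heuristic: rules, weighted by the traffic they match, are assigned one by one to the currently least-loaded firewall. It ignores precedence relationships among rules and is not the paper's rule distribution scheme.

```python
# Illustrative sketch: greedy (LPT-style) rule distribution that keeps
# the maximum per-firewall load small.
import heapq

def balance_rules(rule_loads, n_firewalls):
    heap = [(0.0, fw) for fw in range(n_firewalls)]   # (current load, firewall id)
    heapq.heapify(heap)
    assignment = {}
    for rule, load in sorted(rule_loads.items(), key=lambda kv: -kv[1]):
        fw_load, fw = heapq.heappop(heap)             # least-loaded firewall
        assignment[rule] = fw
        heapq.heappush(heap, (fw_load + load, fw))
    return assignment, max(l for l, _ in heap)

rules = {"r1": 40, "r2": 35, "r3": 20, "r4": 10, "r5": 5}
assignment, peak = balance_rules(rules, n_firewalls=2)
print(assignment, "peak load:", peak)
```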

Proceedings Article•DOI•
Xinyang Zhang, A. Charny•
02 Jun 2008
TL;DR: The goal is to understand practical limitations of the specific admission algorithms proposed for PCN and to understand the properties of the termination algorithms, about which little is known from prior research.
Abstract: Pre-Congestion Notification (PCN) has been proposed at IETF as a framework for scalable measurement-based admission control (MBAC) for a DiffServ-capable network domain. PCN encompasses two functions: admission control of new flows and termination of admitted flows when the available capacity is unexpectedly reduced after a network failure. While there has been extensive research on MBAC, our goal is to understand practical limitations of the specific admission algorithms proposed for PCN and to understand the properties of the termination algorithms, about which little is known from prior research.

Proceedings Article•DOI•
02 Jun 2008
TL;DR: By numerical analysis, the optimal number of transmission opportunities for transmitting bandwidth requests under the TBEB that satisfies the QoS delay bound and loss bound is found.
Abstract: We investigate the delay of the bandwidth request under the truncated binary exponential backoff (TBEB) mechanism in the IEEE 802.16e, considering error-free/error-prone wireless channels. We derive the distribution of the delay of bandwidth request packets in the TBEB by an analytic method under the assumption of a Bernoulli request arrival process and error-free channel conditions, and extend the analytic results to the error-prone channel condition, where request transmissions experience i.i.d. errors. By numerical analysis, we find the optimal number of transmission opportunities for transmitting bandwidth requests under the TBEB that satisfies the QoS delay bound and loss bound.
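
The delay distribution studied above can also be approximated by simple Monte Carlo simulation. The Python sketch below models truncated binary exponential backoff with a fixed per-attempt failure probability standing in for collisions and channel errors; the window sizes, retry limit, and failure probability are hypothetical values, not those of the paper's 802.16e analysis.

```python
# Illustrative sketch: Monte Carlo estimate of request delay under
# truncated binary exponential backoff (hypothetical parameters).
import random

def tbeb_delay(p_fail=0.3, w0=8, w_max=64, max_retries=8, rng=random):
    delay, window = 0, w0
    for _ in range(max_retries + 1):
        delay += rng.randint(0, window - 1) + 1   # backoff slots + transmission slot
        if rng.random() > p_fail:
            return delay, True                    # request delivered
        window = min(2 * window, w_max)           # truncated doubling
    return delay, False                           # dropped after retry limit

random.seed(0)
samples = [tbeb_delay() for _ in range(50_000)]
delays = [d for d, ok in samples if ok]
print("mean delay (slots):", sum(delays) / len(delays))
print("loss probability  :", 1 - len(delays) / len(samples))
```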

Proceedings Article•DOI•
C.E. Koksal•
02 Jun 2008
TL;DR: It is shown that, with this simple scheduler, a speedup of O(log n/log log n) is necessary to support 100% throughput for any admissible multicast traffic, that an extra speedup of 2 is needed if fanout splitting of multicast packets is not allowed, and some problems in unicast switch scheduling are revisited.
Abstract: We show an isomorphism between maximal matching for packet scheduling in crossbar switches and strictly non-blocking circuit switching in three-stage Clos networks. We use the analogy for a crossbar switch of size n × n to construct a simple multicast packet scheduler of complexity O(n log n) based on maximal matching. We show that, with this simple scheduler, a speedup of O(log n/log log n) is necessary to support 100% throughput for any admissible multicast traffic. If fanout splitting of multicast packets is not allowed, we show that an extra speedup of 2 is necessary, even when the arrival rates are within the admissible region for mere unicast traffic. Also, we revisit some problems in unicast switch scheduling. We illustrate that the analogy provides useful perspectives and we give a simple proof for a well-known result.
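
For reference, "maximal matching" here means a matching to which no request between an unmatched input and an unmatched output can be added. The Python sketch below computes one greedily on a small VOQ request matrix; it is a generic illustration, not the paper's multicast scheduler.

```python
# Illustrative sketch: greedy maximal matching on a VOQ request matrix.
def maximal_matching(requests):
    """requests[i][j] is True if input i has a packet for output j."""
    n = len(requests)
    match, used_outputs = {}, set()
    for i in range(n):
        for j in range(n):
            if requests[i][j] and j not in used_outputs:
                match[i] = j              # match input i to output j
                used_outputs.add(j)
                break
    return match

requests = [
    [True,  False, True ],
    [True,  True,  False],
    [False, False, True ],
]
print(maximal_matching(requests))   # -> {0: 0, 1: 1, 2: 2}
```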

Proceedings Article•DOI•
02 Jun 2008
TL;DR: Four distributed rate and delay controls are derived, accounting for flows' bandwidth and end-to-end delay requirements while also allowing for multiple flow priorities; the stability and performance of discrete-time versions of these controls are demonstrated numerically on a widely spanned real network topology.
Abstract: There is growing evidence that a new generation of potentially high-revenue applications requiring quality of service (QoS) guarantees are emerging. Current methods of QoS provisioning have scalability concerns and cannot guarantee end-to-end delay. For a theoretical fluid model, we derive four distributed rate and delay controls accounting for flows' bandwidth and end-to-end delay requirements while also allowing for multiple flow priorities. We show that two of them are globally stable in the presence of arbitrary information time lags and two are globally stable without time lags. The global stability in the presence of time lags of the latter two is studied numerically. Under all controls, the stable flow rates attain the end-to-end delay requirements. We also show that by enhancing the network with bandwidth reservation and admission control, a minimum rate is also guaranteed by our controls. By guaranteeing end-to-end delays, our controls facilitate router buffer sizing that prevents buffer overflow in the fluid model. The distributed rate-delay combined control algorithms provide a scalable theoretical foundation for a QoS-guarantee control plane in current and in "clean slate" IP networks. To translate the theory into practice, we describe a control plane protocol facilitating our controls in the edge routers. The stability and performance of discrete-time versions of our controls are demonstrated numerically in a widely spanned real network topology.

Proceedings Article•DOI•
02 Jun 2008
TL;DR: TCP-ROME is presented, a novel transport-layer framework that allows the establishment of multiple TCP connections between one client with possibly multiple home addresses and multiple co-located or distributed servers, and provides the means to overcome bandwidth fluctuations and network bottlenecks.
Abstract: Live streaming of multimedia content over the Internet is rapidly increasing, e.g., with Web 2.0 and via peer-to-peer systems. While many homes today are equipped with sufficient bandwidth (DSL) to support high quality live streaming, the quality of experience of a user is dismal: the video resolution and size are small and playback is often interrupted. One of the key problems arises because TCP is used as the underlying transport layer protocol: packet loss, retransmissions and timeouts prevent TCP from delivering a timely, steady flow of packets and thus inhibit the provision of online multimedia content. This paper presents TCP-ROME, a novel transport-layer framework that allows the establishment of multiple TCP connections between one client with possibly multiple home addresses and multiple co-located or distributed servers. The download over multiple TCP connections increases the total throughput by aggregating the resources of multiple TCP connections. Moreover, TCP-ROME provides the means to overcome bandwidth fluctuations and network bottlenecks by dynamically coordinating the downloaded content from the different servers and by coordinating and adapting the streaming rate of the different connections to meet the bandwidth requirements of the video.

Proceedings Article•DOI•
02 Jun 2008
TL;DR: This work considers a reputation-based mechanism for providing incentives to peers for resource provisioning besides resource consuming and confirms that the reputation mechanism discourages selfish behavior and drives the system to a state where each peer obtains utility in accordance to its hidden intentions in dissatisfying others.
Abstract: Peer-to-peer networks are voluntary resource sharing systems among rational agents that are resource providers and consumers. While altruistic resource sharing is necessary for efficient operation, this can only be imposed by incentive mechanisms, otherwise peers tend to behave selfishly. Selfishness in general terms means only consuming resources in order to absorb maximal utility from them and not providing resources to other peers because this would require effort and would not give any utility. In peer-to-peer networks, this behavior, known as free riding, amounts to only downloading content from others and leads to system performance degradation. In this work, we consider a reputation-based mechanism for providing incentives to peers for resource provisioning besides resource consuming. We consider networks where the access technology does not separate upstream and downstream traffic, and these flow through the same capacity-limited access link. Peers do not know other peers' strategies and their intentions to conform to the protocol and share their resources. A separate utility maximization problem is solved by each peer, where the peer allocates a portion of its link bandwidth to its own downloads, acting as client, and it also allocates the remaining bandwidth for serving requests made to it by other peers. The optimization is carried out under a constraint on the level of dissatisfaction the peer intends to cause by not fulfilling others' requests. This parameter is private information for each peer. The reputation of a peer as a server is updated based on the amount of allocated bandwidth compared to the requested one. Reputation acts towards gradually revealing hidden intentions of peers and accordingly guiding the resource allocation by rewarding or penalizing peers in subsequent bandwidth allocations. Our results confirm that the reputation mechanism discourages selfish behavior and drives the system to a state where each peer obtains utility in accordance to its hidden intentions in dissatisfying others.
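
The bandwidth-splitting and reputation-update ideas above can be illustrated with a small Python sketch. The allocation rule, the selfishness parameter, and the exponential-smoothing reputation update below are hypothetical simplifications, not the utility-maximization formulation solved in the paper.

```python
# Illustrative sketch: a peer splits its access link between its own
# downloads and serving others; reputation tracks served vs. requested.
def allocate(link_capacity, own_demand, requested_by_others, selfishness):
    """selfishness in [0,1]: fraction of others' demand the peer declines to serve."""
    own = min(own_demand, link_capacity)
    leftover = link_capacity - own
    served = min(requested_by_others, leftover) * (1 - selfishness)
    return own, served

def update_reputation(old_rep, served, requested, alpha=0.3):
    satisfaction = served / requested if requested > 0 else 1.0
    return (1 - alpha) * old_rep + alpha * satisfaction   # exponential smoothing

own, served = allocate(link_capacity=10.0, own_demand=4.0,
                       requested_by_others=8.0, selfishness=0.5)
print("serves", served, "of 8.0 requested")
print("new reputation:", round(update_reputation(0.8, served, 8.0), 3))
```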

Proceedings Article•DOI•
02 Jun 2008
TL;DR: A dynamic control policy (DCP) is proposed that works on top of a given sub-optimal algorithm, and dynamically but in a large time-scale adjusts the time given to the algorithm according to queue backlog and channel correlations.
Abstract: It is well known that the generalized max-weight matching (GMWM) scheduling policy, and in general throughput-optimal scheduling policies, often require the solution of a complex optimization problem, making their implementation prohibitively difficult in practice. This has motivated many researchers to develop distributed sub-optimal algorithms that approximate the GMWM policy. One major assumption commonly shared in this context is that the time required to find an appropriate schedule vector is negligible compared to the length of a timeslot. This assumption may not be accurate as the time to find schedule vectors usually increases polynomially with the network size. On the other hand, we intuitively expect that for many sub-optimal algorithms, the schedule vector found becomes a better estimate of the one returned by the GMWM policy as more time is given to the algorithm. We thus, in this paper, consider the problem of scheduling from a new perspective through which we carefully incorporate channel variations and time-efficiency of sub-optimal algorithms into the scheduler design. Specifically, we propose a dynamic control policy (DCP) that works on top of a given sub-optimal algorithm, and dynamically but in a large time-scale adjusts the time given to the algorithm according to queue backlog and channel correlations. This policy does not require the knowledge of the structure of the given sub-optimal algorithm, and with low-overhead can be implemented in a distributed manner. Using a novel Lyapunov analysis, we characterize the stability region induced by DCP, and show that our characterization can be tight. We also show that the stability region of DCP is at least as large as the one for any other static policy. Finally, we provide two case studies to gain further intuition into the performance of DCP.

Proceedings Article•DOI•
02 Jun 2008
TL;DR: A cost-efficient video multicast method for live video streaming to heterogeneous mobile terminals over a content delivery network (CDN), where CDN consists of a video server, several proxies with wireless access points, and overlay links among the server and proxies.
Abstract: This paper presents a cost-efficient video multicast method for live video streaming to heterogeneous mobile terminals over a content delivery network (CDN), where the CDN consists of a video server, several proxies with wireless access points, and overlay links among the server and proxies. In this method, the original video sent from the server is converted into multiple versions with various qualities by letting proxies execute transcoding services based on the users' requirements, and delivered to mobile terminals along video delivery paths. To suppress the required computation and transfer costs in the CDN, we propose an algorithm to calculate cost-efficient video delivery paths that minimizes the sum of the computation cost for proxies and the transfer cost on overlay links. Our basic idea for deriving cost-efficient delivery paths is to place transcoding services on different proxies in a load-balancing manner, and to construct a minimal Steiner tree from all transcoding points of requested qualities. The overall goal of the placement is to balance computation and transfer costs. Through simulations, we show that our algorithm can calculate more cost-efficient video delivery paths and achieve lower request rejections than other algorithms.

Proceedings Article•DOI•
02 Jun 2008
TL;DR: A MAC layer solution called pushback is proposed, that appropriately delays packet transmissions to overcome periods of poor channel quality and high interference, while ensuring that the throughput requirement of the node is met.
Abstract: In sensor networks, application layer QoS requirements are critical to meet while conserving energy. One of the leading factors for energy wastage is failed transmission attempts due to channel dynamics and interference. Existing techniques are unaware of the channel dynamics and lead to suboptimal channel access patterns. We propose a MAC layer solution called pushback, that appropriately delays packet transmissions to overcome periods of poor channel quality and high interference, while ensuring that the throughput requirement of the node is met. It uses a hidden Markov model (HMM) based channel model that is maintained without any additional signaling overhead. The pushback algorithm is shown to improve the packet success rate by up to 71% and reduce the number of transmissions needed by up to 38% while ensuring the same throughput.