
Showing papers presented at the "International Workshop on Quality of Service" in 2004


Proceedings ArticleDOI
07 Jun 2004
TL;DR: A control-theoretic approach for admission control in multitiered Web sites that both prevents overload and enforces absolute client response times, while still maintaining high throughput under load.
Abstract: Managing the performance of multiple-tiered Web sites under high client loads is a critical problem with the advent of dynamic content and database-driven servers on the Internet. This paper presents a control-theoretic approach for admission control in multitiered Web sites that both prevents overload and enforces absolute client response times, while still maintaining high throughput under load. We use classical control theoretic techniques to design a proportional integral (PI) controller for admission control of client HTTP requests. In addition, we present a processor-sharing model that is used to make the controller self-tuning, so that no parameter setting is required beyond a target response time. Our controller is implemented as a proxy, called Yaksha, which operates by taking simple external measurements of the client response times. Our design is noninvasive and requires minimal operator intervention. We evaluate our techniques experimentally using a 3-tiered dynamic content Web site as a testbed. Using the industry standard TPC-W client workload generator, we study the performance of the PI admission controller with extensive experiments. We show that the controller effectively bounds the response times of requests for dynamic content while still maintaining high throughput levels, even when the client request rate is many times that of the server's maximum processing rate. We demonstrate the effectiveness of our self-tuning mechanism, showing that it responds and adapts smoothly to changes in the workload.

222 citations
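
As a concrete illustration of the control law described above, here is a minimal Python sketch of a PI admission controller (the class name, gains, and probabilistic-admission policy are assumptions for illustration, not the authors' code; in the paper the processor-sharing model makes the controller self-tuning, whereas fixed gains are used here to keep the sketch short):

    import random

    class PIAdmissionController:
        """Velocity-form PI controller that adjusts the admitted fraction
        of HTTP requests to hold the measured response time at a target."""

        def __init__(self, target_rt, kp=0.1, ki=0.05):
            self.target_rt = target_rt   # target response time (seconds)
            self.kp, self.ki = kp, ki    # proportional and integral gains
            self.prev_error = 0.0
            self.admit_prob = 1.0        # fraction of requests admitted

        def update(self, measured_rt):
            # Positive error: responses are faster than the target, admit more.
            error = self.target_rt - measured_rt
            # Incremental PI law: u(k) = u(k-1) + Kp*(e(k)-e(k-1)) + Ki*e(k)
            delta = self.kp * (error - self.prev_error) + self.ki * error
            self.prev_error = error
            self.admit_prob = min(1.0, max(0.0, self.admit_prob + delta))
            return self.admit_prob

        def admit(self):
            # The proxy rejects each incoming request with prob. 1 - admit_prob.
            return random.random() < self.admit_prob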


Proceedings ArticleDOI
07 Jun 2004
TL;DR: This work uses an on-line feedback loop with an adaptive controller that throttles storage access requests to ensure that the available system throughput is shared among workloads according to their performance goals and their relative importance.
Abstract: Ensuring performance isolation and differentiation among workloads that share a storage infrastructure is a basic requirement in consolidated data centers. Existing management tools rely on resource provisioning to meet performance goals; they require detailed knowledge of the system characteristics and the workloads. Provisioning is inherently slow to react to system and workload dynamics, and in the general case, it is impossible to provision for the worst case. We propose a software-only solution that ensures predictable performance for storage access. It is applicable to a wide range of storage systems and makes no assumptions about workload characteristics. We use an on-line feedback loop with an adaptive controller that throttles storage access requests to ensure that the available system throughput is shared among workloads according to their performance goals and their relative importance. The controller considers the system as a "black box" and adapts automatically to system and workload changes. The controller is distributed to ensure high availability under overload conditions, and it can be used for both block and file access protocols. The evaluation of Triage, our experimental prototype, demonstrates workload isolation and differentiation in an overloaded cluster file system where workloads and system components are changing.

124 citations


Proceedings ArticleDOI
07 Jun 2004
TL;DR: In this article, a rank-based peer-selection mechanism for peer-to-peer media streaming systems is proposed, where contributors to the system are rewarded with flexibility and choice in peer selection, resulting in high quality streaming sessions.
Abstract: We propose a rank-based peer-selection mechanism for peer-to-peer media streaming systems. The mechanism provides incentives for cooperation through service differentiation. Contributors to the system are rewarded with flexibility and choice in peer selection, resulting in high quality streaming sessions. Free-riders are given limited options in peer selection, if any, and hence receive low quality streaming. Through simulation and wide-area measurement studies, we verify that the mechanism can provide near optimal streaming quality to the cooperative users until the bottleneck shifts from the sources to the network.

120 citations
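
The rank-based idea lends itself to a short, hedged sketch: a peer's contribution determines its rank, and a requester may only select suppliers whose rank does not exceed its own, so heavy contributors enjoy the widest choice while free-riders see few candidates (the scoring rule and eligibility cutoff below are illustrative assumptions, not the paper's exact mechanism):

    import bisect

    def contribution_rank(peers, peer_id):
        # Rank in [0, 1]: fraction of peers contributing no more than this one.
        uploads = sorted(p["uploaded"] for p in peers.values())
        return bisect.bisect_right(uploads, peers[peer_id]["uploaded"]) / len(uploads)

    def eligible_suppliers(peers, requester):
        r = contribution_rank(peers, requester)
        # Free-riders (low rank) get few options; contributors get many.
        return [pid for pid in peers
                if pid != requester and contribution_rank(peers, pid) <= r]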


Proceedings ArticleDOI
07 Jun 2004
TL;DR: This work examines actual traces of request arrivals to the application tier of e-commerce sites, and shows that the arrival process is effectively Poisson, and derives three simple methods to approximate the allocation that maximizes profits.
Abstract: Server providers that support e-commerce applications as a service to multiple e-commerce websites traditionally use a tiered server architecture. This architecture includes an application tier to process requests that require dynamically generated content. How this tier is provisioned can significantly impact a provider's profit margin. We study methods to provision servers in the application serving tier to increase a server provider's profits. First, we examine actual traces of request arrivals to the application tier of e-commerce sites, and show that the arrival process is effectively Poisson. Next, we construct an optimization problem in the context of a set of application servers modeled as M/G/1/PS queueing systems, and derive three simple methods to approximate the allocation that maximizes profits. Simulation results demonstrate that our approximation methods achieve profits that are close to optimal and are significantly higher than those achieved via simple heuristics.

95 citations
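
For context, the M/G/1/PS model adopted above has a well-known insensitivity property: its mean response time depends on the service-time distribution only through its mean, which is what keeps provisioning formulas tractable once the arrivals are shown to be Poisson. With arrival rate \lambda and mean service demand \mathbb{E}[S],

    \[
    \mathbb{E}[T] \;=\; \frac{\mathbb{E}[S]}{1 - \rho},
    \qquad \rho = \lambda\, \mathbb{E}[S] < 1 .
    \]

Pricing the response times implied by each candidate split of a site's arrival stream across servers is, in spirit, the optimization that the paper's three methods approximate.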


Proceedings ArticleDOI
07 Jun 2004
TL;DR: This paper presents a small world overlay protocol (SWOP), the first piece of work that addresses how to handle dynamic flash crowds in a structured P2P network environment and shows that the SWOP protocol can achieve improved object lookup performance over the existing protocols.
Abstract: This paper considers the problem of how to construct and maintain an overlay structured P2P network based on the small world paradigm. Two main attractive properties of a small world network are (1) low average hop distance between any two randomly chosen nodes, and (2) high clustering coefficient of nodes. Having a low average hop distance implies a low latency for object lookup, while having a high clustering coefficient implies the underlying network can effectively provide object lookup even under heavy demands (for example, in a flash crowd scenario). We present a small world overlay protocol (SWOP) for constructing a small world overlay P2P network. We compare the performance of our system with that of other structured P2P networks such as Chord. We show that the SWOP protocol can achieve improved object lookup performance over the existing protocols. We also exploit the high clustering coefficient of a SWOP network to design an object replication algorithm that can effectively handle heavy object lookup traffic. As a result, a SWOP network can quickly and efficiently deliver popular and dynamic objects to a large number of requesting nodes. To the best of our knowledge, ours is the first piece of work that addresses how to handle dynamic flash crowds in a structured P2P network environment.

61 citations
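
The "long link" construction that gives small-world graphs their low hop count can be sketched generically (this is the classic harmonic-distance rule on a circular ID space, an illustration of the small-world paradigm rather than SWOP's own cluster-based protocol):

    import random

    def pick_long_links(my_id, id_space, k=3):
        links = []
        for _ in range(k):
            # d = id_space**u with u ~ U(0,1) samples P(d) proportional to 1/d,
            # the harmonic distribution behind poly-logarithmic small-world routing.
            d = max(1, int(id_space ** random.random()))
            links.append((my_id + d) % id_space)
        return links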


Proceedings ArticleDOI
07 Jun 2004
TL;DR: The goal of this paper is to find the max-min rate allocation in overlay multicast, which is Pareto-optimal in terms of network resource utilization and max-min fair; a heuristic algorithm for overlay multicast tree construction is designed to handle dynamic client join/departure.
Abstract: Although initially proposed as the deployable alternative to IP multicast, overlay multicast actually offers great flexibility in QoS-aware resource allocation for network applications. For example, in overlay multicast streaming, (1) the streaming rate of each client can be diversified to better accommodate network heterogeneity, through various end-to-end streaming adaptation techniques; and (2) one can freely organize the overlay session by rearranging the multicast tree, for the purpose of better resource utilization and fairness among all clients. The goal of this paper is to find the max-min rate allocation in overlay multicast, which is Pareto-optimal in terms of network resource utilization and max-min fair. We approach this goal in two steps. First, we present a distributed algorithm that returns the max-min rate allocation for any given overlay multicast tree. Second, we study the problem of finding the optimal tree, whose max-min rate allocation is optimal among all trees. After proving its NP-hardness, we propose a heuristic algorithm for overlay multicast tree construction. A variation of the heuristic is also designed to handle dynamic client join/departure. Both have an approximation bound of 1/2 relative to the optimal value. Experimental results show that they achieve high average throughput, near-saturated link utilization, and consistent min-favorability.

45 citations
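
The max-min objective itself is easy to state in code: raise all rates together and freeze a flow as soon as a link on its path saturates. The centralized progressive-filling sketch below illustrates the allocation that the paper's distributed, tree-based algorithm computes (the data layout is an assumption, and every flow is assumed to cross at least one capacitated link):

    def max_min_rates(flows, capacity):
        # flows: {flow_id: set of links on its path}; capacity: {link: capacity}
        rate = {f: 0.0 for f in flows}
        residual = dict(capacity)
        frozen = set()
        while len(frozen) < len(flows):
            # Count still-growing flows on each link.
            load = {l: sum(1 for f in flows if f not in frozen and l in flows[f])
                    for l in residual}
            inc = min(residual[l] / n for l, n in load.items() if n > 0)
            for f in set(flows) - frozen:
                rate[f] += inc
                for l in flows[f]:
                    residual[l] -= inc
            saturated = {l for l, n in load.items() if n > 0 and residual[l] <= 1e-9}
            frozen |= {f for f in flows if f not in frozen and flows[f] & saturated}
        return rate

For example, two flows sharing a unit-capacity link, one of which also crosses a 0.25-capacity link, receive rates 0.25 and 0.75.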


Proceedings ArticleDOI
07 Jun 2004
TL;DR: This study addresses the problem of the topological synthesis of a service overlay network, where endsystems and nodes of the overlay network (provider nodes) are connected through ISPs that supports bandwidth reservations and expresses the topology design problem as an optimization problem.
Abstract: The Internet still lacks adequate support for QoS applications with real-time requirements. In great part, this is due to the fact that provisioning of end-to-end QoS to traffic that traverses multiple autonomous systems (ASs) requires a level of cooperation between ASs that is difficult to achieve in the current architecture. Recently, service overlay networks have been considered as an approach to QoS deployment that avoids these difficulties. In this study, we address the problem of the topological synthesis of a service overlay network, where endsystems and nodes of the overlay network (provider nodes) are connected through ISPs that supports bandwidth reservations. We express the topology design problem as an optimization problem. Even though the design problem is related to the (in general NP-hard) quadratic assignment problem, we are able to show that relatively simple heuristic algorithms can deliver results that are sometimes close to the optimal solution.

45 citations


Proceedings ArticleDOI
07 Jun 2004
TL;DR: This work proves that REED can asymptotically achieve k-fault tolerance if certain constraints on node density are satisfied, investigates via simulations the clustering properties of REED, and shows that building multiple cluster head overlays does not consume significant energy.
Abstract: Clustering sensor nodes increases the scalability and energy efficiency of communications among them. In hostile environments, unexpected failures or attacks on cluster heads (through which communication takes place) may partition the network or degrade application performance. In this work, we propose a new approach, REED (Robust Energy Efficient Distributed clustering), for clustering sensors deployed in hostile environments. Our primary objective is to construct a k-fault-tolerant (i.e., k-connected) network, where k is a constant determined by the application. Fault tolerance can be achieved by selecting k independent sets of cluster heads (i.e., cluster head overlays) on top of the physical network, so that each node can quickly switch to other cluster heads in case of failures or attacks on its current cluster head. The independent cluster head overlays also provide multiple vertex-disjoint routing paths for load balancing and security. Network lifetime is prolonged by selecting cluster heads with high residual energy and low communication cost, and periodically reclustering the network in order to distribute energy consumption among sensor nodes. We prove that REED can asymptotically achieve k-fault tolerance if certain constraints on node density are satisfied. We also investigate via simulations the clustering properties of REED, and show that building multiple cluster head overlays does not consume significant energy.

38 citations
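
A hedged sketch of the overlay-construction idea: elect k node-disjoint sets of cluster heads so that every sensor keeps k independent heads to fail over between (REED's actual election is iterative and probabilistic; the greedy residual-energy cover below is only an illustration of why node density bounds the achievable k):

    def build_overlays(nodes, neighbors, k):
        # nodes: {id: residual_energy}; neighbors: {id: set of ids in radio range}
        overlays, used = [], set()
        for _ in range(k):
            heads, uncovered = set(), set(nodes)
            for n in sorted(nodes, key=lambda i: -nodes[i]):
                if n in used:
                    continue                  # overlays must be node-disjoint
                if uncovered & (neighbors[n] | {n}):
                    heads.add(n)              # n covers some uncovered node
                    uncovered -= neighbors[n] | {n}
            if uncovered:
                break                         # density too low for another overlay
            overlays.append(heads)
            used |= heads
        return overlays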


Proceedings ArticleDOI
07 Jun 2004
TL;DR: A set of metrics for service availability is defined to quantify the performance of IP backbone networks and capture the impact of routing dynamics on packet forwarding and goodness factors based on service availability are derived.
Abstract: Traditional SLAs, defined by average delay or packet loss, often camouflage the instantaneous performance perceived by end-users. We define a set of metrics for service availability to quantify the performance of IP backbone networks and capture the impact of routing dynamics on packet forwarding. Given a network topology and its link weights, we propose a novel technique to compute the associated service availability by taking into account transient routing dynamics and operational conditions, such as BGP table size and traffic distributions. Even though there are numerous models for characterizing topologies, none of them provide insights on the expected performance perceived by end customers. Our simulations show that the amount of service disruption experienced by similar networks (i.e., with similar intrinsic properties such as average out-degree or network diameter) could be significantly different, making it imperative to use new metrics for characterizing networks. In the second part of the paper, we derive goodness factors based on service availability viewed from three perspectives: ingress node (from one node to many destinations), link (traffic traversing a link), and network-wide (across all source-destination pairs). We show how goodness factors can be used in various applications and describe our numerical results.

28 citations


Proceedings ArticleDOI
07 Jun 2004
TL;DR: A two-tier architecture that acts as an interface between application QoS requirements and network-provided QoS capabilities and enhances generality and scalability in providing QoS support for Internet applications and end-devices is designed and implemented.
Abstract: Paving the first mile of quality-of-service (QoS) support has become essential for full deployment and utilization of QoS proposals in the Internet. Most existing efforts provide network services and control without paying much attention to how applications can use these services. In this paper, we design and implement a two-tier architecture, called QoS Gateway (QoSGW), that acts as an interface between application QoS requirements and network-provided QoS capabilities. The QoSGW is designed to support small embedded network devices that rely on network-provided QoS. Our architecture, in its full version, is composed of two key components: (i) an agent that resides on the end-host and provides an adequate interface for QoS-dependent applications, and (ii) a QoS manager that provides an interface to network services for the agents. Together, these two components enhance generality and scalability in providing QoS support for Internet applications and end-devices. The QoSGW is intended to promote QoS deployment and facilitate the construction of QoS-aware access networks.

23 citations


Proceedings ArticleDOI
07 Jun 2004
TL;DR: This paper analyzes by simulation the benefit of bounding the stochastic processes of a queue with methods from network calculus, and discusses how the results can be used for dimensioning buffers for multiplexed traffic.
Abstract: Quality of Service (QoS) is an area of intense academic interest. Our long-term goal is to develop a unified mathematical model. This paper is a first step towards this ambitious goal. The most widespread models for network QoS are network calculus and queueing theory. While the strength of queueing theory is its proven applicability to a wide range of problems, network calculus can offer performance guarantees. We analyse by simulation the benefit of bringing the two together, i.e., bounding the stochastic processes of a queue with methods from network calculus. A basic result from network calculus is that enforcing traffic shaping and service curves bounds the buffer occupancy. This renders some buffer states unreachable even in queues with infinite buffers. Specifically, we analyse what happens to the probability mass of such buffer states. Finally, we discuss how our results can be used for dimensioning buffers for multiplexed traffic.
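
The "basic result" invoked above can be made concrete. For a flow shaped to a token-bucket arrival curve \alpha(t) = b + rt and served with a rate-latency service curve \beta(t) = R\,(t-T)^{+}, with R \ge r, standard network calculus gives the bounds

    \[
    B_{\max} \;=\; b + rT,
    \qquad
    d_{\max} \;=\; T + \frac{b}{R} .
    \]

Hence, once shaping and the service curve are enforced, backlog states above b + rT are unreachable even with an infinite buffer; these are exactly the denied buffer states whose probability mass the simulations examine.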

Proceedings ArticleDOI
07 Jun 2004
TL;DR: This paper investigates client-centered techniques for trading download time for energy savings during TCP downloads, in an attempt to reduce the energy-delay product.
Abstract: In mobile devices, the wireless network interface card (WNIC) consumes a significant portion of overall system energy. One way to reduce energy consumed by a mobile device is to transition its WNIC to a lower-power sleep mode when data is not being received or transmitted. This paper investigates client-centered techniques for trading download time for energy savings during TCP downloads, in an attempt to reduce the energy-delay product. Effectively saving WNIC energy during a TCP download is difficult because TCP streams tend to be smooth, leaving little potential sleep time. The basic idea behind our technique is that the client increases the amount of time that can be spent in sleep mode by shaping the traffic. In particular, the client convinces the server to send data in predictable bursts, trading lower WNIC energy cost for increased transmission time. Our technique does not rely on any assistance from the server, a proxy, or IEEE 802.11b power-saving mode. Results show that in Internet experiments our scheme outperforms baseline TCP by 64% in the best case, with an average improvement of 19%.
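
A back-of-the-envelope model shows why bursts save energy even though they stretch the download (the power figures and the stretch factor below are illustrative assumptions, not measurements from the paper):

    def energy_delay_products(total_bytes, link_Bps, stream_Bps,
                              stretch=1.2, p_rx=1.4, p_sleep=0.05):
        # Baseline: a smooth stream at stream_Bps keeps the WNIC awake for the
        # whole transfer, since inter-packet gaps are too short to sleep through.
        base_t = total_bytes / stream_Bps
        base_e = base_t * p_rx
        # Shaped: the client elicits bursts at full link speed (link_Bps) and
        # sleeps the WNIC in between, accepting a `stretch`-fold longer download.
        t_awake = total_bytes / link_Bps      # link_Bps >= stream_Bps assumed
        shaped_t = stretch * base_t
        shaped_e = t_awake * p_rx + (shaped_t - t_awake) * p_sleep
        return base_e * base_t, shaped_e * shaped_t   # energy*delay products

With, say, a 10 MB download streamed at 500 kB/s versus 5 MB/s bursts, the shaped energy-delay product falls several-fold despite a 20% longer download.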

Proceedings ArticleDOI
07 Jun 2004
TL;DR: An algorithm for the design and on-the-fly modification of the schedule of a wireless ad hoc network, providing fair service guarantees under topological changes; it is proved that, from any initial condition, the algorithm finds the max-min fair rate allocation in the fluid model.
Abstract: This paper proposes an algorithm for the design and on-the-fly modification of the schedule of a wireless ad hoc network for the provision of fair service guarantees under topological changes. The primary objective is to derive a distributed coordination method for schedule construction and modification for any wireless ad hoc network operating under a schedule where transmissions at each slot are explicitly specified over a time period of length T. We first introduce a fluid model of the system where the conflict avoidance requirements of neighboring links are relaxed while the aspect of local channel sharing is captured. In this model we propose an algorithm where the nodes asynchronously re-adjust the rates allocated to their adjacent links using only local information. We prove that, from any initial condition, the algorithm finds the max-min fair rate allocation in the fluid model. Hence, if the iteration is performed continually, the rate allocation will track the optimum even under constantly changing topology. Then we consider the slotted system and propose a modification method that applies directly to the slotted schedule, emulating the effect of the rate re-adjustment iteration of the fluid model. Through extensive experiments in networks with both fixed and time-varying topologies we show that the latter algorithm achieves balanced rate allocations in the actual slotted system that are very close to the max-min fair rates. The experiments also show that the algorithm is very robust to topology variations, with very good tracking of the max-min fair rate allocation.

Proceedings ArticleDOI
07 Jun 2004
TL;DR: The energy consumption of forward error correction protocols as used to improve quality-of-service (QoS) for wireless computing devices is addressed and adaptive software mechanisms that attempt to manage these tradeoffs in the presence of highly dynamic wireless environments are developed.
Abstract: This paper addresses the energy consumption of forward error correction (FEC) protocols as used to improve quality-of-service (QoS) for wireless computing devices. The paper also characterizes the effect on energy consumption and QoS of the power saving mode in 802.11 wireless local area networks (WLANs). Experiments are described in which FEC-encoded audio streams are multicast to mobile computers across a WLAN. Results of these experiments quantify the tradeoffs between improved QoS, due to FEC, and additional energy consumption caused by receiving and decoding redundant packets. Two different approaches to FEC are compared relative to these metrics. The results of this study enable the development of adaptive software mechanisms that attempt to manage these tradeoffs in the presence of highly dynamic wireless environments.

Proceedings ArticleDOI
07 Jun 2004
TL;DR: This paper presents the first wide-area performance evaluation of an algorithm designed to bound maximum content access latency, as opposed to optimizing an average performance metric.
Abstract: This paper investigates the performance of a content distribution network designed to provide bounded content access latency. Content can be divided into multiple classes with different configurable per-class delay bounds. The network uses a simple distributed algorithm to dynamically select a subset of its proxy servers for different classes such that a global per-class delay bound is achieved on content access. The content distribution algorithm is implemented and tested on PlanetLab, a world-wide distributed Internet testbed. Evaluation results demonstrate that despite Internet delay variability, subsecond delay bounds (of 200-500 ms) can be guaranteed with a very high probability at only a moderate content replication cost. The distribution algorithm achieves a four- to five-fold reduction in the number of response-time violations compared to prior content distribution approaches that attempt to minimize average latency. This paper presents the first wide-area performance evaluation of an algorithm designed to bound maximum content access latency, as opposed to optimizing an average performance metric.
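
The flavor of the per-class selection problem can be conveyed with a centralized greedy cover (the paper's algorithm is distributed; the data layout and greedy rule here are assumptions for illustration):

    def select_proxies(delay_ms, bound_ms):
        # delay_ms[p][c]: measured delay from candidate proxy p to client c.
        clients = {c for cmap in delay_ms.values() for c in cmap}
        chosen = set()
        while clients:
            # Add the proxy that covers the most unserved clients within bound.
            best = max(delay_ms, key=lambda p: sum(
                1 for c in clients if delay_ms[p].get(c, float("inf")) <= bound_ms))
            covered = {c for c in clients
                       if delay_ms[best].get(c, float("inf")) <= bound_ms}
            if not covered:
                raise ValueError("delay bound infeasible for remaining clients")
            chosen.add(best)
            clients -= covered
        return chosen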

Proceedings ArticleDOI
07 Jun 2004
TL;DR: This paper proposes a methodology that circumvents limitations by employing arbitrary packet traces and successively matching the encoded speech sample with all possible trace fragments for continuous perceptual evaluation of VoIP traffic carried over various QoS-enabled transmission technologies.
Abstract: With the increasing deployment of real-time Internet services, evaluating the user perception of quality of service (QoS) has gained rapidly increasing importance. In the case of Voice over IP (VoIP), the standard approach of listening-only tests for subjectively assessing a limited number of speech samples, which are supposed to be representative of selected network conditions, in no way reflects the huge variability of packet loss patterns that may originate from the underlying network. Performing tests with objective (instrumental) evaluation methods in a live testbed environment is usually laborious and does not deliver reproducible results; moreover, the measurement granularity is bounded by the length of the test speech samples. In this paper, we propose a methodology that circumvents these limitations by employing arbitrary packet traces and successively matching the encoded speech sample with all possible trace fragments. This approach allows for continuous perceptual evaluation of VoIP traffic carried over various QoS-enabled transmission technologies. Results based on traces from testbed measurements, reflecting different Web-like cross-traffic situations for both the G.729 and iLBC codecs, validate our approach and allow interesting insights into the dependence of perceived VoIP quality on underlying technological conditions.
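
The core trick is compact: instead of one listening test per network condition, slide the speech sample over every fragment of a recorded packet trace and score each resulting loss pattern with an objective metric (fragment extraction sketched below; the per-pattern scoring would use an instrumental method such as PESQ):

    def loss_patterns(trace, sample_len):
        # trace: per-packet booleans from a testbed capture (True = lost).
        # Each length-sample_len fragment yields one loss pattern to impose
        # on the encoded speech sample for objective perceptual scoring.
        return [trace[i:i + sample_len]
                for i in range(len(trace) - sample_len + 1)]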

Proceedings ArticleDOI
07 Jun 2004
TL;DR: This paper proposes a simple scheme, BoundedRandomDrop (BRD), that supports multiple service classes and focuses on loss differentiation, as the steadily rising speed of Internet links is progressively limiting the impact of delay differentiation.
Abstract: Today's Internet carries an ever-broadening range of application traffic with different requirements. This has stressed its original, one-class, best-effort model, and has been one of the main drivers behind the many efforts aimed at introducing QoS. Those efforts have, however, experienced only limited success because their added complexity often conflicts with the scalability requirements of the Internet. This has motivated many proposals that try to offer service differentiation while keeping complexity low. This paper shares similar goals and proposes a simple scheme, BoundedRandomDrop (BRD), that supports multiple service classes. BRD focuses on loss differentiation: although both loss and delay are important performance parameters, the steadily rising speed of Internet links progressively limits the impact of delay differentiation. BRD offers strong loss differentiation capabilities with minimal added cost. BRD does not require traffic profiles or admission controls. It guarantees each class losses that, when feasible, are no worse than a specified bound, and enforces differentiation only when required to meet those bounds. In addition, BRD is implemented using a single FIFO queue and a simple random dropping mechanism. The performance of BRD is investigated for a broad range of traffic mixes and shown to consistently achieve its design goals.
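
A hedged sketch of bound-respecting dropping in a single FIFO: track per-class loss ratios and, when the buffer forces a drop, pick a victim from the class with the most slack under its loss bound (the victim rule and the estimator below are assumptions for illustration, not BRD's exact specification):

    class BoundedRandomDropSketch:
        def __init__(self, loss_bounds):
            self.bounds = dict(loss_bounds)            # class -> max loss ratio
            self.arrived = {c: 0 for c in loss_bounds}
            self.dropped = {c: 0 for c in loss_bounds}

        def on_arrival(self, cls):
            self.arrived[cls] += 1

        def pick_victim(self, backlogged_classes):
            # Slack = distance of the class's current loss ratio below its bound.
            def slack(c):
                return self.bounds[c] - self.dropped[c] / max(self.arrived[c], 1)
            victim = max(backlogged_classes, key=slack)
            self.dropped[victim] += 1
            return victim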

Proceedings ArticleDOI
07 Jun 2004
TL;DR: A new measurement mechanism is presented, facilitated by the steady introduction of IPv6 in network nodes and hosts, which exploits native features of the protocol to provide support for performance measurements at the network (IP) layer.
Abstract: Measurement-based performance evaluation of network traffic is becoming very important, especially for networks trying to provide differentiated levels of service quality to the different application flows. The nonidentical response of flows to the different types of network-imposed performance degradation raises the need for ubiquitous measurement mechanisms, able to measure numerous performance properties and equally applicable to different applications and transports. This paper presents a new measurement mechanism, facilitated by the steady introduction of IPv6 in network nodes and hosts, which exploits native features of the protocol to provide support for performance measurements at the network (IP) layer. IPv6 Extension Headers have been used to carry the triggers invoking the measurement activity and the measurement data in-line with the payload data itself, providing a high level of probability that the behaviour of the real user traffic flows is observed. End-to-end one-way delay, jitter, loss, and throughput have been measured for applications operating on top of both reliable and unreliable transports, over different-capacity IPv6 network configurations. We conclude that this technique could form the basis for future Internet measurements that can be dynamically deployed where and when required in a multiservice IP environment.

Proceedings ArticleDOI
07 Jun 2004
TL;DR: As an aggregate-based work-conserving scheduling algorithm, CAS incurs lower scheduling and state-maintenance overheads at routers than per-flow scheduling, and provides tighter end-to-end (e2e) delay bounds than the "vanilla" GR aggregate scheduling that relies on FIFO queueing within an aggregate.
Abstract: This paper proposes a novel coordinated aggregate scheduling (CAS) algorithm that combines both EDF (earliest-deadline-first) scheduling and rate-based fair queueing. CAS uses guaranteed rate (GR) scheduling (Goyal et al., 1995) for traffic aggregates at the inter-aggregate level, but employs EDF-like scheduling at the intra-aggregate level. Computation of the deadline D_N of a packet at an intermediate node N is coordinated between the node N and its upstream nodes, and D_N is related to the packet's guaranteed rate clock (GRC) value at the flow-aggregation node. CAS provides tighter end-to-end (e2e) delay bounds than the "vanilla" GR aggregate scheduling that relies on FIFO queueing within an aggregate. Our in-depth simulation results demonstrate CAS's superior performance. Moreover, as an aggregate-based work-conserving scheduling algorithm, CAS incurs lower scheduling and state-maintenance overheads at routers than per-flow scheduling. These salient features make CAS very attractive for use in Internet core networks.
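
For reference, the guaranteed rate clock (GRC) of Goyal et al. (1995), which CAS coordinates across nodes, assigns the j-th packet of an aggregate with guaranteed rate r the value

    \[
    \mathrm{GRC}(p^{j}) \;=\; \max\!\bigl(A(p^{j}),\, \mathrm{GRC}(p^{j-1})\bigr) + \frac{l^{j}}{r},
    \qquad \mathrm{GRC}(p^{0}) = 0,
    \]

where A(p^j) is the packet's arrival time and l^j its length; CAS derives the intra-aggregate deadline D_N at a node N from the GRC value assigned at the flow-aggregation node.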

Proceedings ArticleDOI
07 Jun 2004
TL;DR: A decay function model of request scheduling algorithms for the resource configuration and allocation problem is presented and it is shown that optimal fixed-time scheduling can effectively reduce the server workload variance and guarantee service deadlines with high robustness on the Internet.
Abstract: Server-side resource configuration and allocation for QoS guarantees is a challenge in performance-critical Internet applications. To overcome the difficulties caused by the high variability and burstiness of Internet traffic, this paper presents a decay function model of request scheduling algorithms for the resource configuration and allocation problem. Under the decay function model, request scheduling is modelled as a transfer-function-based filter system with an input process of requests and an output process of server load. Unlike conventional queueing network models that rely on mean-value analysis for renewal or Markovian input processes, the decay function model works for general time-series-based or measurement-based processes, and hence facilitates the study of statistical correlations between the request traffic, server load, and QoS of requests. Based on the model, we apply filter design theories from signal processing to the optimality analysis of various scheduling algorithms. We formalize the relationship between server capacity, scheduling policy, service deadline, and other request properties, and present the optimality condition with respect to the second moments of request properties for an important class of fixed-time scheduling policies. Simulation results verify the relationship and show that optimal fixed-time scheduling can effectively reduce the server workload variance and guarantee service deadlines with high robustness on the Internet.
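
The filter view is compact enough to state directly: the output load series is the convolution of the admitted-request series with a per-request load profile, i.e. the decay function. A minimal sketch, with both series as illustrative inputs:

    def server_load(arrivals, decay):
        # arrivals[t]: requests admitted in slot t.
        # decay[k]: load one request still imposes k slots after admission.
        T, K = len(arrivals), len(decay)
        return [sum(arrivals[t - k] * decay[k] for k in range(min(K, t + 1)))
                for t in range(T)]

A scheduling policy corresponds to a choice of decay function, which is what lets filter-design arguments bound the variance of the resulting server load.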

Proceedings ArticleDOI
07 Jun 2004
TL;DR: A new backward preemption policy for connection preemption is introduced that reduces the number of redundant preemptions and can be integrated into the MPLS-TE (multiprotocol label switching-traffic engineering) framework.
Abstract: Connection preemption plays an important role in multiclass QoS-aware networks. Most of the preemption research has focussed on the algorithms for choosing the appropriate set of preempted connections with minimal cost, in either a centralized or decentralized manner. In this paper, we introduce a new backward preemption policy for connection preemption. This policy reduces the number of redundant preemptions. We show how it can be integrated into the MPLS-TE (multiprotocol label switching-traffic engineering) framework. The policy can also be applied to other types of network control protocols with a two-way signaling process for setting up constraint-based routes with resource reservation. The proposed policy is also independent of the preemption decision algorithm and can work in both centralized and decentralized approaches. We examine the performance of the proposed approach by simulation studies.

Proceedings ArticleDOI
07 Jun 2004
TL;DR: This framework may be used to better understand QoS and to define and implement solutions for QoS management functions; it is a step towards consolidating experimental results on QoS specification into the standard languages and models needed for efficient, global deployment of QoS in future networks.
Abstract: Many QoS management functions (such as routing) depend on the type and quality of the QoS specification. Moreover, QoS may be provided along a path traversing various administrative domains where different QoS solutions exist. Consequently, the QoS specification should be clear, useful for allocating resources, and verifiable. It should also be described in a common (formal) language. No previous work has given complete formal operators to manipulate and reason about QoS. This paper proposes a generic framework for formalizing the concept of QoS and the associated operations. This framework may be used to better understand QoS and to define and implement solutions for QoS management functions. It is a step towards consolidating experimental results on QoS specification into the standard languages and models necessary for efficient, global deployment of QoS in future networks.

Proceedings ArticleDOI
07 Jun 2004
TL;DR: This paper explores node mobility in the search for the optimal packet delivery routes subject to the QoS criteria such as delay and energy consumption by adopting a deterministic model in a noninterfering mobile ad hoc network (MANET).
Abstract: A mobile wireless network experiences random variations due to node mobility, which may potentially be exploited for more cost-effective communications. The pioneering work by Grossglauser and Tse (2001) first demonstrated that a network with a sufficient amount of (random) mobility could provide a larger scaling rate of throughput capacity than a static network, at the cost of significant and potentially unbounded end-to-end delay. Subsequent works have addressed the issue of the capacity gain under bounded delay. In this paper, we take a rather different approach in that we explore node mobility in the search for optimal packet delivery routes subject to QoS criteria such as delay and energy consumption. This is done by adopting a deterministic model of a noninterfering mobile ad hoc network (MANET). We present polynomial-time algorithms for finding these optimal routes. Specifically, in a system without power control capability, where the transmission range of each node is fixed, we seek optimal routes with minimal end-to-end delivery time or lowest total power consumption along the path, respectively. For both formulations, we propose hop-expansion-based algorithms that carry out the computations inductively over the number of hops. In a system with power control, we seek the minimum-energy route subject to a certain end-to-end delay constraint. For this optimization, we propose a layered algorithm that performs the computations inductively over the discrete time periods.

Proceedings ArticleDOI
07 Jun 2004
TL;DR: It is shown that the two traffic controls can coexist without starvation, and the proposed scheme might thus provide a first step towards differentiated services end-to-end.
Abstract: This paper proposes a scheme for quality of service differentiation for single-service networks that is based on the use of two separate forms of traffic control at the transport layer: Streams are controlled by means of probe-based admission control and elastic flows are controlled by TCP. The controls allow separation of traffic into two distinct service classes. The stream class is designed to provide a consistent quality for interactive audiovisual communication, as favored by human perception. It is responsive to load variations as an aggregate through blocking of sessions, while TCP is responsive on the flow level. Streams can be isolated against disturbances from probes and TCP flows by means of error-control coding. We show that the two traffic controls can coexist without starvation, and the proposed scheme might thus provide a first step towards differentiated services end-to-end.
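
The stream-class control is easy to sketch: probe the path briefly and admit the session only if probe loss stays under a threshold (the packet count and threshold are assumptions; a real endpoint would probe at the stream's own rate and packet size):

    def probe_admit(send_and_count, n_probes=500, max_loss=0.02):
        # send_and_count(n) transmits n probe packets toward the receiver and
        # returns how many arrived, e.g. as reported over a control channel.
        received = send_and_count(n_probes)
        loss = 1.0 - received / n_probes
        return loss <= max_loss    # admit the stream only if the path is clean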

Proceedings ArticleDOI
07 Jun 2004
TL;DR: A simple mechanism to monitor throughput of distinct services across EGPRS networks, which enables a proactive and timely supervision of service level agreements and quality and shows good agreement between throughput measurements in the network and the corresponding measured values at the application layer in the mobile terminals.
Abstract: GPRS/EDGE networks provide an opportunity for operators to offer new services to their potential and existing subscribers. Unlike circuit-switched services, EGPRS services depend on a multitude of factors to guarantee timely content delivery. To ensure that the contracted QoS is sustained, it is not sufficient to overprovision resources. In addition, operators need tools for monitoring the QoS experienced by diverse packet flows in different network segments. This paper presents theoretically, and confirms experimentally, a simple mechanism to monitor the throughput of distinct services across EGPRS networks, which enables proactive and timely supervision of service level agreements and quality. The approach is based on the treatment class concept, which maps services with different performance requirements onto distinct QoS profiles. By means of treatment classes, defined by a subset of the bearer attributes, i.e., bit rates, priorities, and traffic classes, it is possible to formulate metrics to measure the performance of distinct services within the network domains, without any visibility of the content carried by upper-layer protocols. Experimental results show good agreement between throughput measurements in the network and the corresponding measured values at the application layer in the mobile terminals.

Proceedings ArticleDOI
09 Jun 2004
TL;DR: This paper adapts Class Based Queueing to schedule critical real-time flows mixed with other kinds of traffic, as is necessary in a DiffServ environment, and uses network calculus techniques to derive a bound that appears realistic.
Abstract: Class Based Queueing (CBQ) is a packet scheduling discipline that enables hierarchical link-sharing. Compared to other algorithms, it is modular and intuitive at first approach, and is therefore implemented and used today. In this paper, we adapt the discipline to schedule critical real-time flows mixed with other kinds of traffic, as is necessary in a DiffServ environment. This requires that some guarantees be provided deterministically, particularly on queueing delay bounds. Yet theoretical delay bounds for CBQ have never been expressed in the general case with an end-to-end derivation, because the nesting of mechanisms makes it hard to predict a worst-case scenario. Here we study some cases where an analysis is possible, focusing on two variants of CBQ, and we use network calculus techniques to derive a bound that appears realistic. We then show simulations to check the precision of our results.

Proceedings ArticleDOI
07 Jun 2004
TL;DR: This paper elaborates on a paradigm for quality-of-service (QoS) that is local, i.e., it does not depend on multinode cooperation and individuate flows that occupy a large fraction of a buffer and segregate those flows into a separate queue to maintain short queuing delays.
Abstract: This paper elaborates on a paradigm for quality-of-service (QoS) that is local, i.e., it does not depend on multinode cooperation. In order to maintain short queuing delays, we individuate flows that occupy a large fraction of a buffer and segregate those flows into a separate queue. The algorithm is provably fair and can avoid all packet reorderings. We show through extensive simulations that state requirements are minimal, and that other flows will benefit from short queuing delays while aggressive flows can still maintain high throughput.
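
The segregation rule itself is one predicate: monitor each flow's share of the shared buffer and move any flow exceeding a threshold into a separate queue (the threshold and data layout are assumptions for illustration):

    def segregated_flows(buffer_bytes_by_flow, buffer_capacity, frac=0.25):
        # Flows holding more than `frac` of the buffer are dequeued from a
        # separate queue, keeping queuing delays short in the common queue.
        return {f for f, b in buffer_bytes_by_flow.items()
                if b > frac * buffer_capacity}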

Proceedings ArticleDOI
07 Jun 2004
TL;DR: This work presents a mechanism for bandwidth allocation based on differential pricing that produces allocations that provably satisfy a variant of the classical notion of max-min fairness and ensures that flows with higher QoS requirements need to pay at higher rates to increase their likelihood of being served.
Abstract: Equitable bandwidth allocation is essential when QoS requirements and purchasing power vary among users. To this end, we present a mechanism for bandwidth allocation based on differential pricing. In our model, the QoS vs. cost trade-off induces a minimum acceptable allocation, a maximum acceptable allocation, and a unique optimal allocation for each user. We analyze the fairness and truthfulness properties of our mechanism from a game-theoretic perspective. We show that it produces allocations that provably satisfy a variant of the classical notion of max-min fairness. It ensures that flows with higher QoS requirements must pay at higher rates to increase their likelihood of being served. Furthermore, the Nash equilibrium induced by our mechanism leads to allocations that are comparable to "socially optimal" allocations; hence users gain very little by being untruthful.

Proceedings ArticleDOI
07 Jun 2004
TL;DR: This work proposes the concept of movement contracts, where a mobile terminal specifies its movement to the network, which in return provides a better QoS as long as the provided specification is sufficiently accurate.
Abstract: Resource management for individual flows can significantly improve quality of service (QoS) in mobile cellular networks. However, its efficiency depends on the availability of information about the movement of mobile terminals. Movement prediction can potentially provide this information, but is costly if performed by the network and usually assumes a certain movement model, which may not adequately reflect each individual user's behavior. Instead, we propose the concept of movement contracts, where a mobile terminal specifies its movement to the network, which in return provides a better QoS as long as the provided specification is sufficiently accurate. We describe approaches to specify the spatial and temporal aspects of movement with a parsimonious parameter set and evaluate these approaches through simulations. We find that movement contracts can significantly reduce both session blocking and handover dropping probability simultaneously, whereas existing approaches have to make a tradeoff between these.

Proceedings ArticleDOI
07 Jun 2004
TL;DR: It is argued that suites of policies need to be studied, in order to understand their impacts on the overall file distribution performance, and a system model is defined for the analysis of these policies.
Abstract: This paper proposes to study the impact of a suite of policies on the performance of a multifile distribution system that integrates CDN and P2P techniques. One of the policies is the peer contribution policy, which sets limits on the data rate and data volume contributed by each peer. The peer contribution policy is critical to maintaining the system's overall file distribution capacity without unfairly overloading individual peers. In our previous work, we presented an analytical framework for modeling a hybrid CDN-P2P architecture under a file-specific peer contribution policy. In this paper, we focus on a different scenario where multiple files are being distributed and the peer contribution policy is file-independent. We argue that suites of policies need to be studied in order to understand their impacts on the overall file distribution performance. The policies include: (1) a file-independent peer contribution policy, (2) a file request admission policy, (3) a supplier selection policy, and (4) a file replacement policy. We define a system model for the analysis of these policies. Based on the model, we also propose possible definitions of the policies.