
Showing papers presented at "International Workshop on Quality of Service in 2001"


Book ChapterDOI
06 Jun 2001
TL;DR: In this paper, the authors propose a hybrid approach that uses both globally exchanged link state metrics and locally collected path state metrics for proportioning traffic among the selected paths, and compare the performance of their approach with that of global optimal proportioning and show that the proposed approach yields near-optimal performance using only a few paths.
Abstract: Multipath routing schemes distribute traffic among multiple paths instead of routing all the traffic along a single path. Two key questions that arise in multipath routing are how many paths are needed and how to select these paths. Clearly, the number and the quality of the paths selected dictate the performance of a multipath routing scheme. We address these issues in the context of the proportional routing paradigm where the traffic is proportioned among a few "good" paths instead of routing it all along the "best" path. We propose a hybrid approach that uses both globally exchanged link state metrics -- to identify a set of good paths, and locally collected path state metrics -- for proportioning traffic among the selected paths. We compare the performance of our approach with that of global optimal proportioning and show that the proposed approach yields near-optimal performance using only a few paths. We also demonstrate that the proposed scheme yields much higher throughput with much smaller overhead compared to other schemes based on link state updates.

106 citations
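
As a rough illustration of the proportional-routing idea summarized above, the sketch below splits calls over a fixed set of candidate paths and periodically shifts a small share of traffic away from the path with the highest locally observed blocking. The path names, blocking probabilities, and the deliberately naive adaptation rule are hypothetical; they convey the flavor of locally adaptive proportioning, not the authors' actual scheme.

```python
import random

class ProportionalRouter:
    """Toy proportional routing over a pre-selected set of candidate paths."""

    def __init__(self, paths, shift=0.05):
        self.paths = list(paths)
        self.share = {p: 1.0 / len(self.paths) for p in self.paths}   # traffic proportions
        self.offered = {p: 0 for p in self.paths}
        self.blocked = {p: 0 for p in self.paths}
        self.shift = shift                                            # fraction moved per update

    def pick_path(self):
        r, acc = random.random(), 0.0
        for p in self.paths:
            acc += self.share[p]
            if r <= acc:
                return p
        return self.paths[-1]

    def record(self, path, blocked):
        self.offered[path] += 1
        self.blocked[path] += int(blocked)

    def adapt(self):
        # Shift a small fraction of traffic from the worst to the best measured path.
        candidates = [p for p in self.paths if self.offered[p] > 0]
        if len(candidates) < 2:
            return
        rate = {p: self.blocked[p] / self.offered[p] for p in candidates}
        worst = max(candidates, key=rate.get)
        best = min(candidates, key=rate.get)
        delta = min(self.shift, self.share[worst])
        self.share[worst] -= delta
        self.share[best] += delta
        self.offered = {p: 0 for p in self.paths}
        self.blocked = {p: 0 for p in self.paths}

BLOCKING = {"p1": 0.05, "p2": 0.20, "p3": 0.40}   # hypothetical per-path blocking
router = ProportionalRouter(BLOCKING)
for epoch in range(20):
    for _ in range(500):
        p = router.pick_path()
        router.record(p, blocked=random.random() < BLOCKING[p])
    router.adapt()
print(router.share)
```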


Book ChapterDOI
John Wilkes1
06 Jun 2001
TL;DR: Rome as mentioned in this paper is an information model that the Storage Systems Program at HP Labs developed to address the need to represent storage system QoS in many guises: the goals (service level requirements) for the storage system, predictions for the design that results, enforcement constraints for the runtime system to guarantee, and observations made of the system as it runs.
Abstract: The design and operation of very large-scale storage systems is an area ripe for application of automated design and management techniques - and at the heart of such techniques is the need to represent storage system QoS in many guises: the goals (service level requirements) for the storage system, predictions for the design that results, enforcement constraints for the runtime system to guarantee, and observations made of the system as it runs. Rome is the information model that the Storage Systems Program at HP Laboratories has developed to address these needs. We use it as an "information bus" to tie together our storage system design, configuration, and monitoring tools. In 5 years of development, Rome is now on its third iteration; this paper describes its information model, with emphasis on the QoS-related components, and presents some of the lessons we have learned over the years in using it.

83 citations


Book ChapterDOI
06 Jun 2001
TL;DR: A novel algorithm for buffer management and packet scheduling is presented for providing loss and delay differentiation for traffic classes at a network router, without assuming admission control or policing.
Abstract: A novel algorithm for buffer management and packet scheduling is presented for providing loss and delay differentiation for traffic classes at a network router. The algorithm, called JoBS (Joint Buffer Management and Scheduling), provides delay and loss differentiation independently at each node, without assuming admission control or policing. The novel capabilities of the proposed algorithm are that (1) scheduling and buffer management decisions are performed in a single step, and (2) both relative and (whenever possible) absolute QoS requirements of classes are supported. Numerical simulation examples, including results for a heuristic approximation, are presented to illustrate the effectiveness of the approach and to compare the new algorithm to existing methods for loss and delay differentiation.

71 citations
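
The toy queue below illustrates the kind of relative loss and delay differentiation JoBS aims for, but it is not the JoBS algorithm itself: service follows waiting-time priority scaled by per-class delay weights, and overflow drops go to the class furthest below its loss-ratio target. The weights, buffer size, and arrival pattern are hypothetical.

```python
import collections, random

DELAY_WEIGHT = {0: 1.0, 1: 2.0, 2: 4.0}   # hypothetical: class 0 most delay-sensitive
LOSS_WEIGHT = {0: 1.0, 1: 2.0, 2: 4.0}    # hypothetical relative loss-rate targets
BUFFER = 20

queues = {c: collections.deque() for c in DELAY_WEIGHT}
arrivals = {c: 0 for c in DELAY_WEIGHT}
drops = {c: 0 for c in DELAY_WEIGHT}

def enqueue(cls, now):
    arrivals[cls] += 1
    if sum(len(q) for q in queues.values()) >= BUFFER:
        # Drop from the class whose normalized loss ratio is lowest (it "owes" losses).
        victim = min((c for c in queues if queues[c]),
                     key=lambda c: (drops[c] / arrivals[c]) / LOSS_WEIGHT[c])
        queues[victim].pop()          # drop from the tail of the victim class
        drops[victim] += 1
    queues[cls].append(now)           # remember arrival time for delay priority

def dequeue(now):
    backlogged = [c for c in queues if queues[c]]
    if not backlogged:
        return None
    # Waiting-time priority: largest weighted head-of-line waiting time goes first.
    cls = max(backlogged, key=lambda c: (now - queues[c][0]) / DELAY_WEIGHT[c])
    return cls, queues[cls].popleft()

for t in range(200):
    enqueue(random.choice([0, 1, 2]), t)
    if t % 2:
        dequeue(t)
print({c: drops[c] / max(1, arrivals[c]) for c in drops})
```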


Book ChapterDOI
06 Jun 2001
TL;DR: This paper analyzes and compares four different mechanisms for providing QoS in IEEE 802.11 wireless LANs and shows that PCF performs badly, and that Blackburst has the best performance with regard to the above metrics.
Abstract: This paper analyzes and compares four different mechanisms for providing QoS in IEEE 802.11 wireless LANs. We have evaluated the IEEE 802.11 mode for service differentiation (PCF), Distributed Fair Scheduling, Blackburst, and a scheme proposed by Deng et al. using the ns-2 simulator. The evaluation covers medium utilization, access delay, and the ability to support a large number of high priority mobile stations. Our simulations show that PCF performs badly, and that Blackburst has the best performance with regard to the above metrics. An advantage of the Deng scheme and Distributed Fair Scheduling is that they are less constrained than Blackburst with regard to the characteristics of high priority traffic.

59 citations


Book ChapterDOI
06 Jun 2001
TL;DR: This paper extends prior work on edge provisioning to interior nodes and core networks including algorithms for: (i) dynamic node provisioning and (ii) dynamic core provisioning, demonstrating through analysis and simulation that the model is capable of delivering capacity provisioning in an efficient manner.
Abstract: Efficient network provisioning mechanisms supporting service differentiation and automatic capacity dimensioning are important for the realization of a differentiated service Internet. In this paper, we extend our prior work on edge provisioning [7] to interior nodes and core networks including algorithms for: (i) dynamic node provisioning and (ii) dynamic core provisioning. The dynamic node provisioning algorithm prevents transient violations of service level agreements by self-adjusting per-scheduler service weights and packet dropping thresholds at core routers, reporting persistent service level violations to the core provisioning algorithm. The dynamic core provisioning algorithm dimensions traffic aggregates at the network ingress taking into account fairness issues not only across different traffic aggregates, but also within the same aggregate whose packets take different routes in a core IP network. We demonstrate through analysis and simulation that our model is capable of delivering capacity provisioning in an efficient manner providing quantitative delay-bounds with differentiated loss across per-aggregate service classes.

44 citations
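
A minimal sketch of a dynamic node provisioning loop in the spirit described above, assuming hypothetical delay bounds and a naive weight-borrowing rule; persistent violations are escalated through a placeholder report callback standing in for the core provisioning algorithm.

```python
# Toy dynamic node provisioning loop (illustrative, not the paper's algorithm).
DELAY_BOUND = {"EF": 5.0, "AF": 20.0, "BE": float("inf")}   # ms, hypothetical bounds
weights = {"EF": 0.5, "AF": 0.3, "BE": 0.2}                 # per-scheduler service weights
violations = {c: 0 for c in weights}
STEP, PERSIST = 0.02, 3                                      # weight shift per interval, report threshold

def load(cls, measured):
    bound = DELAY_BOUND[cls]
    return 0.0 if bound == float("inf") else measured[cls] / bound

def adjust(measured, report):
    """One control interval: rebalance weights and report persistent violations."""
    for cls in weights:
        if measured[cls] <= DELAY_BOUND[cls]:
            violations[cls] = 0
            continue
        violations[cls] += 1
        donor = min((c for c in weights if c != cls), key=lambda c: load(c, measured))
        delta = min(STEP, weights[donor])
        weights[donor] -= delta              # borrow service weight from the least loaded class
        weights[cls] += delta
        if violations[cls] >= PERSIST:
            report(cls)                      # escalate to core provisioning (not modeled here)

for _ in range(3):
    adjust({"EF": 7.2, "AF": 12.0, "BE": 40.0},
           report=lambda c: print("persistent violation in class", c))
print(weights)
```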


Book ChapterDOI
06 Jun 2001
TL;DR: The results show that PMP in a single-provider scenario can be profitable to the provider, both when users must use the system and when they may opt out.
Abstract: As the diversity of Internet applications increases, so does the need for a variety of quality-of-service (QoS) levels on the network. The Paris Metro Pricing (PMP) strategy uses pricing as a tool to implement network resource allocation for QoS assurance; PMP is simple, self-regulating, and does not require significant communications or bandwidth overhead. In this paper, we develop an analytic model for PMP. We first assume that the network service provider is a single constrained monopolist and users must participate in the network; we model the resultant consumer behavior and the provider's profit. We then relax the restriction that users must join the network, allowing them to opt-out, and derive the critical QoS thresholds for a profit-maximizing service provider. Our results show that PMP in a single-provider scenario can be profitable to the provider, both when users must use the system and when they may opt out.

44 citations


Book ChapterDOI
06 Jun 2001
TL;DR: The proposed multiple QoS path computation algorithm searches for maximally disjoint multiple paths such that the impact of link/node failures becomes significantly reduced, and the use of multiple paths renders QoS services more robust in unreliable network conditions.
Abstract: The paper presents approaches for fault tolerance and load balancing in QoS provisioning using multiple alternate paths. The proposed multiple QoS path computation algorithm searches for maximally disjoint (i.e., minimally overlapped) multiple paths such that the impact of link/node failures becomes significantly reduced, and the use of multiple paths renders QoS services more robust in unreliable network conditions. The algorithm is not limited to finding fully disjoint paths. It also exploits partially disjoint paths by carefully selecting and retaining common links in order to produce more options. Moreover, it offers the benefits of load balancing in normal operating conditions by deploying appropriate call allocation methods according to traffic characteristics. In all cases, all the computed paths must satisfy given multiple QoS constraints. Simulation experiments with IP Telephony service illustrate the fault tolerance and load balancing features of the proposed scheme.

37 citations
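
One common way to compute maximally disjoint (minimally overlapped) paths is to re-run a shortest-path search while penalizing links already used by earlier paths. The sketch below takes that generic greedy approach; it is not the paper's algorithm, which additionally enforces multiple QoS constraints on every path.

```python
import heapq

def dijkstra(graph, src, dst, cost):
    """Shortest path by link cost; graph: {node: {neighbor: base_cost}}."""
    dist, prev, seen = {src: 0.0}, {}, set()
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, w in graph[u].items():
            nd = d + cost(u, v, w)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

def maximally_disjoint_paths(graph, src, dst, k=3, penalty=10.0):
    """Greedy heuristic: penalize reused links so later paths overlap as little as possible."""
    used, paths = set(), []
    for _ in range(k):
        cost = lambda u, v, w: w + (penalty if (u, v) in used or (v, u) in used else 0.0)
        p = dijkstra(graph, src, dst, cost)
        if p is None or p in paths:
            break
        paths.append(p)
        used.update(zip(p, p[1:]))
    return paths

g = {"s": {"a": 1, "b": 1}, "a": {"c": 1, "b": 1}, "b": {"c": 1},
     "c": {"t": 1}, "t": {}}
print(maximally_disjoint_paths(g, "s", "t"))
```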


Book ChapterDOI
06 Jun 2001
TL;DR: A simple analytical model is developed and extensive trace-driven simulations are performed to explore the efficacy of aggregation under a broad class of factors, finding that a simple single-time-scale model with random noise can capture the essential behavior of surprisingly complex scenarios.
Abstract: The IETF's Integrated Services (IntServ) architecture together with reservation aggregation provide a mechanism to support the quality-of-service demands of real-time flows in a scalable way, i.e., without requiring that each router be signaled with the arrival or departure of each new flow for which it will forward data. However, reserving resources in "bulk" implies that the reservation will not precisely match the true demand. Consequently, if the flows' demanded bandwidth varies rapidly and dramatically, aggregation can incur significant performance penalties of under-utilization and unnecessarily rejected flows. On the other hand, if demand varies moderately and at slower time scales, aggregation can provide an accurate and scalable approximation to IntServ. In this paper, we develop a simple analytical model and perform extensive trace-driven simulations to explore the efficacy of aggregation under a broad class of factors. Example findings include (1) a simple single-time-scale model with random noise can capture the essential behavior of surprisingly complex scenarios; (2) with a two-order-of-magnitude separation between the dominant time scale of demand and the time scale of signaling and moderate levels of secondary noise, aggregation achieves performance that closely approximates that of IntServ.

35 citations


Book ChapterDOI
01 Jan 2001
TL;DR: In this paper, the concept of differentiated reliability (DiR) is introduced and applied to provide multiple reliability degrees (classes) in the same network layer using a common protection mechanism, i.e., path switching.
Abstract: Current optical networks typically offer two degrees of service reliability: full protection in presence of a single fault in the network, and no protection at all. This situation reflects the historical duality that has its roots in the once divided telephone and data environment. The circuit oriented service required protection, i.e., provisioning of readily available spare resources to replace working resources in case of a fault. The datagram oriented service relied upon restoration, i.e., dynamic search for and reallocation of affected resources via such actions as routing table updates. The current development trend, however, is gradually driving the design of networks towards a unified solution that will jointly support traditional voice and data services as well as a variety of novel multimedia applications. The growing importance of concepts such as Quality of Service (QoS) and Differentiated Services, which provide varying levels of service performance in the same network, evidences this trend. Consistent with this pattern, the novel concept of Differentiated Reliability (DiR) is formally introduced in the paper and applied to provide multiple reliability degrees (classes) in the same network layer using a common protection mechanism, i.e., path switching. According to the DiR concept, each connection in the layer under consideration is guaranteed a minimum reliability degree, defined as the Maximum Failure Probability allowed for that connection. The reliability degree chosen for a given connection is thus determined by the application requirements, and not by the actual network topology, design constraints, robustness of the network components, or span of the connection. An efficient algorithm is proposed to design the Wavelength Division Multiplexing (WDM) layer of a DiR ring.

34 citations


Book ChapterDOI
Baochun Li1
06 Jun 2001
TL;DR: This paper proposes a fully distributed and adaptive algorithm to provide statistical QoS guarantees with respect to accessibility of services in an ad-hoc network, and theoretically derive the lower and upper bounds of service efficiency based on a novel model for group mobility.
Abstract: Ad-hoc wireless networks consist of mobile nodes interconnected by multi-hop wireless paths. Unlike conventional wireless networks, ad-hoc networks have no fixed network infrastructure or administrative support. Because of the dynamic nature of the network topology and limited bandwidth of wireless channels, Quality-of-Service (QoS) provisioning is an inherently complex and difficult issue. In this paper, we propose a fully distributed and adaptive algorithm to provide statistical QoS guarantees with respect to accessibility of services in an ad-hoc network. In this algorithm, we focus on the optimization of a new QoS parameter of interest, service efficiency, while keeping protocol overheads to the minimum. To achieve this goal, we first theoretically derive the lower and upper bounds of service efficiency based on a novel model for group mobility, followed by extensive simulation results to verify the effectiveness of our algorithm.

32 citations


Book ChapterDOI
06 Jun 2001
TL;DR: This paper defines a loss-rate estimator based on average drop distances (ADDs), and shows that a PLR dropper using the ADD estimator can be implemented efficiently and gives more predictable PLR differentiation than the LHT estimator.
Abstract: Recent extensions to the Internet architecture allow assignment of different levels of drop precedence to IP packets. This paper examines differentiation predictability and implementation complexity in creation of proportional loss-rate (PLR) differentiation between drop precedence levels. PLR differentiation means that fixed loss-rate ratios between different traffic aggregates are provided independent of traffic loads. To provide such differentiation, running estimates of loss-rates can be used as feedback to keep loss-rate ratios fixed at varying traffic loads. In this paper, we define a loss-rate estimator based on average drop distances (ADDs). The ADD estimator is compared with an estimator that uses a loss history table (LHT) to calculate loss-rates. We show, through simulations, that the ADD estimator gives more predictable PLR differentiation than the LHT estimator. In addition, we show that a PLR dropper using the ADD estimator can be implemented efficiently.
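
The sketch below gives one plausible reading of an average-drop-distance (ADD) estimator, where the loss rate is roughly the reciprocal of the average number of packets between consecutive drops, combined with a proportional loss-rate victim-selection rule. The EWMA smoothing and the target ratios are assumptions, not the paper's exact definitions.

```python
class AddEstimator:
    """Loss-rate estimate as the reciprocal of the average drop distance (ADD)."""

    def __init__(self, alpha=0.1):
        self.alpha = alpha              # EWMA gain (assumed)
        self.add = None                 # average number of packets between drops
        self.since_last_drop = 0

    def on_packet(self):
        self.since_last_drop += 1

    def on_drop(self):
        d = self.since_last_drop
        self.add = d if self.add is None else (1 - self.alpha) * self.add + self.alpha * d
        self.since_last_drop = 0

    def loss_rate(self):
        return 0.0 if not self.add else 1.0 / self.add

# Proportional loss-rate dropping across drop-precedence levels: when a drop is
# needed, pick the level whose measured loss rate is furthest below its target ratio.
TARGET = {0: 1.0, 1: 2.0, 2: 4.0}       # desired loss-rate ratios (hypothetical)
est = {lvl: AddEstimator() for lvl in TARGET}

def choose_victim(backlogged_levels):
    return min(backlogged_levels, key=lambda l: est[l].loss_rate() / TARGET[l])

for lvl in TARGET:
    for _ in range(100):
        est[lvl].on_packet()
    est[lvl].on_drop()
print(choose_victim([0, 1, 2]))         # level 2 owes the most loss relative to its target
```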

Book ChapterDOI
06 Jun 2001
TL;DR: This paper systematically derives a quantitative model of how to set the parameters of the RED queue management algorithm as a function of the scenario parameters: bottleneck bandwidth, round-trip time, and number of TCP flows.
Abstract: This paper systematically derives a quantitative model of how to set the parameters of the RED queue management algorithm as a function of the scenario parameters: bottleneck bandwidth, round-trip time, and number of TCP flows. It is shown that proper setting of RED parameters is a necessary condition for stability, i.e., to ensure convergence of the queue size to a desired equilibrium state and to limit oscillation around this equilibrium. The model provides the correct parameter settings, as illustrated by simulations and measurements with FTP and Web-like TCP flows in scenarios with homogeneous and heterogeneous round-trip times.
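
For a flavor of how RED parameters can be tied to bandwidth, round-trip time, and flow count, the sketch below uses the well-known square-root TCP throughput model to estimate the equilibrium drop probability and derives thresholds from the bandwidth-delay product. The constants and rules of thumb are illustrative assumptions, not the equations derived in the paper.

```python
def red_parameters(capacity_bps, rtt_s, n_flows, mss_bytes=1460):
    """Illustrative RED tuning from scenario parameters (not the paper's model)."""
    per_flow_bps = capacity_bps / n_flows
    # Square-root TCP model: rate ~ MSS / (RTT * sqrt(2p/3)); solve for the drop rate p.
    p = 1.5 * (mss_bytes * 8 / (per_flow_bps * rtt_s)) ** 2
    bdp_pkts = capacity_bps * rtt_s / (mss_bytes * 8)        # bandwidth-delay product in packets
    target_queue = 0.5 * bdp_pkts                            # assumed operating point: half a BDP
    min_th = max(5, target_queue / 2)
    max_th = 3 * min_th                                      # common rule of thumb
    max_p = min(0.5, 2 * p)                                  # headroom around the equilibrium drop rate
    return {"min_th": round(min_th), "max_th": round(max_th), "max_p": round(max_p, 4)}

print(red_parameters(capacity_bps=100e6, rtt_s=0.1, n_flows=100))
```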

Book ChapterDOI
01 Jan 2001
TL;DR: This paper designs and implements a novel architecture and admission control algorithm termed Egress Admission Control, and describes the implementation of the scheme on a network of prototype routers enhanced with ingress-egress path monitoring and edge admission control.
Abstract: While the IntServ solution to Internet QoS can achieve a strong service model that guarantees flow throughputs and loss rates, it places excessive burdens on high-speed core routers to signal, schedule, and manage state for individual flows. Alternatively, the DiffServ solution achieves scalability via aggregate control, yet cannot ensure a particular QoS to individual flows. To simultaneously achieve scalability and a strong service model, we have designed and implemented a novel architecture and admission control algorithm termed Egress Admission Control. In our approach, the available service on a network path is passively monitored, and admission control is performed only at egress nodes, incorporating the effects of cross traffic with implicit measurements rather than with explicit signaling. In this paper, we describe our implementation of the scheme on a network of prototype routers enhanced with ingress-egress path monitoring and edge admission control. We report the results of testbed experiments and demonstrate the feasibility of an edge-based architecture for providing IntServ-like services in a scalable way.
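
A minimal sketch of measurement-based admission at an egress node, assuming an EWMA estimator of the path's aggregate arrival rate and available service rate plus a simple utilization margin; the estimator and the threshold are placeholders, not the paper's admission test.

```python
class EgressAdmission:
    """Toy egress admission control from passively measured path statistics."""

    def __init__(self, margin=0.9, alpha=0.2):
        self.alpha = alpha                 # EWMA gain for path measurements (assumed)
        self.arrival_rate = 0.0            # measured aggregate arrival rate (bps)
        self.service_rate = 0.0            # measured available service on the path (bps)
        self.margin = margin               # utilization target (assumed)

    def update(self, measured_arrivals_bps, measured_service_bps):
        a = self.alpha
        self.arrival_rate = (1 - a) * self.arrival_rate + a * measured_arrivals_bps
        self.service_rate = (1 - a) * self.service_rate + a * measured_service_bps

    def admit(self, requested_bps):
        # Admit only if the new flow fits under the measured service with headroom.
        return self.arrival_rate + requested_bps <= self.margin * self.service_rate

ac = EgressAdmission()
for arrivals, service in [(2e6, 8e6), (3e6, 9e6), (4e6, 9e6)]:
    ac.update(arrivals, service)
print(ac.admit(1e6))
```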

Book ChapterDOI
06 Jun 2001
TL;DR: Qualitative and experimental research demonstrates that future network service must be based on an old principle: service and its associated cost must represent value in terms of the contribution it makes to customers' goals.
Abstract: To create acceptable levels of Quality of Service (QoS), designers need to be able to predict users' behaviour in response to different levels of QoS. However, predicting behaviour requires an understanding of users' requirements for specific tasks and contexts. This paper reports qualitative and experimental research demonstrating that future network service must be based on an old principle: service and its associated cost must represent value in terms of the contribution it makes to customers' goals. Human Computer Interaction (HCI) methods can be applied to identify users' goals and associated QoS requirements. First, we used a qualitative approach to establish the mental concepts that users apply when assessing network services and charges. The subsequent experimental study shows that users require certain types of feedback at the user interface to predict future levels of quality. Price alone cannot be used to regulate demand for QoS.

Book ChapterDOI
01 Jan 2001
TL;DR: The proposed methodology stems from a Markovian model of a single TCP source, and eventually considers the superposition and interaction of several such sources using standard queueing analysis techniques, and allows the evaluation of such performance indices as throughput, queueing delay and segment loss of TCP flows.
Abstract: In this paper, we outline a methodology that can be applied to model the behavior of TCP Reno flows. The proposed methodology stems from a Markovian model of a single TCP source, and eventually considers the superposition and interaction of several such sources using standard queueing analysis techniques. Our approach allows the evaluation of such performance indices as throughput, queueing delay and segment loss of TCP flows. The results obtained through our model are validated by means of simulation, under different traffic settings.

Book ChapterDOI
06 Jun 2001
TL;DR: This work presents the novel concept of a conditionally guaranteed budget (CGB) for media processing in software, and a feasible extension of the budget scheduler with CGBs is briefly described.
Abstract: Media processing in software enables consumer terminals to become open and flexible. Because consumer products are heavily resource constrained, this processing is required to be cost-effective. Our QoS approach aims at cost-effective media processing in software. QoS resource management is based on multilevel control, corresponding to different time-horizons, and resource allocation below worst-case using periodic budgets provided by a budget scheduler. Multilevel control combined with budgets below worst-case gives rise to a problem related to user focus. Upon a sudden increase in load of an application with user focus, its output will have a quality dip. To resolve this user focus problem, we present the novel concept of a conditionally guaranteed budget (CGB). A feasible extension of our budget scheduler with CGBs is briefly described.

Book ChapterDOI
06 Jun 2001
TL;DR: PRTP-ECN is a protocol designed to be both TCP-friendly and to better comply with the QoS requirements of applications with soft real-time constraints, achieved by trading reliability for better jitter characteristics and improved throughput.
Abstract: The introduction of multimedia in the Internet imposes new QoS requirements on existing transport protocols. Since neither TCP nor UDP comply with these requirements, a common approach today is to use RTP/UDP and to relegate the QoS responsibility to the application. Even though this approach has many advantages, it also entails leaving the responsibility for congestion control to the application. Considering the importance of efficient and reliable congestion control for maintaining stability in the Internet, this approach may prove dangerous. Improved support at the transport layer is therefore needed. In this paper, a partially reliable transport protocol, PRTP-ECN, is presented. PRTP-ECN is a protocol designed to be both TCP-friendly and to better comply with the QoS requirements of applications with soft real-time constraints. This is achieved by trading reliability for better jitter characteristics and improved throughput. A simulation study of PRTP-ECN has been conducted. The outcome of this evaluation suggests that PRTP-ECN can give applications that tolerate a limited amount of packet loss significant reductions in interarrival jitter and improvements in throughput as compared to TCP. The simulations also verified the TCP-friendly behavior of PRTP-ECN.

Book ChapterDOI
06 Jun 2001
TL;DR: This work presents and evaluates two experimental extensions to RSVP in terms of protocol specification and implementation, and aims at developing an integrated protocol suite, initially in the framework set by RSVP.
Abstract: We present and evaluate two experimental extensions to RSVP in terms of protocol specification and implementation. These extensions are targeted at apparent shortcomings of RSVP to carry out lightweight signalling for end systems. Instead of specifying new protocols, our approach in principle aims at developing an integrated protocol suite, initially in the framework set by RSVP. This work is based on our experience on implementing and evaluating the basic RSVP specification. The extensions will be incorporated in the next public release of our open source software.

Book ChapterDOI
06 Jun 2001
TL;DR: This paper proposes small modifications to the standard Internet resource reservation protocol, RSVP, so that initial resource reservations and re-reservations due to terminal mobility can often be done locally in an access network.
Abstract: Guaranteed QoS for multimedia applications is based on reserved resources in each intermediate node on the whole end-to-end path. This can be achieved more effectively for stationary nodes than for mobile nodes. Many multimedia applications become useless if the continuity is disturbed due to end-to-end or slow re-reservations of resources each time a mobile node moves so that its point-of-presence in the IP network changes. Additionally, due to lack of QoS support from the correspondent node, mobile nodes would need a way to reserve at least local resources, especially wireless link resources. This paper proposes small modifications to the standard Internet resource reservation protocol, RSVP, so that initial resource reservations and re-reservations due to terminal mobility can often be done locally in an access network. This is clearly a significant improvement to the current RSVP.

Book ChapterDOI
06 Jun 2001
TL;DR: It is shown that the optimal marking strategy depends on the level of congestion on the reverse path and the strategy leading to optimal overall performance is to copy the mark from the respective data packet into returned acknowledgement packets, provided that the affected service class is appropriately provisioned.
Abstract: In the context of networks offering Differentiated Services (DiffServ), we investigate the effect of acknowledgment treatment on the throughput of TCP connections. We carry out experiments on a testbed offering three classes of service (Premium, Assured and Best-Effort), and different levels of congestion on the data and acknowledgment path. We apply a full factorial statistical design and deduce that treatment of TCP data packets is not sufficient and that acknowledgment treatment on the reverse path is a necessary condition to reach the targeted performance in DiffServ efficiently. We find that the optimal marking strategy depends on the level of congestion on the reverse path. In the practical case where Internet Service Providers cannot obtain such information in order to mark acknowledgment packets, we show that the strategy leading to optimal overall performance is to copy the mark from the respective data packet into returned acknowledgement packets, provided that the affected service class is appropriately provisioned.

Book ChapterDOI
06 Jun 2001
TL;DR: A novel scheduling algorithm, Duplicate Scheduling with Deadlines (DSD), is presented; it allows interactive, adaptive applications that mark their packets green to receive a low bounded delay at the expense of possibly lower throughput.
Abstract: We present a novel scheduling algorithm, Duplicate Scheduling with Deadlines (DSD). This algorithm implements the ABE service [5], which allows interactive, adaptive applications that mark their packets green to receive a low bounded delay at the expense of possibly lower throughput. ABE retains the best-effort context by protecting flows that value higher throughput more than low bounded delay, whose packets are marked blue. DSD optimises green traffic performance while satisfying the constraint that blue traffic must not be adversely affected. Using a virtual queue, deadlines are assigned to packets upon arrival, and green and blue packets are queued separately. At service time, the deadlines of the packets at the head of the blue and green queues are used to determine which one to serve next. DSD supports any mixture of TCP, TCP-friendly, and non-TCP-friendly traffic. We motivate, describe, and provide an analysis of DSD, and show simulation results.
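
The toy below renders one plausible reading of the deadline mechanism: a virtual best-effort FIFO assigns every arriving packet the departure time it would have had without differentiation, and green packets are served ahead of blue only while the head-of-line blue deadline is not yet due. The virtual-queue model and tie-breaking are assumptions, not the DSD specification.

```python
import collections

LINK_RATE = 1.0                      # packets per time unit (assumed)
green_q, blue_q = collections.deque(), collections.deque()
virtual_finish = 0.0                 # departure time of the last packet in the virtual FIFO

def arrive(color, now):
    global virtual_finish
    # Deadline = departure time this packet would have had in a plain best-effort FIFO.
    virtual_finish = max(virtual_finish, now) + 1.0 / LINK_RATE
    (green_q if color == "green" else blue_q).append(virtual_finish)

def serve(now):
    # Serve green ahead of blue only while the blue head-of-line deadline is not yet due.
    if green_q and (not blue_q or blue_q[0] > now):
        return "green", green_q.popleft()
    if blue_q:
        return "blue", blue_q.popleft()
    if green_q:
        return "green", green_q.popleft()
    return None

for t in range(6):
    arrive("green" if t % 2 else "blue", t)
print(serve(3.0))                    # the overdue blue packet is served first
```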

Book ChapterDOI
Hans Domjan1, Thomas R. Gross1
06 Jun 2001
TL;DR: A simple extension to a processor management system that allows an application to reserve a share of the processor for a specified interval, targeted at applications with frequently changing resource demands or recurring, though non-periodic resource requests.
Abstract: The benefits of QoS network features are easily lost when the endnodes are managed by a conventional, best-effort operating system. Schedulers of such operating systems provide only rudimentary tools (like priority adjustment) for processor management. We present here a simple extension to a processor management system that allows an application to reserve a share of the processor for a specified interval. The system is targeted at applications with frequently changing resource demands or recurring, though non-periodic, resource requests. An example of such an application is a network-aware image search and retrieval system, but other network-aware client-server applications also fall into the same category. The admission control component of the processor management system decides if a resource request can be satisfied. To limit the amount of time spent negotiating with the operating system, the application can present a ranked list of acceptable reservations. The admission controller then picks the best request that can still be satisfied (using the Simplex linear programming algorithm to find the best solution). If there are insufficient resources, the application must deal with the shortage. Any possible adaptation (if the accepted request was not the application's first choice) is left to the application. The processor management system has been implemented for NetBSD and ported to Linux, and the paper includes an evaluation of its effectiveness. The overhead is low, and although reservations are not guaranteed, in practical settings the application almost always obtains the cycles requested.
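
The fragment below sketches the ranked-reservation interface described above: the controller grants the first acceptable reservation that still fits. The real system solves the admission problem with a Simplex linear program; this toy only checks per-slot capacity, so the slot size and capacity model are assumptions.

```python
SLOT_MS = 10
reserved = {}                                   # slot index -> reserved CPU fraction

def feasible(share, start_ms, end_ms):
    slots = range(start_ms // SLOT_MS, end_ms // SLOT_MS)
    return all(reserved.get(s, 0.0) + share <= 1.0 for s in slots)

def admit(ranked_requests):
    """ranked_requests: list of (cpu_share, start_ms, end_ms), best first."""
    for share, start, end in ranked_requests:
        if feasible(share, start, end):
            for s in range(start // SLOT_MS, end // SLOT_MS):
                reserved[s] = reserved.get(s, 0.0) + share
            return share, start, end
    return None                                 # shortage: adaptation is left to the application

print(admit([(0.6, 0, 100), (0.4, 0, 100), (0.2, 0, 100)]))   # first choice fits
print(admit([(0.6, 0, 100), (0.4, 0, 100), (0.2, 0, 100)]))   # falls back to the second choice
```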

Book ChapterDOI
06 Jun 2001
TL;DR: It is found that for the business intranet of the study, integration without diffserv may need considerable over-provisioning depending on the fraction of real-time data in the network, and a first rule of thumb on provisioning a diffserv network for increasing real-time data is derived.
Abstract: The question of our study is how to provision a diffserv (differentiated service) intranet serving three classes of traffic, i.e., voice, real-time data (e.g. stock quotes), and best-effort data. Each class of traffic requires a different level of QoS (Quality of Service) guarantee. For VoIP the primary QoS requirements are delay and loss; for real-time data, response time. Given a network configuration and anticipated workload of a business intranet, we use ns-2 simulations to determine the minimum capacity requirements that dominate the total cost of the intranet. To ensure that it is worthwhile converging different traffic classes or deploying diffserv, we cautiously examine capacity requirements in three sets of experiments: three traffic classes in i) three dedicated networks, ii) one network without diffserv support, and iii) one network with diffserv support. We find that for the business intranet of our study, integration without diffserv may need considerable over-provisioning depending on the fraction of real-time data in the network. In addition, we observe significant capacity savings in the diffserv case; we thus conclude that deploying diffserv is advantageous. The relations we find give rise to, as far as we know, the first rule of thumb on provisioning a diffserv network for increasing real-time data.

Book ChapterDOI
01 Jan 2001
TL;DR: An adaptive protocol is proposed for cooperatively controlling mobile calls' transmitter power and rate, whereas previous work has focused on handling them separately.
Abstract: In a CDMA network, resource allocation is critical in order to provide suitable signal quality for each user and achieve channel efficiency. The third-generation mobile communication systems (ITU/IMT-2000) must be designed to support wideband services at bit rates as high as 2 Mbps, with the same quality as fixed networks. Mobiles' transmitted power has to be controlled to provide each user a reasonable connection while limiting the interference seen by other users. The transmission rate also has to be controlled to avoid congestion. An adaptive protocol is proposed for cooperatively controlling mobile calls' transmitter power and rate, whereas previous work has focused on handling them separately. The active component of this scheme is called Genetic Algorithm for Mobiles Equilibrium (GAME). Based on an evolutionary computational model, the base station tries to achieve an adequate equilibrium between its users. Thereby, each mobile can send its traffic with a suitable power to support it over the different path losses and interference. In the meantime, its battery life is preserved while limiting the interference seen by neighbors. A significant enhancement in signal quality and power level has been observed through several experiments.

Book ChapterDOI
Joachim Charzinski1
06 Jun 2001
TL;DR: Admission control for elastic traffic on a per-TCP-connection basis is problematic in the context of Web traffic because it is the variance in connection volumes, rather than the connection arrival rate, that causes most overload situations.
Abstract: Admission control for elastic traffic has been advocated in order to maintain performance (i.e. ensure a minimum bandwidth) for each admitted flow and to avoid unnecessary traffic in the network due to retransmissions of packets or even whole transfers after a temporary overload situation. This paper indicates why admission control for elastic traffic on a per-TCP-connection basis is problematic in the context of Web traffic: (i) A TCP connection is not equivalent to a transfer. (ii) It is the variance in connection volumes rather than the connection arrival rate that causes most overload situations. (iii) From an application point of view, the target of maintaining performance for admitted flows under high offered load is not met.

Book ChapterDOI
01 Jan 2001
TL;DR: The main goal of the paper is to identify solutions which provide QoS guarantees without requiring per flow processing in the core routers (as is commonly done in IntServ solutions) and which are thus scalable.
Abstract: In this paper we propose a DiffServ architecture for the support of real-time traffic (e.g., video) with QoS constraints (e.g., bandwidth and delay) over an IP domain. The main goal of the paper is to identify solutions which provide QoS guarantees without requiring per-flow processing in the core routers (as is commonly done in IntServ solutions) and which are thus scalable. We propose, and evaluate through simulation, different approaches for call admission control (CAC) and resource allocation. These approaches are all consistent with the DiffServ model, but place different processing and signaling loads on edge and core routers. Paths are computed by means of a QoS routing algorithm, Q-OSPF, and MPLS is used to handle explicit routing and class separation.

Book ChapterDOI
06 Jun 2001
TL;DR: This paper proposes a method to propagate QoS information in bidirectional multicast trees to enable better QoS-aware path selection decisions and proposes an alternative "join point" search strategy that would introduce much less control overhead utilizing the root-based feature of the MASC/BGMP inter-domain multicast architecture.
Abstract: QoS support poses new challenges to multicast routing, especially for inter-domain multicast, where network QoS characteristics will not be as readily available as in intra-domain multicast. Several existing proposals attempt to build QoS-sensitive multicast trees by providing multiple joining paths for a new member using a flooding-based search strategy, which has the drawback of excessive overhead and sometimes cannot determine which join path is QoS-feasible. In this paper, we first propose a method to propagate QoS information in bidirectional multicast trees to enable better QoS-aware path selection decisions. We then propose an alternative "join point" search strategy that introduces much less control overhead by utilizing the root-based feature of the MASC/BGMP inter-domain multicast architecture. Simulation results show that this strategy is as effective as the flooding-based search strategy in finding alternative join points for a new member, but with much less overhead. We also discuss extensions to BGMP to incorporate our strategies to enable QoS support.

Book ChapterDOI
01 Jan 2001
TL;DR: This paper presents two enhancements for WRR schedulers which solve the problems of burstiness and superimposition of a hierarchical structure, and defines an implementation of aWRR scheduler that substantially reduces the service burstiness with marginal additional complexity.
Abstract: Because of their minimal complexity, Weighted Round Robin (WRR) schedulers have become a popular solution for providing bandwidth guarantees to IP flows in emerging networks that support differentiated services. The introduction of applications that require flexible bandwidth management puts emphasis on hierarchical scheduling structures, where bandwidth can be allocated not only to individual flows, but also to aggregations of those flows. With existing WRR schedulers, the superimposition of a hierarchical structure compromises the simplicity of the basic scheduler. Another undesirable characteristic of existing WRR schedulers is their burstiness in distributing service to the flows. In this paper, we present two enhancements for WRR schedulers which solve these problems. In the first enhancement, we superimpose a hierarchical structure by simply redefining the way the WRR scheduler computes the timestamps of the flows. This "soft" hierarchy has negligible complexity, since it does not require any additional scheduling layer, yet is highly effective. The second enhancement defines an implementation of a WRR scheduler that substantially reduces the service burstiness with marginal additional complexity.
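
As a generic illustration of how timestamps can smooth WRR service, the sketch below advances a per-flow timestamp by the reciprocal of its weight on every service and always serves the smallest timestamp, which interleaves service in proportion to the weights. This is a textbook smoothing trick, not the specific enhancements defined in the paper.

```python
import heapq

def smooth_wrr(weights, rounds):
    """Serve flows in timestamp order; each service advances the flow by 1/weight."""
    heap = [(1.0 / w, flow) for flow, w in weights.items()]
    heapq.heapify(heap)
    order = []
    for _ in range(rounds):
        ts, flow = heapq.heappop(heap)
        order.append(flow)
        heapq.heappush(heap, (ts + 1.0 / weights[flow], flow))
    return order

# Flows receive service proportional to weight, but interleaved rather than in per-flow bursts.
print(smooth_wrr({"A": 4, "B": 2, "C": 1}, rounds=14))
```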

Book ChapterDOI
01 Jan 2001
TL;DR: End-to-end measurements taken over a low-priority probing packet stream are an effective and robust way to guarantee bandwidth and delay for real-time services characterized by fast traffic dynamics, such as Voice over IP.
Abstract: Distributed end-to-end measurement-based connection admission control mechanisms have recently been proposed. The goal of these schemes is to provide tight QoS control on a per-connection basis by means of measurements taken by the edge nodes and priority-based forwarding procedures at internal nodes. Since the additional flow-handling procedures are implemented at the border routers and the forwarding mechanisms apply to flow aggregates only, the approach is fully scalable and compatible with the IETF Differentiated Services proposal. The aim of this paper is to propose specific schemes and to investigate the advantages and limits of the approach by analyzing the basic mechanisms and evaluating their performance. As a result, the paper shows that end-to-end measurements taken over a low-priority probing packet stream are an effective and robust way to guarantee bandwidth and delay for real-time services characterized by fast traffic dynamics, such as Voice over IP.
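
A toy endpoint admission test in the spirit of the probe-based schemes discussed above: a short burst of low-priority probes is sent along the path, and the flow is admitted only if probe loss and delay stay under thresholds. The probe count, thresholds, and the simulated network stand-in are all assumptions.

```python
import random

PROBES, LOSS_THRESH, DELAY_THRESH_MS = 100, 0.01, 50.0

def send_probe():
    """Stand-in for the network: returns (delivered, one_way_delay_ms)."""
    delivered = random.random() > 0.005
    return delivered, random.uniform(5.0, 30.0)

def probe_and_decide():
    delays, lost = [], 0
    for _ in range(PROBES):
        ok, delay = send_probe()
        if ok:
            delays.append(delay)
        else:
            lost += 1
    loss = lost / PROBES
    # Use the 95th-percentile probe delay as the delay criterion (assumed choice).
    p95 = sorted(delays)[int(0.95 * len(delays))] if delays else float("inf")
    return loss <= LOSS_THRESH and p95 <= DELAY_THRESH_MS

print("admit" if probe_and_decide() else "reject")
```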

Book ChapterDOI
01 Jan 2001
TL;DR: GRIP is a novel reservation paradigm that can be seamlessly applied to the existing Diffserv (and even legacy) Internet; although only a marginal increase in QoS is envisioned in these existing scenarios, GRIP opens a smooth migration path toward gradually improved QoS as routers in different domains are upgraded with better measurement-based admission decision criteria.
Abstract: Looking back at the many proposals that have appeared on the scene in recent years, a fundamental lesson to be learned is that their success or failure is strictly tied to their backward compatibility with existing infrastructures. In this paper, we consider the problem of providing explicit admission control decisions for QoS-aware services. We base the decision to admit a new flow on the successful and timely delivery, through the Internet, of probe packets independently generated by the end points. Our solution, called GRIP (Gauge&Gate Realistic Internet Protocol), is fully distributed and scalable, as admission control decisions are taken at the edge network nodes, and no coordination between routers, which are stateless and remain oblivious to individual flows, is required. The performance of GRIP is related to the capability of routers to locally take decisions about the degree of congestion in the network, and suitably block probe packets when congestion conditions are expected. The key message of this paper is that GRIP is a novel reservation paradigm that can be seamlessly applied to the existing Diffserv (and even legacy) Internet, although only a marginal increase in QoS is envisioned in these existing scenarios. Indeed, GRIP opens up a future smooth migration path toward gradually improved QoS, as routers in different domains are upgraded with better measurement-based admission decision criteria. The enabling factor is that router decision criteria are localized and do not involve any coordination. This guarantees that they can be enhanced without losing interoperability with installed devices.