
Showing papers presented at "International Workshop on Quality of Service in 1998"


Proceedings ArticleDOI
18 May 1998
TL;DR: A new switch scheduling algorithm called joined preferred matching (JPM) is proposed that improves Prabhakar and McKeown's results in two respects and lays the theoretical foundation for designing scalable high-speed CIOQ switches that can provide the same throughput and QoS as OQ switches, but require lower-speed internal memory.
Abstract: Combined input-output queueing switches (CIOQ) have better scaling properties than output queueing (OQ) switches. However, a CIOQ switch may have lower switch throughput and, more importantly, it is difficult to control delay in a CIOQ switch due to the existence of multiple queueing points. In this paper, we study the following problem: can a CIOQ switch be designed to behave identically to an OQ switch? B. Prabhakar and N. McKeown (1997) proposed an algorithm such that a CIOQ switch with an internal speedup of 4 can behave identically to an OQ switch with FIFO as the output queueing discipline. In this paper, we propose a new switch scheduling algorithm called joined preferred matching (JPM) that improves Prabhakar and McKeown's results in two respects. First, with JPM, the internal speedup needed for a CIOQ switch to achieve exact emulation of an OQ switch is only 2 instead of 4. Second, the result applies to OQ switches that employ a general class of output service disciplines, including FIFO and various fair queueing algorithms. This result lays the theoretical foundation for designing scalable high-speed CIOQ switches that can provide the same throughput and QoS as OQ switches, but require lower-speed internal memory.

143 citations


Proceedings ArticleDOI
18 May 1998
TL;DR: It is demonstrated that with relatively simple arbitration algorithms and a speedup that is independent of the switch size, it is possible to ensure delay guarantees which are comparable to those available for output-buffered switches.
Abstract: Investigates some issues related to providing QoS guarantees in input-buffered crossbars with speedup. We show that a speedup of 4 is sufficient to ensure 100% asymptotic throughput with any maximal matching algorithm employed by the arbiter. We present several algorithms which ensure different delay guarantees with a range of speedup values between 2 and 6. We demonstrate that with relatively simple arbitration algorithms and a speedup that is independent of the switch size, it is possible to ensure delay guarantees which are comparable to those available for output-buffered switches.
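To make the maximal-matching notion concrete, here is a minimal sketch of a greedy maximal matching over a virtual-output-queue request matrix; the first-fit visiting order is an illustrative assumption, not the arbiter analyzed in the paper.

```python
def maximal_match(requests):
    """Greedy maximal matching over a request matrix: requests[i][j] is
    True when input i has a cell queued for output j. On return, no
    unmatched input still requests an unmatched output (the matching is
    maximal, though not necessarily maximum)."""
    match = {}
    used_outputs = set()
    for i, row in enumerate(requests):
        for j, wants in enumerate(row):
            if wants and j not in used_outputs:
                match[i] = j
                used_outputs.add(j)
                break
    return match
```

A maximal matching can be as small as half a maximum matching, which is one intuition for why a modest internal speedup, independent of switch size, can recover full throughput.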

101 citations


Proceedings ArticleDOI
18 May 1998
TL;DR: The design of the utility-fair allocation scheme and the interaction between the centralized adaptation controller and a set of distributed adaptation handlers, which play a key role in intelligently responding to the time-varying channel capacity experienced over the air-interface are discussed.
Abstract: Adaptive quality-of-service (QoS) techniques can effectively respond to time-varying channel conditions found in wireless networks. In this paper, we assess the state-of-the-art in QoS adaptive wireless systems and argue for new adaptation techniques that are better suited to respond to application-specific adaptation needs. A QoS adaptive data link control model is presented that accounts for application-specific adaptation dynamics that include adaptation time scales and adaptation policies. A centralized adaptation controller employs a novel utility-fair bandwidth allocation scheme that supports the dynamic bandwidth needs of adaptive flows over a range of operating conditions. Three wireless service classes play an integral role in accommodating a wide variety of adaptation strategies. In this paper, we discuss the design of the utility-fair allocation scheme and the interaction between the centralized adaptation controller and a set of distributed adaptation handlers, which play a key role in intelligently responding to the time-varying channel capacity experienced over the air-interface.
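The utility-fair idea (give every adaptive flow the same utility level, at whatever bandwidth that flow needs to reach it) can be sketched as a search for a common utility level. The inverse-utility representation below is an illustrative assumption, not the paper's scheme.

```python
def utility_fair(inverse_utils, capacity, iters=60):
    """Utility-fair allocation sketch: binary-search for a common
    utility level u in [0, 1] such that the bandwidths needed to give
    every flow utility u just fill the link. inverse_utils[i](u) maps a
    utility level to the bandwidth flow i needs to reach it (assumed
    increasing in u)."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if sum(f(mid) for f in inverse_utils) <= capacity:
            lo = mid
        else:
            hi = mid
    return [f(lo) for f in inverse_utils]
```

Note that flows with steeper bandwidth-for-utility curves receive more bandwidth, which is the sense in which fairness is defined over utility rather than over raw rate.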

98 citations


Proceedings ArticleDOI
01 Mar 1998
TL;DR: This work proposes a task control model that uses digital control theory to rigorously model the dynamics of an adaptive system, enabling quantitative analysis of the stability and equilibrium of adaptive applications while simultaneously providing fairness guarantees to other applications in the system.
Abstract: In a distributed environment where multiple applications compete for and share a limited amount of system resources, applications tend to suffer from variations in resource availability and need to adapt their behavior to the resource variations of the system. We propose a task control model to rigorously model the dynamics of an adaptive system using digital control theory. With our task control model, we are able to quantitatively analyze the stability and equilibrium of the adaptive applications, while simultaneously providing fairness guarantees to other applications in the system. Our control algorithm has also been extended to cases where sufficient task state information is not observable. We show that even under these circumstances, our task control model can still be applied and our control algorithms yield stable and responsive behavior.
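As a minimal illustration of the control-theoretic view, the sketch below runs a discrete-time proportional controller; the paper's task control model is richer, so treat this purely as an assumption-laden toy.

```python
def adapt(demand, available, gain=0.5):
    """One step of a discrete-time proportional controller: move the
    application's resource demand toward observed availability. The
    error shrinks by a factor |1 - gain| per step, so the loop is
    stable for 0 < gain < 2."""
    return demand + gain * (available - demand)

# drive an initial demand of 2.0 toward a fixed availability of 8.0
x = 2.0
for _ in range(20):
    x = adapt(x, 8.0)
```

The stability condition on the gain is exactly the kind of property the digital-control formulation lets one establish analytically rather than by trial and error.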

96 citations


Proceedings ArticleDOI
18 May 1998
TL;DR: This paper describes a platform designed to obtain a basic understanding of how individuals value Internet usage when offered different quality-of-service (QoS) choices, and gives an overview of both the technology employed at INDEX and the goals of the experimental design.
Abstract: The continuing exponential growth of the Internet and the emergence of new time-critical applications have led to the integration of a large number of different services on the Internet. In the process, the question of how to efficiently allocate bandwidth as a scarce resource has become a crucial issue for the continued proliferation of these new services. Future growth depends on the division of services into quality-differentiated market segments and the pricing structure of each segment. Successful growth requires service providers to offer combinations of quality and price that match user needs, but to do this, providers must understand the structure of user demand. Such understanding is lacking at present. This paper describes a platform designed to obtain a basic understanding of how individuals value Internet usage when offered different quality-of-service (QoS) choices. The Internet Demand Experiment (INDEX) has two main objectives: (a) measurement of user demand for Internet access as a function of QoS, pricing structure and application; and (b) the demonstration of an end-to-end system that provides access to a diverse group of users at attractive price-quality combinations. The data being collected is expected to reveal the correlation between user application and service demand, how demand varies with user experience, and to what extent users form discrete market segments. This paper gives an overview of both the technology employed at INDEX and the goals of the experimental design.

72 citations


Book ChapterDOI
01 Sep 1998
TL;DR: A new architecture is proposed that automatically aggregates flows on each link, so the network has no knowledge of individual flows; the protocol overhead consists mainly of a packet type with three values (reserved, request or best-effort), which can be encoded in two bits.
Abstract: Current resource reservation architectures for multimedia networks do not scale well for a large number of flows. We propose a new architecture that automatically aggregates flows on each link in the network. Therefore, the network has no knowledge of individual flows. There is no explicit signalling protocol, and the protocol overhead mainly consists in the introduction of a packet type with three values (reserved, request or best-effort), which can be encoded in two bits.

66 citations


Proceedings ArticleDOI
18 May 1998
TL;DR: A distributed microeconomic flow control technique is presented that models the network as competitive markets, in which switches price their link bandwidth based on supply and demand, and users purchase bandwidth so as to maximize their individual quality of service (QoS).
Abstract: Network applications require certain individual performance guarantees that can be provided if enough network resources are available. Consequently, contention for the limited network resources may occur. For this reason, networks use flow control to manage network resources fairly and efficiently. This paper presents a distributed microeconomic flow control technique that models the network as competitive markets. In these markets, switches price their link bandwidth based on supply and demand, and users purchase bandwidth so as to maximize their individual quality of service (QoS). This yields a decentralized flow control method that provides a Pareto optimal bandwidth distribution and high utilization (over 90% in simulation results). Discussions about stability and the Pareto optimal distribution are given, as well as simulation results using actual MPEG-compressed video traffic.
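The price-adjustment dynamic can be sketched as a tatonnement loop, assuming log-utility users with fixed budgets (an illustrative assumption; the paper's market model is more general):

```python
def tatonnement(budgets, capacity, alpha=0.01, iters=5000):
    """Market-pricing sketch: each user with budget w demands w/p
    bandwidth (the demand of a log-utility buyer), and the switch
    raises or lowers its price in proportion to excess demand until
    demand matches the link capacity."""
    p = 1.0
    for _ in range(iters):
        demand = sum(w / p for w in budgets)
        p = max(1e-9, p + alpha * (demand - capacity))
    return p
```

At the equilibrium price, total purchased bandwidth equals capacity and each user's allocation is proportional to their budget, one simple way such markets yield a Pareto optimal, high-utilization split.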

60 citations


Proceedings ArticleDOI
18 May 1998
TL;DR: An end-to-end CAC framework for RC-EDF is formulated and it is shown that, when the traffic mix in the network consists of connections with both stringent and loose delay requirements, RC-EDF can substantially outperform GPS in the number of admitted connections, and can thus achieve much higher network utilization.
Abstract: Among packet scheduling disciplines for providing end-to-end quality-of-service (QoS) guarantees to different applications, two classes of algorithms have received particular attention: those based on generalized processor sharing (GPS) and those based on earliest-deadline-first (EDF) scheduling. The powerful properties of GPS-based schemes translate easily into simple call admission control (CAC) procedures. The intense research on GPS has also resulted in very efficient implementation techniques, which have made the cost of these schedulers very affordable. The EDF discipline, in conjunction with per-node traffic shaping [which we refer to as rate-controlled EDF (RC-EDF)], has also been proposed for end-to-end QoS provisioning. However, an appropriate framework for CAC with RC-EDF has not been developed, nor have the possible advantages of using RC-EDF in place of GPS been properly characterized. Furthermore, the implementation complexity of an RC-EDF server is potentially very high, and no technique to reduce costs has been proposed. In this paper, we first formulate an end-to-end CAC framework for RC-EDF that can be implemented in practice. Then, using this framework, we numerically compare the schedulable regions of RC-EDF and GPS and show that, when the traffic mix in the network consists of connections with both stringent and loose delay requirements, RC-EDF can substantially outperform GPS in the number of admitted connections, and can thus achieve much higher network utilization. Finally, we propose a technique to substantially reduce the implementation complexity of an RC-EDF server.
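The EDF discipline itself is straightforward to sketch (this omits the per-node traffic shaping that makes it rate-controlled):

```python
import heapq

def edf_order(packets):
    """Earliest-deadline-first service order.
    packets: iterable of (deadline, packet_id) pairs; the packet with
    the smallest deadline is always transmitted next."""
    heap = list(packets)
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]
```

A heap gives O(log n) insertion and extraction per packet, which is the starting point for the kind of implementation-cost reductions the paper pursues.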

60 citations


Proceedings ArticleDOI
18 May 1998
TL;DR: This research developed Authenticast, a dynamically configurable user-level communication protocol offering variable levels of security throughout the execution, which offers a novel security control abstraction with which tradeoffs in security vs. performance may be made explicit and then utilized with dynamic client-server asymmetries.
Abstract: Focuses on the integrity and protection of information exchanged in high-performance networked computing applications. For these applications, security procedures are often omitted in the interest of performance. Since this may not be acceptable when using public communications media, our research makes explicit and then utilizes the inherent tradeoffs in realizing performance vs. security in communications. Toward this end, we expand the notion of QoS to include the level of security that can be offered within performance and CPU resource availability constraints. To address performance and security tradeoffs in asymmetric and dynamic client-server environments, we developed Authenticast, a dynamically configurable user-level communication protocol offering variable levels of security throughout the execution. Authenticast comprises multiple heuristics to realize dynamic security levels and to decide when and how to apply dynamic security. To demonstrate this protocol, we have implemented a prototype of a high-performance privacy system. This prototype offers a novel security control abstraction with which tradeoffs in security vs. performance may be made explicit and then utilized with dynamic client-server asymmetries. Authenticast uses the "security thermostat" to enable adaptive security processing. The results demonstrate increased scalability and improved performance when adaptive security is applied to the client-server platform with varying numbers of clients and varying resource availabilities at clients.

60 citations


Proceedings ArticleDOI
18 May 1998
TL;DR: This work describes the architecture for a work-conserving server using a combined input-output buffered crossbar switch and describes a mechanism to provide delay bounds for real-time traffic using LOOFA.
Abstract: Describes the architecture for a work-conserving server using a combined input-output buffered crossbar switch. The switch employs a novel algorithm based on output occupancy-the Lowest-Occupancy-Output-First Algorithm (LOOFA)-and a speedup of only 2. A work-conserving switch provides the same throughput performance as an output buffered switch. The work-conserving property of the switch is independent of the switch size and input traffic pattern. We also present a suite of algorithms that can be used in combination with LOOFA. These algorithms determine the fairness and delay properties of the switch. We also describe a mechanism to provide delay bounds for real-time traffic using LOOFA. These delay bounds are achievable without requiring output buffered switch emulation.
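A much-simplified, single-phase rendering of the lowest-occupancy-output-first selection is sketched below; the actual LOOFA iterates toward a full matching, so the fixed input order here is an illustrative assumption.

```python
def loofa_phase(voqs, occupancy):
    """One selection phase: each input (taken in a fixed order) sends
    to the lowest-occupancy output among those it has cells queued for
    and that no earlier input has claimed this phase.
    voqs: dict input -> list of outputs with queued cells.
    occupancy: dict output -> current output-buffer occupancy."""
    taken, chosen = set(), {}
    for i in sorted(voqs):
        free = [j for j in voqs[i] if j not in taken]
        if free:
            j = min(free, key=lambda o: occupancy[o])
            chosen[i] = j
            taken.add(j)
    return chosen
```

Favoring nearly-empty outputs keeps every output busy whenever some input has a cell for it, which is the intuition behind the work-conserving property.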

58 citations


Proceedings ArticleDOI
18 May 1998
TL;DR: In this article, the authors describe a dynamic QoS resource manager (DQM) which is a middleware application that abstracts these new operating system interfaces so that they can be easily used in contemporary application environments.
Abstract: There is an emerging set of research operating systems that provide specialized support for continuous media and other soft real-time applications. A number of these systems provide QoS scheduling abstractions, some of which may dynamically change the QoS allocations to applications during application execution. The tools and environments that allow application developers to take advantage of these abstractions generally do not exist. This paper describes a dynamic QoS resource manager (DQM), which is a middleware application that abstracts these new operating system interfaces so that they can be easily used in contemporary application environments.

Proceedings ArticleDOI
18 May 1998
TL;DR: This paper examines the use of aggregation as a technique to reduce the amount of state needed to provide IIS; the resulting scheme allows large-scale deployment of IIS without overloading the routers with state and associated processing.
Abstract: The Internet Integrated Services (IIS) architecture has a fundamental scaling problem in that per-flow state is maintained at all the routers and end-systems supporting a flow. This paper examines the use of aggregation as a technique to reduce the amount of state needed to provide IIS. In our approach, routers at the edge of a region doing aggregation maintain a detailed IIS state, while in the interior of this region, routers maintain a greatly reduced amount of state. Packets are tagged at the network edge with scheduling information that is used in place of the detailed IIS state. The aggregation scheme described allows large-scale deployment of IIS without overloading the routers with state and associated processing.
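The edge-tagging idea can be sketched as follows; the table layout, field names and addresses are all hypothetical, chosen only to show per-flow state living at the edge while the core schedules on the tag alone.

```python
# hypothetical per-flow table, held only at the edge of the region
EDGE_STATE = {("10.0.0.1", "10.0.9.9"): {"deadline_ms": 5}}

def tag_at_edge(src, dst):
    """Edge router: look up the detailed per-flow state and stamp the
    packet with the scheduling information the core will use."""
    return EDGE_STATE.get((src, dst), {"deadline_ms": None})

def core_priority(tag):
    """Core router: rank packets on the carried tag alone, with no
    per-flow lookup; untagged traffic is treated as best-effort."""
    d = tag["deadline_ms"]
    return d if d is not None else float("inf")
```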

Proceedings ArticleDOI
18 May 1998
TL;DR: This paper discusses the analysis of an audiovisual desktop video-teleconferencing subjective experiment conducted at the Institute for Telecommunication Sciences, where objective models of the individual audio and video quality are presented.
Abstract: This paper discusses the analysis of an audiovisual desktop video-teleconferencing subjective experiment conducted at the Institute for Telecommunication Sciences. Objective models of the individual audio and video quality are presented. Also discussed is an objective model of the audiovisual quality based upon the results of the individual objective audio and video quality models. Finally, a subjective model of audiovisual quality based upon users' ratings of the audio and video quality is discussed.

Proceedings ArticleDOI
18 May 1998
TL;DR: A charging model that can be embedded in the RSVP (Resource ReSerVation Protocol) architecture is described that is open and flexible in that it imposes few or no restrictions on the pricing policy of network providers or the usage behaviour of end-users.
Abstract: Charging mechanisms are needed to protect an integrated services network from arbitrary resource reservations and to create a funding mechanism to extend network capacity at the most desired locations, at the expense of those users that actually use these resources. In this paper, we describe a charging model that can be embedded in the RSVP (Resource ReSerVation Protocol) architecture. Our model is open and flexible in that it imposes few or no restrictions on the pricing policy of network providers or the usage behaviour of end-users. At the same time, it provides mechanisms to enable fine-grained charging of network communication. After a user-centric identification of requirements for charging mechanisms, a formal framework is presented to model the prices and payments. We present protocol elements and an implementation rationale to realize our charging model. Furthermore, we identify potential problems that are inherent to RSVP with regard to precise charging, and we point out future research issues towards a realistic charging architecture.

Proceedings ArticleDOI
18 May 1998
TL;DR: Presents an agent-based architecture for resource reservations that provides scalable per-link resource reservations in agents and low per-packet overhead in routers.
Abstract: Presents an agent-based architecture for resource reservations. For each domain in the network, there is an agent that is responsible for admission control. The architecture provides scalable per-link resource reservations in agents and low per-packet overhead in routers. The key ideas are the following. First, reservations from different sources to the same destination are aggregated as their paths merge toward the destination. Second, an agent in charge of resources at the final destination can generalize reservations for specific end-points so that they are valid for any end-point in the destination domain, thereby allowing more aggregation. Third, agents can do bulk reservations in advance with neighboring agents, thereby allowing aggregation over time. Fourth, agents are responsible for setting up policing points at edge routers for checking commitments. Agents can minimize the per-packet policing overhead in routers by varying the granularity of policing over time.

Proceedings ArticleDOI
18 May 1998
TL;DR: It is shown that the selection of a quality of service (QoS) becomes a difficult task for the user when he is faced with different prices for different QoSs, and the concept of a QoS architecture focusing on this problem is developed in order to show how the agent fits into a QoS framework.
Abstract: Shows that the selection of a quality of service (QoS) becomes a difficult task for the user when he is faced with different prices for different QoSs. Even if the user is only facing a best-effort service, he might be unable to determine the minimal-cost selection of bandwidth. This article shows that the user needs support to get the best service regarding his personal situation. The mechanisms we use for such a personalized support tool (intelligent agent) are described. In addition, the concept of a QoS architecture focusing on this problem is developed in order to show how the agent fits into a QoS framework. The context where this investigation takes place is the INDEX (INternet Demand EXperiment) project, a testbed for examining the user's demand and willingness to pay for different QoSs.

Proceedings ArticleDOI
18 May 1998
TL;DR: This work presents a network architecture and a preliminary implementation that explicitly support the notion of application-oriented QoS for complex network services and provide the ability to deal with heterogeneous networks and hierarchical resource management.
Abstract: Addresses a dilemma raised by recent advances in networking technology, which provide support both for a rich variety of qualities of service (QoSs) and for applications that connect many end-points. Together these features encourage the development of complex multi-party applications that use a diverse set of data types. This raises a two-fold problem: how do application designers choose and specify the many QoS parameters that drive the ultimate performance of their applications; and how does the network efficiently manage its resources to support such a rich application mix? Our approach to this problem is to allow applications to be built around value-added services that encapsulate a variety of simpler resources. This enables both the specification of QoS in terms meaningful to applications, and global optimization of resource allocation across multiple streams and data types. We present a network architecture and a preliminary implementation that explicitly support the notion of application-oriented QoS for complex network services. The key concept is that of service brokers, which applications and service providers use to identify the network resources needed to meet QoS and cost objectives. Service brokers can incorporate a detailed understanding of an application domain, allowing them to make intelligent tradeoffs and to interact with applications and service providers at a high level. They can be hierarchical, in the sense that one broker can invoke the services of another broker. Finally, they provide the ability to deal with heterogeneous networks and hierarchical resource management.

Proceedings ArticleDOI
18 May 1998
TL;DR: In this paper, the authors propose a service-dependent charging policy for packet-switching networks, where charges vary with the type of service and with the quality of service (i.e. with the QoS parameters and with their respective values).
Abstract: In an integrated-services packet-switching network (e.g. the Internet of the future, which is expected to offer real-time as well as non-real-time services), the charging policy must be service-dependent, but how should charges vary with the type of service and with the quality of service (i.e. with the QoS parameters and with their respective values)? We start from a list of the properties the policy should have, and derive from it a formula that satisfies most of them. We also briefly discuss the evaluation of the coefficients in the formula and the experiments that could be run for its validation.

Proceedings ArticleDOI
18 May 1998
TL;DR: The design and implementation of RSVP (Resource ReSerVation Protocol) support for resource reservations over IP-in-IP tunnels are reported, along with experience from this effort that revealed a number of issues related to making resource reservations for aggregate data flows.
Abstract: Among its various uses, IP-in-IP (Internet Protocol) tunneling is a simple way to aggregate the data flows from multiple sources to multiple destinations into one flow, to cross part of the Internet. In this paper, we report our design and implementation of RSVP (Resource ReSerVation Protocol) support for resource reservations over IP-in-IP tunnels, and our experience from this effort that revealed a number of issues related to making resource reservations for aggregate data flows. First, aggregation and de-aggregation go in pairs; thus, the exit point of the tunnel must have adequate information to be able to de-multiplex the aggregate tunnel reservation back to reservations for individual flows. Second, if multiple reserved sessions exist over one tunnel, the two tunnel end-points need mechanisms to synchronize on which end-to-end reservation is bound to which tunnel reservation. On the other hand, mapping all reservations of the same traffic class into one tunnel session can substantially simplify the protocol. Furthermore, one must also properly map error reports from the aggregate reservation back to the ends of individual flows.

Proceedings ArticleDOI
F. Brichet1, A. Simonian
18 May 1998
TL;DR: New conservative upper bounds are derived enabling the use of the simple admission criteria associated with a Gaussian distribution for the bit rate offered to a link, and exponentially weighted moving average schemes seem to offer fair effectiveness in terms of precision and convergence speed.
Abstract: Addresses the admission control problem for leaky bucket controlled sources when assuming fluid flows and employing bufferless multiplexing. We have derived new conservative upper bounds enabling us to use the simple admission criteria associated with a Gaussian distribution for the bit rate offered to a link. The derived upper bounds are exact and no longer rely on the central limit theorem, which required a very large number of sources. Necessary parameters involved in these bounds can be derived from declared or measured values by means of pre-calculated tables. Exponentially weighted moving average schemes seem to offer fair effectiveness in terms of precision and convergence speed.
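A sketch of a Gaussian admission criterion paired with an EWMA rate estimator; the tail bound exp(-a^2/2) used here is a textbook stand-in for the sharper bounds derived in the paper.

```python
import math

def gaussian_admit(mean_rate, variance, link_rate, overflow_prob=1e-6):
    """Gaussian admission criterion sketch: accept the current traffic
    mix when m + a*sigma <= C, with a chosen so that the Gaussian tail
    bound exp(-a**2 / 2) meets the target overflow probability."""
    a = math.sqrt(-2.0 * math.log(overflow_prob))
    return mean_rate + a * math.sqrt(variance) <= link_rate

def ewma(prev, sample, weight=0.1):
    """Exponentially weighted moving average used to track the offered
    bit rate from successive measurements."""
    return (1.0 - weight) * prev + weight * sample
```

In practice the mean and variance fed to the criterion would come from declared parameters or from EWMA-smoothed measurements, as the abstract suggests.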

Proceedings ArticleDOI
Zheng Wang1
18 May 1998
TL;DR: The case for proportional fair sharing as a method of general bandwidth allocation on the Internet is presented and some of the related proposals for providing differentiated services are examined.
Abstract: In this paper, we look at the lessons we have learnt from our work on end-to-end per-session reservation, and we present the case for proportional fair sharing as a method of general bandwidth allocation on the Internet. Finally, we also examine some of the related proposals for providing differentiated services.

Proceedings ArticleDOI
18 May 1998
TL;DR: The requirements for QoS adaptation mechanisms and QoS-based distributed resource management, together with the approaches to QoS monitoring and adaptation, are discussed in the context of the recently developed Distributed Resource Management Architecture (DRMA).
Abstract: New and more advanced applications are supporting end-to-end quality-of-service (QoS) guarantees through the configuration and management of distributed resources. As an effect of the sharing of static resources across multiple concerns, coupled with the use of dynamically changing resources such as mobile communications, the general availability of resources in a distributed environment is variable and potentially unpredictable. In this paper, we discuss the requirements for QoS adaptation mechanisms and QoS-based distributed resource management, together with our approaches to QoS monitoring and adaptation, in the context of our recently developed Distributed Resource Management Architecture (DRMA).

Proceedings ArticleDOI
18 May 1998
TL;DR: The framework for the User Service Assistant (USA) is laid out: a QoS management framework that works independently of applications and available resources, and that reacts to user input rather than trying to predict the users' perception of quality.
Abstract: The rapid deployment of interactive and multimedia applications, and the increased mobility of computers, leads to the need for new technical solutions in computing systems. The Internet comprises a heterogeneous set of networks with very different characteristics, especially considering the increased usage of wireless networks. Even the end systems are architecturally very different, and these factors combined lead to the unreliable and unpredictable performance of networked applications. One of the problems today is how to manage resources and thus provide users with control over the behavior of applications, known as quality-of-service (QoS) management. Much work has been done within this area, but all proposed schemes have one thing in common: a high level of complexity, which so far has prevented any of them from being fully implemented. We propose a new approach to QoS management, the User Service Assistant (USA). We argue that a QoS management framework should work independently of applications and available resources, and that it should react to user input rather than trying to predict the users' perception of quality. We focus on the feasibility of implementing the USA, which we have realized with an experimental application. This paper first lays out the framework, describes the differences between the USA and other proposed schemes, and then describes the implementation of the framework. After that, we describe extensions to the implementation that we propose to evaluate next, in order to fully assess the model. Finally, we discuss the framework and its implications.

Proceedings ArticleDOI
18 May 1998
TL;DR: The Qualis architecture, how it is integrated into the Globus architecture, and how it addresses QoS in a metacomputing environment are presented.
Abstract: General computing over a widely distributed set of heterogeneous machines-typically called metacomputing-offers definite advantages. The notion of quality of service (QoS) for metacomputing is very important. This paper presents Qualis, the QoS component for the Globus metacomputing system. We present the Qualis architecture, how it is integrated into the Globus architecture, and how it addresses QoS in a metacomputing environment.

Proceedings ArticleDOI
A. Eriksson1, C. Gehrmann
18 May 1998
TL;DR: A lightweight resource reservation protocol for unicast Internet traffic is described, with which resources can be reserved on a per-connection basis without introducing connection states in the network.
Abstract: A lightweight resource reservation protocol for unicast Internet traffic is described. The key feature of the protocol is that resources can be reserved on a per-connection basis without introducing connection states in the network. Issues addressed in the paper include robustness against lost signalling messages, route changes, and theft-of-service attacks.

Proceedings ArticleDOI
18 May 1998
TL;DR: It is argued that measured maximal rate envelopes of the aggregate traffic flow can be used to design an MBAC algorithm that successfully controls the admissible region subject to the applications' QoS constraints, for the case of heterogeneous and highly bursty traffic flows, buffered multiplexers, and both moderate and large numbers of traffic flows.
Abstract: Measurement-based admission control (MBAC) offers an attractive means for satisfying the quality-of-service (QoS) requirements of delay-sensitive multimedia applications, without requiring an advanced and detailed traffic characterization of each individual traffic flow. In this paper, we argue that measured maximal rate envelopes of the aggregate traffic flow can be used to design an MBAC algorithm that successfully controls the admissible region subject to the applications' QoS constraints, for the case of heterogeneous and highly bursty traffic flows, buffered multiplexers, and both moderate and large numbers of traffic flows.
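The measured maximal rate envelope can be sketched directly from its definition: for each interval length, the peak rate over any window of that length in the measured arrival sequence.

```python
def max_rate_envelope(arrivals, max_window):
    """Maximal rate envelope sketch: arrivals[t] is the traffic (e.g.
    bytes) observed in slot t; for each window length k = 1..max_window,
    return the peak average rate over any k consecutive slots."""
    env = []
    for k in range(1, max_window + 1):
        peak = max(sum(arrivals[i:i + k]) / k
                   for i in range(len(arrivals) - k + 1))
        env.append(peak)
    return env
```

The envelope is non-increasing in the window length for bursty traffic, and it is this decay that an MBAC algorithm can exploit when sizing the admissible region against buffered multiplexers.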

Proceedings ArticleDOI
J. Hall1, P. Mars
18 May 1998
TL;DR: This work proposes a novel scheduling scheme based on stochastic learning automata which is capable of satisfying a variety of delay requirements in a dynamic traffic environment, and shows via simulation that the scheme outperforms several existing scheduling algorithms.
Abstract: Considers the problem of scheduling packets in a multiplexer, the aim being to provide sufficient service to each traffic stream such that their objectives are just satisfied, thus maximising the resources available for other streams. We propose a novel scheduling scheme based on stochastic learning automata which is capable of satisfying a variety of delay requirements in a dynamic traffic environment, and we show via simulation that the scheme outperforms several existing scheduling algorithms.
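A stochastic learning automaton maintains a probability vector over its actions and reinforces actions that succeed. As a hedged sketch of the general idea (a standard linear reward-inaction automaton choosing which stream to serve, not the paper's exact scheme):

```python
import random

# Sketch of a linear reward-inaction (L_R-I) learning automaton selecting
# which traffic stream to serve next. Illustrative only; the paper's
# scheduler and update rule may differ.
class LearningAutomatonScheduler:
    def __init__(self, n_streams, learning_rate=0.1):
        self.p = [1.0 / n_streams] * n_streams  # action probabilities
        self.a = learning_rate

    def choose(self):
        """Sample a stream to serve according to the current probabilities."""
        r, acc = random.random(), 0.0
        for i, pi in enumerate(self.p):
            acc += pi
            if r <= acc:
                return i
        return len(self.p) - 1

    def reward(self, chosen):
        """Reinforce 'chosen' when its delay objective was met;
        unfavourable responses leave the probabilities unchanged."""
        for i in range(len(self.p)):
            if i == chosen:
                self.p[i] += self.a * (1.0 - self.p[i])
            else:
                self.p[i] *= (1.0 - self.a)

sched = LearningAutomatonScheduler(3)
for _ in range(200):
    sched.reward(0)  # stream 0 repeatedly meets its objective
```

After repeated rewards the automaton concentrates nearly all service probability on the successful action, which is how such a scheduler can track a changing traffic environment.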

Proceedings ArticleDOI
M. Asawa1
18 May 1998
TL;DR: A scalable service level monitoring methodology to assess user satisfaction without injecting any measurement traffic is developed and results of a real-world experiment demonstrate that, with careful data analysis, passive measurements can effectively detect service problems.
Abstract: Internet service providers are increasingly trying to differentiate themselves in terms of the service performance that they provide to their users. In this paper, we develop a scalable service level monitoring methodology to assess user satisfaction without injecting any measurement traffic. Specifically, we suggest Web throughput as a service level metric, outline possible ways to measure it and discuss the advantages of passive observations of actual user activity. We further propose a statistical data analysis method that analyzes passive throughput measurements and quantifies user satisfaction/dissatisfaction and the confidence that the provider may have in the collected data, i.e. data reliability. The proposed technique is based on the premise that the service provider is interested in continuously monitoring the service levels being offered to a majority of the users over a long enough time. We present results of a real-world experiment that demonstrates that, with careful data analysis, passive measurements can effectively detect service problems. Our experiments also indicate that, for 90% of the time, the results of reliable passive measurements agree with those of random active measurements. Unlike active measurements, passive measurements do not generate additional traffic in the network, and hence are preferred. The underlying approach may also provide a communication vehicle between service sales/marketing and operations/capacity planning aspects of service provisioning.
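The statistical idea can be sketched as follows (a simple proportion estimate with a normal-approximation confidence interval; the paper's actual analysis may differ):

```python
import math

# Illustrative sketch: from passively observed Web throughput samples,
# estimate the fraction of transfers meeting a target rate and attach a
# simple confidence half-width. Not the paper's exact statistics.
def satisfaction_estimate(samples_bps, target_bps, z=1.96):
    """Return (p_hat, half_width): the estimated satisfied fraction and
    a normal-approximation 95% confidence half-width on that estimate."""
    n = len(samples_bps)
    ok = sum(1 for s in samples_bps if s >= target_bps)
    p_hat = ok / n
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat, half_width

p, hw = satisfaction_estimate([120e3, 80e3, 200e3, 50e3, 150e3], 100e3)
```

A wide half-width signals that too few passive samples have been observed yet, which matches the paper's emphasis on quantifying the reliability of the collected data before acting on it.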

Proceedings ArticleDOI
18 May 1998
TL;DR: A new traffic characterization, called g-regularity, is proposed for marked point processes, providing the basis for deterministic quality-of-service (QoS) guarantees in telecommunication networks with variable-length packets.
Abstract: Proposes a direct, simple and general treatment for providing deterministic quality-of-service (QoS) guarantees in telecommunication networks with variable-length packets. The traffic in such networks is modelled by marked point processes that consist of two sequences of variables: the arrival times and the packet lengths. We propose a new traffic characterization, called g-regularity, to characterise a marked point process. Based on the new traffic characterizations, we introduce two basic network elements: (i) traffic regulators that generate g-regular marked point processes, and (ii) g-servers that provide QoS for marked point processes. Network elements can be joined by concatenation, "filter bank summation" and feedback to form a composite network element. We illustrate the use of the framework by various examples that include G/G/1 queues, VirtualClock, guaranteed rate servers in tandem, segmentation and reassembly, jitter control, dampers, window flow control, and queues with service curve-based earliest deadlines.
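A familiar special case of such a traffic regulator is the (sigma, rho) leaky bucket applied to variable-length packets; the paper's g-regularity is strictly more general, but the special case illustrates the idea:

```python
# Special-case sketch: a (sigma, rho) regulator for variable-length
# packets, one simple instance of the paper's more general g-regular
# framework (illustrative, not the paper's construction).
def regulate(arrivals, sigma, rho):
    """arrivals: list of (arrival_time, packet_length_bits), time-ordered.
    Returns per-packet release times such that the cumulative bits
    released by time t never exceed sigma + rho * t."""
    releases, cum = [], 0
    for t_arr, length in arrivals:
        cum += length
        # earliest time at which sigma + rho*t reaches 'cum' bits
        earliest = max(0.0, (cum - sigma) / rho)
        prev = releases[-1] if releases else 0.0
        releases.append(max(t_arr, earliest, prev))
    return releases

# Two back-to-back 1000-bit packets against sigma=1000 bits, rho=1000 b/s:
# the second must wait until the envelope permits 2000 cumulative bits.
releases = regulate([(0.0, 1000), (0.0, 1000)], sigma=1000, rho=1000)
```

Concatenating such regulators with servers that guarantee service to conforming traffic is exactly the compositional style (concatenation, filter bank summation, feedback) the abstract describes.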

Proceedings ArticleDOI
18 May 1998
TL;DR: The proposed flow aggregation algorithm merges similar client QoS requirements into a single requirement, reducing the number of video streams the server must prepare and the total bandwidth required.
Abstract: Proposes flow aggregation algorithms for multicast video transport. Because of the heterogeneities of network/client environments and users' preferences on the perceived video quality, various QoS (quality of service) requirements must be simultaneously guaranteed, even for a single video source in a multicast connection. It is easy (but ineffective) to provide many video streams according to each user's request. Our flow aggregation algorithm merges similar client QoS requirements into a single requirement, so that the number of video streams the video server needs to prepare can be decreased. The total amount of required bandwidth is then reduced by sharing the same video stream among a number of clients. Our algorithm has two variants: one is suitable for sender-initiated multicast connections; the other is suitable for receiver-initiated multicast connections. The proposed algorithms are evaluated and compared through simulation. We show that sender-initiated flow aggregation (an ideal case in our approach) is more effective, but that receiver-initiated flow aggregation can also offer a reasonably effective mechanism. Simplified versions of the two algorithms, designed to save computational cost, are also considered and evaluated.
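One simple way to realize such aggregation (a greedy sketch under assumed inputs, not the paper's exact algorithm) is to merge clients whose requested rates are close and serve each group at its most demanding member's rate, so every member's requirement is still met:

```python
# Illustrative sketch: greedily merge clients whose requested video rates
# fall within 'tolerance' of a group's first (lowest) member, serving the
# group at its highest requested rate. Not the paper's exact algorithm.
def aggregate_flows(requested_rates, tolerance):
    """requested_rates: per-client requested rates (e.g. Mbit/s).
    Returns a list of (serving_rate, member_rates) per shared stream."""
    streams = []
    for r in sorted(requested_rates):
        if streams and r - streams[-1][1][0] <= tolerance:
            rate, members = streams.pop()
            members.append(r)
            streams.append((max(rate, r), members))
        else:
            streams.append((r, [r]))
    return streams

# Four clients collapse into two shared streams.
streams = aggregate_flows([1.0, 3.1, 1.2, 3.0], tolerance=0.5)
```

Here the total served bandwidth is 1.2 + 3.1 = 4.3 instead of 8.3 for four individual streams, which is the bandwidth-sharing effect the abstract describes.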