
A generalized processor sharing approach to flow control in integrated services networks: the multiple node case

Abhay Parekh1, Robert G. Gallager1
01 Jun 1993-IEEE ACM Transactions on Networking (IEEE Press)-Vol. 1, Iss: 2, pp 137-150
Abstract: Worst-case bounds on delay and backlog are derived for leaky bucket constrained sessions in arbitrary topology networks of generalized processor sharing (GPS) servers. The inherent flexibility of the service discipline is exploited to analyze broad classes of networks. When only a subset of the sessions are leaky bucket constrained, we give succinct per-session bounds that are independent of the behavior of the other sessions and also of the network topology. However, these bounds are only shown to hold for each session that is guaranteed a backlog clearing rate that exceeds the token arrival rate of its leaky bucket. A much broader class of networks, called consistent relative session treatment (CRST) networks is analyzed for the case in which all of the sessions are leaky bucket constrained. First, an algorithm is presented that characterizes the internal traffic in terms of average rate and burstiness, and it is shown that all CRST networks are stable. Next, a method is presented that yields bounds on session delay and backlog given this internal traffic characterization. The links of a route are treated collectively, yielding tighter bounds than those that result from adding the worst-case delays (backlogs) at each of the links in the route. The bounds on delay and backlog for each session are efficiently computed from a universal service curve, and it is shown that these bounds are achieved by "staggered" greedy regimes when an independent sessions relaxation holds. Propagation delay is also incorporated into the model. Finally, the analysis of arbitrary topology GPS networks is related to Packet GPS networks (PGPS). The PGPS scheme was first proposed by Demers, Shenker and Keshav (1991) under the name of weighted fair queueing. For small packet sizes, the behavior of the two schemes is seen to be virtually identical, and the effectiveness of PGPS in guaranteeing worst-case session delay is demonstrated under certain assignments.

Summary

1 Introduction

  • This paper focuses on a central problem in the control of congestion in high speed integrated services networks.
  • Traditionally, the flexibility of data networks has been traded off with the performance guarantees given to the users.
  • A major part of their work is to analyze networks of arbitrary topology using these specialized servers, and to show how the analysis leads to implementable schemes for guaranteeing worst-case packet delay.
  • An important advantage of using leaky buckets is that this allows one to separate the packet delay into two components–delay in the leaky bucket and delay in the network.
  • The first of these components is independent of the other active sessions and can be estimated by the user, if the statistical characterization of the incoming data is sufficiently simple (See Section 6.3 of [1] for an example).

2 An Outline

  • Generalized Processor Sharing (GPS) is defined and explained in Section 3.
  • The authors propose a virtual time implementation of PGPS in the next subsection.
  • Having established PGPS as a desirable service discipline scheme the authors turn their attention to the rate enforcement function in Section 6.
  • The leaky bucket is described and proposed as a desirable strategy for admission control.
  • The authors then proceed with an analysis, in Sections 7 and Section 8, of a single GPS server system in which the sessions are constrained by leaky buckets.

3 GPS Multiplexing

  • The choice of an appropriate service discipline at the nodes of the network is key to providing effective flow control.
  • This flexibility should not compromise the fairness of the scheme, i.e. a few classes of users should not be able to degrade service to other classes, to the extent that performance guarantees are violated.
  • In Section 4 the authors will present a packet-based multiplexing discipline that is an excellent approximation to GPS even when the packets are of variable length.
  • Then as long as ρi ≤ gi, the session can be guaranteed a throughput of ρi, independent of the demands of the other sessions.
  • When φ1 = φ2 and both sessions are backlogged, each session is served at rate 1/2 (see Figure 3.1 for an example of generalized processor sharing).
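
The sharing rule behind these bullets can be sketched in a few lines; the function name and example weights below are illustrative, not from the paper. Each backlogged session i is served at rate r·φi/Σj∈B φj, where B is the set of currently backlogged sessions:

```python
def gps_rates(phi, backlogged, r=1.0):
    """Instantaneous GPS service rates: each backlogged session i
    is served at rate r * phi[i] / (sum of phi over the backlogged set)."""
    total = sum(phi[j] for j in backlogged)
    return {i: r * phi[i] / total for i in backlogged}

# phi1 == phi2: both backlogged sessions are served at rate 1/2 each.
equal = gps_rates({1: 1.0, 2: 1.0}, {1, 2})
# 2*phi1 == phi2: session 1 is served at rate 1/3, session 2 at rate 2/3.
unequal = gps_rates({1: 1.0, 2: 2.0}, {1, 2})
```

Sessions that are idle simply drop out of B, so the excess capacity is redistributed among the remaining backlogged sessions in proportion to their φ's.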

4 Packet-by-Packet GPS

  • A problem with GPS is that it is an idealized discipline that does not transmit packets as entities.
  • The next packet to depart under GPS may not have arrived at time τ , and since the server has no knowledge of when this packet will arrive, there is no way for the server to be both work conserving and to serve the packets in increasing order of Fp.
  • Now suppose the server becomes free at time τ , i.e. it has just finished transmitting a packet at time τ .
  • Let us call this scheme PGPS for packet-by-packet generalized processor sharing.
  • Let pk be the kth packet in the busy period to depart under PGPS and let its length be Lk.

4.1 A Virtual Time Implementation of PGPS

  • In Section 4 the authors described PGPS but did not provide an efficient way to implement it.
  • In this section the authors will use the concept of Virtual Time to track the progress of GPS that will lead to a practical implementation of PGPS.
  • The authors' interpretation of virtual time is a generalization of the one considered in [4] for uniform processor sharing.
  • In the following the authors assume that the server works at rate 1.
  • First, the virtual time finishing times can be determined at the packet arrival time.

5 Comparing PGPS to other schemes

  • Under weighted round robin, every session i, has an integer weight, wi associated with it.
  • If the system is heavily loaded in the sense that almost every slot is utilized, the packet may have to wait almost N slot times to be served, where N is the number of sessions sharing the server.
  • Zhang proposes an interesting scheme called virtual clock multiplexing [13].
  • PGPS uses the links more efficiently and flexibly and can provide comparable worst-case end-to-end delay bounds.
  • Stop-and-go queueing may provide significantly better bounds on jitter.

6 Leaky Bucket

  • Tokens or permits are generated at a fixed rate, ρ, and packets can be released into the network only after removing the required number of tokens from the token bucket.
  • There is no bound on the number of packets that can be buffered, but the token bucket contains at most σ bits worth of tokens.
  • This model for incoming traffic is essentially identical to the one recently proposed by Cruz [2], [3], and it has also been used in various forms to represent the inflow of parts into manufacturing systems by Kumar [9].
  • The arrival constraint is attractive since it restricts the traffic in terms of average rate (ρ), peak rate (C), and burstiness (σ and C).
  • The authors assume that the session starts out with a full bucket of tokens.
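
The token bucket behavior described above can be sketched directly; the class name and the bookkeeping below are ours, not the paper's, and the peak-rate constraint C is omitted for brevity. Tokens accrue at rate ρ up to a depth of σ bits, the session starts with a full bucket, and a packet of L bits may enter the network only if L tokens are available:

```python
class LeakyBucket:
    """Token bucket with depth sigma (bits) and token rate rho (bits/s)."""
    def __init__(self, sigma, rho):
        self.sigma = sigma      # bucket depth, in bits of tokens
        self.rho = rho          # token generation rate, bits per second
        self.tokens = sigma     # the session starts with a full bucket
        self.last = 0.0         # time of the last token update

    def conforms(self, t, length):
        """Try to release a packet of `length` bits at time t; returns
        True when enough tokens have accumulated to admit it."""
        self.tokens = min(self.sigma, self.tokens + self.rho * (t - self.last))
        self.last = t
        if length <= self.tokens:
            self.tokens -= length
            return True
        return False

bucket = LeakyBucket(sigma=1000, rho=100)
print(bucket.conforms(0.0, 1000))  # a full-bucket burst is admitted: True
print(bucket.conforms(1.0, 200))   # only 100 tokens regenerated: False
print(bucket.conforms(2.0, 200))   # 200 tokens by now: True
```

A real shaper would also buffer non-conforming packets until enough tokens arrive; here the boolean return just marks conformance.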

7 Analysis

  • The session traffic is constrained as in (7).
  • The server is work conserving (i.e. it is never idle if there is work in the system), and operates at the fixed rate of 1.
  • The session i delay at time τ is denoted by Di(τ), and is the amount of time that session i flow arriving at time τ spends in the system before departing.
  • The authors are interested in computing the maximum delay over all time, and over all arrival functions that are consistent with (7).
  • Similarly, the authors define the maximum backlog for session i, Q∗i, and characterize the output of the server in terms of additional parameters so that Si ∼ (σouti, ρouti, Couti).

7.1 Preliminaries

  • Thus στi is the sum of the number of tokens left in the bucket and the session i backlog at the server at time τ .
  • Since session delay is bounded by the length of the largest possible system busy period, the session delays are bounded as well.
  • Since the system is stable, ρouti = ρi, and σouti is bounded for each session i.
  • Since the system is stable, any session i backlog must be cleared.
  • Then the amount served by this time must be σi plus the tokens generated in the interval.

7.2 Greedy Sessions

  • Thus it takes lτi/(Ci − ρi) time units to deplete the tokens in the bucket.
  • After this, the rate will be limited by the token arrival rate, ρi. Figure 7.2 depicts a session i arrival function that is greedy from time τ.
  • Under generalized processor sharing, for every session i, D∗i, Q∗i and σouti are achieved (not necessarily at the same time) when every session is greedy starting at time zero.
  • This is an intuitively pleasing and satisfying result.
  • It seems reasonable that if a session sends as much traffic as possible at all times, it is going to impede the progress of packets arriving from the other sessions.
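
The greedy arrival pattern in the bullets above can be written out directly: a session that starts with l tokens sends at the peak rate Ci until the bucket empties, after l/(Ci − ρi) time units, and at the token rate ρi thereafter, i.e. it emits min(Ci·t, l + ρi·t) bits in the first t time units. The helper names below are ours, an illustration of that envelope rather than the paper's notation:

```python
def greedy_arrivals(t, tokens, C, rho):
    """Traffic a greedy session emits in its first t time units:
    peak rate C until the bucket (initially `tokens` bits) empties,
    token rate rho afterwards."""
    return min(C * t, tokens + rho * t)

def depletion_time(tokens, C, rho):
    """Time at which the bucket empties under greedy sending (C > rho)."""
    return tokens / (C - rho)

# tokens = 600, C = 4, rho = 1: the bucket empties after 600/3 = 200 units.
print(depletion_time(600, 4, 1))        # 200.0
print(greedy_arrivals(100, 600, 4, 1))  # still peak-rate limited: 400
print(greedy_arrivals(300, 600, 4, 1))  # token limited: 600 + 300 = 900
```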

7.3 An All-greedy GPS system

  • Theorem 3 suggests that in order to compute D∗i, Q∗i, and σouti, the authors should examine the dynamics of a system in which all the sessions are greedy starting at time 0, the beginning of a system busy period.
  • Since the system busy period is finite the authors can label the sessions in the order in which their first individual busy periods are terminated.
  • To simplify the presentation, the authors will assume that Ci ≥ 1 for all i—the general case is dealt with in [11].
  • The dynamics of an all-greedy GPS system are illustrated in Figure 7.4.
  • It is interesting to note that the universal service curve S(0, t) is identical to the virtual time function, V(t), defined in (5).

9 Picking the φ’s

  • Every session i is characterized by σi, ρi, Ci, di where di is the worst case packet delay that can be tolerated by session i.
  • The following is one possible approach that could be used.
  • Note that any choice of φN+1 ∈ [φminN+1, φmaxN+1] will meet worst case delay guarantees.
  • Picking an extreme point is not advisable, however, since then no more sessions could be accepted after session N + 1.
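
One simplified way to carry out such an assignment is sketched below; this is our own sketch under stated assumptions, not the paper's algorithm. Assume each session needs a guaranteed GPS rate of at least Ri = max(ρi, σi/di), using the standard single-node bound that a leaky-bucket (σi, ρi) session served at a guaranteed rate gi ≥ ρi sees delay at most σi/gi. The set is admissible if Σ Ri ≤ r, and setting φi proportional to Ri then yields gi = Ri·(r/Σ Rj) ≥ Ri:

```python
def assign_weights(sessions, r):
    """sessions: dict name -> (sigma, rho, d), with d the delay target.
    Returns phi weights proportional to each session's required rate,
    or None if the required rates exceed the server rate r."""
    required = {i: max(rho, sigma / d)
                for i, (sigma, rho, d) in sessions.items()}
    if sum(required.values()) > r:
        return None          # admission control rejects the session set
    return required          # phi_i proportional to the required rate

phis = assign_weights({"voice": (500, 50, 2.0),
                       "video": (4000, 400, 10.0)}, r=1000)
# voice needs max(50, 500/2) = 250; video needs max(400, 4000/10) = 400.
print(phis)
```

Leaving the slack r − Σ Ri unassigned mirrors the bullet above: exhausting it on the current sessions would leave no room to admit a session N + 1 later.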

10 Conclusions

  • The authors presented a fair, flexible and efficient multiplexing scheme called Generalized Processor Sharing that appears to be appropriate for integrated services networks.
  • The authors analyzed the GPS multiplexer when the sources are constrained by leaky buckets, and presented an efficient algorithm to determine worst case delays for a given single server GPS system.
  • A method to add new users to the system was also discussed.
  • Elsewhere [10], the authors have extended this work to PGPS networks of arbitrary topologies.
  • It is hoped that their results in this paper and in the sequel can form the basis for a highly flexible and efficient rate-based flow control scheme for integrated services networks.


A GENERALIZED PROCESSOR SHARING APPROACH TO FLOW CONTROL
IN INTEGRATED SERVICES NETWORKS—THE SINGLE NODE CASE
ABHAY K. PAREKH and ROBERT G. GALLAGER
Laboratory for Information and Decision Systems
Massachusetts Institute of Technology
parekh,gallager@lids.mit.edu
Abstract The problem of allocating network re-
sources to the users of an integrated services network
is investigated in the context of rate based flow con-
trol. The network is assumed to be a virtual cir-
cuit, connection-based packet network. We propose
a highly flexible and efficient multiplexing scheme
called Generalized Processor Sharing (GPS) that al-
lows the network to make worst-case performance
guarantees. A practical packet-by-packet service dis-
cipline that closely approximates Generalized Proces-
sor Sharing is also presented. This allows us to relate
performance results for GPS to the packet-by-packet
scheme in a precise manner.
A single server GPS system is analyzed exactly,
and tight bounds on worst-case packet delay, out-
put burstiness and backlog are derived for each ses-
sion, when the sources are constrained by leaky buck-
ets. The analysis yields a simple resource assignment
scheme that allows the server to make worst case de-
lay and rate guarantees to every session in the sys-
tem. Extensions of this work to arbitrary topology
networks are also discussed.
1 Introduction
This paper focuses on a central problem in the con-
trol of congestion in high speed integrated services
networks. Traditionally, the flexibility of data net-
works has been traded off with the performance guar-
antees given to the users. For example, the telephone
network provides good performance guarantees but
poor flexibility, while most packet switched networks
are more flexible, but only provide marginal perfor-
mance guarantees. Integrated services networks must
carry a wide range of traffic types and still be able
to provide performance guarantees to real-time ses-
sions such as voice and video. We will investigate
an approach to reconcile these apparently conflicting
requirements when the short-term demand for link
usage frequently exceeds the usable capacity.
We propose the use of a packet service discipline at
the nodes of the network that is based on a multiplex-
ing scheme called generalized processor sharing. This
service discipline is combined with leaky bucket rate
admission control to provide flexible, efficient and fair
use of the links. A major part of our work is to an-
alyze networks of arbitrary topology using these spe-
cialized servers, and to show how the analysis leads to
implementable schemes for guaranteeing worst-case
packet delay. In this paper, however, we will restrict
our attention to sessions at a single node, and post-
pone the analysis of arbitrary topologies to [10].
The analysis will concentrate on providing guar-
antees on throughput and worst-case packet delay.
While packet delay in the network can be expressed
as the sum of the processing, queueing, transmission
and propagation delays, we will focus exclusively on
how to limit queueing delay.
Our approach can be described as a strategy for
rate-based flow control. Under rate-based schemes,
a source’s traffic is parametrized by a set of statis-
tics such as average rate, maximum rate, burstiness
etc., and is assigned a vector of values corresponding
to these parameters. The user also requests a cer-
tain quality of service, that might be characterized,
for example, by tolerance to worst-case or average de-
lay. The network checks to see if a new source can be
accommodated, and if so, it takes actions (such as
reserving transmission links or switching capacity) to
ensure the quality of service desired. Once a source
begins sending traffic, the network ensures that the
agreed upon values of traffic parameters are not vio-
lated.
We will assume that rate admission control is done
through leaky buckets [12]. An important advantage
of using leaky buckets is that this allows one to sep-
arate the packet delay into two components–delay in
the leaky bucket and delay in the network. The first
of these components is independent of the other active
sessions and can be estimated by the user, if the sta-
tistical characterization of the incoming data is suffi-
ciently simple (See Section 6.3 of [1] for an example).
Appeared in Infocom '92
Figures follow the body of the paper

The traffic entering the network has been “shaped”
by the leaky bucket in a manner that can be suc-
cinctly characterized (we will do this in Section 6),
and so the network can upper bound the second com-
ponent of packet delay through this characterization.
This upper bound is independent of the statistics of
the incoming data, which is helpful in the usual case
where these statistics are either complex or unknown.
From this point on, we will not consider the delay in
the leaky bucket.
2 An Outline
Generalized Processor Sharing (GPS) is defined and
explained in Section 3. In Section 4 we present a
packet-based scheme, PGPS, and show that it closely
approximates GPS. Results obtained in this section
allow us to translate session delay and buffer re-
quirement bounds derived for a GPS server system
to a PGPS server system. We propose a virtual
time implementation of PGPS in the next subsection.
Then PGPS is compared to some other multiplexing
schemes.
Having established PGPS as a desirable service dis-
cipline scheme we turn our attention to the rate en-
forcement function in Section 6. The leaky bucket
is described and proposed as a desirable strategy for
admission control. We then proceed with an analysis,
in Sections 7 and Section 8, of a single GPS server
system in which the sessions are constrained by leaky
buckets. In Section 9 we outline an algorithm for pro-
viding performance guarantees to a new user without
violating guarantees made to the existing sessions of
the system. Conclusions are in Section 10.
3 GPS Multiplexing
The choice of an appropriate service discipline at the
nodes of the network is key to providing effective flow
control. A good scheme should allow the network
to treat users differently, in accordance with their
desired quality of service. However, this flexibility
should not compromise the fairness of the scheme, i.e.
a few classes of users should not be able to degrade
service to other classes, to the extent that perfor-
mance guarantees are violated. Also, if one assumes
that the demand for high bandwidth services is likely
to keep pace with the increase in usable link band-
width, time and frequency multiplexing are too waste-
ful of the network resources to be considered as candi-
date multiplexing disciplines. Finally, the service dis-
cipline must be analyzable so that performance guar-
antees can be made in the first place. We now present
a flow-based multiplexing discipline called General-
ized Processor Sharing that is efficient, flexible, fair
and analyzable, and that therefore seems very ap-
propriate for integrated services networks. However,
it has the significant drawback of not transmitting
packets as entities. In Section 4 we will present a
packet-based multiplexing discipline that is an excel-
lent approximation to GPS even when the packets are
of variable length.
A Generalized Processor Sharing (GPS) server is work conserving and operates at a fixed rate r. It is characterized by positive real numbers φ1, φ2, ..., φN. Let Si(τ, t) be the amount of session i traffic served in an interval [τ, t]. Then a GPS server is defined as one for which

Si(τ, t) / Sj(τ, t) ≥ φi / φj,   j = 1, 2, ..., N   (1)

for any session i that is backlogged in the interval [τ, t]. Summing over all sessions j:

Si(τ, t) Σj φj ≥ (t − τ) r φi

and session i is guaranteed a rate of

gi = (φi / Σj φj) r.   (2)
GPS is an attractive multiplexing scheme for a number of reasons:

• Define ρi to be the session i average rate. Then as long as ρi ≤ gi, the session can be guaranteed a throughput of ρi, independent of the demands of the other sessions.

• The delay of an arriving session i bit can be bounded as a function of the session i queue length, independent of the queues and arrivals of the other sessions. Schemes such as FCFS, LCFS, and Strict Priority do not have this property.

• By varying the φi's we have the flexibility of treating the sessions in a variety of different ways. For example, when all the φi's are equal the system reduces to uniform processor sharing. As long as the combined average rate of the sessions is less than r, any assignment of positive φi's yields a stable system.

• It is possible to make worst-case network queueing delay guarantees when the sources are constrained by leaky buckets. We will present our results on this later. Thus GPS is particularly attractive for sessions sending real-time traffic such as voice and video.
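
As a quick illustration of equation (2), the guaranteed rate is trivial to evaluate; the helper below is ours, not part of the paper:

```python
def guaranteed_rate(phi, i, r):
    """g_i = (phi_i / sum_j phi_j) * r: the rate session i is guaranteed
    under GPS regardless of the other sessions' behavior (equation (2))."""
    return phi[i] / sum(phi.values()) * r

phi = {1: 1.0, 2: 2.0, 3: 1.0}
g1 = guaranteed_rate(phi, 1, r=4.0)   # 1/4 of a rate-4 link: 1.0
# A session with average rate rho_1 <= g_1 is guaranteed throughput rho_1.
```
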
Figure 3.1 illustrates generalized processor sharing. Variable length packets arrive from both sessions on infinite capacity links and appear as impulses to the system. For i = 1, 2, let Ai(0, t) be the amount of session i traffic that arrives at the system in the interval (0, t], and similarly, let Si(0, t) be the amount of session i traffic that is served in the interval (0, t]. We assume that the server works at rate 1.

[Figure 3.1: An example of generalized processor sharing.]

When φ1 = φ2 and both sessions are backlogged, they are each served at rate 1/2 (e.g. the interval [1, 6]). When 2φ1 = φ2 and both sessions are backlogged, session 1 is served at rate 1/3 and session 2 at rate 2/3. Notice how increasing the relative weight of φ2 leads to better treatment of that session in terms of both backlog and delay. Also, notice that under both choices of φi, the system is empty at time 13 since the server is work conserving under GPS.
4 Packet-by-Packet GPS
A problem with GPS is that it is an idealized disci-
pline that does not transmit packets as entities. It
assumes that the server can serve multiple sessions
simultaneously and that the traffic is infinitely divis-
ible. In this section we propose a simple packet-by-
packet transmission scheme that is an excellent ap-
proximation to GPS even when the packets are of
variable length. Our idea is similar to the one used in [4] to simulate uniform processor sharing. We will adopt the convention that a packet has arrived only after its last bit has arrived.

Let Fp be the time at which packet p will depart (finish service) under generalized processor sharing. Then a very good approximation of GPS would be a work conserving scheme that serves packets in increasing order of Fp. (By work conserving we mean that the server is always busy when there are backlogged packets in the system.) Now suppose the server becomes free at time τ, i.e. it has just finished transmitting a packet at time τ. The next packet to depart under GPS may not have arrived yet, and since the server has no knowledge of when this packet will arrive, there is no way for the server to be both work conserving and to serve the packets in increasing order of Fp. Instead, the server picks the first packet that would complete service in the GPS simulation if no additional packets were to arrive after time τ. Let us call this scheme PGPS for packet-by-packet generalized processor sharing.
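
The PGPS rule just defined — when the server frees up, transmit the arrived packet that would finish first in the simulated GPS system — can be sketched as below. The helper and the numbers are ours: a toy two-packet scenario (a long session 2 packet at time 0, a short session 1 packet at time 1, equal weights, rate 1) whose GPS finish times, 4 and 3 respectively, we computed by hand; in practice they come from the virtual time mechanism of Section 4.1.

```python
def pgps_schedule(packets, r=1.0):
    """packets: list of (arrival, length, gps_finish, name) tuples.
    Work-conserving server: whenever it is free, it serves the arrived
    packet with the smallest GPS finish time; if no packet has arrived,
    it idles until the next arrival."""
    pending = sorted(packets)            # order by arrival time
    t, order = 0.0, []
    while pending:
        arrived = [p for p in pending if p[0] <= t]
        if not arrived:
            t = pending[0][0]            # idle until the next arrival
            continue
        nxt = min(arrived, key=lambda p: p[2])
        pending.remove(nxt)
        t += nxt[1] / r                  # transmit the whole packet
        order.append((nxt[3], t))        # (name, PGPS departure time)
    return order

# GPS would send session 1's short packet first (finish 3 vs 4), but PGPS
# must start the long session 2 packet at t=0 since nothing else is there;
# session 1's packet then departs at 4, one unit later than under GPS.
print(pgps_schedule([(0.0, 3.0, 4.0, "s2-p1"), (1.0, 1.0, 3.0, "s1-p1")]))
```

The one-unit lateness here is within the Lmax/r = 3 bound of Theorem 1 below.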
Notice that when φ1 = φ2 in the example of Figure 3.1, the first packet to complete service under GPS is the session 1 packet that arrives at time 1. However, the PGPS server is forced to begin serving the long session 2 packet at time 0, since there are no other packets in the system at that time. Thus the session 1 packet arriving at time 1 departs the system at time 4, i.e. 1 time unit later than it would depart under GPS.

A natural issue to examine at this point is how much later packets may depart the system under PGPS relative to GPS. First we present a useful property of GPS systems.

Lemma 1 Let p and p′ be packets in a GPS system at time τ and suppose that packet p completes service before packet p′ if there are no arrivals after time τ. Then packet p will also complete service before packet p′ for any pattern of arrivals after time τ.

Now let F̂p be the time at which packet p departs under PGPS. We show that

Theorem 1 For all packets p,

F̂p − Fp ≤ Lmax / r,

where Lmax is the maximum packet length, and r is the rate of the server.

Proof. Since both GPS and PGPS are work conserving disciplines, their busy periods coincide, i.e. the GPS server is in a busy period iff the PGPS server is in a busy period. Hence it suffices to prove the result for each busy period. Consider any busy period and let the time that it begins be time zero. Let pk be the kth packet in the busy period to depart under PGPS and let its length be Lk. Also let tk be the time that pk departs under PGPS and uk be the time that pk departs under GPS. Finally, let ak be the time that pk arrives. We now show that

tk ≤ uk + Lmax / r

for k = 1, 2, .... Let m be the largest integer that satisfies both 0 < m ≤ k − 1 and um > uk. Thus

um > uk ≥ ui for m < i < k.   (3)

Then packet pm is transmitted before packets pm+1, ..., pk under PGPS, but after all these packets under GPS. If no such integer m exists then set m = 0. Now for the case m > 0, packet pm begins transmission at tm − Lm/r, so from Lemma 1:

min{am+1, ..., ak} > tm − Lm/r.   (4)

Since pm+1, ..., pk−1 arrive after tm − Lm/r and depart before pk does under GPS:

uk ≥ (1/r)(Lk + Lk−1 + ... + Lm+1) + tm − Lm/r = tk − Lm/r,

and so tk ≤ uk + Lm/r ≤ uk + Lmax/r. If m = 0, then pk−1, ..., p1 all leave the GPS server before pk does, and so uk ≥ tk. □
Let Si(τ, t) and Ŝi(τ, t) be the amount of session i traffic served under GPS and PGPS in the interval [τ, t]. Then we can use Theorem 1 to show:

Theorem 2 For all times τ and sessions i,

Si(0, τ) − Ŝi(0, τ) ≤ Lmax.

When the φi are all equal, this result reduces to one established in [8]. Let Q̂i(τ) and Qi(τ) be the session i backlog at time τ under PGPS and GPS respectively. Then it immediately follows from Theorem 2 that

Corollary 1 For all times τ and sessions i,

Q̂i(τ) − Qi(τ) ≤ Lmax.

Notice that:

• We can use Theorem 1 and Corollary 1 to translate bounds on GPS worst-case packet delay and backlog to the corresponding bounds on PGPS.

• Variable packet lengths are easily handled by PGPS. This is not true of service disciplines such as weighted round robin.
4.1 A Virtual Time Implementation of PGPS
In Section 4 we described PGPS but did not provide an efficient way to implement it. In this section we will use the concept of Virtual Time to track the progress of GPS, which will lead to a practical implementation of PGPS. Our interpretation of virtual time is a generalization of the one considered in [4] for uniform processor sharing. In the following we assume that the server works at rate 1.

Denote as an event each arrival and departure from the GPS server, and let tj be the time at which the jth event occurs (simultaneous events are ordered arbitrarily). Let the time of the first arrival of a busy period be denoted as t1 = 0. Now observe that for each j = 2, 3, ..., the set of sessions that are busy in the interval (tj−1, tj) is fixed, and we may denote this set as Bj. Virtual time V(t) is defined to be zero for all times when the server is idle. Consider any busy period, and let the time that it begins be time zero. Then V(t) evolves as follows:

V(0) = 0
V(tj−1 + τ) = V(tj−1) + τ / Σi∈Bj φi,   τ ≤ tj − tj−1,  j = 2, 3, ...   (5)
The rate of change of V, namely ∂V(tj + τ)/∂τ, is 1 / Σi∈Bj φi, and each backlogged session i receives service at rate φi ∂V(tj + τ)/∂τ. Thus, V can be interpreted as increasing at the marginal rate at which backlogged sessions receive service.

Now suppose that the kth session i packet arrives at time a_i^k and has length L_i^k. Then denote the virtual times at which this packet begins and completes service as S_i^k and F_i^k respectively. Defining F_i^0 = 0 for all i, we have

S_i^k = max{F_i^{k−1}, V(a_i^k)}
F_i^k = S_i^k + L_i^k / φi.   (6)
There are three attractive properties of the virtual time interpretation from the standpoint of implementation. First, the virtual time finishing times can be determined at the packet arrival time. Second, the packets are served in order of virtual time finishing time. Finally, we need only update virtual time when there are events in the GPS system. However, the price to be paid for these advantages is some overhead in keeping track of the sets Bj, which is essential in the updating of virtual time:

Define Next(t) to be the real time at which the next packet will depart the GPS system after time t if there are no more arrivals after time t. Thus the next virtual time update after t will be performed at Next(t) if there are no arrivals in the interval [t, Next(t)]. Now suppose a packet arrives at some time, t, and that the time of the event just prior to t is τ (if there is no prior event, i.e. if the packet is the first arrival in a busy period, then set τ = 0). Then, since the set of busy sessions is fixed between events, V(t) may be computed from (5), and the packet stamped with its virtual time finishing time. Next(t) and the new value of B are also computed.

Given this mechanism for updating virtual time, PGPS is defined as follows: When a packet arrives, virtual time is updated and the packet is stamped with its virtual time finishing time. The server is work conserving and serves packets in increasing order of time-stamp.
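
Equations (5) and (6) translate into a compact event-driven sketch. This is our illustration for a rate-1 server (as assumed above), not the authors' code: virtual time advances at slope 1/Σi∈B φi over the currently busy set B, which is maintained with a heap of pending GPS virtual finish times; each arriving packet is stamped with F_i^k = max(F_i^{k−1}, V(a_i^k)) + L_i^k/φi, and the PGPS server would then transmit packets in increasing timestamp order.

```python
import heapq

class VirtualClock:
    """Tracks GPS virtual time V(t) for a rate-1 server (equation (5))
    and stamps arriving packets with virtual finish times (equation (6)).
    Arrivals must be fed in nondecreasing real time."""
    def __init__(self, phi):
        self.phi = phi              # session -> weight phi_i
        self.V = 0.0                # current virtual time
        self.t = 0.0                # real time of the last update
        self.busy = {}              # session -> count of unfinished GPS packets
        self.pending = []           # heap of (virtual finish time, session)
        self.last_F = {i: 0.0 for i in phi}

    def _advance(self, now):
        """Advance V from self.t to real time `now`, stepping over any
        GPS departures (pending virtual finishes reached on the way)."""
        while self.pending:
            slope = 1.0 / sum(self.phi[i] for i in self.busy)
            F_min, i = self.pending[0]
            t_dep = self.t + (F_min - self.V) / slope   # real departure time
            if t_dep > now:
                break
            self.V, self.t = F_min, t_dep
            heapq.heappop(self.pending)                 # packet leaves GPS
            self.busy[i] -= 1
            if self.busy[i] == 0:
                del self.busy[i]                        # session leaves B
        if self.busy:
            self.V += (now - self.t) / sum(self.phi[i] for i in self.busy)
        else:
            self.V = 0.0                                # idle: busy period over,
            self.last_F = {i: 0.0 for i in self.phi}    # reset F_i^0 = 0
        self.t = now

    def stamp(self, now, session, length):
        """Arrival of a `length`-bit session packet at real time `now`;
        returns its virtual finish time (the PGPS timestamp)."""
        self._advance(now)
        S = max(self.last_F[session], self.V)
        F = S + length / self.phi[session]
        self.last_F[session] = F
        self.busy[session] = self.busy.get(session, 0) + 1
        heapq.heappush(self.pending, (F, session))
        return F

vc = VirtualClock({1: 1.0, 2: 1.0})
f2 = vc.stamp(0.0, 2, 3.0)  # long session 2 packet at t=0: F = 0 + 3/1 = 3.0
f1 = vc.stamp(1.0, 1, 1.0)  # session 1 packet at t=1: V(1) = 1, F = 2.0
# Serving in increasing timestamp order picks the session 1 packet ahead
# of any session 2 packet still queued once both are present.
```

The heap plays the role of Next(t): the smallest pending virtual finish determines the next GPS departure event, so V is only updated at events, as described above.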
5 Comparing PGPS to other schemes
Under weighted round robin, every session i has an integer weight wi associated with it. The server polls the sessions according to a precomputed sequence in an attempt to serve session i at a rate of wi / Σj wj. If an empty buffer is encountered, the server moves to the next session in the order instantaneously. When an arriving session i packet just misses its slot in a frame it cannot be transmitted before the next session i slot. If the system is heavily loaded in the sense that almost every slot is utilized, the packet may have to wait almost N slot times to be served, where N is the number of sessions sharing the server. Since PGPS approximates GPS to within one packet transmission time regardless of the arrival patterns, it is immune to such effects. PGPS also handles variable length packets in a much more systematic fashion than does weighted round robin. However, if N or the packet sizes are small then it is possible to approximate GPS well by weighted round robin.
Zhang proposes an interesting scheme called
virtual clock multiplexing [13]. Virtual clock multi-
plexing allows guaranteed rate and (average) delay
for sessions independent of the behavior of other ses-
sions. However, if a session produces a large burst
of data, even while the system is lightly loaded, that
session can be “punished” much later when the other
sessions become active. Under PGPS the delay of a
session i packet can be bounded in terms of the session
i queue size seen by that packet upon arrival, whereas
no such bound is possible under virtual clock mul-
tiplexing because of the punishment feature. Thus,
good worst-case performance can only be guaranteed
under virtual clock multiplexing under stringent ac-
cess control. Also, the additional flexibility of PGPS
may be useful in an integrated services network.
Stop-and-Go Queueing is proposed by Golestani in [5, 6, 7], and is based on a network-wide time slot structure. A finite number of connection types are defined, where a type g connection is characterized by a fixed frame size of Tg. Each session i is assigned a connection type g. The admission policy under which delay and buffer size guarantees can be made is that no more than ri·Tg bits may be submitted during any type g frame. Thus bandwidth is allocated by peak rates rather than average rates. While this is a more restrictive admission policy than leaky bucket (as we shall see in Section 6), it allows for tight control of jitter in the network. The service discipline is not work-conserving, but is designed to preserve the smoothness properties of the admitted traffic. It has the advantage of being very amenable to analysis. PGPS uses the links more efficiently and flexibly and can provide comparable worst-case end-to-end delay bounds. Since it is work-conserving, PGPS will also provide better average delay than stop-and-go for a

Citations
More filters
01 Jun 1994
TL;DR: This memo discusses a proposed extension to the Internet architecture and protocols to provide integrated services, i.e., to support real- time as well as the current non-real-time service of IP.
Abstract: This memo discusses a proposed extension to the Internet architecture and protocols to provide integrated services, i.e., to support real- time as well as the current non-real-time service of IP. This extension is necessary to meet the growing need for real-time service for a variety of new applications, including teleconferencing, remote seminars, telescience, and distributed simulation.

3,114 citations

Journal ArticleDOI
TL;DR: This paper first examines the basic problem of QoS routing, namely, finding a path that satisfies multiple constraints, and its implications on routing metric selection, and presents three path computation algorithms for source routing and for hop-by-hop routing.
Abstract: Several new architectures have been developed for supporting multimedia applications such as digital video and audio. However, quality-of-service (QoS) routing is an important element that is still missing from these architectures. In this paper, we consider a number of issues in QoS routing. We first examine the basic problem of QoS routing, namely, finding a path that satisfies multiple constraints, and its implications on routing metric selection, and then present three path computation algorithms for source routing and for hop-by-hop routing.

1,769 citations

Book
06 Jul 2001
TL;DR: The application of Network Calculus to the Internet and basic Min-plus and Max-plus Calculus and Optimal Multimedia Smoothing and Adaptive and Packet Scale Rate Guarantees are studied.
Abstract: Network Calculus.- Application of Network Calculus to the Internet.- Basic Min-plus and Max-plus Calculus.- Min-plus and Max-plus System Theory.- Optimal Multimedia Smoothing.- FIFO Systems and Aggregate Scheduling.- Adaptive and Packet Scale Rate Guarantees.- Time Varying Shapers.- Systems with Losses.

1,666 citations


Cites background from "A generalized processor sharing app..."

  • ...2.1 GPS and Guaranteed Rate Nodes; 2.1.1 Packet Scheduling; 2.1.2 GPS and a Practical Implementation (PGPS); 2.1.3 Guaranteed Rate (GR) Nodes and the Max-Plus Approach....

  • ...In contrast, for PGPS [57] PGPS WFQ, while PGPS is linear in the number of queues in the scheduler....

  • ...It is shown in [57] that a better service curve can be obtained for every flow if we know some arrival curve properties for all flows; however the simple property is sufficient to understand the integrated service model....

Journal ArticleDOI
TL;DR: This paper describes a new approximation of fair queuing that achieves nearly perfect fairness in terms of throughput, requires only O(1) work to process a packet, and is simple enough to implement in hardware.
Abstract: Fair queuing is a technique that allows each flow passing through a network device to have a fair share of network resources. Previous schemes for fair queuing that achieved nearly perfect fairness were expensive to implement; specifically, the work required to process a packet in these schemes was O(log(n)), where n is the number of active flows. This is expensive at high speeds. On the other hand, cheaper approximations of fair queuing reported in the literature exhibit unfair behavior. In this paper, we describe a new approximation of fair queuing, that we call deficit round-robin. Our scheme achieves nearly perfect fairness in terms of throughput, requires only O(1) work to process a packet, and is simple enough to implement in hardware. Deficit round-robin is also applicable to other scheduling problems where servicing cannot be broken up into smaller units (such as load balancing) and to distributed queues.
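
The deficit round-robin idea described in this abstract can be sketched as follows: each active flow banks a fixed quantum of credit per round and pays for the packets it sends, which keeps per-packet work at O(1). Flow names, quantum, and packet sizes are illustrative.

```python
from collections import deque

# Sketch of deficit round-robin: per round, each backlogged flow's deficit
# counter grows by `quantum`, and the flow sends head-of-line packets as
# long as the counter covers them. Idle flows do not bank credit.

def drr_schedule(flows, quantum, rounds):
    """flows: dict name -> deque of packet sizes. Returns send order
    as (flow, packet size) pairs over the given number of rounds."""
    deficit = {f: 0 for f in flows}
    sent = []
    for _ in range(rounds):
        for f, q in flows.items():
            if not q:
                deficit[f] = 0               # empty flows forfeit credit
                continue
            deficit[f] += quantum
            while q and q[0] <= deficit[f]:  # send while credit covers packet
                deficit[f] -= q[0]
                sent.append((f, q.popleft()))
    return sent

order = drr_schedule({"a": deque([300, 300]), "b": deque([500])},
                     quantum=500, rounds=2)
```

Flow "a" can afford only one 300-byte packet in round one (deficit 500 - 300 = 200), so its second packet waits for the next round's quantum, interleaving service with flow "b".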

1,589 citations

Journal ArticleDOI
TL;DR: In this paper, the authors have developed abstractions to characterize the complex time varying nature of such sources with analytically tractable models and use them to address key design issues.
Abstract: Power management is an important concern in sensor networks, because a tethered energy infrastructure is usually not available and an obvious concern is to use the available battery energy efficiently. However, in some of the sensor networking applications, an additional facility is available to ameliorate the energy problem: harvesting energy from the environment. Certain considerations in using an energy harvesting source are fundamentally different from that in using a battery, because, rather than a limit on the maximum energy, it has a limit on the maximum rate at which the energy can be used. Further, the harvested energy availability typically varies with time in a nondeterministic manner. While a deterministic metric, such as residual battery, suffices to characterize the energy availability in the case of batteries, a more sophisticated characterization may be required for a harvesting source. Another issue that becomes important in networked systems with multiple harvesting nodes is that different nodes may have different harvesting opportunity. In a distributed application, the same end-user performance may be achieved using different workload allocations, and resultant energy consumptions at multiple nodes. In this case, it is important to align the workload allocation with the energy availability at the harvesting nodes. We consider the above issues in power management for energy-harvesting sensor networks. We develop abstractions to characterize the complex time varying nature of such sources with analytically tractable models and use them to address key design issues. We also develop distributed methods to efficiently use harvested energy and test these both in simulation and experimentally on an energy-harvesting sensor network, prototyped for this work.

1,535 citations


Cites methods from "A generalized processor sharing app..."

  • ...We define the following model which is motivated by leaky bucket Internet traffic models [Cruz 1991a; Parekh and Gallager 1993]....

  • ...One approach to modeling bursty sources is given by the (r, b) token bucket traffic regulator [Parekh and Gallager 1993; Parekh 1992; Cruz 1991a; Cruz 1991b] used to model bursty traffic for QoS in the Internet....

References
Book
01 Jan 1987

6,991 citations

Journal ArticleDOI
TL;DR: A fair gateway queueing algorithm, based on an earlier suggestion by Nagle, is proposed to control congestion in datagram networks.
Abstract: We discuss gateway queueing algorithms and their role in controlling congestion in datagram networks. A fair queueing algorithm, based on an earlier suggestion by Nagle, is proposed. Analysis and simulations are used to compare this algorithm to other congestion control schemes.

2,639 citations

Proceedings ArticleDOI
01 Aug 1989
TL;DR: It is found that fair queueing provides several important advantages over the usual first-come-first-serve queueing algorithm: fair allocation of bandwidth, lower delay for sources using less than their full share of bandwidth and protection from ill-behaved sources.
Abstract: We discuss gateway queueing algorithms and their role in controlling congestion in datagram networks. A fair queueing algorithm, based on an earlier suggestion by Nagle, is proposed. Analysis and simulations are used to compare this algorithm to other congestion control schemes. We find that fair queueing provides several important advantages over the usual first-come-first-serve queueing algorithm: fair allocation of bandwidth, lower delay for sources using less than their full share of bandwidth, and protection from ill-behaved sources.

2,480 citations


"A generalized processor sharing app..." refers background in this paper

  • ...The service discipline is based on Generalized Processor Sharing (GPS) and was first suggested in [3] in the context of managing congestion at gateway nodes....


Journal ArticleDOI
TL;DR: A calculus is developed for obtaining bounds on delay and buffering requirements in a communication network operating in a packet switched mode under a fixed routing strategy, and burstiness constraints satisfied by the traffic that exits the element are derived.
Abstract: A calculus is developed for obtaining bounds on delay and buffering requirements in a communication network operating in a packet switched mode under a fixed routing strategy. The theory developed is different from traditional approaches to analyzing delay because the model used to describe the entry of data into the network is nonprobabilistic. It is supposed that the data stream entered into the network by any given user satisfies burstiness constraints. A data stream is said to satisfy a burstiness constraint if the quantity of data from the stream contained in any interval of time is less than a value that depends on the length of the interval. Several network elements are defined that can be used as building blocks to model a wide variety of communication networks. Each type of network element is analyzed by assuming that the traffic entering it satisfies bursting constraints. Under this assumption, bounds are obtained on delay and buffering requirements for the network element; burstiness constraints satisfied by the traffic that exits the element are derived. >

2,049 citations


"A generalized processor sharing app..." refers methods in this paper

  • ...The constraint (3) is identical to the one suggested by Cruz [1]....


Journal ArticleDOI
TL;DR: A method to analyze the flow of data in a network consisting of the interconnection of network elements is presented and it is shown how regulator elements connected in series can be used to enforce general burstiness constraints.
Abstract: For pt.I see ibid., vol.37, no.1, p.114-31 (1991). A method to analyze the flow of data in a network consisting of the interconnection of network elements is presented. Assuming the data that enters the network satisfies burstiness constraints, burstiness constraints are derived for traffic flowing between network elements. These derived constraints imply bounds on network delay and buffering requirements. By example, it is shown that the use of regulator elements within the network can reduce maximum network delay. It is also found that such a use of regulator elements can enlarge the throughput region where finite bounds for delay are found. Finally, it is shown how regulator elements connected in series can be used to enforce general burstiness constraints. >

1,007 citations


"A generalized processor sharing app..." refers methods in this paper

  • ...This phenomenon has been noticed by researchers from fields as diverse as manufacturing systems [9, 5], communication systems [2] and VLSI circuit simulation [4]....


  • ...Under the Additive Method due to [2], we add the worst case bounds on delay (backlog) for session i at each of the nodes m ∈ P (i) considered in isolation....


  • ...Consider the four node example in Figure 1 (which is identical to Example 2 of Cruz [2])....


Frequently Asked Questions (9)
Q1. What have the authors contributed in "A generalized processor sharing approach to flow control in integrated services networks—the single node case" ?

The authors propose a highly flexible and efficient multiplexing scheme called Generalized Processor Sharing (GPS) that allows the network to make worst-case performance guarantees. Extensions of this work to arbitrary topology networks are also discussed.

The admission policy under which delay and buffer size guarantees can be made is that no more than r_i T_g bits may be submitted during any type g frame.

For the active sessions the authors have φ_1, ..., φ_N, and wish to assign φ_{N+1} so that the new session can be accommodated without violating any of the existing guarantees on throughput and delay.

In addition to securing the required number of tokens, the traffic is further constrained to leave the bucket at a maximum rate of C > ρ. 
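
A minimal sketch of the token-count part of that leaky bucket constraint, assuming packetized arrivals: over any interval of length d the session may submit at most σ + ρ·d bits. The peak-rate cap C > ρ applies to the fluid departure process and is omitted here; all names are illustrative.

```python
# Sketch of (sigma, rho) leaky bucket conformance for a packetized trace:
# for every pair of arrival times t0 <= t1, the bits arriving in [t0, t1]
# must not exceed sigma + rho * (t1 - t0).

def conforms_leaky_bucket(arrivals, sigma, rho, eps=1e-9):
    """arrivals: list of (time, bits) pairs, times nondecreasing."""
    for i, (t0, _) in enumerate(arrivals):
        total = 0.0
        for t1, bits in arrivals[i:]:
            total += bits
            if total > sigma + rho * (t1 - t0) + eps:
                return False
    return True

# A full bucket's burst plus one second of tokens conforms...
ok = conforms_leaky_bucket([(0.0, 5.0), (1.0, 5.0)], sigma=5.0, rho=5.0)
# ...but a single burst larger than the bucket does not.
too_big = conforms_leaky_bucket([(0.0, 6.0)], sigma=5.0, rho=5.0)
```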

If a session produces a large burst of data, even while the system is lightly loaded, that session can be “punished” much later when the other sessions become active.

Integrated services networks must carry a wide range of traffic types and still be able to provide performance guarantees to real-time sessions such as voice and video. 

The PGPS server is forced to begin serving the long session 2 packet at time 0, since there are no other packets in the system at that time.

It is hoped that their results in this paper and in the sequel can form the basis for a highly flexible and efficient rate-based flow control scheme for integrated services networks.