A generalized processor sharing approach to flow control in integrated services networks: the single node case
Summary (4 min read)
1 Introduction
- This paper focuses on a central problem in the control of congestion in high speed integrated services networks.
- Traditionally, the flexibility of data networks has been traded off with the performance guarantees given to the users.
- A major part of their work is to analyze networks of arbitrary topology using these specialized servers, and to show how the analysis leads to implementable schemes for guaranteeing worst-case packet delay.
- An important advantage of using leaky buckets is that this allows one to separate the packet delay into two components–delay in the leaky bucket and delay in the network.
- The first of these components is independent of the other active sessions and can be estimated by the user, if the statistical characterization of the incoming data is sufficiently simple (See Section 6.3 of [1] for an example).
2 An Outline
- Generalized Processor Sharing (GPS) is defined and explained in Section 3.
- The authors propose a virtual time implementation of PGPS in the next subsection.
- Having established PGPS as a desirable service discipline, the authors turn their attention to the rate enforcement function in Section 6.
- The leaky bucket is described and proposed as a desirable strategy for admission control.
- The authors then proceed with an analysis, in Sections 7 and 8, of a single GPS server system in which the sessions are constrained by leaky buckets.
3 GPS Multiplexing
- The choice of an appropriate service discipline at the nodes of the network is key to providing effective flow control.
- This flexibility should not compromise the fairness of the scheme, i.e. a few classes of users should not be able to degrade service to other classes, to the extent that performance guarantees are violated.
- In Section 4 the authors will present a packet-based multiplexing discipline that is an excellent approximation to GPS even when the packets are of variable length.
- Then as long as ρi ≤ gi, the session can be guaranteed a throughput of ρi, independent of the demands of the other sessions.
- When φ1 = φ2 and both sessions are backlogged, they are served at equal rates.
- Figure 3.1 gives an example of generalized processor sharing.
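The GPS rate allocation described above can be sketched in a few lines (a minimal illustration; the function and variable names are ours, not the paper's):

```python
def gps_rates(phi, backlogged, r=1.0):
    """Instantaneous GPS service rates.

    phi:        dict mapping session i to its weight phi_i.
    backlogged: set of sessions that currently have work in the system.
    r:          server rate (the paper normalizes r = 1).

    Each backlogged session i is served at rate
        r * phi_i / (sum of phi_j over backlogged sessions j),
    so session i is always guaranteed at least
        g_i = r * phi_i / (sum of phi_j over ALL sessions j).
    """
    total = sum(phi[j] for j in backlogged)
    return {i: r * phi[i] / total for i in backlogged}

# With phi_1 = phi_2 and both sessions backlogged, each gets half the server:
rates = gps_rates({1: 1.0, 2: 1.0}, backlogged={1, 2})
```

Note that when some sessions are idle, their share is redistributed to the backlogged sessions in proportion to the weights, which is where the scheme's flexibility comes from.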
4 Packet-by-Packet GPS
- A problem with GPS is that it is an idealized discipline that does not transmit packets as entities.
- The next packet to depart under GPS may not have arrived at time τ , and since the server has no knowledge of when this packet will arrive, there is no way for the server to be both work conserving and to serve the packets in increasing order of Fp.
- Now suppose the server becomes free at time τ , i.e. it has just finished transmitting a packet at time τ .
- Let us call this scheme PGPS for packet-by-packet generalized processor sharing.
- Let pk be the kth packet in the busy period to depart under PGPS and let its length be Lk.
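The PGPS rule sketched in the bullets above — a work-conserving server that, whenever it becomes free, picks the packet that would finish first under GPS among the packets already present — can be illustrated as follows (our own sketch, assuming each packet arrives with a precomputed GPS finishing time Fp and the server works at rate 1):

```python
import heapq

def pgps_order(packets):
    """Transmission order under (a sketch of) PGPS.

    packets: list of (arrival_time, F, length), where F is the packet's
             finishing time in the fluid GPS reference system.
    Returns packet indices in transmission order: the server is
    work-conserving, and whenever it picks a new packet it chooses,
    among the packets that have already arrived, the one with the
    smallest GPS finishing time F.
    """
    idx = sorted(range(len(packets)), key=lambda k: packets[k][0])
    ready, order, t, j = [], [], 0.0, 0
    while len(order) < len(packets):
        # admit everything that has arrived by the current time t
        while j < len(idx) and packets[idx[j]][0] <= t:
            k = idx[j]
            heapq.heappush(ready, (packets[k][1], k))
            j += 1
        if not ready:                 # work-conserving: idle only if empty
            t = packets[idx[j]][0]
            continue
        _, k = heapq.heappop(ready)
        order.append(k)
        t += packets[k][2]            # serve the whole packet at rate 1
    return order
```

This also illustrates why PGPS can only approximate GPS: a long packet that arrives first must be transmitted to completion even if a packet with a smaller F arrives a moment later.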
4.1 A Virtual Time Implementation of PGPS
- In Section 4 the authors described PGPS but did not provide an efficient way to implement it.
- In this section the authors will use the concept of Virtual Time to track the progress of GPS that will lead to a practical implementation of PGPS.
- The authors' interpretation of virtual time is a generalization of the one considered in [4] for uniform processor sharing.
- In the following the authors assume that the server works at rate 1.
- First, the virtual time finishing times can be determined at the packet arrival time.
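The key point in the last bullet — that a packet's virtual finishing time is fixed at its arrival — can be shown with the standard tag update (a sketch; the argument order and names are ours):

```python
def finish_tag(F_prev, V_arrival, length, phi):
    """Virtual-time finishing tag for the next session-i packet.

    F_prev:    virtual finishing time of session i's previous packet
               (0 if this is the session's first packet of the busy period).
    V_arrival: virtual time V(t) at the packet's arrival instant.
    length:    packet length L.
    phi:       session weight phi_i.

    The tag is max(F_prev, V_arrival) + L / phi_i, and PGPS transmits
    packets in increasing order of tag.  V(t) itself advances at rate
    1 / (sum of phi_j over backlogged sessions j), which is why every
    quantity in the tag is known at the packet's arrival time.
    """
    return max(F_prev, V_arrival) + length / phi
```

For example, a packet of length 100 arriving to an empty session with weight 0.5 at virtual time 0 gets tag 200.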
5 Comparing PGPS to other schemes
- Under weighted round robin, every session i, has an integer weight, wi associated with it.
- If the system is heavily loaded in the sense that almost every slot is utilized, the packet may have to wait almost N slot times to be served, where N is the number of sessions sharing the server.
- Zhang proposes an interesting scheme called virtual clock multiplexing [13].
- PGPS uses the links more efficiently and flexibly and can provide comparable worst-case end-to-end delay bounds.
- Stop-and-go queueing may provide significantly better bounds on jitter.
6 Leaky Bucket
- Tokens or permits are generated at a fixed rate, ρ, and packets can be released into the network only after removing the required number of tokens from the token bucket.
- There is no bound on the number of packets that can be buffered, but the token bucket contains at most σ bits worth of tokens.
- This model for incoming traffic is essentially identical to the one recently proposed by Cruz [2], [3], and it has also been used in various forms to represent the inflow of parts into manufacturing systems by Kumar [9].
- The arrival constraint is attractive since it restricts the traffic in terms of average rate (ρ), peak rate (C), and burstiness (σ).
- The authors assume that the session starts out with a full bucket of tokens.
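The (σ, ρ, C) constraint described above says that over any interval [s, t] the traffic is bounded by both the peak rate and the bucket: A(t) − A(s) ≤ min(C·(t − s), σ + ρ·(t − s)). A small conformance check (our own illustration, testing only the sampled points of a cumulative arrival function):

```python
def conforms(arrivals, sigma, rho, C):
    """Check leaky-bucket conformance of a cumulative arrival function.

    arrivals: list of (time, cumulative_bits) pairs, nondecreasing in both.
    A flow obeying the (sigma, rho, C) constraint satisfies, for all s <= t,
        A(t) - A(s) <= min(C * (t - s), sigma + rho * (t - s)):
    bursts are bounded by the bucket depth sigma, the long-run rate by the
    token rate rho, and the instantaneous rate by the peak rate C > rho.
    """
    for si, (s, A_s) in enumerate(arrivals):
        for t, A_t in arrivals[si:]:
            # small epsilon to tolerate floating-point equality at the bound
            if A_t - A_s > min(C * (t - s), sigma + rho * (t - s)) + 1e-9:
                return False
    return True
```

With a full bucket at time 0 (as the authors assume), the flow may emit a burst of up to σ bits immediately, but its long-run rate can never exceed ρ.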
7 Analysis
- The session traffic is constrained as in (7).
- The server is work conserving (i.e. it is never idle if there is work in the system), and operates at the fixed rate of 1.
- The session i delay at time τ is denoted by Di(τ), and is the amount of time that session i flow arriving at time τ spends in the system before departing.
- The authors are interested in computing the maximum delay over all time, and over all arrival functions that are consistent with (7).
- Similarly, the authors define the maximum backlog for session i, Qi∗ = maxτ maxA Qi(τ), where the maximum is over all times τ and all arrival functions A consistent with (7).
- They also characterize the session i output of the server in terms of additional parameters, so that Si ∼ (σi^out, ρi^out, Ci^out).
7.1 Preliminaries
- Thus σi^τ is the sum of the number of tokens left in the bucket and the session i backlog at the server at time τ.
- Since session delay is bounded by the length of the largest possible system busy period, the session delays are bounded as well.
- Since the system is stable, ρouti = ρi, and σouti is bounded for each session i.
- Since the system is stable, any session i backlog must be cleared.
- Then the amount served by this time must be σi + ρit, the maximum amount of session i traffic that can arrive in an interval of length t.
7.2 Greedy Sessions
- Thus it takes li^τ / (Ci − ρi) time units to deplete the tokens in the bucket.
- After this, the rate will be limited by the token arrival rate, ρi.
- Figure 7.2 depicts a session i arrival function that is greedy from time τ.
- Under generalized processor sharing, for every session i: Di∗, Qi∗, and σi^out are achieved (not necessarily at the same time) when every session is greedy starting at time zero.
- This is an intuitively pleasing and satisfying result.
- It seems reasonable that if a session sends as much traffic as possible at all times, it is going to impede the progress of packets arriving from the other sessions.
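The greedy arrival function from Section 7.2 — send at the peak rate C until the bucket (holding li^τ tokens at time τ) empties, then at the token rate ρi — can be written out directly (a sketch with our own names; the session index is dropped):

```python
def greedy_increment(l, rho, C, t):
    """Traffic sent by a session greedy from time tau over the next t units.

    l:   tokens in the bucket at time tau.
    rho: token arrival rate.
    C:   peak rate, C > rho.

    While sending at rate C the bucket drains at net rate C - rho
    (tokens are consumed at C and replenished at rho), so it empties
    after l / (C - rho) time units; afterwards the session is limited
    to the token rate rho.  Equivalently: min(C * t, l + rho * t).
    """
    t_empty = l / (C - rho)
    if t <= t_empty:
        return C * t
    return C * t_empty + rho * (t - t_empty)
```

This is exactly the two-slope arrival function depicted in Figure 7.2: slope C up to the depletion time, slope ρ afterwards.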
7.3 An All-greedy GPS system
- Theorem 3 suggests that in order to compute Di∗, Qi∗, and σi^out, the authors should examine the dynamics of a system in which all the sessions are greedy starting at time 0, the beginning of a system busy period.
- Since the system busy period is finite the authors can label the sessions in the order in which their first individual busy periods are terminated.
- To simplify the presentation, the authors will assume that Ci ≥ 1 for all i—the general case is dealt with in [11].
- Figure 7.4 depicts the dynamics of an all-greedy GPS system.
- It is interesting to note that the universal service curve S(0, t) is identical to the virtual time function, V(t), defined in (5).
9 Picking the φ’s
- Every session i is characterized by σi, ρi, Ci, di where di is the worst case packet delay that can be tolerated by session i.
- The following is one possible approach that could be used.
- Note that any choice of φN+1 ∈ [φminN+1, φmaxN+1] will meet worst case delay guarantees.
- Picking either extreme point is not advisable, since then no more sessions could be accepted after session N + 1.
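Since any weight in the feasible interval meets the worst-case delay guarantees, the remaining question is which interior point to pick. A trivial heuristic (our own illustration, not the paper's recommendation) is the midpoint, which leaves slack on both sides for future admissions:

```python
def pick_phi(phi_min, phi_max):
    """Pick a weight for session N+1 inside the feasible interval.

    Any value in [phi_min, phi_max] meets the worst-case delay guarantee
    for session N+1 without violating the existing guarantees, but the
    endpoints leave no slack for admitting further sessions.  The
    midpoint is one simple interior choice.
    """
    assert phi_min <= phi_max, "empty feasible interval: reject the session"
    return (phi_min + phi_max) / 2
```

If phi_min > phi_max, the interval is empty and session N + 1 cannot be admitted with its requested delay bound.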
10 Conclusions
- The authors presented a fair, flexible and efficient multiplexing scheme called Generalized Processor Sharing that appears to be appropriate for integrated services networks.
- The authors analyzed the GPS multiplexer when the sources are constrained by leaky buckets, and presented an efficient algorithm to determine worst case delays for a given single server GPS system.
- A method to add new users to the system was also discussed.
- Elsewhere [10], the authors have extended this work to PGPS networks of arbitrary topologies.
- It is hoped that their results in this paper and in the sequel can form the basis for a highly flexible and efficient rate-based flow control scheme for integrated services networks.
Frequently Asked Questions (9)
Q2. What are the contributions in "A generalized processor sharing approach to flow control in integrated services networks: the single node case"?
The authors propose a highly flexible and efficient multiplexing scheme called Generalized Processor Sharing (GPS) that allows the network to make worst-case performance guarantees. Extensions of this work to arbitrary topology networks are also discussed.
Q3. What is the admission policy for a g frame?
The admission policy under which delay and buffer size guarantees can be made is that no more than riTg bits may be submitted during any type g frame.
Q4. What is the way to assign a session to a server?
For the active sessions the authors have φ1, ..., φN , and wish to assign φN+1 so that the new session can be accommodated without violating any of the existing guarantees on throughput and delay.
Q5. How is the traffic constrained to leave the bucket?
In addition to securing the required number of tokens, the traffic is further constrained to leave the bucket at a maximum rate of C > ρ.
Q6. What is the effect of a PGPS session on the system?
If a session produces a large burst of data, even while the system is lightly loaded, that session can be "punished" much later when the other sessions become active.
Q7. What is the main problem in the control of congestion in high speed integrated services networks?
Integrated services networks must carry a wide range of traffic types and still be able to provide performance guarantees to real-time sessions such as voice and video.
Q8. What is the PGPS server forced to serve?
The PGPS server is forced to begin serving the long session 2 packet at time 0, since there are no other packets in the system at that time.
Q9. What is the purpose of this paper?
It is hoped that their results in this paper and in the sequel can form the basis for a highly flexible and efficient rate-based flow control scheme for integrated services networks.