Journal ArticleDOI

Dimensioning bandwidth for elastic traffic in high-speed data networks

01 Oct 2000-IEEE ACM Transactions on Networking (IEEE Press)-Vol. 8, Iss: 5, pp 643-654
TL;DR: The model is compared with simulations, the accuracy of the asymptotic approximations is examined, the increase in bandwidth needed to satisfy the tail-probability performance objective (as compared with the mean objective) is quantified, and regimes where statistical gain can and cannot be realized are identified.
Abstract: Simple and robust engineering rules for dimensioning bandwidth for elastic data traffic are derived for a single bottleneck link via normal approximations for a closed-queueing network (CQN) model in heavy traffic. Elastic data applications adapt to available bandwidth via a feedback control such as the transmission control protocol (TCP) or the available bit rate transfer capability in asynchronous transfer mode. The dimensioning rules satisfy a performance objective based on the mean or tail probability of the per-flow bandwidth. For the mean objective, we obtain a simple expression for the effective bandwidth of an elastic source. We provide a new derivation of the normal approximation in CQNs using more accurate asymptotic expansions and give an explicit estimate of the error in the normal approximation. A CQN model was chosen to obtain the desirable property that the results depend on the distribution of the file sizes only via the mean, and not the heavy-tail characteristics. We view the exogenous "load" in terms of the file sizes and consider the resulting flow of packets as dependent on the presence of other flows and the closed-loop controls. We compare the model with simulations, examine the accuracy of the asymptotic approximations, quantify the increase in bandwidth needed to satisfy the tail-probability performance objective as compared with the mean objective, and show regimes where statistical gain can and cannot be realized.
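The paper's rules come from asymptotic expansions for the CQN model in heavy traffic; purely as an illustration of the tail-probability dimensioning idea, the sketch below provisions capacity so that a normally approximated number of concurrent flows each receive a target per-flow bandwidth except with a small tail probability. The function name and the specific formula are mine, not the paper's.

```python
from statistics import NormalDist

def required_capacity(mean_flows, std_flows, per_flow_target, tail_prob):
    """Illustrative dimensioning sketch, not the paper's exact expressions:
    treat the number of concurrent flows N as Normal(mean_flows, std_flows)
    (the heavy-traffic normal approximation) and provision enough capacity
    that every flow gets at least per_flow_target, except with probability
    tail_prob."""
    z = NormalDist().inv_cdf(1.0 - tail_prob)   # standard normal quantile
    n_quantile = mean_flows + z * std_flows     # N exceeds this w.p. tail_prob
    return per_flow_target * n_quantile         # capacity, same units as target

# e.g. 100 concurrent flows on average (std 10), 2 Mbit/s per flow,
# 1% tail-probability objective:
capacity = required_capacity(100.0, 10.0, 2.0, 0.01)   # roughly 246.5 Mbit/s
```

Note how the headroom term z * std_flows is what separates the tail-probability objective from the mean objective (z = 0), which is the gap the paper quantifies.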
Citations
Proceedings ArticleDOI
S. Ben Fred, Thomas Bonald, Alexandre Proutiere, G. Régnié, James Roberts
27 Aug 2001
TL;DR: The statistics of the realized throughput of elastic document transfers are studied, accounting for the way network bandwidth is shared dynamically between the randomly varying number of concurrent flows.
Abstract: In this paper we study the statistics of the realized throughput of elastic document transfers, accounting for the way network bandwidth is shared dynamically between the randomly varying number of concurrent flows. We first discuss the way TCP realizes statistical bandwidth sharing, illustrating essential properties by means of packet level simulations. Mathematical flow level models based on the theory of stochastic networks are then proposed to explain the observed behavior. A notable result is that first order performance (e.g., mean throughput) is insensitive with respect both to the flow size distribution and the flow arrival process, as long as "sessions" arrive according to a Poisson process. Perceived performance is shown to depend most significantly on whether demand at flow level is less than or greater than available capacity. The models provide a key to understanding the effectiveness of techniques for congestion management and service differentiation.
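The notable insensitivity result quoted above is first-order: mean throughput depends on the flow size distribution only through the offered load. A minimal sketch of that standard processor-sharing formula (the function name is mine; this is the textbook M/G/1-PS result, not code from the paper):

```python
def ps_mean_throughput(capacity, offered_load):
    """First-order (mean) per-flow throughput on a processor-sharing link.
    Standard M/G/1-PS formula: only rho = offered_load / capacity matters,
    not the flow size distribution or the flow arrival pattern within
    Poisson sessions. Valid only while demand is below capacity."""
    rho = offered_load / capacity
    if rho >= 1.0:
        raise ValueError("overload: demand at flow level exceeds capacity")
    return capacity * (1.0 - rho)

# At 75% load on a 100 Mbit/s link:
ps_mean_throughput(100.0, 75.0)   # -> 25.0
```

The overload guard mirrors the abstract's point that perceived performance hinges on whether flow-level demand is less than or greater than available capacity.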

424 citations


Cites background or methods from "Dimensioning bandwidth for elastic ..."

  • ...This is the assumption in the studies of statistical bandwidth sharing by Heyman et al [12] and Berger and Kogan [4]....


  • ...Berger and Kogan [4] have further explored this model in an asymptotic heavy traffic regime....


Journal ArticleDOI
TL;DR: This paper compares the performance of three usual allocations, namely max-min fairness, proportional fairness, and balanced fairness, in a communication network whose resources are shared by a random number of data flows, and shows that the underlying model of processor-sharing queues is representative of a rich class of wired and wireless networks.
Abstract: We compare the performance of three usual allocations, namely max-min fairness, proportional fairness and balanced fairness, in a communication network whose resources are shared by a random number of data flows. The model consists of a network of processor-sharing queues. The vector of service rates, which is constrained by some compact, convex capacity set representing the network resources, is a function of the number of customers in each queue. This function determines the way network resources are allocated. We show that this model is representative of a rich class of wired and wireless networks. We give in this general framework the stability condition of max-min fairness, proportional fairness and balanced fairness and compare their performance on a number of toy networks.
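Max-min fairness, the first of the three allocations, can be computed by progressive filling: raise all flow rates at the same speed and freeze a flow once any link on its route saturates. A small illustrative sketch under my own encoding of links and routes (not code from the paper):

```python
def max_min_fair(capacities, routes):
    """Progressive-filling sketch of max-min fair allocation.
    capacities[l] is link l's capacity; routes[i] is the set of link
    indices used by flow i. Illustrative only."""
    n = len(routes)
    alloc = [0.0] * n
    frozen = [False] * n
    cap = list(capacities)
    while not all(frozen):
        # links still carrying at least one unfrozen flow
        active = {l for i, r in enumerate(routes) if not frozen[i] for l in r}
        # raise all unfrozen flows equally until the tightest link saturates
        inc = min(cap[l] / sum(1 for i, r in enumerate(routes)
                               if not frozen[i] and l in r)
                  for l in active)
        for i in range(n):
            if not frozen[i]:
                alloc[i] += inc
        for l in active:
            cap[l] -= inc * sum(1 for i, r in enumerate(routes)
                                if not frozen[i] and l in r)
        # freeze every flow that crosses a saturated link
        for i, r in enumerate(routes):
            if not frozen[i] and any(cap[l] < 1e-9 for l in r):
                frozen[i] = True
    return alloc

# Classic two-link example: flow 0 crosses both unit-capacity links,
# flows 1 and 2 use one link each:
max_min_fair([1.0, 1.0], [{0, 1}, {0}, {1}])   # -> [0.5, 0.5, 0.5]
```

Proportional and balanced fairness require solving an optimization or recursion rather than water-filling, which is part of why the paper compares the three on toy networks.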

261 citations


Cites methods from "Dimensioning bandwidth for elastic ..."

  • ...Practical dimensioning rules were developed on this basis by Berger and Kogan [4]....


Patent
Lili Qiu, Paramvir Bahl, Atul Adya
13 May 2002
TL;DR: In this article, an improved method and system for optimizing the allocation of bandwidth within a network system is presented, where an access point measures the throughput of the connection between the client device and the network.
Abstract: An improved method and system for optimizing the allocation of bandwidth within a network system is presented. When a client device is engaged in communication with a remote computing device, an access point measures the throughput of the connection between the client device and the network. If the throughput is less than the amount of bandwidth reserved for usage by the client device, the access point adjusts the amount of bandwidth allocated for the client device to an amount equivalent to the measured throughput multiplied by an error variance factor. This process is then repeated periodically for the duration of the communication between the client device and the remote computing device in order to continually adapt the bandwidth allocation of the client device. Optionally, the method and system can also be deployed in client devices instead of the access point.
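The adjustment step the abstract describes can be sketched in a few lines (identifier names and the default factor are illustrative, not taken from the patent):

```python
def adapt_allocation(reserved_bw, measured_throughput, error_factor=1.1):
    """One adjustment step as the abstract describes it: when measured
    throughput falls below the reserved bandwidth, shrink the allocation
    to the measured throughput times an error variance factor; otherwise
    leave the reservation unchanged. Names and default are illustrative."""
    if measured_throughput < reserved_bw:
        return measured_throughput * error_factor
    return reserved_bw
```

Per the abstract, the access point would repeat this step periodically for the duration of the connection to keep the allocation tracking actual usage.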

145 citations

Journal ArticleDOI
James Roberts
TL;DR: A survey of recent results on the performance of a network handling elastic data traffic, under the assumption that flows are generated as a random process; it highlights insensitivity results that allow a relatively simple expression of performance when bandwidth sharing realizes so-called "balanced fairness".

104 citations


Cites background from "Dimensioning bandwidth for elastic ..."

  • ...Berger and Kogan derive approximations valid when the link is always saturated [8]....


Proceedings ArticleDOI
07 Nov 2002
TL;DR: This paper proposes two models for calculating the average bandwidth shares obtained by TCP-controlled finite file transfers that arrive randomly and share a single (bottleneck) link; one of the two is a simple modification of the PS model that accounts for large propagation delays.
Abstract: This paper is about analytical models for calculating the average bandwidth shares obtained by TCP controlled finite file transfers that arrive randomly and share a single (bottleneck) link. Owing to the complex nature of the TCP congestion control algorithm, a single model does not work well for all combinations of network parameters (i.e., mean file size, link capacity, and propagation delay). We propose two models, develop their analyses, and identify the regions of their applicability. One model is obtained from a detailed analysis of TCP's AIMD adaptive window mechanism; the analysis accounts for session arrivals and departures, and finite link buffers. It is essentially a processor sharing (PS) model with time varying service rate; hence we call it TCP-PS. The other model is a simple modification of the PS model that accounts for large propagation delays; we call this model rate limited-PS (RL-PS). The TCP-PS model analysis accommodates a general file size distribution by approximating it with a mixture of exponentials. The RL-PS model can be used for general file size distributions. We show that the TCP-PS model converges to the standard PS model as the propagation delay approaches zero. We also observe that the PS model provides very poor estimates of throughput unless the propagation delay is very small. We observe that the key parameters affecting the throughput are the bandwidth delay product (BDP), file size distribution, the link buffer and the traffic intensity. Several numerical comparisons between analytical and simulation results are provided. We observe that the TCP-PS model is accurate when the BDP is small compared to the mean file size, and the RL-PS model works well when the BDP is large compared to the mean file size.
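As an illustrative caricature of the RL-PS idea (my simplification, not the paper's actual analysis), per-flow throughput can be viewed as the processor-sharing share capped by a window-limited rate:

```python
def rl_ps_throughput(capacity, offered_load, max_window_bits, rtt):
    """Caricature of rate-limited PS (RL-PS), not the paper's analysis:
    per-flow throughput is the plain processor-sharing share capped by
    the window-limited rate max_window_bits / rtt, i.e. what a fixed
    window can sustain over a large propagation delay."""
    rho = offered_load / capacity
    ps_share = capacity * (1.0 - rho)      # plain PS mean throughput
    window_rate = max_window_bits / rtt    # rate ceiling from the window
    return min(ps_share, window_rate)
```

In this reading, as the round-trip time shrinks the window cap vanishes and the plain PS share is recovered, consistent with the convergence to the standard PS model stated in the abstract.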

74 citations

References
Book
01 Jan 1987
TL;DR: Undergraduate and graduate classes in computer networks and wireless communications; undergraduate classes in discrete mathematics, data structures, operating systems and programming languages.
Abstract: Undergraduate and graduate classes in computer networks and wireless communications; undergraduate classes in discrete mathematics, data structures, operating systems and programming languages. Also give lectures to both undergraduate-and graduate-level network classes and mentor undergraduate and graduate students for class projects.

6,991 citations

Book
01 Jan 1984
TL;DR: In this book, the authors develop the theory of random perturbations of dynamical systems, from small perturbations on a finite time interval through action functionals and Markov perturbations on large time intervals to the averaging principle.
Abstract: Contents: 1. Random Perturbations. 2. Small Random Perturbations on a Finite Time Interval. 3. Action Functional. 4. Gaussian Perturbations of Dynamical Systems: Neighborhood of an Equilibrium Point. 5. Perturbations Leading to Markov Processes. 6. Markov Perturbations on Large Time Intervals. 7. The Averaging Principle: Fluctuations in Dynamical Systems with Averaging. 8. Random Perturbations of Hamiltonian Systems. 9. The Multidimensional Case. 10. Stability Under Random Perturbations. 11. Sharpenings and Generalizations. References. Index.

4,070 citations

Journal ArticleDOI
TL;DR: It is found that user-initiated TCP session arrivals, such as remote-login and file-transfer, are well-modeled as Poisson processes with fixed hourly rates, but that other connection arrivals deviate considerably from Poisson.
Abstract: Network arrivals are often modeled as Poisson processes for analytic simplicity, even though a number of traffic studies have shown that packet interarrivals are not exponentially distributed. We evaluate 24 wide area traces, investigating a number of wide area TCP arrival processes (session and connection arrivals, FTP data connection arrivals within FTP sessions, and TELNET packet arrivals) to determine the error introduced by modeling them using Poisson processes. We find that user-initiated TCP session arrivals, such as remote-login and file-transfer, are well-modeled as Poisson processes with fixed hourly rates, but that other connection arrivals deviate considerably from Poisson; that modeling TELNET packet interarrivals as exponential grievously underestimates the burstiness of TELNET traffic, but using the empirical Tcplib interarrivals preserves burstiness over many time scales; and that FTP data connection arrivals within FTP sessions come bunched into "connection bursts", the largest of which are so large that they completely dominate FTP data traffic. Finally, we offer some results regarding how our findings relate to the possible self-similarity of wide area traffic.

3,915 citations

Journal ArticleDOI
TL;DR: In this article, the authors provide a plausible physical explanation for the occurrence of self-similarity in local-area network (LAN) traffic, based on convergence results for processes that exhibit high variability and is supported by detailed statistical analyzes of real-time traffic measurements from Ethernet LANs at the level of individual sources.
Abstract: A number of empirical studies of traffic measurements from a variety of working packet networks have demonstrated that actual network traffic is self-similar or long-range dependent in nature-in sharp contrast to commonly made traffic modeling assumptions. We provide a plausible physical explanation for the occurrence of self-similarity in local-area network (LAN) traffic. Our explanation is based on convergence results for processes that exhibit high variability and is supported by detailed statistical analyzes of real-time traffic measurements from Ethernet LANs at the level of individual sources. This paper is an extended version of Willinger et al. (1995). We develop here the mathematical results concerning the superposition of strictly alternating ON/OFF sources. Our key mathematical result states that the superposition of many ON/OFF sources (also known as packet-trains) with strictly alternating ON- and OFF-periods and whose ON-periods or OFF-periods exhibit the Noah effect produces aggregate network traffic that exhibits the Joseph effect. There is, moreover, a simple relation between the parameters describing the intensities of the Noah effect (high variability) and the Joseph effect (self-similarity). An extensive statistical analysis of high time-resolution Ethernet LAN traffic traces confirms that the data at the level of individual sources or source-destination pairs are consistent with the Noah effect. We also discuss implications of this simple physical explanation for the presence of self-similar traffic patterns in modern high-speed network traffic.

1,593 citations

Book
01 Jan 2000
TL;DR: TCP/IP Illustrated, Volume 1 is a complete and detailed guide to the entire TCP/IP protocol suite - with an important difference from other books on the subject: rather than just describing what the RFCs say the protocol suite should do, this unique book uses a popular diagnostic tool so you may actually watch the protocols in action.
Abstract: TCP/IP Illustrated, Volume 1 is a complete and detailed guide to the entire TCP/IP protocol suite - with an important difference from other books on the subject. Rather than just describing what the RFCs say the protocol suite should do, this unique book uses a popular diagnostic tool so you may actually watch the protocols in action. By forcing various conditions to occur - such as connection establishment, timeout and retransmission, and fragmentation - and then displaying the results, TCP/IP Illustrated gives you a much greater understanding of these concepts than words alone could provide. Whether you are new to TCP/IP or you have read other books on the subject, you will come away with an increased understanding of how and why TCP/IP works the way it does, as well as enhanced skill at developing applications that run over TCP/IP.

1,384 citations


"Dimensioning bandwidth for elastic ..." refers background in this paper

  • ...The implemented changes in TCP of congestion avoidance, fast retransmit, and fast recovery have this aim, see for example [16]....
