Journal ArticleDOI

On the use of fractional Brownian motion in the theory of connectionless networks

01 Aug 1995-IEEE Journal on Selected Areas in Communications (IEEE)-Vol. 13, Iss: 6, pp 953-962
TL;DR: An abstract model for aggregated connectionless traffic, based on the fractional Brownian motion, is presented, and the notion of ideal free traffic is introduced.
Abstract: An abstract model for aggregated connectionless traffic, based on the fractional Brownian motion, is presented. Insight into the parameters is obtained by relating the model to an equivalent burst model. Results on a corresponding storage process are presented. The buffer occupancy distribution is approximated by a Weibull distribution. The model is compared with publicly available samples of real Ethernet traffic. The degree of the short-term predictability of the traffic model is studied through an exact formula for the conditional variance of a future value given the past. The applicability and interpretation of the self-similar model are discussed extensively, and the notion of ideal free traffic is introduced.
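The Weibull approximation mentioned in the abstract can be made concrete. In this model the aggregate input over (0, t] is A(t) = m·t + sqrt(a·m)·Z(t), where Z is normalized fractional Brownian motion with Hurst parameter H, and the stationary buffer content V at a server of constant rate C > m has a Weibull-like tail. The sketch below uses the form of the decay constant commonly quoted for this model; treat the exact constant as an assumption rather than the paper's precise expression:

```python
import math

def weibull_tail(x, m=8.0, a=1.0, C=10.0, H=0.8):
    """Approximate P(V > x) for a queue fed by fractional Brownian traffic.

    Input A(t) = m*t + sqrt(a*m)*Z(t), served at constant rate C > m.
    The tail is Weibull-like: exp(-gamma * x**(2 - 2*H)), with
    gamma built from kappa(H) = H**H * (1-H)**(1-H) (assumed standard form).
    """
    kappa = H**H * (1.0 - H)**(1.0 - H)
    gamma = (C - m)**(2.0 * H) / (2.0 * kappa**2 * a * m)
    return math.exp(-gamma * x**(2.0 - 2.0 * H))

# Long-range dependence (larger H) makes the tail much heavier,
# so extra buffer space helps far less than for short-range traffic.
print(weibull_tail(50.0, H=0.5), weibull_tail(50.0, H=0.8))
```

Note that for H = 0.5 the exponent 2 − 2H equals 1 and the tail collapses to an ordinary exponential, consistent with short-range-dependent queues; as H approaches 1 the exponent tends to 0 and buffering becomes nearly ineffective.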
Citations
Journal ArticleDOI
TL;DR: In this article, the authors obtain the Ito formula, the Ito–Clark representation formula, and the Girsanov theorem for functionals of a fractional Brownian motion, using the stochastic calculus of variations.
Abstract: Since the fractional Brownian motion is not a semi-martingale, the usual Ito calculus cannot be used to define a full stochastic calculus. However, in this work, we obtain the Ito formula, the Ito–Clark representation formula and the Girsanov theorem for the functionals of a fractional Brownian motion using the stochastic calculus of variations.

713 citations


Cites background from "On the use of fractional Brownian m..."

  • ...Moreover, it turns out that this property is far from being negligible because of the effects it induces on the expected behavior of the global system [12]....


Journal ArticleDOI
TL;DR: A new multiscale modeling framework for characterizing positive-valued data with long-range-dependent correlations (1/f noise) using the Haar wavelet transform and a special multiplicative structure on the wavelet and scaling coefficients to ensure positive results, which provides a rapid O(N) cascade algorithm for synthesizing N-point data sets.
Abstract: We develop a new multiscale modeling framework for characterizing positive-valued data with long-range-dependent correlations (1/f noise). Using the Haar wavelet transform and a special multiplicative structure on the wavelet and scaling coefficients to ensure positive results, the model provides a rapid O(N) cascade algorithm for synthesizing N-point data sets. We study both the second-order and multifractal properties of the model, the latter after a tutorial overview of multifractal analysis. We derive a scheme for matching the model to real data observations and, to demonstrate its effectiveness, apply the model to network traffic synthesis. The flexibility and accuracy of the model and fitting procedure result in a close fit to the real data statistics (variance-time plots and moment scaling) and queuing behavior. Although for illustrative purposes we focus on applications in network traffic modeling, the multifractal wavelet model could be useful in a number of other areas involving positive data, including image processing, finance, and geophysics.
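The multiplicative construction can be sketched in a few lines. The version below is a generic conservative binary cascade in the spirit of the multifractal wavelet model — symmetric random multipliers split each coarse-scale value into two positive children in O(N) total work — not the authors' exact Haar normalization or their beta-distributed multiplier fit:

```python
import random

def cascade(n_scales, root=1.0, seed=0):
    """Synthesize 2**n_scales positive values by a binary multiplicative
    cascade, coarse scale to fine scale, in O(N) total work.

    Each value v splits into v*(1+a)/2 and v*(1-a)/2 with a random
    multiplier a in (-1, 1), so positivity and total mass are preserved.
    """
    rng = random.Random(seed)
    data = [root]
    for _ in range(n_scales):
        nxt = []
        for v in data:
            a = rng.uniform(-0.6, 0.6)  # assumed multiplier law, not the MWM beta fit
            nxt.append(v * (1.0 + a) / 2.0)
            nxt.append(v * (1.0 - a) / 2.0)
        data = nxt
    return data

series = cascade(10)  # 1024 positive points whose total mass equals root
print(len(series), min(series) > 0.0)
```

Because each split conserves its parent's value exactly, the synthesized series is positive at every point while its burstiness compounds across scales — the multiplicative structure the abstract describes.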

609 citations


Cites background from "On the use of fractional Brownian m..."

  • ...Convincing modeling results have made a strong case for this point of view [74], [75]....


Journal ArticleDOI
TL;DR: The number of parameters needed, parameter estimation, analytical tractability, and ability of traffic models to capture marginal distribution and auto-correlation structure of the actual traffic are discussed.
Abstract: Traffic models are at the heart of any performance evaluation of telecommunications networks. An accurate estimation of network performance is critical for the success of broadband networks. Such networks need to guarantee an acceptable quality of service (QoS) level to the users. Therefore, traffic models need to be accurate and able to capture the statistical characteristics of the actual traffic. We survey and examine traffic models that are currently used in the literature. Traditional short-range and non-traditional long-range dependent traffic models are presented. The number of parameters needed, parameter estimation, analytical tractability, and ability of traffic models to capture marginal distribution and auto-correlation structure of the actual traffic are discussed.

482 citations

Proceedings ArticleDOI
07 Mar 2004
TL;DR: It is shown that unlike the older data sets, current network traffic can be well represented by the Poisson model for sub-second time scales, and this traffic characterization reconciles the seemingly contradictory observations of Poisson and long-memory traffic characteristics.
Abstract: Since the identification of long-range dependence in network traffic ten years ago, its consistent appearance across numerous measurement studies has largely discredited Poisson-based models. However, since that original data set was collected, both link speeds and the number of Internet-connected hosts have increased by more than three orders of magnitude. Thus, we now revisit the Poisson assumption, by studying a combination of historical traces and new measurements obtained from a major backbone link belonging to a Tier 1 ISP. We show that unlike the older data sets, current network traffic can be well represented by the Poisson model for sub-second time scales. At multisecond scales, we find a distinctive piecewise-linear nonstationarity, together with evidence of long-range dependence. Combining our observations across both time scales leads to a time-dependent Poisson characterization of network traffic that, when viewed across very long time scales, exhibits the observed long-range dependence. This traffic characterization reconciles the seemingly contradictory observations of Poisson and long-memory traffic characteristics. It also seems to be in general agreement with recent theoretical models for large-scale traffic aggregation.
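A time-dependent Poisson process of the kind described can be simulated with Lewis–Shedler thinning: generate candidates from a homogeneous process at the peak rate and keep each with probability proportional to the instantaneous rate. The linearly ramping rate below is a made-up illustration, not the paper's fitted piecewise-linear model:

```python
import random

def thinned_poisson(rate, t_end, lam_max, seed=0):
    """Sample arrival times on [0, t_end] from a nonhomogeneous Poisson
    process with intensity rate(t) <= lam_max, by thinning a homogeneous
    Poisson process of rate lam_max (Lewis-Shedler)."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(lam_max)          # candidate inter-arrival
        if t > t_end:
            return arrivals
        if rng.random() < rate(t) / lam_max:   # keep with prob rate(t)/lam_max
            arrivals.append(t)

# Hypothetical rate ramping from 50/s to 150/s over 10 s
# (mean count = integral of rate = 1000 arrivals).
rate = lambda t: 50.0 + 10.0 * t
arr = thinned_poisson(rate, t_end=10.0, lam_max=150.0)
print(len(arr))
```

Within any short window the arrivals look Poisson at the local rate, while the slowly varying rate function carries the longer-time-scale structure — the two-regime picture the abstract argues for.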

409 citations


Cites background from "On the use of fractional Brownian m..."

  • ...Although Whitt pointed out that the right time scale must be an increasing function of load placed on a network resource [39], Norros [33] has observed that network traffic sources have the flexibility and intelligence to adapt their transmission policies to the resources currently available in the network....


Journal ArticleDOI
TL;DR: It is revealed that the applicability of traffic prediction is limited by prediction accuracy that deteriorates as the prediction interval grows, and a quantitative reference for the optimal online traffic predictability is provided for network control purposes.

252 citations

References
Journal ArticleDOI
01 Aug 1988
TL;DR: The measurements and the reports of beta testers suggest that the resulting TCP is fairly good at dealing with congested conditions on the Internet; of the seven new algorithms added to 4BSD TCP, one was recently developed by Phil Karn of Bell Communications Research and another is described in a soon-to-be-published RFC.
Abstract: In October of '86, the Internet had the first of what became a series of 'congestion collapses'. During this period, the data throughput from LBL to UC Berkeley (sites separated by 400 yards and three IMP hops) dropped from 32 Kbps to 40 bps. Mike Karels and I were fascinated by this sudden factor-of-thousand drop in bandwidth and embarked on an investigation of why things had gotten so bad. We wondered, in particular, if the 4.3BSD (Berkeley UNIX) TCP was mis-behaving or if it could be tuned to work better under abysmal network conditions. The answer to both of these questions was "yes".

Since that time, we have put seven new algorithms into the 4BSD TCP: (i) round-trip-time variance estimation; (ii) exponential retransmit timer backoff; (iii) slow-start; (iv) more aggressive receiver ack policy; (v) dynamic window sizing on congestion; (vi) Karn's clamped retransmit backoff; (vii) fast retransmit. Our measurements and the reports of beta testers suggest that the final product is fairly good at dealing with congested conditions on the Internet.

This paper is a brief description of (i)-(v) and the rationale behind them. (vi) is an algorithm recently developed by Phil Karn of Bell Communications Research, described in [KP87]. (vii) is described in a soon-to-be-published RFC.

Algorithms (i)-(v) spring from one observation: the flow on a TCP connection (or ISO TP-4 or Xerox NS SPP connection) should obey a 'conservation of packets' principle. And, if this principle were obeyed, congestion collapse would become the exception rather than the rule. Thus congestion control involves finding places that violate conservation and fixing them.

By 'conservation of packets' I mean that for a connection 'in equilibrium', i.e., running stably with a full window of data in transit, the packet flow is what a physicist would call 'conservative': a new packet isn't put into the network until an old packet leaves. The physics of flow predicts that systems with this property should be robust in the face of congestion. Observation of the Internet suggests that it was not particularly robust. Why the discrepancy?

There are only three ways for packet conservation to fail: (1) the connection doesn't get to equilibrium; (2) a sender injects a new packet before an old packet has exited; or (3) the equilibrium can't be reached because of resource limits along the path. In the following sections, we treat each of these in turn.
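Two of the algorithms listed, slow-start and dynamic window sizing on congestion, are easy to sketch at round granularity. The model below is a simplification of the 4BSD implementation (which updates the window per ACK rather than per round trip), using assumed initial values:

```python
def cwnd_trace(rounds, ssthresh=16, cwnd=1):
    """Model congestion-window growth per RTT: exponential growth
    (slow-start) below ssthresh, then linear growth (congestion
    avoidance) once the window reaches the threshold."""
    trace = []
    for _ in range(rounds):
        trace.append(cwnd)
        if cwnd < ssthresh:
            cwnd *= 2        # slow-start: double the window each RTT
        else:
            cwnd += 1        # congestion avoidance: +1 segment each RTT
    return trace

print(cwnd_trace(8))  # [1, 2, 4, 8, 16, 17, 18, 19]
```

The exponential phase probes quickly for the equilibrium window; the linear phase then injects new packets only as old ones are acknowledged to leave the network, which is the 'conservation of packets' principle in action.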

5,620 citations

Journal ArticleDOI
TL;DR: It is demonstrated that Ethernet LAN traffic is statistically self-similar, that none of the commonly used traffic models is able to capture this fractal-like behavior, and that such behavior has serious implications for the design, control, and analysis of high-speed, cell-based networks.
Abstract: Demonstrates that Ethernet LAN traffic is statistically self-similar, that none of the commonly used traffic models is able to capture this fractal-like behavior, that such behavior has serious implications for the design, control, and analysis of high-speed, cell-based networks, and that aggregating streams of such traffic typically intensifies the self-similarity ("burstiness") instead of smoothing it. These conclusions are supported by a rigorous statistical analysis of hundreds of millions of high quality Ethernet traffic measurements collected between 1989 and 1992, coupled with a discussion of the underlying mathematical and statistical properties of self-similarity and their relationship with actual network behavior. The authors also present traffic models based on self-similar stochastic processes that provide simple, accurate, and realistic descriptions of traffic scenarios expected during B-ISDN deployment.
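One of the standard tools in such analyses is the variance-time plot: for a self-similar series, the variance of the m-aggregated series decays like m^(2H-2), so a log-log regression of aggregated variance against m estimates the Hurst parameter H. A minimal sketch on synthetic i.i.d. noise, which being short-range dependent should yield H near 0.5:

```python
import math
import random
import statistics

def hurst_vt(x, levels=(1, 2, 4, 8, 16, 32)):
    """Estimate H from the variance-time plot: Var(X^(m)) ~ m^(2H-2),
    so the log-log slope equals 2H - 2."""
    pts = []
    for m in levels:
        # non-overlapping block means at aggregation level m
        agg = [statistics.fmean(x[i:i + m]) for i in range(0, len(x) - m + 1, m)]
        pts.append((math.log(m), math.log(statistics.variance(agg))))
    # least-squares slope of log-variance against log-m
    mx = statistics.fmean(p[0] for p in pts)
    my = statistics.fmean(p[1] for p in pts)
    slope = (sum((px - mx) * (py - my) for px, py in pts)
             / sum((px - mx) ** 2 for px, py in pts))
    return 1.0 + slope / 2.0

rng = random.Random(1)
noise = [rng.gauss(0.0, 1.0) for _ in range(20000)]
print(round(hurst_vt(noise), 2))  # short-range dependent, so near 0.5
```

On genuinely self-similar traffic, such as the Ethernet traces the abstract describes, the same estimator produces H well above 0.5, and aggregation fails to smooth the variance at the rate short-range models predict.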

5,567 citations

Book
01 Jan 1982

3,159 citations

Journal ArticleDOI
TL;DR: A relation coupling together the storage requirement, the achievable utilization and the output rate is derived and a lower bound for the complementary distribution function of the storage level is given.
Abstract: A storage model with self-similar input process is studied. A relation coupling together the storage requirement, the achievable utilization and the output rate is derived. A lower bound for the complementary distribution function of the storage level is given.

917 citations