Journal ArticleDOI

Hierarchical approach to building generative networkload models

TL;DR: This work proposes a hierarchical approach for building generative networkload (workload for networks) models based on the Context Free Grammar (CFG), viewing the networkload as a sequence that can be generated from the rules of a CFG.
Abstract: A performance evaluation study of computer networks requires a concise description of the workload under which the performance is to be evaluated. The performance evaluation of networks is an important field of study today because of the increasing usage of computer networks. In the context of network sizing or tuning, it is often necessary to conduct performance evaluation studies under different load conditions. The repeatability of experiments for different workload profiles requires that the workload models generate the workload profiles parametrically. Such a model should preferably be time-invariant, consistent, and generative. We propose a hierarchical approach for building generative networkload (workload for networks) models based on the Context Free Grammar (CFG). We view the networkload as a sequence that can be generated from the rules of a CFG. Our approach combines the established practice of viewing the workload as “consisting of a hierarchy” with the CFG description to produce a generative networkload model. The time-invariance and generative nature are verified experimentally. The usefulness of the networkload model in the study of a typical resource management problem of a network, such as the optimal allocation of clients to servers, is illustrated by using the generative model as the input descriptor to a queuing network model of a single-server network.
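
To make the generative idea concrete, here is a minimal sketch of deriving a workload trace from CFG-style production rules. The grammar, the symbol names, and the uniform choice among productions are illustrative assumptions, not the paper's actual networkload grammar.

```python
import random

# Minimal sketch of a generative workload grammar (illustrative only):
# nonterminals expand into sequences of symbols, and one derivation yields
# one workload trace. Symbol names are hypothetical.
GRAMMAR = {
    "SESSION":  [["CONNECT", "REQUESTS", "DISCONNECT"]],
    "REQUESTS": [["REQUEST"], ["REQUEST", "REQUESTS"]],   # one or more requests
    "REQUEST":  [["SEND", "RECV"]],
}
TERMINALS = {"CONNECT", "DISCONNECT", "SEND", "RECV"}

def derive(symbol="SESSION"):
    """Expand a symbol recursively, picking productions at random."""
    if symbol in TERMINALS:
        return [symbol]
    production = random.choice(GRAMMAR[symbol])
    trace = []
    for sym in production:
        trace.extend(derive(sym))
    return trace

if __name__ == "__main__":
    # Each call generates a fresh workload profile from the same rules,
    # which is the sense in which the model is generative and repeatable.
    print(derive())
```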
Citations
Book ChapterDOI
03 May 1995
TL;DR: A hierarchical approach to the systematic characterization of the workload of a parallel system, kept as modular and flexible as possible, which can be applied to any type of performance study.
Abstract: Performance evaluation studies should be an integral part of the design and tuning of parallel applications. We propose a hierarchical approach to the systematic characterization of the workload of a parallel system, kept as modular and flexible as possible. The methodology is based on three different, but related, layers: the application, the algorithm, and the routine layer. For each of these layers, different characteristics representing functional, sequential, parallel, and quantitative descriptions have been identified. These characteristics are specified in a system-independent way to clearly separate the workload description from the architecture description. Taking architectural and mapping features into consideration as well, the hierarchical workload characterization can be applied to any type of performance study.
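
As a rough illustration of the layered, system-independent description, the sketch below models the application/algorithm/routine hierarchy as nested records with the four kinds of characteristics attached to each layer; the field names are assumptions for the example, not the authors' notation.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical sketch of a three-layer, system-independent workload description.
@dataclass
class Characteristics:
    functional: str        # what the unit does
    sequential: str        # serial execution profile
    parallel: str          # available parallelism / communication pattern
    quantitative: Dict[str, float]  # e.g. operation counts, message sizes

@dataclass
class Routine:
    name: str
    traits: Characteristics

@dataclass
class Algorithm:
    name: str
    traits: Characteristics
    routines: List[Routine] = field(default_factory=list)

@dataclass
class Application:
    name: str
    traits: Characteristics
    algorithms: List[Algorithm] = field(default_factory=list)
```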

38 citations

Book ChapterDOI
17 Nov 1997
TL;DR: An approach for generating a profile of requests submitted to a WWW server that explicitly takes into account user behavior when surfing the WWW, presenting the Probabilistic Attributed Context Free Grammar (PACFG) as a model for translating this user-oriented view of the workload.
Abstract: With World Wide Web (WWW) traffic being the fastest-growing portion of the load on the Internet, describing and characterizing this workload is a central issue for any performance evaluation study. In this paper, we present an approach for generating a profile of requests submitted to a WWW server (GET, POST, ...) which explicitly takes into account user behavior when surfing the WWW (i.e., navigating through it via a WWW browser). We present the Probabilistic Attributed Context Free Grammar (PACFG) as a model for translating from this user-oriented view of the workload (namely, the conversations conducted within browser windows) to the methods submitted to the Web servers (or, respectively, to a proxy server). The characterization at this lower level is essential for estimating the traffic on the network and is thus the starting point for evaluations of network traffic.
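
The following sketch shows the flavor of a probabilistic attributed grammar: weighted productions expand a browsing "conversation" into a stream of HTTP-like requests, with each terminal carrying attributes such as the method and a size. The rules, probabilities, and attribute values are hypothetical stand-ins, not the paper's PACFG.

```python
import random

# Illustrative probabilistic attributed grammar: each production carries a
# probability, and terminals carry attributes (HTTP method, size in bytes).
RULES = {
    "CONVERSATION": [(0.7, ["PAGE"]), (0.3, ["PAGE", "CONVERSATION"])],
    "PAGE":         [(1.0, ["GET_HTML", "EMBEDDED"])],
    "EMBEDDED":     [(0.5, []), (0.4, ["GET_IMG", "EMBEDDED"]), (0.1, ["POST_FORM"])],
}

def attributes(terminal):
    """Attach illustrative attributes to a terminal symbol."""
    table = {
        "GET_HTML":  {"method": "GET",  "size": random.randint(2_000, 20_000)},
        "GET_IMG":   {"method": "GET",  "size": random.randint(500, 50_000)},
        "POST_FORM": {"method": "POST", "size": random.randint(100, 2_000)},
    }
    return {"symbol": terminal, **table[terminal]}

def expand(symbol):
    if symbol not in RULES:                      # terminal symbol
        return [attributes(symbol)]
    probs, bodies = zip(*RULES[symbol])
    body = random.choices(bodies, weights=probs)[0]
    out = []
    for sym in body:
        out.extend(expand(sym))
    return out

if __name__ == "__main__":
    for req in expand("CONVERSATION"):
        print(req)
```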

14 citations

Journal ArticleDOI
TL;DR: Some basic results are derived for the autocorrelation function of semi-Markovian processes and for the effect of state-space reduction on autocorrelation, which show how to represent correlated traffic with given characteristics by an adequate SMP model of limited size.
Abstract: Semi-Markovian processes (SMPs) serve as a versatile means to model various traffic types generated by sources or superposed from links in ATM networks. An efficient discrete-time analysis method is proposed for SMP/G/1 queueing systems to evaluate ATM switches with non-renewal input. Some basic results are derived for the autocorrelation function of semi-Markovian processes and for the effect of state-space reduction on autocorrelation, which show how to represent correlated traffic with given characteristics by an adequate SMP model of limited size.
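
As a small illustration of what an SMP traffic model and its autocorrelation look like in practice, the sketch below simulates a two-state semi-Markov arrival process and estimates the lag-k autocorrelation of the interarrival times; the transition matrix and per-state rates are invented for the example, and the paper's discrete-time analysis is not reproduced here.

```python
import numpy as np

# Two-state semi-Markov arrival process with state-dependent interarrival
# times; parameters are made up for the example.
P = np.array([[0.9, 0.1],        # state-transition probabilities
              [0.2, 0.8]])
MEAN_IAT = np.array([1.0, 5.0])  # mean interarrival time in each state

def simulate(n, rng):
    state, iats = 0, np.empty(n)
    for i in range(n):
        iats[i] = rng.exponential(MEAN_IAT[state])   # state-dependent sojourn
        state = rng.choice(2, p=P[state])
    return iats

def autocorr(x, lag):
    """Simple biased estimator of the lag-k autocorrelation."""
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    iats = simulate(100_000, rng)
    for k in (1, 2, 5, 10):
        print(f"lag {k}: {autocorr(iats, k):+.3f}")
```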

12 citations


Cites methods from "Hierarchical approach to building g..."

  • ...State dependent source and network load models may be directly transferred into SMP models [21]....


Proceedings ArticleDOI
24 Mar 1999
TL;DR: This paper proposes a mechanism which represents the previous state of the system as a string, and uses genetic algorithms as the vehicle to address the learning aspects of the model.
Abstract: Workload characteristics in a modern networking environment are very dynamic. In order to maximize performance continuously, it is natural to explore the possibility of intelligent systems which can take cognizance of the workload dynamics and adapt themselves for future control applications. In this paper, we propose a mechanism which represents the previous state of the system as a string. The user is allowed to define relevant information for better management as substrings. The adaptive workload model, which is called the SVR model (named after the first author's initials), predicts the short-term future as a string in which the information content (conveyed as a substring) reflects the future. We illustrate the applicability of the SVR model through Web traffic generation and ATM bandwidth management. We use genetic algorithms as the vehicle to address the learning aspects of the model.
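
A generic genetic-algorithm loop over symbol strings, in the spirit of the string-based representation described above, might look like the sketch below; the alphabet, history window, and frequency-matching fitness function are stand-in assumptions, not the SVR model's actual formulation.

```python
import random

# Generic GA sketch over symbol strings; the fitness function (match the
# symbol mix of a recent history window) is only an illustrative stand-in.
ALPHABET = "RWI"           # hypothetical event classes, e.g. read/write/idle
HISTORY  = "RRWRIRRWRRWI"  # observed recent state string
LENGTH, POP, GENS = 12, 30, 100

def fitness(candidate):
    # Higher (closer to zero) when the candidate's symbol mix resembles HISTORY.
    return -sum(abs(candidate.count(c) - HISTORY.count(c)) for c in ALPHABET)

def crossover(a, b):
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

def mutate(s, rate=0.1):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in s)

def evolve():
    pop = ["".join(random.choice(ALPHABET) for _ in range(LENGTH)) for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP // 2]                      # keep the fitter half
        pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                         for _ in range(POP - len(parents))]
    return max(pop, key=fitness)

if __name__ == "__main__":
    print("predicted next window:", evolve())
```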

12 citations


Cites methods from "Hierarchical approach to building g..."

  • ...The most recent of these models [10] uses Probabilistic Attributed Context Free Grammars (PACFGs) to generate a string representing the state of the system in such a way that the distribution of symbols in a generated string follows the probability distribution corresponding to that of the different classes of events that can occur actually in the system....


  • ...In fact, the inherently hierarchical nature of such systems has also been captured using such an approach in the form of Context Free Grammars [8, 9]....


Book ChapterDOI
20 Sep 1995
TL;DR: A new workload model that captures the representation of parallelism in modern systems; a probabilistic attributed context-free grammar is used to describe, model, and generate the workload.
Abstract: Performance evaluation of computing environments requires a description of the workload under which the evaluation is done. The model used for workload description should capture all the significant features of the workload. Many of the workload models developed have not been able to capture the fact that in modern systems users can open multiple windows and execute commands in parallel. Parallel execution of commands has a large impact on the performance of the system. In a modern distributed system, parallel execution of commands at a workstation results in more computation at the workstation as well as an increase in the rate of packet arrival on the underlying network. Since the representation of parallelism is important, we present a new workload model which captures it. A probabilistic attributed context-free grammar is used to describe, model, and generate the workload.
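
The effect the abstract highlights can be illustrated without the grammar itself: the sketch below interleaves independent command streams from several open windows and shows how the aggregate arrival rate grows with the number of windows. The exponential think-time model and its parameters are assumptions for the example, not the paper's PACFG.

```python
import heapq
import random

# Each open window issues its own command stream; the streams interleave at
# the workstation, raising the aggregate arrival rate.
def window_stream(window_id, n_commands, mean_think_time, rng):
    t = 0.0
    for _ in range(n_commands):
        t += rng.expovariate(1.0 / mean_think_time)   # think time before command
        yield (t, window_id)

def merged_workload(n_windows, n_commands=20, mean_think_time=5.0, seed=1):
    rng = random.Random(seed)
    streams = [window_stream(w, n_commands, mean_think_time, rng)
               for w in range(n_windows)]
    return list(heapq.merge(*streams))                # commands ordered in time

if __name__ == "__main__":
    for n in (1, 2, 4):
        events = merged_workload(n)
        rate = len(events) / events[-1][0]
        print(f"{n} window(s): {rate:.2f} commands per unit time")
```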

10 citations

References
Journal ArticleDOI
01 Aug 1993
TL;DR: Several methodologies for the construction of workload models, which are functions of the objective of the study, of the architecture of the system to be analyzed, and of the techniques adopted, are presented.
Abstract: The performance of a system is determined by its characteristics as well as by the composition of the load being processed. Hence, the quantitative description of the load is a fundamental part of all performance evaluation studies. Several methodologies for the construction of workload models, which are functions of the objective of the study, of the architecture of the system to be analyzed, and of the techniques adopted, are presented. A survey of a few applications of these methodologies to various types of systems (i.e., batch, interactive, database, network-based, parallel, supercomputer) is given.

203 citations

Journal ArticleDOI
TL;DR: The performance of computer systems cannot be seriously and meaningfully measured unless the workload the system is dealing with during the measurement sessions is carefully selected.
Abstract: The performance of computer systems cannot be seriously and meaningfully measured unless the workload the system is dealing with during the measurement sessions is carefully selected. Even when the real workload is driving the system while measurements are being performed, questions arise such as determining the duration of a session and the intervals of system activity to be monitored.

97 citations

Journal ArticleDOI
TL;DR: A general method of constructing a drive workload representative of a real workload is described, using a synthetic program whose characteristics can be varied by varying the appropriate parameters.
Abstract: A general method of constructing a drive workload representative of a real workload is described. The real workload is characterized by its demands on the various system resources. These characteristics of the real workload are obtained from the system accounting data. The characteristics of the drive workload are determined by matching the joint probability density of the real workload with that of the drive workload. The drive workload is realized by using a synthetic program in which the characteristics can be varied by varying the appropriate parameters. Calibration experiments are conducted to determine expressions relating the synthetic program parameters to the workload characteristics. The general method is applied to the case of two variables, CPU seconds and the number of I/O activities, and a synthetic workload with 88 jobs is constructed to represent a month's workload consisting of about 6000 jobs.
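
A rough sketch of the distribution-matching step, under the assumption that the accounting data reduce to (CPU seconds, I/O count) pairs: bin the real jobs into a joint histogram and draw synthetic job parameters from that empirical joint distribution. The input data below are fabricated placeholders, and calibrating an actual synthetic program is not shown.

```python
import numpy as np

rng = np.random.default_rng(42)
# Stand-in for accounting records of real jobs (fabricated placeholders).
real_cpu = rng.lognormal(mean=1.0, sigma=1.0, size=6000)
real_io  = (real_cpu * rng.uniform(5, 15, size=6000)).astype(int)

# Joint histogram of the "real" workload.
H, cpu_edges, io_edges = np.histogram2d(real_cpu, real_io, bins=20)
probs = (H / H.sum()).ravel()

def synthetic_jobs(n):
    """Draw n (cpu_seconds, io_count) pairs matching the joint histogram."""
    cells = rng.choice(probs.size, size=n, p=probs)
    ci, ii = np.unravel_index(cells, H.shape)
    cpu = rng.uniform(cpu_edges[ci], cpu_edges[ci + 1])
    io  = rng.uniform(io_edges[ii], io_edges[ii + 1]).astype(int)
    return np.column_stack([cpu, io])

if __name__ == "__main__":
    print(synthetic_jobs(5))
```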

85 citations

Journal ArticleDOI
TL;DR: Three types of simplified models for the system workload are presented; the probabilistic models can be validated with respect to the real workload and are easy to use in a performance evaluation study.
Abstract: A major problem in the evaluation of the performance of a multiprogrammed computer system is the development of an accurate description of its normal workload. This paper formulates the workload characterization problem for a computing environment and presents three types of simplified models for the system workload. The probabilistic models of the workload presented here can be validated with respect to the real workload and are easy to use in a performance evaluation study. The results of a study of the workloads on the Univac 1108 computer at the Computer Science Center of the University of Maryland are also presented.

80 citations

Proceedings ArticleDOI
02 Apr 1991
TL;DR: The goal of this research was to model the workload of a distributed system file server in a UNIX/NFS environment and the resulting model is distribution-driven and generates workload components in real time.
Abstract: The accuracy of the results of any performance study depends largely on the quality of the workload model driving it. Not surprisingly then, workload modelling is an area of great interest to those involved in the study of computer system performance. While a significant amount of research has focussed on the modelling of workloads in a centralized computer system, little has been done in the context of distributed systems. The goal of this research was to model the workload of a distributed system file server in a UNIX/NFS environment. The resulting model is distribution-driven and generates workload components in real time. It runs externally to the system it drives, thus eliminating any interference at the server. The model was validated for different workload intensities to ensure that it provides the flexibility to vary the workload intensity without loss of accuracy.
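
A toy, distribution-driven generator running externally to the server it drives might look like the sketch below; the operation mix, interarrival distribution, and dispatch stub are hypothetical parameters, not the model described in the paper.

```python
import random
import time

# Hypothetical operation mix and mean interarrival time; scale the latter to
# vary workload intensity.
OP_MIX = {"lookup": 0.40, "read": 0.30, "write": 0.20, "getattr": 0.10}
MEAN_INTERARRIVAL = 0.05   # seconds

def dispatch(op):
    print(f"{time.time():.3f} issue {op}")      # placeholder for a real request

def generate(duration_s=2.0, seed=7):
    """Emit operations in real time, drawn from the configured distributions."""
    rng = random.Random(seed)
    ops, weights = zip(*OP_MIX.items())
    deadline = time.time() + duration_s
    while time.time() < deadline:
        time.sleep(rng.expovariate(1.0 / MEAN_INTERARRIVAL))
        dispatch(rng.choices(ops, weights=weights)[0])

if __name__ == "__main__":
    generate()
```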

56 citations