Conference

Measurement and Modeling of Computer Systems 

About: Measurement and Modeling of Computer Systems is an academic conference. It publishes mainly in the areas of queueing theory and server systems. Over its lifetime, the conference has published 3,370 publications, which have received a total of 122,674 citations.


Papers
Proceedings Article
01 Jan 1990

2,593 citations

Proceedings Article
01 Jan 2000
TL;DR: The potential benefits of transferring multicast functionality from routers to end systems significantly outweigh the performance penalty incurred; the results indicate that the penalties are low from both the application and the network perspectives.

2,372 citations

Journal ArticleDOI
15 May 1996
TL;DR: It is shown that the self-similarity in WWW traffic can be explained based on the underlying distributions of WWW document sizes, the effects of caching and user preference in file transfer, the effect of user "think time", and the superimposition of many such transfers in a local area network.
Abstract: Recently, the notion of self-similarity has been shown to apply to wide-area and local-area network traffic. In this paper we examine the mechanisms that give rise to the self-similarity of network traffic. We present a hypothesized explanation for the possible self-similarity of traffic by using a particular subset of wide-area traffic: traffic due to the World Wide Web (WWW). Using an extensive set of traces of actual user executions of NCSA Mosaic, reflecting over half a million requests for WWW documents, we examine the dependence structure of WWW traffic. While our measurements are not conclusive, we show evidence that WWW traffic exhibits behavior that is consistent with self-similar traffic models. We then show that the self-similarity in such traffic can be explained based on the underlying distributions of WWW document sizes, the effects of caching and user preference in file transfer, the effect of user "think time", and the superimposition of many such transfers in a local area network. To do this we rely on empirically measured distributions both from our traces and from data independently collected at over thirty WWW sites.

2,332 citations
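
The causal mechanism described above, many ON/OFF sources with heavy-tailed period lengths superimposing into self-similar aggregate traffic, is easy to reproduce numerically. Below is a minimal sketch (illustrative, not the authors' code) that aggregates Pareto ON/OFF sources and estimates the Hurst parameter with the standard aggregated-variance method; for tail index alpha in (1, 2), theory predicts H = (3 - alpha)/2 > 0.5.

```python
# Minimal sketch: superimpose ON/OFF sources with Pareto-distributed period
# lengths and estimate the Hurst parameter of the aggregate traffic.
# Parameter values (alpha, counts) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def on_off_source(n_steps, alpha=1.4, scale=1.0):
    """Return a 0/1 activity series with Pareto ON and OFF period lengths."""
    series = np.zeros(n_steps)
    t, state = 0, True
    while t < n_steps:
        # rng.pareto samples Pareto II; adding 1 gives classical Pareto >= 1.
        length = int(np.ceil(scale * (rng.pareto(alpha) + 1.0)))
        if state:
            series[t:t + length] = 1.0
        t += length
        state = not state
    return series

def hurst_aggregated_variance(x, block_sizes=(10, 20, 50, 100, 200, 500)):
    """Estimate H from the slope of log Var(X^(m)) vs log m (slope = 2H - 2)."""
    logs_m, logs_v = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        means = x[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        logs_m.append(np.log(m))
        logs_v.append(np.log(means.var()))
    slope = np.polyfit(logs_m, logs_v, 1)[0]
    return 1.0 + slope / 2.0

n_steps, n_sources = 100_000, 50
traffic = sum(on_off_source(n_steps) for _ in range(n_sources))
print(f"estimated Hurst parameter: {hurst_aggregated_variance(traffic):.2f}")
```

With alpha = 1.4 the theoretical value is H = 0.8, and the estimate should come out well above the 0.5 expected for short-range-dependent traffic.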

Journal ArticleDOI
11 Oct 2017
TL;DR: This work provides a theoretical framework for designing controllers that are decomposed across timescales, and exhibits a design, named Multi-timescale Reflexive Predictive Control (MRPC), which maintains a per-timestep cost within a constant factor of the offline optimal in an adversarial setting.
Abstract: Many real-world control systems, such as the smart grid and software defined networks, have decentralized components that react quickly using local information and centralized components that react slowly using a more global view. This work seeks to provide a theoretical framework for how to design controllers that are decomposed across timescales in this way. The framework is analogous to how the network utility maximization framework uses optimization decomposition to distribute a global control problem across independent controllers, each of which solves a local problem; except our goal is to decompose a global problem temporally, extracting a timescale separation. Our results highlight that decomposition of a multi-timescale controller into a fast timescale, reactive controller and a slow timescale, predictive controller can be near-optimal in a strong sense. In particular, we exhibit such a design, named Multi-timescale Reflexive Predictive Control (MRPC), which maintains a per-timestep cost within a constant factor of the offline optimal in an adversarial setting.

1,777 citations
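
To make the timescale decomposition concrete, here is a toy sketch, an invented illustration rather than the paper's MRPC algorithm: a slow predictive layer re-plans a setpoint every K steps from a noisy forecast, while a fast reactive layer corrects toward it each step using only the latest observation. All names and parameter values below are assumptions.

```python
# Toy two-timescale controller (illustrative only, not MRPC):
# slow layer = predictive re-planning every K steps from a forecast,
# fast layer = per-step reactive correction from the current observation.
import numpy as np

rng = np.random.default_rng(1)
K = 10                    # slow timescale: re-plan every K steps
horizon = 100

demand = np.clip(np.cumsum(rng.normal(0, 1, horizon)) + 10, 0, None)
forecast = demand + rng.normal(0, 2, horizon)   # noisy prediction of demand

x = 0.0                   # controlled quantity (e.g. provisioned capacity)
setpoint = 0.0
cost = 0.0
for t in range(horizon):
    if t % K == 0:        # slow, predictive: plan from the forecast window
        setpoint = forecast[t:t + K].mean()
    # fast, reactive: move toward the setpoint, nudged by the observation
    x += 0.5 * (setpoint - x) + 0.2 * (demand[t] - x)
    cost += abs(demand[t] - x)      # per-timestep mismatch cost
print(f"average per-step cost: {cost / horizon:.2f}")
```

The point of the decomposition is visible even in this toy: the slow layer absorbs predictable trends while the fast layer handles per-step deviations the forecast missed.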

Proceedings ArticleDOI
01 Jun 1998
TL;DR: This paper applies a number of observations of Web server usage to create a realistic Web workload generation tool that mimics a set of real users accessing a server, and addresses the technical challenges of satisfying a large set of simultaneous constraints on the properties of the reference stream.
Abstract: One role for workload generation is as a means for understanding how servers and networks respond to variation in load. This enables management and capacity planning based on current and projected usage. This paper applies a number of observations of Web server usage to create a realistic Web workload generation tool which mimics a set of real users accessing a server. The tool, called Surge (Scalable URL Reference Generator), generates references matching empirical measurements of 1) server file size distribution; 2) request size distribution; 3) relative file popularity; 4) embedded file references; 5) temporal locality of reference; and 6) idle periods of individual users. This paper reviews the essential elements required in the generation of a representative Web workload. It also addresses the technical challenges of satisfying this large set of simultaneous constraints on the properties of the reference stream, the solutions we adopted, and their associated accuracy. Finally, we present evidence that Surge exercises servers in a manner significantly different from other Web server benchmarks.

1,549 citations
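
The six properties listed in the abstract amount to sampling each dimension of the workload from an empirically fitted distribution. The sketch below illustrates that idea using distribution families commonly reported for Web workloads (a lognormal body with a Pareto tail for file sizes, Zipf popularity, Pareto think times); the parameter values are assumptions, not SURGE's calibrated ones.

```python
# Minimal sketch of distribution-matched workload generation in the spirit
# of SURGE. All distribution parameters below are assumed for illustration.
import numpy as np

rng = np.random.default_rng(2)
N_FILES, N_REQUESTS = 1000, 10_000

# File sizes: lognormal body, with a Pareto tail for the largest ~7% of files.
sizes = rng.lognormal(mean=9.0, sigma=1.5, size=N_FILES)
tail = rng.random(N_FILES) < 0.07
sizes[tail] = 10_000 * (rng.pareto(1.1, tail.sum()) + 1.0)

# Relative popularity: Zipf over file ranks (numpy's zipf requires a > 1).
ranks = rng.zipf(a=1.8, size=N_REQUESTS)
ranks = np.clip(ranks, 1, N_FILES) - 1          # map to valid file indices

# Idle periods between a user's requests: heavy-tailed think times.
think_times = rng.pareto(1.5, N_REQUESTS) + 1.0

print(f"mean requested size: {sizes[ranks].mean():.0f} B, "
      f"mean think time: {think_times.mean():.2f} s")
```

A real generator must additionally satisfy these constraints simultaneously per user session, which is exactly the matching problem the paper discusses.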

Performance Metrics

Number of papers from the conference in previous years:

Year    Papers
2023    46
2022    52
2021    69
2020    94
2019    150
2018    96