Proceedings ArticleDOI

Performance comparison of a Web cache simulation framework

25 Mar 2005, Vol. 2, pp. 281–284



Citations
Journal ArticleDOI


TL;DR: This paper introduces and studies a caching algorithm that tracks the popularity of objects to make intelligent caching decisions, and shows that when its parameters are set at or close to their optimal values the algorithm outperforms traditional algorithms such as LRU (least-recently used) and LFU (least-frequently used).
Abstract: Due to its native return channel and its ability to address each user individually, an IPTV system is well suited to offer on-demand services. Those services are becoming more popular, as there is an undeniable trend that users want to watch the offered content when and where it suits them best. Because multicast can no longer be relied upon for such services, as was the case when offering linear-programming TV, this trend risks increasing the traffic unmanageably over some parts of the IPTV network unless caches are deployed in strategic places within it. Since caches are limited in size and the popularity of on-demand content is volatile (i.e., changing over time), it is not straightforward to decide which objects to cache at which moment in time. This paper introduces and studies a caching algorithm that tracks the popularity of objects to make intelligent caching decisions. We show that when its parameters are set at or close to their optimal values, this algorithm outperforms traditional algorithms such as LRU (least-recently used) and LFU (least-frequently used). After a generic study of the algorithm, fed by a user demand model that takes the volatility of the objects into account, we discuss two particular cases of an on-demand service, video-on-demand and catch-up TV, and for each give guidelines on how to dimension the associated caches.
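As a rough sketch of the popularity-tracking idea described above: the exponentially decaying request counter, the half-life parameter, and the admission test below are illustrative assumptions, not the paper's actual algorithm or parameter settings.

    import time

    class PopularityCache:
        # Sketch of a popularity-tracking cache: each object's request
        # rate is estimated with an exponentially decaying counter, so
        # recently popular objects score higher than objects that were
        # popular long ago.
        def __init__(self, capacity, half_life=3600.0):
            self.capacity = capacity
            self.half_life = half_life   # seconds for a score to halve (assumed)
            self.score = {}              # object -> decayed request count
            self.seen = {}               # object -> time of last update
            self.cache = set()

        def _decayed(self, obj, now):
            dt = now - self.seen.get(obj, now)
            return self.score.get(obj, 0.0) * 0.5 ** (dt / self.half_life)

        def request(self, obj, now=None):
            now = time.time() if now is None else now
            self.score[obj] = self._decayed(obj, now) + 1.0
            self.seen[obj] = now
            if obj in self.cache:
                return True              # hit
            if len(self.cache) < self.capacity:
                self.cache.add(obj)      # free slot: always admit
            else:
                victim = min(self.cache, key=lambda o: self._decayed(o, now))
                # admit the newcomer only if it is now more popular
                if self.score[obj] > self._decayed(victim, now):
                    self.cache.discard(victim)
                    self.cache.add(obj)
            return False                 # miss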

64 citations


Cites background from "Performance comparison of a Web cache simulation framework"


Proceedings ArticleDOI


01 Dec 2009
TL;DR: This work implements SEMALRU, a Semantic and Least Recently Used Web cache replacement policy, which tunes the performance of existing replacement algorithms by analyzing the semantic content of documents together with the recency of their access.
Abstract: The increasing demand for World Wide Web (WWW) services has made document caching a necessity to decrease download times and reduce Internet traffic. This work aims at implementing SEMALRU, a Semantic and Least Recently Used Web cache replacement policy. The basic LRU replacement policy is augmented with the semantic content of web pages to improve the efficiency of the replacement algorithm in terms of hit rate and byte hit rate and to minimize the number of replacements made in the cache. There are many well-known cache replacement policies based on size, recency, and frequency. This new improvised policy performs replacement based on two parameters, namely the semantics of the contents of web pages and the time of last access of the document. SEMALRU evicts documents that are less related to an incoming document, or least recently used, to make room for the document that needs to be stored in the cache. This ensures that only related documents are stored in the cache; hence the contents of the cache represent the documents of interest to the user, ranked by recency. The policy thus tunes the performance of existing replacement algorithms by analyzing the semantic content and the recency of each document. A detailed algorithm to identify unrelated and least recently used documents has been devised. The policy was tested in a simulated environment with related and unrelated sets of user access patterns. The parameters pertinent to cache replacement algorithms were computed, and results showing the improvement in the efficiency of the algorithm are furnished.
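A minimal sketch of the two-parameter eviction rule described above, with Jaccard term overlap standing in for the paper's semantic-relatedness measure (which the abstract does not specify):

    def jaccard(terms_a, terms_b):
        # Stand-in relatedness measure: overlap of two term sets.
        union = terms_a | terms_b
        return len(terms_a & terms_b) / len(union) if union else 0.0

    def semalru_victim(cache, incoming_terms):
        # cache: doc id -> (term_set, last_access_timestamp)
        # Evict the document least related to the incoming one,
        # breaking ties in favour of the least recently used.
        return min(cache, key=lambda d: (jaccard(cache[d][0], incoming_terms),
                                         cache[d][1]))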

13 citations

Proceedings ArticleDOI


03 Jun 2009
TL;DR: It is shown that when services like PLTV and CUTV gain in popularity, the transport capacity required in certain parts of the network risks growing unmanageably unless the content is replicated (i.e., cached) in appropriate places in the network.
Abstract: One of the biggest advantages of interactive television (TV) is that it allows viewers to watch content at their most convenient time, either by pausing an ongoing broadcast or by selecting to view the content at a time later than the original airing time. The former service is often referred to as Pause Live TV (PLTV) and the latter as Catch-Up TV (CUTV); both require that an individual unicast flow is set up per user, whereas for traditional Linear Programming TV (LPTV) the user simply tunes in to a multicast flow that can serve many viewers. We first show that when services like PLTV and CUTV gain in popularity, the transport capacity required in certain parts of the network risks growing unmanageably, unless the content is replicated (i.e., cached) in appropriate places in the network. Subsequently, we show that a good caching algorithm that tracks the evolving popularity of the content and takes the initial popularity into account allows keeping the required capacity under control. Finally, we discuss the trade-offs involved in determining the optimal cache location.
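To see why the unicast nature of PLTV/CUTV is the crux, a back-of-the-envelope comparison helps; all numbers below are illustrative assumptions, not figures from the paper.

    # LPTV over multicast scales with the channel count, while PLTV/CUTV
    # unicast scales with the number of concurrent viewers.
    channels, viewers, rate_mbps = 200, 50_000, 4   # assumed values
    multicast_mbps = channels * rate_mbps           # one flow per channel
    unicast_mbps = viewers * rate_mbps              # one flow per viewer
    print(multicast_mbps, unicast_mbps)             # 800 vs 200000 Mb/s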

10 citations



Journal ArticleDOI


TL;DR: It is demonstrated that with the growing popularity of the CUTV service, the number of simultaneously running unicast flows on the aggregation parts of the network threatens to lead to an unwieldy increase in required bandwidth.
Abstract: The catch-up TV (CUTV) service allows users to watch video content that was previously broadcast live on TV channels and later placed in an on-line video store. Upon a request from a user to watch a recently missed episode of his or her favourite TV series, the content is streamed from the video server to the customer's receiver device. This requires that an individual flow is set up for the duration of the video, and since it is hard, if not impossible, to employ multicast streaming for this purpose (as users seldom issue a request for the same episode at the same time), these flows are unicast. In this paper, we demonstrate that with the growing popularity of the CUTV service, the number of simultaneously running unicast flows on the aggregation parts of the network threatens to lead to an unwieldy increase in required bandwidth. Anticipating this problem and trying to alleviate it, network operators deploy caches in strategic places in the network. We investigate the performance of such a caching strategy and the impact of the cache size and the cache update logic. We first analyse and model the evolution of video popularity over time, based on traces we collected during 10 months. Through simulations we compare the performance of the traditional least-recently-used and least-frequently-used caching algorithms to our own algorithm. We also compare their performance with a "perfect" caching algorithm, which knows, and hence does not have to estimate, the video request rates. In the experimental data, we see that the video parameters from the popularity evolution law can be clustered. Therefore, we investigate theoretical models that can capture these clusters, and we study the impact of clustering on the caching performance. Finally, some considerations on the optimal cache placement are presented.
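The "perfect" baseline is easy to express under a stationary-popularity simplification; the paper's oracle knows time-varying request rates, so the static version below only illustrates the idea.

    def perfect_hit_ratio(request_rates, capacity):
        # A cache that knows the true per-video request rates simply
        # pins the `capacity` most-requested videos; its hit ratio is
        # the probability mass of those top entries (assuming rates
        # do not change over time, which the paper relaxes).
        rates = sorted(request_rates.values(), reverse=True)
        return sum(rates[:capacity]) / sum(rates)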

8 citations



Proceedings ArticleDOI


23 Apr 2012
TL;DR: A novel content caching scheme referred to as "Cache Management using Temporal Pattern based Solicitation" (CMTPS) is proposed to further minimize both service delays and load in the network for Video on Demand (VoD) applications.
Abstract: Caching is an effective technique to improve the quality of streaming multimedia services. In this paper, we propose a novel content caching scheme referred to as "Cache Management using Temporal Pattern based Solicitation" (CMTPS), to further minimize both service delays and load in the network for Video on Demand (VoD) applications. CMTPS is based on the analysis of clients' requests over past time intervals to predict the contents that will be solicited in the near future. The CMTPS protocol is evaluated by means of experimental tests. The obtained results show that CMTPS outperforms LRU in terms of peak traffic reduction and number of cache hits.
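A minimal sketch of interval-based solicitation prediction in the spirit of CMTPS; the window length and plain summation over intervals are assumptions, as the abstract does not specify the predictor.

    from collections import Counter, deque

    class IntervalPredictor:
        # Tally requests per object over a sliding window of past
        # intervals and report the objects predicted to be most
        # solicited in the next interval, as candidates for caching.
        def __init__(self, window=6):
            self.window = deque(maxlen=window)   # per-interval counts

        def close_interval(self, counts):
            self.window.append(Counter(counts))

        def predict_top(self, k):
            total = Counter()
            for counts in self.window:
                total.update(counts)
            return [obj for obj, _ in total.most_common(k)]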

6 citations


Cites methods from "Performance comparison of a Web cache simulation framework"



References
Proceedings Article


01 Jan 1997
TL;DR: The Hypertext Transfer Protocol is an application-level protocol for distributed, collaborative, hypermedia information systems, which can be used for many tasks beyond its use for hypertext through extension of its request methods, error codes and headers.
Abstract: The Hypertext Transfer Protocol (HTTP) is an application-level protocol for distributed, collaborative, hypermedia information systems. It is a generic, stateless protocol which can be used for many tasks beyond its use for hypertext, such as name servers and distributed object management systems, through extension of its request methods, error codes and headers [47]. A feature of HTTP is the typing and negotiation of data representation, allowing systems to be built independently of the data being transferred.

3,834 citations

Proceedings ArticleDOI


Anja Feldmann, Ramón Cáceres, Fred Douglis, G. Glass, Michael Rabinovich
21 Mar 1999
TL;DR: This work evaluates through detailed simulations the latency and bandwidth effects of Web proxy caching in heterogeneous bandwidth environments where network speeds between clients and proxies are significantly different than speeds between proxies and servers.
Abstract: Much work on the performance of Web proxy caching has focused on high-level metrics such as hit rates, but has ignored low level details such as "cookies", aborted connections, and persistent connections between clients and proxies as well as between proxies and servers. These details have a strong impact on performance, particularly in heterogeneous bandwidth environments where network speeds between clients and proxies are significantly different than speeds between proxies and servers. We evaluate through detailed simulations the latency and bandwidth effects of Web proxy caching in such environments. We drive our simulations with packet traces from two scenarios: clients connected through slow dialup modems to a commercial ISP, and clients on a fast LAN in an industrial research lab. We present three main results. First, caching persistent connections at the proxy can improve latency much more than simply caching Web data. Second, aborted connections can waste more bandwidth than that saved by caching data. Third, cookies can dramatically reduce hit rates by making many documents effectively uncacheable.
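A rough storability test reflecting the cookie finding above; the rules below are a simplification of HTTP caching semantics and common shared-proxy behaviour, not the paper's simulator logic.

    def shared_cache_may_store(headers):
        # A shared proxy normally refuses responses marked private or
        # no-store, and (conservatively) responses that set cookies
        # unless caching is explicitly allowed -- which is what makes
        # cookies depress hit rates.
        cc = headers.get("Cache-Control", "").lower()
        if "no-store" in cc or "private" in cc:
            return False
        if "Set-Cookie" in headers and "public" not in cc:
            return False
        return True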

218 citations

Journal ArticleDOI


Ramón Cáceres, Fred Douglis, Anja Feldmann, Gideon Glass, Michael Rabinovich
01 Dec 1998
TL;DR: Trace-driven simulation of the modem pool of a large ISP suggests that "cookies" dramatically affect the cachability of resources; wasted bandwidth due to aborted connections can more than offset the savings from cached documents; and using a proxy to avoid repeatedly opening new TCP connections can reduce latency more than simply caching data.
Abstract: Much work in the analysis of proxy caching has focused on high-level metrics such as hit rates, and has approximated actual reference patterns by ignoring exceptional cases such as connection aborts. Several of these low-level details have a strong impact on performance, particularly in heterogeneous bandwidth environments such as modem pools connected to faster networks. Trace-driven simulation of the modem pool of a large ISP suggests that "cookies" dramatically affect the cachability of resources; wasted bandwidth due to aborted connections can more than offset the savings from cached documents; and using a proxy to keep from repeatedly opening new TCP connections can reduce latency more than simply caching data.

185 citations

Proceedings ArticleDOI


09 Jul 2003
TL;DR: This paper shows that a simple Markov stationary replacement policy, called the policy C*₀, minimizes the long-run average metric induced by nonuniform document costs when document eviction is optional, and proposes a framework for operating caching systems with multiple performance metrics by solving a constrained caching problem with a single constraint.
Abstract: Replacement policies for general caching applications, and Web caching in particular, have been discussed extensively in the literature. Many ad-hoc policies have been proposed that attempt to take advantage of the retrieval latency of documents, their size, the popularity of references and the temporal locality of requested documents. However, the problem of finding optimal replacement policies under these factors has not been pursued in any systematic manner. In this paper, we take a step in that direction: we first show, still under the independent reference model, that a simple Markov stationary replacement policy, called the policy C*₀, minimizes the long-run average metric induced by nonuniform document costs when document eviction is optional. We then propose a framework for operating caching systems with multiple performance metrics. We do so by solving a constrained caching problem with a single constraint. The resulting constrained optimal replacement policy is obtained by simple randomization between two Markov stationary optimal replacement policies of the form C*₀ but induced by different costs.
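The intuition behind cost-based replacement with optional eviction can be sketched as follows; the actual C*₀ policy is defined formally in the paper, and this greedy fragment only illustrates the probability-times-cost comparison.

    def cost_aware_victim(cached, prob, cost, newcomer):
        # Under the independent reference model, the expected miss cost
        # of dropping an object is its request probability times its
        # retrieval cost. Compare the cheapest cached object against
        # the newcomer and evict only when the swap pays off.
        victim = min(cached, key=lambda o: prob[o] * cost[o])
        if prob[newcomer] * cost[newcomer] > prob[victim] * cost[victim]:
            return victim   # evict this object and admit the newcomer
        return None         # optional eviction: leave the cache as-is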

43 citations

Proceedings ArticleDOI


08 Mar 2004
TL;DR: A proxy-cache simulation platform is proposed to evaluate the performance of Web-object replacement policies based on multikey management techniques and algorithms; in addition to classical metrics, the proposed framework also reports the response time perceived by users.
Abstract: Proxy caches have become an important mechanism to reduce latencies. Efficient management techniques for proxy caches, which exploit the inherent characteristics of Web objects, are an essential key to reaching good performance. One important segment of the replacement algorithms applied today are the multikey algorithms, which use several keys or object characteristics to decide which object or objects must be replaced. This feature is not considered in most current simulators. In this paper we propose a proxy-cache platform to evaluate the performance of Web-object replacement policies based on multikey management techniques and algorithms. The proposed platform is coded in a modular way, which allows the implementation of new algorithms or policy proposals in an easy and robust manner. In addition to classical performance metrics like the hit ratio and the byte hit ratio, the proposed framework also reports the response time perceived by users.
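A minimal trace-driven loop computing the three metrics such a platform reports; the lookup()/admit() interface below is hypothetical, not the platform's actual API.

    def simulate(trace, policy):
        # trace yields (object, size_bytes, hit_time, miss_time);
        # policy is any replacement policy exposing lookup()/admit().
        hits = reqs = hit_bytes = total_bytes = 0
        elapsed = 0.0
        for obj, size, t_hit, t_miss in trace:
            reqs += 1
            total_bytes += size
            if policy.lookup(obj):
                hits += 1
                hit_bytes += size
                elapsed += t_hit
            else:
                policy.admit(obj, size)
                elapsed += t_miss
        # hit ratio, byte hit ratio, mean response time
        return hits / reqs, hit_bytes / total_bytes, elapsed / reqs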

16 citations



This paper presents a powerful framework to simulate Web proxy cache systems.