Journal ArticleDOI

A case for delay-conscious caching of Web documents

01 Sep 1997 - Vol. 29, pp. 997-1005
TL;DR: This paper presents a new, delay-conscious cache replacement algorithm LNC-R-W3 which maximizes a performance metric called delay-savings-ratio and compares it with other existing cache replacement algorithms, namely LRU and LRU-MIN.
Abstract: Caching at proxy servers plays an important role in reducing the latency of the user response, the network delays and the load on Web servers. The cache performance depends critically on the design of the cache replacement algorithm. Unfortunately, most cache replacement algorithms ignore the Web's scale. In this paper we argue for the design of delay-conscious cache replacement algorithms which explicitly consider the Web's scale by preferentially caching documents which require a long time to fetch to the cache. We present a new, delay-conscious cache replacement algorithm LNC-R-W3 which maximizes a performance metric called delay-savings-ratio. Subsequently, we test the performance of LNC-R-W3 experimentally and compare it with the performance of other existing cache replacement algorithms, namely LRU and LRU-MIN.
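The delay-savings idea is easy to make concrete. The sketch below uses simplified stand-ins for the paper's per-document statistics (a flat reference count rather than LNC-R-W3's aged reference history): it computes a delay-saving ratio over a request trace, and ranks cached documents for eviction by reference count × fetch delay / size, so slow-to-fetch documents are kept preferentially.

```python
def delay_saving_ratio(delays, hits):
    """Fraction of total fetch delay avoided by cache hits.
    delays[i] is the fetch delay of request i; hits[i] is True if the
    request was served from the cache. (A simplified reading of the
    paper's delay-savings-ratio metric.)"""
    saved = sum(d for d, h in zip(delays, hits) if h)
    total = sum(delays)
    return saved / total if total else 0.0

def evict_order(docs):
    """Rank cached documents for eviction, least valuable first.
    Each doc is (url, size_bytes, fetch_delay_s, ref_count); the value
    heuristic ref_count * fetch_delay / size keeps slow-to-fetch
    documents preferentially, in the spirit of LNC-R-W3. The actual
    algorithm also ages reference counts, which is omitted here."""
    return sorted(docs, key=lambda d: d[3] * d[2] / d[1])
```

Under this heuristic a small, fast-to-fetch document is evicted before an equally popular document that took seconds to retrieve, which is exactly the behavior LRU and LRU-MIN cannot express.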
Citations
Proceedings ArticleDOI
Kyungbaek Kim, Daeyeon Park
26 Jun 2001
TL;DR: This work proposes a new algorithm, Least Popularity Per Byte Replacement (LPPB-R), which uses a popularity value as a long-term measure of request frequency to address a weakness of previous proxy-cache algorithms, and which tunes the influence of that popularity value via an impact factor to match the needs of the proxy cache.
Abstract: With the recent explosion in usage of the World Wide Web, the problem of caching Web objects has gained considerable importance. The performance of these Web caches is highly affected by the replacement algorithm. Many replacement algorithms have been proposed for Web caching, and they rely on on-line parameters. Recent studies suggest that the correlation between these on-line parameters and object popularity in the proxy cache is weakening due to efficient client caches. We suggest a new algorithm, called Least Popularity Per Byte Replacement (LPPB-R). We use a popularity value as a long-term measure of request frequency to make up for the weak point of previous algorithms in the proxy cache, and we vary the popularity value by changing an impact factor to easily adjust the performance to the needs of the proxy cache. We examine the performance of this and other replacement algorithms via trace-driven simulation.
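A hypothetical sketch of the selection rule, assuming each cached object carries a long-term popularity value (the field names are illustrative, not the paper's): the eviction victim minimizes popularity^impact_factor / size, with the impact factor as the tuning knob the abstract mentions.

```python
def lppb_victim(cache, impact_factor=1.0):
    """Pick the eviction victim under a Least-Popularity-Per-Byte rule:
    the object whose popularity ** impact_factor / size is smallest is
    evicted first. `popularity` stands in for the paper's long-term
    request-frequency measure; raising the impact factor makes
    popularity dominate size in the decision."""
    return min(cache, key=lambda o: (o["popularity"] ** impact_factor) / o["size"])
```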

18 citations

Proceedings ArticleDOI
28 Mar 2004
TL;DR: A novel mathematical model is proposed for Web proxy placement for autonomous systems (ASes), as a natural extension of the solution for tree networks, which significantly outperforms the random placement model.
Abstract: Placement of Web proxy servers is an important avenue to save network bandwidth, alleviate server load, and reduce latency experienced by users. The general problem of Web proxy placement is to compute the optimal locations for placing k Web proxies in a network such that the objective concerned is minimized or maximized. We address this problem for tree networks and propose a novel mathematical model for it. In our model, we consider maximizing the overall access gain as our objective and formulate this problem as an optimization problem. The optimal placement is obtained using a computationally efficient dynamic programming-based algorithm. Applying our mathematical model, we also present a solution to Web proxy placement for autonomous systems (ASes), as a natural extension of the solution for tree networks. Our algorithms have been implemented. The simulation results show that our model significantly outperforms the random placement model.
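To make the objective concrete, the sketch below brute-forces it on a toy tree: the origin server is the root, and placing a proxy at a node saves hop-distance × demand for every client whose path to the server passes through it. The paper's contribution is a polynomial dynamic program; exhaustive search over node subsets is used here only to illustrate the objective, and all names are assumptions.

```python
from itertools import combinations

def best_placement(parent, demand, k):
    """Return the k proxy locations maximizing overall access gain on a
    tree. `parent` maps each node to its parent (None for the origin
    server at the root); `demand` maps nodes to request demand. A
    client's request is served by the nearest proxy on its path to the
    root, saving that proxy's depth in hops per unit of demand."""
    nodes = list(parent)
    depth = {}

    def d(v):
        if v not in depth:
            depth[v] = 0 if parent[v] is None else 1 + d(parent[v])
        return depth[v]

    def gain(proxies):
        total = 0
        for v in nodes:
            # walk up toward the root; the first proxy found is nearest
            u, saved = v, 0
            while u is not None:
                if u in proxies:
                    saved = d(u)
                    break
                u = parent[u]
            total += demand.get(v, 0) * saved
        return total

    candidates = [n for n in nodes if parent[n] is not None]
    return max(combinations(candidates, k), key=gain)
```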

17 citations


Cites background from "A case for delay-conscious caching ..."

  • ...Unfortunately, both do little on improving the overall network performance [14]....


Proceedings ArticleDOI
12 Jun 2005
TL;DR: A novel caching policy, Universal Mobile Caching (UMC), is presented, which is suitable for managing object caches in structurally varying environments, and which is self-optimizing for changing workloads.
Abstract: In the context of mobile data access, data caching is fundamental for both performance and functionality. For this reason there have been many studies into developing energy-efficient caching algorithms suitable for specific mobile environments. In this paper, we present a novel caching policy, Universal Mobile Caching (UMC), which is suitable for managing object caches in structurally varying environments, and which is self-optimizing for changing workloads. UMC is based on a simple set of basic criteria which reflect a spectrum of possible caching policies. UMC has demonstrated the ability to provide caching benefits in the on-demand retrieval of web documents for the mobile web, wherein multiple levels of intervening caches can create adverse workloads for other general caching schemes. When considering the energy expended in servicing cache misses, UMC consistently demonstrated savings on the order of 10% to 15%. These energy savings are solely due to local per-node behavior, and do not include the potential reduction of power consumption, to less than half its normal levels, achievable due to its enabling more effective multi-hop data transmission.

17 citations


Cites background from "A case for delay-conscious caching ..."

  • ...Simple Caching, Simple Criteria: The Least-Normalized-Cost replacement (LNC) [18] policy inserts the documents into a priority queue with a priority key....


Journal ArticleDOI
01 May 2012
TL;DR: This study simulates 27 proxy cache replacement strategies and introduces a new performance metric, the object removal rate, which is an indication of CPU usage and disk access at the proxy server, particularly for busy cache servers or servers with lower processing power.
Abstract: The Web has become the most important source of information and communication for the world. Proxy servers are used to cache objects with the goals of decreasing network traffic, reducing user-perceived lag and loads on origin servers. In this paper, we focus on the cache replacement problem with respect to proxy servers. Despite the fact that some Web 2.0 applications have dynamic objects, most of the Web traffic has static content with file types such as cascading style sheets, JavaScript files, images, etc. The cache replacement strategies implemented in Squid, a widely used proxy cache software, are no longer considered 'good enough' today. Squid's default strategy is Least Recently Used. While this is a simple approach, it does not necessarily achieve the targeted goals. We simulate 27 proxy cache replacement strategies and analyze them against several important performance measures. Hit rate and byte hit rate are the most commonly used performance metrics in the literature. Hit rate is an indication of user-perceived lag, while byte hit rate is an indication of the amount of network traffic. We also introduce a new performance metric, the object removal rate, which is an indication of CPU usage and disk access at the proxy server. This metric is particularly important for busy cache servers or servers with lower processing power. Our study provides valuable insights for both industry and academia. They are especially important for Web proxy cache system administrators, particularly in wireless ad hoc networks, where the caches on mobile devices are relatively small.
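The three metrics can be computed from a simple event trace. The exact normalization of the object removal rate is not given in the abstract; evictions per request is one plausible reading, used here as an assumption.

```python
def cache_metrics(events):
    """Compute (hit rate, byte hit rate, object removal rate) from a
    trace of (kind, size_bytes) events, where kind is 'hit', 'miss',
    or 'evict'. Hit rate tracks user-perceived lag, byte hit rate
    tracks network traffic, and the removal rate (evictions per
    request, an assumed normalization) tracks replacement churn."""
    hits = [s for k, s in events if k == "hit"]
    misses = [s for k, s in events if k == "miss"]
    evictions = sum(1 for k, _ in events if k == "evict")
    requests = len(hits) + len(misses)
    hit_rate = len(hits) / requests
    byte_hit_rate = sum(hits) / (sum(hits) + sum(misses))
    removal_rate = evictions / requests
    return hit_rate, byte_hit_rate, removal_rate
```

A policy can score well on hit rate yet poorly on removal rate if it churns many small objects per miss, which is why the authors argue the third metric matters for busy or underpowered cache servers.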

17 citations

Proceedings ArticleDOI
19 Sep 2005
TL;DR: An efficient cache replacement algorithm that considers both the aggregate effect of caching multiple versions of the same multimedia object and cache consistency and a complexity analysis is presented to show the efficiency of the algorithm.
Abstract: In this paper, we address the problem of cache replacement for transcoding proxy caching. First, an efficient cache replacement algorithm is proposed. Our algorithm considers both the aggregate effect of caching multiple versions of the same multimedia object and cache consistency. Second, a complexity analysis is presented to show the efficiency of our algorithm. Finally, some preliminary simulation experiments are conducted to compare the performance of our algorithm with some existing algorithms. The results show that our algorithm outperforms others in terms of the various performance metrics.

15 citations


Cites methods from "A case for delay-conscious caching ..."

  • ...Finally, some preliminary simulation experiments are conducted to compare the performance of our algorithm with some existing algorithms....


References
Proceedings ArticleDOI
01 Jun 1993
TL;DR: The LRU-K algorithm surpasses conventional buffering algorithms in discriminating between frequently and infrequently referenced pages, and adapts in real time to changing patterns of access.
Abstract: This paper introduces a new approach to database disk buffering, called the LRU-K method. The basic idea of LRU-K is to keep track of the times of the last K references to popular database pages, using this information to statistically estimate the interarrival times of references on a page-by-page basis. Although the LRU-K approach performs optimal statistical inference under relatively standard assumptions, it is fairly simple and incurs little bookkeeping overhead. As we demonstrate with simulation experiments, the LRU-K algorithm surpasses conventional buffering algorithms in discriminating between frequently and infrequently referenced pages. In fact, LRU-K can approach the behavior of buffering algorithms in which page sets with known access frequencies are manually assigned to different buffer pools of specifically tuned sizes. Unlike such customized buffering algorithms, however, the LRU-K method is self-tuning, and does not rely on external hints about workload characteristics. Furthermore, the LRU-K algorithm adapts in real time to changing patterns of access.
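A minimal sketch of the LRU-K idea, assuming a global logical clock and treating pages with fewer than K recorded references as having infinite backward K-distance (so they are evicted first):

```python
from collections import defaultdict

class LRUK:
    """Toy LRU-K victim selection: remember the last K reference times
    per page and evict the page whose K-th-most-recent reference is
    oldest. Pages with fewer than K references get time 0, i.e. an
    infinitely old K-th reference."""

    def __init__(self, k):
        self.k = k
        self.hist = defaultdict(list)  # page -> up to K reference times
        self.clock = 0

    def access(self, page):
        self.clock += 1
        h = self.hist[page]
        h.append(self.clock)
        if len(h) > self.k:
            h.pop(0)  # keep only the last K reference times

    def victim(self):
        def kth_recent(page):
            h = self.hist[page]
            return h[0] if len(h) == self.k else 0
        return min(self.hist, key=kth_recent)
```

Plain LRU is the K = 1 special case; larger K lets the policy distinguish a page referenced twice in quick succession from one referenced steadily, which is the discrimination the abstract describes.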

1,033 citations


"A case for delay-conscious caching ..." refers methods in this paper

  • ...We call the resulting cache replacement algorithm for proxy caching on the Web LNC-R-W3....


  • ...Subsequently, we test the performance of LNC-R-W3 experimentally and compare it with the performance of other existing cache replacement algorithms, namely LRU and LRU-MIN....


ReportDOI
22 Jan 1996
TL;DR: The design and performance of a hierarchical proxy-cache designed to make Internet information systems scale better are discussed, and performance measurements indicate that hierarchy does not measurably increase access latency.
Abstract: This paper discusses the design and performance of a hierarchical proxy-cache designed to make Internet information systems scale better. The design was motivated by our earlier trace-driven simulation study of Internet traffic. We challenge the conventional wisdom that the benefits of hierarchical file caching do not merit the costs, and believe the issue merits reconsideration in the Internet environment. The cache implementation supports a highly concurrent stream of requests. We present performance measurements that show that our cache outperforms other popular Internet cache implementations by an order of magnitude under concurrent load. These measurements indicate that hierarchy does not measurably increase access latency. Our software can also be configured as a Web-server accelerator; we present data that our httpd-accelerator is ten times faster than Netscape's Netsite and NCSA 1.4 servers. Finally, we relate our experience fitting the cache into the increasingly complex and operational world of Internet information systems, including issues related to security, transparency to cache-unaware clients, and the role of file systems in support of ubiquitous wide-area information systems.

853 citations


"A case for delay-conscious caching ..." refers methods in this paper

  • ...Subsequently, we test the performance of LNC-R-W3 experimentally and compare it with the performance of other existing cache replacement algorithms, namely LRU and LRU-MIN....


Book
01 Oct 1973
Operating Systems Theory

670 citations


"A case for delay-conscious caching ..." refers methods in this paper

  • ...Subsequently, we test the performance of LNC-R-W3 experimentally and compare it with the performance of other existing cache replacement algorithms, namely LRU and LRU-MIN....


01 Apr 1995
TL;DR: This paper presents a descriptive statistical summary of the traces of actual executions of NCSA Mosaic, and shows that many characteristics of WWW use can be modelled using power-law distributions, including the distribution of document sizes, the popularity of documents as a function of size, and the Distribution of user requests for documents.
Abstract: The explosion of WWW traffic necessitates an accurate picture of WWW use, and in particular requires a good understanding of client requests for WWW documents. To address this need, we have collected traces of actual executions of NCSA Mosaic, reflecting over half a million user requests for WWW documents. In this paper we present a descriptive statistical summary of the traces we collected, which identifies a number of trends and reference patterns in WWW use. In particular, we show that many characteristics of WWW use can be modelled using power-law distributions, including the distribution of document sizes, the popularity of documents as a function of size, the distribution of user requests for documents, and the number of references to documents as a function of their overall rank in popularity (Zipf's law). In addition, we show how the power-law distributions derived from our traces can be used to guide system designers interested in caching WWW documents. --- Our client-based traces are available via FTP from http://www.cs.bu.edu/techreports/1995-010-www-client-traces.tar.gz http://www.cs.bu.edu/techreports/1995-010-www-client-traces.a.tar.gz
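The point about power-law popularity can be illustrated with a back-of-envelope calculation: under a Zipf-like popularity distribution (request probability proportional to rank^(-alpha); alpha = 1 is assumed here for illustration, not taken from the paper's traces), caching even a small fraction of the most popular documents captures a large share of requests.

```python
def zipf_hit_rate(n_docs, cache_fraction, alpha=1.0):
    """Idealized hit rate of a cache holding the most popular
    `cache_fraction` of n_docs documents, when request popularity
    follows rank ** -alpha. Ignores document sizes and reference
    locality; purely a sizing heuristic."""
    weights = [r ** -alpha for r in range(1, n_docs + 1)]
    top = int(n_docs * cache_fraction)
    return sum(weights[:top]) / sum(weights)
```

With alpha = 1 and 1000 documents, caching the top 10% already captures roughly two-thirds of requests, which is the kind of guidance for cache designers the abstract alludes to.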

624 citations


"A case for delay-conscious caching ..." refers background or methods in this paper

  • ...The browsers were temporarily adjusted so that all requests were re-directed to a proxy where for each referenced URL, we recorded the size of the requested document and the difference between the time when the request for document Di arrives at the proxy and the time when Di is actually fetched to…...


  • ...Several studies of Web reference patterns show that Web clients exhibit a strong preference for accessing small documents [6, 7, 9, 11]....


  • ...Subsequently, we test the performance of LNC-R-W3 experimentally and compare it with the performance of other existing cache replacement algorithms, namely LRU and LRU-MIN....


18 Jul 1995
TL;DR: This work assesses the potential of proxy servers to cache documents retrieved with the HTTP protocol, and finds that a proxy server really functions as a second level cache, and its hit rate may tend to decline with time after initial loading given a more or less constant set of users.
Abstract: As the number of World-Wide Web users grow, so does the number of connections made to servers. This increases both network load and server load. Caching can reduce both loads by migrating copies of server files closer to the clients that use those files. Caching can either be done at a client or in the network (by a proxy server or gateway). We assess the potential of proxy servers to cache documents retrieved with the HTTP protocol. We monitored traffic corresponding to three types of educational workloads over a one semester period, and used this as input to a cache simulation. Our main findings are (1) that with our workloads a proxy has a 30-50% maximum possible hit rate no matter how it is designed; (2) that when the cache is full and a document is replaced, least recently used (LRU) is a poor policy, but simple variations can dramatically improve hit rate and reduce cache size; (3) that a proxy server really functions as a second level cache, and its hit rate may tend to decline with time after initial loading given a more or less constant set of users; and (4) that certain tuning configuration parameters for a cache may have little benefit.

495 citations


"A case for delay-conscious caching ..." refers methods in this paper

  • ...The trace contains about 20K requests....


  • ...Subsequently, we test the performance of LNC-R-W3 experimentally and compare it with the performance of other existing cache replacement algorithms, namely LRU and LRU-MIN....

