Journal ArticleDOI

A case for delay-conscious caching of Web documents

01 Sep 1997, Computer Networks and ISDN Systems, Vol. 29, pp. 997-1005
TL;DR: This paper presents a new, delay-conscious cache replacement algorithm LNC-R-W3 which maximizes a performance metric called delay-savings-ratio and compares it with other existing cache replacement algorithms, namely LRU and LRU-MIN.
Abstract: Caching at proxy servers plays an important role in reducing the latency of the user response, the network delays and the load on Web servers. The cache performance depends critically on the design of the cache replacement algorithm. Unfortunately, most cache replacement algorithms ignore the Web's scale. In this paper we argue for the design of delay-conscious cache replacement algorithms which explicitly consider the Web's scale by preferentially caching documents which require a long time to fetch to the cache. We present a new, delay-conscious cache replacement algorithm LNC-R-W3 which maximizes a performance metric called delay-savings-ratio. Subsequently, we test the performance of LNC-R-W3 experimentally and compare it with the performance of other existing cache replacement algorithms, namely LRU and LRU-MIN.
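To make the policy concrete, the sketch below implements a delay-conscious, "least normalized cost" eviction rule in the spirit of LNC-R-W3, as characterized in the excerpts further down: a document's profit combines its access frequency, the delay observed when fetching it, and its size, and the least profitable documents are evicted first. The class names and the plain access counter (LNC-R-W3 itself estimates reference rates from a sliding window of recent reference times) are illustrative simplifications, not the authors' implementation.

```python
# Minimal sketch of a delay-conscious "least normalized cost" replacement
# policy in the spirit of LNC-R-W3: each cached document carries an estimated
# access frequency, the delay observed when fetching it, and its size; the
# document with the lowest profit = frequency * delay / size is evicted first.
from dataclasses import dataclass


@dataclass
class Document:
    url: str
    size: int           # bytes
    fetch_delay: float  # seconds needed to fetch the document from its server
    accesses: int = 1

    @property
    def profit(self) -> float:
        # Delay saved per byte of cache space, weighted by popularity.
        return self.accesses * self.fetch_delay / self.size


class DelayConsciousCache:
    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self.used = 0
        self.docs: dict[str, Document] = {}

    def access(self, url: str, size: int, fetch_delay: float) -> bool:
        """Record a request; returns True on a hit, False on a miss."""
        if url in self.docs:
            self.docs[url].accesses += 1
            return True
        # Miss: evict the least profitable documents until the new one fits.
        while self.docs and self.used + size > self.capacity:
            victim = min(self.docs.values(), key=lambda d: d.profit)
            self.used -= victim.size
            del self.docs[victim.url]
        if self.used + size <= self.capacity:
            self.docs[url] = Document(url, size, fetch_delay)
            self.used += size
        return False
```

Under this metric a small, slow-to-fetch, frequently requested document is the last to leave the cache, which is exactly the preference the abstract argues for; the delay-savings-ratio can then be measured as, roughly, the total fetch delay avoided by cache hits divided by the total fetch delay of all requests.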
Citations
Journal ArticleDOI
TL;DR: This paper proposes a novel P2P cooperative proxy cache system using an individual-based model that borrows the idea from an ecological system as well as economic systems to manage the cooperative proxies through data and information exchange among individual proxies.
Abstract: Proxy servers have been widely used by institutions to serve their clients behind firewalls. Recently, many schemes have been proposed to organize proxy servers into cooperative proxy cache systems. However, most existing proxy cache schemes require manual configuration of the cooperative proxies based on the network architecture. In this paper, we propose a novel P2P proxy caching scheme using an individual-based model. We borrow ideas from ecological and economic systems to manage the cooperative proxies through data and information exchange among individual proxies. Proxies automatically configure themselves into a Virtual Proxy Graph. Data caching and data replication among the proxy nodes create artificial life in the proxies. The aggregate effect of caching and replication actions by individual peer proxies forms a proxy ecology which automatically distributes data to the nearest clients and balances workload. Our simulation results show that the proposed proxy caching scheme tremendously improves system performance. In addition, the individual-based design model ensures simplicity and scalability of the cache system.

6 citations


Additional excerpts

  • ...A Least Normalized Cost Replacement algorithm (LNC-R) [46] employs a rational function of the access frequency, the...


Book ChapterDOI
20 Oct 2003
TL;DR: This paper considers the object caching problem of determining the optimal number of copies of an object, and their locations, among en-route caches on the access path from the user to the content server, such that the overall net cost saving is maximized, and proposes an object caching model for the case in which the network topology is a tree.
Abstract: Web caching is an important technology for saving network bandwidth, alleviating server load, and reducing the response time experienced by users. In this paper, we consider the object caching problem of determining the optimal number of copies of an object, and their locations, among en-route caches on the access path from the user to the content server, such that the overall net cost saving is maximized. We propose an object caching model for the case in which the network topology is a tree. The existing method can be viewed as a special case of ours since it considers only a linear topology. We formulate our problem as an optimization problem, and the optimal placement is obtained by applying our dynamic programming-based algorithm. We also present some analysis of our algorithm, which shows that our solution is globally optimal. Finally, we describe some numerical experiments to show how the optimal placement can be determined by our algorithm.
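The abstract does not spell out the cost model, so the following is only a sketch of how a dynamic program over a tree can choose cache locations that maximize net cost saving, under an assumed model: the content server sits at the root and always holds the object, each node has a request rate and a per-copy placement cost, and a request is served by the nearest copy on its path to the root. All names and the cost model itself are illustrative assumptions rather than the paper's formulation.

```python
# Sketch of dynamic programming over a tree of en-route caches. Assumed model:
# the content server is the tree root and always holds the object; node v
# issues req[v] requests, pays place_cost[v] to hold a copy, and each request
# is served by the nearest copy on v's path to the root. Serving a request at
# copy node a saves depth[a] units of transmission cost, where depth[a] is the
# total link cost from a up to the root.
from functools import lru_cache


class EnRouteTreePlacement:
    def __init__(self, parent, link_cost, req, place_cost):
        """parent[v]: parent of node v (None for the root); nodes are numbered
        so that every parent index is smaller than its children's indices."""
        self.parent, self.req, self.place_cost = parent, req, place_cost
        n = len(parent)
        self.children = [[] for _ in range(n)]
        self.depth = [0.0] * n  # total link cost from each node up to the root
        for v, p in enumerate(parent):
            if p is not None:
                self.children[p].append(v)
                self.depth[v] = self.depth[p] + link_cost[v]

    def max_net_saving(self) -> float:
        """Maximum net cost saving over all placements of copies in the tree."""

        @lru_cache(maxsize=None)
        def best(v, nearest_above):
            # Option 1: no copy at v; v's requests are served at nearest_above.
            skip = self.req[v] * self.depth[nearest_above] + sum(
                best(c, nearest_above) for c in self.children[v])
            # Option 2: place a copy at v; v's requests are served locally.
            take = (self.req[v] * self.depth[v] - self.place_cost[v] + sum(
                best(c, v) for c in self.children[v]))
            return max(skip, take)

        root = self.parent.index(None)
        return sum(best(c, root) for c in self.children[root])
```

In this reading, the linear-topology method that the paper generalizes corresponds to the special case in which every node has exactly one child.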

5 citations


Cites methods from "A case for delay-conscious caching ..."

  • ...Determining the appropriate number of copies of an object and placing them in suitable en-route caches are challenging tasks. The interaction effect between object placement and replacement further complicates the problem. Most of the existing work was concentrated on either object placement or replacement at individual caches only and are not optimal [7]. Little work has been done on web object caching [6]. In this paper, we consider the ...


Proceedings ArticleDOI
27 Nov 2005
TL;DR: This paper proposes an optimal solution for multimedia object caching to minimize the total access cost by considering both transmission cost and transcoding cost and results show that this solution consistently and significantly outperforms comparison solutions in terms of all the performance metrics considered.
Abstract: Multimedia object caching, by which the same multimedia object can be adapted to diverse mobile appliances through the technique of transcoding, is an important technology for improving the scalability of Web services, especially in the environment of mobile networks. The performance of multimedia object caching mainly depends on how the objects are selected to be removed for a new object (multimedia object replacement) and where the different versions of a multimedia object are placed (multimedia object placement). In this paper, we address the problem of cache replacement for multimedia object caching. We first propose an optimal solution for this problem. The performance objective is to minimize the total access cost by considering both transmission cost and transcoding cost. The performance of the proposed solution is evaluated with a set of carefully designed simulation experiments for various performance metrics over a wide range of system parameters. The simulation results show that our solution consistently and significantly outperforms comparison solutions in terms of all the performance metrics considered.
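As a rough illustration of the trade-off the abstract describes, the sketch below scores a cached version of a multimedia object by the cheapest way to recover it after eviction, either by re-fetching it from the origin (transmission cost) or by transcoding it from a higher-fidelity version that remains cached (transcoding cost), weighted by access rate and normalized by the space freed. The Version fields and the two cost callbacks are hypothetical placeholders, not the paper's exact model.

```python
# Hedged sketch of an eviction score for transcoding-aware multimedia caching:
# the penalty for dropping a cached version is the cheapest way to recover it
# later (re-fetch from the origin, or transcode from a cached higher-fidelity
# version), weighted by its access rate and normalized by the space it frees.
from dataclasses import dataclass
from typing import Callable, Iterable


@dataclass
class Version:
    object_id: str
    fidelity: int       # higher value = richer version (can be transcoded down)
    size: int           # bytes
    access_rate: float  # requests per unit time


def eviction_priority(version: Version,
                      cached_versions: Iterable[Version],
                      transmission_cost: Callable[[Version], float],
                      transcoding_cost: Callable[[Version, Version], float]) -> float:
    """Lower value = better eviction candidate."""
    # Cheapest recovery path: re-fetch the version, or transcode it from some
    # higher-fidelity version of the same object that stays in the cache.
    recovery = transmission_cost(version)
    for other in cached_versions:
        if other.object_id == version.object_id and other.fidelity > version.fidelity:
            recovery = min(recovery, transcoding_cost(other, version))
    # Expected cost of the eviction per byte of space reclaimed.
    return version.access_rate * recovery / version.size
```

Evicting versions in increasing order of this score until enough space is free is a greedy approximation of the total-access-cost objective stated in the abstract, not the optimal solution the authors derive.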

5 citations


Cites background from "A case for delay-conscious caching ..."

  • ...LNC-R [8]: Least Normalized Cost Replacement (LNC-R) removes the least profitable documents....


Book ChapterDOI
03 Sep 2001
TL;DR: This paper proposes a new cache replacement policy for proxy and Web servers, the Size-Adjusted Sliding Window LFU (SSW-LFU), which uses the rates of recent accesses within a sliding window to estimate the probability of future document accesses.
Abstract: Web caching is a scalable and effective way to reduce network traffic and response time. In this study, we propose a new cache replacement policy for proxy and Web servers, the Size-Adjusted Sliding Window LFU (SSW-LFU). In this policy, we use the rates of recent accesses within a sliding window to estimate the probability of future document accesses. In addition, we take into account the variable sizes of documents. Simulations with real-life Web access data are conducted to evaluate the performance. SSW-LFU outperformed algorithms such as LFU, LRU, SIZE, and LRU-MIN in hit ratio and achieved a comparable byte hit ratio.
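The abstract gives enough detail for a rough sketch of the idea: estimate each document's access rate from the references that fall inside a sliding time window, divide it by the document's size, and evict the documents with the smallest size-adjusted rate first. The one-hour window, the timestamp bookkeeping, and the class names below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a size-adjusted sliding-window LFU cache: the access rate
# of each document is estimated from references inside a sliding time window
# and divided by the document's size, so large, rarely used documents go first.
import time
from collections import defaultdict, deque


class SSWLFUCache:
    def __init__(self, capacity_bytes: int, window_seconds: float = 3600.0):
        self.capacity = capacity_bytes
        self.window = window_seconds
        self.used = 0
        self.sizes = {}                 # url -> size in bytes of cached documents
        self.refs = defaultdict(deque)  # url -> timestamps of recent references

    def _rate(self, url: str, now: float) -> float:
        q = self.refs[url]
        while q and now - q[0] > self.window:  # drop references outside the window
            q.popleft()
        return len(q) / self.window

    def access(self, url: str, size: int) -> bool:
        """Record a request; returns True on a hit, False on a miss."""
        now = time.monotonic()
        self.refs[url].append(now)
        if url in self.sizes:
            return True
        # Miss: evict documents with the smallest size-adjusted access rate.
        while self.sizes and self.used + size > self.capacity:
            victim = min(self.sizes, key=lambda u: self._rate(u, now) / self.sizes[u])
            self.used -= self.sizes.pop(victim)
            self.refs.pop(victim, None)
        if self.used + size <= self.capacity:
            self.sizes[url] = size
            self.used += size
        return False
```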

5 citations


Cites background or methods from "A case for delay-conscious caching ..."

  • ...Many replacement algorithms for web caching have been proposed in the literature [1, 2, 6, 7, 8, 12, 14]....


  • ...Least Normalized Cost Replacement [12]: It employs a rational function of the access frequency, the size, and the transfer time....


Proceedings ArticleDOI
13 Oct 2003
TL;DR: This paper proposes a P2P cooperative proxy cache system using an individual-based model, borrowing ideas from ecological and economic systems to manage the cooperative proxies through data and information exchange among individual proxies.
Abstract: In this paper, we propose a novel P2P cooperative proxy cache system using an individual-based model. We borrow ideas from ecological and economic systems to manage the cooperative proxies through data and information exchange among individual proxies. The data flow among proxy nodes creates artificial life for the cooperative proxies. The proxy servers with artificial life can automatically configure themselves into a virtual proxy graph. The aggregate effect of caching actions by individual peer proxies automatically distributes Web documents closer to the clients and balances the workload. Our simulation results show that the proposed proxy caching scheme tremendously improves the system performance. In addition, the individual-based design model ensures the simplicity and scalability of the cache system.

5 citations

References
More filters
Proceedings ArticleDOI
01 Jun 1993
TL;DR: The LRU-K algorithm surpasses conventional buffering algorithms in discriminating between frequently and infrequently referenced pages, and adapts in real time to changing patterns of access.
Abstract: This paper introduces a new approach to database disk buffering, called the LRU-K method. The basic idea of LRU-K is to keep track of the times of the last K references to popular database pages, using this information to statistically estimate the interarrival times of references on a page-by-page basis. Although the LRU-K approach performs optimal statistical inference under relatively standard assumptions, it is fairly simple and incurs little bookkeeping overhead. As we demonstrate with simulation experiments, the LRU-K algorithm surpasses conventional buffering algorithms in discriminating between frequently and infrequently referenced pages. In fact, LRU-K can approach the behavior of buffering algorithms in which page sets with known access frequencies are manually assigned to different buffer pools of specifically tuned sizes. Unlike such customized buffering algorithms, however, the LRU-K method is self-tuning, and does not rely on external hints about workload characteristics. Furthermore, the LRU-K algorithm adapts in real time to changing patterns of access.
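The core eviction rule of LRU-K described above is compact enough to sketch directly: keep the timestamps of each page's last K references and evict the page whose K-th most recent reference lies furthest in the past, treating pages with fewer than K references as infinitely old. Refinements discussed in the paper, such as the correlated-reference period, are omitted from this sketch.

```python
# Sketch of the basic LRU-K eviction rule: track the times of each page's last
# K references and evict the page with the largest backward K-distance.
from collections import deque


class LRUKBuffer:
    def __init__(self, capacity: int, k: int = 2):
        self.capacity = capacity
        self.k = k
        self.clock = 0
        self.history = {}  # page -> deque of the times of its last K references

    def reference(self, page) -> bool:
        """Record a reference to a page; returns True on a buffer hit."""
        self.clock += 1
        hit = page in self.history
        if not hit and len(self.history) >= self.capacity:
            self._evict()
        self.history.setdefault(page, deque(maxlen=self.k)).append(self.clock)
        return hit

    def _evict(self) -> None:
        def kth_most_recent(page):
            times = self.history[page]
            # Pages referenced fewer than K times have infinite backward
            # K-distance and are evicted first.
            return times[0] if len(times) == self.k else float("-inf")

        victim = min(self.history, key=kth_most_recent)
        del self.history[victim]
```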

1,033 citations


"A case for delay-conscious caching ..." refers methods in this paper

  • ...We call the resulting cache replacement algorithm for proxy caching on the Web LNC-R-W3....


  • ...Subsequently, we test the performance of LNC-R-W3 experimentally and compare it with the performance of other existing cache replacement algorithms, namely LRU and LRU-MIN....


ReportDOI
22 Jan 1996
TL;DR: The design and performance of a hierarchical proxy-cache designed to make Internet information systems scale better are discussed, and performance measurements indicate that hierarchy does not measurably increase access latency.
Abstract: This paper discusses the design and performance of a hierarchical proxy-cache designed to make Internet information systems scale better. The design was motivated by our earlier trace-driven simulation study of Internet traffic. We challenge the conventional wisdom that the benefits of hierarchical file caching do not merit the costs, and believe the issue merits reconsideration in the Internet environment. The cache implementation supports a highly concurrent stream of requests. We present performance measurements that show that our cache outperforms other popular Internet cache implementations by an order of magnitude under concurrent load. These measurements indicate that hierarchy does not measurably increase access latency. Our software can also be configured as a Web-server accelerator; we present data that our httpd-accelerator is ten times faster than Netscape's Netsite and NCSA 1.4 servers. Finally, we relate our experience fitting the cache into the increasingly complex and operational world of Internet information systems, including issues related to security, transparency to cache-unaware clients, and the role of file systems in support of ubiquitous wide-area information systems.

853 citations


"A case for delay-conscious caching ..." refers methods in this paper

  • ...Subsequently, we test the performance of LNC-R-W3 experimentally and compare it with the performance of other existing cache replacement algorithms, namely LRU and LRU-MIN....


Book
01 Oct 1973
Operating Systems Theory, E. G. Coffman, Jr. and P. J. Denning, Prentice-Hall, 1973.

670 citations


"A case for delay-conscious caching ..." refers methods in this paper

  • ...Subsequently, we test the performance of LNC-R-W3 experimentally and compare it with the performance of other existing cache replacement algorithms, namely LRU and LRU-MIN....


01 Apr 1995
TL;DR: This paper presents a descriptive statistical summary of the traces of actual executions of NCSA Mosaic, and shows that many characteristics of WWW use can be modelled using power-law distributions, including the distribution of document sizes, the popularity of documents as a function of size, and the distribution of user requests for documents.
Abstract: The explosion of WWW traffic necessitates an accurate picture of WWW use, and in particular requires a good understanding of client requests for WWW documents. To address this need, we have collected traces of actual executions of NCSA Mosaic, reflecting over half a million user requests for WWW documents. In this paper we present a descriptive statistical summary of the traces we collected, which identifies a number of trends and reference patterns in WWW use. In particular, we show that many characteristics of WWW use can be modelled using power-law distributions, including the distribution of document sizes, the popularity of documents as a function of size, the distribution of user requests for documents, and the number of references to documents as a function of their overall rank in popularity (Zipf's law). In addition, we show how the power-law distributions derived from our traces can be used to guide system designers interested in caching WWW documents. Our client-based traces are available via FTP from http://www.cs.bu.edu/techreports/1995-010-www-client-traces.tar.gz and http://www.cs.bu.edu/techreports/1995-010-www-client-traces.a.tar.gz
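As an example of how such a power-law popularity model can guide designers of Web caches, which is the use the abstract suggests, the short sketch below estimates the best-case hit rate of a cache that holds the top_n most popular documents when request popularity is Zipf-distributed with exponent alpha; the document count and exponent are illustrative values only.

```python
def zipf_hit_rate(num_docs: int, top_n: int, alpha: float = 1.0) -> float:
    """Fraction of requests captured by caching the top_n most popular documents
    when request popularity follows a Zipf distribution with exponent alpha."""
    weights = [1.0 / (rank ** alpha) for rank in range(1, num_docs + 1)]
    return sum(weights[:top_n]) / sum(weights)


# e.g. with 100,000 documents and alpha = 1.0, the 1,000 most popular documents
# (1% of the collection) account for a disproportionately large share of requests:
if __name__ == "__main__":
    print(f"top 1% hit-rate bound: {zipf_hit_rate(100_000, 1_000):.2%}")
```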

624 citations


"A case for delay-conscious caching ..." refers background or methods in this paper

  • ...The browsers were temporarily adjusted so that all requests were re-directed to a proxy where, for each referenced URL, we recorded the size of the requested document and the difference between the time when the request for document Di arrives at the proxy and the time when Di is actually fetched to...


  • ...Several studies of Web reference patterns show that Web clients exhibit a strong preference for accessing small documents [6, 7, 9, 11]....


  • ...Subsequently, we test the performance of LNC-R-W3 experimentally and compare it with the performance of other existing cache replacement algorithms, namely LRU and LRU-MIN....


18 Jul 1995
TL;DR: This work assesses the potential of proxy servers to cache documents retrieved with the HTTP protocol, and finds that a proxy server really functions as a second level cache, and its hit rate may tend to decline with time after initial loading given a more or less constant set of users.
Abstract: As the number of World-Wide Web users grows, so does the number of connections made to servers. This increases both network load and server load. Caching can reduce both loads by migrating copies of server files closer to the clients that use those files. Caching can be done either at a client or in the network (by a proxy server or gateway). We assess the potential of proxy servers to cache documents retrieved with the HTTP protocol. We monitored traffic corresponding to three types of educational workloads over a one-semester period, and used this as input to a cache simulation. Our main findings are (1) that with our workloads a proxy has a 30-50% maximum possible hit rate no matter how it is designed; (2) that when the cache is full and a document is replaced, least recently used (LRU) is a poor policy, but simple variations can dramatically improve hit rate and reduce cache size; (3) that a proxy server really functions as a second-level cache, and its hit rate may tend to decline with time after initial loading given a more or less constant set of users; and (4) that certain tuning configuration parameters for a cache may have little benefit.

495 citations


"A case for delay-conscious caching ..." refers methods in this paper

  • ...The trace contains about 20K requests....


  • ...Subsequently, we test the performance of LNC-R-W3 experimentally and compare it with the performance of other existing cache replacement algorithms, namely LRU and LRU-MIN....

