Journal ArticleDOI

The cache location problem

P. Krishnan, +2 more
01 Oct 2000 - Vol. 8, Iss. 5, pp. 568-582
TLDR
There is a surprising consistency over time in the relative amount of web traffic from the server along a path, lending stability to the TERC location solution, and these techniques can be used by network providers to reduce the traffic load in their networks.
Abstract
This paper studies the problem of where to place network caches. Emphasis is given to caches that are transparent to the clients since they are easier to manage and they require no cooperation from the clients. Our goal is to minimize the overall flow or the average delay by placing a given number of caches in the network. We formulate these location problems both for general caches and for transparent en-route caches (TERCs), and identify that, in general, they are intractable. We give optimal algorithms for line and ring networks, and present closed form formulae for some special cases. We also present a computationally efficient dynamic programming algorithm for the single server case. This last case is of particular practical interest. It models a network that wishes to minimize the average access delay for a single web server. We experimentally study the effects of our algorithm using real web server data. We observe that a small number of TERCs are sufficient to reduce the network traffic significantly. Furthermore, there is a surprising consistency over time in the relative amount of web traffic from the server along a path, lending a stability to our TERC location solution. Our techniques can be used by network providers to reduce traffic load in their network.
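The single-server dynamic program described in the abstract lends itself to a compact illustration. The following is a minimal sketch, assuming a line network with the server at position 0, client nodes 1..n issuing requests that travel hop-by-hop toward the server, and idealized caches that answer every request reaching them; the recurrence, variable names, and O(kn^2) structure are illustrative and not the paper's exact formulation.

```python
# Hypothetical sketch: place k transparent en-route caches (TERCs) on a line
# network with the server at position 0 and client nodes 1..n. Node i issues
# d[i-1] requests that travel hop-by-hop toward the server and are answered by
# the nearest cache at or below i (assuming, for simplicity, a 100% hit rate).
# Goal: minimize total flow = sum over nodes of (request volume x hops travelled).

def place_tercs_on_line(d, k):
    """d[i] = request volume of client node i+1; k = number of caches.
    Returns (minimum total flow, sorted list of chosen cache positions)."""
    n = len(d)

    def flow(a, b):
        # Flow contributed by nodes a+1..b when their nearest cache
        # (or the server itself, if a == 0) sits at position a.
        return sum(d[i - 1] * (i - a) for i in range(a + 1, b + 1))

    INF = float("inf")
    # dp[j][i] = min flow for nodes 1..i with j caches in 1..i, last cache at i.
    dp = [[INF] * (n + 1) for _ in range(k + 1)]
    parent = [[0] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0.0

    for j in range(1, k + 1):
        for i in range(1, n + 1):
            for prev in range(0, i):
                if dp[j - 1][prev] == INF:
                    continue
                cand = dp[j - 1][prev] + flow(prev, i - 1)
                if cand < dp[j][i]:
                    dp[j][i] = cand
                    parent[j][i] = prev

    # Close the recurrence: nodes beyond the last cache are served by it.
    best, best_i = INF, 0
    for i in range(1, n + 1):
        if dp[k][i] == INF:
            continue
        total = dp[k][i] + flow(i, n)
        if total < best:
            best, best_i = total, i

    # Recover the chosen cache positions.
    positions, j, i = [], k, best_i
    while j > 0:
        positions.append(i)
        i = parent[j][i]
        j -= 1
    return best, sorted(positions)


if __name__ == "__main__":
    demands = [5, 1, 8, 2, 7, 3]   # toy request volumes for nodes 1..6
    print(place_tercs_on_line(demands, k=2))   # places caches at the heavy nodes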



Citations
Proceedings Article

Optimal allocation of cache servers and content files in content distribution networks

TL;DR: A 0-1 integer programming model is introduced to determine the optimal allocation of cache servers and content files, maximizing the reliability of the whole system subject to cost and delay restrictions.
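The cited model is not reproduced here; the snippet below is only a hedged, simplified 0-1 integer programming sketch of the same flavor, in which binary variables open cache servers and place content files, a linear surrogate stands in for the reliability objective, and the costs, delays, and budget are invented for the example (it uses the PuLP package).

```python
# Simplified 0-1 IP sketch (illustrative data; not the cited paper's model).
# Requires: pip install pulp
from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, lpSum

sites, files = ["s1", "s2", "s3"], ["f1", "f2"]
open_cost = {"s1": 4, "s2": 3, "s3": 5}            # cost of opening each site
place_cost = 1                                      # cost per file replica
delay = {"s1": 10, "s2": 25, "s3": 15}              # delay of serving via each site
gain = {(f, s): 1.0 / delay[s] for f in files for s in sites}  # toy reliability surrogate

x = {s: LpVariable(f"open_{s}", cat=LpBinary) for s in sites}
y = {(f, s): LpVariable(f"place_{f}_{s}", cat=LpBinary) for f in files for s in sites}

prob = LpProblem("cache_and_content_allocation", LpMaximize)
prob += lpSum(gain[f, s] * y[f, s] for f in files for s in sites)   # objective
prob += lpSum(open_cost[s] * x[s] for s in sites) + place_cost * lpSum(y.values()) <= 8  # budget
for f in files:
    for s in sites:
        prob += y[f, s] <= x[s]            # a file can only sit on an open server
        prob += delay[s] * y[f, s] <= 20   # per-replica delay restriction

prob.solve()
print([v.name for v in prob.variables() if v.value() == 1])
```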

Video replica placement strategy for storage cloud-based CDN

Wenle Zhou, +1 more
TL;DR: Two classes of offline algorithms are proposed: GUCP (Greedy User Core Preallocation), which effectively solves the load-imbalance problem caused by GS, and PBP (Popularity Based Placement), which places replicas based on content popularity when no user request information is available.
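GS, GUCP, and PBP are the cited paper's own algorithms and are not reproduced here; the sketch below is only a generic popularity-based placement heuristic meant to illustrate the general idea behind a name like "PBP", with invented video IDs, popularities, and site capacities.

```python
# Generic popularity-based replica placement heuristic (illustrative only;
# NOT the cited paper's PBP algorithm).

def popularity_based_placement(popularity, capacities):
    """popularity: {video_id: request share}; capacities: {site: replica slots}.
    Greedily assigns the most popular videos to the sites with the most free slots."""
    placement = {site: [] for site in capacities}
    free = dict(capacities)
    for video, _ in sorted(popularity.items(), key=lambda kv: kv[1], reverse=True):
        site = max(free, key=free.get)   # site with the most remaining capacity
        if free[site] == 0:
            break                        # all sites are full
        placement[site].append(video)
        free[site] -= 1
    return placement

if __name__ == "__main__":
    pop = {"v1": 0.5, "v2": 0.3, "v3": 0.15, "v4": 0.05}
    print(popularity_based_placement(pop, {"siteA": 2, "siteB": 1}))
```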
Book ChapterDOI

Internet Cache Location and Design of Content Delivery Networks

TL;DR: Modifications of the CLP that account for the effect of client assignment and cache size on cache hit rate are studied to develop new models for cache location that overcome the limitations of the basic model.
Patent

A network entity for programmably arranging an intermediate node for serving communications between a source node and a target node

TL;DR: In this patent, the authors propose a network entity for programmably arranging an intermediate node to serve communications between a source node and a target node in a communication network comprising intermediate nodes arranged in a plurality of communication paths.
Book ChapterDOI

Managing traffic demand uncertainty in replica server placement with robust optimization

TL;DR: This paper argues that it is often inappropriate to optimize the performance for only a particular set of traffic demands that is assumed accurate, and proposes a scenario-based robust optimization approach to address the replica server placement problem under traffic demand uncertainty.
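As a rough illustration of the scenario-based robust idea (not the cited paper's formulation), the brute-force sketch below picks the replica placement whose worst-case cost over a set of demand scenarios is smallest; the distance and demand data, and the demand-times-distance cost model, are assumptions made for the example.

```python
# Scenario-based robust replica placement, brute force over k-site placements.
from itertools import combinations

def robust_placement(dist, scenarios, k):
    """dist[c][s] = distance from client c to candidate site s;
    scenarios = list of demand vectors (one demand value per client)."""
    n_clients, n_sites = len(dist), len(dist[0])
    best, best_sites = float("inf"), None
    for sites in combinations(range(n_sites), k):
        # Worst-case cost of this placement across all demand scenarios.
        worst = max(
            sum(demand[c] * min(dist[c][s] for s in sites) for c in range(n_clients))
            for demand in scenarios
        )
        if worst < best:
            best, best_sites = worst, sites
    return best_sites, best

if __name__ == "__main__":
    dist = [[1, 4, 6], [5, 2, 3], [6, 5, 1]]          # 3 clients x 3 candidate sites
    scenarios = [[10, 1, 1], [1, 10, 1], [1, 1, 10]]   # uncertain demand vectors
    print(robust_placement(dist, scenarios, k=2))
```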
References
Book

Computers and Intractability: A Guide to the Theory of NP-Completeness

TL;DR: An ongoing quarterly column provides a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and D. S. Johnson in their book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., San Francisco, 1979.
Proceedings Article

Hypertext Transfer Protocol -- HTTP/1.1

TL;DR: The Hypertext Transfer Protocol (HTTP) is an application-level protocol for distributed, collaborative, hypermedia information systems that can be used for many tasks beyond hypertext through extension of its request methods, error codes, and headers.
Proceedings ArticleDOI

Web caching and Zipf-like distributions: evidence and implications

TL;DR: This paper investigates the page request distribution seen by Web proxy caches using traces from a variety of sources and considers a simple model in which Web accesses are independent and the reference probability of documents follows a Zipf-like distribution, suggesting that the various observed properties of hit ratios and temporal locality are indeed inherent to Web accesses observed by proxies.
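For intuition, here is a toy sketch of the independent-reference model with Zipf-like popularity: document i is requested with probability proportional to 1/i^alpha, and a cache holding the C most popular documents achieves a hit ratio equal to their cumulative probability mass, which grows slowly (roughly logarithmically when alpha is near 1) with C. The document count, cache sizes, and alpha below are illustrative assumptions.

```python
# Hit ratio of a cache holding the C most popular documents under a
# Zipf-like popularity law with independent requests (illustrative parameters).

def zipf_hit_ratio(num_docs, cache_size, alpha=0.8):
    weights = [1.0 / (i ** alpha) for i in range(1, num_docs + 1)]
    total = sum(weights)
    return sum(weights[:cache_size]) / total

if __name__ == "__main__":
    for c in (10, 100, 1000, 10000):
        print(c, round(zipf_hit_ratio(100_000, c, alpha=0.8), 3))
```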