Open Access Journal ArticleDOI

Competitive paging algorithms

TLDR
The marking algorithm, a randomized on-line algorithm for the paging problem, is developed, and it is proved that its expected cost on any sequence of requests is within a factor of 2Hk of optimum.
About
This article is published in Journal of Algorithms. The article was published on 1991-12-01 and is currently open access. It has received 489 citations to date. The article focuses on the topics: Page replacement algorithm & K-server problem.


Citations
Posted Content

Paging with Multiple Caches

TL;DR: In this paper, a multiple-cache variant of the classical single-cache paging problem, referred to as the Multiple Cache Paging (MCP) problem, is studied, in which each cache can serve exactly one request.

Practical and Theoretical Issues in Prefetching and Caching (CMU-CS-97-181)

TL;DR: An algorithm with competitive ratio O(log k) on (k + 1)-point spaces, the first poly-logarithmic ratio for this problem, is presented, and an almost-tight lower bound of Ω(log k) for any weighted caching problem on at least k + 1 points is given.
Journal ArticleDOI

On-Line Paging against Adversarially Biased Random Inputs

TL;DR: In this article, the optimal paging ratio for both deterministic and randomized paging strategies is estimated. The exact ratio achieved by LRU is not determined, but an alternate proof of the optimality of LRU is given.
Proceedings ArticleDOI

Online Paging with Heterogeneous Cache Slots

TL;DR: The deterministic upper bound is extended to the weighted variant of All-or-One Paging (a generalization of standard Weighted Paging), showing that it is also Θ(k).
References
Journal ArticleDOI

Amortized efficiency of list update and paging rules

TL;DR: This article shows that move-to-front is within a constant factor of optimum among a wide class of list maintenance rules, and analyzes the amortized complexity of LRU, showing that its efficiency differs from that of the off-line paging rule by a factor that depends on the size of fast memory.
Proceedings ArticleDOI

Probabilistic computations: Toward a unified measure of complexity

TL;DR: Two approaches to the study of the expected running time of algorithms lead naturally to two different definitions of the intrinsic complexity of a problem: the distributional complexity and the randomized complexity, respectively.
Journal ArticleDOI

Competitive snoopy caching

TL;DR: This work presents new on-line algorithms to be used by the caches of snoopy cache multiprocessor systems to decide which blocks to retain and which to drop in order to minimize communication over the bus.
Journal ArticleDOI

Competitive algorithms for server problems

TL;DR: This paper seeks to develop on-line algorithms whose performance on any sequence of requests is as close as possible to the performance of the optimum off-line algorithm.
Proceedings ArticleDOI

Competitive algorithms for on-line problems

TL;DR: This paper presents several general results concerning competitive algorithms, as well as results on specific on-line problems.
Frequently Asked Questions (11)
Q1. What are the contributions mentioned in the paper "Competitive paging algorithms" ?

In this paper, the authors develop the marking algorithm, a randomized on-line algorithm for the paging problem, and prove that its expected cost on any sequence of requests is within a factor of 2Hk of the optimum off-line cost.

Karlin et al. [8] have shown that for two servers in a graph that is an isosceles triangle, the best competitive factor that can be achieved is a constant that approaches e/(e - 1) ≈ 1.582 as the length of the similar sides goes to infinity.

A randomized on-line algorithm may be viewed as basing its actions on the request sequence σ presented to it and on an infinite sequence ρ of independent unbiased random bits.

The marking algorithm is strongly competitive (its competitive factor is Hk) if k = n - 1, but it is not strongly competitive if k < n - 1. 
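
To make the algorithm concrete, below is a minimal Python sketch of a marking-style paging algorithm as summarized in the TLDR above: requested pages are marked, and on a fault an unmarked page is evicted uniformly at random, with all marks cleared when every cached page is marked. The class name MarkingPager, the fault counter, and the example request sequence are illustrative choices, not taken from the paper.

```python
import random

class MarkingPager:
    """Sketch of a marking-style randomized paging algorithm.

    On a request to a page already in the cache, the page is marked.
    On a fault, if every cached page is marked, all marks are cleared
    (a new phase begins); then a uniformly random unmarked page is
    evicted and the requested page is brought in and marked.
    """

    def __init__(self, k, rng=None):
        self.k = k                       # cache size (number of page slots)
        self.cache = set()               # pages currently in fast memory
        self.marked = set()              # marked pages (all are in the cache)
        self.rng = rng or random.Random()
        self.faults = 0

    def request(self, page):
        if page in self.cache:
            self.marked.add(page)        # hit: just mark the page
            return False
        self.faults += 1                 # miss: a page fault occurs
        if len(self.cache) >= self.k:
            unmarked = self.cache - self.marked
            if not unmarked:             # all pages marked: start a new phase
                self.marked.clear()
                unmarked = set(self.cache)
            victim = self.rng.choice(sorted(unmarked))  # evict a random unmarked page
            self.cache.remove(victim)
        self.cache.add(page)
        self.marked.add(page)
        return True

# Example: count faults on a short request sequence with k = 3.
pager = MarkingPager(k=3, rng=random.Random(0))
for p in [1, 2, 3, 4, 1, 2, 5, 1, 2, 3]:
    pager.request(p)
print(pager.faults)
```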

They showed that LRU running with k servers performs within a factor of k/(k - h + 1) of any off-line algorithm with h ≤ k servers, and that this is the minimum competitive factor that can be achieved.
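
As a quick sanity check on the bound just quoted (the notation cost_LRU,k, cost_OPT,h, and σ below is mine, standing for the on-line and off-line costs on a request sequence):

```latex
% Resource-augmented bound quoted above: LRU with k servers against an
% off-line algorithm with h <= k servers, on a request sequence sigma.
\[
  \frac{\mathrm{cost}_{\mathrm{LRU},k}(\sigma)}{\mathrm{cost}_{\mathrm{OPT},h}(\sigma)}
  \;\le\; \frac{k}{k - h + 1}, \qquad h \le k.
\]
% Special case h = k: the bound is k/(k - k + 1) = k, the classical
% k-competitiveness of LRU against an equally equipped adversary.
```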

They showed that no deterministic algorithm for the k-server problem can be better than k-competitive, they gave k-competitive algorithms for the cases k = 2 and k = n - 1, and they conjectured that there exists a k-competitive k-server algorithm for any graph.

The adversary is, however, able to maintain a vector p = (p1, p2, ..., pn) of probabilities, where pi is the probability that vertex i is not covered by a server.

In that proof, deterministic on-line algorithms B(1), B(2), ..., B(m) of type (k, n) were given, and the deterministic on-line algorithm A of type (k, n) was constructed to be c(i)-competitive against B(i) for each i.

If the total expected cost ends up exceeding 1/u, then an arbitrary request is made to an unmarked vertex, and the subphase is over.

During this phase, exactly the vertices of S were requested; since A is lazy, the authors know that at least d' of A's servers were outside of S during the entire phase.

Armed with these tools (the marking and the probability vector), the adversary can generate a sequence such that the expected cost of each phase to A is Hn-1, and the cost to the optimum off-line algorithm is 1.
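
For reference, the harmonic number appearing in the last sentence, and the per-phase accounting it implies, unwind as follows (this is only a restatement of the quoted costs, with Hm denoting the m-th harmonic number):

```latex
% H_{n-1} is the (n-1)-st harmonic number; the ratio below restates the
% quoted per-phase costs: expected cost H_{n-1} for A versus 1 for OPT.
\[
  H_{n-1} \;=\; \sum_{i=1}^{n-1} \frac{1}{i},
  \qquad
  \frac{\mathbb{E}[\text{cost to } A \text{ per phase}]}{\text{off-line cost per phase}}
  \;=\; \frac{H_{n-1}}{1} \;=\; H_{n-1}.
\]
```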