Open Access Journal Article

Competitive paging algorithms

TLDR
The marking algorithm, a randomized on-line algorithm for the paging problem, is developed, and it is proved that its expected cost on any sequence of requests is within a factor of 2Hk of the optimum.
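
Here Hk denotes the k-th harmonic number. Writing OPT(σ) for the optimum off-line cost on a request sequence σ and c for an additive constant independent of σ (notation introduced here for readability, not taken from the article), the stated guarantee can be written as

    H_k = \sum_{i=1}^{k} \frac{1}{i} = 1 + \frac{1}{2} + \cdots + \frac{1}{k},
    \qquad
    \mathbb{E}\big[\mathrm{cost}_{\mathrm{marking}}(\sigma)\big] \;\le\; 2\,H_k \cdot \mathrm{OPT}(\sigma) + c.
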
About
This article is published in Journal of Algorithms. The article was published on 1991-12-01 and is currently open access. It has received 489 citations to date. The article focuses on the topics: Page replacement algorithm & K-server problem.

Citations
Journal Article

Online Metric Tracking and Smoothing

TL;DR: This work considers the online smoothing problem, in which a tracker is required to maintain distance no more than Δ ≥ 0 from a time-varying signal f while minimizing its own movement, and describes a natural randomized algorithm that achieves an O(log b_Δ)-competitive ratio, where b_Δ = max_{x ∈ X} |B_Δ(x)| is the maximum number of points appearing in any ball of radius Δ.
Posted Content

On the Incomparability of Cache Algorithms in Terms of Timing Leakage.

TL;DR: It is shown that leak competitiveness is symmetric in the cache algorithms, which implies that no cache algorithm dominates another in terms of leakage via a program's total execution time, in contrast to performance, where it is known that such dominance relationships exist.
Journal Article

Analysis of simple randomized buffer management for parallel I/O

TL;DR: Buffer management for a D-disk parallel I/O system is considered in the context of randomized placement of data on the disks, and a simple prefetching and caching algorithm, PHASE-LRU, that uses bounded lookahead is described and analyzed.
Book Chapter

Improved space bounds for strongly competitive randomized paging algorithms

TL;DR: This paper addresses the conjecture that there exist strongly competitive randomized paging algorithms using o(k) bookmarks, and proposes an algorithm, denoted Partition2, which is a variant of the Partition algorithm in [3]; while Partition is unbounded in its space requirements, Partition2 uses Θ(k/log k) bookmarks.
Journal Article

Making Cache Monotonic and Consistent

TL;DR: Monotonic Consistent Caching (MCC) as discussed by the authors is a cache scheme for applications that demand consistency and monotonicity, which requires that a transaction-like request always sees a consistent view of the backend database and that writes observed over the cache are not lost.
References
Journal Article

Amortized efficiency of list update and paging rules

TL;DR: This article shows that move-to-front is within a constant factor of optimum among a wide class of list maintenance rules, and analyzes the amortized complexity of LRU, showing that its efficiency differs from that of the off-line paging rule by a factor that depends on the size of fast memory.
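
The move-to-front rule itself is simple enough to state as code. The following Python sketch is illustrative only (the class and method names are chosen here, not by the article, and it assumes the accessed item is already in the list): accessing an item costs its 1-based position, and the item is then moved to the front.

    class MoveToFrontList:
        """Self-organizing list: each access moves the requested item to the front."""

        def __init__(self, items):
            self.items = list(items)

        def access(self, x):
            # Cost model: finding x costs its 1-based position in the list.
            pos = self.items.index(x) + 1
            # Move-to-front rule: bring x to the head of the list.
            self.items.remove(x)
            self.items.insert(0, x)
            return pos

In these terms, the article's result is that the total cost incurred by this rule on any access sequence is within a constant factor of the cost of any other rule in the class of list maintenance rules it considers, including rules that see the whole sequence in advance.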
Proceedings Article

Probabilistic computations: Toward a unified measure of complexity

TL;DR: Two approaches to the study of the expected running time of algorithms lead naturally to two different definitions of the intrinsic complexity of a problem, which are the distributional complexity and the randomized complexity, respectively.
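
This reference is the source of what is now known as Yao's minimax principle, which relates the two notions of complexity mentioned above. Under suitable finiteness assumptions it can be written as follows, where μ ranges over input distributions, D over deterministic algorithms, R over randomized algorithms, and x over inputs of a given size (the notation is introduced here for readability):

    \max_{\mu}\; \min_{D}\; \mathbb{E}_{x \sim \mu}\big[\mathrm{cost}(D, x)\big]
    \;=\;
    \min_{R}\; \max_{x}\; \mathbb{E}\big[\mathrm{cost}(R, x)\big],

i.e., the distributional complexity of a problem equals its randomized complexity, which is the typical route to lower bounds for randomized on-line algorithms.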
Journal Article

Competitive snoopy caching

TL;DR: This work presents new on-line algorithms to be used by the caches of snoopy cache multiprocessor systems to decide which blocks to retain and which to drop in order to minimize communication over the bus.
Journal Article

Competitive algorithms for server problems

TL;DR: This paper seeks to develop on-line algorithms whose performance on any sequence of requests is as close as possible to the performance of the optimum off-line algorithm.
Proceedings Article

Competitive algorithms for on-line problems

TL;DR: This paper presents several general results concerning competitive algorithms, as well as results on specific on-line problems.
Frequently Asked Questions (11)
Q1. What are the contributions mentioned in the paper "Competitive paging algorithms"?

In this paper, the authors develop the marking algorithm, a randomized on-line algorithm for the paging problem, and prove that its expected cost on any sequence of requests is within a factor of 2Hk of the optimum off-line cost.
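
As a concrete illustration, here is a minimal Python sketch of marking-style paging for a fast memory of size k. It is a paraphrase of the standard description rather than code from the paper, and the names (MarkingCache, request) and the fault-counting convention are choices made here: requested pages are marked; on a fault with every cached page marked, a new phase begins and all marks are cleared; the evicted page is chosen uniformly at random among the unmarked cached pages.

    import random

    class MarkingCache:
        """Randomized marking algorithm for paging with a fast memory of size k."""

        def __init__(self, k, rng=None):
            self.k = k
            self.rng = rng or random.Random()
            self.cache = set()    # pages currently in fast memory
            self.marked = set()   # pages marked during the current phase

        def request(self, page):
            """Serve one page request; return True if it causes a page fault."""
            if page in self.cache:
                self.marked.add(page)
                return False
            # Page fault. If every cached page is marked, a new phase begins.
            if len(self.cache) >= self.k and not (self.cache - self.marked):
                self.marked.clear()
            if len(self.cache) >= self.k:
                # Evict a page chosen uniformly at random among unmarked pages.
                victim = self.rng.choice(list(self.cache - self.marked))
                self.cache.remove(victim)
            self.cache.add(page)
            self.marked.add(page)
            return True

    # Example: counting faults on a short request sequence.
    cache = MarkingCache(k=3)
    faults = sum(cache.request(p) for p in [1, 2, 3, 4, 1, 2, 5, 1, 2, 3])

In these terms, the paper's guarantee is that the expected number of faults on any request sequence is within a factor of 2Hk of the number incurred by the optimum off-line strategy.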

Karlin et al. [8] have shown that for two servers in a graph that is an isosceles triangle, the best competitive factor that can be achieved is a constant that approaches e/(e - 1) ≈ 1.582 as the length of the similar sides goes to infinity. 

A randomized on-line algorithm may be viewed as basing its actions on the request sequence σ presented to it and on an infinite sequence ρ of independent unbiased random bits. 

The marking algorithm is strongly competitive (its competitive factor is Hk) if k = n - 1, but it is not strongly competitive if k < n - 1. 

They showed that LRU running with k servers performs within a factor of k/(k - h + 1) of any off-line algorithm with h ≤ k servers and that this is the minimum competitive factor that can be achieved. 
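
Two illustrative evaluations of this factor (the h = k/2 case is an example chosen here, not a claim from the paper): with equal resources the bound reduces to the familiar k-competitiveness of LRU, and giving the off-line algorithm only half as many servers brings it below 2,

    \frac{k}{k - h + 1}\bigg|_{h = k} = k,
    \qquad
    \frac{k}{k - h + 1}\bigg|_{h = k/2} = \frac{k}{k/2 + 1} < 2.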

They showed that no deterministic algorithm for the k-server problem can be better than k-competitive, they gave k-competitive algorithms for the cases k = 2 and k = n - 1, and they conjectured that there exists a k-competitive k-server algorithm for any graph. 

The adversary is, however, able to maintain a vector p = (p_1, p_2, ..., p_n) of probabilities, where p_i is the probability that vertex i is not covered by a server. 

In that proof, deterministic on-line algorithms B(1), B(2), ..., B(m) of type (k, n) were given, and the deterministic on-line algorithm A of type (k, n) was constructed to be a(i)-competitive against B(i) for each i. 

If the total expected cost ends up exceeding 1/u, then an arbitrary request is made to an unmarked vertex, and the subphase is over. 

During this phase exactly the vertices of S were requested, so since A is lazy, the authors know that at least d’ of A’s servers were outside of S during the entire phase. 

Armed with these tools (the marking and the probability vector), the adversary can generate a sequence such that the expected cost of each phase to A is Hn-1, and the cost to the optimum off-line algorithm is 1.
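
A rough accounting of the argument sketched above, assuming one subphase for each value u = n-1, n-2, ..., 1 of the number of vertices still unmarked (this indexing is an interpretation of the excerpts, not a quotation from the paper): each subphase costs the on-line algorithm A about 1/u in expectation, while the off-line algorithm pays only 1 per phase, so

    \mathbb{E}\big[\text{cost of a phase to } A\big] \;\approx\; \sum_{u=1}^{n-1} \frac{1}{u} \;=\; H_{n-1}.

For paging this is applied with k = n - 1, where H_{n-1} = H_k, giving the Hk lower bound that the marking algorithm's guarantee matches to within its factor of 2.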