Topic

Paging

About: Paging is a research topic. Over the lifetime, 8033 publications have been published within this topic receiving 107258 citations.


Papers
Book
01 Jan 1998
TL;DR: This book develops competitive analysis as a framework for online decision making under uncertainty, covering deterministic and randomized algorithms for list accessing and paging, metrical task systems, the k-server problem, and applications such as load balancing, call admission, and portfolio selection.
Abstract: Preface
1. Introduction to competitive analysis: the list accessing problem
2. Introduction to randomized algorithms: the list accessing problem
3. Paging: deterministic algorithms
4. Paging: randomized algorithms
5. Alternative models for paging: beyond pure competitive analysis
6. Game theoretic foundations
7. Request-answer games
8. Competitive analysis and zero-sum games
9. Metrical task systems
10. The k-server problem
11. Randomized k-server algorithms
12. Load-balancing
13. Call admission and circuit-routing
14. Search, trading and portfolio selection
15. Competitive analysis and decision making under uncertainty
Appendices. Bibliography. Index.

2,615 citations
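
To make the competitive-analysis framework surveyed in this book concrete, the sketch below (my own illustration, not material from the book) simulates a deterministic online paging rule, LRU, against the offline optimum, Belady's MIN, which evicts the page whose next request lies farthest in the future; comparing fault counts on the same request sequence gives an empirical view of the competitive ratio. The request sequence and cache size here are arbitrary choices.

from collections import OrderedDict

def lru_faults(requests, k):
    # Online LRU: on a fault with a full cache, evict the least recently used page.
    cache = OrderedDict()
    faults = 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)
        else:
            faults += 1
            if len(cache) == k:
                cache.popitem(last=False)
            cache[page] = True
    return faults

def min_faults(requests, k):
    # Offline MIN (Belady): evict the cached page whose next request is farthest away.
    cache, faults = set(), 0
    for i, page in enumerate(requests):
        if page in cache:
            continue
        faults += 1
        if len(cache) == k:
            def next_use(p):
                try:
                    return requests.index(p, i + 1)
                except ValueError:
                    return float("inf")  # never requested again
            cache.remove(max(cache, key=next_use))
        cache.add(page)
    return faults

requests = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(requests, 3), min_faults(requests, 3))

For a cache of k frames, LRU is k-competitive, so its fault count exceeds the optimum by at most a factor of k plus an additive constant on any request sequence.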

Journal ArticleDOI
TL;DR: This article shows that move-to-front is within a constant factor of optimum among a wide class of list maintenance rules, and analyzes the amortized complexity of LRU, showing that its efficiency differs from that of the off-line paging rule by a factor that depends on the size of fast memory.
Abstract: In this article we study the amortized efficiency of the “move-to-front” and similar rules for dynamically maintaining a linear list. Under the assumption that accessing the ith element from the front of the list takes t(i) time, we show that move-to-front is within a constant factor of optimum among a wide class of list maintenance rules. Other natural heuristics, such as the transpose and frequency count rules, do not share this property. We generalize our results to show that move-to-front is within a constant factor of optimum as long as the access cost is a convex function. We also study paging, a setting in which the access cost is not convex. The paging rule corresponding to move-to-front is the “least recently used” (LRU) replacement rule. We analyze the amortized complexity of LRU, showing that its efficiency differs from that of the off-line paging rule (Belady's MIN algorithm) by a factor that depends on the size of fast memory. No on-line paging algorithm has better amortized performance.

2,378 citations
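
As a small illustration of the list-accessing cost model in this article (this is not code from the article itself): accessing the element at position i from the front costs i, and move-to-front reorders the list after every access. The sketch below compares its total cost against a list that is never reordered; the item set and request sequence are arbitrary.

def mtf_cost(requests, items):
    # Move-to-front: pay the 1-based position of the item, then move it to the front.
    lst = list(items)
    total = 0
    for x in requests:
        i = lst.index(x)
        total += i + 1
        lst.insert(0, lst.pop(i))
    return total

def static_cost(requests, items):
    # Static list: pay the 1-based position, never reorder.
    lst = list(items)
    return sum(lst.index(x) + 1 for x in requests)

items = ["a", "b", "c", "d", "e"]
requests = ["e", "e", "e", "d", "d", "a", "e", "e", "d", "d"]
print(mtf_cost(requests, items), static_cost(requests, items))

The article's result is stronger than this comparison suggests: move-to-front is within a constant factor of the best rule in a wide class of list maintenance rules, not merely better than a static ordering.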

Patent
08 Mar 1999
TL;DR: A location reporting paging communication system comprising space satellites, ground stations, and a remote receiving unit adapted to resolve a global position from signals transmitted from a communication transmitter is described in this patent.
Abstract: A location reporting paging communication system comprising space satellites, ground stations and a remote receiving unit adapted to resolve a global position from signals transmitted from a communication transmitter. The subscriber in possession of the remote receiving unit updates the paging network with global positioning information. A caller paging a subscriber in possession of the remote receiving unit may request the global location of the remote receiving unit. The paging network could divulge or block such information from a caller depending on the requirements of the subscriber.

1,272 citations
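
The system described above can be sketched as a toy data model; everything below (class names, methods, fields) is hypothetical and introduced only to illustrate the described behavior: the remote receiving unit reports its resolved global position to the paging network, and the network divulges or blocks that position for callers according to the subscriber's requirements.

from dataclasses import dataclass, field

@dataclass
class PagingNetwork:
    # subscriber id -> last reported (latitude, longitude)
    locations: dict = field(default_factory=dict)
    # subscriber id -> whether the location may be divulged to callers
    share_location: dict = field(default_factory=dict)

    def update_location(self, subscriber, lat, lon):
        # The remote receiving unit resolves a global position and updates the network.
        self.locations[subscriber] = (lat, lon)

    def request_location(self, caller, subscriber):
        # Divulge or block the position depending on the subscriber's requirements.
        if self.share_location.get(subscriber, False):
            return self.locations.get(subscriber)
        return None  # blocked

network = PagingNetwork()
network.share_location["subscriber-1"] = True
network.update_location("subscriber-1", 47.61, -122.33)
print(network.request_location("caller-1", "subscriber-1"))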

Patent
16 Sep 1998
TL;DR: A location reporting paging communication system comprising space satellites, ground stations, and a remote receiving unit adapted to resolve a global position from signals transmitted from a communication transmitter is described in this patent.
Abstract: A location reporting paging communication system comprising space satellites, ground stations and a remote receiving unit adapted to resolve a global position from signals transmitted from a communication transmitter. The subscriber in possession of the remote receiving unit updates the paging network with global positioning information. A caller paging a subscriber in possession of the remote receiving unit may request the global location of the remote receiving unit. The paging network could divulge or block such information from a caller depending on the requirements of the subscriber.

1,162 citations

Proceedings ArticleDOI
11 Oct 2009
TL;DR: A file system and a hardware architecture designed around the properties of persistent, byte-addressable memory; the file system provides strong reliability guarantees and offers better performance than traditional file systems, even when both are run on top of byte-addressable, persistent memory.
Abstract: Modern computer systems have been built around the assumption that persistent storage is accessed via a slow, block-based interface. However, new byte-addressable, persistent memory technologies such as phase change memory (PCM) offer fast, fine-grained access to persistent storage. In this paper, we present a file system and a hardware architecture that are designed around the properties of persistent, byte-addressable memory. Our file system, BPFS, uses a new technique called short-circuit shadow paging to provide atomic, fine-grained updates to persistent storage. As a result, BPFS provides strong reliability guarantees and offers better performance than traditional file systems, even when both are run on top of byte-addressable, persistent memory. Our hardware architecture enforces atomicity and ordering guarantees required by BPFS while still providing the performance benefits of the L1 and L2 caches. Since these memory technologies are not yet widely available, we evaluate BPFS on DRAM against NTFS on both a RAM disk and a traditional disk. Then, we use microarchitectural simulations to estimate the performance of BPFS on PCM. Despite providing strong safety and consistency guarantees, BPFS on DRAM is typically twice as fast as NTFS on a RAM disk and 4-10 times faster than NTFS on disk. We also show that BPFS on PCM should be significantly faster than a traditional disk-based file system.

935 citations
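
Short-circuit shadow paging is specific to BPFS and is detailed in the paper; the sketch below only illustrates the general copy-on-write shadow-paging idea it builds on, and all names and structure are my own simplification: an update copies the blocks along the path from the changed data up to the root, and the new tree is committed by a single atomic root-pointer swap. BPFS's short-circuit optimization, which commits sufficiently small writes in place without copying up to the root, is not modeled here.

class Block:
    # A block in a simplified shadow-paged tree: leaves hold data, inner blocks hold children.
    def __init__(self, data=None, children=None):
        self.data = data
        self.children = dict(children or {})

def cow_update(block, path, data):
    # Copy-on-write update: rebuild only the blocks along `path`; untouched
    # subtrees are shared between the old and new versions of the tree.
    if not path:
        return Block(data=data)
    new_block = Block(data=block.data, children=block.children)
    head, rest = path[0], path[1:]
    new_block.children[head] = cow_update(block.children.get(head, Block()), rest, data)
    return new_block

root = Block(children={"dir": Block(children={"file": Block(data=b"old")})})
new_root = cow_update(root, ["dir", "file"], b"new")
root = new_root  # the single pointer swap is the atomic commit point; the old tree stays intact until then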


Network Information
Related Topics (5)
Communications system: 88.1K papers, 1M citations, 80% related
Wireless: 133.4K papers, 1.9M citations, 79% related
Server: 79.5K papers, 1.4M citations, 79% related
Wireless network: 122.5K papers, 2.1M citations, 78% related
Scheduling (computing): 78.6K papers, 1.3M citations, 78% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    18
2022    62
2021    132
2020    347
2019    368
2018    393