Open Access Journal Article

Second-level buffer cache management

TLDR
This work investigates multiple approaches to effectively manage second-level buffer caches and reports a new local algorithm called multi-queue (MQ) that performs better than nine tested alternative algorithms for second-level buffer caches, and a set of global algorithms that manage a multilevel buffer cache hierarchy globally and significantly improve second-level buffer cache hit ratios over the corresponding local algorithms.
Abstract
Buffer caches are commonly used in servers to reduce the number of slow disk accesses or network messages. These buffer caches form a multilevel buffer cache hierarchy. In such a hierarchy, second-level buffer caches have different access patterns from first-level buffer caches because accesses to a second-level buffer cache are actually misses from a first-level buffer cache. Therefore, commonly used cache management algorithms such as the least recently used (LRU) replacement algorithm that work well for single-level buffer caches may not work well for second-level buffer caches. We investigate multiple approaches to effectively manage second-level buffer caches. In particular, we report our research results in 1) second-level buffer cache access pattern characterization, 2) a new local algorithm called multi-queue (MQ) that performs better than nine tested alternative algorithms for second-level buffer caches, 3) a set of global algorithms that manage a multilevel buffer cache hierarchy globally and significantly improve second-level buffer cache hit ratios over corresponding local algorithms, and 4) implementation and evaluation of these algorithms in a real storage system connected with commercial database servers (Microsoft SQL Server and Oracle) running industrial-strength online transaction processing benchmarks.
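
To make that filtering effect concrete, the following is a minimal sketch (mine, not the paper's; all sizes and names are illustrative) of a two-level LRU hierarchy: the L2 cache only ever sees blocks that miss in L1, so much of the recency signal that LRU depends on is stripped away before requests reach L2.

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()  # block id -> None, kept in LRU order

    def access(self, block):
        """Return True on a hit; on a miss, insert the block, evicting the LRU entry."""
        if block in self.blocks:
            self.blocks.move_to_end(block)   # refresh recency
            return True
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)  # evict least recently used
        self.blocks[block] = None
        return False

def run_hierarchy(trace, l1_size, l2_size):
    """Feed a block trace through L1; only L1 misses ever reach L2."""
    l1, l2 = LRUCache(l1_size), LRUCache(l2_size)
    l1_hits = l2_hits = 0
    for block in trace:
        if l1.access(block):
            l1_hits += 1
        elif l2.access(block):
            l2_hits += 1
    return l1_hits / len(trace), l2_hits / len(trace)
```

On traces with strong temporal locality, the L2 hit ratio in such a setup is typically far below what the same cache would achieve standalone, which is what motivates L2-specific algorithms such as MQ.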



Citations
Proceedings Article

CLOCK-Pro: an effective improvement of the CLOCK replacement

TL;DR: Inspired by the I/O buffer cache replacement algorithm LIRS, an improved CLOCK replacement policy called CLOCK-Pro is proposed, which works in a similar fashion to CLOCK with a VM-affordable cost.
Proceedings Article

I-CASH: Intelligently Coupled Array of SSD and HDD

TL;DR: Numerical results on standard benchmarks show that I-CASH reduces the average I/O response time by an order of magnitude compared to existing disk I/O architectures such as RAID and SSD/HDD storage hierarchy, and provides up to 2.8x speedup over state-of-the-art pure SSD storage.
Proceedings Article

DULO: an effective buffer cache management scheme to exploit both temporal and spatial locality

TL;DR: Leveraging the filtering effect of the buffer cache, DULO can influence the I/O request stream by making the requests passed to disk more sequential, significantly increasing the effectiveness of I/O scheduling and prefetching for disk performance improvements.
Journal Article

SSD bufferpool extensions for database systems

TL;DR: This work presents a technique for using solid-state storage as a caching layer between RAM and hard disks in database management systems; by caching frequently accessed data, disk I/O is reduced.
Proceedings Article

WOW: wise ordering for writes - combining spatial and temporal locality in non-volatile caches

TL;DR: A new algorithm, namely, Wise Ordering for Writes (WOW), is proposed for write cache management that effectively combines and balances temporal and spatial locality and has better or comparable peak throughput to the best of CSCAN and LRW across a wide gamut of write cache sizes and workload configurations.
References
Journal Article

Amortized efficiency of list update and paging rules

TL;DR: This article shows that move-to-front is within a constant factor of optimum among a wide class of list maintenance rules, and analyzes the amortized complexity of LRU, showing that its efficiency differs from that of the off-line paging rule by a factor that depends on the size of fast memory.
Journal Article

A study of replacement algorithms for a virtual-storage computer

TL;DR: One of the basic limitations of a digital computer is the size of its available memory; providing the programmer with a sufficiently large address range can overcome this limitation, assuming that means are provided for automatic execution of the memory-overlay functions.
Journal Article

Cache Memories

TL;DR: Specific aspects of cache memories investigated include: the cache fetch algorithm (demand versus prefetch), the placement and replacement algorithms, line size, store-through versus copy-back updating of main memory, cold-start versus warm-start miss ratios, multicache consistency, the effect of input/output through the cache, the behavior of split data/instruction caches, and cache size.
Proceedings Article

Improving direct-mapped cache performance by the addition of a small fully-associative cache and prefetch buffers

TL;DR: In this article, a hardware technique to improve the performance of caches is presented, in which a small fully-associative cache between a cache and its refill path holds prefetched data so that it is not placed directly in the cache.
Journal Article

Evaluation techniques for storage hierarchies

TL;DR: A new and efficient method is presented for determining, in one pass of an address trace, performance measures for a large class of demand-paged, multilevel storage systems utilizing a variety of mapping schemes and replacement algorithms.
Frequently Asked Questions (14)
Q1. What are the contributions mentioned in the paper "Second-level buffer cache management" ?

This paper investigates multiple approaches to effectively manage second-level buffer caches. 

As the reload threshold value increases, the number of reloads is significantly reduced, leading to less contention on disks. 

The bandwidth of data transfers between disk and host memory is about 15 Mbytes/sec and the access latency for random read/writes is about 9 milliseconds. 
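
(A rough back-of-the-envelope consequence of those numbers: fetching one 8 Kbyte block costs about 9 ms of latency plus 8 KB / 15 MB/s ≈ 0.5 ms of transfer time, so random disk accesses are dominated by latency, which is what makes buffer cache hits valuable.)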

In their study, the authors use 8 Kbytes as the cache block size for their access pattern analysis and their experimental evaluation of various algorithms. 

Since an L1 buffer cache evicts a clean block to make space for a new block, the first method needs to demote (send) the evicted block to an L2 buffer cache before replacing it. 
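
A minimal sketch of that demotion path (my illustration, not the authors' code; all identifiers are hypothetical):

```python
from collections import OrderedDict

class DemoteAwareL2:
    """Hypothetical L2 cache that accepts blocks demoted from an L1 client."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()

    def insert_demoted(self, block):
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)    # make room in L2 itself
        self.blocks[block] = None

def l1_insert(l1_blocks, l1_capacity, l2, new_block):
    """Insert new_block into an L1 cache (an OrderedDict in LRU order),
    demoting the clean LRU victim to L2 before replacing it."""
    if len(l1_blocks) >= l1_capacity:
        victim, _ = l1_blocks.popitem(last=False)  # clean block being evicted
        l2.insert_demoted(victim)                  # demote (send) it to L2 first
    l1_blocks[new_block] = None
```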

An L2 buffer cache can use a data structure called a client content tracking (CCT) table to record the current disk blocks (diskID; blockNo) that reside in different memory locations of an L1 buffer cache. 
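
A sketch of what such a CCT table might look like (method and field names are my assumptions; the text above only specifies entries keyed by (diskID, blockNo) that record L1 memory locations):

```python
class CCTTable:
    """Client content tracking table kept by an L2 buffer cache (sketch)."""
    def __init__(self):
        self.entries = {}  # (disk_id, block_no) -> L1 memory location

    def record(self, disk_id, block_no, l1_location):
        """Called when the L1 client caches a block."""
        self.entries[(disk_id, block_no)] = l1_location

    def remove(self, disk_id, block_no):
        """Called when the L1 client evicts or demotes the block."""
        self.entries.pop((disk_id, block_no), None)

    def resides_in_l1(self, disk_id, block_no):
        """Lets L2 avoid duplicating blocks the L1 already holds."""
        return (disk_id, block_no) in self.entries
```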

Each entry of the history buffer occupies fewer than 32 bytes so that the memory requirement for the history buffer is quite small, less than 0.5 percent of the L2 buffer cache size. 
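
(A rough check of that figure, assuming about one history entry per L2 cache block: with the 8 Kbyte blocks used in the study, 32 bytes per entry is 32/8192 ≈ 0.4 percent of the cache size, consistent with the stated bound.)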

MQ also uses a history buffer Qout, similar to the one in the 2Q algorithm [20], to remember the access frequencies of recently evicted blocks for some period of time. 
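
Based only on the description above, here is a minimal Python sketch of the MQ structure: several LRU queues indexed by the logarithm of a block's access frequency, plus a bounded Qout that remembers evicted blocks' frequencies. The queue count, the Qout size, and the omission of MQ's lifetime-based demotion between queues are all simplifications of mine.

```python
from collections import OrderedDict
from math import log2

class MQCache:
    """Sketch of MQ: m LRU queues by frequency class plus a Qout history."""
    def __init__(self, capacity, num_queues=8, qout_size=4096):
        self.capacity = capacity
        self.num_queues = num_queues
        self.queues = [OrderedDict() for _ in range(num_queues)]
        self.where = {}            # block -> index of the queue holding it
        self.freq = {}             # block -> access frequency
        self.qout = OrderedDict()  # evicted block -> remembered frequency
        self.qout_size = qout_size

    def _level(self, f):
        return min(int(log2(f)), self.num_queues - 1)

    def _evict(self):
        for q in self.queues:      # LRU block of the lowest non-empty queue
            if q:
                victim, _ = q.popitem(last=False)
                break
        f = self.freq.pop(victim)
        del self.where[victim]
        self.qout[victim] = f      # remember its frequency in Qout
        if len(self.qout) > self.qout_size:
            self.qout.popitem(last=False)

    def access(self, block):
        hit = block in self.where
        if hit:
            self.queues[self.where[block]].pop(block)
            self.freq[block] += 1
        else:
            if len(self.where) >= self.capacity:
                self._evict()
            # a re-fetched block resumes its remembered frequency, so a
            # frequently accessed block regains its queue level quickly
            self.freq[block] = self.qout.pop(block, 0) + 1
        level = self._level(self.freq[block])
        self.queues[level][block] = None   # insert at MRU end of its queue
        self.where[block] = level
        return hit
```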

If different L1s access different data (which is typically the case in parallel databases such as Microsoft SQL Server), one method is to divide an L2 buffer cache into multiple partitions, one for each L1. 
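
One way to picture that partitioning (illustrative only, reusing the MQCache sketch above; client names and sizes are made up):

```python
def make_partitioned_l2(client_ids, total_blocks):
    """Split an L2 cache into equal per-client partitions (sketch)."""
    per_client = total_blocks // len(client_ids)
    return {cid: MQCache(capacity=per_client) for cid in client_ids}

l2 = make_partitioned_l2(["sql-node-1", "sql-node-2"], total_blocks=16384)
l2["sql-node-1"].access(("disk0", 42))  # each L1's misses go only to its own partition
```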

The first trace is collected by setting the Oracle buffer cache to be 128 MBytes, whereas the latter is collected with 16 MBytes of Oracle buffer cache. 

For all four traces, the access percentage curves decrease more slowly than the block percentage curves, indicating that a large percentage of accesses are to a small percentage of blocks. 

Another study, conducted by Willick et al., demonstrated that the Frequency-Based Replacement (FBR) algorithm performs better for back-end disk caches than locality-based replacement algorithms such as LRU [47], but it did not examine disk cache access patterns to explain those results. 

Cache replacement policies have been intensively studied in various contexts in the past, including processor caches [40], paged virtual memory systems [42], [6], and disk caches [41]. 

The performance of a local replacement algorithm at L2 buffer caches primarily depends on how well it can satisfy the lifetime property.