Proceedings Article

Mortar: filling the gaps in data center memory

TL;DR
By expanding and contracting the data store size based on the free memory available, Mortar improves average response time of a web application by up to 35% compared to a fixed size memcached deployment, and improves overall video streaming performance by 45% through prefetching.
Abstract
Data center servers are typically overprovisioned, leaving spare memory and CPU capacity idle to handle unpredictable workload bursts by the virtual machines running on them. While this allows for fast hotspot mitigation, it is also wasteful. Unfortunately, making use of spare capacity without impacting active applications is particularly difficult for memory, since it typically must be allocated in coarse chunks over long timescales. In this work we propose repurposing the poorly utilized memory in a data center to host a volatile data store that is managed by the hypervisor. We present two uses for our Mortar framework: as a cache for prefetching disk blocks, and as an application-level distributed cache that follows the memcached protocol. Both prototypes use the framework to ask the hypervisor to store useful but recoverable data within its free memory pool. This allows the hypervisor to control eviction policies and prioritize access to the cache. We demonstrate the benefits of our prototypes using realistic web applications and disk benchmarks, as well as memory traces gathered from live servers in our university's IT department. By expanding and contracting the data store size based on the free memory available, Mortar improves the average response time of a web application by up to 35% compared to a fixed-size memcached deployment, and improves overall video streaming performance by 45% through prefetching.
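
Below is a minimal Python sketch of the elastic-cache idea described in the abstract: a volatile, LRU-managed key-value store whose capacity is periodically resized to track the host's spare memory, so cached data can always be dropped in favor of active VMs. The class name, the resize interface, and the byte-based sizing policy are illustrative assumptions, not Mortar's actual hypervisor API.

```python
# Hypothetical sketch of an elastic, evict-anytime cache (not the Mortar API).
from collections import OrderedDict


class ElasticCache:
    def __init__(self, capacity_bytes: int):
        self._capacity = capacity_bytes
        self._used = 0
        self._entries = OrderedDict()  # key -> value bytes, in LRU order

    def put(self, key: str, value: bytes) -> None:
        # Stored data must be recoverable elsewhere (disk, backend DB), so it
        # is always safe to drop entries under memory pressure.
        if key in self._entries:
            self._used -= len(self._entries.pop(key))
        self._entries[key] = value
        self._used += len(value)
        self._evict_to_fit()

    def get(self, key: str):
        value = self._entries.get(key)
        if value is not None:
            self._entries.move_to_end(key)  # mark as recently used
        return value

    def resize(self, free_memory_bytes: int) -> None:
        # Called periodically with the host's spare memory; the cache grows or
        # shrinks accordingly, so active VMs always take priority.
        self._capacity = free_memory_bytes
        self._evict_to_fit()

    def _evict_to_fit(self) -> None:
        while self._used > self._capacity and self._entries:
            _, victim = self._entries.popitem(last=False)  # LRU victim
            self._used -= len(victim)


if __name__ == "__main__":
    cache = ElasticCache(capacity_bytes=64)
    cache.put("block:17", b"x" * 40)
    cache.put("block:42", b"y" * 40)      # exceeds capacity: evicts block:17
    cache.resize(free_memory_bytes=16)    # memory pressure: evicts block:42
    print(cache.get("block:17"), cache.get("block:42"))  # None None
```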


Citations
Proceedings Article

Welcome to zombieland: practical and energy-efficient memory disaggregation in a datacenter

TL;DR: An effortless way for disaggregating the CPU-memory couple, two of the most important resources in cloud computing, is proposed and can improve the energy efficiency of state-of-the-art consolidation techniques by up to 86%, with minimal additional complexity.
Proceedings Article

LAMA: optimized locality-aware memory allocation for key-value cache

TL;DR: Locality-Aware Memory Allocation (LAMA) minimizes the miss ratio and the average response time of Memcached requests by analyzing the locality of the requests and then repartitioning the memory. A simplified sketch of the repartitioning step follows.
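
As a rough illustration only (not the LAMA algorithm itself, which derives miss-ratio curves from footprint-based locality analysis of the request trace and partitions memory more carefully), the sketch below greedily assigns fixed-size memory units to whichever slab class gains the largest marginal miss reduction; the per-class miss-ratio curves are assumed to be given.

```python
# Greedy memory repartitioning across slab classes, given miss-ratio curves.
def repartition(miss_curves, total_units):
    """miss_curves[c][u] = expected misses for class c with u memory units."""
    alloc = {c: 0 for c in miss_curves}
    for _ in range(total_units):
        best_class, best_gain = None, 0.0
        for c, curve in miss_curves.items():
            u = alloc[c]
            if u + 1 < len(curve):
                gain = curve[u] - curve[u + 1]  # misses avoided by one more unit
                if gain > best_gain:
                    best_class, best_gain = c, gain
        if best_class is None:
            break
        alloc[best_class] += 1
    return alloc


if __name__ == "__main__":
    curves = {
        "64B": [1000, 400, 250, 240, 239],  # hot, small objects
        "1KB": [500, 480, 460, 440, 430],   # colder, larger objects
    }
    print(repartition(curves, total_units=5))  # {'64B': 2, '1KB': 3}
```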
Journal Article

Automatic memory-based vertical elasticity and oversubscription on cloud platforms

TL;DR: A memory management framework for on-premises clouds that features live migration to safely enable transient oversubscription of physical resources in a Cloud Management Platform (CMP), and a memory oversubscription framework for CMPs, are described.
Journal Article

Scatter-Gather Live Migration of Virtual Machines

TL;DR: The Scatter-Gather implementation in the KVM/QEMU platform reduces the eviction time by up to a factor of 6 against traditional pre-copy and post-copy while maintaining comparable total migration time when the destination is slower than the source.
References
Journal Article

Xen and the art of virtualization

TL;DR: Xen is an x86 virtual machine monitor that allows multiple commodity operating systems to share conventional hardware in a safe and resource-managed fashion without sacrificing either performance or functionality, and it considerably outperforms competing commercial and freely available solutions.
Book

The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines

TL;DR: Describes the architecture of WSCs, the main factors influencing their design, operation, and cost structure, and the characteristics of their software base.
Journal Article

Memory resource management in VMware ESX server

TL;DR: Several novel ESX Server mechanisms and policies for managing memory are introduced, including a ballooning technique that reclaims the pages considered least valuable by the operating system running in a virtual machine, and an idle memory tax that achieves efficient memory utilization.
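
The idle memory tax can be sketched as follows: each VM's shares-per-page ratio is penalized for idle pages by a factor k = 1/(1 − tax rate), and memory is reclaimed from the VM with the lowest adjusted ratio. The 75% tax rate and the victim-selection loop below are illustrative; the ballooning mechanism that actually reclaims the pages is not modeled.

```python
# Sketch of idle-memory-tax victim selection (illustrative values).
def adjusted_shares_per_page(shares, pages, active_fraction, tax_rate=0.75):
    k = 1.0 / (1.0 - tax_rate)  # idle pages are charged k times more
    effective_pages = pages * (active_fraction + k * (1.0 - active_fraction))
    return shares / effective_pages


def pick_reclaim_victim(vms, tax_rate=0.75):
    # Reclaim from the VM with the lowest adjusted shares-per-page ratio.
    return min(
        vms,
        key=lambda vm: adjusted_shares_per_page(
            vm["shares"], vm["pages"], vm["active_fraction"], tax_rate
        ),
    )["name"]


if __name__ == "__main__":
    vms = [
        {"name": "busy-db",  "shares": 1000, "pages": 4096, "active_fraction": 0.9},
        {"name": "idle-web", "shares": 1000, "pages": 4096, "active_fraction": 0.1},
    ]
    # With equal shares and allocations, the mostly idle VM is reclaimed first.
    print(pick_reclaim_victim(vms))  # idle-web
```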
Proceedings Article

Scaling Memcache at Facebook

TL;DR: This paper describes how Facebook leverages memcached as a building block to construct and scale a distributed key-value store that supports the world's largest social network.
Journal Article

Difference engine: harnessing memory redundancy in virtual machines

TL;DR: Difference Engine, an extension to the Xen VMM, supports both sub-page and full-page sharing and demonstrates substantial memory savings (up to 65%) across VMs running disparate workloads.
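
A toy version of full-page sharing: pages with identical contents are detected by hashing and mapped to a single stored copy. Difference Engine additionally shares similar (not just identical) pages via sub-page patching and compresses cold pages, which this sketch omits; a real VMM would also verify candidate matches byte-for-byte rather than trusting the hash alone.

```python
# Content-hash-based full-page deduplication (toy example).
import hashlib


def share_pages(pages):
    """pages: list of bytes objects. Returns (deduplicated store, page map)."""
    unique = {}    # content hash -> index into the deduplicated store
    store = []     # one copy of each distinct page
    page_map = []  # original page index -> index in store
    for page in pages:
        digest = hashlib.sha256(page).hexdigest()
        if digest not in unique:
            unique[digest] = len(store)
            store.append(page)
        page_map.append(unique[digest])
    return store, page_map


if __name__ == "__main__":
    zero = bytes(4096)
    pages = [zero, b"A" * 4096, zero, zero, b"A" * 4096]
    store, page_map = share_pages(pages)
    print(len(store), page_map)  # 2 [0, 1, 0, 0, 1] -> 3 of 5 pages freed
```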