Journal ArticleDOI

Performance and energy modeling for live migration of virtual machines

01 Jun 2013-Cluster Computing (Springer US)-Vol. 16, Iss: 2, pp 249-264
TL;DR: This work constructs application-oblivious models for the cost prediction by using learned knowledge about the workloads at the hypervisor (also called VMM) level and evaluates the models using five representative workloads on a Xen virtualized environment.
Abstract: Live migration of virtual machines (VMs) provides a significant benefit for virtual server mobility without disrupting service. It is widely used for system management in virtualized data centers. However, migration costs may vary significantly across workloads due to the variety of VM configurations and workload characteristics. To take the migration overhead into account in migration decision-making, we investigate design methodologies to quantitatively predict the migration performance and energy consumption. We thoroughly analyze the key parameters that affect the migration cost, from theory to practice. We construct application-oblivious models for cost prediction using learned knowledge about the workloads at the hypervisor (also called VMM) level. To our knowledge, this is the first work to quantitatively estimate VM live migration cost in terms of both performance and energy. We evaluate the models using five representative workloads on a Xen virtualized environment. Experimental results show that the refined model yields higher than 90% prediction accuracy in comparison with the measured cost. Model-guided decisions can reduce the migration cost by more than 72.9% at an energy saving of 73.6%.
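As background for the cost model the abstract describes, the sketch below follows the standard iterative pre-copy analysis (the data sent per round shrinks geometrically with the dirty-rate/bandwidth ratio) plus a linear-in-traffic energy term. It is a minimal illustration under the assumption of constant dirty rate and bandwidth; the function name, stop conditions, and the coefficients alpha and beta are placeholders, not the paper's refined model.

```python
# Minimal sketch of the iterative pre-copy cost model (illustrative only;
# not the authors' refined model, and alpha/beta are placeholder coefficients).

def predict_migration_cost(mem_mb, dirty_rate_mbps, bw_mbps,
                           max_rounds=30, stop_thresh_mb=1.0,
                           alpha=0.5, beta=20.0):
    """Estimate transferred data (MB), total time (s), downtime (s), energy (J)."""
    to_send = mem_mb          # round 0: copy all of RAM while the VM keeps running
    total_mb = 0.0
    for _ in range(max_rounds):
        total_mb += to_send
        # pages dirtied while this round was being sent form the next round
        to_send = (to_send / bw_mbps) * dirty_rate_mbps
        if to_send <= stop_thresh_mb:
            break
    # final stop-and-copy round: the VM is paused and the remaining dirty pages sent
    downtime = to_send / bw_mbps
    total_mb += to_send
    total_time = total_mb / bw_mbps
    energy_j = alpha * total_mb + beta   # linear-in-traffic energy model (placeholders)
    return total_mb, total_time, downtime, energy_j

# Example: 1 GB VM, 40 MB/s dirty rate, 100 MB/s migration link
print(predict_migration_cost(1024, 40, 100))
```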
Citations
Journal ArticleDOI
TL;DR: An in-depth study of the existing literature on data center power modeling, covering more than 200 models, organized in a hierarchical structure with two main branches focusing on hardware-centric and software-centric power models.
Abstract: Data centers are critical, energy-hungry infrastructures that run large-scale Internet-based services. Energy consumption models are pivotal in designing and optimizing energy-efficient operations to curb excessive energy consumption in data centers. In this paper, we survey the state-of-the-art techniques used for energy consumption modeling and prediction for data centers and their components. We conduct an in-depth study of the existing literature on data center power modeling, covering more than 200 models. We organize these models in a hierarchical structure with two main branches focusing on hardware-centric and software-centric power models. Under hardware-centric approaches we start from the digital circuit level and move on to describe higher-level energy consumption models at the hardware component level, server level, data center level, and finally the systems-of-systems level. Under the software-centric approaches we investigate power models developed for operating systems, virtual machines, and software applications. This systematic approach allows us to identify multiple issues prevalent in power modeling at different levels of data center systems, including: i) few modeling efforts target the power consumption of the entire data center; ii) many state-of-the-art power models are based on only a few CPU or server metrics; and iii) the effectiveness and accuracy of these power models remain open questions. Based on these observations, we conclude the survey by describing key challenges for future research on constructing effective and accurate data center power models.
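To make the survey's point ii) concrete, a widely used baseline in this literature is a server power model driven by CPU utilization alone. The sketch below shows that baseline form; the idle and peak ratings are assumed example values, not figures from the survey.

```python
# Baseline utilization-driven server power model: P(u) = P_idle + (P_max - P_idle) * u.
# p_idle/p_max are assumed example ratings, not measurements of any specific machine.

def server_power_watts(cpu_util, p_idle=100.0, p_max=250.0):
    """Linear utilization-based power estimate in watts."""
    u = min(max(cpu_util, 0.0), 1.0)
    return p_idle + (p_max - p_idle) * u

def energy_joules(util_trace, interval_s=1.0):
    """Integrate estimated power over a utilization trace sampled every interval_s seconds."""
    return sum(server_power_watts(u) * interval_s for u in util_trace)

print(server_power_watts(0.6))          # ~190 W for the example ratings
print(energy_joules([0.2, 0.5, 0.9]))   # energy over a 3-second trace
```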

741 citations


Cites background or methods from "Performance and energy modeling for..."

  • ...Such predictions can then be used to improve the energy efficiency of the data center, for example by incorporating the model into techniques such as temperature or energy aware scheduling [18], dynamic voltage frequency scaling (DVFS) [19][20][21], resource virtualization [22], improving the algorithms used by the applications [23], switching to low-power states [24], power capping [25], or even completely shutting down unused servers [10][26], etc....


  • ...VM live migration is a technology which has attracted considerable interest from data center researchers in recent years [22]....


  • ...presented an energy consumption model for VM migration as follows [22],...


  • ...[22] VM Considers VM live migration scenario....


Journal ArticleDOI
TL;DR: This paper defines MCC, explains its major challenges, discusses heterogeneity in convergent computing and networking, and divides it into two dimensions, namely vertical and horizontal.
Abstract: The unabated flurry of research activities to augment various mobile devices by leveraging heterogeneous cloud resources has created a new research domain called Mobile Cloud Computing (MCC). In the core of such a non-uniform environment, facilitating interoperability, portability, and integration among heterogeneous platforms is nontrivial. Building such facilitators in MCC requires investigations to understand heterogeneity and its challenges at the roots. Although there are many research studies in mobile computing and cloud computing, the convergence of these two areas calls for further academic effort to advance MCC. In this paper, we define MCC, explain its major challenges, discuss heterogeneity in convergent computing (i.e. mobile computing and cloud computing) and networking (wired and wireless networks), and divide it into two dimensions, namely vertical and horizontal. Heterogeneity roots are analyzed and taxonomized as hardware, platform, feature, API, and network. Multidimensional heterogeneity in MCC results in application and code fragmentation problems that impede the development of cross-platform mobile applications, a problem we also describe mathematically. The impacts of heterogeneity in MCC are investigated, related opportunities and challenges are identified, and predominant heterogeneity handling approaches like virtualization, middleware, and service oriented architecture (SOA) are discussed. We outline open issues that help in identifying new research directions in MCC.

589 citations

Journal ArticleDOI
TL;DR: The objectives of this study are to highlight the effects of remote resources on the quality and reliability of augmentation processes and discuss the challenges and opportunities of employing varied cloud-based resources in augmenting mobile devices.
Abstract: Recently, Cloud-based Mobile Augmentation (CMA) approaches have gained remarkable ground in academia and industry. CMA is the state-of-the-art mobile augmentation model that employs resource-rich clouds to increase, enhance, and optimize the computing capabilities of mobile devices, aiming at execution of resource-intensive mobile applications. Augmented mobile devices are envisioned to perform extensive computations and to store big data beyond their intrinsic capabilities with the least footprint and vulnerability. Researchers utilize varied cloud-based computing resources (e.g., distant clouds and nearby mobile nodes) to meet various computing requirements of mobile users. However, employing cloud-based computing resources is not a straightforward panacea. Comprehending the critical factors (e.g., the current state of the mobile client and remote resources) that impact the augmentation process, and optimally selecting cloud-based resource types, are some of the challenges that hinder CMA adaptability. This paper comprehensively surveys the mobile augmentation domain and presents a taxonomy of CMA approaches. The objectives of this study are to highlight the effects of remote resources on the quality and reliability of augmentation processes and to discuss the challenges and opportunities of employing varied cloud-based resources in augmenting mobile devices. We present the augmentation definition, motivation, and a taxonomy of augmentation types, including traditional and cloud-based. We critically analyze the state-of-the-art CMA approaches and classify them into four groups of distant fixed, proximate fixed, proximate mobile, and hybrid to present a taxonomy. Vital decision-making and performance-limitation factors that influence the adoption of CMA approaches are introduced, and an exemplary decision-making flowchart for future CMA approaches is presented. The impacts of CMA approaches on mobile computing are discussed and open challenges are presented as future research directions.

422 citations


Cites background or methods from "Performance and energy modeling for..."

  • ...Therefore, efforts similar to VMware vMotion [181] and [122], [182] are necessary to optimize VM migration in MCC....


  • ...Although energy efficiency is one of the most important challenges of current CMA systems, several efforts such as [53]–[55], [122] are endeavoring to comprehend the energy implications of exploiting cloud-based resources from mobile devices and shrinking their energy overhead....


Proceedings ArticleDOI
09 Mar 2011
TL;DR: The CloudNet architecture is presented as a cloud framework consisting of cloud computing platforms linked with a VPN-based network infrastructure to provide seamless and secure connectivity between enterprise and cloud data center sites, in order to realize the vision of efficiently pooling geographically distributed data center resources.
Abstract: Virtual machine technology, and the ease with which VMs can be migrated within the LAN, has changed the scope of resource management from allocating resources on a single server to manipulating pools of resources within a data center. We expect WAN migration of virtual machines to likewise transform the scope of provisioning compute resources from a single data center to multiple data centers spread across the country or around the world. In this paper we present the CloudNet architecture, a cloud framework consisting of cloud computing platforms linked with a VPN-based network infrastructure to provide seamless and secure connectivity between enterprise and cloud data center sites. To realize our vision of efficiently pooling geographically distributed data center resources, CloudNet provides optimized support for live WAN migration of virtual machines. Specifically, we present a set of optimizations that minimize the cost of transferring storage and virtual machine memory during migrations over low-bandwidth and high-latency Internet links. We evaluate our system on an operational cloud platform distributed across the continental US. During simultaneous migrations of four VMs between data centers in Texas and Illinois, CloudNet's optimizations reduce memory migration time by 65% and lower bandwidth consumption for the storage and memory transfer by 19 GB, a 50% reduction.

317 citations

Journal ArticleDOI
TL;DR: The proposed consolidation algorithm is based on a migration policy for VNFIs that accounts for the revenue loss from the QoS degradation a user suffers due to information loss during the migrations.
Abstract: Network function virtualization foresees the virtualization of service functions and their execution on virtual machines. Each service is represented by a service function chain (SFC), a set of VNFs to be executed in a given order. Running VNFs requires the instantiation of VNF Instances (VNFIs), which in general are software modules executed on virtual machines. The virtualization challenges include: i) where to instantiate VNFIs; ii) how many resources to allocate to each VNFI; iii) how to route SFC requests to the appropriate VNFIs in the right sequence; and iv) when and how to migrate VNFIs in response to changes in SFC request intensity and location. We develop an approach that uses three algorithms back-to-back, resulting in VNFI placement, SFC routing, and VNFI migration in response to changing workload. The objective is first to minimize the rejection of SFC bandwidth and second to consolidate VNFIs in as few servers as possible so as to reduce the energy consumed. The proposed consolidation algorithm is based on a migration policy for VNFIs that accounts for the revenue loss from the QoS degradation a user suffers due to information loss during the migrations. The overall objective is to minimize the total cost given by the energy consumption and the revenue loss due to QoS degradation. We evaluate our suite of algorithms on a test network and show the performance gains that can be achieved over alternative naive algorithms.
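To illustrate the trade-off the abstract describes (energy of consolidated servers versus revenue lost to information loss during VNFI migration downtime), here is a schematic sketch; every coefficient, rate, and function name in it is an illustrative placeholder rather than the paper's formulation.

```python
# Schematic total-cost objective for VNFI consolidation (illustrative placeholders
# throughout; this is not the paper's model).

def migration_total_cost(active_servers, migrations,
                         p_server_w=300.0, horizon_s=3600.0,
                         energy_price_per_j=1e-7, revenue_per_mb=0.01):
    """Total cost = energy of powered-on servers over the horizon
    + revenue lost to traffic dropped while each migrated VNFI is down."""
    energy_cost = active_servers * p_server_w * horizon_s * energy_price_per_j
    revenue_loss = revenue_per_mb * sum(
        downtime_s * rate_mbps for downtime_s, rate_mbps in migrations
    )
    return energy_cost + revenue_loss

# Example: consolidating onto 8 servers with two migrations,
# each given as (downtime in seconds, SFC traffic in MB/s served by the migrated VNFI).
print(migration_total_cost(8, [(0.3, 50.0), (1.2, 20.0)]))
```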

285 citations


Cites methods from "Performance and energy modeling for..."

  • ...Any VNF is run on a VNF Instance (VNFI) implemented with one Virtual Machine (VM) to which resources (cores, RAM memory, ...) are allocated to execute a VNF of a given type (e.g., a virtual firewall, or a load balancer) [9]....


  • ...The migration can be performed when the VNFIs are supported by Virtual Machines (VMs), but at the price of information loss when the VMs are moved....


  • ...Indeed when any migration happens, the Virtual Machine supporting the VNF instance is not able to carry on its function during a critical period Tdown, referred to in the literature as the downtime of the Virtual Machine [31]....


References
Journal ArticleDOI
19 Oct 2003
TL;DR: Xen, an x86 virtual machine monitor which allows multiple commodity operating systems to share conventional hardware in a safe and resource-managed fashion without sacrificing either performance or functionality, and which considerably outperforms competing commercial and freely available solutions.
Abstract: Numerous systems have been designed which use virtualization to subdivide the ample resources of a modern computer. Some require specialized hardware, or cannot support commodity operating systems. Some target 100% binary compatibility at the expense of performance. Others sacrifice security or functionality for speed. Few offer resource isolation or performance guarantees; most provide only best-effort provisioning, risking denial of service. This paper presents Xen, an x86 virtual machine monitor which allows multiple commodity operating systems to share conventional hardware in a safe and resource managed fashion, but without sacrificing either performance or functionality. This is achieved by providing an idealized virtual machine abstraction to which operating systems such as Linux, BSD and Windows XP can be ported with minimal effort. Our design is targeted at hosting up to 100 virtual machine instances simultaneously on a modern server. The virtualization approach taken by Xen is extremely efficient: we allow operating systems such as Linux and Windows XP to be hosted simultaneously for a negligible performance overhead --- at most a few percent compared with the unvirtualized case. We considerably outperform competing commercial and freely available solutions in a range of microbenchmarks and system-wide tests.

6,326 citations

Proceedings ArticleDOI
02 May 2005
TL;DR: The design options for migrating OSes running services with liveness constraints are considered, the concept of writable working set is introduced, and the design, implementation and evaluation of high-performance OS migration built on top of the Xen VMM are presented.
Abstract: Migrating operating system instances across distinct physical hosts is a useful tool for administrators of data centers and clusters: it allows a clean separation between hardware and software, and facilitates fault management, load balancing, and low-level system maintenance. By carrying out the majority of migration while OSes continue to run, we achieve impressive performance with minimal service downtimes; we demonstrate the migration of entire OS instances on a commodity cluster, recording service downtimes as low as 60ms. We show that our performance is sufficient to make live migration a practical tool even for servers running interactive loads. In this paper we consider the design options for migrating OSes running services with liveness constraints, focusing on data center and cluster environments. We introduce and analyze the concept of writable working set, and present the design, implementation and evaluation of high-performance OS migration built on top of the Xen VMM.
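The citing paper's downtime analysis builds on the writable working set (WWS) concept introduced here: pages that are dirtied again and again across pre-copy rounds must be resent in the final stop-and-copy phase. The toy sketch below shows one way such an estimate could be derived from per-round dirty-page sets; it is an illustration only, not the Xen implementation or either paper's exact method.

```python
# Toy writable-working-set estimate from per-round dirty-page sets.
# Pages dirtied in every pre-copy round dominate the final downtime.

def writable_working_set(dirty_rounds):
    """dirty_rounds: list of sets of page numbers dirtied in each pre-copy round.
    Returns the pages dirtied in every round (a rough WWS estimate)."""
    if not dirty_rounds:
        return set()
    wws = set(dirty_rounds[0])
    for pages in dirty_rounds[1:]:
        wws &= pages
    return wws

def stop_and_copy_downtime(wws_pages, page_kb=4, bw_mbps=100.0):
    """Downtime ~ time to send the WWS once the VM is paused (seconds)."""
    return (len(wws_pages) * page_kb / 1024.0) / bw_mbps

rounds = [{1, 2, 3, 7, 9}, {2, 3, 9, 11}, {2, 3, 9}]   # toy dirty-page traces
wws = writable_working_set(rounds)
print(wws, stop_and_copy_downtime(wws))
```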

3,186 citations


"Performance and energy modeling for..." refers background or methods in this paper

  • ...Representative works include XenMotion [10] and Vmotion [25], which were implemented as built-in tools in their virtualization platforms....


  • ...5 seconds in the case of a diabolical workload MMuncher [7]....


  • ...For most of workloads, we observed that the size of WWS is approximately proportional to the pages dirtied in each pre-copying round....


  • ...As Linpack is both a CPU- and memory-intensive workload, it shows a quite large WWS and a very high memory dirtying rate, and thus should be evicted from the migration candidates....


  • ...Considering migration downtime, previous studies demonstrated that it could vary significantly between different workloads, ranging from 60 milliseconds for a Quake 3 game server to 3.5 seconds in the case of a diabolical workload MMuncher [10]....


Proceedings ArticleDOI
22 Apr 2001
TL;DR: A series of experiments are described which obtained detailed measurements of the energy consumption of an IEEE 802.11 wireless network interface operating in an ad hoc networking environment, and some implications for protocol design and evaluation in ad hoc networks are discussed.
Abstract: Energy-aware design and evaluation of network protocols requires knowledge of the energy consumption behavior of actual wireless interfaces. But little practical information is available about the energy consumption behavior of well-known wireless network interfaces and device specifications do not provide information in a form that is helpful to protocol developers. This paper describes a series of experiments which obtained detailed measurements of the energy consumption of an IEEE 802.11 wireless network interface operating in an ad hoc networking environment. The data is presented as a collection of linear equations for calculating the energy consumed in sending, receiving and discarding broadcast and point-to-point data packets of various sizes. Some implications for protocol design and evaluation in ad hoc networks are discussed.
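The paper reports its measurements as linear equations of packet size, with separate equations for sending and receiving and for broadcast versus point-to-point traffic. The sketch below only shows that functional form; the coefficient values are made-up placeholders, not the paper's measured numbers.

```python
# Linear per-packet energy model: energy = m * size + b, with separate (m, b)
# per operation and traffic type. Coefficients below are made-up placeholders.

COEFF_UJ = {                       # (slope uJ/byte, fixed cost uJ) -- placeholders
    ("send", "p2p"): (0.48, 431.0),
    ("recv", "p2p"): (0.12, 316.0),
    ("send", "bcast"): (0.26, 270.0),
    ("recv", "bcast"): (0.05, 160.0),
}

def packet_energy_uj(op, kind, size_bytes):
    """Per-packet energy in microjoules from the linear size model."""
    m, b = COEFF_UJ[(op, kind)]
    return m * size_bytes + b

print(packet_energy_uj("send", "p2p", 1024))   # e.g. a 1 KB point-to-point send
```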

1,810 citations


"Performance and energy modeling for..." refers background in this paper

  • ...However, there are several reasons that pose some unique challenges to model the energy consumption of wireless network interface [10]....


Proceedings ArticleDOI
16 Oct 2006
TL;DR: This paper recommends benchmarking selection and evaluation methodologies, and introduces the DaCapo benchmarks, a set of open source, client-side Java benchmarks that improve over SPEC Java in a variety of ways, including more complex code, richer object behaviors, and more demanding memory system requirements.
Abstract: Since benchmarks drive computer science research and industry product development, which ones we use and how we evaluate them are key questions for the community. Despite complex runtime tradeoffs due to dynamic compilation and garbage collection required for Java programs, many evaluations still use methodologies developed for C, C++, and Fortran. SPEC, the dominant purveyor of benchmarks, compounded this problem by institutionalizing these methodologies for their Java benchmark suite. This paper recommends benchmarking selection and evaluation methodologies, and introduces the DaCapo benchmarks, a set of open source, client-side Java benchmarks. We demonstrate that the complex interactions of (1) architecture, (2) compiler, (3) virtual machine, (4) memory management, and (5) application require more extensive evaluation than C, C++, and Fortran which stress (4) much less, and do not require (3). We use and introduce new value, time-series, and statistical metrics for static and dynamic properties such as code complexity, code size, heap composition, and pointer mutations. No benchmark suite is definitive, but these metrics show that DaCapo improves over SPEC Java in a variety of ways, including more complex code, richer object behaviors, and more demanding memory system requirements. This paper takes a step towards improving methodologies for choosing and evaluating benchmarks to foster innovation in system design and implementation for Java and other managed languages.

1,561 citations


"Performance and energy modeling for..." refers methods in this paper

  • ...We estimate the model coefficients by running the DaCapo [3] benchmark, which consists of a suite of Java applications....


  • ...We estimate the model coefficients by running the DaCapo [9] benchmark, which consists of a suite of Java applications....


  • ...The DaCapo Benchmarks: Java Benchmarking Development and Analysis....


Journal ArticleDOI
TL;DR: The cloud heralds a new era of computing where application services are provided through the Internet, but is it the ultimate solution for extending such systems' battery lifetimes?
Abstract: The cloud heralds a new era of computing where application services are provided through the Internet. Cloud computing can enhance the computing capability of mobile systems, but is it the ultimate solution for extending such systems' battery lifetimes?

1,538 citations