
Showing papers on "Live migration" published in 2017


Journal ArticleDOI
TL;DR: A conceptual smart pre-copy live migration approach is presented for VM migration that can estimate the downtime after each iteration to determine whether to proceed to the stop-and-copy stage during a system failure or an attack on a fog computing node.
Abstract: Fog computing, an extension of cloud computing services to the edge of the network to decrease latency and network congestion, is a relatively recent research trend. Although both cloud and fog offer similar resources and services, the latter is characterized by low latency with a wider spread and geographically distributed nodes to support mobility and real-time interaction. In this paper, we describe the fog computing architecture and review its different services and applications. We then discuss security and privacy issues in fog computing, focusing on service and resource availability. Virtualization is a vital technology in both fog and cloud computing that enables virtual machines (VMs) to coexist in a physical server (host) to share resources. These VMs could be subject to malicious attacks, or the physical server hosting them could experience system failure, both of which result in unavailability of services and resources. Therefore, a conceptual smart pre-copy live migration approach is presented for VM migration. Using this approach, we can estimate the downtime after each iteration to determine whether to proceed to the stop-and-copy stage during a system failure or an attack on a fog computing node. This will minimize both the downtime and the migration time to guarantee resource and service availability to the end users of fog computing. Lastly, future research directions are outlined.
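
To make the iteration-by-iteration decision concrete, the following self-contained Python sketch simulates a pre-copy loop that estimates downtime as the remaining dirty memory divided by the available bandwidth and switches to stop-and-copy once the estimate meets a target. The dirty-page model, threshold logic, and numbers are illustrative assumptions, not the authors' implementation.

    PAGE_SIZE = 4096  # bytes per memory page

    def smart_precopy(dirty_pages, dirty_rate_pps, bandwidth_bps,
                      downtime_target_s, max_iterations=30):
        """Simulate pre-copy rounds; return (iterations_used, estimated_downtime_s)."""
        pages_per_s = bandwidth_bps / PAGE_SIZE          # transfer speed in pages/s
        est_downtime = dirty_pages * PAGE_SIZE / bandwidth_bps
        for i in range(1, max_iterations + 1):
            copy_time = dirty_pages / pages_per_s        # time to send the current dirty set
            dirty_pages = dirty_rate_pps * copy_time     # pages dirtied while copying
            est_downtime = dirty_pages * PAGE_SIZE / bandwidth_bps
            if est_downtime <= downtime_target_s:        # safe to pause and stop-and-copy
                return i, est_downtime
        return max_iterations, est_downtime              # give up and stop-and-copy anyway

    # Example: 2 GiB initially dirty, 5,000 pages/s dirty rate, ~1 Gbps link, 0.3 s target.
    print(smart_precopy(2 * 2**30 // PAGE_SIZE, 5_000, 125_000_000, 0.3))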

257 citations


Journal ArticleDOI
TL;DR: This work compares two VM mobility modes, bulk and live migration, as a function of mobile cloud service requirements, determining that a high preference should be given to live migration, while bulk migrations seem to be a feasible alternative for delay-stringent tiny-disk services, such as augmented reality support, and only with further relaxation of network constraints.
Abstract: Major interest is currently given to the integration of clusters of virtualization servers, also referred to as ‘cloudlets’ or ‘edge clouds’, into the access network to allow higher performance and reliability in the access to mobile edge computing services. We tackle the edge cloud network design problem for mobile access networks. The model is such that the virtual machines (VMs) are associated with mobile users and are allocated to cloudlets. Designing an edge cloud network implies first determining where to install cloudlet facilities among the available sites, and then assigning sets of access points, such as base stations, to cloudlets, while supporting VM orchestration and considering partial user mobility information as well as the satisfaction of service-level agreements. We present link-path formulations supported by heuristics to compute solutions in reasonable time. We quantify the advantage of considering mobility for both users and VMs as up to 20% fewer users unsatisfied in their SLA, with a small increase in the number of opened facilities. We compare two VM mobility modes, bulk and live migration, as a function of mobile cloud service requirements, determining that a high preference should be given to live migration, while bulk migrations seem to be a feasible alternative for delay-stringent tiny-disk services, such as augmented reality support, and only with further relaxation of network constraints.

203 citations


Journal ArticleDOI
TL;DR: This paper presents a survey and taxonomy of server consolidation techniques in cloud data centers; special attention is devoted to the parameters and algorithmic approaches used to consolidate VMs onto PMs.
Abstract: Data centers and their applications are growing exponentially. Consequently, their energy consumption and environmental impacts have also become increasingly important. Virtualization technologies are widely used in modern data centers to ease the management of the data center and to reduce its energy consumption. Data centers that employ virtualization technologies are typically called virtualized or cloud data centers. Virtualization technologies enable virtual machine (VM) live migration, which allows VMs to be freely moved among physical machines (PMs) with negligible downtime. Thus, several VMs can be packed onto a single PM so as to let the PM run in its most energy-efficient working condition. This technique is called server consolidation and is an effective and widely used approach to reduce total energy consumption in data centers. Server consolidation can be done in various ways and by considering various parameters and effects. This paper presents a survey and taxonomy of server consolidation techniques in cloud data centers. Special attention has been devoted to the parameters and algorithmic approaches used to consolidate VMs onto PMs. In the end, we also discuss open challenges and suggest areas for further research.

123 citations


Proceedings ArticleDOI
25 Jun 2017
TL;DR: ELASTICDOCKER is proposed, the first system powering vertical elasticity of Docker containers autonomously, based on IBM's well-known autonomic computing MAPE-K principles; it outperforms Kubernetes' horizontal elasticity by 37.63%.
Abstract: Elasticity is the key feature of cloud computing for scaling computing resources according to application workloads in a timely manner. In the literature as well as in industrial products, much attention has been given to the elasticity of virtual machines, but much less to the elasticity of containers. However, containers are the new trend for packaging and deploying microservices-based applications. Moreover, most approaches focus on horizontal elasticity; far fewer works address vertical elasticity. In this paper, we propose ELASTICDOCKER, the first system powering vertical elasticity of Docker containers autonomously. Based on IBM's well-known autonomic computing MAPE-K principles, ELASTICDOCKER scales up and down both the CPU and memory assigned to each container according to the application workload. As vertical elasticity is limited by the host machine capacity, ELASTICDOCKER performs container live migration when there are not enough resources on the hosting machine. Our experiments show that ELASTICDOCKER helps to reduce expenses for container customers, improves resource utilization for container providers, and improves Quality of Experience for application end-users. In addition, based on the observed migration performance metrics, the experiments reveal a highly efficient live migration technique. Compared to horizontal elasticity, ELASTICDOCKER outperforms Kubernetes' elasticity by 37.63%.
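
As a rough illustration of the MAPE-K style control loop described above, one elasticity step for a single resource might look like the sketch below; the thresholds, step size, and migration fallback are assumptions for the example, not ELASTICDOCKER's actual policy.

    def elasticity_step(usage, limit, host_free, up=0.9, down=0.5, step=0.2):
        """One Monitor/Analyze/Plan/Execute decision for one resource (CPU or memory).
        Returns (new_limit, action); units are arbitrary (e.g., GB or CPU shares)."""
        if usage > up * limit:                        # Analyze: container under pressure
            grow = step * limit                       # Plan: scale up by a fixed step
            if grow <= host_free:
                return limit + grow, "scale_up"       # Execute on the same host
            return limit, "live_migrate"              # no headroom left: move the container
        if usage < down * limit:
            return max(step * limit, limit - step * limit), "scale_down"
        return limit, "noop"

    # Example: 3.7 GB used of a 4 GB limit, but only 0.2 GB free on the host -> migrate.
    print(elasticity_step(usage=3.7, limit=4.0, host_free=0.2))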

115 citations


Journal ArticleDOI
TL;DR: Based on the proposed algorithms, energy consumption can be reduced by up to 28%, and SLA compliance can be improved by up to 87% when compared with the benchmark algorithms.
Abstract: Cloud computing has become a significant research area in large-scale computing because it can share globally distributed resources. Cloud computing has evolved with the development of large-scale data centers, including thousands of servers around the world. However, cloud data centers consume vast amounts of electrical energy, contributing to high operational costs and carbon dioxide emissions. Dynamic consolidation of virtual machines (VMs) using live migration and putting idle nodes into sleep mode allows cloud providers to optimize resource utilization and reduce energy consumption. However, aggressive VM consolidation may degrade performance. Therefore, an energy-performance tradeoff between providing high-quality service to customers and reducing power consumption is desired. In this paper, several novel algorithms are proposed for the dynamic consolidation of VMs in cloud data centers. The aim is to improve the utilization of computing resources and reduce energy consumption under SLA constraints regarding CPU, RAM, and bandwidth. The efficiency of the proposed algorithms is validated by conducting extensive simulations. The results of the evaluation clearly show that the proposed algorithms significantly reduce energy consumption while providing a high level of commitment to the SLA. Based on the proposed algorithms, energy consumption can be reduced by up to 28%, and SLA compliance can be improved by up to 87% when compared with the benchmark algorithms.
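
The sketch below illustrates one round of such a consolidation loop: overloaded hosts shed their cheapest VM to the least-loaded host that can absorb it. The 80% threshold and the selection and placement rules are deliberate simplifications for illustration, not the paper's algorithms.

    def consolidation_round(hosts, over=0.8):
        """hosts: {host: [(vm, cpu_demand), ...]}; returns a list of (vm, src, dst) migrations."""
        migrations = []
        load = {h: sum(d for _, d in vms) for h, vms in hosts.items()}
        for src, vms in hosts.items():
            while load[src] > over and vms:
                vm, demand = min(vms, key=lambda x: x[1])        # smallest VM is cheapest to move
                dst = min((h for h in hosts if h != src and load[h] + demand <= over),
                          key=lambda h: load[h], default=None)   # least-loaded feasible target
                if dst is None:
                    break                                        # nowhere to put it
                vms.remove((vm, demand))
                hosts[dst].append((vm, demand))
                load[src] -= demand
                load[dst] += demand
                migrations.append((vm, src, dst))
        return migrations

    print(consolidation_round({"h1": [("vm1", 0.5), ("vm2", 0.4)], "h2": [("vm3", 0.2)]}))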

89 citations


Journal ArticleDOI
TL;DR: A two-tier virtual machine placement algorithm called crow search based VM placement (CSAVMP) and a queueing structure to manage and schedule a large set of VMs are proposed to reduce resource wastage and power consumption in data centers.

86 citations


Proceedings ArticleDOI
12 Oct 2017
TL;DR: VM handoff enables rapid and transparent placement changes to executing code in edge computing use cases where the safety and management attributes of VM encapsulation are important.
Abstract: VM handoff enables rapid and transparent placement changes to executing code in edge computing use cases where the safety and management attributes of VM encapsulation are important. This versatile primitive offers the functionality of classic live migration but is highly optimized for the edge. Over WAN bandwidths ranging from 5 to 25 Mbps, VM handoff migrates a running 8 GB VM in about a minute, with a downtime of a few tens of seconds. By dynamically adapting to varying network bandwidth and processing load, VM handoff is more than an order of magnitude faster than live migration at those bandwidths.

85 citations


Journal ArticleDOI
TL;DR: This paper presents a novel virtual machine consolidation technique to achieve energy–QoS–temperature balance in the cloud data center and certifies that physical machine temperature, SLA, and migration technique together control the energy consumption and QoS in a cloud data Center.
Abstract: Cloud-based data centers consume a significant amount of energy, which is costly. Virtualization technology, which can be regarded as the first step toward the cloud by offering benefits such as virtual machines and live migration, is trying to overcome this problem. Virtual machines host workloads, and because of workload variability, virtual machine consolidation is an effective technique to minimize the total number of active servers and unnecessary migrations, and consequently improve energy consumption. Effective virtual machine placement and migration techniques are key to optimizing the consolidation process. In this paper, we present a novel virtual machine consolidation technique to achieve an energy–QoS–temperature balance in the cloud data center. We simulated our proposed technique in the CloudSim simulator. The evaluation results confirm that physical machine temperature, SLA, and migration technique together control the energy consumption and QoS in a cloud data center.

72 citations


Journal ArticleDOI
TL;DR: A multiple regression algorithm is developed that uses CPU utilization, memory utilization and bandwidth utilization for host overload detection and significantly reduces energy consumption while ensuring a high level of adherence to Service Level Agreements (SLA).
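
A minimal numeric sketch of that idea, assuming a linear model fitted with ordinary least squares (the paper's exact regression form, features, and threshold may differ), could look like this:

    import numpy as np

    history = np.array([                  # per-interval [cpu, mem, bw] utilization of one host
        [0.55, 0.40, 0.30],
        [0.62, 0.45, 0.35],
        [0.70, 0.52, 0.42],
        [0.78, 0.60, 0.50],
        [0.85, 0.66, 0.58],
    ])
    next_cpu = np.array([0.62, 0.70, 0.78, 0.85, 0.93])      # observed next-interval CPU

    X = np.hstack([history, np.ones((len(history), 1))])     # add an intercept column
    coef, *_ = np.linalg.lstsq(X, next_cpu, rcond=None)      # multiple linear regression

    current = np.array([0.90, 0.70, 0.62, 1.0])              # latest sample plus intercept term
    predicted = float(current @ coef)
    print("overloaded" if predicted > 0.9 else "ok", round(predicted, 3))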

50 citations


Proceedings ArticleDOI
12 Oct 2017
TL;DR: This paper proposes the use of multi-path TCP to both improve VM migration time and network transparency of applications and shows that this approach can reduce migration times by up to 2X while virtually eliminating downtimes for most applications.
Abstract: Edge clouds are emerging as a popular paradigm of computation. In edge clouds, computation and storage can be distributed across a large number of locations, allowing applications to be hosted at the edge of the network close to the end-users. Virtual machine live migration is a key mechanism which enables applications to be nimble and nomadic as they respond to changing user locations and workload. However, VM live migration in edge clouds poses a number of challenges. Migrating VMs between geographically separate locations over slow wide-area network links results in large migration times and high unavailability of the application. This is due to network reconfiguration delays as user traffic is redirected to the newly migrated location. In this paper, we propose the use of multi-path TCP to both improve VM migration time and network transparency of applications. We evaluate our approach in a commercial public cloud environment and an emulated lab-based edge cloud testbed using a variety of network conditions and show that our approach can reduce migration times by up to 2X while virtually eliminating downtimes for most applications.

47 citations


Journal ArticleDOI
TL;DR: A novel solution to the VMrB problem is proposed, namely a Pareto-based Multi-Objective VM reBalance solution (MOVMrB), which aims to simultaneously minimize the disequilibrium of both inter-HM and intra-HM loads.

Proceedings ArticleDOI
24 Sep 2017
TL;DR: This work proposes an adaptive machine learning-based model that is able to predict with high accuracy the key characteristics of live migration as a function of the migration algorithm and the workload running inside the VM.

Abstract: Live migration is one of the key technologies to improve data center utilization, power efficiency, and maintenance. Various live migration algorithms have been proposed, each exhibiting distinct characteristics in terms of completion time, amount of data transferred, virtual machine (VM) downtime, and VM performance degradation. To make matters worse, not only the migration algorithm but also the applications running inside the migrated VM affect the different performance metrics. With service-level agreements and operational constraints in place, choosing the optimal live migration technique has so far been an open question. In this work, we propose an adaptive machine learning-based model that is able to predict with high accuracy the key characteristics of live migration as a function of the migration algorithm and the workload running inside the VM. We discuss the important input parameters for accurately modeling the target metrics and describe how to profile them with little overhead. Compared to existing work, we are not only able to model all commonly used migration algorithms but also to predict important metrics that have not been considered so far, such as the performance degradation of the VM. In a comparison with the state of the art, we show that the proposed model outperforms existing work by a factor of 2 to 5.
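
As a toy illustration of such a prediction model, one regressor can be trained per migration metric; the feature set, the algorithm encoding, and the numbers below are invented for the sketch, while the paper profiles real parameters and covers more algorithms and metrics.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Features: [VM memory (GB), page dirty rate (kpages/s), bandwidth (Gbps), algorithm id]
    X = np.array([[4, 10, 1, 0], [4, 50, 1, 0], [8, 10, 1, 1],
                  [8, 50, 1, 1], [16, 20, 10, 0], [16, 80, 10, 1]], dtype=float)
    # Targets: [total migration time (s), downtime (s)] from (synthetic) profiling runs.
    y = np.array([[35, 0.3], [60, 1.2], [70, 0.2], [110, 0.9], [16, 0.1], [30, 0.6]])

    model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
    print(model.predict([[8, 40, 1, 0]]))   # predicted [total time, downtime] for a new case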

Proceedings ArticleDOI
26 Jun 2017
TL;DR: This paper presents the first study on the support for live migration of SGX-capable VMs by identifying the security properties that a secure enclave migration process should meet and proposing a software-based solution.
Abstract: The recent commercial availability of Intel SGX (Software Guard eXtensions) provides a hardware-enabled building block for secure execution of software modules in an untrusted cloud. As an untrusted hypervisor/OS has no access to an enclave's running states, a VM (virtual machine) with enclaves running inside loses the capability of live migration, a key feature of VMs in the cloud. This paper presents the first study on the support for live migration of SGX-capable VMs. We identify the security properties that a secure enclave migration process should meet and propose a software-based solution. We leverage several techniques such as two-phase checkpointing and self-destroy to implement our design on a real SGX machine. Security analysis confirms the security of our proposed design and performance evaluation shows that it incurs negligible performance overhead. Besides, we give suggestions on the future hardware design for supporting transparent enclave migration.

Journal ArticleDOI
TL;DR: This paper proposes a traffic-sensitive live VM migration technique that uses a combination of pre-copy and post-copy techniques for the migration of the co-located VMs, and shows that this approach minimizes the network contention for migration, thus reducing the total migration time and the application degradation.

Proceedings ArticleDOI
14 Oct 2017
TL;DR: Rocksteady is presented, a live migration technique for the RAMCloud scale-out in-memory key-value store that balances three competing goals: it migrates data quickly, it minimizes response time impact, and it allows arbitrary, fine-grained splits.
Abstract: Scalable in-memory key-value stores provide low-latency access times of a few microseconds and perform millions of operations per second per server. With all data in memory, these systems should provide a high level of reconfigurability. Ideally, they should scale up, scale down, and rebalance load more rapidly and flexibly than disk-based systems. Rapid reconfiguration is especially important in these systems since a) DRAM is expensive and b) they are the last defense against highly dynamic workloads that suffer from hot spots, skew, and unpredictable load. However, so far, work on in-memory key-value stores has generally focused on performance and availability, leaving reconfiguration as a secondary concern. We present Rocksteady, a live migration technique for the RAMCloud scale-out in-memory key-value store. It balances three competing goals: it migrates data quickly, it minimizes response time impact, and it allows arbitrary, fine-grained splits. Rocksteady migrates 758 MB/s between servers under high load while maintaining a median and 99.9th percentile latency of less than 40 and 250 μs, respectively, for concurrent operations without pauses, downtime, or risk to durability (compared to 6 and 45 μs during normal operation). To do this, it relies on pipelined and parallel replay and a lineage-like approach to fault-tolerance to defer re-replication costs during migration. Rocksteady allows RAMCloud to defer all repartitioning work until the moment of migration, giving it precise and timely control for load balancing.

Journal ArticleDOI
TL;DR: It is demonstrated that an autonomous agent can learn to utilise available resources when peak loads saturate the cloud network and can learn optimal scheduling times for live migration while analysing current network traffic demand.
Abstract: Live virtual machine migration can have a major impact on how a cloud system performs, as it consumes significant amounts of network resources such as bandwidth. Migration contributes to an increase in consumption of network resources which leads to longer migration times and ultimately has a detrimental effect on the performance of a cloud computing system. Most industrial approaches use ad-hoc manual policies to migrate virtual machines. In this paper, we propose an autonomous network aware live migration strategy that observes the current demand level of a network and performs appropriate actions based on what it is experiencing. The Artificial Intelligence technique known as Reinforcement Learning acts as a decision support system, enabling an agent to learn optimal scheduling times for live migration while analysing current network traffic demand. We demonstrate that an autonomous agent can learn to utilise available resources when peak loads saturate the cloud network.
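
For intuition, here is a self-contained tabular Q-learning sketch of that kind of decision; the discretized load states, the two actions, and the reward shaping are assumptions made for the example, not the paper's formulation.

    import random

    STATES = range(5)          # discretized network utilization: 0 (idle) .. 4 (saturated)
    ACTIONS = (0, 1)           # 0 = defer the migration, 1 = migrate now
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    alpha, gamma, eps = 0.1, 0.9, 0.1

    def reward(state, action):
        if action == 1:
            return 1.0 - 0.5 * state      # migrating is cheap when the network is idle
        return -0.1                       # small penalty for postponing the migration

    for _ in range(20_000):
        s = random.choice(list(STATES))
        a = random.choice(ACTIONS) if random.random() < eps else max(ACTIONS, key=lambda x: Q[(s, x)])
        s2 = random.choice(list(STATES))  # next load level (toy transition model)
        Q[(s, a)] += alpha * (reward(s, a) + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])

    print({s: max(ACTIONS, key=lambda x: Q[(s, x)]) for s in STATES})   # learned action per load level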

Journal ArticleDOI
01 May 2017-Energies
TL;DR: A unified algorithm based on an ant colony system (ACS), termed the unified ACS (UACS), is proposed that works under both conditions and presents competitive performance in terms of energy consumption, the number of VM migrations, and maintaining quality of service (QoS) requirements.
Abstract: Energy efficiency is a significant topic in cloud computing. Dynamic consolidation of virtual machines (VMs) with live migration is an important method to reduce energy consumption. However, frequent VM live migration may cause service downtime. Therefore, energy saving and VM migration are two conflicting objectives. In order to efficiently solve dynamic VM consolidation, the dynamic VM placement (DVMP) problem is formulated as a multiobjective problem in this paper. The goal of DVMP is to find a placement solution that uses the fewest servers to host the VMs, covering two typical dynamic conditions: the assignment of newly arriving VMs and the re-allocation of existing VMs. Therefore, we propose a unified algorithm based on an ant colony system (ACS), termed the unified ACS (UACS), that works under both conditions. The UACS first uses sufficient servers to host the VMs and then gradually reduces the number of servers. For each particular number of servers, the UACS tries to find feasible solutions with the fewest VM migrations. Herein, a dynamic pheromone deposition method and a special heuristic information strategy are also designed to reduce the number of VM migrations. Therefore, the feasible solutions under different numbers of servers cover the Pareto front of the multiobjective space. Experiments with large-scale random workloads and real workload traces are conducted to evaluate the performance of the UACS. Compared with traditional heuristic, probabilistic, and other ACS-based algorithms, the proposed UACS presents competitive performance in terms of energy consumption, the number of VM migrations, and maintaining quality of service (QoS) requirements.

Journal ArticleDOI
01 Dec 2017
TL;DR: A decision-theoretic approach is proposed to make live migration decisions that take into account live migration overheads; it achieves better performance and higher stability compared to other approaches that do not take into account the uncertainty of long-term predictions and the live migration overhead.
Abstract: Dynamic workloads in cloud computing can be managed through live migration of virtual machines from overloaded or underloaded hosts to other hosts to save energy and/or mitigate performance-related Service Level Agreement (SLA) violations. The challenging issue is how to detect when a host is overloaded so as to initiate live migration actions in time. In this paper, a new approach to making long-term predictions of the resource demands of virtual machines for host overload detection is presented. To take into account the uncertainty of long-term predictions, a probability distribution model of the prediction error is built. Based on the probability distribution of the prediction error, a decision-theoretic approach is proposed to make live migration decisions that take into account live migration overheads. Experimental results using the CloudSim simulator and PlanetLab workloads show that the proposed approach achieves better performance and higher stability compared to other approaches that do not take into account the uncertainty of long-term predictions and the live migration overhead.
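
The essence of such a decision rule can be sketched in a few lines; the Gaussian error model and the cost constants below are assumptions for illustration, whereas the paper builds the error distribution empirically.

    import math

    def overload_probability(predicted_util, err_std, capacity=1.0):
        """P(actual > capacity) when actual = predicted + Gaussian error."""
        z = (capacity - predicted_util) / err_std
        return 0.5 * math.erfc(z / math.sqrt(2))      # 1 - Phi(z)

    def should_migrate(predicted_util, err_std, migration_cost=1.0, violation_cost=10.0):
        expected_violation = overload_probability(predicted_util, err_std) * violation_cost
        return expected_violation > migration_cost    # migrate only if it is the cheaper option

    print(should_migrate(0.92, 0.10))   # forecast near capacity and noisy -> True (migrate)
    print(should_migrate(0.60, 0.05))   # comfortable headroom -> False (stay)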

Journal ArticleDOI
TL;DR: It is shown that an autonomous agent can learn to utilise available network resources such as bandwidth when network saturation occurs at peak times, and to schedule a virtual machine migration depending on the current bandwidth usage in a data centre.
Abstract: Live virtual machine migration can have a major impact on how a cloud system performs, as it consumes a significant amount of network resources, such as bandwidth. A virtual machine migration occurs when a host becomes over-utilised or under-utilised. In this paper, we propose a network-aware live migration strategy that monitors the current demand level of bandwidth when network congestion occurs and performs appropriate actions based on what it is experiencing. An Artificial Intelligence technique based on Reinforcement Learning acts as a decision support system, enabling an agent to learn an optimal time to schedule a virtual machine migration depending on the current bandwidth usage in a data centre. We show from our results that an autonomous agent can learn to utilise available network resources such as bandwidth when network saturation occurs at peak times.

Proceedings ArticleDOI
04 Apr 2017
TL;DR: A reinforcement learning algorithm, Megh, for live migration of virtual machines that simultaneously reduces the cost of energy consumption and enhances the performance and is more cost-effective and time-efficient than the MadVM and MMT algorithms.
Abstract: We propose a reinforcement learning algorithm, Megh, for live migration of virtual machines that simultaneously reduces the cost of energy consumption and enhances performance. Megh learns the uncertain dynamics of workloads as it goes. Megh uses a dimensionality reduction scheme to project the combinatorially explosive state-action space onto a polynomial-dimensional space. These schemes enable Megh to be scalable and to work in real time. We experimentally validate that Megh is more cost-effective and time-efficient than the MadVM and MMT algorithms.

Posted Content
TL;DR: In this paper, the authors present an optimal tunable-complexity bandwidth manager (TCBM) for the QoS live migration of VMs over a wireless channel from a smartphone to an access point, which minimizes the migration-induced communication energy under SLA-induced hard constraints on the total migration time, downtime, and overall available bandwidth.

Abstract: Live virtual machine migration aims at enabling the dynamic, balanced use of the networking/computing physical resources of virtualized data centers, so as to lead to reduced energy consumption. Here, we analytically characterize, prototype in software, and test an optimal bandwidth manager for live migration of VMs over a wireless channel. In this paper, we present the optimal tunable-complexity bandwidth manager (TCBM) for the QoS live migration of VMs over a wireless channel from a smartphone to an access point. The goal is the minimization of the migration-induced communication energy under service level agreement (SLA)-induced hard constraints on the total migration time, downtime, and overall available bandwidth.

Journal ArticleDOI
TL;DR: This paper proposes a new parameter to decide the appropriate time to stop the iterative copy phase based on the real-time situation, and uses a Markov model to forecast the memory access pattern and adjust the memory page transfer order to reduce invalid transfers.

Abstract: Live migration of virtual machines is an important approach for dynamic resource scheduling in cloud environments. The hybrid-copy algorithm is an excellent algorithm that combines the pre-copy algorithm with the post-copy algorithm to remedy the defects of both. Currently, the hybrid-copy algorithm only copies all memory pages once in advance. In a write-intensive workload, copying memory pages once may be enough. However, more iterative copy rounds can significantly reduce the page faults in a read-intensive workload. In this paper, we propose a new parameter to decide the appropriate time to stop the iterative copy phase based on the real-time situation. We use a Markov model to forecast the memory access pattern. Based on the predicted results and the analysis of the actual situation, the memory page transfer order is adjusted to reduce invalid transfers. The novel hybrid-copy algorithm is implemented on the Xen platform. The experimental results demonstrate that our mechanism performs well on both read-intensive and write-intensive workloads.
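
A toy version of the Markov-based page ordering could look like the sketch below, where each page gets a two-state (clean/dirty) transition matrix and the pages least likely to be re-dirtied are transferred first; the per-page matrices and the two-state model are assumptions for illustration.

    import numpy as np

    def dirty_prob(P, state, steps=3):
        """Probability a page is dirty after `steps` tracking intervals, given its
        2-state (clean, dirty) transition matrix P and current state (0=clean, 1=dirty)."""
        v = np.zeros(2)
        v[state] = 1.0
        return (v @ np.linalg.matrix_power(P, steps))[1]

    pages = {   # per-page transition matrix (rows: from clean / from dirty) and current state
        "code":   (np.array([[0.99, 0.01], [0.90, 0.10]]), 0),   # rarely written
        "buffer": (np.array([[0.90, 0.10], [0.50, 0.50]]), 1),
        "heap":   (np.array([[0.70, 0.30], [0.20, 0.80]]), 1),   # write-hot
    }
    order = sorted(pages, key=lambda p: dirty_prob(*pages[p]))
    print(order)    # transfer the least write-prone pages first; the hottest pages go last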

Journal ArticleDOI
TL;DR: This work uses open source code and PHP web programming to implement a resource management system with a power-saving method for virtual machines, constructing a power-efficient virtualization management platform in the cloud.

Journal ArticleDOI
TL;DR: A novel mathematical optimization model to solve the problem of energy efficiency in a cloud data center based on VM migration is introduced and a robust energy efficiency scheduling solution that does not depend on live migration is offered.

Journal ArticleDOI
TL;DR: A Machine Learning based Downtime Optimization (MLDO) approach is proposed which is an adaptive live migration approach based on predictive mechanisms that reduces downtime during live migration over wide area networks for standard workloads.
Abstract: Live virtual machine migration is one of the most promising features of data center virtualization technology. Numerous strategies have been proposed for live migration of virtual machines on local area networks. These strategies work perfectly in their respective domains with negligible downtime. However, these techniques are not suitable for live migration over wide area networks and result in significant downtime. In this paper, we propose a Machine Learning based Downtime Optimization (MLDO) approach, an adaptive live migration approach based on predictive mechanisms that reduces downtime during live migration over wide area networks for standard workloads. The main contribution of our work is to employ machine learning methods to reduce downtime. Machine learning methods are also used to introduce automated learning into the predictive model and adaptive threshold levels. We compare our proposed approach with existing strategies in terms of downtime observed during the migration process and observe improvements in downtime of up to 15%.

Journal ArticleDOI
TL;DR: A memory prediction mechanism is proposed that can choose the pages to migrate in the iterative pre-copy phase or in the stop-and-copy phase based on the related dirty rate, and is able to decide the best time to perform memory migration so as to decrease not only unneeded migrations but also the total migration time.

Journal ArticleDOI
TL;DR: Virtual machine (VM) live migration has been applied to system load balancing in cloud environments for the purpose of minimizing VM downtime and maximizing resource utilization, but the migra ...
Abstract: Virtual machine (VM) live migration has been applied to system load balancing in cloud environments for the purpose of minimizing VM downtime and maximizing resource utilization. However, the migra ...

Journal ArticleDOI
TL;DR: A set of new algorithms for the mapping of VNs onto network substrates, designed to reduce network energy consumption, is introduced, with simulations showing the efficacy of the algorithms.
Abstract: Network virtualization facilitates the deployment of new protocols and applications without the need to change the core of the network. One key step in instantiating virtual networks (VNs) is the allocation of physical resources to virtual elements (routers and links), which can then be targeted for the minimization of energy consumption. However, such mappings need to support the quality-of-service requirements of applications. Indeed, the search for an optimal solution to the VN mapping problem is NP-hard, and approximate algorithms must be developed for its solution. The dynamic allocation and deallocation of VNs on a network substrate can compromise the optimality of a mapping designed to minimize energy consumption, since such allocation and deallocation can lead to the underutilization of the network substrate. To mitigate such negative effects, techniques such as live migration can be employed to rearrange already-mapped VNs in order to improve network utilization, thus minimizing energy consumption. This paper introduces a set of new algorithms for the mapping of VNs onto network substrates designed to reduce network energy consumption. Moreover, two new algorithms for the migration of virtual routers and links are proposed, with simulations showing the efficacy of the algorithms.

Posted Content
TL;DR: Comprehensive performance evaluation makes it evident that the proposed dynamic VM consolidation approach outpaces the state-of-the-art offline, migration-aware, multi-objective dynamic Virtual Machine (VM) consolidation algorithm across all performance metrics.
Abstract: Underutilization of computing resources and high power consumption are two primary challenges in the domain of Cloud resource management. This paper deals with these challenges through offline, migration impact-aware, multi-objective dynamic Virtual Machine (VM) consolidation in the context of large-scale virtualized data center environments. The problem is formulated as an NP-hard discrete combinatorial optimization problem with the simultaneous objectives of minimizing resource wastage, power consumption, and the associated VM migration overhead. Since dynamic VM consolidation through VM live migration has negative impacts on hosted applications' performance and on data center components, a VM live migration overhead estimation technique is proposed that takes into account pragmatic migration parameters and overhead factors. In order to tackle scalability issues, a hierarchical, decentralized dynamic VM consolidation framework is presented that helps to localize migration-related network traffic and reduce network cost. Moreover, a multi-objective dynamic VM consolidation algorithm is proposed by utilizing the Ant Colony Optimization (ACO) metaheuristic, with integration of the proposed VM migration overhead estimation technique. Comprehensive performance evaluation makes it evident that the proposed dynamic VM consolidation approach outpaces the state-of-the-art offline, migration-aware dynamic VM consolidation algorithm across all performance metrics, reducing the overall power consumption by up to 47%, resource wastage by up to 64%, and migration overhead by up to 83%.
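
For a sense of what such an estimate involves, here is a compact sketch based on the standard geometric-series model of pre-copy migration, in which each round retransmits the memory dirtied during the previous round; this is a simplification, since the paper's estimator accounts for additional pragmatic parameters and overhead factors.

    def estimate_migration_overhead(mem_bytes, dirty_rate_bps, bandwidth_bps, rounds=5):
        """Return (bytes transferred, total migration time in s, downtime in s)."""
        r = dirty_rate_bps / bandwidth_bps                  # fraction re-dirtied per round
        transferred = (mem_bytes * (1 - r ** (rounds + 1)) / (1 - r)
                       if r != 1 else mem_bytes * (rounds + 1))
        total_time = transferred / bandwidth_bps
        downtime = mem_bytes * r ** rounds / bandwidth_bps  # final dirty set sent while paused
        return transferred, total_time, downtime

    # Example: 4 GiB VM, 600 MiB/s dirty rate, 1 GiB/s migration bandwidth.
    data, t, d = estimate_migration_overhead(4 * 2**30, 600 * 2**20, 1 * 2**30)
    print(f"{data / 2**30:.2f} GiB moved, {t:.1f} s total, {d:.2f} s downtime")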

Patent
03 Mar 2017
TL;DR: In this article, the authors describe methods, systems, and devices for modifying the monitoring of the health of a data center IP endpoint (such as VM) during live migration of the data centre IP endpoint from a source host to a destination host.
Abstract: Methods, systems, and devices are described herein for modifying the monitoring of the health of a data center IP endpoint (such as VM) during live migration of the data center IP endpoint from a source host to a destination host. In one example, the described techniques may include receiving an indication that a virtual machine is going to be live migrated from a source host to a destination host. Next, evaluation of health probe responses originating from the virtual machine may be suspended for a time period. The time period may be selected based on the live migration. The evaluation of the probe responses originating from the virtual machine may be resumed upon completion of the time period. In some cases, a health probe status of the virtual machine may be migrated from the source host to the destination host.