
Showing papers on "Live migration published in 2021"


Journal ArticleDOI
TL;DR: An efficient resource management framework is proposed that anticipates the resource utilization of servers and balances the load accordingly; it facilitates power saving by minimizing the number of active servers and VM migrations while maximizing resource utilization.
Abstract: The elasticity of cloud resources allows cloud clients to expand and shrink their resource demands dynamically over time. However, fluctuations in resource demands and the pre-defined sizes of virtual machines (VMs) lead to poor resource utilization, load imbalance, and excessive power consumption. To address these issues and improve datacenter performance, an efficient resource management framework is proposed that anticipates the resource utilization of servers and balances the load accordingly. It facilitates power saving by minimizing the number of active servers and VM migrations while maximizing resource utilization. An online resource prediction system is developed and installed at each VM to minimize the risk of Service Level Agreement (SLA) violations and performance degradation due to under/overloaded servers. In addition, multi-objective VM placement and migration algorithms are proposed to reduce network traffic and power consumption within the datacenter. The proposed framework is evaluated by executing experiments on three real-world workload datasets, namely the Google Cluster Dataset, PlanetLab, and Bitbrains traces. Comparison of the proposed framework with state-of-the-art approaches reveals its superiority in terms of different performance metrics. The improvement in power saving achieved by the OP-MLB framework is up to 85.3% over the Best-Fit approach.
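
As a rough illustration of the prediction-plus-load-balancing idea described above, the following Python sketch pairs a per-VM exponential-smoothing forecaster with simple overload/underload thresholds; the class names, thresholds, and the smoothing model itself are illustrative assumptions, not the paper's OP-MLB design.

```python
# Illustrative sketch (not the paper's OP-MLB code): a per-VM online
# utilisation forecaster feeding simple overload/underload checks on a host.

class OnlinePredictor:
    """Exponentially weighted moving average forecaster for one VM."""

    def __init__(self, alpha: float = 0.6) -> None:
        self.alpha = alpha      # weight given to the newest sample
        self.level = None       # last smoothed value

    def update(self, observed: float) -> float:
        """Ingest a new utilisation sample and return the next-step forecast."""
        if self.level is None:
            self.level = observed
        else:
            self.level = self.alpha * observed + (1 - self.alpha) * self.level
        return self.level


def classify_host(predicted_vm_loads, capacity, upper=0.85, lower=0.25):
    """Label a host 'overloaded', 'underloaded' or 'normal' from VM forecasts."""
    utilisation = sum(predicted_vm_loads) / capacity
    if utilisation > upper:
        return "overloaded"
    if utilisation < lower:
        return "underloaded"
    return "normal"


if __name__ == "__main__":
    predictor = OnlinePredictor()
    samples = [0.30, 0.42, 0.55, 0.61, 0.70]          # hypothetical CPU fractions
    forecast = [predictor.update(s) for s in samples][-1]
    # Host runs this VM plus two others with forecasts 0.20 and 0.15.
    print(classify_host([forecast, 0.20, 0.15], capacity=1.0))   # -> overloaded
```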

41 citations


Journal ArticleDOI
TL;DR: This article investigates the online dynamic virtual network function (VNF) mapping and scheduling problem in SAGIN, considering the dynamicity of IoV services, and proposes two Tabu search (TS)-based algorithms, i.e., a TS-based VNF remapping and rescheduling (TS-MAPSCH) algorithm and a TS-based pure VNF rescheduling (TS-PSCH) algorithm, to obtain suboptimal solutions to the MILP problem efficiently.
Abstract: Space-air-ground integrated networks (SAGIN) are deemed as a promising solution to support multifarious internet-of-vehicles (IoV) services with diversified quality-of-service (QoS) requirements in future communication networks. Network function virtualization (NFV) and software-defined networking (SDN) are two complementary and promising technologies to reduce the function provisioning cost and coordinate the heterogeneous physical resources in the SAGIN. In this paper, we investigate the online dynamic virtual network function (VNF) mapping and scheduling in SAGIN, considering the dynamicity of IoV services. The VNF live migration, VNF re-instantiation, and VNF rescheduling are enabled to increase the service acceptance ratio and service provider’s profits. Considering the heterogeneity of space, air and ground nodes, we first model the migration cost and additional delay incurred by VNF live migration and re-instantiation. We then formulate the dynamic VNF mapping and scheduling jointly as a mixed-integer linear programming (MILP) problem with specified cost and delay models. We propose two Tabu search (TS)-based algorithms, i.e., TS-based VNF remapping and rescheduling (TS-MAPSCH) algorithm and TS-based pure VNF rescheduling (TS-PSCH) algorithm, to obtain sub-optimal solutions to the MILP problem efficiently. Simulation results show that the proposed solution is very close to the optimum and that the proposed dynamic algorithms outperform existing works with respect to multiple performance metrics including the service provider’s profit, service acceptance ratio, and QoS satisfaction level.

34 citations


Journal ArticleDOI
TL;DR: In this article, a secure and multiobjective VMP (SM-VMP) framework with efficient VM migration is proposed, which ensures an energy-efficient distribution of physical resources among VMs and emphasizes secure and timely execution of user applications by reducing intercommunication delay.
Abstract: To facilitate cost-effective and elastic computing benefits to cloud users, the energy-efficient and secure allocation of virtual machines (VMs) plays a significant role at the data center. Inefficient VM placement (VMP) and the sharing of common physical machines among multiple users lead to resource wastage, excessive power consumption, increased intercommunication cost, and security breaches. To address the aforementioned challenges, a novel secure and multiobjective VMP (SM-VMP) framework is proposed with efficient VM migration. The proposed framework ensures an energy-efficient distribution of physical resources among VMs and emphasizes secure and timely execution of user applications by reducing intercommunication delay. The VMP is carried out by applying the proposed Whale Optimization Genetic Algorithm (WOGA), inspired by whale evolutionary optimization and nondominated-sorting-based genetic algorithms. The performance evaluation for static and dynamic VMP and comparison with recent state-of-the-art approaches show a notable reduction in shared servers, intercommunication cost, power consumption, and execution time of up to 28.81%, 25.7%, 35.9%, and 82.21%, respectively, with an increase in resource utilization of up to 30.21%.
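
As a toy illustration of the nondominated-sorting idea behind WOGA-style multiobjective placement, the following Python sketch extracts the Pareto front from a set of candidate placements; the objective pair used below is assumed for the example, not taken from the paper.

```python
# Illustrative sketch: the nondominated-sorting step a WOGA-style multiobjective
# placement search could use to rank candidate VM placements. The objective
# pair (power, intercommunication cost) is assumed here; both are minimised.

def dominates(a, b):
    """True if placement `a` is no worse than `b` everywhere and better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))


def pareto_front(candidates):
    """Return the nondominated candidates as (index, objective-vector) pairs."""
    return [(i, ci) for i, ci in enumerate(candidates)
            if not any(dominates(cj, ci)
                       for j, cj in enumerate(candidates) if j != i)]


if __name__ == "__main__":
    # (power_kw, intercommunication_cost) of four hypothetical placements
    placements = [(3.2, 40.0), (2.9, 55.0), (3.5, 38.0), (3.0, 42.0)]
    print(pareto_front(placements))
```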

33 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed a framework that enables migration of containerized virtual EPC components using an open-source migration solution which does not fully support the mobile network protocol stack yet.
Abstract: With the increasing demand for openness, flexibility, and monetization, the Network Function Virtualization (NFV) of mobile network functions has become the embracing factor for most mobile network operators. Early reported field deployments of virtualized Evolved Packet Core (EPC) — the core network (CN) component of 4G LTE and 5G non-standalone mobile networks — reflect this growing trend. To best meet the requirements of power management, load balancing, and fault tolerance in the cloud environment, the need for live migration of these virtualized components cannot be shunned. Virtualization platforms of interest include both Virtual Machines (VMs) and Containers, with the latter option offering more lightweight characteristics. This paper’s first contribution is the proposal of a framework that enables migration of containerized virtual EPC components using an open-source migration solution which does not yet fully support the mobile network protocol stack. The second contribution is a comprehensive experimental analysis of live migration in two virtualization technologies — VM and Container — with additional scrutiny of the container migration approach. The presented experimental comparison accounts for several system parameters and configurations: flavor (image) size, network characteristics, processor hardware architecture model, and the CPU load of the backhaul network components. The comparison reveals that the live migration completion time and the end-user service interruption time of the virtualized EPC components are reduced by approximately 70% on the container platform when using the proposed framework.

22 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed a normalization-based VM consolidation (NVMC) strategy that aims at placing virtual machines in an online manner while minimizing energy consumption, SLA violations, and the number of VM migrations.
Abstract: Cloud computing environments rely heavily on virtualization, which enables physical hardware resources to be shared among cloud users by creating virtual machines (VMs). With an overloaded physical machine, the resource requests of virtual machines may not be fulfilled, which results in Service Level Agreement (SLA) violations. Moreover, the high-performance servers in cloud data centers consume a large amount of energy. Dynamic VM consolidation techniques use live migration of virtual machines to optimize resource utilization and minimize energy consumption. An excessive number of VM migrations may, however, deteriorate application performance due to the overhead incurred at runtime. In this paper, we propose a normalization-based VM consolidation (NVMC) strategy that aims at placing virtual machines in an online manner while minimizing energy consumption, SLA violations, and the number of VM migrations. The proposed strategy uses resource parameters for determining over-utilized hosts in a virtualized cloud environment. The comparative capacity of virtual machines and hosts is incorporated for determining over-utilized hosts, while the cumulative available-to-total ratio (CATR) is used to find under-utilized hosts. For migrating virtual machines to appropriate hosts, the VM placement uses criteria based on normalized resource parameters of hosts and virtual machines. For evaluating the performance of VM consolidation, we have performed experiments with a large number of virtual machines using traces from the PlanetLab workloads. The results show that the NVMC approach outperforms other well-known approaches by achieving a significant improvement in energy consumption, SLA violations, and number of VM migrations.
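
The exact CATR formula is not given in the abstract; assuming it is the aggregate free capacity over aggregate total capacity across a host's resource dimensions, a minimal Python sketch of the under-utilized-host check could look like this:

```python
# Illustrative sketch (assumed formulation): the cumulative available-to-total
# ratio (CATR) idea, i.e. aggregate free capacity over aggregate total capacity
# across a host's resource dimensions, used to flag under-utilised hosts.

def catr(used, total):
    """Cumulative available-to-total ratio over resource dimensions (CPU, RAM, ...)."""
    available = sum(t - u for u, t in zip(used, total))
    return available / sum(total)


def find_underutilised(hosts, threshold=0.7):
    """Hosts whose CATR exceeds the threshold are consolidation candidates."""
    return [name for name, (used, total) in hosts.items()
            if catr(used, total) > threshold]


if __name__ == "__main__":
    hosts = {
        # host: ([cpu_cores_used, ram_gb_used], [cpu_cores_total, ram_gb_total])
        "h1": ([2.0, 8.0], [16.0, 64.0]),
        "h2": ([12.0, 50.0], [16.0, 64.0]),
    }
    print(find_underutilised(hosts))   # -> ['h1']
```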

20 citations


Journal ArticleDOI
TL;DR: SLAMIG, as described in this paper, is a set of algorithms that combines a deadline-aware multiple-migration grouping algorithm with online migration scheduling to determine the sequence of VM/VNF migrations.

20 citations


Book ChapterDOI
18 May 2021
TL;DR: In this paper, the authors propose using WebAssembly to implement lightweight containers and deliver the required portability for liquid IoT applications, which can offer seamless, hassle-free use of multiple devices.
Abstract: Going all the way to IoT with web technologies opens up the door to isomorphic IoT system architectures, which deliver flexible deployment and live migration of code between any device in the overall system. In this vision paper, we propose using WebAssembly to implement lightweight containers and deliver the required portability. Our long-term vision is to use the technology to support developers of liquid IoT applications offering seamless, hassle-free use of multiple devices.

19 citations


Journal ArticleDOI
TL;DR: A cluster-based genetic algorithm is developed that clusters the population of the current generation and selects individuals from different groups with reduced crossover operations; it outperforms the traditional genetic algorithm in terms of both accuracy and efficiency.

15 citations


Journal ArticleDOI
15 Apr 2021-Symmetry
TL;DR: In this paper, an Efficient Adaptive Migration Algorithm (EAMA) is proposed for effective migration and placement of VMs on the Physical Machines (PMs) dynamically.
Abstract: The rapid demand for Cloud services resulted in the establishment of large-scale Cloud Data Centers (CDCs), which ultimately consume a large amount of energy. An enormous amount of energy consumption eventually leads to high operating costs and carbon emissions. To reduce energy consumption with efficient resource utilization, various dynamic Virtual Machine (VM) consolidation approaches (i.e., Predictive Anti-Correlated Placement Algorithm (PACPA), Resource-Utilization-Aware Energy Efficient (RUAEE), Memory-bound Pre-copy Live Migration (MPLM), Mixed migration strategy, Memory/disk operation aware Live VM Migration (MLLM), etc.) have been considered. Most of these techniques perform aggressive VM consolidation that eventually results in performance degradation of CDCs in terms of resource utilization and energy consumption. In this paper, an Efficient Adaptive Migration Algorithm (EAMA) is proposed for effective migration and placement of VMs on Physical Machines (PMs) dynamically. The proposed approach has two distinct features: first, selection of PM locations with optimum access delay to which the VMs are to be migrated, and second, reduction of the number of VM migrations. Extensive simulation experiments have been conducted using the CloudSim toolkit. The results of the proposed approach are compared with the PACPA and RUAEE algorithms in terms of Service-Level Agreement (SLA) violation, resource utilization, number of hosts shut down, and energy consumption. Results show that the proposed EAMA approach significantly reduces the number of migrations by 16% and 24% and SLA violations by 20% and 34%, and increases resource utilization by 8% to 17% with the number of hosts shut down increased by 10% to 13%, as compared to PACPA and RUAEE, respectively. Moreover, a 13% improvement in energy consumption has also been observed.

14 citations


Journal ArticleDOI
TL;DR: A new model using Geometric Programming that allocates transfer and compression rates to each VM to minimize the total migration time and downtime is presented and it is found that memory compression, along with pre-copy migration, improves the performance of the live migration of multiple virtual machines.
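
The entry above allocates transfer and compression rates to shape migration time. As a rough, standard iterative pre-copy model (not the paper's Geometric Programming formulation), the following Python sketch shows how a higher effective compression ratio shortens both total migration time and downtime; all the numbers are hypothetical.

```python
# Illustrative sketch (standard iterative pre-copy model, not the paper's
# Geometric Programming formulation): how a higher compression ratio raises the
# effective transfer rate and shortens every copy round. All numbers are
# hypothetical; rates are in MB/s and sizes in MB.

def precopy_times(mem_mb, dirty_rate, link_rate, compression_ratio=1.0,
                  stop_threshold_mb=50.0, max_rounds=30):
    """Return (total_migration_time_s, downtime_s) for iterative pre-copy."""
    effective_rate = link_rate * compression_ratio   # compression boosts throughput
    remaining = mem_mb
    total = 0.0
    for _ in range(max_rounds):
        round_time = remaining / effective_rate      # send the current dirty set
        total += round_time
        remaining = dirty_rate * round_time          # pages dirtied meanwhile
        if remaining <= stop_threshold_mb:
            break
    downtime = remaining / effective_rate            # final stop-and-copy round
    return total + downtime, downtime


if __name__ == "__main__":
    for ratio in (1.0, 1.5, 2.0):                    # hypothetical compression ratios
        t, d = precopy_times(4096, dirty_rate=200, link_rate=1000,
                             compression_ratio=ratio)
        print(f"compression x{ratio}: total {t:.2f}s, downtime {d:.3f}s")
```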

13 citations


Journal ArticleDOI
TL;DR: BLMVisor is introduced, a live migration scheme for bare-metal clouds that utilizes a very thin hypervisor which exposes physical hardware devices to the guest OS directly rather than virtualizing the devices.
Abstract: Live migration allows a running operating system (OS) to be moved to another physical machine with negligible downtime. Unfortunately, live migration is not supported in bare-metal clouds, which lease physical machines rather than virtual machines to offer maximum hardware performance. Since bare-metal clouds have no virtualization software, implementing live migration is difficult. Previous studies have proposed OS-level live migration; however, to prevent user intervention and broaden OS choices, live migration should be OS-independent. In addition, the overhead of live migration mechanisms should be as low as possible. This paper introduces BLMVisor, a live migration scheme for bare-metal clouds. To achieve OS-independent and lightweight live migration, BLMVisor utilizes a very thin hypervisor that exposes physical hardware devices to the guest OS directly rather than virtualizing the devices. The hypervisor captures, transfers, and reconstructs physical device states by monitoring access from the guest OS and controlling the physical devices with effective techniques. To minimize performance degradation, the hypervisor is mostly idle after completing the live migration. A performance evaluation confirmed that the OS performance with BLMVisor is comparable to that of a bare-metal machine.

Proceedings ArticleDOI
TL;DR: In this paper, the authors use Docker and CRIU for checkpointing and suspending long-running blocking functions in edge FaaS, whose benefits include reduced communication latency, less network traffic, and increased privacy for data processing.
Abstract: The serverless and functions as a service (FaaS) paradigms are currently trending among cloud providers and are now increasingly being applied to the network edge, and to the Internet of Things (IoT) devices. The benefits include reduced latency for communication, less network traffic and increased privacy for data processing. However, there are challenges as IoT devices have limited resources for running multiple simultaneous containerized functions, and also FaaS does not typically support long-running functions. Our implementation utilizes Docker and CRIU for checkpointing and suspending long-running blocking functions. The results show that checkpointing is slightly slower than regular Docker pause, but it saves memory and allows for more long-running functions to be run on an IoT device. Furthermore, the resulting checkpoint files are small, hence they are suitable for live migration and backing up stateful functions, therefore improving availability and reliability of the system.

Journal ArticleDOI
TL;DR: This study presents a taxonomy comprising resource assignment methods, metrics, objective functions, migration methods, algorithmic methods, co-location criteria of VMs, architectures, workload datasets, and evaluation approaches for VM consolidation in CCSs.

Journal ArticleDOI
TL;DR: An online prediction method based on map data that does not rely on prior knowledge such as user trajectories is proposed to address the challenge of mobility prediction accuracy; the resulting system reduces network traffic by 65% while meeting task delay requirements.
Abstract: Mobile edge computing (MEC) pushes computing resources to the edge of the network and distributes them at the edge of the mobile network. Offloading computing tasks to the edge instead of the cloud can reduce computing latency and backhaul load simultaneously. However, new challenges arise from user mobility and the limited coverage of MEC servers. Services should be dynamically migrated between multiple MEC servers to maintain service performance as users move. Tackling this problem is nontrivial because it is arduous to predict user movement, and service migration generates service interruptions and redundant network traffic. Service interruption time must be minimized, and redundant network traffic should be reduced to ensure service quality. In this paper, container live migration technology based on prediction is studied, and an online prediction method based on map data that does not rely on prior knowledge such as user trajectories is proposed to address the challenge of mobility prediction accuracy. A multitier framework and scheduling algorithm are designed to select MEC servers according to the moving speeds of users and the latency requirements of offloaded tasks, in order to reduce redundant network traffic. Based on a map of Beijing, extensive experiments are conducted using simulation platforms and a real-world data trace. Experimental results show that our online prediction method performs better than the common strategy. Our system reduces network traffic by 65% while meeting task delay requirements. Moreover, it can flexibly respond to changes in the user’s moving speed and environment to ensure the stability of the offloaded service.
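
As a rough illustration of the speed- and latency-aware server selection the multitier framework describes, a minimal Python sketch follows; the tiers, coverage radii, latencies, and dwell-time rule are made-up assumptions, not the paper's parameters or algorithm.

```python
# Illustrative sketch (assumed policy, not the paper's algorithm): a multitier
# MEC scheduler that keeps fast-moving users on wider-coverage, higher-tier
# servers to avoid repeated migrations, subject to the task's latency budget.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Tier:
    name: str
    coverage_km: float      # radius served by one server of this tier
    latency_ms: float       # typical round-trip latency to this tier


TIERS = [
    Tier("edge", 1.0, 5.0),
    Tier("aggregation", 5.0, 15.0),
    Tier("regional", 20.0, 40.0),
]


def pick_tier(speed_kmh: float, latency_budget_ms: float,
              dwell_target_s: float = 120.0) -> Optional[str]:
    """Pick the lowest tier whose coverage keeps the user for `dwell_target_s`
    and whose latency still fits the task's budget."""
    for tier in TIERS:
        dwell_s = tier.coverage_km / max(speed_kmh, 1e-6) * 3600.0
        if dwell_s >= dwell_target_s and tier.latency_ms <= latency_budget_ms:
            return tier.name
    return None   # no tier satisfies the budget; fall back to the cloud


if __name__ == "__main__":
    print(pick_tier(speed_kmh=5, latency_budget_ms=10))    # pedestrian -> edge
    print(pick_tier(speed_kmh=60, latency_budget_ms=50))   # vehicle -> aggregation
```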

Book ChapterDOI
01 Jan 2021
TL;DR: In this paper, a taxonomy for container migration is presented, and a survey is carried out on the basis of the proposed taxonomy, helping to identify sustainable solutions for container migration.
Abstract: Container virtualization is a technique to run multiple processes in an isolated manner. Containers gained popularity for improved application management and deployment because of their lightweight environment, flexible deployment, and fine-grained sharing of resources. Organizations use containers extensively to deploy their increasingly complex workloads, resulting from new technologies including online infrastructure, big data, and the Internet of Things, in controlled clusters or data centers in the private and public cloud. This opens the possibility of saving a container’s entire state and restarting it later. Checkpointing is used to perform live migration of containers: it allows the state of a running container to be saved and later restarted on the same or a separate host. Checkpointing and restart are handled through multiple dumps, and the process is transparent to running applications and network connections. In this paper, we present a taxonomy for container migration. Further, a survey is carried out on the basis of the proposed taxonomy, helping to identify sustainable solutions. Future directions are identified to facilitate researchers in this field.

Journal ArticleDOI
TL;DR: Sova is an autonomic framework that can combine virtual dynamic SR-IOV and virtual machine live migration for virtual network allocation in data centers, and it exploits the advantages of both techniques to match and even beat the better performance of each individual technology by adapting to VM workload changes.
Abstract: With the rise of network virtualization, the workloads deployed in data centers have changed dramatically to support diverse service-oriented applications, which are in general characterized by time-bounded service responses that in turn put a great burden on data-center networks. Although numerous techniques have been proposed to optimize virtual network allocation in data centers, research on coordinating them in a flexible and effective way to autonomically adapt to the workloads for service time reduction is few and far between. To address these issues, in this article we propose Sova, an autonomic framework that can combine virtual dynamic SR-IOV (DSR-IOV) and virtual machine live migration (VLM) for virtual network allocation in data centers. DSR-IOV is an SR-IOV-based virtual network allocation technology, but its operation scope is limited to a single physical machine, which could lead to local hotspot issues in the course of computation and communication, likely increasing the service response time. In contrast, VLM is an often-used virtualization technique to optimize global network traffic via VM migration. Sova exploits a software-defined approach to combine these two technologies, with the goal of reducing the service response time. To realize this autonomic coordination, the architecture of Sova is designed around the MAPE-K loop of autonomic computing. With this design, Sova can adaptively optimize the network allocation between different services by coordinating DSR-IOV and VLM in an autonomic way, depending on the resource usage of physical servers and the network characteristics of VMs. To this end, Sova monitors the network traffic as well as the workload characteristics in the cluster, whereby the network properties are derived on the fly to direct the coordination between the two technologies. Our experiments show that Sova can exploit the advantages of both techniques to match and even beat the better performance of each individual technology by adapting to VM workload changes.
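
A minimal sketch of a MAPE-K-style control loop of the kind described above, choosing between a cheap local network-allocation adjustment and a heavier global one (live migration); the Monitor/Executor stubs, thresholds, and decision rule are illustrative assumptions, not Sova's implementation.

```python
# Illustrative sketch: a MAPE-K style control loop of the kind Sova describes,
# choosing a cheap local adjustment (DSR-IOV-like) or a heavier global one
# (live migration) from monitored metrics. The Monitor/Executor stubs, the
# thresholds and the decision rule are assumptions, not Sova's implementation.

class Monitor:
    def sample(self):
        """Return per-host metrics; stubbed here with static values."""
        return {"host1": {"net_util": 0.92, "cpu_util": 0.55},
                "host2": {"net_util": 0.30, "cpu_util": 0.40}}


class Executor:
    def tune_sriov(self, host):     # local, cheap adjustment
        print(f"[execute] rebalance virtual functions on {host}")

    def live_migrate(self, host):   # global, heavier adjustment
        print(f"[execute] migrate a VM away from {host}")


def mape_k_iteration(monitor, executor, local_thr=0.8, global_thr=0.95):
    metrics = monitor.sample()                      # Monitor
    hotspots = {h: m for h, m in metrics.items()    # Analyse
                if m["net_util"] > local_thr}
    for host, m in hotspots.items():                # Plan
        if m["net_util"] > global_thr:
            executor.live_migrate(host)             # Execute (global knob)
        else:
            executor.tune_sriov(host)               # Execute (local knob)


if __name__ == "__main__":
    mape_k_iteration(Monitor(), Executor())         # one pass of the control loop
```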

Proceedings ArticleDOI
21 Apr 2021
TL;DR: HyperTP, as described in this paper, is a generic framework which combines in a unified way two approaches: in-place server micro-reboot-based hypervisor transplant (noted InPlaceTP) and live VM migration-based hypervisor transplant (noted MigrationTP).
Abstract: The vulnerability window of a hypervisor regarding a given security flaw is the time between the identification of the flaw and the integration of a correction/patch in the running hypervisor. Most vulnerability windows, regardless of severity, are long enough (several days) that attackers have time to perform exploits. Nevertheless, the number of critical vulnerabilities per year is low enough to allow an exceptional solution. This paper introduces hypervisor transplant, a solution for addressing vulnerability window of critical flaws. It involves temporarily replacing the current datacenter hypervisor (e.g., Xen) which is subject to a critical security flaw, by a different hypervisor (e.g., KVM) which is not subject to the same vulnerability. We build HyperTP, a generic framework which combines in a unified way two approaches: in-place server micro-reboot-based hypervisor transplant (noted InPlaceTP) and live VM migration-based hypervisor transplant (noted MigrationTP). We describe the implementation of HyperTP and its extension for transplanting Xen with KVM and vice versa. We also show that HyperTP is easy to integrate with the OpenStack cloud computing platform. Our evaluation results show that HyperTP delivers satisfactory performance: (1) MigrationTP takes the same time and impacts virtual machines (VMs) with the same performance degradation as normal live migration. (2) the downtime imposed by InPlaceTP on VMs is in the same order of magnitude (1.7 seconds for a VM with 1 vCPU and 1 GB of RAM) as in-place upgrade of homogeneous hypervisors based on server micro-reboot.

Book ChapterDOI
01 Jan 2021
TL;DR: In this paper, a comparative study of various methods for migrating data from one node server to another is presented, together with optimizations for live virtual machine migration techniques.
Abstract: Efficient management of the cloud requires virtualization at the emergent stage of cloud computing, where resources such as memory, servers, and virtual machines are shared across the World Wide Web. Facilities such as load balancing, auto-scaling, and fault tolerance in cloud computing require live migration of virtual machines. Migrating virtual machines from one node to another without suspending them is an important feature of cloud computing, so that users do not have to deal with any kind of service downtime. In this research paper, a comparative study of various methods for migrating data from one node server to another is carried out, and optimizations of live virtual machine migration techniques are then examined. An optimized approach is thereby identified that is beneficial for live migration of virtual machines without affecting the node servers in a cloud computing environment.

Proceedings ArticleDOI
25 Mar 2021
TL;DR: In this paper, an optimization of an existing genetic algorithm (GA), mainly intended for VM resource provisioning and load balancing, is introduced; the proposed OGA_EAVRC considers population size, fitness function, mutation probability, and resource success rate for optimization.
Abstract: Efficient allocation of available virtual resources to diverse users is a key challenge in a controlled and collaborative cloud environment. Likewise, balancing the load among resources and mapping these virtual resources to physical machines is an even bigger challenge in the present distributed computing arena. Numerous approaches, including genetic algorithms, have been introduced by different researchers for dealing with these challenges, but their scope was limited to certain specific performance elements. Hence, there is a need to optimize existing research implementations for efficient allocation of virtualized resources in the cloud computing environment. Usually, in a typical distributed computing environment like cloud computing, allocation of virtual resources and balancing of workload among them are realized by means of live migration of virtual machines. This article introduces an optimization of an existing genetic algorithm (GA) mainly intended for VM resource provisioning and load balancing. The proposed OGA_EAVRC considers population size, fitness function, mutation probability, and resource success rate for optimizing performance through efficient resource allocation. The key objective of this work is to utilize each physical resource effectively and allocate it to end users efficiently. For studying the operational performance of OGA_EAVRC, the event-based CloudSim simulator was chosen. Simulation results show that the proposed OGA_EAVRC can efficiently allocate the workload among virtualized resources while reducing VM migrations among the physical machines.

Proceedings ArticleDOI
16 Apr 2021
TL;DR: In this paper, page reference logging (PRL), an extended version of page modification logging (PML), is introduced to track both read and write memory accesses without impacting user VMs; with the current PML, write-intensive applications suffer up to 34.9% performance degradation when PML is used to estimate a VM's WSS.
Abstract: Intel page modification logging (PML) is a hardware feature introduced in 2015 for tracking modified memory pages of virtual machines (VMs). Although initially designed to improve VMs checkpointing and live migration, we present in this paper how we can take advantage of this virtualization technology to efficiently estimate the working set size (WSS) of a VM. To this end, we first conduct a study of PML with the Xen hypervisor to investigate its performance impact on VMs and the accuracy of a WSS estimation system that relies on the current version of PML. Our three main findings are as follows. (1) PML reduces by up to 10.18% the time of both VM live migration and checkpointing. (2) PML slightly reduces the negative impact of live migration on application performance by up to 0.95%. (3) A WSS estimation system based on the current version of PML provides inaccurate results. Moreover, our experiments show that write-intensive applications are negatively impacted, with up to 34.9% of performance degradation, when using PML to estimate the WSS of a VM that runs these applications. Based on the aforementioned findings, we introduce page reference logging (PRL), an extended version of PML that allows both read and write memory accesses to be tracked without impacting user VMs, thus more suitable for WSS estimation. We propose a WSS estimation system that leverages PRL and show how it can be used in a data center exploiting memory overcommitment. We implement PRL and the underlying WSS estimation system under Gem5, a popular open-source computer architecture simulator. Evaluation results validate the accuracy of the WSS estimation system and show that PRL does not incur more performance degradation on user’s VMs.
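
As a rough illustration of working-set-size estimation from a hardware page log of the PML/PRL kind, a small Python sketch follows; the sliding-window policy, window length, and 4 KiB page size are assumptions for the example, not the paper's estimator.

```python
# Illustrative sketch (not Intel's interface): estimating a VM's working set
# size from a stream of logged page-frame numbers, the way a PML/PRL-style
# hardware log could feed a WSS estimator.

from collections import deque

PAGE_SIZE = 4096   # bytes


def wss_over_window(page_log, window):
    """Sliding-window WSS: distinct pages referenced in the last `window`
    log entries, reported in MiB after each entry."""
    recent = deque(maxlen=window)
    estimates = []
    for pfn in page_log:
        recent.append(pfn)
        estimates.append(len(set(recent)) * PAGE_SIZE / (1024 * 1024))
    return estimates


if __name__ == "__main__":
    # hypothetical log: a small hot set plus the occasional cold page
    log = [1, 2, 3, 1, 2, 3, 99, 1, 2, 3, 1, 2, 100, 3]
    print(f"current WSS estimate: {wss_over_window(log, window=8)[-1]:.4f} MiB")
```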

Proceedings ArticleDOI
25 Mar 2021
TL;DR: In this article, the authors focus on the attacks and countermeasures associated with virtualization and find challenges to be addressed with respect to three kinds of attacks: VM side channel attack, hypervisor attacks, and VM live migration attacks.
Abstract: Cloud computing is widely used across industries due to its benefits such as scalability, availability, and powerful resource integration. Though it is based on a cost model, it became affordable due to virtualization technology. However, virtualization has caused many security issues at both the Hypervisor (HV) level and the Virtual Machine (VM) level. Security issues arise due to vulnerabilities that are exploited by adversaries to launch different kinds of attacks, leading to deterioration of Quality of Service (QoS) in cloud computing. Apart from virtualization-related security issues, cloud computing has evidenced data-level and communication-level security challenges. This paper focuses on the attacks and countermeasures that are associated with virtualization. The proposed research work also provides useful insights into the current state of the art and identifies challenges to be addressed with respect to three kinds of attacks: VM side-channel attacks, a class of attacks arising from shared usage of hardware by VMs; hypervisor attacks, which occur due to compromised VMs; and attacks targeting VM live migration, which exploit dynamic VM allocation schemes. The findings in this paper motivate further investigation into these specific attacks to improve the state of the art.

Journal ArticleDOI
TL;DR: A new prediction-based model to manage the live migration process of VMs is introduced that dynamically identifies the optimal live migration algorithm for a given performance metric based on a prior diagnosis of the system.
Abstract: Live migration of virtual machines proves to be inexorable in providing load balancing among physical devices and allowing scalability and flexibility in resource allocation. The existing approaches exhibit different policies, distinct performance characteristics, and side effects such as power consumption and performance degradation. Therefore, determining the most optimal live migration algorithm in certain situations remains an open challenge. In this work, a new prediction-based model to manage the live migration process of VMs is introduced. Our adaptive model dynamically identifies the optimal live migration algorithm for a given performance metric based on a prior diagnosis of the system. The model is developed by considering the assumption of different workloads alongside certain resource constraints for any of the currently available migration algorithms. The proposed model consists of an ensemble-learning strategy that involves linear and non-parametric regression methods to predict six live migration key metrics, provided by the operator and/or the user, for each live migration algorithm. Our model makes it possible to consider the best combination, constituted of the algorithm-metric pair, to migrate a VM. The experimental results show that the proposed model significantly alleviates the service level agreement violation rate, by between 31% and 60%, while also decreasing the total CPU time required for the prediction process.
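
A minimal sketch of the ensemble idea described above, assuming scikit-learn and NumPy and using synthetic data; the features, the per-algorithm downtime models, and the two-member ensemble (linear plus random forest) are illustrative assumptions rather than the paper's exact six-metric design.

```python
# Illustrative sketch (assumes scikit-learn; not the paper's exact ensemble):
# train a small linear + non-parametric ensemble per migration algorithm to
# predict one metric (here, downtime) from workload features, then pick the
# algorithm with the best predicted value for a new VM.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)


def make_ensemble(X, y):
    """Average of a linear and a non-parametric regressor, as a toy ensemble."""
    models = [LinearRegression().fit(X, y),
              RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)]
    return lambda x: float(np.mean([m.predict(x)[0] for m in models]))


# Synthetic training data: features = (dirty-page rate MB/s, memory size GB).
X = rng.uniform([10.0, 1.0], [500.0, 32.0], size=(200, 2))
downtime = {   # assumed per-algorithm downtime behaviour, in seconds
    "pre-copy":  0.02 * X[:, 0] + 0.10 * X[:, 1] + rng.normal(0, 0.5, 200),
    "post-copy": 0.20 + 0.02 * X[:, 1] + rng.normal(0, 0.1, 200),
}
predictors = {name: make_ensemble(X, y) for name, y in downtime.items()}


def best_algorithm(features):
    """Pick the algorithm with the lowest predicted downtime for this VM."""
    scores = {name: p(np.array([features])) for name, p in predictors.items()}
    return min(scores, key=scores.get), scores


if __name__ == "__main__":
    # A write-heavy 16 GB VM: post-copy is expected to win on downtime here.
    print(best_algorithm([400.0, 16.0]))
```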

Journal ArticleDOI
TL;DR: The virtual CPU scheduling for post-copy (VSCP) framework is presented; it reduces the speed of the virtual CPU in post-copy to strike a balance between the processing speed of pages in the target machine and their transmission speed from the source machine.
Abstract: The live migration of virtual machines among physical machines aims at efficient utilization of resources, load balancing, maintenance, energy management, fault tolerance, sharing of resources, and mobile computing. There are several methods for the live migration of virtual machines. In the post-copy approach, a virtual machine starts working in a target host; when data that is not present in the target machine is requested, a page fault occurs and processing stops in the target until the arrival of the requested information. Certainly, the number of stops and lags negatively affects system downtime. The reduction of physical or virtual CPU frequency is one of the techniques recommended for efficient migration of virtual machines. Modification of the frequency of the physical and/or virtual CPU has already been used to improve the pre-copy method by managing the speed of changes in the source machine. This paper presents the virtual CPU scheduling for post-copy (VSCP) framework, which reduces the speed of the virtual CPU in post-copy to balance the processing speed of pages in the target machine against their transmission speed from the source machine. VSCP is also compared with the baseline post-copy-prefetching and post-copy methods. In the experiments, the system downtime, total migration time, total pages transferred, and system throughput are evaluated. The results indicate that the proposed method improves system downtime by up to 8.17%, total migration time by up to 30.33%, total pages transferred by up to 54.49%, and system throughput by up to 23.65%.

Posted Content
TL;DR: In this paper, a concurrency-aware multiple migration selector that operates based on the maximal cliques and independent sets of the resource dependency graph of multiple migration requests is proposed to maximize the multiple migration performance while achieving the objective of dynamic resource management.
Abstract: By integrating Software-Defined Networking and cloud computing, virtualized networking and computing resources can be dynamically reallocated through live migration of Virtual Machines (VMs). Dynamic resource management such as load balancing and energy-saving policies can request multiple migrations when the algorithms are triggered periodically. There exist notable research efforts in dynamic resource management that alleviate single migration overheads, such as single migration time and co-location interference while selecting the potential VMs and migration destinations. However, by neglecting the resource dependency among potential migration requests, the existing solutions of dynamic resource management can result in the Quality of Service (QoS) degradation and Service Level Agreement (SLA) violations during the migration schedule. Therefore, it is essential to integrate both single and multiple migration overheads into VM reallocation planning. In this paper, we propose a concurrency-aware multiple migration selector that operates based on the maximal cliques and independent sets of the resource dependency graph of multiple migration requests. Our proposed method can be integrated with existing dynamic resource management policies. The experimental results demonstrate that our solution efficiently minimizes migration interference and shortens the convergence time of reallocation by maximizing the multiple migration performance while achieving the objective of dynamic resource management.
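
As a rough illustration of the clique/independent-set idea in the abstract above (assuming the networkx library, and a simplified notion of dependency where two migrations conflict if they share a source or destination host), a short Python sketch:

```python
# Illustrative sketch (assumes networkx): build a resource-dependency graph over
# pending migration requests (an edge means two migrations share a host and
# should not run together) and pick a maximal independent set to run concurrently.

import networkx as nx

# hypothetical migration requests: name -> (vm, source_host, target_host)
requests = {
    "m1": ("vm1", "h1", "h2"),
    "m2": ("vm2", "h1", "h3"),   # shares source h1 with m1
    "m3": ("vm3", "h4", "h5"),
    "m4": ("vm4", "h5", "h2"),   # shares h5 with m3 and h2 with m1
}

G = nx.Graph()
G.add_nodes_from(requests)
for a in requests:
    for b in requests:
        if a < b and set(requests[a][1:]) & set(requests[b][1:]):
            G.add_edge(a, b)     # dependency: cannot run concurrently

# One batch of non-conflicting migrations; remaining requests wait for later batches.
concurrent_batch = nx.maximal_independent_set(G, seed=1)
print("run concurrently:", sorted(concurrent_batch))
```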


Journal ArticleDOI
TL;DR: The pre-copy live migration algorithm is implemented to provide a test environment for live job migration in a high throughput computing (HTC) system, and details of the extension into the HTC-Sim simulation framework are presented.

Proceedings ArticleDOI
04 Oct 2021
TL;DR: An overview of current migration techniques and metrics that can be used for comparison is presented, together with a proof-of-concept implementation and description of a system that enables support for both live migration and failover for containers by extending current container migration techniques.
Abstract: A key aspect of the cloud is its flexibility and abstraction of the underlying hardware. Historically, virtual machines have been the backbone of the cloud industry, allowing cloud providers to offer virtualized multi-tenant solutions. Today, virtualization by lightweight process containers continues to increase in popularity and makes up a larger portion of the cloud, often replacing virtual machines, especially in fog and edge computing. Virtualization can enhance flexibility by enabling support for live migration and failover. Live migration is the process of moving a running instance of a virtual machine or container from one host to another, and failover ensures that failures are automatically detected and the instance restarted, possibly on another host. This paper presents an overview of current migration techniques, and metrics that can be used for comparison. We also present a proof-of-concept implementation and description of a system that enables support for both live migration and failover for containers by extending current container migration techniques. It is able to offer this to any OCI-compliant container, and could therefore potentially be integrated into current container frameworks. In addition, measurements are provided and used to compare the proof-of-concept implementation to the pre-copy migration technique. We achieve a downtime equal to, and total migration time lower than, that of pre-copy migration at the cost of an increased amount of data needing to be transferred.

Proceedings ArticleDOI
16 Apr 2021
TL;DR: AdaMig, as described in this paper, dynamically switches migration methods and tunes related parameters by monitoring run-time statistics from the migration process and the physical host; once it detects that migration cannot converge, it switches to another migration method to synchronize the remaining dirty pages.
Abstract: Live migration is a crucial feature in existing virtualization platforms. Since memory is dirtied rapidly during the execution of a virtual machine (VM), boosting memory migration speed becomes a significant factor in guaranteeing a high success ratio and efficiency. However, a statically configured migration strategy cannot cope with the various workloads running in VMs, resulting in frequently aborted migration processes and a low success ratio. This paper proposes a one-for-all migration architecture called Adaptive Live Migration (AdaMig) to address these issues. This QEMU-based solution dynamically switches migration methods and tunes related parameters by monitoring run-time statistics from the migration process and the physical host. Once AdaMig detects a tendency for migration not to converge, it switches to another migration method to synchronize the remaining dirty pages. During the whole process, AdaMig also dynamically tunes migration parameters according to the resources currently available in the physical host and the migration efficiency. Experimental results show that AdaMig improves the success ratio from 26.7% to 93.3% over various workloads, and migration time is reduced by up to 45.5% in comparison with the original solution in QEMU.
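
A tiny sketch of the convergence check an AdaMig-like controller might apply per copy round; the statistics, the patience window, and the "switch" action are assumptions for illustration, not QEMU's or AdaMig's actual logic.

```python
# Illustrative sketch (not QEMU's or AdaMig's code): the convergence check an
# adaptive controller might apply per copy round. If the dirty-page rate keeps
# outpacing the achievable transfer rate for `patience` consecutive rounds,
# switch strategy (e.g. enable compression or fall back to post-copy).

def choose_strategy(round_stats, patience=3):
    """round_stats: list of (dirtied_mb_per_s, transferred_mb_per_s) per round."""
    recent = round_stats[-patience:]
    if len(round_stats) >= patience and all(d >= t for d, t in recent):
        return "switch"      # pre-copy will not converge; change method
    return "continue"        # keep iterating pre-copy


if __name__ == "__main__":
    history = [(150, 400), (400, 390), (420, 380), (450, 370)]
    print(choose_strategy(history))   # -> switch
```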

Proceedings ArticleDOI
09 Jan 2021
TL;DR: In this article, the authors evaluate the performance of live migration with Docker and KVM and show that KVM outperforms Docker in most scenarios, with some critical exceptions where only Docker manages to perform migration.
Abstract: Live migration is a technology that seamlessly relocates a virtualized service between physical hosts, which allows services to rapidly adapt to environmental changes. Despite a large amount of research, there is still a lack of understanding of its performance. Towards a better understanding of live migration, we build a testbed and use it to migrate a computation-intensive application with Docker and KVM. We evaluate the service downtime, migration time, and network usage under different conditions. The results show that KVM outperforms Docker in most of the scenarios, with some critical exceptions where only Docker manages to perform migration.

Proceedings ArticleDOI
01 Jul 2021
TL;DR: In this article, a live migration algorithm, Live Migration Annealing (LMA) Virtual Machine Migration that makes use of an evaluation function to perform analysis on the time series data collected over the iterations made during the live migration period is proposed.
Abstract: The virtual machine migration technique is used in cloud computing to increase the reliability and scalability of cloud computing systems. It helps service providers achieve resource efficiency and quality of service. During live migration, the underlying virtual machine continues to work until all or part of its data is migrated from source to destination. Live migration of a Virtual Machine (VM) is an important technique that enables resource management, server maintenance, and load balancing in cloud data centers, but it degrades performance at the source and destination physical machines. Different live migration techniques have been proposed, each exhibiting different properties such as completion time, amount of data transferred, VM downtime, and degradation of VM performance. In this paper, a live migration algorithm, Live Migration Annealing (LMA) Virtual Machine Migration, is proposed that uses an evaluation function to analyze the time-series data collected over the iterations made during the live migration period. It embodies the concept of exploration and exploitation of knowledge and space. It also takes ideas from iterative depth-first search to perform the iterations and from simulated annealing to find the pages eligible for live migration. All eligible pages undergo a selection phase, which incorporates the idea of Second Chance, before finally being sent to the destination virtual machine. Live migration is performed only if it does not come at the expense of downtime. An overall decrease in the number of iterations and in downtime, with the least possible live migration time, is achieved through the proposed algorithm.
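
As a rough illustration of using a simulated-annealing acceptance rule to pick pages for a copy round, a short Python sketch follows; the cost model, the re-dirty probabilities, and the cooling schedule are assumptions for the example, not the LMA algorithm itself.

```python
# Illustrative sketch (assumed formulation, not the paper's LMA code): a
# simulated-annealing acceptance rule over candidate page sets. The cost trades
# off the expected waste of sending pages likely to be re-dirtied against the
# cost of leaving pages behind; probabilities and weights are made up.

import math
import random

random.seed(42)

# hypothetical per-page probability of being re-dirtied before the next round
dirty_prob = {page: random.random() for page in range(32)}
LEFT_BEHIND_WEIGHT = 0.5   # cost of postponing a page to a later round


def cost(page_set):
    """Expected wasted transfers plus a penalty for pages not sent now."""
    wasted = sum(dirty_prob[p] for p in page_set)
    postponed = LEFT_BEHIND_WEIGHT * (len(dirty_prob) - len(page_set))
    return wasted + postponed


def anneal(pages, steps=500, t0=1.0, cooling=0.99):
    current = set(random.sample(pages, len(pages) // 2))
    temperature = t0
    for _ in range(steps):
        candidate = set(current)
        candidate.symmetric_difference_update({random.choice(pages)})  # flip one page
        delta = cost(candidate) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            current = candidate                 # accept better or, sometimes, worse
        temperature *= cooling
    return current


if __name__ == "__main__":
    eligible = anneal(list(dirty_prob))
    print(f"{len(eligible)} of {len(dirty_prob)} pages selected for this round")
```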