
Showing papers on "Temporal isolation among virtual machines published in 2022"


Journal ArticleDOI
TL;DR: In this article, the authors propose an efficient Azure cloud framework for utilizing physical server resources at remote VM servers. It is implemented in two phases: first by integrating physical servers into virtual ones by creating virtual machines, and then by integrating the virtual servers into cloud service providers in a cost-effective manner.
Abstract: Virtual machines (VMs) are preferred by the majority of organizations due to their high performance. VMs reduce overhead by allowing multiple systems to run from the same console at the same time. A physical server is a bare-metal system whose hardware is controlled by the host operating system; it runs a single instance of an OS and application. A virtual server, or virtual machine, encapsulates the underlying hardware and networking resources. With a physical server alone, it is difficult to migrate tasks from one platform to another or to a datacentre, and centralized security is difficult to set up. With a hypervisor, by contrast, virtual machines can be deployed automatically. Virtualization costs increase, while hardware and infrastructure space costs decrease. We propose an efficient Azure cloud framework for the utilization of physical server resources at remote VM servers. The proposed framework is implemented in two phases: first by integrating physical servers into virtual ones by creating virtual machines, and then by integrating the virtual servers into cloud service providers in a cost-effective manner. We create a virtual network in the Azure datacenter using the local host physical server to set up the various virtual machines. Two virtual machine instances, VM1 and VM2, are created using Microsoft Hyper-V on Windows Server 2016. The desktop application is deployed and VM performance is monitored using a PowerShell script. Tableau is used to evaluate the physical server functionality of the worksheet for the deployed application. The proposed Physical-to-Virtual-to-Cloud (P2V2C) model was tested, and the performance results show that P2V2C migration is more successful in dynamic provisioning than direct migration to cloud platform infrastructure. The migration process in P2V2C was carried out in a secure way.

9 citations


Journal ArticleDOI
TL;DR: A virtual machine migration process is proposed that minimizes migrations, thereby reducing execution time and improving data center efficiency.
Abstract: Virtualization is an effective technique for managing data center resources. Virtual machine migration is the mechanism by which a data center operator can replace virtual machines, improve resource provisioning, and carry out other maintenance functions of the data center. Despite this, designing a virtual machine migration scheme remains a major challenge in improving data center efficiency. This paper proposes a virtual machine migration process that minimizes migrations, which in turn reduces execution time.
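The migration-minimization idea can be sketched in a few lines (the function and thresholds below are illustrative assumptions, not the paper's actual algorithm): migrate VMs off a host only while it is overloaded, moving the largest VMs first so that as few migrations as possible are needed.

```python
# Minimal sketch of migration minimization (illustrative only; the paper's
# actual scheme is not specified in this abstract). A VM is migrated off an
# overloaded host only until the host drops below the CPU threshold.

def plan_migrations(hosts, threshold=0.8):
    """hosts: {host: [(vm, cpu_share), ...]}; returns [(vm, src), ...] to migrate."""
    migrations = []
    for host, vms in hosts.items():
        load = sum(share for _, share in vms)
        # Move the largest VMs first so fewer migrations are needed.
        for vm, share in sorted(vms, key=lambda v: -v[1]):
            if load <= threshold:
                break
            migrations.append((vm, host))
            load -= share
    return migrations
```

With one host at 110% load, a single migration of the largest VM already brings it under the threshold, so only one move is planned.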

6 citations


Journal ArticleDOI
TL;DR: In this paper, the authors argue that traditional real-time locking solutions are unsuitable for mixed-criticality applications within the automotive open system architecture (AUTOSAR), and they adopt a server task, named the resource server, for spatial isolation within AUTOSAR limitations.
Abstract: Temporal isolation without consideration of spatial isolation has been attained for mixed-criticality systems, while spatial isolation is urgently required in the automotive industry. Moreover, tasks with different criticality levels sharing the same resources are a common requirement for safety-critical automotive applications. Such tasks are more challenging to spatially isolate because they share context to access the same resources. Nevertheless, safety certification cannot be issued without addressing spatial isolation. This paper argues that traditional real-time locking solutions are unsuitable for mixed-criticality applications within the automotive open system architecture (AUTOSAR). We adopted a server task, named the resource server, for spatial isolation within AUTOSAR limitations. We formalized a software component model to reduce the design space and proposed mapping algorithms. Properties of resource servers within AUTOSAR were formally analyzed for blocking delays, task priority assignment, and utilization. Case studies in the powertrain domain of an electric vehicle were carried out to assess the proposed solutions.
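As a generic illustration of the kind of blocking-delay and utilization analysis mentioned above (the paper's own formulas are not given in this abstract), a classic rate-monotonic utilization test extended with a per-task blocking term can be sketched:

```python
# Generic rate-monotonic schedulability test with blocking (Liu & Layland
# utilization bound plus a blocking term B_i/T_i per priority level), shown
# only to illustrate the style of analysis; these are not the paper's formulas.

def rm_schedulable_with_blocking(tasks):
    """tasks: list of (C, T, B) = (WCET, period, worst-case blocking),
    assumed sorted by priority (shortest period first)."""
    for i, (_, T_i, B_i) in enumerate(tasks, start=1):
        util = sum(C / T for C, T, _ in tasks[:i]) + B_i / T_i
        bound = i * (2 ** (1 / i) - 1)   # Liu & Layland bound for i tasks
        if util > bound:
            return False
    return True
```

A resource server adds blocking to the tasks that wait on it, which is exactly where the `B_i` term enters such a test.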

3 citations


Journal ArticleDOI
TL;DR: Valve, as mentioned in this paper, is a general and flexible system that reduces kernel resource contention by limiting system call usage, which can effectively enhance kernel resource isolation for containers with negligible performance overhead.

1 citation


Journal ArticleDOI
TL;DR: This work proposes a new mathematical model, the technical debt-aware computing model for virtual machine migration (TD4VM), which promotes a holistic approach to dynamic virtual machine adaptation for cloud service providers and addresses existing issues regarding logical aspects of virtual machine adaptation in a highly dynamic cloud environment.
Abstract: In the cloud, optimal CPU and memory utilization can lead to low energy consumption, which is an important aspect of green computing. However, constantly changing workloads may cause resource over- or underutilization. The former violates the service level agreement's quality-of-service constraints; the latter means that as workload decreases, virtual machine resource utilization decreases. Both introduce difficult decision-making when dynamically adapting (e.g., migrating) a virtual machine in order to maximize its resource utilization over time. To address these challenges, we propose a new mathematical model called the technical debt-aware computing model for virtual machine migration (TD4VM). The model promotes a holistic approach to dynamic virtual machine adaptation for cloud service providers and addresses existing issues regarding logical aspects of virtual machine adaptation in a highly dynamic cloud environment; it includes a measurement mechanism and guidelines for estimating future debt and utility. The technical debt-aware model makes decisions based on VM operating costs, quality, minimizing SLA violations, and incurred technical debt. This approach connects virtual machine migration decisions that affect overall utility over time: our method can determine, based on its technical debt, whether a virtual machine should be moved when it is over- or underutilized. Experimental results on a dataset obtained from Materna-trace-1 demonstrate that the proposed approach outperforms other state-of-the-art methods on a variety of performance metrics. A numerical comparison shows that TD4VM outperforms the other approaches, with VM resource economies of 171.84%, 91.33%, 97.85%, and 93.89% for TD4VM, LRMMT, IQRMC, and IQRMMT, respectively. Additionally, we quantify the debt amassed using TD4VM and the state-of-the-art techniques. When compared to LRMMT, IQRMC, and IQRMMT, which cost (in $) 0.77, 0.73, and 0.76, respectively, TD4VM accumulates the minimum debt of 0.17.
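The core decision rule, migrating a VM once its accumulated technical debt outweighs the cost of adaptation, might be sketched as follows; the debt and cost functions here are hypothetical placeholders, not TD4VM's actual model:

```python
# Hypothetical sketch of a technical-debt-aware migration decision. Debt is
# modelled as the cost of running away from a target utilization over time;
# the real TD4VM cost/utility model is considerably more elaborate.

def accumulated_debt(utilizations, target=0.7, cost_per_unit=1.0):
    """Debt grows each interval the VM runs away from its target utilization."""
    return sum(abs(u - target) * cost_per_unit for u in utilizations)

def should_migrate(utilizations, migration_cost=0.5):
    """Migrate only once the accumulated debt exceeds the one-off migration cost."""
    return accumulated_debt(utilizations) > migration_cost
```

The point of the debt framing is that a briefly underutilized VM is left alone, while one that stays off-target long enough "repays" the migration cost and gets moved.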

1 citation


Journal ArticleDOI
TL;DR: Niyama is a resource isolation approach that uses a modified version of deadline scheduling to protect latency-sensitive tasks from CPU bandwidth contention; core reservation and oversubscription in the inter-node scheduler offset the resulting drop in CPU utilization.
Abstract: Cloud providers place tasks from multiple applications on the same resource pool to improve the resource utilization of the infrastructure. The consequent resource contention has an undesirable effect on latency-sensitive tasks. In this article, we present Niyama, a resource isolation approach that uses a modified version of deadline scheduling to protect latency-sensitive tasks from CPU bandwidth contention. Conventionally, deadline scheduling has been used to schedule real-time tasks with well-defined deadlines; therefore, it cannot be used directly when deadlines are unspecified. In Niyama, we estimate deadlines in intervals and secure the bandwidth required for each interval, thereby ensuring optimal job response times. We compare our approach with cgroups, Linux's default resource isolation mechanism used in containers today. Our experiments show that Niyama reduces the average delay in tasks by 3×–20× compared to cgroups. Since Linux's deadline scheduling policy is work-conserving in nature, there is a small drop in server-level CPU utilization when Niyama is used naively. We demonstrate how core reservation and oversubscription in the inter-node scheduler can offset this drop; our experiments show a 1.3×–2.24× decrease in job response time delay over cgroups while achieving high CPU utilization.
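Niyama's key step, estimating deadlines per interval and securing the corresponding CPU bandwidth, is reminiscent of a runtime/period reservation as used by Linux's SCHED_DEADLINE. A rough sketch under that assumption (function and parameter names are illustrative, not Niyama's API):

```python
# Illustrative sketch of interval-based bandwidth estimation, loosely in the
# spirit of Niyama (the actual mechanism differs): from past CPU usage per
# interval, derive a (runtime, period) pair such as SCHED_DEADLINE accepts.

def reservation_for_interval(cpu_samples, interval_us, headroom=1.2):
    """cpu_samples: observed CPU time (us) used in past intervals.
    Returns (runtime_us, period_us), with headroom for bursts and
    clamped so the reservation never exceeds the interval itself."""
    est = max(cpu_samples)                       # conservative estimate
    runtime = min(int(est * headroom), interval_us)
    return runtime, interval_us
```

A per-interval reservation like this is what lets a deadline scheduler guarantee the latency-sensitive task its bandwidth even under contention.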

1 citation


Journal ArticleDOI
TL;DR: In this article, two novel zero-day cross-VM network channel attacks are presented, along with a privilege escalation attack in a cross-VM cloud environment with the Xen hypervisor, where an adversary with limited privileges may execute Return-Oriented Programming (ROP), establish a connection with the root domain by exploiting the network channel, and acquire the tool stack (root domain) that it is not authorized to access directly.
Abstract: Cloud providers attempt to maintain the highest levels of isolation between Virtual Machines (VMs) and inter-user processes to keep co-located VMs and processes separate. This logical isolation creates an internal virtual network to separate VMs co-residing within a shared physical network. However, because co-residing VMs share the underlying VMM (Virtual Machine Monitor), virtual network, and hardware, they are susceptible to cross-VM attacks. It is possible for a malicious VM to potentially access or control other VMs through network connections, shared memory, other shared resources, or by gaining the privilege level of its non-root machine. This research presents two novel zero-day cross-VM network channel attacks. In the first attack, a malicious VM can redirect the network traffic of target VMs to a specific destination by impersonating the Virtual Network Interface Controller (VNIC). The malicious VM can extract the decrypted information from target VMs by using open-source decryption tools such as Aircrack. The second contribution of this research is a privilege escalation attack in a cross-VM cloud environment with the Xen hypervisor. An adversary with limited privilege rights may execute Return-Oriented Programming (ROP), establish a connection with the root domain by exploiting the network channel, and acquire the tool stack (root domain) that it is not authorized to access directly. Countermeasures against these attacks are also presented.

1 citation


Book ChapterDOI
06 Jul 2022
TL;DR: In this paper, a modified version of the best-fit decreasing algorithm, applied to a virtual machine dynamic migration scheduling model, is used to identify the most appropriate migration target host for each VM.
Abstract: Data centres are networking platforms which execute virtual machine workloads in a dynamic manner. As users' requests are of enormous magnitude, physical machines become overloaded, resulting in quality-of-service degradation and SLA violations. This challenge can be negotiated by better virtual machine allocation: re-allocating a subset of active virtual machines to a suitable destination server through virtual machine migration. This improves resource utilization and energy efficiency while addressing the impending server overloading that would otherwise downgrade services. The twin factors of energy consumption and resource utilization can be addressed by combining them into a single objective function using a cost-function-based best-fit decreasing heuristic. This enhances the potential for aggressively migrating large-capacity applications such as image processing, speech recognition, and decision support systems, and facilitates seamless, transparent live virtual machine migration from one physical server to another while taking care of cloud environment resources. The most appropriate migration target host is identified by applying a modified version of the best-fit decreasing algorithm to the virtual machine dynamic migration scheduling model. The selection algorithm first segregates the hotspot hosts in the cloud platform. Subsequently, VM-related resource loads on hotspots are sorted in descending order, and the resource loads of non-hotspot hosts in ascending order. Finally, the non-hotspot host queue is traversed to identify the most appropriate host to serve as the migration target. Keywords: Data centre, Load balance, Virtual machine migration, Cloud computing
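The selection procedure described above, segregating hotspot hosts, sorting their VM loads in descending order and non-hotspot hosts in ascending order, then traversing the non-hotspot queue, can be sketched as follows (names and thresholds are illustrative, not the chapter's exact algorithm):

```python
def plan_bfd_migrations(hosts, capacity=1.0, hot=0.9):
    """hosts: {host: [(vm, load), ...]}. Returns [(vm, src, dst), ...] moving
    VMs from hotspot hosts (load > hot) to the least-loaded fitting host."""
    load = {h: sum(l for _, l in vms) for h, vms in hosts.items()}
    hotspots = [h for h in hosts if load[h] > hot]
    plan = []
    for src in hotspots:
        # VM loads on the hotspot in descending order (best-fit decreasing).
        for vm, l in sorted(hosts[src], key=lambda v: -v[1]):
            if load[src] <= hot:
                break
            # Traverse non-hotspot hosts in ascending load order.
            for dst in sorted((h for h in hosts if h not in hotspots),
                              key=lambda h: load[h]):
                if load[dst] + l <= capacity:
                    plan.append((vm, src, dst))
                    load[src] -= l
                    load[dst] += l
                    break
    return plan
```

Moving the largest VM off the hotspot first is what keeps the number of migrations small, the same property the chapter attributes to its cost-function-based heuristic.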

Proceedings ArticleDOI
07 Nov 2022
TL;DR: In this article, a QoS-aware power management controller for heterogeneous black-box workloads in public clouds is proposed, designed to work without offline profiling or prior knowledge about the black-box workloads.
Abstract: Energy consumption in cloud data centers has become an increasingly important contributor to greenhouse gas emissions and operating costs. To reduce energy-related costs and improve environmental sustainability, most modern data centers consolidate Virtual Machine (VM) workloads belonging to different application classes, some being latency-critical (LC) and others being more tolerant to performance changes, known as best-effort (BE). However, in public cloud scenarios, the real classes of applications are often opaque to data center operators. Heterogeneous applications from different cloud tenants are usually consolidated onto the same hosts to improve energy efficiency, but it is not trivial to guarantee decent performance isolation among colocated workloads. We tackle the above challenges by introducing Demeter, a QoS-aware power management controller for heterogeneous black-box workloads in public clouds. Demeter is designed to work without offline profiling or prior knowledge about black-box workloads. Through correlation analysis between network throughput and CPU resource utilization, Demeter automatically classifies black-box workloads as either LC or BE. By provisioning differentiated CPU management strategies (including dynamic core allocation and frequency scaling) to LC and BE workloads, Demeter achieves considerable power savings with minimal impact on the performance of all workloads. We discuss the design and implementation of Demeter in this work, and conduct extensive experimental evaluations to reveal its effectiveness. Our results show that Demeter not only meets the performance demand of all workloads, but also responds quickly to dynamic load changes in our cloud environment. In addition, Demeter saves an average of 10.6% power consumption compared with state-of-the-art mechanisms.

Proceedings ArticleDOI
20 Jul 2022
TL;DR: In this paper, the authors discuss the shortcomings of traditional network isolation methods and expound on the application of cloud desktop network isolation technology based on VMware technology in universities.
Abstract: Network security isolation technology is an important means of protecting the internal information security of enterprises. Generally, isolation is achieved through traditional network devices, such as firewalls and gatekeepers. However, their security rules are relatively rigid and cannot meet flexible and changeable business needs. In the approach described here, a double sandbox structure is created for each user, isolating users' virtual machines from one another and ensuring security. A virtual disk inside each virtual machine serves as a user storage sandbox, with disk reads and writes encrypted. The shortcomings of traditional network isolation methods are discussed, and the application of cloud desktop network isolation technology based on VMware technology in universities is expounded.

Proceedings ArticleDOI
14 Mar 2022
TL;DR: Shyper, as proposed in this paper, is an efficient and real-time embedded hypervisor which supports fine-grained hierarchical resource isolation strategies and introduces several novel VM-Exit-less real-time virtualization techniques, enabling users to strike a trade-off between a VM's resource utilization and real-time performance.
Abstract: With the development of the IoT, modern embedded systems are evolving into general-purpose and mixed-criticality systems, where virtualization has become the key to guaranteeing isolation between tasks with different criticality. Traditional server-based hypervisors (KVM and Xen) are difficult to use in embedded scenarios for performance and security reasons. As a result, several new hypervisors (Jailhouse and Bao) have been proposed in recent years, which effectively solve the problems above through static partitioning. However, this inflexible resource isolation strategy assumes no resource sharing across guests, which greatly reduces resource utilization and VM scalability and prevents these hypervisors from simultaneously fulfilling the differentiated demands of VMs conducting different tasks. This paper proposes an efficient and real-time embedded hypervisor, “Shyper”, aimed at providing differentiated services for VMs with different criticality. To achieve this, Shyper supports fine-grained hierarchical resource isolation strategies and introduces several novel “VM-Exit-less” real-time virtualization techniques, which grant users the flexibility to strike a trade-off between a VM's resource utilization and real-time performance. In this paper, we also compare Shyper with other mainstream hypervisors (KVM, Jailhouse, etc.) to evaluate its feasibility and effectiveness.


Proceedings ArticleDOI
19 Oct 2022
TL;DR: In this paper, the authors analyze and demonstrate a vulnerability in current GPU pass-through devices, which can breach user data in the GPU's VRAM and compromise the guest PCI configuration.
Abstract: Recently, the use of GPU-intensive applications such as machine learning has been increasing in cloud virtual machine environments. To mitigate the high virtualization overhead of GPU device emulation, PCI pass-through technologies were introduced, which allow direct access to the physical device from the guest OS. QEMU, a virtual machine monitor, implements multiple PCI pass-through methods, and VFIO is one of the popular options for virtual GPU devices. Although VFIO with PCI pass-through virtualization provides efficient GPU performance on the guest OS, there is a concern about device isolation across the virtual machine boundary. If an attacker in the host OS can access the GPU used by the virtual machine, the attacker can compromise the integrity and confidentiality of the virtual machine. In this paper, we analyze and demonstrate a vulnerability in current GPU pass-through devices that can breach user data in the GPU's VRAM and compromise the guest PCI configuration.