
Showing papers on "Temporal isolation among virtual machines published in 2015"


Journal ArticleDOI
TL;DR: A hybrid genetic algorithm is presented for the energy-efficient virtual machine placement problem that considers the energy consumption in both physical machines and the communication network in a data center.
Abstract: Server consolidation using virtualization technology has become an important technique for improving the energy efficiency of data centers, and virtual machine placement is the key to server consolidation. In the past few years, many approaches to virtual machine placement have been proposed. However, existing approaches consider only the energy consumption of the physical machines and ignore the energy consumption of the communication network in a data center, which is not trivial and therefore should also be considered in virtual machine placement. In our preliminary research, we proposed a genetic algorithm for a new virtual machine placement problem that considers the energy consumption in both the physical machines and the communication network of a data center. Aiming to improve the performance and efficiency of that genetic algorithm, this paper presents a hybrid genetic algorithm for the energy-efficient virtual machine placement problem. Experimental results show that the hybrid genetic algorithm significantly outperforms the original genetic algorithm and that it is scalable.

162 citations
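As a rough illustration of the kind of objective such a placement algorithm optimizes, the sketch below scores a candidate placement on both physical-machine energy (a simple linear power model) and network energy (traffic crossing hosts). The function names, power model, and cost constants are assumptions for illustration, not the paper's actual formulation.

```python
# Hypothetical fitness function for a GA-based VM placement that accounts
# for both physical-machine energy and communication-network energy.

def pm_energy(placement, vm_load, idle_w=100.0, peak_w=250.0):
    """Energy of active physical machines under an assumed linear power model."""
    util = {}
    for vm, pm in placement.items():
        util[pm] = util.get(pm, 0.0) + vm_load[vm]
    return sum(idle_w + (peak_w - idle_w) * min(u, 1.0) for u in util.values())

def net_energy(placement, traffic, hop_cost=5.0):
    """Network energy: traffic between VMs on different hosts costs per unit."""
    total = 0.0
    for (a, b), rate in traffic.items():
        if placement[a] != placement[b]:
            total += hop_cost * rate
    return total

def fitness(placement, vm_load, traffic):
    """Lower is better: total energy a GA would minimize over placements."""
    return pm_energy(placement, vm_load) + net_energy(placement, traffic)

placement = {"vm1": "pm1", "vm2": "pm1", "vm3": "pm2"}
vm_load = {"vm1": 0.3, "vm2": 0.4, "vm3": 0.5}
traffic = {("vm1", "vm2"): 10.0, ("vm2", "vm3"): 2.0}
print(fitness(placement, vm_load, traffic))
```

Co-locating heavily communicating VMs (vm1 and vm2 above) zeroes out their network term, which is exactly the trade-off a network-aware placement can exploit and a PM-only objective cannot.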


Patent
15 Jun 2015
TL;DR: In this paper, a virtualized malware detection system is integrated with a virtual machine host including a plurality of virtual machines and a security virtual machine, which is configured to perform a dynamic analysis of an object and monitor for the occurrence of a triggering event.
Abstract: According to one embodiment, a virtualized malware detection system is integrated with a virtual machine host including a plurality of virtual machines and a security virtual machine. Logic within the virtual machines is configured to perform a dynamic analysis of an object and monitor for the occurrence of a triggering event. Upon detection of a triggering event within a virtual machine, the logic within the virtual machine provides the security virtual machine with information associated with the triggering event for further analysis. Based on the further analysis, the object may then be classified as “non-malicious” or “malicious.”

91 citations


Patent
30 Sep 2015
TL;DR: In this paper, a hierarchy of virtual machine templates is created by instantiating a parent virtual machine template, the parent VM having a guest operating system and a container, and an application to be run in a container is determined, and, in response, a parent VM template is forked to create a child VM template, where the child VM includes a replica of the container.
Abstract: A virtualized computing system supports the execution of a plurality of virtual machines, where each virtual machine supports the execution of applications therein. Each application executes within a container that isolates the application executing therein from other processes executing on the computing system. A hierarchy of virtual machine templates is created by instantiating a parent virtual machine template, the parent virtual machine template having a guest operating system and a container. An application to be run in a container is determined, and, in response, the parent virtual machine template is forked to create a child virtual machine template, where the child virtual machine template includes a replica of the container, and where the guest operating system of the parent virtual machine template overlaps in memory with a guest operating system of the child virtual machine template. The application is then installed in the replica of the container.

78 citations


Proceedings ArticleDOI
04 Mar 2015
TL;DR: This paper analyzes the performance interference suffered by disk-intensive workloads running alongside very noisy containers (with different hardware components stressed) and exposes a workload-balanced scenario in which performance does not suffer any interference.
Abstract: The popularity of Cloud computing due to an increasing number of customers has led Cloud providers to adopt resource-sharing solutions to meet growing demand for infrastructure resources. As resource sharing and consolidation have become well-established solutions in Cloud computing, the ability of the underlying virtualization systems to prevent performance interference between customers must also be understood. Virtualization systems based on containers, such as LXC, are the basis of the next generation of Cloud computing and have become the most popular solution for PaaS/IaaS Cloud platforms with the rise of Docker, an open platform for developers and sysadmins to build, ship, and run distributed applications. Such platforms have attracted much attention globally, since they leverage container-based virtualization to offer high scalability with low performance overhead. However, performance may suffer when customers' workloads are consolidated onto the same hardware and the isolation layer does not properly isolate the shared resources. Performance isolation is an inherent concern of such systems by the nature of their design and is still an open and largely unexplored research topic; its consequences may influence adoption on shared Cloud computing platforms, where Quality of Service is a crucial factor that cannot be disregarded. In this paper we analyze the performance interference suffered by disk-intensive workloads running alongside very noisy containers (with different hardware components stressed). Our results show workload combinations whose performance degradation reaches 38%, but we also expose a workload-balanced scenario in which performance does not suffer any interference.

66 citations


Proceedings ArticleDOI
01 Nov 2015
TL;DR: The extensive system and Input/Output (I/O) performance measurements included in this paper show a slightly better performance for containers in CPU bound workloads and request/response networking; conversely, thanks to their caching mechanisms, hypervisors perform better in most disk I/O operations and TCP streaming benchmark.
Abstract: Virtualization is a mature technology which has shown to provide computing resource and cost optimization while enabling consolidation, isolation and hardware abstraction through the concept of virtual machine. Recently, by sharing the operating system resources and simplifying the deployment of applications, containers are getting a more and more popular alternative to virtualization for specific use cases. As a result, today these two technologies are competing to provide virtual instances for cloud computing, Network Functions Virtualization (NFV), High Performance Computing (HPC), avionic and automotive platforms. In this paper, the performance of the most important open source hypervisor (KVM and Xen) and container (Docker) solutions are compared on the ARM architecture, which is rapidly emerging in the server world. The extensive system and Input/Output (I/O) performance measurements included in this paper show a slightly better performance for containers in CPU bound workloads and request/response networking; conversely, thanks to their caching mechanisms, hypervisors perform better in most disk I/O operations and TCP streaming benchmark.

62 citations


Patent
Atsushi Iwata1, Akio Iijima1
10 Mar 2015
TL;DR: In this paper, the authors propose a server whose network interface transmits packets carrying a virtual machine identifier in a VLAN-Tag field of the Layer 2 header, encapsulated with a virtual network identifier representing the virtual network to which the virtual machine belongs.
Abstract: A server, includes a virtual machine identifier assigning section to assign an identifier of a virtual machine operating on the server; and a network interface to transmit a packet including a Layer 2 header information which includes the identifier of the virtual machine and a first packet field for a VLAN-Tag, wherein the network interface transmits the packet to a packet encapsulate section which encapsulates a second packet field including the Layer 2 header information with a virtual network identifier representing a virtual network to which the virtual machine belongs.

57 citations


Patent
Andrew Babakian1
30 Jun 2015
TL;DR: In this article, the authors propose a method of performing ingress traffic optimization for active/active data centers by creating site-specific grouping constructs for virtual machines that run applications that are advertised to the external networks.
Abstract: A method of performing ingress traffic optimization for active/active data centers. The method creates site-specific grouping constructs for virtual machines that run applications advertised to external networks. The site-specific grouping constructs provide an abstraction to decouple virtual machines from traditional networks for common ingress network policies. Each site-specific container includes a list of the virtual machines currently located at the site as well as a unique identifier of the site. Each virtual machine in a container is identified through the abstraction of a metadata tag, logical data center objects, or the virtual machine's unique name. The IP address of each virtual machine is retrieved from the guest operating system, and a network policy is generated to advertise the IP addresses of the virtual machines to the site's routing peer.

54 citations


Patent
26 Feb 2015
TL;DR: In this article, the authors present methods and systems for maintaining state in a virtual machine when disconnected from graphics hardware, such that the virtual machine is one of a plurality of virtual machines hosted by a hypervisor executing on a computing device.
Abstract: The present disclosure is directed towards methods and systems for maintaining state in a virtual machine when disconnected from graphics hardware. The virtual machine is one of a plurality of virtual machines hosted by a hypervisor executing on a computing device. A control virtual machine may be hosted by a hypervisor executing on a computing device. The control virtual machine may store state information of a graphics processing unit (GPU) of the computing device. The GPU may render an image from a first virtual machine. The control virtual machine may remove, from the first virtual machine, access to the GPU. The control virtual machine may redirect the first virtual machine to a GPU emulation program. The GPU emulation program may render the image from the first virtual machine using at least a portion of the stored state information.

51 citations


Proceedings ArticleDOI
15 Jun 2015
TL;DR: Pisces is presented, a system software architecture that enables the co-existence of multiple independent and fully isolated OS/Rs, or enclaves, that can be customized to address the disparate requirements of next generation HPC workloads.
Abstract: Performance isolation is emerging as a requirement for High Performance Computing (HPC) applications, particularly as HPC architectures turn to in situ data processing and application composition techniques to increase system throughput. These approaches require the co-location of disparate workloads on the same compute node, each with different resource and runtime requirements. In this paper we claim that these workloads cannot be effectively managed by a single Operating System/Runtime (OS/R). Therefore, we present Pisces, a system software architecture that enables the co-existence of multiple independent and fully isolated OS/Rs, or enclaves, that can be customized to address the disparate requirements of next generation HPC workloads. Each enclave consists of a specialized lightweight OS co-kernel and runtime, which is capable of independently managing partitions of dynamically assigned hardware resources. Contrary to other co-kernel approaches, in this work we consider performance isolation to be a primary requirement and present a novel co-kernel architecture to achieve this goal. We further present a set of design requirements necessary to ensure performance isolation, including: (1) elimination of cross OS dependencies, (2) internalized management of I/O, (3) limiting cross enclave communication to explicit shared memory channels, and (4) using virtualization techniques to provide missing OS features. The implementation of the Pisces co-kernel architecture is based on the Kitten Lightweight Kernel and Palacios Virtual Machine Monitor, two system software architectures designed specifically for HPC systems. Finally, we show that lightweight isolated co-kernels can provide better performance for HPC applications, and that isolated virtual machines are even capable of outperforming native environments in the presence of competing workloads.

49 citations


Patent
23 Mar 2015
TL;DR: In this paper, the authors describe a method for determining a shared threat potential for a virtual machine based, at least in part, on a degree of co-location the virtual machine has with a current virtual machine operating on a physical machine.
Abstract: Technologies for virtual machine placement within a data center are described herein. An example method may include determining a shared threat potential for a virtual machine based, at least in part, on a degree of co-location the virtual machine has with a current virtual machine operating on a physical machine, determining a workload threat potential for the virtual machine based, at least in part, on a level of advantage associated with placing the virtual machine on the physical machine, determining a threat potential for the virtual machine based, at least in part, on a combination of the shared threat potential and the workload threat potential, and placing the virtual machine on the physical machine based on the threat potential.

47 citations
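The placement decision described above can be pictured with a small sketch: combine a shared threat potential (co-location exposure) with a workload threat potential (advantage of the placement), then pick an acceptable host. The weighting, threshold, and all names below are assumptions for illustration, not the patent's actual method.

```python
# Hypothetical threat-aware placement: score each candidate host by a
# weighted combination of shared and workload threat potentials.

def threat_potential(shared, workload, alpha=0.5):
    """Weighted combination of the two threat components (alpha assumed)."""
    return alpha * shared + (1.0 - alpha) * workload

def place(vm, hosts, threshold=0.6):
    """Pick the host with the lowest threat potential, if below threshold."""
    scored = [(threat_potential(h["shared"], h["workload"]), h["name"])
              for h in hosts]
    score, name = min(scored)
    return name if score <= threshold else None

hosts = [{"name": "pm1", "shared": 0.8, "workload": 0.2},
         {"name": "pm2", "shared": 0.3, "workload": 0.4}]
print(place("vm1", hosts))  # pm2 scores 0.35 vs pm1's 0.5
```

Returning `None` when every host exceeds the threshold models refusing a placement that is too risky, rather than always choosing the least-bad option.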


Proceedings ArticleDOI
27 Jun 2015
TL;DR: A virtual machine consolidation algorithm with usage prediction (VMCUP) improves the energy efficiency of cloud data centers, reducing the total number of migrations and the power consumption of the servers while complying with the service level agreement.
Abstract: Virtual machine consolidation aims at reducing the number of active physical servers in a data center, with the goal of reducing the total power consumption. In this context, most existing solutions rely on aggressive virtual machine migration, resulting in unnecessary overhead and energy wastage. This article presents a virtual machine consolidation algorithm with usage prediction (VMCUP) for improving the energy efficiency of cloud data centers. Our algorithm is executed during the virtual machine consolidation process to estimate the short-term future CPU utilization based on the local history of the considered servers. The joint use of current and predicted CPU utilization metrics allows a reliable characterization of overloaded and underloaded servers, thereby reducing both the load and the power consumption after consolidation. We evaluate our proposed solution through simulations on real workloads from the PlanetLab and Google Cluster Data datasets. In comparison with the state of the art, the obtained results show that consolidation with usage prediction reduces the total number of migrations and the power consumption of the servers while complying with the service level agreement.
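The core idea — requiring both current and predicted utilization to exceed a threshold before declaring a server overloaded — can be sketched as below. The one-step linear predictor and all names are assumptions for illustration, not the paper's VMCUP estimator.

```python
# Illustrative overload test combining current and predicted CPU utilization,
# so that transient spikes do not trigger unnecessary migrations.

def predict_next(history):
    """One-step linear extrapolation from the last two samples (assumed model),
    clamped to the valid utilization range [0, 1]."""
    if len(history) < 2:
        return history[-1]
    return max(0.0, min(1.0, history[-1] + (history[-1] - history[-2])))

def is_overloaded(history, threshold=0.8):
    """Overloaded only if the server is above threshold now AND is predicted
    to remain above it in the next interval."""
    return history[-1] > threshold and predict_next(history) > threshold

print(is_overloaded([0.5, 0.9]))   # rising spike, predicted to stay high
print(is_overloaded([0.9, 0.82]))  # above threshold but trending down
```

The second case is the interesting one: a purely threshold-based policy would migrate, while the prediction-aware check waits out the downward trend.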

Patent
12 Jan 2015
TL;DR: At a first physical computing machine executing a plurality of virtual machines and connected to a network, one or more virtual machine metrics are calculated for each virtual machine, and based on the physical machine metrics a determination is made that at least one of the plurality of virtual machines should be migrated to one of a plurality of other physical computing machines connected to the network.
Abstract: At a first physical computing machine executing a plurality of virtual machines and connected to a network, one or more virtual machine metrics for each virtual machine are calculated. Each virtual machine metric represents a workload of a resource of the first physical computing machine due to the execution of a corresponding virtual machine. Additionally, one or more corresponding physical machine metrics that represent a total workload of the corresponding resource of the first physical computing machine due to the execution of the plurality of virtual machines are also calculated. Based on the one or more physical machine metrics, a determination is made that at least one of the plurality of virtual machines should be migrated to one of a plurality of other physical computing machines connected to the network. A first virtual machine is selected for migration to a selected second physical computing machine.

Proceedings ArticleDOI
09 Nov 2015
TL;DR: This work formalizes the Virtual Network Function (VNF) placement problem for radio access networks as an integer linear programming problem and proposes a VNF placement heuristic, and presents a proof-of-concept implementation of an NFV management and orchestration framework for Enterprise WLANs.
Abstract: Network Function Virtualization (NFV) is emerging as one of the most innovative concepts in the networking landscape. By migrating network functions from dedicated middleboxes to general purpose computing platforms, NFV can effectively reduce the cost to deploy and to operate large networks. However, in order to achieve its full potential, NFV needs to encompass also the radio access network allowing Mobile Virtual Network Operators to deploy custom resource allocation solutions within their virtual radio nodes. Such requirement raises several challenges in terms of performance isolation and resource provisioning. In this work we formalize the Virtual Network Function (VNF) placement problem for radio access networks as an integer linear programming problem and we propose a VNF placement heuristic. Moreover, we also present a proof-of-concept implementation of an NFV management and orchestration framework for Enterprise WLANs. The proposed architecture builds upon a programmable network fabric where pure forwarding nodes are mixed with radio and packet processing nodes leveraging on general computing platforms.

Proceedings ArticleDOI
07 Jul 2015
TL;DR: Centaur is presented as a host-side SSD caching solution that uses cache sizing as a control knob to achieve storage performance goals; it implements dynamically partitioned per-VM caches with per-partition local replacement to provide both a lower cache miss rate and better performance isolation for VM workloads.
Abstract: Host-side SSD caches represent a powerful knob for improving and controlling storage performance and for improving performance isolation. We present Centaur, a host-side SSD caching solution that uses cache sizing as a control knob to achieve storage performance goals. Centaur implements dynamically partitioned per-VM caches with per-partition local replacement to provide lower cache miss rates, better performance isolation, and performance control for VM workloads. It uses SSD cache sizing as a universal knob for meeting a variety of workload-specific goals, including per-VM latency and IOPS reservations, proportional-share fairness, and aggregate optimizations such as minimizing the average latency across VMs. We implemented Centaur for the VMware ESX hypervisor. With Centaur, the time to simultaneously boot 28 virtual desktops improves by 42% relative to a non-caching system and by 18% relative to a unified caching system. Centaur also implements per-VM latency shares with less than 5% error when running microbenchmarks, and enforces latency and IOPS reservations on OLTP workloads with less than 10% error.
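The cache-sizing knob can be illustrated with a toy partitioning rule: honor each VM's reservation first, then split the remaining SSD capacity in proportion to shares. The allocation rule and all names are assumptions for illustration, not Centaur's actual policy.

```python
# Hypothetical per-VM SSD cache partitioning: reservations first, then
# proportional shares over the remainder.

def partition_cache(total_gb, vms):
    """vms: list of (name, share, reserved_gb).
    Returns each VM's cache partition size in GB."""
    reserved = sum(r for _, _, r in vms)
    remainder = max(0.0, total_gb - reserved)
    total_share = sum(s for _, s, _ in vms)
    return {name: r + remainder * s / total_share for name, s, r in vms}

alloc = partition_cache(100.0, [("vm1", 2, 10.0),
                                ("vm2", 1, 10.0),
                                ("vm3", 1, 20.0)])
print(alloc)
```

Resizing a partition then becomes the single control action: shrinking a VM's slice trades its hit rate for another VM's, which is what lets one knob serve both isolation and aggregate-latency goals.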

Proceedings ArticleDOI
01 Jun 2015
TL;DR: A prototype for virtual machine scheduling in OpenStack, a widely-used open-source cloud IaaS software, is implemented and its performance overhead, resource requirements to satisfy conflicts, and resource utilization are evaluated.
Abstract: A major concern in the adoption of cloud infrastructure-as-a-service (IaaS) arises from multi-tenancy, where multiple tenants share the underlying physical infrastructure operated by a cloud service provider. A tenant could be an enterprise in the context of a public cloud or a department within an enterprise in the context of a private cloud. Enabled by virtualization technology, the service provider is able to minimize cost by providing virtualized hardware resources such as virtual machines, virtual storage and virtual networks, as a service to multiple tenants where, for instance, a tenant's virtual machine may be hosted in the same physical server as that of many other tenants. It is well-known that separation of execution environment provided by the hypervisors that enable virtualization technology has many limitations. In addition to inadvertent misconfigurations, a number of attacks have been demonstrated that allow unauthorized information flow between virtual machines hosted by a hypervisor on a given physical server. In this paper, we present attribute-based constraints specification and enforcement as a mechanism to mitigate such multi-tenancy risks that arise in cloud IaaS. We represent relevant properties of virtual resources (e.g., virtual machines, virtual networks, etc.) as their attributes. Conflicting attribute values are specified by the tenant or by the cloud IaaS system as appropriate. The goal is to schedule virtual resources on physical resources in a conflict-free manner. The general problem is shown to be NP-complete. We explore practical conflict specifications that can be efficiently enforced. We have implemented a prototype for virtual machine scheduling in OpenStack, a widely-used open-source cloud IaaS software, and evaluated its performance overhead, resource requirements to satisfy conflicts, and resource utilization.

Proceedings ArticleDOI
29 Oct 2015
TL;DR: A workload-aware budget compensation scheduling algorithm for the device-level request scheduler that can guarantee the performance isolation when multiple VMs share an NVMe SSD with different workloads is proposed.
Abstract: Recently, solid state drives (SSDs) are replacing hard disk drives (HDDs) in datacenter storage systems in order to reduce power consumption and improve I/O performance. Additionally, in order to mitigate the performance bottleneck at the I/O interface between host and SSD, the PCIe-leveraging NVMe SSD is emerging for datacenter SSDs. The NVMe interface supports an I/O virtualization mechanism called single root I/O virtualization (SR-IOV), a device self-virtualization technique for supporting direct paths from virtual machines (VMs) to I/O devices. Multiple virtual machines can share an SR-IOV-supporting physical device without the intervention of the virtual machine monitor. An SR-IOV-supporting SSD should provide a device-level scheduler that can schedule the requests from multiple VMs considering performance isolation and fairness. In this paper, we propose a workload-aware budget compensation scheduling algorithm for the device-level request scheduler. To guarantee performance isolation, the device-level scheduler estimates each virtual machine's contribution to the garbage collection (GC) cost in the SSD device. Based on the estimated GC contributions, the budget of each VM is compensated for performance isolation. We evaluated the effects of the proposed technique with an SSD simulator. The experiments showed that the scheduler can guarantee performance isolation when multiple VMs share an NVMe SSD with different workloads.
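The budget-compensation idea can be sketched with a toy rule: VMs that cause less garbage collection than average gain I/O budget, and GC-heavy VMs lose it, so each VM pays for the overhead it induces. The formula and names are assumptions for illustration, not the paper's scheduler.

```python
# Illustrative budget compensation based on per-VM GC cost estimates.

def compensated_budgets(base_budget, gc_contrib):
    """gc_contrib: per-VM estimated GC cost in the same units as the budget.
    VMs below the average GC cost gain budget; VMs above it lose budget."""
    avg = sum(gc_contrib.values()) / len(gc_contrib)
    return {vm: base_budget + (avg - cost) for vm, cost in gc_contrib.items()}

budgets = compensated_budgets(100.0, {"vmA": 10.0, "vmB": 30.0})
print(budgets)  # vmA gains exactly what vmB loses
```

Because the compensation is zero-sum around the average, total device bandwidth stays fully allocated while the split shifts toward the less disruptive workload.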

Patent
20 Feb 2015
TL;DR: In this paper, a data management system may manage the extraction and storage of virtual machine snapshots, provide near instantaneous restoration of a virtual machine or one or more files located on the virtual machine, and enable secondary workloads to directly use the data management systems as a primary storage target to read or modify past versions of data.
Abstract: Methods and systems for managing, storing, and serving data within a virtualized environment are described. In some embodiments, a data management system may manage the extraction and storage of virtual machine snapshots, provide near instantaneous restoration of a virtual machine or one or more files located on the virtual machine, and enable secondary workloads to directly use the data management system as a primary storage target to read or modify past versions of data. The data management system may allow a virtual machine snapshot of a virtual machine stored within the system to be directly mounted to enable substantially instantaneous virtual machine recovery of the virtual machine.

Patent
Feng Wang1
01 Jul 2015
TL;DR: In this article, the authors present a resource management method for a virtual machine system, where the method includes: obtaining, by a virtual resource management platform, a QoS constraint parameter of a VM cluster and a current operating status statistical indicator of the VM cluster, and adjusting the physical resource scheduling policy of a physical device platform or performing physical resource scheduling on the physical device platform.
Abstract: An embodiment of the present invention provides a resource management method for a virtual machine system, where the method includes: obtaining, by a virtual resource management platform, a QoS constraint parameter of a virtual machine cluster and a current operating status statistical indicator of the virtual machine cluster, and, according to the QoS constraint parameter and the current operating status statistical indicator of the virtual machine cluster, adjusting the physical resource scheduling policy of a physical device platform or performing physical resource scheduling on the physical device platform. The method may ensure QoS of a cloud application.

Patent
David Alan Hepkin1
25 Mar 2015
TL;DR: In this paper, the instruction set architecture of a physical processor includes an instruction that, when invoked, indicates that a virtual processor implemented using the physical processor should switch directly from a first VM context to a second VM context.
Abstract: In a virtual computing environment, a system configured to switch between isolated virtual contexts. A system includes a physical processor. The physical processor includes an instruction set architecture. The instruction set architecture includes an instruction included in the instruction set architecture for the physical processor that when invoked indicates that a virtual processor implemented using the physical processor should switch directly from a first virtual machine context to a second virtual machine context. The first and second virtual machine contexts are isolated from each other.

Patent
Tal Zamir1
26 Jun 2015
TL;DR: In this article, the authors describe a technique for creating a virtual machine clone of a physical host computing device by attaching a virtual disk to the virtual machine and booting the guest operating system from the master boot record and the snapshot.
Abstract: Techniques are described for creating a virtual machine clone of a physical host computing device. A hosted hypervisor running within a host operating system on the physical computing device receives a request to boot a virtual machine clone of the device. In response to the request, the hosted hypervisor synthesizes a virtual disk that is comprised of a master boot record of the host computing device, a read-only snapshot obtained from a volume snapshot service of the host operating system and a delta virtual disk for recording changes. The hosted hypervisor then launches the virtual machine clone by attaching the synthesized virtual disk to the virtual machine clone and booting the guest operating system from the master boot record and the snapshot. Any changes made during the use of the virtual machine clone can be automatically propagated back and applied to the physical host device.

Patent
07 Aug 2015
TL;DR: In this article, a network data transmission analysis system can use contextual information in the execution of VM instances to isolate and migrate virtual machine instances onto physical computing devices, such as servers.
Abstract: Systems and methods for the management of virtual machine instances are provided. A network data transmission analysis system can use contextual information in the execution of virtual machine instances to isolate and migrate virtual machine instances onto physical computing devices. The contextual information may include information obtained by observing the execution of virtual machine instances and information obtained from requests submitted by users, such as system administrators. Still further, the network data transmission analysis system can also include information collection and retention for identified virtual machine instances.

Proceedings ArticleDOI
28 Dec 2015
TL;DR: A new Prime and Probe cache side-channel attack that can prime physical addresses; implemented on a server machine comparable to cloud environment servers, the attack is shown to need less effort and time than other types and to be easy to launch.
Abstract: Cloud computing is considered one of the most dominant paradigms in the Information Technology (IT) industry nowadays. It supports multi-tenancy to fulfil future increasing demands for accessing and using resources provisioned over the Internet. However, multi-tenancy in cloud computing has unique vulnerabilities such as clients' co-residence and virtual machine physical co-residency. Physical co-residency of virtual machines can give attackers the ability to interfere with another virtual machine running on the same physical machine due to insufficient logical isolation. In the worst scenario, attackers can exfiltrate sensitive information of victims on the same physical machine by using hardware side-channels. There are various types of side-channel attacks, classified according to the hardware medium they target and exploit, for instance, cache side-channel attacks. CPU caches are among the hardware devices most targeted by adversaries because they have high-rate interactions and sharing between processes. This paper presents a new Prime and Probe cache side-channel attack, which can prime physical addresses. These addresses are translated from virtual addresses used by a virtual machine. The time to access these addresses is then measured, and it varies according to where the data is located: if the data is in the CPU cache, the access time is less than for main memory. The attack was implemented on a server machine comparable to cloud environment servers. The results show that the attack needs less effort and time than other types and is easy to launch.
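The probe step's timing decision can be sketched as a simple threshold classification: an access far slower than the cache-hit latency indicates the primed line was evicted, i.e. the victim touched that cache set. The threshold and cycle counts below are assumptions; in a real attack they are calibrated per machine and measured with a cycle counter such as `rdtsc`.

```python
# Conceptual sketch of the probe-phase classification in a Prime and Probe
# attack: slow accesses reveal cache lines evicted by the victim.

CACHE_HIT_THRESHOLD_CYCLES = 80  # assumed; calibrated per machine in practice

def classify_probe(access_cycles):
    """Map measured probe latencies (cycles) to hit/miss labels.
    A miss means the primed line was evicted, signalling victim activity."""
    return ["miss" if t > CACHE_HIT_THRESHOLD_CYCLES else "hit"
            for t in access_cycles]

# e.g. latencies measured around each load of the primed eviction set
print(classify_probe([40, 45, 300, 42]))  # the slow access reveals eviction
```

Repeating prime, wait, and probe across many intervals turns this per-set hit/miss signal into a trace of the victim's memory access pattern.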

Patent
23 Apr 2015
TL;DR: In this article, the authors described a secure domain isolation method for secure information processing, which includes configuring a computing component with data/programming associated with address swapping and/or establishing isolation between domains or virtual machines.
Abstract: Systems and methods are disclosed for providing secure information processing. In one exemplary implementation, there is provided a method of secure domain isolation. Moreover, the method may include configuring a computing component with data/programming associated with address swapping and/or establishing isolation between domains or virtual machines, processing information such as instructions from an input device while keeping the domains or virtual machines separate, and/or performing navigation and/or other processing among the domains or virtual machines as a function of the data/programming and/or information, wherein secure isolation between the domains or virtual machines is maintained.

Journal ArticleDOI
TL;DR: An environment-aware paradigm for virtual slices that improves energy efficiency and copes with intermittent renewable power sources is investigated, and an optimal solution to the virtual slice assignment problem is proposed.
Abstract: The environmental footprint of datacenter activities can be reduced through both energy efficiency and renewable energy, in a complementary fashion, thanks to cloud computing paradigms. In a cloud hosting multi-tenant applications, virtual service providers can be given real-time recommendation techniques to allocate their virtual resources in the edge, core, or access layers in an optimal way that minimizes cost and footprint. Such a dynamic technique requires a flexible and optimized networking scheme to enable elastic virtual tenants spanning multiple physical nodes. In this paper, we investigate an environment-aware paradigm for virtual slices that improves energy efficiency and copes with intermittent renewable power sources. A virtual slice consists of optimal flows assigned to virtual machines (VMs) in a virtual data center, taking into account traffic requirements, VM locations, physical network capacity, and renewable energy availability. Considering various cloud consolidation schemes, we formulate and then propose an optimal solution to the virtual slice assignment problem. Simulations on the GSN show that the proposed model achieves better network footprint reductions than existing methods.
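As a rough illustration of the assignment idea, and not the paper's optimization model, a greedy heuristic might place each flow on a renewable-powered node with enough spare capacity before falling back to grid-powered ones. The node names, capacities, and the renewable-first preference below are assumptions:

```python
def assign_flows(flows, nodes):
    """Greedy virtual-slice assignment sketch.

    flows: {flow_name: bandwidth_demand}
    nodes: {node_name: (capacity, renewable_powered: bool)}
    Prefers renewable-powered nodes, then larger residual capacity,
    placing the most demanding flows first.
    """
    residual = {n: cap for n, (cap, _) in nodes.items()}
    placement = {}
    for flow, demand in sorted(flows.items(), key=lambda kv: -kv[1]):
        # Candidate order: renewable nodes first, most spare capacity first.
        order = sorted(residual,
                       key=lambda n: (not nodes[n][1], -residual[n]))
        for n in order:
            if residual[n] >= demand:
                placement[flow] = n
                residual[n] -= demand
                break
        else:
            raise ValueError(f"no node can host flow {flow}")
    return placement
```

The paper's actual solution is an optimization formulation; this greedy pass only conveys the renewable-aware placement intuition.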

Proceedings Article
18 May 2015
TL;DR: FlexNIC, a flexible network DMA interface that can be used by operating systems and applications alike to reduce packet processing overheads, is proposed, and it is shown how it can benefit widely used data center server applications, such as key-value stores.
Abstract: We propose FlexNIC, a flexible network DMA interface that can be used by operating systems and applications alike to reduce packet processing overheads. The recent surge of network I/O performance has put enormous pressure on memory and software I/O processing subsystems. Yet even at high speeds, flexibility in packet handling is still important for security, performance isolation, and virtualization. Thus, our proposal moves some of the packet processing traditionally done in software to the NIC DMA controller, where it can be done flexibly and at high speed. We show how FlexNIC can benefit widely used data center server applications, such as key-value stores.

Patent
15 Jul 2015
TL;DR: In this article, a virtual machine determines, based on the monitoring of the usage of virtual storage, whether to expand the virtual storage of the virtual machine and sends an expansion request to an authorization engine.
Abstract: Mechanisms are provided for automatically expanding a virtual storage of a virtual machine. The virtual machine monitors a usage of the virtual storage of the virtual machine. The virtual machine determines, based on the monitoring of the usage of the virtual storage, whether to expand the virtual storage of the virtual machine. In response to the virtual machine determining to expand the virtual storage of the virtual machine, a virtual machine manager executes one or more operations to expand the virtual storage. The monitoring and determining may be performed by a virtual storage management agent executing within the virtual machine and which may send an expansion request to an authorization engine to request expansion of the virtual storage.
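A minimal sketch of the monitoring decision inside such an in-VM agent follows; the threshold and growth step are assumed values, and `request_expansion` is a hypothetical stand-in for the authorization engine's interface, not the patent's actual API:

```python
import shutil

EXPAND_THRESHOLD = 0.90  # request growth when >90% full (assumed policy)
GROWTH_STEP_GB = 10      # size of each expansion request (assumed)

def needs_expansion(used_bytes: int, total_bytes: int,
                    threshold: float = EXPAND_THRESHOLD) -> bool:
    """Decide whether current usage justifies asking for more storage."""
    return total_bytes > 0 and used_bytes / total_bytes > threshold

def monitor_once(path: str, request_expansion) -> bool:
    """One pass of the in-VM agent: measure usage of the mounted
    virtual storage, and if it crosses the threshold, hand a request
    to the authorization engine (hypothetical callback)."""
    usage = shutil.disk_usage(path)
    if needs_expansion(usage.used, usage.total):
        request_expansion(path, GROWTH_STEP_GB)
        return True
    return False
```

In the patent's flow, a granted request would then cause the virtual machine manager to execute the actual expansion operations.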

01 Jan 2015
TL;DR: This dissertation describes complete solutions including architectures, algorithms, and implementations that apply coflows to multiple scenarios using central coordination, and demonstrates through large-scale cloud deployments and trace-driven simulations that simply knowing how flows relate to each other is enough for better network scheduling, meeting more deadlines, and providing higher performance isolation than what is otherwise possible using today's application-agnostic solutions.
Abstract: Over the past decade, the confluence of an unprecedented growth in data volumes and the rapid rise of cloud computing has fundamentally transformed systems software and corresponding infrastructure. To deal with massive datasets, more and more applications today are scaling out to large datacenters. These distributed data-parallel applications run on tens to thousands of machines in parallel to exploit I/O parallelism, and they enable a wide variety of use cases, including interactive analysis, SQL queries, machine learning, and graph processing. Communication between the distributed computation tasks of these applications often results in massive data transfers over the network. Consequently, concentrated efforts in both industry and academia have gone into building high-capacity, low-latency datacenter networks at scale. At the same time, researchers and practitioners have proposed a wide variety of solutions to minimize flow completion times or to ensure per-flow fairness based on the point-to-point flow abstraction that forms the basis of the TCP/IP stack. We observe that despite rapid innovations in both applications and infrastructure, application- and network-level goals are moving further apart. Data-parallel applications care about all their flows, but today’s networks treat each point-to-point flow independently. This fundamental mismatch has resulted in complex point solutions for application developers, a myriad of configuration options for end users, and an overall loss of performance. The key contribution of this dissertation is bridging this gap between application-level performance and network-level optimizations through the coflow abstraction. Each multipoint-to-multipoint coflow represents a collection of flows with a common application-level performance objective, enabling application-aware decision making in the network.
We describe complete solutions including architectures, algorithms, and implementations that apply coflows to multiple scenarios using central coordination, and we demonstrate through large-scale cloud deployments and trace-driven simulations that simply knowing how flows relate to each other is enough for better network scheduling, meeting more deadlines, and providing higher performance isolation than what is otherwise possible using today’s application-agnostic solutions. In addition to performance improvements, coflows allow us to consolidate communication optimizations across multiple applications, simplifying software development and relieving end users from parameter tuning. On the theoretical front, we discover and characterize for the first time the concurrent open shop scheduling with coupled resources family of problems. Because any flow is also a coflow with just one flow, coflows and coflow-based solutions presented in this dissertation generalize a large body of work in both networking and scheduling literatures.
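One concrete way coflow awareness helps scheduling can be sketched as follows: since a coflow finishes only when its slowest port finishes, ordering coflows by their bottleneck completion time (the smallest-bottleneck-first idea from this line of work) keeps short coflows from waiting behind long ones. The per-port bandwidth model below is a deliberate simplification:

```python
def bottleneck_time(coflow, bandwidth):
    """A coflow is {port: bytes_to_transfer}; its completion time is
    governed by its most loaded port at the given per-port bandwidth."""
    return max(b / bandwidth for b in coflow.values())

def schedule(coflows, bandwidth=1.0):
    """Smallest-bottleneck-first: return coflow names in the order
    they should run, shortest bottleneck completion time first."""
    return sorted(coflows,
                  key=lambda name: bottleneck_time(coflows[name], bandwidth))
```

Knowing which flows belong together is exactly the information today's per-flow abstractions discard, and which makes an ordering like this possible.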

Patent
Zhong Qi Feng1, Jiang Tao Li1, Yi Bin Wang1, Chao Yu1, Qing Feng Zhang1 
11 Sep 2015
TL;DR: In this paper, the authors present methods for predictively provisioning, by one or more processor, cloud computing resources of a cloud computing environment for at least one virtual machine, and initializing, by the one or more processor, the at least one virtual machine with the provisioned cloud computing resources of the cloud computing environment.
Abstract: Methods, computer program products, and systems are presented. The methods include, for instance: predictively provisioning, by one or more processor, cloud computing resources of a cloud computing environment for at least one virtual machine; and initializing, by the one or more processor, the at least one virtual machine with the provisioned cloud computing resources of the cloud computing environment. In one embodiment, the predictive provisioning may include: receiving historical utilization information of multiple virtual machines of the cloud computing environment, the multiple virtual machines having similar characteristics to the at least one virtual machine; and determining the cloud computing resources for the at least one virtual machine using the historical utilization information of the multiple virtual machines. In another embodiment, the predictive provisioning may include updating a provisioning database with the historical utilization information of the multiple virtual machines of the cloud computing environment.
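The historical-utilization step might look like the sketch below; the percentile-plus-headroom policy is an assumption for illustration, not the patent's actual method:

```python
def predict_provisioning(histories, headroom=1.2):
    """Size resources for a new VM from utilization traces of similar VMs.

    histories: list of utilization traces (one per similar VM), each a
    list of samples. Provision for a high-percentile sample of the
    pooled history, plus a safety headroom (assumed policy).
    """
    pooled = sorted(s for trace in histories for s in trace)
    # 95th-percentile sample (nearest-rank) of the pooled history.
    idx = max(0, int(0.95 * len(pooled)) - 1)
    p95 = pooled[idx]
    return p95 * headroom
```

For example, pooling two similar VMs' CPU traces and provisioning `predict_provisioning([trace_a, trace_b])` cores would cover the observed 95th-percentile demand with 20% headroom.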

Proceedings ArticleDOI
07 Jul 2015
TL;DR: This paper proposes a cooperative multi-agent learning approach to tackle the energy-performance tradeoff in cloud data centers, outperforming state-of-the-art algorithms.
Abstract: Distributed dynamic virtual machine (VM) consolidation (DDVMC) is a virtual machine management strategy that uses a distributed rather than a centralized algorithm to find the right balance between saving energy and attaining the best possible performance in a cloud data center. One of the significant challenges in DDVMC is that the optimality of this strategy depends heavily on the quality of the decision-making process. In this paper we propose a cooperative multi-agent learning approach to tackle this challenge. The experimental results show that our approach yields far better results w.r.t. the energy-performance tradeoff in cloud data centers in comparison to state-of-the-art algorithms.
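A single learning step of one such agent might look like the sketch below. The state/action encoding, reward shape, and learning rates are assumptions; the paper's actual agents and cooperation scheme may differ:

```python
def q_update(q, state, action, reward, next_state,
             alpha=0.1, gamma=0.9):
    """One tabular Q-learning update for a per-host agent deciding
    whether to 'migrate' a VM or 'keep' it. The reward is assumed to
    trade off energy saved against performance (SLA) penalties.

    q: dict mapping (state, action) -> value, updated in place.
    """
    best_next = max(q.get((next_state, a), 0.0)
                    for a in ("migrate", "keep"))
    old = q.get((state, action), 0.0)
    # Standard temporal-difference update toward reward + discounted future.
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return q[(state, action)]
```

Cooperation would then come from agents sharing state or values with their neighbors, so that local migrate/keep decisions converge toward a data-center-wide energy-performance balance.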

Proceedings ArticleDOI
26 May 2015
TL;DR: This work proposes an In-and-Out-of-the-Box Virtual Machine and Hypervisor based Intrusion Detection and Prevention System for virtualized environments that keeps virtual machines in a robust state by detecting and then eradicating rootkits as well as other attacks.
Abstract: Cloud Computing, enabled by virtualization technology, represents a revolutionary change in IT infrastructure. The hypervisor is a pillar of virtualization: it allows resources to be shared among virtual machines. Vulnerabilities present in a virtual machine can be leveraged by an attacker to launch advanced persistent attacks such as stealthy rootkits, Trojans, Denial of Service (DoS), and Distributed Denial of Service (DDoS) attacks. Virtual machines are prime targets for a malicious cloud user or attacker, as they are easily available for rent from a Cloud Service Provider (CSP). Attacks on virtual machines can disrupt the normal operation of cloud infrastructure. To secure the virtual environment, a defence mechanism at each virtual machine is imperative so that attacks on it are identified in a timely manner. This work proposes an In-and-Out-of-the-Box Virtual Machine and Hypervisor based Intrusion Detection and Prevention System for virtualized environments that keeps virtual machines in a robust state by detecting and then eradicating rootkits and other attacks. We conducted experiments using the popular open-source Host-based Intrusion Detection System (HIDS) Open Source SECurity Event Correlator (OSSEC). Tests covering both Linux- and Windows-based rootkits, DoS attacks, and file integrity verification were conducted, and all were successfully detected by OSSEC.