
Showing papers on "Temporal isolation among virtual machines published in 2010"


Journal ArticleDOI
13 Mar 2010
TL;DR: This study is the first to provide a comprehensive analysis of contention-mitigating techniques that use only scheduling, and finds a classification scheme that addresses not only contention for cache space, but contention for other shared resources, such as the memory controller, memory bus and prefetching hardware.
Abstract: Contention for shared resources on multicore processors remains an unsolved problem in existing systems despite significant research efforts dedicated to this problem in the past. Previous solutions focused primarily on hardware techniques and software page coloring to mitigate this problem. Our goal is to investigate how and to what extent contention for shared resources can be mitigated via thread scheduling. Scheduling is an attractive tool, because it does not require extra hardware and is relatively easy to integrate into the system. Our study is the first to provide a comprehensive analysis of contention-mitigating techniques that use only scheduling. The most difficult part of the problem is to find a classification scheme for threads, which would determine how they affect each other when competing for shared resources. We provide a comprehensive analysis of such classification schemes using a newly proposed methodology that enables us to evaluate these schemes separately from the scheduling algorithm itself and to compare them to the optimal. As a result of this analysis we discovered a classification scheme that addresses not only contention for cache space, but also contention for other shared resources, such as the memory controller, memory bus and prefetching hardware. To show the applicability of our analysis we design a new scheduling algorithm, which we prototype at user level, and demonstrate that it performs within 2% of the optimal. We also conclude that the highest impact of contention-aware scheduling techniques is not in improving performance of a workload as a whole but in improving quality of service or performance isolation for individual applications.

532 citations
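The classification-plus-scheduling idea summarized above can be illustrated with a toy sketch. This is not the paper's actual classifier or algorithm: it simply assumes each thread has an observed last-level-cache miss rate, splits threads into memory-light and memory-intensive groups, and spreads the intensive ones across cache domains so no domain concentrates contention. All names and numbers are hypothetical.

```python
def classify(threads):
    """Split threads into memory-light and memory-intensive groups by
    comparing each thread's LLC miss rate to the median rate."""
    rates = sorted(t["miss_rate"] for t in threads)
    median = rates[len(rates) // 2]
    light = [t for t in threads if t["miss_rate"] < median]
    heavy = [t for t in threads if t["miss_rate"] >= median]
    return light, heavy

def schedule(threads, n_domains):
    """Round-robin heavy threads across cache domains first, then fill
    with light threads, so intensive threads are kept apart."""
    light, heavy = classify(threads)
    domains = [[] for _ in range(n_domains)]
    for i, t in enumerate(heavy + light):
        domains[i % n_domains].append(t["name"])
    return domains
```

With four threads and two cache domains, each domain ends up with one memory-intensive and one memory-light thread, which is the pairing the paper's analysis favors.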


Proceedings ArticleDOI
18 Dec 2010
TL;DR: A two-level control system to manage the mappings of workloads to VMs and VMs to physical resources and an improved genetic algorithm with fuzzy multi-objective evaluation is proposed for efficiently searching the large solution space and conveniently combining possibly conflicting objectives.
Abstract: Server consolidation using virtualization technology has become increasingly important for improving data center efficiency. It enables one physical server to host multiple independent virtual machines (VMs), and the transparent movement of workloads from one server to another. Fine-grained virtual machine resource allocation and reallocation are possible in order to meet the performance targets of applications running on virtual machines. On the other hand, these capabilities create demands on system management, especially for large-scale data centers. In this paper, a two-level control system is proposed to manage the mappings of workloads to VMs and VMs to physical resources. The focus is on the VM placement problem, which is posed as a multi-objective optimization problem of simultaneously minimizing total resource wastage, power consumption and thermal dissipation costs. An improved genetic algorithm with fuzzy multi-objective evaluation is proposed for efficiently searching the large solution space and conveniently combining possibly conflicting objectives. The simulation-based evaluation, using power-consumption and thermal-dissipation models based on profiling of a Blade Center, demonstrates the good performance, scalability and robustness of our proposed approach. Compared with four well-known bin-packing algorithms and two single-objective approaches, the solutions obtained from our approach seek good balance among the conflicting objectives while the others cannot.

527 citations
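One way to read "fuzzy multi-objective evaluation" in a genetic algorithm is sketched below: each objective (e.g., resource wastage, power, thermal cost) is mapped to a [0, 1] satisfaction degree, and a placement's fitness is the degree to which it satisfies all objectives at once, here taken as the minimum. This is an illustrative interpretation, not the paper's exact formulation; the objective functions and bounds are made up.

```python
def membership(value, worst, best):
    """Linear fuzzy membership: 1.0 at `best`, 0.0 at `worst`,
    clamped to [0, 1] outside that range."""
    if worst == best:
        return 1.0
    m = (worst - value) / (worst - best)
    return max(0.0, min(1.0, m))

def fuzzy_fitness(placement, objectives):
    """objectives: list of (objective_fn, worst, best) triples.
    Aggregate per-objective satisfaction with min(), so a placement
    is only as good as its weakest objective."""
    degrees = [membership(fn(placement), worst, best)
               for fn, worst, best in objectives]
    return min(degrees)
```

The min() aggregation is what lets the GA balance conflicting objectives: a solution that excels on power but wastes resources scores no better than its resource-wastage degree.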


Proceedings ArticleDOI
01 Nov 2010
TL;DR: Simulation studies suggest that the proposed virtual machine placement and migration approach is effective in optimizing the data transfer between the virtual machine and data, thus helping optimize the overall application performance.
Abstract: Cloud computing represents a major step up in computing whereby shared computation resources are provided on demand. In such a scenario, applications and data thereof can be hosted by various networked virtual machines (VMs). As applications, especially data-intensive applications, often need to communicate with data frequently, the network I/O performance would affect the overall application performance significantly. Therefore, placement of virtual machines which host an application and migration of these virtual machines while the unexpected network latency or congestion occurs is critical to achieve and maintain the application performance. To address these issues, this paper proposes a virtual machine placement and migration approach to minimizing the data transfer time consumption. Our simulation studies suggest that the proposed approach is effective in optimizing the data transfer between the virtual machine and data, thus helping optimize the overall application performance.

241 citations
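The placement objective described above can be reduced to a very small sketch: estimate each candidate host's data transfer time as data size over the bandwidth between that host and the data, and place the VM where the estimate is smallest. This is a hypothetical simplification of the paper's approach; host names and bandwidths are invented.

```python
def transfer_time(data_size_gb, bandwidth_gbps):
    """Estimated time to move the VM's data set; infinite if no path."""
    if bandwidth_gbps <= 0:
        return float("inf")
    return data_size_gb / bandwidth_gbps

def place_vm(data_size_gb, hosts):
    """hosts: {name: bandwidth_to_data_in_gbps}. Pick the host with the
    smallest estimated data transfer time."""
    return min(hosts, key=lambda h: transfer_time(data_size_gb, hosts[h]))
```

A migration trigger would re-run the same comparison when observed latency or congestion makes the current host's effective bandwidth drop.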


Patent
15 Mar 2010
TL;DR: In this paper, a computer-implemented method for providing network access control in virtual environments is described, which may include injecting a transient security agent into a virtual machine that is running on a host machine.
Abstract: A computer-implemented method for providing network access control in virtual environments. The method may include: 1) injecting a transient security agent into a virtual machine that is running on a host machine; 2) receiving, from the transient security agent, an indication of whether the virtual machine complies with one or more network access control policies; and 3) controlling network access of the virtual machine based on the indication of whether the virtual machine complies with the one or more network access control policies. Various other methods, systems, and computer-readable media are also disclosed herein.

217 citations


Patent
11 Mar 2010
TL;DR: In this article, a server placement on at least one mapped server is generated for the set of virtual machines within each cluster, which substantially satisfies a set of secondary constraints.
Abstract: A method, information processing system, and computer program product manage server placement of virtual machines in an operating environment. A mapping of each virtual machine in a plurality of virtual machines to at least one server in a set of servers is determined. The mapping substantially satisfies a set of primary constraints associated with the set of servers. A plurality of virtual machine clusters is created. Each virtual machine cluster includes a set of virtual machines from the plurality of virtual machines. A server placement of one virtual machine in a cluster is interchangeable with a server placement of another virtual machine in the same cluster while satisfying the set of primary constraints. A server placement on at least one mapped server is generated for the set of virtual machines within each cluster. The server placement substantially satisfies a set of secondary constraints.

213 citations


Patent
Jacob Oshins1, Dustin L. Green1
23 Nov 2010
TL;DR: In this article, a virtual machine storage service uses a unique network identifier, and an SR-IOV compliant device can be used to transport I/O between a virtual machine and the virtual machine storage service.
Abstract: A virtual machine storage service can use a unique network identifier, and an SR-IOV compliant device can be used to transport I/O between a virtual machine and the virtual machine storage service. The virtual machine storage service can be offloaded to a child partition or migrated to another physical machine along with the unique network identifier.

165 citations


Proceedings Article
22 Jun 2010
TL;DR: Seawall, an edge-based solution, is proposed that achieves max-min fairness across tenant VMs by sending traffic through congestion-controlled, hypervisor-to-hypervisor tunnels.
Abstract: While today's virtual datacenters have hypervisor based mechanisms to partition compute resources between the tenants co-located on an end host, they provide little control over how tenants share the network. This opens cloud applications to interference from other tenants, resulting in unpredictable performance and exposure to denial of service attacks. This paper explores the design space for achieving performance isolation between tenants. We find that existing schemes for enterprise datacenters suffer from at least one of these problems: they cannot keep up with the numbers of tenants and the VM churn observed in cloud datacenters; they impose static bandwidth limits to obtain isolation at the cost of network utilization; they require switch and/or NIC modifications; they cannot tolerate malicious tenants and compromised hypervisors. We propose Seawall, an edge-based solution that achieves max-min fairness across tenant VMs by sending traffic through congestion-controlled, hypervisor-to-hypervisor tunnels.

164 citations
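Seawall enforces its allocation with congestion-controlled tunnels rather than a central solver, but the max-min fair objective itself can be computed with the classic progressive-filling procedure, sketched below for illustration: flows whose demand is below the equal share receive it in full, and the freed capacity is redistributed among the rest.

```python
def max_min_fair(capacity, demands):
    """Max-min fair allocation by progressive filling.
    demands: {flow: demanded_rate}. Returns {flow: allocated_rate}."""
    alloc = {}
    remaining = capacity
    pending = dict(demands)
    while pending:
        share = remaining / len(pending)
        # Satisfy every demand at or below the equal share, then recompute.
        small = {f: d for f, d in pending.items() if d <= share}
        if not small:
            # No one is satisfiable in full: split equally and stop.
            for f in pending:
                alloc[f] = share
            return alloc
        for f, d in small.items():
            alloc[f] = d
            remaining -= d
            del pending[f]
    return alloc
```

With capacity 10 and demands {a: 2, b: 8, c: 8}, flow a gets its full 2 and b and c split the remaining 8 evenly, which is the max-min fair outcome.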


Proceedings ArticleDOI
01 Nov 2010
TL;DR: This paper presents a novel virtual network framework aimed at controlling, with higher security, the inter-communication among virtual machines deployed on physical machines.
Abstract: Cloud computing is the next generation of networking computing, since it can deliver both software and hardware as on-demand resources and services over the Internet. Undoubtedly, one of the significant concerns in cloud computing is security. Virtualization is a key feature of cloud computing. In this paper, we focus on the security of virtual networks in virtualized environments. First, we outline the security issues in virtual machines, and then security problems that exist in a virtual network are discussed and analyzed based on the Xen platform. Finally, this paper presents a novel virtual network framework aimed at controlling, with higher security, the inter-communication among virtual machines deployed on physical machines.

164 citations


Proceedings Article
27 Apr 2010
TL;DR: A network QoS control framework for converged fabrics that automatically and flexibly programs a network of devices with the necessary QoS parameters, derived from a high level set of application requirements is proposed.
Abstract: Network convergence is becoming increasingly important for cost reduction and management simplification. However, this convergence requires strict performance isolation while keeping fine-grained control of each service (e.g., VoIP, video conferencing, etc.). It is difficult to guarantee the performance requirements for various services with manual configuration of the Quality-of-Service (QoS) knobs on a per-device basis, as is prevalent today. We propose a network QoS control framework for converged fabrics that automatically and flexibly programs a network of devices with the necessary QoS parameters, derived from a high-level set of application requirements. The controller leverages our QoS extensions of OpenFlow APIs, including per-flow rate-limiters and dynamic priority assignment. We also present some results from a testbed implementation to validate the performance of our controller.

153 citations


Proceedings ArticleDOI
03 Sep 2010
TL;DR: In a virtualized infrastructure where physical resources are shared, a single physical server failure will terminate several virtual servers and cripple the virtual infrastructures which contained those virtual servers; this can be circumvented if backup resources are pooled and shared across multiple virtual infrastructures, and intelligently embedded in the physical infrastructure.
Abstract: In a virtualized infrastructure where physical resources are shared, a single physical server failure will terminate several virtual servers and cripple the virtual infrastructures which contained those virtual servers. In the worst case, more failures may cascade from overloading the remaining servers. To guarantee some level of reliability, each virtual infrastructure, at instantiation, should be augmented with backup virtual nodes and links that have sufficient capacities. This ensures that, when physical failures occur, sufficient computing resources are available and the virtual network topology is preserved. However, in doing so, the utilization of the physical infrastructure may be greatly reduced. This can be circumvented if backup resources are pooled and shared across multiple virtual infrastructures, and intelligently embedded in the physical infrastructure. These techniques can reduce the physical footprint of virtual backups while guaranteeing reliability.

144 citations


Proceedings ArticleDOI
22 Jan 2010
TL;DR: This work compares the performance of web servers based on the CLBVM policy against independent virtual machines (VMs) running on a single physical server using Xen virtualization.
Abstract: Cloud Computing adds more power to the existing Internet technologies. Virtualization harnesses the power of the existing infrastructure and resources. With virtualization we can simultaneously run multiple instances of different commodity operating systems. Since we have limited processors and jobs work in concurrent fashion, overload situations can occur. Things become even more challenging in a distributed environment. We propose a Central Load Balancing Policy for Virtual Machines (CLBVM) to balance the load evenly in a distributed virtual machine/cloud computing environment. This work compares the performance of web servers based on our CLBVM policy against independent virtual machines (VMs) running on a single physical server using Xen virtualization. The paper discusses the efficacy and feasibility of using this kind of policy for overall performance improvement.

Patent
26 Jan 2010
TL;DR: In this paper, a system and method for allocating resources in a cloud environment includes determining permitted usage of virtual machines and partitioning resources between network servers in accordance with a virtual hypervisor generated by an abstraction layer configured as an interface between a solution manager and a cloud network.
Abstract: A system and method for allocating resources in a cloud environment includes determining permitted usage of virtual machines and partitioning resources between network servers in accordance with a virtual hypervisor generated by an abstraction layer configured as an interface between a solution manager and an interface to a cloud network. Resource usage limits are determined for each virtual machine associated with the virtual hypervisor, and the servers are analyzed through the virtual hypervisors to determine if the virtual machines need to be migrated. If reallocation is needed, virtual machine migration requests are issued to migrate virtual machines into a new configuration at the virtual hypervisor abstraction level. The servers are reanalyzed to determine if migration of the new configuration is needed. Shares are computed to enforce balance requirements, and virtual machine shares and limits are adjusted for resources according to the computed shares.

Patent
Son VoBa1, Octavian T. Ureche1
17 Feb 2010
TL;DR: In this paper, a virtual hard disk drive containing a guest operating system is bound to a source computing device through encryption; when the virtual disk is moved to a different computing device, a virtual machine manager instantiates a virtual machine and causes it to boot the operating system from the virtual hard disk drive.
Abstract: A virtual hard disk drive containing a guest operating system is bound to a source computing device through encryption. When the virtual hard drive is moved to a different computing device, a virtual machine manager instantiates a virtual machine and causes the virtual machine to boot the operating system from the virtual hard disk drive. Because the guest operating system is encrypted by an encryption device on the source computing device, the virtual machine causes the decryption of the guest operating system with a copy of the key. The virtual hard disk is bound to the target computing device through encryption based on hardware on the target computing device.

Proceedings ArticleDOI
05 Jul 2010
TL;DR: A measurement-based analysis of the performance impact of co-locating applications in a virtualized cloud is presented, in terms of throughput and resource sharing effectiveness, including the impact of idle instances on applications running concurrently on the same physical host.
Abstract: Virtualization is a key technology for cloud based data centers to implement the vision of infrastructure as a service (IaaS) and to promote effective server consolidation and application consolidation. However, current implementations of virtual machine monitors do not provide sufficient performance isolation to guarantee the effectiveness of resource sharing, especially when the applications running on multiple virtual machines of the same physical machine are competing for computing and communication resources. In this paper, we present our performance measurement study of network I/O applications in a virtualized cloud. We focus our measurement-based analysis on the performance impact of co-locating applications in a virtualized cloud in terms of throughput and resource sharing effectiveness, including the impact of idle instances on applications that are running concurrently on the same physical host. Our results show that by strategically co-locating network I/O applications, performance improvement for cloud consumers can be as high as 34%, and cloud providers can achieve over 40% performance gain.

Patent
08 Apr 2010
TL;DR: In this article, a management server and a method for providing a cloud computing service at high speed and reasonable cost, is provided, where the management server provides a virtual machine to a client as a computing resource.
Abstract: A management server and method for providing a cloud computing service at high speed and reasonable cost, are provided. The management server provides a virtual machine to a client as a computing resource. The virtual machine is multiplexed by operating multiple virtual devices on a single virtual machine. Accordingly, demand for computing resources may be predicted in advance and may be provided to a user more efficiently.

Proceedings ArticleDOI
19 Apr 2010
TL;DR: This paper evaluates the impact of task oversubscription on the performance of MPI, OpenMP and UPC implementations of the NAS Parallel Benchmarks on UMA and NUMA multi-socket architectures and discusses sharing and partitioning system management techniques.
Abstract: Existing multicore systems already provide deep levels of thread parallelism; hybrid programming models and composability of parallel libraries are very active areas of research within the scientific programming community. As more applications and libraries become parallel, scenarios where multiple threads compete for a core are unavoidable. In this paper we evaluate the impact of task oversubscription on the performance of MPI, OpenMP and UPC implementations of the NAS Parallel Benchmarks on UMA and NUMA multi-socket architectures. We evaluate explicit thread affinity management against the default Linux load balancing and discuss sharing and partitioning system management techniques. Our results indicate that oversubscription provides beneficial effects for applications running in competitive environments. Sharing all the available cores between applications provides better throughput than explicit partitioning. Modest levels of oversubscription improve system throughput by 27% and provide better performance isolation of applications from their co-runners: best overall throughput is always observed when applications share cores and each is executed with multiple threads per core. Rather than “resource” symbiosis, our results indicate that the determining behavioral factor when applications share a system is the granularity of the synchronization operations.

Proceedings ArticleDOI
06 Dec 2010
TL;DR: This paper measures and analyzes the performance of three open source virtual machine monitors (OpenVZ, Xen and KVM), which adopt container-based virtualization, para-virtualization and full-virtualization respectively, and examines their design and implementation as a white box.
Abstract: Although virtualization holds numerous merits, it meanwhile incurs some performance loss. As the pivotal component of a virtualization system, the efficiency of the virtual machine monitor (VMM) will largely impact the performance of the whole system. Therefore, it's indispensable to evaluate the performance of virtual machine monitors with different virtualization technologies. In this paper, we measure and analyze the performance of three open source virtual machine monitors: OpenVZ, Xen and KVM, which adopt container-based virtualization, para-virtualization and full-virtualization respectively. We first measure them as a black box about their macro-performance on the virtualization of processor, memory, disk, network, server applications (including web, database and Java) and their micro-performance on the virtualization of system operation and context switch with several canonical benchmarks, and then analyze these testing results by examining their design and implementation as a white box. The experimental data not only show some valuable information for designers, but also provide a comprehensive performance understanding for users.

Proceedings ArticleDOI
22 Mar 2010
TL;DR: CCCV, a system that creates a covert channel and communicates data secretly using CPU loads between virtual machines on the Xen hypervisor, communicated 64-bit data with a 100% success rate in an ideal environment, and in an environment where Web servers are processing requests on other virtual machines.
Abstract: Multiple virtual machines on a single virtual machine monitor are isolated from each other. A malicious user on one virtual machine usually cannot relay secret data to other virtual machines without using explicit communication media such as shared files or a network. However, this isolation is threatened by communication in which CPU load is used as a covert channel. Unfortunately, this threat has not been fully understood or evaluated. In this study, we quantitatively evaluate the threat of CPU-based covert channels between virtual machines on the Xen hypervisor. We have developed CCCV, a system that creates a covert channel and communicates data secretly using CPU loads. CCCV consists of two user processes, a sender and a receiver. The sender runs on one virtual machine, and the receiver runs on another virtual machine on the same hypervisor. We measured the bandwidth and communication accuracy of the covert channel. CCCV communicated 64-bit data with a 100% success rate in an ideal environment, and with a success rate of over 90% in an environment where Web servers are processing requests on other virtual machines.
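The channel CCCV evaluates can be illustrated with a toy, noise-free simulation (not the CCCV implementation): the sender encodes each bit as one time slot of high (bit 1) or low (bit 0) CPU load, and the receiver samples the load per slot and thresholds it. A real channel must contend with load noise from co-located workloads, which is what the paper's over-90% accuracy figure under Web-server interference reflects. All load values here are illustrative.

```python
def send(bits, high=0.9, low=0.1):
    """Map a bit string to one simulated CPU-load sample per time slot."""
    return [high if b == "1" else low for b in bits]

def receive(samples, threshold=0.5):
    """Decode by thresholding the observed CPU load in each slot."""
    return "".join("1" if s > threshold else "0" for s in samples)
```

In this idealized setting decoding is lossless; bandwidth in practice is limited by how long each load slot must be for the receiver to measure it reliably.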

Proceedings ArticleDOI
Kejiang Ye1, Dawei Huang1, Xiaohong Jiang1, Huajun Chen1, Shuang Wu1 
18 Dec 2010
TL;DR: Experimental results show that both technologies can effectively meet energy-saving goals with little performance overhead, and that efficient consolidation and migration strategies can improve energy efficiency.
Abstract: Virtual machine technology is widely applied to modern data centers for cloud computing as a key technology to realize energy-efficient operation of servers. Server consolidation achieves energy efficiency by enabling multiple instantiations of operating systems (OSes) to run simultaneously on a single physical machine, while live migration of virtual machines can transfer the virtual machine workload from one physical machine to another without interrupting service. However, both technologies have their own performance overheads, and there is a tradeoff between performance and energy efficiency. In this paper, we study energy efficiency from the performance perspective. Firstly, we present a virtual machine based energy-efficient data center architecture for cloud computing. Then we investigate the potential performance overheads caused by server consolidation and live migration of virtual machines. Experimental results show that both technologies can effectively meet energy-saving goals with little performance overhead. Efficient consolidation and migration strategies can improve the energy efficiency.

Patent
08 Feb 2010
TL;DR: In this article, the authors describe a technology by which a virtual hard disk is migrated from a source storage location to a target storage location without needing any shared physical storage, in which a machine may continue to use the virtual disk during migration.
Abstract: Described is a technology by which a virtual hard disk is migrated from a source storage location to a target storage location without needing any shared physical storage, in which a machine may continue to use the virtual hard disk during migration. This facilitates using the virtual hard disk in conjunction with live-migrating a virtual machine. Virtual hard disk migration may occur fully before or after the virtual machine is migrated to the target host, or partially before and partially after virtual machine migration. Background copying, sending of write-through data, and/or servicing read requests may be used in the migration. Also described is throttling data writes and/or data communication to manage the migration of the virtual hard disk.

Patent
28 Apr 2010
TL;DR: In this article, the availability information of a virtual machine is provided by a feedback agent executing on the virtual machine and a hypervisor executing multiple virtual machines on a common set of physical computing hardware.
Abstract: Methods and apparatus for providing availability information of a virtual machine to a load balancer are disclosed. The availability information of the virtual machine may be normalized information from performance metrics of the virtual machine and performance metrics of the physical machine on which the virtual machine operates. The normalized availability of a virtual machine is provided by a feedback agent executing on the virtual machine. Alternatively, the normalized availability of a virtual machine is provided by a feedback agent executing on a hypervisor executing multiple virtual machines on a common set of physical computing hardware.

Patent
Jacob Oshins1
19 Mar 2010
TL;DR: In this paper, techniques for effectuating a virtual NUMA architecture for virtual machines are disclosed.
Abstract: Techniques for effectuating a virtual NUMA architecture for virtual machines are disclosed herein.

Patent
Lars Reuther1
29 Jun 2010
TL;DR: In this article, a virtualization module using second-level paging functionality is employed, paging-out the virtual machine memory content from one physical host to the shared storage, and the content of the memory file can be restored on another physical host by employing on-demand paging and optionally low-priority background paging from a shared storage to the other physical host.
Abstract: Techniques for providing the ability to live migrate a virtual machine from one physical host to another physical host employ shared storage as the transfer medium for the state of the virtual machine. In addition, the ability for a virtualization module to use second-level paging functionality is employed, paging-out the virtual machine memory content from one physical host to the shared storage. The content of the memory file can be restored on another physical host by employing on-demand paging and optionally low-priority background paging from the shared storage to the other physical host.

Patent
30 Mar 2010
TL;DR: In this paper, the storage optimization selection for virtual disks of a virtualization environment is based in part on the disk type of the virtual disk included in a virtual machine and metadata associated with the disk.
Abstract: Storage optimization selection for virtual disks of a virtualization environment, where the storage optimization can be selected based in part on the disk type of a virtual disk included in a virtual machine. The disk type of the virtual disk can be discovered by the virtualization environment which queries a database within the virtualization environment for metadata associated with the virtual disk. The metadata can be created when a virtual desktop infrastructure creates the virtual disk, and a virtual machine template that includes the at least one virtual disk. The virtual disk can be modified to either include or be associated with the metadata that describes a disk type of the virtual disk. Upon executing the virtual machine that includes the modified virtual disk, a storage subsystem of the virtualization environment can obtain the metadata of the virtual disk to discover the disk type of the virtual disk.

Patent
30 Nov 2010
TL;DR: In this paper, the authors propose systems and methods for reclassifying a set of virtual machines in a cloud-based network, which can analyze virtual machine data to determine performance metrics associated with the set of VMs, as well as target VMs' performance metrics to determine a subset of target machines to which the VMs can be reassigned or reclassified.
Abstract: Embodiments relate to systems and methods for reclassifying a set of virtual machines in a cloud-based network. The systems and methods can analyze virtual machine data to determine performance metrics associated with the set of virtual machines, as well as target data to determine a set of target machines to which the set of virtual machines can be reassigned or reclassified. In embodiments, benefits of reassigning any of the set of virtual machines to any of the set of target virtual machines can be determined. Based on the benefits, the systems and methods can reassign or reclassify appropriate virtual machines to appropriate target virtual machines.

Patent
09 Jul 2010
TL;DR: In this paper, the authors present a method, information processing system, and computer program product manage virtual machine migration, where a virtual machine is selected from a set of virtual machines and each physical machine in a plurality of physical machines is analyzed with respect to a first set of migration constraints associated with the virtual machine that has been selected.
Abstract: A method, information processing system, and computer program product manage virtual machine migration. A virtual machine is selected from a set of virtual machines. Each physical machine in a plurality of physical machines is analyzed with respect to a first set of migration constraints associated with the virtual machine that has been selected. A set of physical machines in the plurality of physical machines that satisfy the first set of migration constraints is identified. A migration impact factor is determined for each physical machine in the set of physical machines that has been identified based on a second set of migration constraints associated with the virtual machine that has been selected. A physical machine is selected from the set of physical machines with at least a lowest migration impact factor on which to migrate the virtual machine that has been selected in response to determining the migration impact factor.
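The selection procedure this abstract describes can be sketched in a few lines: filter the physical machines by the first set of (hard) migration constraints, score the survivors with a migration impact factor derived from the second set, and pick the machine with the lowest impact. The constraint predicates and impact function below are placeholders, not anything specified in the patent.

```python
def select_target(vm, machines, constraints, impact):
    """Pick a migration target for `vm`.
    machines: list of candidate physical machines;
    constraints: predicates (vm, machine) -> bool that must all hold;
    impact: function (vm, machine) -> migration impact factor."""
    feasible = [m for m in machines
                if all(c(vm, m) for c in constraints)]
    if not feasible:
        return None  # no machine satisfies the hard constraints
    return min(feasible, key=lambda m: impact(vm, m))
```

For example, with a memory-capacity constraint and current host load as the impact factor, the method picks the least-loaded machine among those with enough free memory.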

Patent
12 Oct 2010
TL;DR: In this article, a computer implemented method and system for securing a virtual environment and virtual machines in the virtual environment is provided, where a credential authority server provides, on request, environment credentials to each of the virtual machines and the hypervisors on authorization of each VM and the Hypervisors.
Abstract: A computer implemented method and system for securing a virtual environment and virtual machines in the virtual environment is provided. A credential authority server is provided for managing environment credentials of the virtual environment. A virtual machine shim is associated with each of the virtual machines, and one or more hypervisor shims are associated with one or more hypervisors. The credential authority server provides, on request, environment credentials to each of the virtual machines and the hypervisors on authorization of each of the virtual machines and the hypervisors. Each virtual machine shim associated with each of the virtual machines communicates the provided environment credentials to the hypervisor shims for validation. The hypervisors associated with the hypervisor shims validate each of the virtual machines associated with each virtual machine shim based on the communicated environment credentials to allow instantiation of each of the virtual machines in the virtual environment.
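The issue-then-validate flow between the credential authority and a hypervisor shim can be sketched with a keyed MAC. The shared secret and the HMAC scheme are assumptions for illustration; the patent does not specify the credential format.

```python
import hmac
import hashlib

# Assumed shared secret between the credential authority and hypervisor shims.
ENV_SECRET = b"environment-secret"

def issue_credential(vm_id: str) -> str:
    """Credential authority: bind a VM identity to this environment."""
    return hmac.new(ENV_SECRET, vm_id.encode(), hashlib.sha256).hexdigest()

def validate_credential(vm_id: str, credential: str) -> bool:
    """Hypervisor shim: allow instantiation only if the presented
    credential matches the one the authority would issue for this VM."""
    return hmac.compare_digest(issue_credential(vm_id), credential)
```

In this sketch the VM shim would carry the issued credential and present it to the hypervisor shim, which validates it before permitting instantiation.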

Proceedings ArticleDOI
Yanyan Hu1, Xiang Long1, Jiong Zhang1, Jun He1, Li Xia1 
21 Jun 2010
TL;DR: This paper proposes a virtual machine I/O scheduling model based on multi-core dynamic partitioning and implements a prototype on the Xen virtual machine, demonstrating improved I/O performance while scheduling fairness is still guaranteed.
Abstract: In a virtual machine system, the scheduler within the virtual machine monitor (VMM) plays a key role in determining the overall fairness and performance characteristics of the whole system. However, traditional VMM schedulers focus on sharing the processor resources fairly among guest domains while leaving the scheduling of I/O tasks as a secondary concern. This can cause serious degradation of I/O performance and make virtualization less desirable for I/O-intensive applications. To eliminate the I/O performance bottleneck caused by scheduling delay, this paper proposes a virtual machine I/O scheduling model based on multi-core dynamic partitioning and implements a prototype on the Xen virtual machine. In this model, the I/O operations of guest domains are monitored and the runtime information is analyzed. When preset conditions are satisfied, the processor cores of the system are divided into three subsets that undertake different tasks, with each subset employing a specific scheduling strategy to meet the requirements of its tasks. Experimental results demonstrate that the scheduling model efficiently improves the I/O performance of the virtual machine system: compared with the default Xen credit scheduler, network and disk bandwidth increase by 35% and 12% respectively, and the average latency of ping operations drops by 37%. At the same time, the method causes only a slight negative effect on the performance of compute-intensive applications, and scheduling fairness is still guaranteed.
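The trigger-based split of cores into three subsets could be sketched as follows. The threshold value, the sizing heuristic, and the subset names (driver domain, I/O guests, compute guests) are assumptions; the paper defines its own preset conditions.

```python
def partition_cores(cores, io_ratio, threshold=0.3):
    """Once the observed I/O share of the workload crosses the trigger
    threshold, split the cores into a driver-domain subset, an I/O-guest
    subset, and a compute subset; below the threshold all cores stay shared."""
    if io_ratio < threshold:
        return {"shared": list(cores)}
    n_io = max(1, int(len(cores) * io_ratio))
    return {"driver": list(cores[:1]),
            "io": list(cores[1:1 + n_io]),
            "compute": list(cores[1 + n_io:])}
```

Each returned subset would then run its own scheduling strategy, e.g. a latency-oriented scheduler on the I/O subset and a throughput-oriented one on the compute subset.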

Patent
John M. Suit1
27 Dec 2010
TL;DR: In this patent, the authors propose a method to determine the current resource usage data of virtual machines and assign the virtual machines to at least one business application service group (BASG) that requires the available resources of the virtual machines.
Abstract: Virtual machine resources may be monitored for optimal allocation. One example method may include generating a list of virtual machines operating in a network and surveying the virtual machines to determine their current resource usage data. The method may also include ranking the virtual machines based on their current resource usage data to indicate available resources of the virtual machines, and assigning the virtual machines to at least one business application service group (BASG) that requires the available resources of the virtual machines.
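The survey-rank-assign pipeline can be sketched as below. The headroom metric (capacity minus usage) and the greedy fill are illustrative assumptions, not the patented allocation policy.

```python
def rank_by_headroom(vms):
    """Order VMs by available resources (capacity minus current usage),
    most headroom first."""
    return sorted(vms, key=lambda v: v["capacity"] - v["usage"], reverse=True)

def assign_to_basg(vms, required):
    """Greedily assign the highest-headroom VMs to a business application
    service group (BASG) until its resource requirement is covered."""
    group, remaining = [], required
    for vm in rank_by_headroom(vms):
        if remaining <= 0:
            break
        group.append(vm["name"])
        remaining -= vm["capacity"] - vm["usage"]
    return group
```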

Book ChapterDOI
19 Jun 2010
TL;DR: This work takes a topology-aware approach to on-chip QOS and proposes to segregate shared resources into dedicated, QOS-enabled regions of the chip via a combination of topology and operating system support.
Abstract: Power limitations and complexity constraints demand modular designs, such as chip multiprocessors (CMPs) and systems-on-chip (SOCs). Today's CMPs feature up to a hundred discrete cores, with greater levels of integration anticipated in the future. Supporting effective on-chip resource sharing for cloud computing and server consolidation necessitates CMP-level quality-of-service (QOS) for performance isolation, service guarantees, and security. This work takes a topology-aware approach to on-chip QOS. We propose to segregate shared resources into dedicated, QOS-enabled regions of the chip. We then eliminate QOS-related hardware and its associated overheads from the rest of the die via a combination of topology and operating system support. We evaluate several topologies for the QOS-enabled regions, including a new organization called Destination Partitioned Subnets (DPS) which uses a light-weight dedicated network for each destination node. DPS matches or bests other topologies with comparable bisection bandwidth in performance, area- and energy-efficiency, fairness, and preemption resilience.
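The DPS idea of one light-weight network per destination node can be illustrated with a toy software model. The class, its queue-per-destination structure, and the method names are illustrative assumptions, not the paper's hardware design.

```python
class DPSFabric:
    """Toy model of Destination Partitioned Subnets (DPS): each destination
    node gets its own light-weight subnet, so traffic bound for different
    destinations never shares a queue inside the QOS-enabled region."""

    def __init__(self, destinations):
        self.subnets = {d: [] for d in destinations}  # one queue per destination

    def send(self, src, dest, payload):
        self.subnets[dest].append((src, payload))

    def drain(self, dest):
        """Deliver and clear everything queued on one destination's subnet."""
        msgs, self.subnets[dest] = self.subnets[dest], []
        return msgs
```

Because each destination owns its subnet, one node's backlog cannot delay traffic headed elsewhere, which is the isolation property the paper evaluates for fairness and preemption resilience.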