Proceedings ArticleDOI

Improved filter-weight algorithm for utilization-aware resource scheduling in OpenStack

TL;DR: The improved nova-scheduler algorithm considers not only RAM and CPU but also vCPU utilization and network bandwidth, and is referred to as the metrics-weight scheduler in this paper.
Abstract: OpenStack is a cloud computing platform that provides Infrastructure as a Service (IaaS). OpenStack comprises compute, storage, and network resources. Resource allocation in a cloud environment deals with assigning the available resources in a cost-effective manner. Compute resources are allocated in the form of virtual machines (aka instances), storage resources in the form of virtual disks (aka volumes), and network resources in the form of virtual switches, routers, and subnets for instances. Resource allocation in OpenStack is carried out by nova-scheduler. However, it is unable to support provider objectives such as allocating resources based on user privileges, preference for the underlying physical infrastructure, or actual resource utilization (for example, CPU, memory, storage, and network bandwidth). The improved nova-scheduler algorithm considers not only RAM and CPU but also vCPU utilization and network bandwidth, and is referred to in this paper as the metrics-weight scheduler. This paper presents a performance evaluation and analysis of the Filter scheduler and the Metrics-weight scheduler.
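To make the weighting idea concrete, below is a minimal, hypothetical sketch (not the paper's code) of how a metrics-weight host scorer could combine free RAM, free vCPUs, measured vCPU utilization, and available network bandwidth; the HostState fields and WEIGHTS values are illustrative assumptions.

```python
# Illustrative sketch only: score candidate hosts from free RAM, free vCPUs,
# measured vCPU utilization, and available network bandwidth, in the spirit
# of a metrics-weight scheduler. Field names and weights are assumptions.
from dataclasses import dataclass

@dataclass
class HostState:
    name: str
    free_ram_mb: int
    free_vcpus: int
    vcpu_util: float      # 0.0-1.0, fraction of vCPU capacity currently in use
    free_net_mbps: float  # available network bandwidth

# Hypothetical weights; a real scheduler would expose these as config options.
WEIGHTS = {"ram": 1.0, "vcpu": 1.0, "vcpu_util": 2.0, "net": 1.5}

def weigh_host(host: HostState, flavor_ram_mb: int, flavor_vcpus: int) -> float:
    """Return a score; higher means a better placement target."""
    if host.free_ram_mb < flavor_ram_mb or host.free_vcpus < flavor_vcpus:
        return float("-inf")  # host cannot satisfy the request at all
    return (WEIGHTS["ram"] * host.free_ram_mb
            + WEIGHTS["vcpu"] * host.free_vcpus
            + WEIGHTS["vcpu_util"] * (1.0 - host.vcpu_util) * 100
            + WEIGHTS["net"] * host.free_net_mbps)

def pick_host(hosts, flavor_ram_mb, flavor_vcpus):
    # Highest-scoring host wins; a real implementation would reject -inf hosts.
    return max(hosts, key=lambda h: weigh_host(h, flavor_ram_mb, flavor_vcpus))
```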
Citations
Book ChapterDOI
Rong Zhang, Zhong Amin, Bo Dong, Feng Tian, Rui Li
25 Jun 2018
TL;DR: This paper proposes a definition named the “Container-VM-PM” architecture and a novel container placement strategy that simultaneously takes the three involved entities into account, which is superior to the existing strategy with regard to physical resource utilization.
Abstract: Docker is a mature containerization technique used to perform operating-system-level virtualization. One open issue in the cloud environment is how to properly choose a virtual machine (VM) on which to initialize a Docker instance, i.e., a container, a problem similar to the conventional one of VM placement onto physical machines (PMs). Current studies mainly focus on container placement and VM placement independently, but rarely take into consideration the systematic collaboration of the two placements. We view this as a main reason for the scattered distribution of containers in a data center, which ultimately results in worse physical resource utilization. In this paper, we propose a definition named the “Container-VM-PM” architecture and a novel container placement strategy that simultaneously takes the three involved entities into account. Furthermore, we model a fitness function for the selection of the VM and the PM. Simulation experiments show that our method is superior to the existing strategy with regard to physical resource utilization.
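As an illustration of the “Container-VM-PM” idea, the sketch below scores (VM, PM) pairs jointly for an incoming container request; the fitness formula, field names, and the 0.1 trade-off weight are assumptions made for the example, not the fitness function published in the paper.

```python
# Minimal sketch: choose a VM for a container by considering its hosting PM
# as well, so containers consolidate instead of scattering across the DC.
from dataclasses import dataclass

@dataclass
class PM:
    cpu_used: float
    cpu_cap: float
    mem_used: float
    mem_cap: float

@dataclass
class VM:
    pm: PM
    cpu_free: float
    mem_free: float

def fitness(vm: VM, req_cpu: float, req_mem: float) -> float:
    """Prefer VMs that fit the container snugly on already-loaded PMs."""
    if vm.cpu_free < req_cpu or vm.mem_free < req_mem:
        return float("-inf")  # the VM cannot hold this container
    pm_load = (vm.pm.cpu_used / vm.pm.cpu_cap + vm.pm.mem_used / vm.pm.mem_cap) / 2
    slack = (vm.cpu_free - req_cpu) + (vm.mem_free - req_mem)
    return pm_load - 0.1 * slack  # 0.1 is an illustrative trade-off weight

def place_container(vms, req_cpu, req_mem):
    return max(vms, key=lambda v: fitness(v, req_cpu, req_mem))
```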

35 citations

Journal ArticleDOI
01 Jun 2018
TL;DR: SOWO is presented, a discrete particle swarm optimization-based workload optimization approach that minimizes the number of active physical machines in virtual machine placement and thereby improves the efficiency of resource utilization in a cloud center.
Abstract: Virtual machine placement has great potential to significantly improve the efficiency of resource utilization in a cloud center. Focusing on CPU and memory resources, this paper presents SOWO, a discrete particle swarm optimization-based workload optimization approach that minimizes the number of active physical machines in virtual machine placement. The experimental results show the usability and superiority of SOWO: compared with the OpenStack native scheduler, SOWO decreases physical machine consumption by at least 50% and increases the memory utilization of physical machines by more than two times.
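For intuition, here is a compact, discrete-PSO-style sketch of the placement problem SOWO targets (minimizing active physical machines); the move rule, penalty constant, and parameters are simplified assumptions rather than SOWO's published formulation.

```python
# Discrete-PSO-style VM placement sketch: a particle is an assignment of VMs
# to PMs; the objective is the number of active PMs, with overloads penalized.
import random

def fitness(assign, vms, pm_cap):
    """Active PM count; infeasible (overloaded) assignments are penalized."""
    used = {}
    for vm, pm in zip(vms, assign):
        c, m = used.get(pm, (0, 0))
        used[pm] = (c + vm[0], m + vm[1])
    penalty = sum(1 for c, m in used.values() if c > pm_cap[0] or m > pm_cap[1])
    return len(used) + 100 * penalty

def pso_place(vms, n_pms, pm_cap, particles=20, iters=200, w=0.4, c1=0.3, c2=0.4):
    swarm = [[random.randrange(n_pms) for _ in vms] for _ in range(particles)]
    pbest = [p[:] for p in swarm]
    gbest = min(pbest, key=lambda p: fitness(p, vms, pm_cap))
    for _ in range(iters):
        for i, pos in enumerate(swarm):
            for d in range(len(vms)):
                r = random.random()          # discrete "velocity": copy a dimension
                if r < c2:
                    pos[d] = gbest[d]        # follow the global best
                elif r < c1 + c2:
                    pos[d] = pbest[i][d]     # follow the personal best
                elif r > 1 - w * 0.1:
                    pos[d] = random.randrange(n_pms)  # small random exploration
            if fitness(pos, vms, pm_cap) < fitness(pbest[i], vms, pm_cap):
                pbest[i] = pos[:]
        gbest = min(pbest + [gbest], key=lambda p: fitness(p, vms, pm_cap))
    return gbest

# Example: 10 VMs of shape (cpu=2, mem=4) packed onto PMs with capacity (8, 16).
vms = [(2, 4)] * 10
print(fitness(pso_place(vms, 10, (8, 16)), vms, (8, 16)))
```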

27 citations


Cites methods from "Improved filter-weight algorithm fo..."

  • ...Sahasrabudhe [27] analyzed filter scheduler and metrics-weight scheduler in OpenStack and then performed the performance evaluation....


Journal ArticleDOI
TL;DR: This paper proposes two architecture options, together with proof-of-concept prototypes and corresponding embedding algorithms, which enable the provisioning of delay-sensitive IoT applications, and proposes a multi-layer orchestration system in which an orchestrator is added on top of VIMs and network controllers to integrate different resource domains.

21 citations

Proceedings ArticleDOI
15 Apr 2018
TL;DR: A slight modification of OpenStack, the mainstream VIM today, is shown which enables it to manage a distributed cloud-fog infrastructure and takes into account network aspects that are extremely important in a resource setup with remote fogs.
Abstract: We see two important trends in ICT nowadays: the backends of online applications and services are moving to the cloud, and for delay-sensitive ones the cloud is being extended with fogs. The reason for these phenomena is primarily economic, but there are other benefits too: fast service creation, flexible reconfigurability, and portability. The management and orchestration of these services are currently separated into at least two layers: virtual infrastructure managers (VIMs) and network controllers operate their own domains, consisting of compute or network resources, while services with cross-domain deployments are handled by an upper-level orchestrator. In this paper we show a slight modification of OpenStack, the mainstream VIM today, which enables it to manage a distributed cloud-fog infrastructure. While our solution alleviates the need for running OpenStack controllers in the lightweight edge, it takes into account network aspects that are extremely important in a resource setup with remote fogs. We propose and analyze an online resource orchestration algorithm, describe the OpenStack-based implementation aspects, and show large-scale simulation results on the performance of our algorithm.
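A minimal sketch of what a network-aware placement step for such a cloud-fog setup might look like is given below; the Node fields, the RTT bound, and the best-fit tie-breaking rule are assumptions for illustration, not the paper's algorithm.

```python
# Latency-aware placement sketch for a mixed cloud/fog pool: among nodes with
# enough capacity, accept only those meeting the service's delay bound, then
# best-fit to keep large cloud nodes free for delay-tolerant workloads.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    name: str
    site: str            # "cloud" or a fog site identifier
    free_cpu: int
    rtt_ms: float        # measured round-trip time to the service's users

def place(nodes: List[Node], need_cpu: int, max_rtt_ms: float) -> Optional[Node]:
    candidates = [n for n in nodes
                  if n.free_cpu >= need_cpu and n.rtt_ms <= max_rtt_ms]
    if not candidates:
        return None  # reject or queue the request; no feasible node
    # Best fit: least spare CPU among feasible nodes, ties broken by lower RTT.
    return min(candidates, key=lambda n: (n.free_cpu, n.rtt_ms))

nodes = [Node("cloud-1", "cloud", 64, 40.0), Node("fog-a", "fog-A", 4, 5.0)]
print(place(nodes, need_cpu=2, max_rtt_ms=10.0).name)  # -> fog-a
```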

16 citations


Cites background from "Improved filter-weight algorithm fo..."

  • ...In [12] a new filtering step is proposed, that takes into account the actual load (CPU, network I/O, RAM) of the physical node....


Journal ArticleDOI
TL;DR: A high-performance Docker integration scheme based on OpenStack implements a container management service called Yun, which not only achieves the integration of OpenStack and Docker but also exhibits high performance in terms of deployment efficiency, container throughput, and the container's system, while also achieving load balancing.
Abstract: As an emerging technology in cloud computing, Docker is becoming increasingly popular due to its high speed, high efficiency, and portability. The integration of Docker with OpenStack has been a hot topic in research and industrial areas, e.g., as an emulation platform for evaluating cyberspace security technologies. This paper introduces a high-performance Docker integration scheme based on OpenStack that implements a container management service called Yun. Yun interacts with OpenStack's services and manages the lifecycle of containers through the Docker Engine to integrate OpenStack and Docker. Yun improves container deployment, throughput, and overall system performance by optimizing the message transmission architecture between internal components, the underlying network data transmission architecture between containers, and the scheduling methods. Based on the Docker Engine API, Yun provides users with interfaces for CPU, memory, and disk resource limits to satisfy precise resource constraints. Regarding scheduling, Yun introduces a new NUMA-aware and resource-utilization-aware scheduling model to improve the performance of containers under resource competition and to balance the load of computing resources. Simultaneously, Yun decouples itself from OpenStack versions by isolating its own running environment from that of OpenStack to achieve better compatibility. Experiments show that, compared to traditional methods, Yun not only achieves the integration of OpenStack and Docker but also exhibits high performance in terms of deployment efficiency, container throughput, and the container's system, while also achieving load balancing.
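The sketch below illustrates, under stated assumptions, what a NUMA-aware, utilization-aware score in the spirit of the scheduling model described for Yun could look like; the NUMA cell layout, field names, and scoring rule are invented for the example.

```python
# Illustrative NUMA-aware, utilization-aware scoring: only hosts with a NUMA
# cell that can hold the container entirely are eligible, then the least
# utilized eligible host is chosen to balance load.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class NumaCell:
    free_cpus: int
    free_mem_mb: int

@dataclass
class Host:
    name: str
    cells: List[NumaCell]
    cpu_util: float   # 0.0-1.0 across the whole host

def best_cell(host: Host, cpus: int, mem_mb: int) -> Optional[int]:
    """Index of a NUMA cell that can hold the container entirely, if any."""
    for i, cell in enumerate(host.cells):
        if cell.free_cpus >= cpus and cell.free_mem_mb >= mem_mb:
            return i
    return None

def score(host: Host, cpus: int, mem_mb: int) -> float:
    if best_cell(host, cpus, mem_mb) is None:
        return float("-inf")      # would force cross-NUMA placement; skip host
    return 1.0 - host.cpu_util    # otherwise balance load across hosts

def schedule(hosts: List[Host], cpus: int, mem_mb: int) -> Optional[Host]:
    best = max(hosts, key=lambda h: score(h, cpus, mem_mb), default=None)
    return best if best and score(best, cpus, mem_mb) > float("-inf") else None
```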

5 citations


Cites background from "Improved filter-weight algorithm fo..."

  • ...There have been many studies, such as [32] and [42], on improving the scheduling model; however, Nova manages virtual machines by default instead of containers....


References
Proceedings ArticleDOI
16 Jul 2010
TL;DR: This paper investigates the possibility of allocating Virtual Machines (VMs) in a flexible way to permit the maximum usage of physical resources and uses an Improved Genetic Algorithm (IGA) for the automated scheduling policy.
Abstract: Based on in-depth research on open-source Infrastructure as a Service (IaaS) cloud systems, we propose an optimized scheduling algorithm to achieve optimal or sub-optimal solutions to cloud scheduling problems. In this paper, we investigate the possibility of allocating Virtual Machines (VMs) in a flexible way to permit the maximum usage of physical resources. We use an Improved Genetic Algorithm (IGA) for the automated scheduling policy. The IGA uses the shortest genes and introduces the idea of the dividend policy from economics to select an optimal or suboptimal allocation for the VM requests. The simulation experiments indicate that our dynamic scheduling policy performs much better than those of Eucalyptus, OpenNebula, Nimbus, and other IaaS clouds. The tests illustrate that the speed of the IGA is almost twice that of the traditional GA scheduling method in a Grid environment, and that the resource utilization rate is consistently higher than that of the open-source IaaS cloud systems.
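As a generic illustration of GA-based VM scheduling (the IGA's "shortest genes" encoding and dividend-policy selection are not reproduced here), a minimal sketch under these simplifying assumptions might look like this:

```python
# Simple GA sketch: a chromosome maps each VM request (its CPU demand) to a
# host; fitness is the average utilization of the hosts actually used.
import random

def utilization(assign, vms, caps):
    used = [0] * len(caps)
    for vm, h in zip(vms, assign):
        used[h] += vm
    if any(u > c for u, c in zip(used, caps)):
        return 0.0                         # infeasible chromosome
    active = [(u, c) for u, c in zip(used, caps) if u > 0]
    return sum(u / c for u, c in active) / len(active)

def ga(vms, caps, pop=30, gens=100, pmut=0.1):
    popn = [[random.randrange(len(caps)) for _ in vms] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda a: utilization(a, vms, caps), reverse=True)
        elite = popn[: pop // 2]           # keep the fitter half
        children = []
        while len(children) < pop - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(vms))
            child = a[:cut] + b[cut:]      # one-point crossover
            if random.random() < pmut:     # mutation: move one VM elsewhere
                child[random.randrange(len(vms))] = random.randrange(len(caps))
            children.append(child)
        popn = elite + children
    return max(popn, key=lambda a: utilization(a, vms, caps))
```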

159 citations


"Improved filter-weight algorithm fo..." refers background in this paper

  • ...Therefore, nova-scheduler should be customized to factor in the current state of the entire cloud infrastructure and to apply complicated algorithms to ensure efficient usage [9]....


Proceedings ArticleDOI
24 Jun 2012
TL;DR: This paper provides a practical model of cloud placement management under a stream of requests and presents a novel technique called Backward Speculative Placement (BSP) that projects the past demand behavior of a VM to a candidate target host.
Abstract: The problem of Virtual Machine (VM) placement in a compute cloud infrastructure is well-studied in the literature. However, the majority of the existing works ignore the dynamic nature of the incoming stream of VM deployment requests that continuously arrive at the cloud provider's infrastructure. In this paper we provide a practical model of cloud placement management under a stream of requests and present a novel technique called Backward Speculative Placement (BSP) that projects the past demand behavior of a VM onto a candidate target host. We exploit the BSP technique in two algorithms: first for handling the stream of deployment requests, and second in a periodic optimization to handle the dynamic aspects of the demands. We show the benefits of our BSP technique by comparing the results over a simulation period with a strategy, produced by a generic MIP solver, of choosing an optimal placement at each time instant.
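The core "backward speculation" check can be pictured as replaying a VM's recent demand trace against a candidate host's recorded load; the sketch below assumes aligned, fixed-length traces and an illustrative acceptance rule, not the paper's exact algorithm.

```python
# Backward-speculation sketch: accept a host only if adding the VM's past
# demand to the host's past load never exceeds capacity over the window.
from typing import List, Optional, Tuple

def would_fit(host_load: List[float], vm_demand: List[float], capacity: float) -> bool:
    """Replay the VM's demand on top of the host's load, sample by sample."""
    return all(h + v <= capacity for h, v in zip(host_load, vm_demand))

def choose_host(hosts: List[Tuple[str, List[float]]],
                vm_demand: List[float], capacity: float) -> Optional[str]:
    """Among hosts that pass the backward check, pick the least-loaded one."""
    feasible = [(name, load) for name, load in hosts
                if would_fit(load, vm_demand, capacity)]
    if not feasible:
        return None
    return min(feasible, key=lambda x: sum(x[1]) / len(x[1]))[0]

hosts = [("h1", [0.6, 0.7, 0.8]), ("h2", [0.2, 0.3, 0.4])]
print(choose_host(hosts, vm_demand=[0.3, 0.3, 0.3], capacity=1.0))  # -> h2
```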

121 citations

Proceedings ArticleDOI
23 May 2009
TL;DR: The architecture of the cloud, the services offered by the cloud to support optimization and the methodology used by developers to enable runtime optimization of the clouds are shown.
Abstract: This paper presents a method for achieving optimization in clouds by using performance models in the development, deployment and operations of the applications running in the cloud. We show the architecture of the cloud, the services offered by the cloud to support optimization and the methodology used by developers to enable runtime optimization of the clouds. An optimization algorithm is presented which accommodates different goals, different scopes and timescales of optimization actions, and different control algorithms. The optimization here maximizes profits in the cloud constrained by QoS and SLAs across a large variety of workloads.

118 citations

Proceedings ArticleDOI
24 Jun 2012
TL;DR: Some initial findings from providing a testbed on which comparisons between IaaS frameworks can be conducted are outlined, and work on making access to the various infrastructures on FutureGrid easier is presented.
Abstract: Today, many cloud Infrastructure as a Service (IaaS) frameworks exist. Users, developers, and administrators have to decide which environment is best suited for them. Unfortunately, the comparison of such frameworks is difficult because users either do not have access to all of them or are comparing the performance of such systems on different resources, which makes it difficult to obtain objective comparisons. Hence, the community benefits from the availability of a testbed on which comparisons between the IaaS frameworks can be conducted. FutureGrid aims to offer a number of IaaS frameworks, including Nimbus, Eucalyptus, OpenStack, and OpenNebula. One of the important features that FutureGrid provides is not only the ability to compare IaaS frameworks with each other, but also to compare them against bare-metal and traditional high-performance computing services. In this paper, we outline some of our initial findings from providing such a testbed. As one of our conclusions, we also present our work on making access to the various infrastructures on FutureGrid easier.

109 citations


"Improved filter-weight algorithm fo..." refers background in this paper

  • ...For resource allocation, Eucalyptus uses … There are many scheduling algorithms proposed which meet different goals such as time-saving, load balancing, performance enhancements, etc....


Proceedings ArticleDOI
22 Oct 2012
TL;DR: This work reports on the design, implementation, and evaluation of a resource management system that builds upon OpenStack, an open-source cloud platform for private and public clouds; the implementation supports an Infrastructure-as-a-Service (IaaS) cloud and currently provides allocation of computational resources.
Abstract: We report on design, implementation and evaluation of a resource management system that builds upon OpenStack, an open-source cloud platform for private and public clouds. Our implementation supports an Infrastructure-as-a-Service (IaaS) cloud and currently provides allocation for computational resources in support of both interactive and computationally intensive applications. The design supports an extensible set of management objectives between which the system can switch at runtime. We demonstrate through examples how management objectives related to load-balancing and energy efficiency can be mapped onto the controllers of the resource allocation subsystem, which attempts to achieve an activated management objective at all times. The design is extensible in the sense that additional objectives can be introduced by providing instantiations for generic functions in the controllers. Our implementation monitors the fulfillment of the relevant management metrics in real time. Testbed evaluation demonstrates the effectiveness of our approach in a dynamic environment. It further illustrates the trade-off between closely meeting a specific management objective and the associated cost of VM live-migration.
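A hedged sketch of the runtime-switchable management objective idea follows; the PlacementController class, objective names, and scoring rules are invented for illustration and are not the system's actual controller interfaces.

```python
# Pluggable, runtime-switchable placement objectives: a controller scores
# hosts through whichever objective is currently active.
from typing import Callable, Dict

# An objective maps a host's utilization (0.0-1.0) to a score; higher is better.
Objective = Callable[[float], float]

OBJECTIVES: Dict[str, Objective] = {
    # Load balancing: prefer the least utilized host.
    "load_balance": lambda util: 1.0 - util,
    # Energy efficiency: consolidate onto already-busy hosts (but not full ones).
    "energy": lambda util: util if util < 0.9 else -1.0,
}

class PlacementController:
    def __init__(self, objective: str = "load_balance"):
        self.objective = OBJECTIVES[objective]

    def set_objective(self, name: str) -> None:
        self.objective = OBJECTIVES[name]   # switch objectives at runtime

    def choose(self, hosts: Dict[str, float]) -> str:
        return max(hosts, key=lambda h: self.objective(hosts[h]))

ctrl = PlacementController("energy")
print(ctrl.choose({"h1": 0.2, "h2": 0.7}))  # consolidates onto h2
ctrl.set_objective("load_balance")
print(ctrl.choose({"h1": 0.2, "h2": 0.7}))  # spreads to h1
```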

98 citations