Journal ArticleDOI

Proactive dynamic virtual-machine consolidation for energy conservation in cloud data centres

01 Dec 2018 - Vol. 7, Iss. 1, p. 10
TL;DR: This paper provides an in-depth survey of the most recent techniques and algorithms used in proactive dynamic VM consolidation, with a focus on energy consumption, and presents a general framework that can be applied across the multiple phases of a complete consolidation process.
Abstract: Data center power consumption is among the largest commodity expenditures for many organizations. Reduction of power used in cloud data centres with heterogeneous physical resources can be achieved through Virtual-Machine (VM) consolidation which reduces the number of Physical Machines (PMs) used, subject to Quality of Service (QoS) constraints. This paper provides an in-depth survey of the most recent techniques and algorithms used in proactive dynamic VM consolidation focused on energy consumption. We present a general framework that can be used on multiple phases of a complete consolidation process.
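To make the survey's subject concrete, here is a minimal sketch of one proactive consolidation pass. Everything in it (the Host/VM structures, the utilisation thresholds, the first-fit target selection, and acting on a caller-supplied forecast) is an illustrative assumption, not the framework the paper presents.

from dataclasses import dataclass, field

OVERLOAD, UNDERLOAD = 0.9, 0.3           # hypothetical utilisation thresholds

@dataclass
class VM:
    cpu: float                           # demand as a fraction of host capacity

@dataclass
class Host:
    vms: list = field(default_factory=list)
    def util(self):
        return sum(vm.cpu for vm in self.vms)

def pick_target(hosts, vm, exclude):
    """First host other than `exclude` that stays below OVERLOAD."""
    for h in hosts:
        if h is not exclude and h.util() + vm.cpu <= OVERLOAD:
            return h
    return None

def consolidate(hosts, forecast):
    """Proactive pass: act on predicted utilisation, not the current one."""
    for host in hosts:
        u = forecast(host)
        if u > OVERLOAD and host.vms:            # relieve a predicted hotspot
            vm = min(host.vms, key=lambda v: v.cpu)
            dst = pick_target(hosts, vm, host)
            if dst:
                host.vms.remove(vm); dst.vms.append(vm)
        elif u < UNDERLOAD and host.vms:         # drain a lightly loaded host
            for vm in list(host.vms):
                dst = pick_target(hosts, vm, host)
                if dst:
                    host.vms.remove(vm); dst.vms.append(vm)

# e.g. consolidate(hosts, forecast=lambda h: h.util())  # naive "forecast"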


Citations
Journal ArticleDOI
TL;DR: This work proposes VM placement algorithms based on both bin-packing heuristics and servers' power efficiency, and introduces a new bin-packing heuristic called Medium-Fit (MF) to reduce SLA violations.
Abstract: One of the main challenges in cloud computing is the enormous amount of energy consumed in data centers. Much research has been conducted on Virtual Machine (VM) consolidation to optimize energy consumption. Among the proposed VM consolidation frameworks, OpenStack Neat is notable for its practicality. OpenStack Neat is an open-source consolidation framework that can seamlessly integrate with OpenStack, one of the most common and widely used open-source cloud management tools. The framework has components for deciding when to migrate VMs and for selecting suitable hosts for the VMs (VM placement). The VM placement algorithm of OpenStack Neat is called Modified Best-Fit Decreasing (MBFD). MBFD is based on a heuristic that handles only minimizing the number of servers. The heuristic is not only less energy efficient but also increases Service Level Agreement (SLA) violations and consequently causes more VM migrations. To improve energy efficiency, we propose VM placement algorithms based on both bin-packing heuristics and servers' power efficiency. In addition, we introduce a new bin-packing heuristic called Medium-Fit (MF) to reduce SLA violations. To evaluate the performance of the proposed algorithms, we conducted experiments using CloudSim on three cloud data-center scenarios: homogeneous, heterogeneous and default. Workloads that run in the data centers are generated from traces of the PlanetLab and Bitbrains clouds. The results of the experiments show up to 67% improvement in energy consumption and up to 78% and 46% reductions in SLA violations and the number of VM migrations, respectively. Moreover, all improvements are statistically significant at a significance level of 0.01.
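The abstract does not define Medium-Fit here, but one plausible reading is a bin-packing rule that places each VM on the host whose utilisation after placement lands nearest a medium target, avoiding both near-full hosts (SLA risk) and near-empty ones (energy waste). In the sketch below the target value, capacity model and tie-breaking are assumptions, not the authors' algorithm.

TARGET = 0.7  # hypothetical "medium" utilisation target

def medium_fit(hosts, vm_cpu, capacity=1.0):
    """Return the index of the chosen host, or None if the VM fits nowhere."""
    best, best_gap = None, float("inf")
    for i, used in enumerate(hosts):          # hosts: list of current CPU usage
        after = used + vm_cpu
        if after > capacity:                  # skip hosts that would overflow
            continue
        gap = abs(after - TARGET * capacity)  # distance from the medium target
        if gap < best_gap:
            best, best_gap = i, gap
    return best

print(medium_fit([0.55, 0.20, 0.85], 0.25))   # -> 0 (0.80 is nearest 0.70)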

49 citations

Journal ArticleDOI
TL;DR: A neuro-fuzzy approach for the classification and prediction of user behaviour is proposed and the scheme is found to be promising in terms of classification as well as prediction accuracy.
Abstract: Big data and cloud computing technology appeared on the scene as new trends due to the rapid growth of social media usage over the last decade. Big data represent the immense volume of complex data that show more details about behaviours, activities, and events that occur around the world. As a result, big data analytics needs to access diverse types of resources within a decreased response time to produce accurate and stable business experimentation that could help make brilliant decisions for organizations in real-time. These developments have spurred a revolutionary transformation in research, inventions, and business marketing. User behaviour analysis for classification and prediction is one of the hottest topics in data science. This type of analysis is performed for several purposes, such as finding users’ interests about a product (for marketing, e-commerce, etc.) or toward an event (elections, championships, etc.) and observing suspicious activities (security and privacy) based on their traits over the Internet. In this paper, a neuro-fuzzy approach for the classification and prediction of user behaviour is proposed. A dataset, composed of users’ temporal logs containing three types of information, namely, local machine, network and web usage logs, is targeted. To complement the analysis, each user’s 360-degree feedback is also utilized. Various rules have been implemented to address the company’s policy for determining the precise behaviour of a user, which could be helpful in managerial decisions. For prediction, a Gaussian Radial Basis Function Neural Network (GRBF-NN) is trained based on the example set generated by a Fuzzy Rule Based System (FRBS) and the 360-degree feedback of the user. The results are obtained and compared with other state-of-the-art schemes in the literature, and the scheme is found to be promising in terms of classification as well as prediction accuracy.
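As a sketch of the GRBF-NN component, here is a minimal Gaussian RBF network in numpy. The choice of centres, the width sigma, and the least-squares output layer are generic textbook choices, not the authors' training procedure; the toy labels stand in for the FRBS-generated example set.

import numpy as np

def grbf_fit(X, y, centres, sigma=1.0):
    """Fit linear output weights over Gaussian RBF hidden activations."""
    d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
    Phi = np.exp(-(d ** 2) / (2 * sigma ** 2))      # hidden-layer outputs
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # least-squares read-out
    return w

def grbf_predict(X, centres, w, sigma=1.0):
    d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
    return np.exp(-(d ** 2) / (2 * sigma ** 2)) @ w

rng = np.random.default_rng(0)
X = rng.random((100, 3))                       # e.g. features from usage logs
y = (X.sum(axis=1) > 1.5).astype(float)        # stand-in for FRBS-derived labels
C = X[rng.choice(len(X), 10, replace=False)]   # centres sampled from the data
w = grbf_fit(X, y, C)
print(grbf_predict(X[:5], C, w))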

38 citations


Cites methods from "Proactive dynamic virtual-machine consolidation for energy conservation in cloud data centres"

  • ...Dynamic consolidation frameworks typically consist of many overlapping domains [5], which have been divided into five main subsystems; the workload-prediction subsystem uses a clustering process, VM and user-behaviour estimation, a prediction window size, and a forecasting process....


Journal ArticleDOI
TL;DR: In this article, the authors present an overview of virtualized data centers and consolidation solutions from the literature, together with a brief thematic taxonomy and an illustration of some of these solutions.

34 citations

Journal ArticleDOI
TL;DR: This paper proposes an energy-aware VM consolidation algorithm that minimizes SLA violations (SLAVs) and develops fine-tuned machine-learning prediction models for individual VMs to predict the best time to trigger migrations from hosts.

26 citations

Journal ArticleDOI
01 Oct 2021
TL;DR: This work introduces a multi-objective approach to compute optimal placement strategies considering different goals, such as the impact of hardware outages, the power required by the datacenter, and the performance perceived by users.
Abstract: The ubiquitous diffusion of cloud computing requires suitable management policies to handle the workload while guaranteeing quality constraints and mitigating costs. The typical trade-off is between the power used and adherence to a service-level metric subscribed to by customers. To this aim, one idea is to use an optimization-based placement mechanism to select the servers on which to deploy virtual machines. Unfortunately, high packing factors can lead to performance and security issues, e.g., virtual machines can compete for hardware resources or collude to leak data. Therefore, we introduce a multi-objective approach to compute optimal placement strategies considering different goals, such as the impact of hardware outages, the power required by the datacenter, and the performance perceived by users. Placement strategies are found by using a deep reinforcement learning framework to select the best placement heuristic for each virtual machine composing the workload. Results indicate that our method outperforms the bin-packing heuristics widely used in the literature on both synthetic and real workloads.
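For a feel of the multi-objective trade-off, the toy scalarisation below scores each candidate host on added power, failure blast radius, and expected contention, and picks the lowest weighted sum. The weights and proxy models are invented, and the paper selects among placement heuristics with deep reinforcement learning rather than using a fixed weighted sum.

def score(host, vm, w_power=0.4, w_risk=0.3, w_perf=0.3):
    power = host["util"] + vm                  # proxy: load drives power draw
    risk = host["failure_rate"] * (len(host["vms"]) + 1)   # outage impact
    perf = max(0.0, host["util"] + vm - 0.8)   # contention above 80% hurts
    return w_power * power + w_risk * risk + w_perf * perf

def place(hosts, vm):
    """Greedy scalarised placement: lowest combined objective wins."""
    feasible = [h for h in hosts if h["util"] + vm <= 1.0]
    if not feasible:
        return None
    best = min(feasible, key=lambda h: score(h, vm))
    best["util"] += vm
    best["vms"].append(vm)
    return best

hosts = [{"util": 0.5, "failure_rate": 0.01, "vms": [0.5]},
         {"util": 0.1, "failure_rate": 0.05, "vms": [0.1]}]
place(hosts, 0.3)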

22 citations

References
Book
01 Mar 2004
TL;DR: A comprehensive introduction to convex optimization; the focus is on recognizing convex optimization problems and then finding the most appropriate technique for solving them.
Abstract: Convex optimization problems arise frequently in many different fields. A comprehensive introduction to the subject, this book shows in detail how such problems can be solved numerically with great efficiency. The focus is on recognizing convex optimization problems and then finding the most appropriate technique for solving them. The text contains many worked examples and homework exercises and will appeal to students, researchers and practitioners in fields such as engineering, computer science, mathematics, statistics, finance, and economics.
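To illustrate the "recognize, then solve" workflow the book teaches, here is a toy convex program modelled with the cvxpy library; the CPU-allocation scenario is invented.

import cvxpy as cp

x = cp.Variable(3, nonneg=True)             # CPU shares for three VMs
objective = cp.Maximize(cp.sum(cp.log(x)))  # concave utility -> convex problem
constraints = [cp.sum(x) <= 1.0]            # shared capacity budget
cp.Problem(objective, constraints).solve()
print(x.value)                              # -> roughly [1/3, 1/3, 1/3]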

33,341 citations

Book
19 Aug 2009
TL;DR: This book covers stationary time series and ARMA processes, estimation and forecasting with ARIMA models, inference for the spectrum of a stationary process, multivariate time series, and state-space models with the Kalman recursions.
Abstract: Contents: 1. Stationary Time Series. 2. Hilbert Spaces. 3. Stationary ARMA Processes. 4. The Spectral Representation of a Stationary Process. 5. Prediction of Stationary Processes. 6. Asymptotic Theory. 7. Estimation of the Mean and the Autocovariance Function. 8. Estimation for ARMA Models. 9. Model Building and Forecasting with ARIMA Processes. 10. Inference for the Spectrum of a Stationary Process. 11. Multivariate Time Series. 12. State-Space Models and the Kalman Recursions. 13. Further Topics. Appendix: Data Sets.
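Proactive consolidation leans on exactly the ARIMA machinery this book develops. A minimal forecasting sketch with statsmodels follows; the synthetic CPU trace and the order (2, 0, 1) are arbitrary illustrative choices.

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
t = np.arange(288)                        # one day of 5-minute samples
cpu = 0.5 + 0.2 * np.sin(2 * np.pi * t / 288) + 0.05 * rng.standard_normal(288)

model = ARIMA(cpu, order=(2, 0, 1)).fit() # fit on the observed window
print(model.forecast(steps=12))           # next hour of predicted load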

5,260 citations

Journal ArticleDOI
17 Aug 2008
TL;DR: This paper shows how to leverage largely commodity Ethernet switches to support the full aggregate bandwidth of clusters consisting of tens of thousands of elements and argues that appropriately architected and interconnected commodity switches may deliver more performance at less cost than available from today's higher-end solutions.
Abstract: Today's data centers may contain tens of thousands of computers with significant aggregate bandwidth requirements. The network architecture typically consists of a tree of routing and switching elements with progressively more specialized and expensive equipment moving up the network hierarchy. Unfortunately, even when deploying the highest-end IP switches/routers, the resulting topologies may only support 50% of the aggregate bandwidth available at the edge of the network, while still incurring tremendous cost. Non-uniform bandwidth among data center nodes complicates application design and limits overall system performance. In this paper, we show how to leverage largely commodity Ethernet switches to support the full aggregate bandwidth of clusters consisting of tens of thousands of elements. Similar to how clusters of commodity computers have largely replaced more specialized SMPs and MPPs, we argue that appropriately architected and interconnected commodity switches may deliver more performance at less cost than available from today's higher-end solutions. Our approach requires no modifications to the end-host network interface, operating system, or applications; critically, it is fully backward compatible with Ethernet, IP, and TCP.
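The arithmetic behind the "tens of thousands of elements" claim is easy to reproduce: a fat-tree built from identical k-port switches delivers full bisection bandwidth to k^3/4 hosts.

def fat_tree(k):
    """Hosts and switches in a k-ary fat-tree of k-port switches."""
    assert k % 2 == 0, "port count must be even"
    edge = agg = k * (k // 2)          # k pods, k/2 switches in each layer
    core = (k // 2) ** 2
    return {"hosts": k ** 3 // 4, "switches": edge + agg + core}

print(fat_tree(48))   # 48-port switches -> 27648 hosts, 2880 switches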

3,549 citations


"Proactive dynamic virtual-machine c..." refers background in this paper

  • ...The advantage of such an approach is better load balancing and less proneness to bottlenecks [10, 141]....


Proceedings ArticleDOI
02 May 2005
TL;DR: The design options for migrating OSes running services with liveness constraints are considered, the concept of writable working set is introduced, and the design, implementation and evaluation of high-performance OS migration built on top of the Xen VMM are presented.
Abstract: Migrating operating system instances across distinct physical hosts is a useful tool for administrators of data centers and clusters: It allows a clean separation between hardware and software, and facilitates fault management, load balancing, and low-level system maintenance. By carrying out the majority of migration while OSes continue to run, we achieve impressive performance with minimal service downtimes; we demonstrate the migration of entire OS instances on a commodity cluster, recording service downtimes as low as 60ms. We show that our performance is sufficient to make live migration a practical tool even for servers running interactive loads. In this paper we consider the design options for migrating OSes running services with liveness constraints, focusing on data center and cluster environments. We introduce and analyze the concept of writable working set, and present the design, implementation and evaluation of high-performance OS migration built on top of the Xen VMM.
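A back-of-envelope model shows why pre-copy downtimes can be this small: each round re-sends only the pages dirtied during the previous round, so the final stop-and-copy moves very little. The rates, sizes and stop condition below are invented for illustration.

def precopy(mem_mb, dirty_rate, link_rate, rounds=30, stop_mb=50):
    """Return (total_sent_mb, downtime_s) for a simple pre-copy schedule."""
    to_send = float(mem_mb)
    sent = 0.0
    for _ in range(rounds):
        if to_send <= stop_mb:                 # writable working set is small
            break                              # enough: stop and copy it
        t = to_send / link_rate                # time to push this round
        sent += to_send
        to_send = min(mem_mb, dirty_rate * t)  # pages dirtied meanwhile
    return sent + to_send, to_send / link_rate

print(precopy(mem_mb=4096, dirty_rate=40, link_rate=1000))  # ~6.6 ms downtime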

3,186 citations


"Proactive dynamic virtual-machine c..." refers background or methods in this paper

  • ...In other words, the NP-hard bin-packing problem is, in principle, based on locally best decisions to pack a series of VMs of specified sizes into the least possible number of PMs [40]....


  • ...Significant research has been done to improve bin-packing algorithms, like those used by CloudSim [18], Chowdhury et al [40] and Farahnakian et al [60]....


  • ...The basic K-means (described in Algorithm 1) works as follows [167]: K-means has been used by Dabbagh et al [44] and Chowdhury et al [40] to create a set of clusters that group all types of VM requests.... (A minimal K-means sketch follows these excerpts.)


  • ...Chowdhury et al [40] and Farahnakian et al [60]....


  • ...Bin-packing [40] | CPU and memory | Redesign | CloudSim...

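Here is the minimal K-means sketch promised above, of the kind used to group VM requests; the cluster count and the toy (CPU, memory) request features are illustrative assumptions.

import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: assign to nearest centre, then recentre."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None] - centres[None]) ** 2).sum(-1).argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centres[j] = X[labels == j].mean(axis=0)
    return labels, centres

X = np.random.default_rng(1).random((200, 2))   # (CPU, memory) demand pairs
labels, centres = kmeans(X, k=4)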

Journal ArticleDOI
TL;DR: An architectural framework and principles for energy-efficient Cloud computing are defined, and the proposed energy-aware allocation heuristics provision data center resources to client applications in a way that improves the energy efficiency of the data center while delivering the negotiated Quality of Service (QoS).
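Much of this line of work rests on a linear server power model (idle draw plus a utilisation-proportional term), which makes the consolidation argument easy to see in numbers. The wattages below are placeholders, not figures from the paper.

def power_w(util, p_idle=175.0, p_max=250.0):
    """Estimated draw of one server at CPU utilisation util in [0, 1]."""
    return p_idle + (p_max - p_idle) * util

# Two half-loaded servers draw more than one fully loaded one:
print(2 * power_w(0.5))   # 425.0 W across two hosts
print(power_w(1.0))       # 250.0 W on one host, with the other powered off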

2,511 citations


"Proactive dynamic virtual-machine c..." refers background or methods in this paper

  • ...Dynamic provisioning-based energy conservation can represent one of the most efficient methods to improve the utilization of resources and reduce energy [1, 14, 19]....


  • ...VM selection is the process of selecting one or more VMs from the full set of VMs allocated to the server, plus predicted future new VMs, which must be allocated or reallocated to other servers [19]....
