
Showing papers on "Service-level agreement" published in 2015


Journal ArticleDOI
Paul Manuel1
TL;DR: The paper describes how a service level agreement is prepared by combining the quality of service requirements of the user with the capabilities of the cloud resource provider, and demonstrates that the proposed model performs better than the first-in-first-out model and similar trust models.
Abstract: Trust plays an important role in commercial cloud environments. It is one of the biggest challenges of cloud technology. Trust enables users to select the best resources in a heterogeneous cloud infrastructure. We introduce a novel trust model based on the past credentials and present capabilities of a cloud resource provider. The trust value is calculated from four parameters: availability, reliability, turnaround efficiency, and data integrity. A trust management system implementing this trust model is proposed. The paper describes how a service level agreement is prepared by combining the quality of service requirements of the user with the capabilities of the cloud resource provider. We also demonstrate that our proposed model performs better than the first-in-first-out model and similar trust models.
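The abstract names the four parameters but not the aggregation rule. A common way to combine such normalized scores is a weighted sum; the sketch below is a minimal illustration, and the equal weights are purely hypothetical, not taken from the paper.

```python
def trust_value(availability, reliability, turnaround_eff, data_integrity,
                weights=(0.25, 0.25, 0.25, 0.25)):
    """Aggregate four normalized parameters (each in [0, 1]) into a trust score.

    The equal weights are an illustrative assumption; the paper's actual
    aggregation may differ.
    """
    params = (availability, reliability, turnaround_eff, data_integrity)
    return sum(w * p for w, p in zip(weights, params))

# A provider with perfect data integrity but weaker turnaround efficiency:
score = trust_value(0.9, 0.8, 0.7, 1.0)
```

A provider-selection step would then simply rank candidate providers by this score.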

184 citations


Journal ArticleDOI
TL;DR: This research work will help researchers find the important characteristics of autonomic resource management and will also help to select the most suitable technique for autonomicresource management in a specific application along with significant future research directions.
Abstract: As computing infrastructure expands, resource management in a large, heterogeneous, and distributed environment becomes a challenging task. In a cloud environment, with uncertainty and dispersion of resources, one encounters problems of resource allocation caused by factors such as heterogeneity, dynamism, and failures. Unfortunately, existing resource management techniques, frameworks, and mechanisms are insufficient to handle these environments, applications, and resource behaviors. To provide efficient performance of workloads and applications, the aforementioned characteristics should be addressed effectively. This research presents a broad, methodical literature analysis of autonomic resource management in the cloud in general and QoS (Quality of Service)-aware autonomic resource management specifically. The current status of autonomic resource management in cloud computing is divided into various categories. A methodical analysis of autonomic resource management in cloud computing and its techniques, as developed by various industry and academic groups, is presented. Further, a taxonomy of autonomic resource management in the cloud is given. This research work will help researchers find the important characteristics of autonomic resource management and will also help them select the most suitable technique for autonomic resource management in a specific application, along with significant future research directions.

177 citations


Journal ArticleDOI
TL;DR: The issues in cloud computing are described using the phases of traditional digital forensics as the base and for each phase of the digital forensic process, a list of challenges and analysis of their possible solutions are included.

162 citations


Journal ArticleDOI
TL;DR: In this article, the authors identify and discuss the major research dimensions and design issues related to engineering cloud monitoring tools and further discuss how the aforementioned research dimensions are handled by current academic research as well as by commercial monitoring tools.
Abstract: Cloud monitoring involves dynamically tracking the Quality of Service (QoS) parameters related to virtualized resources (e.g., VMs, storage, network, appliances), the physical resources they share, the applications running on them, and the data hosted on them. Configuring applications and resources in a cloud computing environment is quite challenging given the large number of heterogeneous cloud resources. Further, at any given point in time the cloud resource configuration (number of VMs, types of VMs, number of appliance instances, etc.) may need to change to meet application QoS requirements under uncertainties (resource failure, resource overload, workload spikes, etc.). Hence, cloud monitoring tools can assist cloud providers or application developers in: (i) keeping their resources and applications operating at peak efficiency, (ii) detecting variations in resource and application performance, (iii) accounting for service level agreement violations of certain QoS parameters, and (iv) tracking the leave and join operations of cloud resources due to failures and other dynamic configuration changes. In this paper, we identify and discuss the major research dimensions and design issues related to engineering cloud monitoring tools. We further discuss how the aforementioned research dimensions and design issues are handled by current academic research as well as by commercial monitoring tools.

150 citations


Journal ArticleDOI
TL;DR: A double resource renting scheme is designed in which short-term renting and long-term renting are combined to address the existing issues; the results show that the scheme can not only guarantee the service quality of all requests but also obtain more profit than the single long-term renting scheme.
Abstract: As an effective and efficient way to provide computing resources and services to customers on demand, cloud computing has become more and more popular. From cloud service providers’ perspective, profit is one of the most important considerations, and it is mainly determined by the configuration of a cloud service platform under given market demand. However, a single long-term renting scheme is usually adopted to configure a cloud platform, which cannot guarantee the service quality and leads to serious resource waste. In this paper, we first design a double resource renting scheme in which short-term renting and long-term renting are combined to address these issues. This double renting scheme can effectively guarantee the quality of service of all requests and greatly reduce resource waste. Second, the service system is modeled as an M/M/m+D queuing model, and the performance indicators that affect the profit of our double renting scheme are analyzed, e.g., the average charge and the ratio of requests that need temporary servers. Third, a profit maximization problem is formulated for the double renting scheme, and the optimal configuration of a cloud platform is obtained by solving it. Finally, a series of calculations are conducted to compare the profit of our proposed scheme with that of the single renting scheme. The results show that our scheme can not only guarantee the service quality of all requests but also obtain more profit than the single renting scheme.
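The core economic idea, before any queuing analysis, can be sketched in a few lines: keep a fixed pool of long-term servers and rent short-term servers only when demand spills over. The simplification that each server handles one request per period is an assumption for illustration; the paper's M/M/m+D model is far more detailed.

```python
def double_renting_profit(demand, m_long, price_per_request,
                          long_cost, short_cost):
    """Profit when m_long servers are rented long-term for every period and
    demand exceeding them is served by short-term servers rented on the spot.

    Illustrative simplification: one server serves one request per period.
    """
    revenue = sum(d * price_per_request for d in demand)   # every request served
    long_rent = len(demand) * m_long * long_cost           # paid in all periods
    short_rent = sum(max(0, d - m_long) * short_cost for d in demand)
    return revenue - long_rent - short_rent
```

Sweeping `m_long` over a demand trace and keeping the maximizer mirrors, in miniature, the paper's profit maximization step.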

133 citations


Journal ArticleDOI
TL;DR: A multi-objective task scheduling algorithm for mapping tasks to VMs is proposed in order to improve the throughput of the datacenter and reduce cost without violating the SLA (Service Level Agreement) for an application in a cloud SaaS environment.

131 citations


Journal ArticleDOI
TL;DR: This work uses a specific kind of stochastic Petri nets that can capture arbitrary duration distributions to predict the remaining duration of business processes in an online setting and estimates the risk of breaching a temporal deadline.

108 citations


Journal ArticleDOI
01 Jan 2015
TL;DR: This work proposes a framework, SelCSP, which combines trustworthiness and competence to estimate risk of interaction to support customers in reliably identifying ideal service provider.
Abstract: With rapid technological advancements, the cloud marketplace has witnessed frequent emergence of new service providers with similar offerings. However, service level agreements (SLAs), which document guaranteed quality of service levels, have not been found to be consistent among providers, even though they offer services with similar functionality. In service outsourcing environments like the cloud, the quality of service levels is of prime importance to customers, as they use third-party cloud services to store and process their clients’ data. If loss of data occurs due to an outage, the customer’s business gets affected. Therefore, the major challenge for a customer is to select an appropriate service provider to ensure guaranteed service quality. To support customers in reliably identifying an ideal service provider, this work proposes a framework, SelCSP, which combines trustworthiness and competence to estimate the risk of interaction. Trustworthiness is computed from personal experiences gained through direct interactions or from feedback related to vendors’ reputations. Competence is assessed based on the transparency of the provider’s SLA guarantees. A case study is presented to demonstrate the application of our approach. Experimental results validate the practicability of the proposed estimation mechanisms.

108 citations


Proceedings ArticleDOI
27 Jun 2015
TL;DR: This paper investigates the effectiveness of VM and host resource utilization predictions in the VM consolidation task using real workload traces and shows that the approach provides substantial improvement over other heuristic algorithms in reducing energy consumption, number of VM migrations and number of SLA violations.
Abstract: Dynamic Virtual Machine (VM) consolidation is one of the most promising solutions to reduce energy consumption and improve resource utilization in data centers. Since the VM consolidation problem is strictly NP-hard, many heuristic algorithms have been proposed to tackle it. However, most existing works deal only with minimizing the number of hosts based on their current resource utilization and do not explore future resource requirements. As a result, unnecessary VM migrations are generated and the rate of Service Level Agreement (SLA) violations is increased in data centers. To address this problem, our VM consolidation method, formulated as a bin-packing problem, considers both the current and future utilization of resources. The future utilization of resources is accurately predicted using a k-nearest neighbor regression based model. In this paper, we investigate the effectiveness of VM and host resource utilization predictions in the VM consolidation task using real workload traces. The experimental results show that our approach provides substantial improvement over other heuristic algorithms in reducing energy consumption, the number of VM migrations, and the number of SLA violations.
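A k-nearest neighbor regression forecast of the kind the abstract describes can be sketched without any library: match the most recent utilization window against past windows and average what followed the closest ones. The window size and k below are illustrative, not the paper's settings.

```python
def knn_forecast(history, window=3, k=2):
    """Forecast the next utilization sample with k-nearest-neighbor regression:
    find the k historical windows closest (Euclidean distance) to the most
    recent one and average the values that followed them."""
    recent = history[-window:]
    neighbours = []
    for i in range(len(history) - window):   # windows with a known successor
        past = history[i:i + window]
        dist = sum((a - b) ** 2 for a, b in zip(past, recent)) ** 0.5
        neighbours.append((dist, history[i + window]))
    neighbours.sort(key=lambda pair: pair[0])
    top = neighbours[:k]
    return sum(value for _, value in top) / len(top)
```

A consolidation loop would call this per host and feed the forecast, alongside current utilization, into the bin-packing placement decision.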

94 citations


Proceedings ArticleDOI
04 May 2015
TL;DR: This work classifies an extensive up-to-date survey of the most relevant VMP literature proposing a novel taxonomy in order to identify research opportunities and define a general vision on this research area.
Abstract: Cloud computing data centers dynamically provide millions of virtual machines (VMs) in actual cloud markets. In this context, Virtual Machine Placement (VMP) is one of the most challenging problems in cloud infrastructure management, considering the large number of possible optimization criteria and different formulations that could be studied. The VMP literature includes relevant research topics such as energy efficiency, Service Level Agreements (SLA), Quality of Service (QoS), cloud service pricing schemes, and carbon dioxide emissions, all of them with high economic and ecological impact. This work classifies an extensive, up-to-date survey of the most relevant VMP literature, proposing a novel taxonomy in order to identify research opportunities and define a general vision of this research area.

94 citations


Proceedings ArticleDOI
11 Dec 2015
TL;DR: This paper focuses on improving the energy efficiency of servers for this new deployment model by proposing a framework that consolidates containers on virtual machines and comparing a number of algorithms against metrics such as energy consumption, Service Level Agreement violations, average container migrations rate, and average number of created virtual machines.
Abstract: One of the major challenges that cloud providers face is minimizing the power consumption of their data centers. To this point, the majority of current research focuses on energy-efficient management of resources in the Infrastructure as a Service model through virtual machine consolidation. However, containers are increasingly gaining popularity and are set to become a major deployment model in cloud environments, specifically in Platform as a Service. This paper focuses on improving the energy efficiency of servers for this new deployment model by proposing a framework that consolidates containers on virtual machines. We first formally present the container consolidation problem, and then we compare a number of algorithms and evaluate their performance against metrics such as energy consumption, Service Level Agreement violations, average container migration rate, and average number of created virtual machines. Our proposed framework and algorithms can be utilized in a private cloud to minimize energy consumption, or alternatively in a public cloud to minimize the total number of hours the virtual machines are leased.
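Consolidation problems like this one are usually cast as bin packing, for which first-fit decreasing is the textbook heuristic. The sketch below packs container demands onto as few VMs as possible; it is a generic baseline, not one of the paper's evaluated algorithms.

```python
def first_fit_decreasing(demands, vm_capacity):
    """Assign container resource demands to VMs with first-fit decreasing,
    a standard heuristic for the bin-packing view of consolidation; returns
    the number of VMs used."""
    free = []                                   # remaining capacity per VM
    for demand in sorted(demands, reverse=True):
        for i, cap in enumerate(free):
            if demand <= cap:
                free[i] = cap - demand          # fits in an existing VM
                break
        else:
            free.append(vm_capacity - demand)   # open a new VM
    return len(free)
```

Energy- or SLA-aware variants change the placement rule, not this overall loop.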

Journal ArticleDOI
TL;DR: A composition model that takes both the QoS of services and the cloud network environment into consideration, together with an approach based on a genetic algorithm for geo-distributed clouds, is proposed for service providers who want to minimize SLA violations.

Journal ArticleDOI
01 Oct 2015
TL;DR: An optimized fine-grained and fair pricing scheme is investigated that can derive an optimal price in the acceptable price range that satisfies both customers and providers simultaneously, and also finds a best-fit billing cycle to maximize social welfare.
Abstract: Although many pricing schemes for IaaS platforms have been proposed, with pay-as-you-go and subscription/spot market policies to guarantee service level agreements, customers still inevitably suffer wasteful payment because of coarse-grained pricing schemes. In this paper, we investigate an optimized fine-grained and fair pricing scheme. Two tough issues are addressed: (1) the profits of resource providers and customers often contradict each other; and (2) VM-maintenance overhead, such as startup cost, is often too large to be neglected. Not only can we derive an optimal price in the acceptable price range that satisfies both customers and providers simultaneously, but we also find a best-fit billing cycle to maximize social welfare (i.e., the sum of the cost reductions for all customers and the revenue gained by the provider). We carefully evaluate the proposed optimized fine-grained pricing scheme with two large-scale real-world production traces (one from the Grid Workload Archive and the other from a Google data center). We compare the new scheme to the classic coarse-grained hourly pricing scheme in experiments and find that customers and providers can both benefit from our new approach. The maximum social welfare can be increased by up to 72.98 and 48.15 percent with respect to the DAS-2 trace and the Google trace, respectively.
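The "wasteful payment" the abstract targets is the rounding-up to whole billing cycles; a shorter cycle shrinks it. The sketch below shows that effect in isolation. It deliberately omits the VM startup overhead that, per the paper, prevents making the cycle arbitrarily small, so it is a partial illustration only.

```python
import math

def billed_cost(usage_minutes, cycle_minutes, rate_per_minute):
    """Customers pay for whole billing cycles; the partially used final
    cycle is the wasted payment that a finer-grained cycle reduces."""
    cycles = math.ceil(usage_minutes / cycle_minutes)
    return cycles * cycle_minutes * rate_per_minute

hourly = billed_cost(90, 60, 1.0)   # coarse-grained: billed for 120 minutes
fine = billed_cost(90, 15, 1.0)     # fine-grained: billed for exactly 90 minutes
```

Adding a per-cycle startup cost to `billed_cost` would recreate the trade-off whose optimum the paper calls the best-fit billing cycle.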

Journal ArticleDOI
TL;DR: Of the two key approaches in renegotiation, namely bargaining-based negotiation and offer-generation-based negotiation, the latter is the more promising due to its ability to generate optimized multiple-offer SLA parameters within one round during renegotiation.
Abstract: Managing Service Level Agreements (SLAs) within a cloud-based system is important to maintain service continuity and improve trust, given cloud flexibility and scalability. We conduct a general review of cloud-based systems to understand how service continuity and trust are addressed in cloud SLA management. The review shows that SLA renegotiation is necessary to improve trust and maintain service continuity; however, research on SLA renegotiation is limited. Of the two key approaches in renegotiation, namely bargaining-based negotiation and offer-generation-based negotiation, the latter is the more promising due to its ability to generate optimized multiple-offer SLA parameters within one round during renegotiation.

Proceedings ArticleDOI
27 Jun 2015
TL;DR: A virtual machine consolidation algorithm with usage prediction (VMCUP) for improving the energy efficiency of cloud data centers and reduces the total migrations and the power consumption of the servers while complying with the service level agreement.
Abstract: Virtual machine consolidation aims at reducing the number of active physical servers in a data center, with the goal of reducing the total power consumption. In this context, most existing solutions rely on aggressive virtual machine migration, resulting in unnecessary overhead and energy wastage. This article presents a virtual machine consolidation algorithm with usage prediction (VMCUP) for improving the energy efficiency of cloud data centers. Our algorithm is executed during the virtual machine consolidation process to estimate the short-term future CPU utilization based on the local history of the considered servers. The joint use of current and predicted CPU utilization metrics allows a reliable characterization of overloaded and underloaded servers, thereby reducing both the load and the power consumption after consolidation. We evaluate our proposed solution through simulations on real workloads from the PlanetLab and Google Cluster Data datasets. In comparison with the state of the art, the obtained results show that consolidation with usage prediction reduces the total number of migrations and the power consumption of the servers while complying with the service level agreement.
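The "joint use of current and predicted CPU utilization" can be reduced to a small gate: only act when both signals agree, so that transient spikes do not trigger migrations. The threshold below is illustrative, not taken from the paper.

```python
def overloaded(current_util, predicted_util, threshold=0.8):
    """Flag a server as overloaded only if both its current and its predicted
    short-term CPU utilization exceed the threshold; a lone spike in either
    signal is ignored. The 0.8 threshold is a hypothetical value."""
    return current_util > threshold and predicted_util > threshold
```

The symmetric rule (both signals below a low threshold) would mark underloaded servers as candidates for being emptied and switched off.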

Proceedings ArticleDOI
08 Jun 2015
TL;DR: It is argued that the introduction of Experience Level Agreements (ELAs) as a QoE-enabled counterpart to traditional QoS-based Service Level Agreements (SLAs) would provide a key step towards being able to sell service quality to the user.
Abstract: In contrast to the rather network-centric notion of Quality of Service (QoS), the concept of Quality of Experience (QoE) takes a strongly user-centric perspective on service quality in communication networks as well as online services. However, related research on QoE has so far largely neglected the question of how to operationalize quality differentiation and provide corresponding solutions tailored to end users. In this paper, we argue that the introduction of Experience Level Agreements (ELAs) as a QoE-enabled counterpart to traditional QoS-based Service Level Agreements (SLAs) would provide a key step towards being able to sell service quality to the user. Hence, we investigate various ideas for exploiting QoE awareness to improve SLAs (ranging from internal aspects like SLOs by service providers to completely novel definitions of ELAs which are able to characterize QoE explicitly), and discuss important problems and challenges of the proposed transition as well.

Patent
24 Feb 2015
TL;DR: In this paper, a distributed intelligence agent (DIA) in a computer network performs deep packet inspection on received packets to determine packet flows, and calculates per-flow service level agreement (SLA) metrics for the packets based on timestamp values placed in the packets by respective origin devices in the computer network.
Abstract: In one embodiment, a distributed intelligence agent (DIA) in a computer network performs deep packet inspection on received packets to determine packet flows, and calculates per-flow service level agreement (SLA) metrics for the packets based on timestamp values placed in the packets by respective origin devices in the computer network. The DIA compares the SLA metrics to respective SLAs to determine whether they are met; in response to a particular SLA not being met for a particular flow, the DIA may download determined quality of service (QoS) configuration parameters to one or more visited devices along n calculated paths from a corresponding origin device for the particular flow to the DIA. In addition, in one or more embodiments, the QoS configuration parameters may be adjusted or de-configured based on whether they were successful.
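The per-flow metric check the patent describes amounts to grouping timestamped packets by flow and comparing a derived metric against the SLA bound. This toy version uses mean one-way delay; the tuple layout and the choice of metric are assumptions for illustration.

```python
from collections import defaultdict

def latency_violations(packets, max_latency):
    """packets: iterable of (flow_id, origin_timestamp, arrival_timestamp)
    tuples, with origin timestamps stamped by the sending device.
    Returns the flows whose mean one-way delay exceeds the SLA bound."""
    delays = defaultdict(list)
    for flow_id, sent, received in packets:
        delays[flow_id].append(received - sent)
    return {f for f, d in delays.items() if sum(d) / len(d) > max_latency}
```

In the patent's flow, a non-empty result set is what triggers pushing QoS configuration parameters to devices on the offending flow's path.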

Journal ArticleDOI
10 Nov 2015-PLOS ONE
TL;DR: A performance evaluation is carried out of a module for resource management in a cloud environment that handles available resources during execution time and ensures the quality of service defined in the service level agreement, satisfying both the client and the provider by ensuring the best use of resources at a fair price.
Abstract: Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price.

Patent
27 May 2015
TL;DR: In this paper, a virtualization energy-saving system in cloud computing is presented, comprising a cloud computing resource management prototype system and a backstage data processing system, in which resources are consolidated onto the fewest physical machine nodes by migrating virtual machine resources and idle resources are shut down, so that resource consumption is reduced.
Abstract: The invention provides a virtualization energy-saving system in Cloud computing, comprising a Cloud computing resource management prototype system and a backstage data processing system. The backstage data processing system comprises a physical resource pool, a virtual machine layer, a local module, a user's application layer and a global resource manager; the Cloud computing resource management prototype system comprises a physical machine management module, a virtual machine management module, a virtual machine dispatching strategy module, a monitoring module, a mirror image management module and a Web module. Resources are consolidated onto the fewest physical machine nodes by migrating the resources of the virtual machines, and idle resources are shut down, so that resource consumption is reduced. Virtualization energy conservation in Cloud computing is designed and realized in the system by using a VM (Virtual Machine) migration cost model and an SLA (Service Level Agreement) measurement model as the energy consumption estimation models of a data center.

Patent
22 Sep 2015
TL;DR: In this article, the authors proposed a system and method for reducing heat dissipated within a data center through service level agreement analysis, and resultant reprioritization of jobs to maximize energy efficiency.
Abstract: The system and method generally relate to reducing heat dissipated within a data center, and more particularly, to a system and method for reducing heat dissipated within a data center through service level agreement analysis, and resultant reprioritization of jobs to maximize energy efficiency. A computer implemented method includes performing a service level agreement (SLA) analysis for one or more currently processing or scheduled processing jobs of a data center using a processor of a computer device. Additionally, the method includes identifying one or more candidate processing jobs for a schedule modification from amongst the one or more currently processing or scheduled processing jobs using the processor of the computer device. Further, the method includes performing the schedule modification for at least one of the one or more candidate processing jobs using the processor of the computer device.

Journal ArticleDOI
TL;DR: SmartSLA, a cost-sensitive virtualized resource management system for CPU-bound database services composed of two modules, is proposed; it is able to minimize the total cost under time-varying workloads compared to other cost-insensitive approaches.
Abstract: Virtualization-based multi-tenant database consolidation is an important technique for database-as-a-service (DBaaS) providers to minimize their total cost which is composed of SLA penalty cost, infrastructure cost and action cost. Due to the bursty and diverse tenant workloads, over-provisioning for the peak or under-provisioning for the off-peak often results in either infrastructure cost or service level agreement (SLA) penalty cost. Moreover, although the process of scaling out database systems will help DBaaS providers satisfy tenants’ service level agreement, its indiscriminate use has performance implications or incurs action cost. In this paper, we propose SmartSLA, a cost-sensitive virtualized resource management system for CPU-bound database services which is composed of two modules. The system modeling module uses machine learning techniques to learn a model for predicting the SLA penalty cost for each tenant under different resource allocations. Based on the learned model, the resource allocating module dynamically adjusts the resource allocation by weighing the potential reduction of SLA penalty cost against increase of infrastructure cost and action cost. SmartSLA is evaluated by using the TPC-W and modified YCSB benchmarks with dynamic workload trace and multiple database tenants. The experimental results show that SmartSLA is able to minimize the total cost under time-varying workloads compared to the other cost-insensitive approaches.
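SmartSLA's resource allocating module weighs predicted SLA penalty against infrastructure and action cost. Stripped of the machine-learned model, that decision is a minimization over candidate allocations, as in the sketch below; `penalty_model` stands in for the learned predictor, and the linear cost terms are illustrative assumptions.

```python
def choose_allocation(current, candidates, penalty_model, infra_rate, action_rate):
    """Pick the resource allocation that minimizes predicted SLA penalty cost
    plus infrastructure cost plus action (reconfiguration) cost."""
    def total_cost(alloc):
        action = action_rate * abs(alloc - current)   # paid only when we change
        return penalty_model(alloc) + infra_rate * alloc + action
    return min(candidates, key=total_cost)
```

When infrastructure is cheap the minimizer over-provisions to kill the penalty; when it is expensive, the same rule tolerates some penalty instead, which is exactly the trade-off the paper automates.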

Journal ArticleDOI
TL;DR: A self-management architecture of cloud data centres with virtualisation mechanism for multi-tier web application services with flexible hybrid queueing model is developed and a non-linear constrained optimisation problem with restrictions defined in service level agreement is proposed.
Abstract: Dynamic virtualised resource allocation is the key to quality of service assurance for multi-tier web application services in cloud data centre. In this paper, we develop a self-management architecture of cloud data centres with virtualisation mechanism for multi-tier web application services. Based on this architecture, we establish a flexible hybrid queueing model to determine the amount of virtual machines for each tier of virtualised application service environments. Besides, we propose a non-linear constrained optimisation problem with restrictions defined in service level agreement. Furthermore, we develop a heuristic mixed optimisation algorithm to maximise the profit of cloud infrastructure providers, and to meet performance requirements from different clients as well. Finally, we compare the effectiveness of our dynamic allocation strategy with two other allocation strategies. The simulation results show that the proposed resource allocation method is efficient in improving the overall performance and reducing the resource energy cost.

Journal ArticleDOI
TL;DR: A knowledge-based admission control along with scheduling algorithms for SaaS providers to effectively utilize public Cloud resources in order to maximize profit by minimizing cost and improving customers’ satisfaction level is proposed.
Abstract: Software as a Service (SaaS) in Cloud Computing offers reliable access to software applications for end users over the Internet without direct investment in infrastructure and software. SaaS providers utilize resources of internal datacenters or rent resources from a public Infrastructure as a Service (IaaS) provider in order to serve their customers. Internal hosting can increase the cost of administration and maintenance, whereas hiring from an IaaS provider can impact the quality of service due to its variable performance. To surmount these challenges, we propose knowledge-based admission control along with scheduling algorithms for SaaS providers to effectively utilize public Cloud resources in order to maximize profit by minimizing cost and improving customers’ satisfaction level. In the proposed model, the admission control is based on the Service Level Agreement (SLA) and uses different strategies to decide whether to accept user requests with minimal performance impact, avoiding SLA penalties and thereby yielding higher profit. However, so that the admission control can make decisions optimally, machine learning methods are needed to predict the strategies. In order to model the prediction of the sequence of strategies, a customized decision tree algorithm has been used. In addition, we conducted several experiments to analyze which solution fits which scenario better to maximize the SaaS provider’s profit. Results obtained through our simulation show that our proposed algorithm provides significant improvement (up to 38.4 percent cost saving) compared to previous research works.

Patent
Eugen Feller1, Julien Forgeat1
10 Apr 2015
TL;DR: In this paper, the authors describe a method in a server end station of a cloud for determining whether a service level agreement (SLA) violation has occurred or is expected to occur.
Abstract: According to one embodiment, a method in a server end station of a cloud for determining whether a service level agreement (SLA) violation has occurred or is expected to occur is described. The method includes receiving one or more insight models from an insight model builder, wherein each insight model is based on one or more metrics previously collected from a virtualized infrastructure, and wherein each insight model models a particular behavior in the virtualized infrastructure, and receiving real-time metrics from the virtualized infrastructure. The method further includes, for each of the one or more insight models, determining based on the received real-time metrics that one or more services on the virtualized infrastructure is in an abnormal state or is expected to enter the abnormal state, wherein the abnormal state occurs when the insight model indicates that the associated modeled behavior violates a predetermined indicator.

Journal ArticleDOI
TL;DR: The empirical results obtained from simulations carried out using an agent-based testbed suggest that using the negotiation mechanism, a consumer and a provider agent have a mutually satisfying agreement on price, time slot, and QoS issues in terms of the aggregated utility.
Abstract: Since participants in a Cloud may be independent bodies, some mechanisms are necessary for resolving the different preferences to establish a service-level agreement (SLA) for Cloud service reservations. Whereas there are some mechanisms for supporting SLA negotiation, there is little or no negotiation support involving price, time slot, and QoS issues concurrently for a Cloud service reservation. For the concurrent price, timeslot, and QoS negotiation, a tradeoff algorithm to generate and evaluate a proposal which consists of price, timeslot, and QoS proposals is necessary. The contribution of this work is designing a multi-issue negotiation mechanism to facilitate 1) concurrent price, time slot, and QoS negotiations including the design of QoS utility functions and 2) adaptive and similarity-based trade-off proposals for price, time slots, and level of QoS issues. The tradeoff algorithm referred to as "adaptive burst mode" is especially designed to increase negotiation speed, total utility, and to reduce computational load for evaluating proposals by adaptively generating concurrent set of proposals. The empirical results obtained from simulations carried out using an agent-based testbed suggest that using the negotiation mechanism, (i) a consumer and a provider agent have a mutually satisfying agreement on price, time slot, and QoS issues in terms of the aggregated utility and (ii) the fastest negotiation speed with (iii) comparatively lower number of evaluated proposals in a negotiation.
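Concurrent multi-issue negotiation of the kind described above requires each agent to score a whole proposal, not single issues. The usual device is a weighted aggregation of per-issue utilities; the sketch below is a minimal version, with hypothetical weights standing in for the paper's QoS utility function design.

```python
def aggregated_utility(price_util, timeslot_util, qos_util,
                       weights=(0.4, 0.3, 0.3)):
    """Combine per-issue utilities (each normalized to [0, 1]) into the
    aggregated utility an agent uses to compare proposals; the weights
    encode the agent's preferences and these values are hypothetical."""
    issues = (price_util, timeslot_util, qos_util)
    return sum(w * u for w, u in zip(weights, issues))
```

A tradeoff proposal then means lowering utility on one issue (say, price) while raising another (say, QoS) so the aggregate offered to the counterpart stays constant.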

Proceedings ArticleDOI
21 Mar 2015
TL;DR: A novel power-aware load balancing method, named ICA-MMT, to manage power consumption in cloud computing data centers by exploiting the Imperialism Competitive Algorithm for detecting over-utilized hosts and migrating one or several virtual machines from these hosts to other hosts to decrease their utilization.
Abstract: Energy consumption has become a major challenge in cloud computing infrastructures. Cloud computing data centers consume enormous amounts of electrical power, resulting in high carbon dioxide emissions that harm the environment as well as high operational costs for cloud providers. On the other hand, reducing energy consumption can negatively impact the SLA (Service Level Agreement), which is a crucial concern in any resource allocation policy. In this paper, we propose a novel power-aware load balancing method, named ICA-MMT, to manage power consumption in cloud computing data centers. We exploit the Imperialism Competitive Algorithm (ICA) to detect over-utilized hosts and then migrate one or several virtual machines from these hosts to other hosts to decrease their utilization. Finally, we treat the remaining hosts as underutilized and, where possible, migrate all of their VMs to other hosts and switch them to sleep mode. The results indicate that our method, compared to previously proposed resource allocation policies such as LR-MMT (Local Regression-Minimum Migration Time), MAD-MMT (Median Absolute Deviation-Minimum Migration Time), Bee-MMT (Bee colony algorithm-Minimum Migration Time), and the non-power-aware policy, offers the least power consumption and SLA violation.
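The "Minimum Migration Time" selection policy shared by all the MMT variants above can be sketched as picking, from an over-utilized host, the VM whose estimated migration time (memory size over available bandwidth) is smallest; the VM attributes are invented for illustration:

```python
def select_vm_mmt(vms, bandwidth_mbps):
    """Pick the VM with the minimum estimated migration time,
    approximated here as RAM size divided by available bandwidth."""
    return min(vms, key=lambda vm: vm["ram_mb"] / bandwidth_mbps)

# Hypothetical VMs on an over-utilized host
vms = [{"id": "vm1", "ram_mb": 4096},
       {"id": "vm2", "ram_mb": 1024},
       {"id": "vm3", "ram_mb": 2048}]

print(select_vm_mmt(vms, bandwidth_mbps=1000)["id"])  # vm2
```

The selected VM is then placed on another host by the allocation policy (ICA in this paper's case), and the procedure repeats until the source host is no longer over-utilized.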

Proceedings ArticleDOI
30 Nov 2015
TL;DR: The proposed service quality evaluation model represents a visual recommender system for cloud service brokers and cloud users, which verifies the quality of cloud services delivered for each service and provides the service status of the cloud providers.
Abstract: Selecting appropriate cloud services and cloud providers according to cloud users' requirements is becoming a complex task as the number of cloud providers increases. Cloud providers offer similar kinds of cloud services, but they differ in terms of price, quality of service, customer experience, and service delivery. The most challenging issue in the current cloud computing business is that cloud providers commit to a certain Service Level Agreement (SLA) with cloud users, but there are few or no verification mechanisms ensuring that cloud providers deliver cloud services according to their commitment. The current literature lacks an evaluation model that provides the real status of cloud providers to cloud users. In this paper, an evaluation model is proposed that verifies the quality of the cloud services delivered for each service and provides the service status of the cloud providers. Finally, evaluation results obtained from cloud auditors are visualized in an ordered performance heat map, showing the cloud providers in decreasing order of overall service quality. In this way, the proposed service quality evaluation model represents a visual recommender system for cloud service brokers and cloud users.
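The final ranking step (ordering providers by overall service quality before rendering the heat map) can be sketched as follows; the provider names, the per-service audit scores, and the use of a plain mean as the overall score are all assumptions for illustration:

```python
def rank_providers(audit_scores):
    """Return provider names sorted by decreasing overall quality,
    where overall quality is the mean of the per-service audit scores."""
    overall = {p: sum(scores) / len(scores) for p, scores in audit_scores.items()}
    return sorted(overall, key=overall.get, reverse=True)

# Hypothetical auditor scores in [0, 1] for two services per provider
scores = {"providerA": [0.90, 0.80],
          "providerB": [0.95, 0.90],
          "providerC": [0.60, 0.70]}

print(rank_providers(scores))  # ['providerB', 'providerA', 'providerC']
```

The ordered list gives the row order of the heat map, so the best-performing provider appears first.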

Journal ArticleDOI
TL;DR: A trust mining model (TMM) is proposed to identify trusted cloud services while negotiating an SLA, where the user can make a decision on whether to continue or discontinue the service with the service provider.
Abstract: To access cloud services, a user needs to negotiate a service level agreement (SLA) with the service provider, but customers have inadequate assurance of whether the services they pick are trustworthy. Trust management plays a major role in guiding users to trustworthy services. Hence, a trust mining model (TMM) is proposed to identify trusted cloud services while negotiating an SLA. Knowledge is discovered from a previously monitored dataset and a trust value is generated. The proposed trust model helps both the service provider and the cloud user: the user can decide whether to continue or discontinue the service with the service provider. Rough sets and Bayesian inference are used together to generate the overall results. Using rough sets, previously monitored data are mined and the indiscernibility in them is analyzed; Bayesian inference is then applied to infer the overall trust degree. The accuracy of the results is compared with that of previous models, and the comparison shows that the TMM gives better accuracy. The model is simulated using CloudSim.
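A standard way to realise the Bayesian-inference step is a Beta-distribution update over counts of positive and negative interactions mined from the monitored data; this is a generic sketch of that idea, not the paper's exact TMM formulation:

```python
def trust_degree(successes, failures, alpha=1.0, beta=1.0):
    """Expected trust under a Beta(alpha + s, beta + f) posterior,
    starting from a uniform Beta(1, 1) prior."""
    return (alpha + successes) / (alpha + beta + successes + failures)

# Hypothetical counts mined from monitored SLA interactions:
# 18 interactions met the SLA, 2 violated it
print(trust_degree(18, 2))  # 19/22 ≈ 0.864
```

In the paper's pipeline, the rough-set analysis would first filter the monitored attributes before a posterior trust degree like this is inferred; the user then compares the degree against a threshold to continue or discontinue the service.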

Journal ArticleDOI
01 Jan 2015
TL;DR: This work developed a broker-based framework capable of automatically selecting cloud services based on user-defined requirement parameters and the service level agreement (SLA) attributes of the cloud providers.
Abstract: The wide acceptance of cloud computing has resulted in an increasing number of cloud providers. Customers are now burdened with the task of deciding which provider to choose to serve their requirements. This work developed a broker-based framework capable of automatically selecting cloud services based on user-defined requirement parameters and the service level agreement (SLA) attributes of the cloud providers. The goal is to help customers make informed decisions that maximise their benefit. The framework contains components for decision making, cloud monitoring, user authentication, cloud access, and SLA management. More specifically, we developed a utility-based, dynamic, and flexible matching algorithm capable of maximising the users' profits. The matching algorithm as well as the entire framework were evaluated using a realistic simulation environment. Experimental results show that our utility-based approach performs well in terms of matching quality and cost saving.
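The utility-based matching can be sketched as scoring each provider's SLA attributes against the user's requirement weights and choosing the maximiser; the attribute names, weights, and linear utility below are illustrative assumptions, not the paper's algorithm:

```python
def best_provider(providers, weights):
    """Select the provider whose weighted SLA-attribute score is highest."""
    def utility(sla):
        return sum(weights[attr] * sla[attr] for attr in weights)
    return max(providers, key=lambda p: utility(p["sla"]))

# Hypothetical providers with SLA attributes normalised to [0, 1]
providers = [
    {"name": "cloudX", "sla": {"availability": 0.99, "cost_score": 0.5}},
    {"name": "cloudY", "sla": {"availability": 0.95, "cost_score": 0.9}},
]
# Hypothetical user requirement weights
weights = {"availability": 0.7, "cost_score": 0.3}

print(best_provider(providers, weights)["name"])  # cloudY
```

Changing the weights changes the match: a user weighting availability at 0.95 would be matched to cloudX instead, which is what makes the matching "user-defined" and flexible.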

Journal ArticleDOI
01 Feb 2015
TL;DR: Plug4Green is presented, an energy-aware VM placement algorithm that can be easily specialized and extended to fit the specificities of the data centres to reduce the energy consumption and the gas emission.
Abstract: To maintain an energy footprint as low as possible, data centres manage their VMs according to conventional and established rules. Each data centre is, however, made unique by its hardware and workload specificities. This prevents the ad hoc design of current VM managers from taking these particularities into account to provide additional energy savings. In this paper, we present Plug4Green, an energy-aware VM placement algorithm that can be easily specialized and extended to fit the specificities of data centres. Plug4Green computes the placement of the VMs and the state of the servers depending on a large number of constraints, extracted automatically from SLAs. The flexibility of Plug4Green is achieved by allowing the constraints to be formulated independently of each other and of the power models. This flexibility is validated through the implementation of 23 SLA constraints and 2 objectives aiming at reducing either the power consumption or the greenhouse gas emissions. On a heterogeneous test bed, specializing Plug4Green to fit the hardware and workload specificities reduced the energy consumption and the gas emissions by up to 33% and 34%, respectively. Finally, simulations showed that Plug4Green is capable of computing an improved placement for 7500 VMs running on 1500 servers within a minute.
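The independence of constraints described above can be sketched as a candidate placement being accepted only if every SLA constraint, checked as a separate predicate, holds; the two example constraints and the placement data are invented for illustration (Plug4Green itself is built on constraint programming, not this naive check):

```python
def placement_ok(placement, constraints):
    """A candidate VM-to-server placement is valid only if every
    SLA constraint, checked independently, is satisfied."""
    return all(constraint(placement) for constraint in constraints)

# Hypothetical SLA constraints over a {server: [vm, ...]} mapping:
# 1) capacity: at most 2 VMs per server
max_vms_per_server = lambda p: all(len(vms) <= 2 for vms in p.values())
# 2) anti-affinity: vm1 and vm2 must not share a server
spread = lambda p: not any({"vm1", "vm2"} <= set(vms) for vms in p.values())

placement = {"server1": ["vm1", "vm3"], "server2": ["vm2"]}
print(placement_ok(placement, [max_vms_per_server, spread]))  # True
```

Because each constraint only inspects the placement and knows nothing about the others or about the power model, new SLA constraints can be added without touching existing ones, which is the flexibility the paper claims.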