
Showing papers on "Service-level agreement" published in 2012


Journal ArticleDOI
TL;DR: A competitive analysis is conducted and competitive ratios of optimal online deterministic algorithms for the single VM migration and dynamic VM consolidation problems are proved, and novel adaptive heuristics for dynamic consolidation of VMs are proposed based on an analysis of historical data from the resource usage by VMs.
Abstract: The rapid growth in demand for computational power driven by modern service applications, combined with the shift to the Cloud computing model, has led to the establishment of large-scale virtualized data centers. Such data centers consume enormous amounts of electrical energy, resulting in high operating costs and carbon dioxide emissions. Dynamic consolidation of virtual machines (VMs) using live migration and switching idle nodes to sleep mode allows Cloud providers to optimize resource usage and reduce energy consumption. However, the obligation to provide high quality of service to customers leads to the necessity of dealing with the energy-performance trade-off, as aggressive consolidation may lead to performance degradation. Because of the variability of workloads experienced by modern applications, VM placement should be optimized continuously in an online manner. To understand the implications of the online nature of the problem, we conduct a competitive analysis and prove competitive ratios of optimal online deterministic algorithms for the single VM migration and dynamic VM consolidation problems. Furthermore, we propose novel adaptive heuristics for dynamic consolidation of VMs based on an analysis of historical data on resource usage by VMs. The proposed algorithms significantly reduce energy consumption, while ensuring a high level of adherence to the service level agreement. We validate the high efficiency of the proposed algorithms by extensive simulations using real-world workload traces from more than a thousand PlanetLab VMs. Copyright © 2011 John Wiley & Sons, Ltd.
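To give a flavor of the adaptive, history-based overload detection this abstract describes, here is a minimal Python sketch; the MAD-based threshold and the `safety` parameter are illustrative assumptions, not the paper's exact heuristics:

```python
import statistics

def mad(values):
    """Median absolute deviation: a robust estimate of spread."""
    med = statistics.median(values)
    return statistics.median(abs(v - med) for v in values)

def is_host_overloaded(cpu_history, safety=2.5):
    """Adaptive threshold: flag the host when its latest CPU
    utilization exceeds 1 - safety * MAD of its recent history,
    so the threshold tightens as the workload grows more variable."""
    threshold = 1.0 - safety * mad(cpu_history)
    return cpu_history[-1] > threshold
```

A robust statistic such as MAD keeps the threshold stable under occasional utilization spikes, which is the motivation for adapting the consolidation decision to historical data rather than using a fixed static threshold.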

1,616 citations


Patent
13 Mar 2012
Abstract: Disclosed are a system, method, and tangible computer-readable storage media for providing a brokering service for compute resources. The method includes, at a brokering service, polling a group of separately administered compute environments to identify resource capabilities and information, each compute resource environment including a group of managed nodes for processing workload; receiving a request for compute resources at the brokering service system, the request being associated with a service level agreement (SLA); and, based on the resource capabilities across the group of compute resource environments, selecting compute resources in one or more of the group of compute resource environments. The brokering service system receives workload associated with the request and communicates the workload to the selected resources for processing. The brokering service system can aggregate resources from multiple cloud service providers and act as an advocate for, or a guarantor of, the SLA associated with the workload.

563 citations


Proceedings ArticleDOI
13 May 2012
TL;DR: This paper presents extensive experimental results, associated with both placement computation and run-time performance under time-varying traffic demands, to show that the heuristics introduced provide good results (compared to the optimal solution) for medium size data centers.
Abstract: Virtual Machine (VM) placement has to carefully consider the aggregated resource consumption of co-located VMs in order to obey service level agreements at the lowest possible cost. In this paper, we focus on satisfying the traffic demands of the VMs in addition to CPU and memory requirements. This is a much more complex problem, both due to its quadratic nature (communication occurs between pairs of VMs) and because it involves many factors beyond the physical host, such as the network topology and the routing scheme. Moreover, traffic patterns may vary over time, and predicting the resulting effect on the actual available bandwidth between hosts within the data center is extremely difficult. We address this problem by trying to compute a placement that not only satisfies the predicted communication demand but is also resilient to demand time-variations. This gives rise to a new optimization problem that we call the Min Cut Ratio-aware VM Placement (MCRVMP) problem. The general MCRVMP problem is NP-hard; hence, we introduce several heuristics to solve it in reasonable time. We present extensive experimental results, associated with both placement computation and run-time performance under time-varying traffic demands, to show that our heuristics provide good results (compared to the optimal solution) for medium-size data centers.

213 citations


Journal ArticleDOI
TL;DR: The Detecting SLA Violation infrastructure (DeSVi) architecture is presented, sensing SLA violations through sophisticated resource monitoring and providing a guideline on the appropriate monitoring intervals for applications depending on their resource consumption behavior.

203 citations


Journal ArticleDOI
TL;DR: The main novelty of the approach is to address, in a unifying framework, service center resource management by exploiting as actuation mechanisms the allocation of virtual machines to servers, load balancing, capacity allocation, server power state tuning, and dynamic voltage/frequency scaling.
Abstract: With the increase of energy consumption associated with IT infrastructures, energy management is becoming a priority in the design and operation of complex service-based systems. At the same time, service providers need to comply with Service Level Agreement (SLA) contracts, which determine revenues and penalties on the basis of the achieved performance level. This paper focuses on the resource allocation problem in multitier virtualized systems with the goal of maximizing SLA revenue while minimizing energy costs. The main novelty of our approach is to address, in a unifying framework, service center resource management by exploiting as actuation mechanisms the allocation of virtual machines (VMs) to servers, load balancing, capacity allocation, server power state tuning, and dynamic voltage/frequency scaling. Resource management is modeled as an NP-hard mixed integer nonlinear programming problem, and solved by a local search procedure. To validate its effectiveness, the proposed model is compared to top-performing state-of-the-art techniques. The evaluation is based on simulation and on real experiments performed in a prototype environment. Synthetic as well as realistic workloads and a number of different scenarios of interest are considered. Results show that we are able to yield significant revenue gains for the provider when compared to alternative methods (up to 45 percent). Moreover, the solutions are robust to service time and workload variations.

201 citations


Journal ArticleDOI
TL;DR: This paper proposes a new QoS-based workflow scheduling algorithm based on a novel concept called Partial Critical Paths (PCP), which tries to minimize the cost of workflow execution while meeting a user-defined deadline.
Abstract: Recently, utility Grids have emerged as a new model of service provisioning in heterogeneous distributed systems. In this model, users negotiate with service providers on their required Quality of Service and on the corresponding price to reach a Service Level Agreement. One of the most challenging problems in utility Grids is workflow scheduling, i.e., the problem of satisfying the QoS of the users while minimizing the cost of workflow execution. In this paper, we propose a new QoS-based workflow scheduling algorithm based on a novel concept called Partial Critical Paths (PCP), which tries to minimize the cost of workflow execution while meeting a user-defined deadline. The PCP algorithm has two phases: in the deadline distribution phase, it recursively assigns subdeadlines to the tasks on the partial critical paths ending at previously assigned tasks; in the planning phase, it assigns the cheapest service to each task while meeting its subdeadline. The simulation results show that the performance of the PCP algorithm is very promising.
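The two-phase idea can be sketched as follows; the proportional subdeadline rule and the `(runtime, cost)` offer format are simplifying assumptions for illustration, not the PCP algorithm itself:

```python
def distribute_deadline(path_runtimes, deadline):
    """Deadline distribution sketch: give each task on a (partial)
    critical path a subdeadline proportional to its minimum runtime."""
    total = sum(path_runtimes)
    subdeadlines, elapsed = [], 0.0
    for runtime in path_runtimes:
        elapsed += deadline * runtime / total
        subdeadlines.append(elapsed)
    return subdeadlines

def cheapest_service(offers, subdeadline):
    """Planning phase sketch: offers is a list of (runtime, cost)
    pairs; pick the cheapest one that finishes within the subdeadline,
    or None if no offer is feasible."""
    feasible = [o for o in offers if o[0] <= subdeadline]
    return min(feasible, key=lambda o: o[1]) if feasible else None
```

Separating deadline distribution from service selection is what lets the planning phase make a cheap local choice per task while the global deadline is still respected.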

192 citations


Book ChapterDOI
TL;DR: This chapter discusses existing use cases from Grid and Cloud computing systems to identify the level of SLA realization in state-of-the-art systems and emerging challenges for future research.
Abstract: In recent years, extensive research has been conducted in the area of Service Level Agreements (SLAs) for utility computing systems. An SLA is a formal contract used to guarantee that consumers' service quality expectations can be achieved. In utility computing systems, the level of customer satisfaction is crucial, making SLAs significantly important in these environments. A fundamental issue is the management of SLAs, including autonomic SLA management and trade-offs among multiple Quality of Service (QoS) parameters. Many SLA languages and frameworks have been developed as solutions; however, there is no overall classification of these extensive works. Therefore, the aim of this chapter is to present a comprehensive survey of how SLAs are created, managed and used in utility computing environments. We discuss existing use cases from Grid and Cloud computing systems to identify the level of SLA realization in state-of-the-art systems and emerging challenges for future research.

179 citations


Patent
01 Jun 2012
TL;DR: In this article, the authors present a system and method for creating, deploying, selecting and associating cloud computing services from many cloud vendors to effectuate a large-scale information technology data processing center implemented in a software only form.
Abstract: A system and method for creating, deploying, selecting, and associating cloud computing services from many cloud vendors to effectuate a large-scale information technology data processing center implemented in software-only form. Services may be employed from any number of different service providers, and user-defined policies provide for switching to, or aggregating, different service providers when necessary. Configurations can be created that allow service provider selection based on user-selectable parameters such as cost, availability, performance, and service level agreement terms. The system employs measurement, aggregation, reporting, and decision support covering system usage, cost, performance, service level, and feature set to automate the construction, operation, and ongoing management of the software-based cloud. A drag-and-drop, non-list-based UI supports the construction and modification of clouds implemented and modeled in software.

174 citations


Proceedings ArticleDOI
13 May 2012
TL;DR: An efficient heuristic algorithm based on convex optimization and dynamic programming is presented to solve the resource allocation problem of cloud computing system while meeting the specified client-level SLAs in a probabilistic sense.
Abstract: Cloud computing systems (or hosting datacenters) have attracted a lot of attention in recent years. Utility computing, reliable data storage, and infrastructure-independent computing are example applications of such systems. The electrical energy cost of a cloud computing system is a strong function of the consolidation and migration techniques used to assign incoming clients to existing servers. Moreover, each client typically has a service level agreement (SLA), which specifies constraints on the performance and/or quality of service that it receives from the system. These constraints result in a basic trade-off between the total energy cost and client satisfaction in the system. In this paper, a resource allocation problem is considered that aims to minimize the total energy cost of a cloud computing system while meeting the specified client-level SLAs in a probabilistic sense. The cloud computing system pays a penalty for the percentage of a client's requests that do not meet a specified upper bound on their service time. An efficient heuristic algorithm based on convex optimization and dynamic programming is presented to solve the aforesaid resource allocation problem. Simulation results demonstrate the effectiveness of the proposed algorithm compared to previous work.
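As a toy illustration of such a probabilistic client-level SLA, assume each server behaves as an M/M/1 queue (an assumption of this sketch, not necessarily the paper's model). Then the smallest service rate that keeps the violation probability within bound has a closed form:

```python
import math

def min_service_rate(arrival_rate, t_bound, max_violation):
    """For an M/M/1 queue, the response time T satisfies
    P(T > t) = exp(-(mu - lambda) * t).  Solving
    exp(-(mu - lambda) * t_bound) <= max_violation for mu gives
    the smallest service rate meeting the probabilistic SLA:
        mu >= lambda - ln(max_violation) / t_bound
    A lower mu means less provisioned capacity, i.e. less energy."""
    return arrival_rate - math.log(max_violation) / t_bound
```

This is the flavor of trade-off in the abstract: tightening either the service-time bound or the allowed violation percentage pushes the required service rate, and hence the energy cost, upward.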

161 citations


Proceedings ArticleDOI
09 May 2012
TL;DR: A flexible and energy-aware framework for the (re)allocation of virtual machines in a data centre is proposed that decouples the expressed constraints from the algorithms using the Constraint Programming (CP) paradigm and programming language.
Abstract: Data centres are powerful ICT facilities which constantly evolve in size, complexity, and power consumption. At the same time, users' and operators' requirements become more and more complex. However, existing data centre frameworks do not typically take energy consumption into account as a key parameter of the data centre's configuration. To lower power consumption while fulfilling performance requirements, we propose a flexible and energy-aware framework for the (re)allocation of virtual machines in a data centre. The framework, being independent from the data centre management system, computes and enacts the best possible placement of virtual machines based on constraints expressed through service level agreements. The framework's flexibility is achieved by decoupling the expressed constraints from the algorithms using the Constraint Programming (CP) paradigm and programming language, building on a cluster management library called Entropy. Finally, the experimental and simulation results demonstrate the effectiveness of this approach in achieving the pursued energy optimization goals.

132 citations


Proceedings ArticleDOI
13 May 2012
TL;DR: This paper proposes an enhanced energy-efficient scheduling (EES) algorithm to reduce energy consumption while meeting the performance-based service level agreement (SLA).
Abstract: Energy consumption has become a major concern for the widespread deployment of cloud data centers. The growing importance of parallel applications in the cloud introduces significant challenges in reducing the power consumption drawn by the hosted servers. In this paper, we propose an enhanced energy-efficient scheduling (EES) algorithm to reduce energy consumption while meeting a performance-based service level agreement (SLA). Since slacking non-critical jobs can achieve significant power savings, we exploit the available slack and allocate it in a global manner in our schedule. Using randomly generated and real-life application workflows, our results demonstrate that EES is able to reduce considerable energy consumption while still meeting the SLA.

Proceedings ArticleDOI
David Breitgand1, Amir Epstein1
25 Mar 2012
TL;DR: This work considers consolidating virtual machines on the minimum number of physical containers in a cloud where the physical network may become a bottleneck, and models the problem as a Stochastic Bin Packing problem, where each virtual machine's bandwidth demand is treated as a random variable.
Abstract: Current trends in virtualization, green computing, and cloud computing require ever-increasing efficiency in consolidating virtual machines without degrading quality of service. In this work, we consider consolidating virtual machines on the minimum number of physical containers (e.g., hosts or racks) in a cloud where the physical network (e.g., the network interface or the top-of-rack switch link) may become a bottleneck. Since virtual machines do not simultaneously use the maximum of their nominal bandwidth, the capacity of the physical container can be multiplexed. We assume that each virtual machine has a probabilistic guarantee on realizing its bandwidth requirements, as derived from its Service Level Agreement with the cloud provider. Therefore, the problem of consolidating virtual machines on the minimum number of physical containers, while preserving these bandwidth allocation guarantees, can be modeled as a Stochastic Bin Packing (SBP) problem, where each virtual machine's bandwidth demand is treated as a random variable. We consider both offline and online versions of SBP. Under the assumption that the virtual machines' bandwidth consumption obeys a normal distribution, we show a 2-approximation algorithm for the offline version and improve the previously reported results by presenting a (2+ε)-competitive algorithm for the online version. We also observe that a dual polynomial-time approximation scheme (PTAS) for SBP can be obtained via reduction to the two-dimensional vector bin packing problem. Finally, we perform a thorough performance evaluation study using both synthetic and real data to evaluate the behavior of our proposed algorithms, showing their practical applicability.
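Under the normality assumption stated above, a set of VMs fits in a container exactly when the mean aggregate demand plus an appropriate normal quantile of its standard deviation stays within capacity. A minimal sketch (first-fit is used here purely for illustration; it is not the paper's (2+ε)-competitive algorithm):

```python
import math
from statistics import NormalDist

def fits(vms, capacity, overflow_prob):
    """vms: list of (mean, std) bandwidth demands.  If demands are
    independent normals, the aggregate is N(sum mu, sqrt(sum sigma^2)),
    and it stays under `capacity` with probability >= 1 - overflow_prob
    iff  sum mu + z * sqrt(sum sigma^2) <= capacity,
    where z is the (1 - overflow_prob) normal quantile."""
    z = NormalDist().inv_cdf(1 - overflow_prob)
    mu = sum(m for m, _ in vms)
    sd = math.sqrt(sum(s * s for _, s in vms))
    return mu + z * sd <= capacity

def first_fit(vms, capacity, overflow_prob):
    """Online first-fit: place each VM in the first container that
    still satisfies the overflow guarantee; open a new one otherwise."""
    bins = []
    for vm in vms:
        for b in bins:
            if fits(b + [vm], capacity, overflow_prob):
                b.append(vm)
                break
        else:
            bins.append([vm])
    return bins
```

The "effective size" term z·sqrt(Σσ²) grows sublinearly in the number of VMs, which is exactly why statistical multiplexing lets more VMs share a link than their nominal peak bandwidths would allow.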

Book ChapterDOI
27 Aug 2012
TL;DR: The driving principle is that Cloud Benchmarks must consider end-to-end performance and pricing, taking into account that services are delivered over the Internet, and this requirement yields new challenges for benchmarking.
Abstract: With the increasing adoption of Cloud Computing, we observe an increasing need for Cloud Benchmarks, in order to assess the performance of Cloud infrastructures and software stacks, to assist with provisioning decisions for Cloud users, and to compare Cloud offerings. We understand our paper as one of the first systematic approaches to the topic of Cloud Benchmarks. Our driving principle is that Cloud Benchmarks must consider end-to-end performance and pricing, taking into account that services are delivered over the Internet. This requirement yields new challenges for benchmarking and requires us to revisit existing benchmarking practices in order to adapt them to the Cloud.

Patent
21 May 2012
TL;DR: In this paper, a management server is provided in a computer system having one or more hosts and switches, the hosts having a plurality of virtual machines, each virtual machine being defined according to a service level agreement.
Abstract: A management server is provided in a computer system having one or more hosts, one or more storage systems and one or more switches, the hosts having a plurality of virtual machines, each virtual machine being defined according to a service level agreement. The management server is operable to manage the virtual machines and resources associated with the virtual machines; receive a notification of an event from a node in the computer system; determine if the event affects a service level agreement for any of the virtual machines defined in the computer system, the service level agreements listing required attributes for the corresponding virtual machines; allocate a new resource for a virtual machine whose service level agreement is affected by the event; and move the virtual machine whose service level agreement is affected by the event to the newly allocated resource.

Proceedings ArticleDOI
03 Nov 2012
TL;DR: For cloud computing to remain attractive, the DDoS threat must be addressed before it triggers the billing mechanism; this can be done by using a reactive/on-demand in-cloud eDDoS mitigation service (scrubber service) that mitigates application-layer and network-layer DDoS attacks with the help of an efficient client-puzzle approach.
Abstract: Cloud computing is not a new technology; it is a new way of delivering computing resources. Elastic cloud computing enables services to be deployed and accessed globally on demand with little maintenance, by providing QoS as per the customer's service level agreement (SLA). Cloud-based DDoS attacks, or outside DDoS attacks, can make ostensibly legitimate requests for a service to generate an economic Distributed Denial of Service (eDDoS), where the elastic nature of the cloud allows scaling of a service beyond the economic means of the purveyor to pay their cloud-based service bills, which leads to Economic Denial of Sustainability (EDoS). Attacks mimicking legitimate users are on the rise. For cloud computing to remain attractive, the DDoS threat must be addressed before it triggers the billing mechanism. This problem can be addressed by using a reactive/on-demand in-cloud eDDoS mitigation service (scrubber service) for mitigating application-layer and network-layer DDoS attacks with the help of an efficient client-puzzle approach.

Proceedings ArticleDOI
16 Jul 2012
TL;DR: An application monitoring architecture named CASViD, which stands for Cloud Application SLA Violation Detection architecture, which monitors and detects SLA violations at the application layer, and includes tools for resource allocation, scheduling, and deployment.
Abstract: Cloud resources and services are offered based on Service Level Agreements (SLAs) that state usage terms and penalties in case of violations. Although there is a large body of work in the area of SLA provisioning and monitoring at the infrastructure and platform layers, SLAs are usually assumed to be guaranteed at the application layer. However, application monitoring is a challenging task because metrics monitored at the platform or infrastructure layer cannot easily be mapped to the required metrics at the application layer. Sophisticated SLA monitoring among those layers, to avoid costly SLA penalties and maximize provider profit, is still an open research challenge. This paper proposes an application monitoring architecture named CASViD, which stands for Cloud Application SLA Violation Detection architecture. CASViD monitors and detects SLA violations at the application layer, and includes tools for resource allocation, scheduling, and deployment. Different from most existing monitoring architectures, CASViD focuses on application-level monitoring, which is relevant when multiple customers share the same resources in a Cloud environment. We evaluate our architecture in a real Cloud testbed using applications that exhibit heterogeneous behaviors in order to investigate the effective measurement intervals for efficient monitoring of different application types. The achieved results show that our architecture, with a low intrusion level, is able to monitor, detect SLA violations, and suggest effective measurement intervals for various workloads.

Proceedings ArticleDOI
20 Sep 2012
TL;DR: This paper formalizes the resource allocation problem using Queuing Theory and proposes optimal solutions for the problem considering various Quality of Service (QoS) parameters such as pricing mechanisms, arrival rates, service rates and available resources.
Abstract: Compared with traditional computing models such as grid computing and cluster computing, a key advantage of Cloud computing is that it provides a practical business model for customers to use remote resources. However, it is challenging for Cloud providers to allocate the pooled computing resources dynamically among differentiated customers so as to maximize their revenue. It is not an easy task to transform customer-oriented service metrics into operating-level metrics and to control Cloud resources adaptively based on a Service Level Agreement (SLA). This paper addresses the problem of maximizing the provider's revenue through SLA-based dynamic resource allocation, as SLAs play a vital role in Cloud computing to bridge service providers and customers. We formalize the resource allocation problem using queuing theory and propose optimal solutions for the problem considering various Quality of Service (QoS) parameters such as pricing mechanisms, arrival rates, service rates and available resources. The experimental results, with both a synthetic dataset and a trace dataset, show that our algorithms outperform related work.
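A toy version of such a queuing-theoretic allocation, assuming a single M/M/1 service class (the revenue and cost model here is an illustrative assumption, not the paper's formulation):

```python
def revenue(arrival, service_rate, price, sla_bound, penalty):
    """Expected revenue per unit time for one M/M/1 service class:
    income price * lambda, minus a flat penalty when the mean
    response time 1 / (mu - lambda) exceeds the SLA bound.
    Requires service_rate > arrival for a stable queue."""
    resp = 1.0 / (service_rate - arrival)
    return price * arrival - (penalty if resp > sla_bound else 0.0)

def best_allocation(arrival, rates, price, sla_bound, penalty,
                    cost_per_rate):
    """Among candidate service rates, pick the one that maximizes
    revenue minus a linear resource cost -- too little capacity
    incurs SLA penalties, too much wastes money."""
    options = [(revenue(arrival, mu, price, sla_bound, penalty)
                - cost_per_rate * mu, mu)
               for mu in rates if mu > arrival]
    return max(options)[1]
```

Even this crude model shows the trade-off the abstract describes: the provider's optimal capacity sits between the SLA-violating minimum and the over-provisioned maximum.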

Journal ArticleDOI
TL;DR: This research paper presents what cloud computing is, the various cloud models and the overview of the cloud computing architecture, and analyzes the key research challenges present in cloud computing and offers best practices to service providers and enterprises hoping to leverage cloud service to improve their bottom line in this severe economic climate.
Abstract: Cloud computing is a set of IT services that are provided to a customer over a network on a leased basis and with the ability to scale service requirements up or down. Usually, cloud computing services are delivered by a third-party provider who owns the infrastructure. Cloud computing holds the potential to eliminate the requirement of setting up high-cost computing infrastructure for the IT-based solutions and services that industry uses. It promises to provide a flexible IT architecture, accessible through the internet from lightweight portable devices. This would allow a multi-fold increase in the capacity and capabilities of existing and new software. This new economic model for computing has found fertile ground and is attracting massive global investment. Many industries, such as banking, healthcare and education, are moving towards the cloud due to the efficiency of services provided by the pay-per-use pattern, based on resources such as processing power used, transactions carried out, bandwidth consumed, data transferred, or storage space occupied. In a cloud computing environment, the entire data resides over a set of networked resources, enabling the data to be accessed through virtual machines. Despite the potential gains achieved from cloud computing, organizations are slow to accept it due to the security issues and challenges associated with it. Security is one of the major issues which hamper the growth of the cloud. There are also various research challenges to adopting cloud computing, such as well-managed service level agreements (SLAs), privacy, interoperability and reliability. This research paper presents what cloud computing is, the various cloud models and an overview of the cloud computing architecture.
It also analyzes the key research challenges present in cloud computing and offers best practices to service providers as well as enterprises hoping to leverage cloud services to improve their bottom line in this severe economic climate.

Proceedings Article
01 Jan 2012
TL;DR: This paper proposes a generic architecture for a Cloud service broker operating in an Intercloud environment by using the latest Cloud standards, and presents in detail the broker value-added services and the broker design.
Abstract: The fast emerging Cloud computing market over the last years resulted in a variety of heterogeneous and less interoperable Cloud infrastructures. This leads to a challenging and urgent problem for Cloud users when selecting their best fitting Cloud provider and hence it ties them to a particular provider. A new growing research paradigm, which envisions a network of interconnected and interoperable Clouds through the use of open standards, is Intercloud computing. This allows users to easily migrate their application workloads across Clouds regardless of the underlying used Cloud provider platform. A very promising future use case of Intercloud computing is Cloud services brokerage. In this paper, we propose a generic architecture for a Cloud service broker operating in an Intercloud environment by using the latest Cloud standards. The broker aims to find the most suitable Cloud provider while satisfying the users’ service requirements in terms of functional and non-functional Service Level Agreement parameters. After discussing the broker value-added services, we present in detail the broker design. We focus especially on how the expected SLA management and resource interoperability functionalities are included in the broker. Finally, we present a realistic simulation testbed to validate and evaluate the proposed architecture.

Proceedings ArticleDOI
24 Jun 2012
TL;DR: An end-to-end framework that acts as a middleware which resides between the consumer applications and the cloud-hosted databases is presented to facilitate adaptive and dynamic provisioning of the database tier of the software applications based on application-defined policies for satisfying their own SLA performance requirements.
Abstract: One of the main advantages of the cloud computing paradigm is that it simplifies the time-consuming processes of hardware provisioning, hardware purchasing and software deployment. Currently, we are witnessing a proliferation in the number of cloud-hosted applications, with a tremendous increase in the scale of the data generated as well as consumed by such applications. Cloud-hosted database systems powering these applications form a critical component in the software stack of these applications. Service Level Agreements (SLAs) represent the contract which captures the agreed-upon guarantees between a service provider and its customers. The specifications of existing SLAs for cloud services are not designed to flexibly handle even relatively straightforward performance and technical requirements of consumer applications. The concerns of consumers of cloud services regarding the SLA management of their hosted applications within cloud environments will gain increasing importance as cloud computing becomes more pervasive. This paper introduces the notion, challenges and importance of SLA-based provisioning and cost management for cloud-hosted databases from the consumer perspective. We present an end-to-end framework that acts as a middleware residing between the consumer applications and the cloud-hosted databases. The aim of the framework is to facilitate adaptive and dynamic provisioning of the database tier of software applications based on application-defined policies, satisfying their own SLA performance requirements, avoiding the cost of any SLA violation and controlling the monetary cost of the allocated computing resources. The experimental results demonstrate that SLA-based provisioning is more adequate for providing consumer applications the required flexibility in achieving their goals.

Proceedings ArticleDOI
14 Dec 2012
TL;DR: A novel allocation and selection policy for dynamic virtual machine (VM) consolidation in virtualized data centers is proposed to reduce energy consumption and SLA violations; experiments show it performs considerably better than previous policies overall.
Abstract: With the large-scale deployment of virtualized data centers, energy consumption and SLA (Service Level Agreement) violations have become urgent issues to be solved, and it is essential to design energy-aware allocation policies that reduce both energy consumption and SLA violations. In this paper, we propose novel allocation and selection policies for dynamic virtual machine (VM) consolidation in virtualized data centers to reduce energy consumption and SLA violations. Firstly, we use the mean and standard deviation of the CPU utilization of VMs to determine whether hosts are overloaded; secondly, we use the maximum positive correlation coefficient to select VMs for migration from the overloaded hosts. Although the proposed allocation and selection policies perform slightly worse than previous ones in energy consumption alone, experiments show that they perform considerably better on the whole.

Proceedings Article
24 Apr 2012
TL;DR: An understanding of oversubscription in the cloud is developed through modeling and simulations, and the relationship between overload mitigation schemes and SLAs is explored.
Abstract: Oversubscription can leverage underutilized capacity in the cloud but can lead to overload. A cloud provider must manage overload due to oversubscription to maximize its profit while minimizing any service level agreement (SLA) violations. This paper develops an understanding of oversubscription in the cloud through modeling and simulations, and explores the relationship between overload mitigation schemes and SLAs.

Proceedings ArticleDOI
04 Jan 2012
TL;DR: This paper attempts to identify issues and their corresponding challenges, proposing to use risk and Service Level Agreement management as the basis for a service level framework to improve governance, risk and compliance in cloud computing environments.
Abstract: Cloud Computing has become a mainstream technology offering a commoditized approach to software, platform and infrastructure as a service over the Internet on a global scale. This raises important new security issues beyond traditional perimeter-based approaches. This paper attempts to identify these issues and their corresponding challenges, proposing to use risk and Service Level Agreement (SLA) management as the basis for a service level framework to improve governance, risk and compliance in cloud computing environments.

01 Jan 2012
TL;DR: This paper presents a detailed study of the security of IaaS components, identifying vulnerabilities and countermeasures, and argues that the Service Level Agreement should be given considerable importance.
Abstract: Cloud computing is the current buzzword in the market. It is a paradigm in which resources can be leveraged on a per-use basis, reducing the cost and complexity for service providers. Cloud computing promises to cut operational and capital costs and, more importantly, lets IT departments focus on strategic projects instead of keeping datacenters running. It is much more than the simple Internet: it is a construct that allows users to access applications that actually reside at a location other than the user's own computer or other Internet-connected device. There are numerous benefits to this construct. For instance, another company hosts the user's application; that company bears the cost of the servers and manages the software updates, and, depending on the contract, the user pays less, i.e., for the service only. Confidentiality, Integrity, Availability, Authenticity, and Privacy are essential concerns for both Cloud providers and consumers. Infrastructure as a Service (IaaS) serves as the foundation layer for the other delivery models, and a lack of security in this layer will certainly affect the delivery models built upon it, i.e., PaaS and SaaS. This paper presents a detailed study of the security of IaaS components and identifies vulnerabilities and countermeasures. The Service Level Agreement should also be given considerable importance.

Journal ArticleDOI
TL;DR: This paper shows how monitoring services have to be described and deployed, and then how they have to be executed to enforce accurate penalties by eliminating service level agreement failure cascading effects on violation detection.
Abstract: Cloud computing offers virtualized computing elements on demand in a pay-as-you-go manner. The major motivations to adopt Cloud services include no upfront investment in infrastructure and the transfer of responsibility for maintenance, backups, and license management to Cloud providers. However, one of the key challenges that holds businesses back from adopting Cloud computing services is that, by migrating to the Cloud, they move some of their information and services out of their direct control. Their main concern is how well the Cloud providers keep their information (security) and deliver their services (performance). To cope with this challenge, several service level agreement management systems have been proposed. However, monitoring service deployment, a major responsibility of those systems, has not yet been deeply investigated. Therefore, this paper shows how monitoring services have to be described, deployed (discovered and ranked), and then how they have to be executed to enforce accurate penalties by eliminating service level agreement failure cascading effects on violation detection. Copyright © 2011 John Wiley & Sons, Ltd.

Proceedings ArticleDOI
24 Jun 2012
TL;DR: This paper presents an approach for optimally scheduling incoming requests to virtual computing resources in the cloud, so that the sum of payments for resources and loss incurred by service level agreement violations is minimized.
Abstract: Providers of applications deployed in an Infrastructure-as-a-Service cloud permanently face the decision of whether it is more cost-efficient to scale up (i.e., rent more resources from the cloud) or to delay incoming requests, even though doing so may lead to dissatisfied customers and broken service level agreements. This decision is further complicated by the fact that not all customers have the same agreements, and not all requests require the same amount of resources devoted to them. In this paper, we present an approach for optimally scheduling incoming requests to virtual computing resources in the cloud, so that the sum of payments for resources and losses incurred by service level agreement violations is minimized. We discuss our approach based on an illustrative use case. Furthermore, we present a numerical evaluation based on real-life request data, which shows that our agreement-aware algorithm improves upon earlier work that does not take service level agreements into account.
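The scale-up-versus-delay trade-off described above can be illustrated with a toy cost comparison; the parameters (a VM-hour price, a per-step SLA penalty, a penalty-free delay window) are hypothetical and greatly simplify the paper's actual scheduling model:

```python
def cheapest_action(vm_hour_price, sla_penalty_per_step,
                    steps_until_capacity_free, penalty_free_delay):
    # Rent an extra VM when that is cheaper than the SLA penalty
    # accumulated while the request waits for existing capacity.
    # Delay steps within the agreed penalty-free window cost nothing.
    billable_delay = max(0, steps_until_capacity_free - penalty_free_delay)
    delay_cost = billable_delay * sla_penalty_per_step
    return "scale_up" if vm_hour_price < delay_cost else "delay"
```

A request backed by a strict agreement (high per-step penalty, small penalty-free window) tips the decision toward scaling up, while a lenient agreement makes delaying cheaper; this is the per-agreement differentiation the abstract highlights.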

Journal ArticleDOI
TL;DR: An economics-based resource allocation model is presented to derive the service reliability of Grid computing from cellular automata Monte Carlo simulation (CA-MCS) for the service level agreement, and to evaluate the total rental-time cost of Grid resources by virtual payment assessment for the free-rider problem.

Patent
04 Oct 2012
TL;DR: In this article, a method for migration from a multitenant database is shown that includes building an analytical model for each of a set of migration methods based on database characteristics; predicting performance of the set of migration methods using the respective analytical model with respect to tenant service level agreements (SLAs) and current and predicted tenant workloads, where the prediction includes a migration speed and an SLA violation severity.
Abstract: A method for migration from a multitenant database is shown that includes building an analytical model for each of a set of migration methods based on database characteristics; predicting performance of the set of migration methods using the respective analytical model with respect to tenant service level agreements (SLAs) and current and predicted tenant workloads, where the prediction includes a migration speed and an SLA violation severity; and selecting a best migration method from the set of migration methods according to the respective predicted migration speeds and SLA violation severities.
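The final selection step, choosing a migration method from per-method predictions, might look like the following sketch; combining predicted migration time and SLA violation severity into a single weighted score is an illustrative assumption, not the patent's actual selection rule:

```python
def select_migration_method(predictions, severity_weight=10.0):
    # predictions maps each candidate method name to the output of its
    # analytical model: predicted migration time (e.g., seconds) and a
    # predicted SLA violation severity (dimensionless). The weight that
    # trades one against the other is a hypothetical tuning knob.
    def score(method):
        p = predictions[method]
        return (p["migration_time"]
                + severity_weight * p["sla_violation_severity"])
    return min(predictions, key=score)
```

With a large severity weight the selector favors slow-but-gentle methods (e.g., live migration) for SLA-sensitive tenants, and with a small weight it favors fast-but-disruptive ones; the choice flips as the weight crosses the break-even point between the two predicted costs.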

Proceedings ArticleDOI
20 Sep 2012
TL;DR: This paper introduces a method for finding semantically equal SLA elements from differing SLAs by utilizing several machine learning algorithms and utilizes this method to enable automatic selection of optimal service offerings for Cloud and Grid users.
Abstract: Cloud computing is a novel computing paradigm that offers data, software, and hardware services in a manner similar to traditional utilities such as water, electricity, and telephony. Usually, in Cloud and Grid computing, contracts between traders are established using Service Level Agreements (SLAs), which include objectives of service usage. However, due to the rapidly growing number of service offerings and the lack of a standard for their specification, manual service selection is a costly task, preventing the successful implementation of ubiquitous computing on demand. In order to counteract these issues, automatic methods for matching SLAs are necessary. In this paper, we introduce a method for finding semantically equal SLA elements from differing SLAs by utilizing several machine learning algorithms. Moreover, we utilize this method to enable automatic selection of optimal service offerings for Cloud and Grid users. Finally, we introduce a framework for automatic SLA management, present a simulation-based evaluation, and demonstrate several significant benefits of our approach for Cloud and Grid users.
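As a stand-in for the machine-learning matcher described above, the idea of pairing semantically equal elements from two differing SLAs can be illustrated with plain string similarity; the element names and the 0.6 threshold are hypothetical, and the paper's actual approach uses learned models rather than this heuristic:

```python
from difflib import SequenceMatcher

def match_sla_elements(sla_a, sla_b, threshold=0.6):
    # Greedily pair each element name in one SLA with the most similar
    # name in the other, keeping only pairs above a similarity threshold.
    matches = {}
    for a in sla_a:
        best, best_score = None, threshold
        for b in sla_b:
            score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if score > best_score:
                best, best_score = b, score
        if best is not None:
            matches[a] = best
    return matches
```

A learned matcher would additionally catch pairs with no lexical overlap (e.g., "Uptime" vs. "Availability"), which is precisely where string similarity fails and machine learning earns its keep.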

Journal ArticleDOI
TL;DR: The role of trust in service workflows and their contexts from a wide variety of literatures is explored and various mechanisms, architecture, techniques, standards, and frameworks are explained along the way with discussions.
Abstract: With fast-growing Internet technology, service-based interactions are prevalent and appear in several forms such as e-commerce, content providers, Virtual Organizations, Peer-to-Peer, Web Services, Grids, Cloud Computing, and individual interactions. This demands an effective mechanism to establish trust among participants in a high-level, abstract way, capturing relevant factors ranging over Service Level Agreements, security policies, requirements, regulations, constraints, Quality of Service, reputation, and recommendation. Trust is platform-independent and flexible enough to be seamlessly integrated into heterogeneous domains and to interoperate with different security solutions in distributed environments. Establishing trust in a service workflow leads to the willingness of services to participate. Coordinating service workflows without trust consideration may pose higher risks, possibly resulting in poor performance, additional vulnerabilities, or failures. Although trust in service workflows and relevant contexts has been studied for the past decade, standard development is still immature. Nowadays, trust approaches to service workflows span a large area of research that can hardly be classified into a comprehensive survey. This survey examines and explores the role of trust in service workflows and their contexts from a wide variety of literature. Various mechanisms, architectures, techniques, standards, and frameworks are explained along the way with discussions. A working trust definition and classification are newly provided and supported with examples.