
Showing papers on "Service Level Agreement" published in 2017


Journal ArticleDOI
TL;DR: A DVFS policy that reduces power consumption while preventing performance degradation, and a DVFS-aware consolidation policy that optimizes consumption are proposed, considering the DVFS configuration that would be necessary when mapping Virtual Machines to maintain Quality of Service.
Abstract: Computational demand in data centers is increasing because of the growing popularity of Cloud applications. However, data centers are becoming unsustainable in terms of power consumption and growing energy costs, so Cloud providers face the major challenge of placing them on a more scalable curve. Also, Cloud services are provided under strict Service Level Agreement conditions, so trade-offs between energy and performance have to be taken into account. Techniques such as Dynamic Voltage and Frequency Scaling (DVFS) and consolidation are commonly used to reduce the energy consumption in data centers, although they are applied independently and their effects on Quality of Service are not always considered. Thus, understanding the relationship between power, DVFS, consolidation, and performance is crucial to enable energy-efficient management at the data center level. In this work, we propose a DVFS policy that reduces power consumption while preventing performance degradation, and a DVFS-aware consolidation policy that optimizes consumption, considering the DVFS configuration that would be necessary when mapping Virtual Machines to maintain Quality of Service. We have performed an extensive evaluation on the CloudSim toolkit using real Cloud traces and an accurate power model based on data gathered from real servers. Our results demonstrate that including DVFS awareness in workload management provides substantial energy savings of up to 41.62% for scenarios under dynamic workload conditions. These outcomes outperform previous approaches that do not consider the integrated use of DVFS and consolidation strategies.
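As a toy illustration of the interplay the abstract describes, the sketch below picks, for each host, the lowest DVFS frequency that still serves the consolidated VMs' demand within a utilization cap, then estimates power with a simple cubic dynamic-power model. The frequencies, coefficients, and cap are invented for illustration and are not taken from the paper.

```python
# Hypothetical DVFS-aware placement step: pick the lowest CPU frequency that
# serves the mapped VMs' total demand within a utilization cap, then estimate
# power with a static term plus a cubic dynamic term. All numbers are made up.

FREQS_GHZ = [1.2, 1.8, 2.4, 3.0]     # available P-states
P_STATIC_W = 60.0                     # idle power draw
K_DYN = 2.5                           # dynamic power coefficient
UTIL_CAP = 0.8                        # QoS headroom: never run above 80% busy

def min_frequency(demand_ghz):
    """Lowest frequency whose usable capacity (freq * UTIL_CAP) covers the demand."""
    for f in FREQS_GHZ:
        if demand_ghz <= f * UTIL_CAP:
            return f
    return None  # host cannot serve this demand at any frequency

def host_power(demand_ghz):
    f = min_frequency(demand_ghz)
    if f is None:
        return None
    util = demand_ghz / f
    return P_STATIC_W + K_DYN * (f ** 3) * util

# Consolidating two 0.5 GHz VMs onto one host needs only the 1.8 GHz state,
# instead of keeping two hosts powered on at 1.2 GHz each.
print(round(host_power(1.0), 1))
```

The point of the sketch is the coupling the paper exploits: the consolidation decision (how much demand lands on a host) fixes the minimum DVFS state, and therefore the power drawn.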

84 citations


Journal ArticleDOI
TL;DR: A Compliance-based Multi-dimensional Trust Evaluation System (CMTES) is proposed that enables cloud customers (CCs) to determine the trustworthiness of a CSP from different perspectives, since trust is a subjective concept.

81 citations


Journal ArticleDOI
TL;DR: A cost-saving super professional executor is provided which reduces the cost of renting virtual machines by 7% while improving the final service level agreement of the application provider, and controls the mechanism's oscillation in decision-making.

80 citations


Journal ArticleDOI
TL;DR: A genetic algorithm was used to achieve global optimization with regard to service level agreement, service clustering was used to reduce the search space of the problem, and association rules were used to suggest composite services based on their histories, enhancing service composition efficiency.
Abstract: One of the requirements of QoS-aware service composition in cloud computing environment is that it should be executed on-the-fly. It requires a trade-off between optimality and the execution speed of service composition. In line with this purpose, many researchers used combinatorial methods in previous works to achieve optimality within the shortest possible time. However, due to the ever-increasing number of services which leads to the enlargement of the search space of the problem, previous methods do not have adequate efficiency in composing the required services within reasonable time. In this paper, genetic algorithm was used to achieve global optimization with regard to service level agreement. Moreover, service clustering was used for reducing the search space of the problem, and association rules were used for a composite service based on their histories to enhance service composition efficiency. The conducted experiments acknowledged the higher efficiency of the proposed method in comparison with similar related works.
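The composition approach described above can be sketched, in very reduced form, as a genetic algorithm whose chromosome picks one candidate service per task and whose fitness aggregates QoS (here, bottleneck throughput minus total cost). The candidate values, fitness function, and GA parameters are all invented; the paper's clustering and association-rule stages are omitted.

```python
import random
# Minimal GA sketch for QoS-aware composition. Each gene selects one candidate
# service for a task; fitness is a toy aggregate QoS score. Illustrative only.

random.seed(7)

# CANDIDATES[task] = list of (cost, throughput) options for that task
CANDIDATES = [
    [(5, 20), (8, 35), (3, 10)],
    [(4, 15), (9, 40), (6, 25)],
    [(7, 30), (2, 8),  (5, 22)],
]

def fitness(chrom):
    cost = sum(CANDIDATES[t][g][0] for t, g in enumerate(chrom))
    thr  = min(CANDIDATES[t][g][1] for t, g in enumerate(chrom))  # bottleneck
    return thr - cost

def evolve(pop_size=20, gens=40, mut=0.1):
    pop = [[random.randrange(len(c)) for c in CANDIDATES] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(CANDIDATES))   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut:                    # point mutation
                i = random.randrange(len(CANDIDATES))
                child[i] = random.randrange(len(CANDIDATES[i]))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Clustering (as in the paper) would shrink each `CANDIDATES[t]` list before the GA runs, which is where the speed-up comes from.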

78 citations


Patent
04 Apr 2017
TL;DR: In this paper, the authors proposed a method for provisioning storage for virtual machines by meeting a service level agreement (SLA) which pertains to the operation of a virtual machine.
Abstract: Methods for provisioning storage for virtual machines by meeting a service level agreement (SLA) are disclosed. The SLA pertains to the operation of a virtual machine. An example of the method includes monitoring the workload of the first virtual machine; establishing at least one service level objective (SLO) in response to the observed workload; determining an SLA that meets the at least one SLO, wherein the SLA defines the time the SLO is satisfied; and provisioning at least one resource used by the first virtual machine in response to the SLA not being satisfied, wherein the provisioning causes the SLA to be satisfied.

74 citations


Proceedings ArticleDOI
21 May 2017
TL;DR: Two QoS-aware placement strategies are presented, an optimal solution based on an Integer Linear Programming (ILP) problem formulation and an efficient heuristic that obtains near-optimal solutions, aiming to support service differentiation between the users while minimizing the associated service deployment cost for the operator.
Abstract: With Network Function Virtualization (NFV), network functions are deployed as modular software components on commodity hardware, and can be further chained to provide services, offering much greater flexibility and lower service deployment cost for network operators. At the same time, replacing network functions implemented in purpose-built hardware with software modules poses a great challenge for the operator to maintain the same level of performance. The grade of service promised to the end users is formalized in the Service Level Agreement (SLA) that typically contains QoS parameters such as minimum guaranteed data rate, maximum end-to-end latency, port availability, and packet loss. State-of-the-art solutions can guarantee only data rate and latency requirements, while service availability, which is an important service differentiator, is mostly neglected. This paper focuses on the placement of virtualized network functions, aiming to support service differentiation between the users while minimizing the associated service deployment cost for the operator. Two QoS-aware placement strategies are presented: an optimal solution based on an Integer Linear Programming (ILP) problem formulation and an efficient heuristic that obtains near-optimal solutions. Considering a national core network case study, we show the cost overhead of availability-awareness, as well as the risk of SLA violation when the availability constraint is neglected. We also compare the proposed function placement heuristic to the optimal solution in terms of cost efficiency and execution time, and demonstrate that it can provide a good estimation of the deployment cost in much shorter time.
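The availability dimension discussed above can be illustrated with a minimal greedy sketch: keep adding the cheapest replica of a function until the parallel availability 1 - prod(1 - a_i) meets the SLA target. This is our simplification, not the paper's ILP or heuristic; node costs and availabilities are invented.

```python
# Greedy availability-aware replica placement sketch. Replicas fail
# independently, so k replicas give availability 1 - prod(1 - a_i).

def replicas_for_sla(nodes, target):
    """nodes: list of (cost, availability). Returns (chosen, availability),
    or (None, best_reached) if the target is unreachable."""
    chosen, avail = [], 0.0
    for cost, a in sorted(nodes):                 # cheapest replicas first
        chosen.append((cost, a))
        avail = 1.0 - (1.0 - avail) * (1.0 - a)   # parallel availability
        if avail >= target:
            return chosen, avail
    return None, avail

nodes = [(10, 0.99), (12, 0.995), (20, 0.999)]
chosen, avail = replicas_for_sla(nodes, 0.9999)
print(len(chosen), round(avail, 6))
```

Two cheap replicas (0.99 and 0.995) already reach "four nines" here, which is the cost-of-availability trade-off the case study in the paper quantifies.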

74 citations


Journal ArticleDOI
TL;DR: This paper explores the suitability of many-objective evolutionary algorithms for addressing the binding problem of web services on the basis of a real-world benchmark with 9 QoS properties, enabling its adoption by intelligent and decision-support systems in the field of service oriented computation.
Abstract: QoS-aware web service composition requires multiple simultaneous QoS attributes. Having conflicting QoS properties requires computationally efficient approaches. A comparative experimental study of multi- and many-objective algorithms is presented. Many-objective proposals can promote certain QoS properties while keeping the trade-off. Web service based applications often invoke services provided by third parties in their workflow. The Quality of Service (QoS) provided by the invoked supplier can be expressed in terms of the Service Level Agreement specifying the values contracted for particular aspects like cost or throughput, among others. In this scenario, intelligent systems can support the engineer in scrutinising the service market in order to select those candidates that best fit the expected composition, focusing on different QoS aspects. This search problem, also known as QoS-aware web service composition, is characterised by the presence of many diverse QoS properties to be simultaneously optimised from a multi-objective perspective. Nevertheless, as the number of QoS properties considered during the design phase increases and a larger number of decision factors come into play, it becomes more difficult to find the most suitable candidate solutions, so more sophisticated techniques are required to explore and return diverse, competitive alternatives. With this aim, this paper explores the suitability of many-objective evolutionary algorithms for addressing the binding problem of web services on the basis of a real-world benchmark with 9 QoS properties. A complete comparative study demonstrates that these techniques, never before applied to this problem, can achieve a better trade-off between all the QoS properties, or even promote specific QoS properties while keeping high values for the rest. In addition, this search process can be performed within a reasonable computational cost, enabling its adoption by intelligent and decision-support systems in the field of service-oriented computation.

71 citations


Journal ArticleDOI
TL;DR: This paper presents two SLA-based task scheduling algorithms, namely SLA-MCT and SLA-Min-Min, for heterogeneous multi-cloud environments, and shows that the proposed algorithms properly balance between makespan and gain cost of the services in comparison with other algorithms.
Abstract: Service-level agreement (SLA) is a major issue in cloud computing because it defines important parameters such as quality of service, uptime, downtime, period of service, pricing, and security. However, the service may vary from one cloud service provider (CSP) to another. The collaboration of the CSPs in the heterogeneous multi-cloud environment is very challenging, and it is not well covered in the recent literature. In this paper, we present two SLA-based task scheduling algorithms, namely SLA-MCT and SLA-Min-Min, for the heterogeneous multi-cloud environment. The former algorithm is a single-phase scheduling, whereas the latter one is a two-phase scheduling. The proposed algorithms support three levels of SLA determined by the customers. Furthermore, the algorithms incorporate the SLA gain cost for the successful completion of the service and the SLA violation cost for the unsuccessful end of the service. We simulate the proposed algorithms using benchmark and synthetic datasets. The experimental results of the proposed SLA-MCT are compared with three single-phase task scheduling algorithms, namely CLS, Execution-MCT, and Profit-MCT, and the results of the proposed SLA-Min-Min are compared with the two-phase scheduling algorithms Execution-Min-Min and Profit-Min-Min, in terms of four performance metrics: makespan, average cloud utilization, gain, and penalty cost of the services. The results clearly show that the proposed algorithms properly balance between makespan and gain cost of the services in comparison with other algorithms.
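A minimal sketch of MCT-style scheduling with an SLA deadline, gain, and penalty is shown below, assuming each task goes to the cloud with the smallest completion time. The exact SLA-MCT cost model in the paper may differ; the task data here are invented.

```python
# Minimum-completion-time scheduling with a toy SLA term: a task earns its
# gain if it finishes by its deadline, otherwise it pays a violation penalty.

def sla_mct(tasks, n_clouds):
    """tasks: list of (exec_times_per_cloud, deadline, gain, penalty).
    Returns (makespan, total SLA profit)."""
    ready = [0.0] * n_clouds          # when each cloud becomes free
    profit = 0.0
    for exec_times, deadline, gain, penalty in tasks:
        # MCT rule: pick the cloud with the earliest completion time
        cloud = min(range(n_clouds), key=lambda c: ready[c] + exec_times[c])
        ready[cloud] += exec_times[cloud]
        profit += gain if ready[cloud] <= deadline else -penalty
    return max(ready), profit

tasks = [
    ([4.0, 6.0], 5.0, 10.0, 5.0),
    ([3.0, 2.0], 4.0, 10.0, 5.0),
    ([5.0, 5.0], 8.0, 10.0, 5.0),
]
print(sla_mct(tasks, 2))
```

The paper's two-phase SLA-Min-Min variant would, in each round, first compute the best completion time for every unscheduled task and then schedule the task with the overall minimum, rather than processing tasks in arrival order as above.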

70 citations


Journal ArticleDOI
TL;DR: This paper proposes a three-dimensional virtual resource scheduling method for energy saving in cloud computing (TVRSM), and presents a bin-packing-based heuristic VR allocation algorithm and a multi-dimensional power-aware VR scheduling algorithm.

70 citations


Journal ArticleDOI
TL;DR: An adaptive heuristics energy-aware algorithm is proposed, which creates an upper CPU utilization threshold from recent CPU utilization history to detect overloaded hosts, together with dynamic VM selection algorithms to consolidate VMs from overloaded or underloaded hosts.
Abstract: Mobile cloud computing (MCC) provides various cloud computing services to mobile users. The rapid growth of MCC users requires large-scale MCC data centers to provide them with data processing and storage services. The growth of these data centers directly impacts electrical energy consumption, which affects businesses as well as the environment through carbon dioxide (CO2) emissions. Moreover, a large amount of energy is wasted keeping servers running during low workload. To reduce the energy consumption of mobile cloud data centers, an energy-aware host overload detection algorithm and virtual machine (VM) selection algorithms for VM consolidation are required during detected host underload and overload. After allocating resources to all VMs, underloaded hosts are required to assume energy-saving mode in order to minimize power consumption. To address this issue, we propose an adaptive heuristics energy-aware algorithm, which creates an upper CPU utilization threshold using recent CPU utilization history to detect overloaded hosts, and dynamic VM selection algorithms to consolidate VMs from overloaded or underloaded hosts. The goal is to minimize total energy consumption and maximize Quality of Service, including the reduction of service level agreement (SLA) violations. The CloudSim simulator is used to validate the algorithm, and simulations are conducted on real workload traces from 10 different days, as provided by PlanetLab.
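The adaptive upper threshold can be illustrated as below, assuming (our choice, not necessarily the paper's exact statistic) that the threshold tightens with the median absolute deviation of recent utilization history, so volatile hosts are flagged as overloaded earlier.

```python
# Adaptive overload threshold from recent CPU-utilization history: the more
# volatile the history, the lower the utilization cap. The MAD statistic and
# the safety factor are our illustrative assumptions.

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2.0

def adaptive_threshold(history, safety=2.5):
    """Upper utilization threshold: 1 - safety * MAD of recent history."""
    med = median(history)
    mad = median([abs(x - med) for x in history])
    return max(0.0, 1.0 - safety * mad)

def is_overloaded(history, current_util):
    return current_util > adaptive_threshold(history)

stable   = [0.50, 0.52, 0.49, 0.51, 0.50]
volatile = [0.20, 0.80, 0.35, 0.90, 0.50]
print(adaptive_threshold(stable), adaptive_threshold(volatile))
```

A host with steady load keeps a high cap (here about 0.975), while an erratic host is capped much lower, triggering migration before a demand spike can violate the SLA.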

56 citations


Journal ArticleDOI
TL;DR: A scheme for green energy management in the presence of explicit and implicit integration of renewable energy in data centers is presented, and the greenSLA algorithm is introduced, which leverages the concept of virtualization of green energy to provide per-interval-specific Green SLAs.
Abstract: While the proliferation of cloud services has greatly impacted our society, how green these services are is yet to be answered. Although demand for green services has escalated due to societal awareness, the approaches to provide green services and establish Green SLAs remain unclear for cloud or infrastructure providers. The main challenge for a cloud provider is to manage Green SLAs with their customers while satisfying their business objectives, such as maximizing profits by lowering expenditure on green energy. Since a Green SLA needs to be proposed based on the presence of green energy, the intermittent nature of renewable sources makes it difficult to achieve. In response, this paper presents a scheme for green energy management in the presence of explicit and implicit integration of renewable energy in the data center. More specifically, we propose three contributions: i) we introduce the concept of virtualization of green energy to address the uncertainty of green energy availability; ii) we extend the Cloud Service Level Agreement (CSLA) language to support Green SLA by introducing two new threshold parameters; and iii) we introduce the greenSLA algorithm, which leverages the concept of virtualization of green energy to provide per-interval-specific Green SLAs. Experiments were conducted with a real workload profile from PlanetLab and a server power model from SPECpower to demonstrate that Green SLAs can be successfully established and satisfied without incurring higher cost.

Journal ArticleDOI
TL;DR: A novel cloud service selection architecture, a Hypergraph-based Computational Model (HGCM), and the Minimum Distance-Helly Property (MDHP) algorithm are proposed for ranking cloud service providers; the ranking algorithm is found to be scalable and computationally attractive.

Journal ArticleDOI
TL;DR: An optimization framework is presented for EE and SE maximization in a network where radio resources are shared among multiple operators, in which the constraints of different operators are handled by two different multi-objective optimization approaches, namely the utility-profile and scalarization methods.
Abstract: Along with spectral efficiency (SE), energy efficiency (EE) is a key performance metric for the design of 5G and beyond-5G (B5G) wireless networks. At the same time, infrastructure sharing among multiple operators has also emerged as a new trend in wireless communication networks. This paper presents an optimization framework for EE and SE maximization in a network where radio resources are shared among multiple operators. We define a heterogeneous service level agreement (SLA) framework for a shared network, in which the constraints of different operators are handled by two different multi-objective optimization approaches, namely the utility-profile and scalarization methods. Pareto-optimal solutions are obtained by merging these approaches with the theory of generalized fractional programming. The approach applies to both noise-limited and interference-limited systems, with single-carrier or multi-carrier transmission. Extensive numerical results illustrate the effect of operator-specific SLA requirements on the global SE and EE. Three network scenarios are considered in the numerical results, each corresponding to a different SLA with different operator-specific EE and SE constraints.
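The fractional-programming machinery can be illustrated on the simplest single-link EE problem, maximizing log2(1 + g*p) / (P_c + p), solved by Dinkelbach's iteration with a grid-search inner step. All parameters are illustrative; the paper's multi-operator, multi-carrier SLA setting is far richer.

```python
import math
# Dinkelbach's method for a toy energy-efficiency maximization: at each step,
# solve max_p rate(p) - lam * power(p), then update lam to the achieved ratio.
# Channel gain g, circuit power p_c, and power budget p_max are invented.

def dinkelbach(g=10.0, p_c=1.0, p_max=5.0, iters=30):
    lam = 0.0
    grid = [i * p_max / 1000 for i in range(1, 1001)]   # crude inner solver
    for _ in range(iters):
        # inner problem: argmax_p  log2(1 + g p) - lam (p_c + p)
        p = max(grid, key=lambda q: math.log2(1 + g * q) - lam * (p_c + q))
        lam = math.log2(1 + g * p) / (p_c + p)          # achieved EE (bit/J)
    return lam, p

ee, p = dinkelbach()
print(round(ee, 3), round(p, 3))
```

Note how the optimal transmit power is well below the 5 W budget: past a point, extra power buys logarithmic rate at linear energy cost, which is exactly the EE/SE tension the abstract describes.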

Journal ArticleDOI
TL;DR: The Optimized Personalized Viable SLA (OPV-SLA) framework is proposed, which assists a service provider to form a viable SLA and to start managing SLA violations before an SLA is formed and executed; the applicability of the framework is demonstrated through experiments.

Journal ArticleDOI
TL;DR: A systematic literature mapping is performed to enumerate existing solutions and open issues in security SLAs in cloud computing, along with an analysis of the state of the art and a classification of the selected papers.

Journal ArticleDOI
TL;DR: This paper first formulates the network functions composition problem as a non-linear optimization model to accurately capture the congestion of physical resources, and proposes innovative orchestration mechanisms based on both centralized and distributed approaches, aimed at unleashing the potential of the NFV technology.
Abstract: Network Functions Virtualization (NFV) has recently gained momentum among network operators as a means to share their physical infrastructure among virtual operators, which can independently compose and configure their communication services. However, the spatio-temporal correlation of traffic demands and computational loads can result in high congestion and low network performance for virtual operators, thus leading to service level agreement breaches. In this paper, we analyze the congestion resulting from the sharing of the physical infrastructure and propose innovative orchestration mechanisms based on both centralized and distributed approaches, aimed at unleashing the potential of the NFV technology. In particular, we first formulate the network functions composition problem as a non-linear optimization model to accurately capture the congestion of physical resources. To further simplify the network management, we also propose a dynamic pricing strategy of network resources, proving that the resulting system achieves a stable equilibrium in a completely distributed fashion, even when all virtual operators independently select their best network configuration. Numerical results show that the proposed approaches consistently reduce resource congestion. Furthermore, the distributed solution well approaches the performance that can be achieved using a centralized network orchestration system.

Journal ArticleDOI
TL;DR: It is shown that, besides energy consumption, service level agreement (SLA) violations also severely degrade the cost-efficiency of data centers, and two heuristics are proposed: Least-Reliable-First (LRF) and Decreased-Density-Greedy (DDG).
Abstract: Cost savings have become a significant challenge in the management of data centers. In this paper, we show that, besides energy consumption, service level agreement (SLA) violations also severely degrade the cost-efficiency of data centers. We present online VM placement algorithms for increasing the cloud provider's revenue. First, the First-Fit and Harmonic algorithms are devised for VM placement without considering migrations. Both algorithms achieve the same worst-case performance, matching the lower bound of the competitive ratio; however, Harmonic can create more than 10 percent more revenue than First-Fit when the job arrival rate is greater than 1.0. Second, we formulate an optimization problem of maximizing revenue from VM migration, and prove it NP-hard by a reduction from the 3-Partition problem. Therefore, we propose two heuristics: Least-Reliable-First (LRF) and Decreased-Density-Greedy (DDG). Experiments demonstrate that DDG yields more revenue than LRF when migration cost is low, yet leads to losses when the SLA penalty is low or the job arrival rate is high, due to the large number of migrations. Finally, we compare the four algorithms above with algorithms adopted in OpenStack using a real trace, and find that the results are consistent with the ones using synthetic data.
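The two migration-free placements named in the abstract can be sketched as follows: First-Fit packs a VM into the first host with room, while Harmonic first buckets VMs by size class (1/(k+1), 1/k] and packs each class into its own hosts. The sketch only counts hosts used; the paper's revenue and SLA terms are omitted.

```python
import math
# Toy First-Fit vs Harmonic packing of VM CPU demands into unit-capacity hosts.

def first_fit(sizes, cap=1.0):
    hosts = []
    for s in sizes:
        for h in range((len(hosts))):
            if hosts[h] + s <= cap:
                hosts[h] += s
                break
        else:
            hosts.append(s)          # no host had room: power on a new one
    return len(hosts)

def harmonic(sizes, classes=4, cap=1.0):
    """Bucket items by how many fit per host (class k), pack classes apart."""
    buckets = {}
    for s in sizes:
        k = min(classes, math.floor(cap / s))
        buckets.setdefault(k, []).append(s)
    return sum(first_fit(b, cap) for b in buckets.values())

vms = [0.6, 0.3, 0.6, 0.3, 0.6, 0.3]
print(first_fit(vms), harmonic(vms))
```

Note that Harmonic may use more hosts than First-Fit on a given instance (as here, 4 versus 3): its advantage in the paper is measured in revenue under SLA penalties and in online competitive-ratio terms, not in raw host count on every input.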

Journal ArticleDOI
TL;DR: A novel load balancing method involving a well-organized use of resources is proposed, known as the dynamic well-organized load balancing (DWOLB) algorithm, which is a powerful algorithm for reducing the energy consumed in cloud computing.

Journal ArticleDOI
01 Dec 2017
TL;DR: A decision-theoretic approach is proposed to make live migration decisions that take into account live migration overheads, and it achieves better performance and higher stability compared to approaches that do not take into account the uncertainty of long-term predictions and the live migration overhead.
Abstract: Dynamic workloads in cloud computing can be managed through live migration of virtual machines from overloaded or underloaded hosts to other hosts to save energy and/or mitigate performance-related Service Level Agreement (SLA) violations. The challenging issue is how to detect when a host is overloaded so as to initiate live migration actions in time. In this paper, a new approach to making long-term predictions of the resource demands of virtual machines for host overload detection is presented. To take into account the uncertainty of long-term predictions, a probability distribution model of the prediction error is built. Based on the probability distribution of the prediction error, a decision-theoretic approach is proposed to make live migration decisions that take into account live migration overheads. Experimental results using the CloudSim simulator and PlanetLab workloads show that the proposed approach achieves better performance and higher stability compared to other approaches that do not take into account the uncertainty of long-term predictions and the live migration overhead.
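The decision-theoretic step can be sketched as an expected-cost test: given a predicted demand and a discrete distribution over the prediction error, migrate only when the expected SLA-penalty cost of staying exceeds the migration cost. The numbers and the cost model below are illustrative, not the paper's.

```python
# Expected-cost migration decision under prediction uncertainty.

def expected_violation_cost(pred_util, error_dist, capacity, penalty):
    """error_dist: list of (error, probability). A violation costs `penalty`
    whenever actual demand (prediction + error) exceeds host capacity."""
    return sum(p * penalty
               for e, p in error_dist
               if pred_util + e > capacity)

def should_migrate(pred_util, error_dist, capacity=1.0,
                   penalty=10.0, migration_cost=2.0):
    cost_of_staying = expected_violation_cost(pred_util, error_dist,
                                              capacity, penalty)
    return cost_of_staying > migration_cost

# empirical distribution of past prediction errors (error, probability)
err = [(-0.1, 0.2), (0.0, 0.5), (0.1, 0.2), (0.2, 0.1)]
print(should_migrate(0.85, err), should_migrate(0.95, err))
```

At a predicted 85% load only the rare +0.2 error overflows the host, so staying is cheaper; at 95% the overflow probability triples and migration wins, mirroring the paper's idea that the migration overhead itself belongs in the decision.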

Journal ArticleDOI
TL;DR: This paper presents a highly reliable cloud architecture that leverages the 80/20 rule (80% of cluster failures come from 20% of physical machines) to identify failure-prone physical machines, dividing each cluster into reliable and risky sub-clusters.

Journal ArticleDOI
TL;DR: A comprehensive technique for optimum energy consumption and SLA violation reduction is presented, in which the population-based (parallel) simulated annealing (SA) algorithm is used in a Markov chain model for the virtual machine placement policy.
Abstract: Significant savings in the energy consumption, without sacrificing service level agreement (SLA), are an excellent economic incentive for cloud providers. By applying efficient virtual Machine placement and consolidation algorithms, they are able to achieve these goals. In this paper, we propose a comprehensive technique for optimum energy consumption and SLA violation reduction. In the proposed approach, the issues of allocation and management of virtual machines are divided into smaller parts. In each part, new algorithms are proposed or existing algorithms have been improved. The proposed method performs all steps in distributed mode and acts in centralized mode only in the placement of virtual machines that require a global vision. For this purpose, the population-based or parallel simulated annealing (SA) algorithm is used in the Markov chain model for virtual machines placement policy. Simulation of algorithms in different scenarios in the CloudSim confirms better performance of the proposed comprehensive algorithm.
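A compact simulated-annealing sketch of the placement step is given below: a state maps each VM to a host, the objective counts powered-on hosts plus a large penalty for overcommitment (standing in for SLA violations), and a move reassigns one VM. All parameters are invented; the paper's population-based, Markov-chain variant is not reproduced here.

```python
import math, random
# Simulated annealing for a toy VM-placement instance.

random.seed(1)
VMS = [0.4, 0.3, 0.3, 0.2, 0.5, 0.3]   # CPU demands (total 2.0)
N_HOSTS, CAP = 6, 1.0

def energy(state):
    load = [0.0] * N_HOSTS
    for vm, h in enumerate(state):
        load[h] += VMS[vm]
    on = sum(1 for l in load if l > 0)                 # powered-on hosts
    over = sum(max(0.0, l - CAP) for l in load)        # overcommitment
    return on + 100.0 * over       # SLA-violation penalty dominates energy

def anneal(steps=3000, t0=2.0, cooling=0.999):
    state = [random.randrange(N_HOSTS) for _ in VMS]
    best, best_e, t = state[:], energy(state), t0
    for _ in range(steps):
        cand = state[:]
        cand[random.randrange(len(VMS))] = random.randrange(N_HOSTS)
        d = energy(cand) - energy(state)
        if d <= 0 or random.random() < math.exp(-d / t):   # Metropolis rule
            state = cand
            if energy(state) < best_e:
                best, best_e = state[:], energy(state)
        t *= cooling
    return best, best_e

best, best_e = anneal()
print(best_e)
```

With total demand 2.0 and unit hosts, the optimum powers on two hosts; accepting occasional uphill moves is what lets SA escape configurations where a greedy consolidation would get stuck.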

Journal ArticleDOI
TL;DR: This survey identifies the requirements that such management imposes on a PaaS provider: autonomy, scalability, adaptivity, SLA awareness, composability and upgradeability, and delves into the variety of mechanisms proposed to deal with all those requirements.
Abstract: Elasticity is a goal of cloud computing. An elastic system should manage in an autonomic way its resources, being adaptive to dynamic workloads, allocating additional resources when workload is increased and deallocating resources when workload decreases. PaaS providers should manage resources of customer applications with the aim of converting those applications into elastic services. This survey identifies the requirements that such management imposes on a PaaS provider: autonomy, scalability, adaptivity, SLA awareness, composability and upgradeability. This document delves into the variety of mechanisms that have been proposed to deal with all those requirements. Although there are multiple approaches to address those concerns, providers' main goal is maximisation of profits. This compels providers to look for balancing two opposed goals: maximising quality of service and minimising costs. Because of this, there are still several aspects that deserve additional research for finding optimal adaptability strategies. Those open issues are also discussed.

Journal ArticleDOI
TL;DR: A local and global cloud confederation model, namely FnF, that makes an optimal selection decision for target cloud data center(s) by exploiting Fuzzy logic and enhances its decision accuracy by precisely estimating the resource requirements for the big data processing tasks using multiple linear regression is developed.
Abstract: Nowadays, big media healthcare data processing in the cloud has become an effective solution for satisfying the QoS demands of medical users. It can support various healthcare services such as pre-processing, storing, sharing, and analysis of monitored data, as well as acquiring context-awareness. However, to support energy and cost savings, the union of cloud data centers, termed cloud confederation, can be a promising approach that helps a cloud provider overcome the limitation of physical resources. The key challenge in it is to achieve multiple contradictory objectives, e.g., meeting the required level of services defined in the service level agreement, maintaining medical users' application QoS, etc., while maximizing the profit of the cloud provider. In this paper, for executing heterogeneous big healthcare data processing requests from users, we develop a local and global cloud confederation model, namely FnF, that makes an optimal selection decision for target cloud data center(s) by exploiting fuzzy logic. FnF trades off between the profit of the cloud provider and user application QoS in selecting federated data center(s). In addition, FnF enhances its decision accuracy by precisely estimating the resource requirements of the big data processing tasks using multiple linear regression. The proposed FnF model is validated through numerical as well as experimental evaluations. Simulation results depict the effectiveness and efficiency of the FnF model compared to state-of-the-art approaches.
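The regression step mentioned above can be sketched with ordinary least squares via the normal equations, fitting CPU demand as a linear function of hypothetical task features (data size, record count) on synthetic, exactly linear data. The features and coefficients are invented for illustration.

```python
# Multiple linear regression by solving the normal equations (X^T X) b = X^T y
# with Gaussian elimination and partial pivoting. Pure stdlib, no numpy.

def fit_ols(X, y):
    """X: rows [1, x1, x2, ...] (leading 1 = intercept). Returns coefficients."""
    n = len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(n)]
         for i in range(n)]                                   # X^T X
    b = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(n)]  # X^T y
    for i in range(n):                     # forward elimination with pivoting
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    beta = [0.0] * n                       # back substitution
    for i in range(n - 1, -1, -1):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, n))) / A[i][i]
    return beta

# synthetic tasks: cpu = 1 + 2*size + 3*records, so OLS should recover [1, 2, 3]
X = [[1, s, r] for s, r in [(1, 1), (2, 1), (1, 2), (3, 2), (2, 3)]]
y = [1 + 2 * s + 3 * r for _, s, r in X]
beta = fit_ols(X, y)
print([round(v, 6) for v in beta])
```

In FnF's setting, the fitted coefficients would map an incoming request's features to a resource estimate before the fuzzy data-center selection runs.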

Journal ArticleDOI
TL;DR: An analytical approach to percentile-based performance analysis of unreliable infrastructure-as-a-service clouds is presented and it is shown that the optimization problem can be numerically solved through a simulated-annealing method.
Abstract: Through the Internet, a cloud computing system provides shared resources, data, and information to users or tenant users in an on-demand, pay-as-you-go style. It delivers large-scale utility computing services to a wide range of consumers. To ensure that their provisioned service is acceptable, cloud providers must exploit techniques and mechanisms that meet the service-level-agreement (SLA) performance commitment to their clients. Thus, performance issues of cloud infrastructures have been receiving considerable attention by both researchers and practitioners as a prominent activity for improving service quality. This paper presents an analytical approach to percentile-based performance analysis of unreliable infrastructure-as-a-service clouds. The proposed analytical model is capable of calculating percentiles of the request response time under variable load intensities, fault frequencies, multiplexing abilities, and instantiation processing times. A case study based on a real-world cloud is carried out to prove the correctness of the proposed theoretical model. To achieve an optimal performance-cost tradeoff, we formulate the performance model into an optimal capacity decision problem for cost minimization subject to constraints on request rejection and SLA violation rates. We show that the optimization problem can be numerically solved through a simulated-annealing method.
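The percentile view can be illustrated with the classic M/M/1 result (our drastic simplification of the paper's unreliable-cloud model): the response time is exponentially distributed with rate mu - lambda, so any percentile has a closed form.

```python
import math
# p-th percentile of response time in a stable M/M/1 queue.

def response_time_percentile(lam, mu, p):
    """Response time T ~ Exp(mu - lam), so P(T <= t) = 1 - exp(-(mu-lam) t)
    and the p-th percentile is -ln(1 - p) / (mu - lam)."""
    assert lam < mu, "system must be stable (arrival rate < service rate)"
    return -math.log(1.0 - p) / (mu - lam)

# 8 req/s arrivals, 10 req/s service: mean response is 0.5 s, but the 95th
# percentile is about three times that, which is why SLAs quote percentiles.
print(round(response_time_percentile(8.0, 10.0, 0.95), 3))
```

This gap between mean and tail is the motivation for percentile-based SLA commitments: a system can meet an average-response target while a nontrivial fraction of requests still experience unacceptable delay.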

Journal ArticleDOI
TL;DR: A novel VM allocation policy is proposed using the Eagle Strategy of Hybrid Krill Herd (KH) Optimization to enhance the cloud service experience of internet users, proving the capability of the Hybrid KH algorithm over Particle Swarm Optimization, Ant Colony Optimization, Genetic Algorithm, and Simulated Annealing algorithms.

Posted Content
TL;DR: In this paper, the authors presented an optimal tunable-complexity bandwidth manager (TCBM) for the QoS live migration of VMs over a wireless channel from a smartphone to an access point, which minimizes the migration-induced communication energy under SLA-induced hard constraints on the total migration time, downtime, and overall available bandwidth.
Abstract: Live virtual machine migration aims at enabling the dynamic, balanced use of the networking/computing physical resources of virtualized data centers, so as to reduce energy consumption. In this paper, we analytically characterize, prototype in software, and test an optimal tunable-complexity bandwidth manager (TCBM) for the QoS live migration of VMs over a wireless channel from a smartphone to an access point. The goal is the minimization of the migration-induced communication energy under service level agreement (SLA)-induced hard constraints on the total migration time, downtime, and overall available bandwidth.

Journal ArticleDOI
TL;DR: This paper develops an analytical model based on the principles of Markov chains and queueing theory that captures the behavior of a cloud-based firewall service comprising a load balancer and a variable number of virtual firewalls, and derives closed-form formulas to determine the minimal number of cloud firewall instances required to meet the response time specified in the service level agreement.
Abstract: This paper shows how to properly achieve elasticity for network firewalls deployed in a cloud environment. Elasticity is the ability to adapt to workload changes by provisioning and de-provisioning resources in an autonomic manner, such that at each point in time the available resources match the current demand as closely as possible. Elasticity for cloud-based firewalls aims to satisfy an agreed-upon performance measure using only the minimal number of cloud firewall instances. Our contribution lies in determining the number of firewall instances that should be dynamically adjusted in accordance with the incoming traffic load and the targeted rules within the firewall rulebase. To do so, we develop an analytical model based on the principles of Markov chains and queueing theory. The model captures the behavior of a cloud-based firewall service comprising a load balancer and a variable number of virtual firewalls. From the analytical model, we then derive closed-form formulas to determine the minimal number of virtual firewalls required to meet the response time specified in the service level agreement. The model takes as input key system parameters including workload, processing capacity of load balancer and virtual machines, as well as the depth of the targeted firewall rules. We validate our model using discrete-event simulation, and real-world experiments conducted on Amazon Web Services cloud. We also provide numerical examples to show how our model can be used in practice by cloud performance/security engineers to achieve proper elasticity under fluctuating traffic load and variable depth of targeted firewall rules.
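A hedged sizing sketch in the spirit of the closed-form formulas above: model each virtual firewall as an M/M/1 queue fed 1/n of the traffic and find the smallest n whose mean response time meets the SLA target. The paper's actual model (load balancer stage plus rule-depth dependence) is richer than this simplification.

```python
# Smallest number of firewall instances meeting a mean-response-time SLA,
# assuming perfect load balancing and M/M/1 behavior per instance.

def min_firewalls(lam, mu, sla_response, n_max=100):
    """lam: total arrival rate (pkt/s); mu: per-instance service rate;
    sla_response: mean response-time target (s). Returns minimal n or None."""
    for n in range(1, n_max + 1):
        per_instance = lam / n
        # stability first, then the M/M/1 mean response time 1 / (mu - lam/n)
        if per_instance < mu and 1.0 / (mu - per_instance) <= sla_response:
            return n
    return None

# 900 pkt/s total, each instance serves 500 pkt/s, SLA: 10 ms mean response
print(min_firewalls(900.0, 500.0, 0.010))
```

This is the elasticity loop in miniature: re-evaluate n as `lam` (and, in the paper, the targeted rule depth, which lowers the effective `mu`) fluctuates, scaling instances out and in so the SLA holds with no idle surplus.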

Journal ArticleDOI
TL;DR: This article introduces Linked USDL Agreement, a semantic model to specify, manage and share service level agreement descriptions on the Web, part of the Linked USDL family of ontologies that can describe not only technical but also business related aspects of services, incorporating Web principles.
Abstract: Nowadays, service trading over the Web is gaining momentum. In this highly dynamic scenario, both providers and consumers need to formalize their contractual and legal relationship by creating service level agreements. Although some proposals provide models to describe that relationship, they usually cover only technical aspects and provide no explicit semantics for the agreement terms. Furthermore, these models cannot be effectively shared on the Web, since they do not actually follow Web principles. These drawbacks hamper take-up and automatic analysis. In this article, we introduce Linked USDL Agreement, a semantic model to specify, manage and share service level agreement descriptions on the Web. This model is part of the Linked USDL family of ontologies that can describe not only technical but also business-related aspects of services, incorporating Web principles. We validate our proposal by describing agreements in computational and non-computational scenarios, namely cloud computing and business process outsourcing services. Moreover, we evaluate the actual coverage and expressiveness of Linked USDL Agreement by comparing it with existing models. In order to foster its adoption and effectively manage the service level agreement lifecycle, we present an implemented tool that supports creation, automatic analysis, and publication on the Web of agreement descriptions.
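As a rough illustration of what a machine-readable, Web-shareable agreement description looks like, the Turtle sketch below encodes a single guaranteed term of an SLA as RDF. The namespace and the property and class names are placeholders invented for this example, not the actual Linked USDL Agreement vocabulary:

```turtle
@prefix ex:  <http://example.org/sla#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

ex:hostingAgreement a ex:ServiceLevelAgreement ;
    ex:provider ex:CloudHostCo ;
    ex:consumer ex:AcmeCorp ;
    ex:hasTerm  ex:availabilityTerm .

ex:availabilityTerm a ex:GuaranteedTerm ;
    ex:metric   ex:MonthlyUptimePercentage ;
    ex:operator ex:atLeast ;
    ex:value    "99.9"^^xsd:decimal ;
    ex:penalty  "10% service credit" .
```

Because such descriptions are plain RDF, they can be published, linked, and queried with standard Web tooling such as SPARQL, which is what distinguishes this approach from SLA models that do not follow Web principles.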

Journal ArticleDOI
TL;DR: This work presents a mixed integer linear programming (MILP) optimization model for MEC systems that minimizes power consumption while incurring an acceptable amount of delay, and evaluates it under several realistic scenarios to show that it can indeed be used for power optimization of large-scale M EC systems without violating delay constraints.
Abstract: Reducing total power consumption and network delay are among the most pressing challenges facing large-scale Mobile Cloud Computing (MCC) systems and their ability to satisfy the Service Level Agreement (SLA). Such systems utilize cloud computing infrastructure to support offloading some of the user's computationally heavy tasks to the cloud's datacenters. However, the delay incurred by such an offloading process has led to the use of servers (called cloudlets) placed in the physical proximity of the users, creating what is known as Mobile Edge Computing (MEC). The cloudlet-based infrastructure has its own challenges, such as the limited capabilities of the cloudlet system (in terms of its ability to serve different request types from users in vast geographical regions). To cover users' demand for different types of services across vast geographical regions, cloudlets cooperate with each other by passing user requests from one cloudlet to another. This cooperation affects both power consumption and delay. In this work, we present a mixed integer linear programming (MILP) optimization model for MEC systems with these two issues in mind. Specifically, we consider two types of cloudlets: local cloudlets and global cloudlets, which have higher capabilities. A user connects to a local cloudlet and sends all of its traffic to it. If the local cloudlet cannot serve the desired request, the request is moved to another local cloudlet. If no local cloudlet can serve the request, it is moved to a global cloudlet, which can serve all service types. The process of routing requests through the hierarchical network of cloudlets increases power consumption and delay. Our model minimizes power consumption while incurring an acceptable amount of delay. We evaluate it under several realistic scenarios to show that it can indeed be used for power optimization of large-scale MEC systems without violating delay constraints.
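The trade-off the abstract describes can be written down as a small integer program: binary variables assign each request to a cloudlet, the objective sums power costs, and capacity plus a delay budget act as constraints. The sketch below encodes a toy instance of such a model (all data values are invented, and the serving of a request is simplified to a single site choice); because the instance is tiny, it is solved by exhaustive enumeration rather than an MILP solver.

```python
from itertools import product

# Hypothetical example data: 3 requests, 2 local cloudlets (sites 0, 1)
# and 1 global cloudlet (site 2) that is more capable but more power-hungry.
POWER    = [[1.0, 1.0, 3.0],
            [1.0, 1.0, 3.0],
            [1.0, 1.0, 3.0]]   # power cost of serving request i at site j
DELAY    = [[1.0, 2.0, 4.0],
            [2.0, 1.0, 4.0],
            [2.0, 2.0, 4.0]]   # delay of serving request i at site j
CAPACITY = [1, 1, 3]           # max requests per site (global is larger)
DELAY_BUDGET = 10.0            # SLA: total delay must not exceed this

def solve():
    """Brute-force the integer program: minimize power s.t. capacity and delay."""
    n_requests, n_sites = len(POWER), len(CAPACITY)
    best, best_assign = float("inf"), None
    for assign in product(range(n_sites), repeat=n_requests):
        # capacity constraint: no site serves more requests than it can hold
        if any(assign.count(j) > CAPACITY[j] for j in range(n_sites)):
            continue
        # delay constraint: total delay stays within the SLA budget
        if sum(DELAY[i][j] for i, j in enumerate(assign)) > DELAY_BUDGET:
            continue
        power = sum(POWER[i][j] for i, j in enumerate(assign))
        if power < best:
            best, best_assign = power, assign
    return best, best_assign
```

In this instance the local cloudlets are cheap but can each hold only one request, so the optimum places two requests locally and overflows the third to the global cloudlet, exactly the hierarchical routing pattern the paper's MILP captures at scale.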

Journal ArticleDOI
TL;DR: The proposed SADQ is the first scheme in optical networks to employ exhaustive differentiation at the levels of routing, spectrum allocation, and survivability in a single algorithm, and it is compared with two existing benchmark routing and spectrum allocation schemes designed for EONs.