
Showing papers on "Service-level agreement" published in 2019


Journal ArticleDOI
TL;DR: The proposed VM consolidation approach uses a regression-based model to approximate the future CPU and memory utilization of VMs and PMs; the experimental results show that it provides substantial improvement over other heuristic and meta-heuristic algorithms in reducing energy consumption, the number of VM migrations and the number of SLA violations.
Abstract: Virtual Machine (VM) consolidation provides a promising approach to save energy and improve resource utilization in data centers. Many heuristic algorithms have been proposed to tackle VM consolidation as a vector bin-packing problem. However, the existing algorithms have focused mostly on minimizing the number of active Physical Machines (PMs) according to their current resource requirements and have neglected future resource demands. Therefore, they generate unnecessary VM migrations and increase the rate of Service Level Agreement (SLA) violations in data centers. To address this problem, we propose a VM consolidation approach that takes into account both the current and future utilization of resources. Our approach uses a regression-based model to approximate the future CPU and memory utilization of VMs and PMs. We investigate the effectiveness of virtual and physical resource utilization prediction on VM consolidation performance using Google cluster and PlanetLab real workload traces. The experimental results show that our approach provides substantial improvement over other heuristic and meta-heuristic algorithms in reducing energy consumption, the number of VM migrations and the number of SLA violations.
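The abstract does not specify the regression model, so the following sketch is only a plausible illustration of the idea: fit a least-squares trend to each VM's recent utilization history, extrapolate one interval ahead, and flag a host whose predicted aggregate demand exceeds its capacity. All function names and thresholds are hypothetical.

```python
import numpy as np

def predict_next_utilization(history, horizon=1):
    """Fit a least-squares line to a recent utilization history (values in 0..1)
    and extrapolate `horizon` intervals ahead. Illustrative only."""
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, 1)
    predicted = intercept + slope * (len(history) - 1 + horizon)
    return float(np.clip(predicted, 0.0, 1.0))

def host_likely_overloaded(vm_histories, host_capacity=1.0):
    """Flag a host whose predicted aggregate VM demand exceeds its capacity."""
    return sum(predict_next_utilization(h) for h in vm_histories) > host_capacity

# Two VMs with rising CPU usage on a host normalized to capacity 1.0:
vms = [[0.30, 0.35, 0.42, 0.50], [0.20, 0.28, 0.33, 0.41]]
print(host_likely_overloaded(vms))  # True: combined predicted demand > 1.0
```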

140 citations


Journal ArticleDOI
07 May 2019-Sensors
TL;DR: The authors propose an energy- and service-level agreement (SLA)-efficient cyber physical system for E-healthcare data transmission services and address two security threats, grey holes and black holes, that severely affect network services.
Abstract: Due to advances in technology, research in healthcare using a cyber-physical system (CPS) opens innovative dimensions of services. In this paper, the authors propose an energy- and service-level agreement (SLA)-efficient cyber physical system for E-healthcare data transmission services. Furthermore, the proposed scheme is enhanced to ensure security by detecting and eliminating the malicious devices/nodes involved in the communication process through advances in the ad hoc on-demand distance vector (AODV) protocol. The proposed framework addresses two security threats, grey holes and black holes, which severely affect network services. Furthermore, the proposed framework is used to compute network metrics such as the average number of qualifying service set (QSS) paths, mean hop count and energy efficiency of the quickest path. The framework is simulated by calculating the above metrics in two cases, i.e., with and without the contribution of malevolent nodes, over service time, hop count and energy constraints. Further, the variation of SLA and energy shows their usefulness in the selection of efficient network metrics.

71 citations


Journal ArticleDOI
TL;DR: A Systematic Literature Review (SLR) of resource allocation methods in the IoT and its platforms is provided, and the open issues that must be addressed to achieve better utilization of this technology are highlighted.
Abstract: The Internet of Things (IoT) as a novel paradigm is an environment with a vast number of connected things and applications. IoT devices generate data, which is transformed into usable information and provides applied resources to end-users; this process is the main goal of the IoT. Therefore, one of the important subjects in the IoT is resource allocation, which aims at load balancing and at minimizing operational cost and power consumption. In addition, resources should be allocated in a balanced way that increases system performance, Quality of Service (QoS) and Service Level Agreement (SLA) compliance. Although resource allocation is very important in the IoT, there has been no systematic review of this field. Therefore, in this paper, a Systematic Literature Review (SLR) is provided and the resource allocation methods and algorithms used in the IoT are investigated. Different classifications, including cost-aware, context-aware, efficiency-aware, load-balancing-aware, power-aware, QoS-aware, SLA-based and utilization-aware resource allocation mechanisms, are organized to investigate the resource allocation techniques. We present several parameters and describe them in each category. In addition, the parameters used in different articles are evaluated, the major developments in each category are surveyed, and new challenges are outlined. Furthermore, an SLR is provided for each of these eight categories. A structure of the key technical areas in the scope of resource allocation in the IoT and its platforms is presented, important areas for improving resource allocation methods in the future are highlighted, and the open issues that must be addressed to achieve better utilization of this technology are identified. These future directions are useful for academic researchers who work on the IoT. This study shows that no single technique exists that addresses all issues and challenges in resource allocation for the IoT.

68 citations


Journal ArticleDOI
TL;DR: This paper proposes a hybrid metaheuristic technique which combines osmotic behavior with bio-inspired load balancing algorithms to achieve load balancing between physical machines; simulation results show that OH_BAC decreases energy consumption, the number of VM migrations and the number of shutdown hosts compared to existing algorithms.
Abstract: Cloud computing is growing rapidly as a successful paradigm offering on-demand infrastructure, platform, and software services to clients. Load balancing is one of the important issues in cloud computing: the dynamic workload must be distributed equally among all the nodes to avoid the situation where some nodes are overloaded while others are underloaded. Many algorithms have been suggested to perform this task. Recently, a new paradigm for optimization search has emerged by applying the theory of osmosis from chemistry, forming osmotic computing. Osmotic computing aims to achieve balance in highly distributed environments. The main goal of this paper is to propose a hybrid metaheuristic technique which combines osmotic behavior with bio-inspired load balancing algorithms. The osmotic behavior enables the automatic deployment of virtual machines (VMs) that are migrated through cloud infrastructures. Since the hybrid of artificial bee colony and ant colony optimization has proven its efficiency in dynamic cloud environments, the paper exploits the advantages of these bio-inspired algorithms to form an osmotic hybrid artificial bee and ant colony (OH_BAC) optimization load balancing algorithm. It overcomes the drawbacks of existing bio-inspired algorithms in achieving load balancing between physical machines. The simulation results show that OH_BAC decreases energy consumption, the number of VM migrations and the number of shutdown hosts compared to existing algorithms. In addition, it enhances the quality of service (QoS), which is measured by service level agreement violation (SLAV) and performance degradation due to migrations (PDM).

66 citations


Journal ArticleDOI
TL;DR: A framework for self-management of cloud resources for the execution of clustered workloads, named SCOOTER, is proposed; it efficiently schedules the provisioned cloud resources and maintains the Service Level Agreement (SLA) by considering the properties of self-management and the maximum possible set of QoS parameters required to improve cloud-based services.
Abstract: Provisioning of adequate resources to cloud workloads depends on the Quality of Service (QoS) requirements of these cloud workloads. Based on the workload requirements (QoS) of cloud users, discovery and allocation of the best workload-resource pair is an optimization problem. Acceptable QoS can be offered only if the provisioning of resources is appropriately controlled. So, there is a need for a QoS-based resource provisioning framework for the autonomic scheduling of resources that observes the behavior of the services and adjusts it dynamically in order to satisfy the QoS requirements. In this paper, a framework for self-management of cloud resources for the execution of clustered workloads, named SCOOTER, is proposed; it efficiently schedules the provisioned cloud resources and maintains the Service Level Agreement (SLA) by considering the properties of self-management and the maximum possible set of QoS parameters required to improve cloud-based services. Finally, the performance of SCOOTER has been evaluated in a cloud environment, demonstrating optimized QoS parameters such as execution cost, energy consumption, execution time, SLA violation rate, fault detection rate, intrusion detection rate, resource utilization, resource contention, throughput and waiting time.

58 citations


Proceedings ArticleDOI
08 Jul 2019
TL;DR: A novel system named Microscaler is presented that automatically identifies the services that need scaling and scales them to meet the service level agreement (SLA) at an optimal cost for microservice systems, achieving the optimal service scale that satisfies the SLA requirements.
Abstract: Recently, the microservice architecture has become popular for constructing cloud-native systems due to its agility. In cloud-native systems, autoscaling is a core enabling technique for adapting to workload changes by scaling out/in. However, autoscaling becomes a challenging problem in a microservice system, since such a system usually comprises a large number of different microservices with complex interactions. When bursty and unpredictable workloads arrive, it is difficult to pinpoint the services that need to scale and to evaluate how many resources they need. In this paper, we present a novel system named Microscaler that automatically identifies the services that need scaling and scales them to meet the service level agreement (SLA) at an optimal cost for microservice systems. Microscaler collects quality of service (QoS) metrics with the help of a service-mesh-enabled infrastructure. Then, it determines the under-provisioned or over-provisioned services with a novel criterion named service power. By combining an online learning approach with a step-by-step heuristic approach, Microscaler can achieve the optimal service scale satisfying the SLA requirements. Experimental evaluations on a microservice benchmark show that Microscaler converges to the optimal service scale faster than several state-of-the-art methods.
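The abstract does not define the "service power" criterion, so the sketch below substitutes a simple stand-in rule: compare observed tail latency against the SLA target and scale out or in by one replica. The function name, thresholds and the pressure metric are assumptions for illustration, not Microscaler's actual algorithm.

```python
def scaling_decision(p99_latency_ms, sla_latency_ms, replicas,
                     upper=0.9, lower=0.5, max_replicas=20):
    """Hypothetical scale-out/in rule: compare observed tail latency against
    the SLA target and adjust the replica count by one step."""
    pressure = p99_latency_ms / sla_latency_ms   # > 1 means the SLA is violated
    if pressure > upper and replicas < max_replicas:
        return replicas + 1      # under-provisioned: scale out
    if pressure < lower and replicas > 1:
        return replicas - 1      # over-provisioned: scale in to save cost
    return replicas              # within the target band: keep the current scale

print(scaling_decision(p99_latency_ms=480, sla_latency_ms=500, replicas=3))  # 4
```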

50 citations


Journal ArticleDOI
TL;DR: A blockchain-based DualFog-IoT architecture is proposed that filters incoming requests at the access level into three categories, namely Real Time, Non-Real Time, and Delay Tolerant Blockchain applications, and it is compared with the existing centralized datacenter-based IoT architecture.
Abstract: Integration of blockchain and the Internet of Things (IoT) to build a secure, trusted and robust communication technology is currently of great interest for research communities and industries. The challenge, however, is to identify the appropriate position of blockchain in current IoT settings with minimal consequences. In this article we propose a blockchain-based DualFog-IoT architecture that filters incoming requests at the access level into three categories, namely Real Time, Non-Real Time, and Delay Tolerant Blockchain applications. DualFog-IoT segregates the Fog layer into two clusters: a Fog Cloud Cluster and a Fog Mining Cluster. The Fog Cloud Cluster and the main cloud datacenter work in tandem, similar to the existing IoT architecture, for real-time and non-real-time application requests, while the additional Fog Mining Cluster is dedicated to handling only Delay Tolerant Blockchain application requests. The proposed DualFog-IoT is compared with the existing centralized datacenter-based IoT architecture. Along with the inherited features of blockchain, the proposed model decreases the system drop rate and further offloads the cloud datacenter with minimal upgrades to the existing IoT ecosystem. The reduced computing load on the cloud datacenter not only saves capital and operational expenses but also makes a substantial contribution to saving energy resources and minimizing carbon emissions. Furthermore, the proposed DualFog-IoT is also analyzed for optimization of computing resources at the cloud level; the results show the feasibility of the proposed architecture under various ratios of incoming RT and NRT requests. The integration of blockchain does leave its footprint in the form of delayed responses for Delay Tolerant Blockchain applications, but real-time and non-real-time requests gracefully satisfy the service level agreement.

50 citations


Journal ArticleDOI
TL;DR: This work proposes VM placement algorithms based on both bin-packing heuristics and servers' power efficiency, and introduces a new bin-packing heuristic called Medium-Fit (MF) to reduce SLA violations.
Abstract: One of the main challenges in cloud computing is the enormous amount of energy consumed in data centers. Several studies have been conducted on Virtual Machine (VM) consolidation to optimize energy consumption. Among the proposed VM consolidation approaches, OpenStack Neat is notable for its practicality. OpenStack Neat is an open-source consolidation framework that can seamlessly integrate with OpenStack, one of the most common and widely used open-source cloud management tools. The framework has components for deciding when to migrate VMs and for selecting suitable hosts for the VMs (VM placement). The VM placement algorithm of OpenStack Neat is called Modified Best-Fit Decreasing (MBFD). MBFD is based on a heuristic that only minimizes the number of servers. The heuristic is not only less energy efficient but also increases Service Level Agreement (SLA) violations and consequently causes more VM migrations. To improve energy efficiency, we propose VM placement algorithms based on both bin-packing heuristics and servers' power efficiency. In addition, we introduce a new bin-packing heuristic called Medium-Fit (MF) to reduce SLA violations. To evaluate the performance of the proposed algorithms, we have conducted experiments using CloudSim on three cloud data-center scenarios: homogeneous, heterogeneous and default. Workloads that run in the data centers are generated from traces of the PlanetLab and Bitbrains clouds. The experimental results show up to 67% improvement in energy consumption and up to 78% and 46% reductions in SLA violations and the number of VM migrations, respectively. Moreover, all improvements are statistically significant with a significance level of 0.01.
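The abstract names the Medium-Fit heuristic but does not define it, so the sketch below encodes one plausible reading: place the VM on the feasible host whose resulting utilization lands closest to a medium target level, keeping headroom against SLA violations. The target value, data structures and function name are assumptions, not the paper's specification.

```python
def medium_fit_placement(vm_demand, hosts, target=0.7):
    """Pick the host whose utilization after placement is closest to a
    'medium' target level, leaving headroom against SLA violations.
    One plausible reading of Medium-Fit, for illustration only."""
    best_host, best_gap = None, float("inf")
    for name, (used, capacity) in hosts.items():
        new_util = (used + vm_demand) / capacity
        if new_util > 1.0:
            continue                       # host cannot accommodate the VM
        gap = abs(new_util - target)
        if gap < best_gap:
            best_host, best_gap = name, gap
    return best_host

hosts = {"hostA": (0.2, 1.0), "hostB": (0.55, 1.0), "hostC": (0.85, 1.0)}
print(medium_fit_placement(0.2, hosts))  # 'hostB': 0.75 is closest to the 0.7 target
```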

49 citations


Journal ArticleDOI
TL;DR: The developed algorithm (SEQR) evaluates the SLA-energy cooperative quickest ambulance route according to the user's service requirements, and the SLA and energy variation are quantified through the mean candidate s–t qualifying service set (QSS) routes.
Abstract: In this study, the critical ambulance routing problem, which is a significant variant of the quickest path problem (QPP), was investigated. The proposed QPP variant incorporates additional factors, such as the service-level agreement (SLA) and energy cooperation, to compute the SLA-energy cooperative quickest route (SEQR) for a real-time critical healthcare service vehicle (e.g., an ambulance). The continuity of critical healthcare services depends on the performance of the transport system. Therefore, in this research, SLA and energy were proposed as important measures for quantifying this performance. The developed algorithm (SEQR) evaluates the SLA-energy cooperative quickest ambulance route according to the user's service requirements. The SEQR algorithm was tested with various transport networks. The SLA and energy variation were quantified through the mean candidate s–t qualifying service set (QSS) routes for the service, the average hop count, and the average energy efficiency.

47 citations


Posted Content
TL;DR: This paper considers a scenario that contains several slices in a radio access network with base stations that share the same physical resources (e.g., bandwidth or slots), and proposes generative adversarial network-powered deep distributional Q network (GAN-DDQN) to learn the action-value distribution driven by minimizing the discrepancy between the estimated action- Value distribution and the target action- value distribution.
Abstract: Network slicing is a key technology in 5G communication systems. Its purpose is to dynamically and efficiently allocate resources for diversified services with distinct requirements over a common underlying physical infrastructure. Therein, demand-aware resource allocation is of significant importance to network slicing. In this paper, we consider a scenario that contains several slices in a radio access network with base stations that share the same physical resources (e.g., bandwidth or slots). We leverage deep reinforcement learning (DRL) to solve this problem by considering the varying service demands as the environment state and the allocated resources as the environment action. In order to reduce the effects of the annoying randomness and noise embedded in the received service level agreement (SLA) satisfaction ratio (SSR) and spectrum efficiency (SE), we first propose a generative adversarial network-powered deep distributional Q network (GAN-DDQN) to learn the action-value distribution by minimizing the discrepancy between the estimated action-value distribution and the target action-value distribution. We put forward a reward-clipping mechanism to stabilize GAN-DDQN training against the effects of widely spanning utility values. Moreover, we further develop Dueling GAN-DDQN, which uses a specially designed dueling generator, to learn the action-value distribution by estimating the state-value distribution and the action advantage function. Finally, we verify the performance of the proposed GAN-DDQN and Dueling GAN-DDQN algorithms through extensive simulations.

43 citations


Journal ArticleDOI
TL;DR: This article investigates the application of Reinforcement Learning (RL) for performing dynamic SFC resource allocation in NFV-SDN enabled metro-core optical networks and builds an RL system able to optimize the resource allocation of SFCs in a multi-layer network.
Abstract: With the advent of 5G technology, we are witnessing the development of increasingly bandwidth-hungry network applications, such as enhanced mobile broadband, massive machine-type communications and ultra-reliable low-latency communications. Software Defined Networking (SDN), Network Function Virtualization (NFV) and Network Slicing (NS) are gaining momentum not only in research but also in the IT industry, representing the drivers of 5G. NS is an approach to network operations allowing the partition of a physical topology into multiple independent virtual networks, called network slices (or slices). Within a single slice, a set of Service Function Chains (SFCs) is defined and the network resources, e.g. bandwidth, can be provisioned dynamically on demand according to specific Quality of Service (QoS) and Service Level Agreement (SLA) requirements. Traditional schemes for network resource provisioning based on static policies may lead to poor resource utilization and suffer from scalability issues. In this article, we investigate the application of Reinforcement Learning (RL) for performing dynamic SFC resource allocation in NFV-SDN enabled metro-core optical networks. RL allows building a self-learning system able to solve highly complex problems by employing RL agents to learn policies from an evolving network environment. In particular, we build an RL system able to optimize the resource allocation of SFCs in a multi-layer network (packet over flexi-grid optical layer). The RL agent decides if and when to reconfigure the SFCs, given the state of the network and historical traffic traces. Numerical simulations show significant advantages of our RL-based optimization over rule-based optimization design.

Journal ArticleDOI
TL;DR: The experimental results show that RADAR delivers better outcomes in terms of execution cost, resource contention, execution time, and SLA violation while it delivers reliable services.
Abstract: Cloud computing utilizes heterogeneous resources that are located in various datacenters to provide efficient performance on a pay-per-use basis. However, existing mechanisms, frameworks, and techniques for management of resources are inadequate to manage these applications, environments, and the behavior of resources. There is a need for a Quality of Service (QoS)-based autonomic resource management technique to execute workloads and deliver cost-efficient and reliable cloud services automatically. In this paper, we present an intelligent and autonomic resource management technique named RADAR. RADAR focuses on two properties of self-management: firstly, self-healing, which handles unexpected failures, and secondly, self-configuration of resources and applications. The performance of RADAR is evaluated in a cloud simulation environment and the experimental results show that RADAR delivers better outcomes in terms of execution cost, resource contention, execution time, and SLA violation while delivering reliable services.

Journal ArticleDOI
TL;DR: A new multi-objective VM consolidation approach based on double thresholds and an ant colony system (ACS) is proposed that remarkably reduces energy consumption and optimizes SLA violation rates, thus achieving better overall performance.
Abstract: With the large-scale deployment of cloud datacenters, high energy consumption and serious service level agreement (SLA) violations in datacenters have become an increasingly urgent problem to be addressed. Implementing effective virtual machine (VM) consolidation methods is of great significance for reducing energy consumption and SLA violations. The VM consolidation problem is a well-known NP-hard problem. Moreover, efficient VM consolidation should consider multiple factors synthetically, including quality of service, energy consumption, and migration overhead, which makes it a multi-objective optimization problem. To solve this problem, we propose a new multi-objective VM consolidation approach based on double thresholds and an ant colony system (ACS). The proposed approach leverages double thresholds on CPU utilization to identify the host load status; VM consolidation is triggered when a host is overloaded or underloaded. During consolidation, the approach selects migration VMs and destination hosts simultaneously based on ACS, utilizing diverse selection policies according to the host load status. Extensive experiments are conducted to compare our proposed approach with state-of-the-art VM consolidation approaches. The experimental results demonstrate that the proposed approach remarkably reduces energy consumption and optimizes SLA violation rates, thus achieving better overall performance.
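As a minimal sketch of the double-threshold idea described above (the ACS-based selection step is not reproduced here), the following classifies a host's load status from its CPU utilization; the specific threshold values are assumptions, not the paper's tuned settings.

```python
def host_load_status(cpu_util, lower=0.3, upper=0.8):
    """Classify a host by CPU utilization against two thresholds.
    Consolidation is triggered for overloaded and underloaded hosts."""
    if cpu_util > upper:
        return "overloaded"     # migrate some VMs away to avoid SLA violations
    if cpu_util < lower:
        return "underloaded"    # migrate all VMs away and switch the host off
    return "normal"             # leave the host untouched

for u in (0.15, 0.55, 0.92):
    print(u, host_load_status(u))
```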

Journal ArticleDOI
01 Jan 2019
TL;DR: An analytical model is proposed of an integrated architecture consisting of MIoT devices, fog and cloud computing, which has become the preferred solution for healthcare monitoring systems; the model can predict the system response time and accurately determine the number of computing resources needed for health data services to achieve the desired performance.
Abstract: In a typical healthcare monitoring system, the cloud is the preferred platform to aggregate, store and analyse data collected from Medical Internet of Things (MIoT) devices. However, remote cloud servers and storage can be a source of substantial delay. To overcome such delays, an intermediate layer of fog or edge nodes is used for localised processing and storage of MIoT data. To this end, an integrated architecture consisting of MIoT devices, fog and cloud computing has now become the preferred solution for a healthcare monitoring system. In this study, we propose an analytical model of such a system and use it to show how to reduce the cost of computing resources while guaranteeing performance constraints. The proposed analytical model is based on a network of queues and has the ability to estimate the minimum required number of computing resources to meet the service level agreement. The authors verify and cross-validate the analytical model through the Java Modelling Tools discrete event simulator. Results obtained from analysis and simulation show that the proposed model, under different workload conditions, can predict the system response time and can accurately determine the number of computing resources needed for health data services to achieve the desired performance.
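The paper's model is a network of queues; as a much-simplified illustration of how queueing formulas can size resources against an SLA, the sketch below uses a single M/M/c queue (Erlang C) and searches for the smallest server count whose mean response time meets a target. The arrival rate, service rate and SLA numbers are made up.

```python
from math import factorial

def mmc_response_time(arrival_rate, service_rate, servers):
    """Mean response time of an M/M/c queue (Erlang C). A single-queue
    simplification of the fog/cloud queueing network in the paper."""
    a = arrival_rate / service_rate          # offered load in Erlangs
    rho = a / servers
    if rho >= 1.0:
        return float("inf")                  # unstable: the queue grows without bound
    partial_sum = sum(a ** k / factorial(k) for k in range(servers))
    p_wait = (a ** servers / factorial(servers)) / (
        (1 - rho) * partial_sum + a ** servers / factorial(servers))
    return 1.0 / service_rate + p_wait / (servers * service_rate - arrival_rate)

def min_servers_for_sla(arrival_rate, service_rate, sla_response_time, max_servers=64):
    """Smallest number of fog/cloud nodes whose mean response time meets the SLA."""
    for c in range(1, max_servers + 1):
        if mmc_response_time(arrival_rate, service_rate, c) <= sla_response_time:
            return c
    return None

# 40 requests/s, each node serves 12 requests/s, SLA target: 0.2 s mean response time.
print(min_servers_for_sla(40, 12, 0.2))   # 4 nodes suffice under these toy numbers
```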

Journal ArticleDOI
TL;DR: The analysis helps service providers choose a suitable prediction method with optimal control parameters so that they can obtain accurate prediction results to manage SLA intelligently and avoid violation penalties.
Abstract: Service level agreement (SLA) management is one of the key issues in cloud computing. The primary goal of a service provider is to minimize the risk of service violations, as these result in penalties, both monetary and in the form of decreased trustworthiness. To avoid SLA violations, the service provider needs to predict the likelihood of violation for each SLO and its measurable characteristics (QoS parameters) and take immediate action to prevent violations from occurring. Several approaches have been discussed in the literature to predict service violations; however, none of them explores how a change in control parameters and the freshness of data impact prediction accuracy and thereby the effective management of an SLA by the cloud service provider. The contribution of this paper is two-fold. First, we analyzed the accuracy of six widely used prediction algorithms—simple exponential smoothing, simple moving average, weighted moving average, Holt–Winters double exponential smoothing, extrapolation, and the autoregressive integrated moving average—by varying their individual control parameters. Each of the approaches is compared on 10 different datasets at different time intervals between 5 min and 4 weeks. Second, we analyzed the prediction accuracy of the simple exponential smoothing method by considering the freshness of the data, i.e., how the accuracy varies in the initial time period of prediction compared to later ones. To achieve this, we divided the cloud QoS dataset into sets of input values that range from 100 to 500 intervals, in sets of 1–100, 1–200, 1–300, 1–400, and 1–500. From the analysis, we observed that different prediction methods behave differently based on the control parameter and the nature of the dataset. The analysis helps service providers choose a suitable prediction method with optimal control parameters so that they can obtain accurate prediction results, manage the SLA intelligently and avoid violation penalties.
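One of the six methods studied, simple exponential smoothing, is compact enough to sketch; the example below shows how the alpha control parameter changes the one-step-ahead forecast on a made-up QoS series. The data values are illustrative, not from the paper's datasets.

```python
def exp_smoothing_forecast(series, alpha):
    """One-step-ahead simple exponential smoothing forecast.
    alpha is the control parameter whose tuning the paper analyzes."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

# Illustrative response-time samples (ms); compare forecasts for two alphas.
qos = [120, 118, 125, 160, 170, 168, 175]
for alpha in (0.2, 0.8):
    print(alpha, round(exp_smoothing_forecast(qos, alpha), 1))
# A small alpha reacts slowly to the recent upward trend; a large alpha tracks it closely.
```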

Book ChapterDOI
30 Oct 2019
TL;DR: This work uses the attack graph of a cloud network to formulate a general-sum Markov Game, uses the Common Vulnerability Scoring System to come up with meaningful utility values in each state of the game, and shows that, for the threat model in which an adversary has knowledge of the defender's strategy, the Stackelberg equilibrium can provide an optimal strategy for the placement of security resources.
Abstract: The processing and storage of critical data in large-scale cloud networks necessitate scalable security solutions. It has been shown that deploying all possible detection measures incurs a performance cost by using up valuable computing and networking resources, thereby resulting in violations of the Service Level Agreement (SLA) promised to cloud-service users. Thus, there has been recent interest in developing Moving Target Defense (MTD) mechanisms that help to optimize the joint objective of maximizing security while ensuring that the impact on performance is minimized. Often, these techniques model the challenge of multi-stage attacks by stealthy adversaries as a single-step attack detection game and use graph connectivity measures as a heuristic to measure performance, thereby (1) losing out on valuable information that is inherently present in multi-stage models designed for large cloud networks, and (2) coming up with strategies that have asymmetric impacts on performance, thereby heavily affecting the Quality of Service (QoS) for some cloud users. In this work, we use the attack graph of a cloud network to formulate a general-sum Markov Game and use the Common Vulnerability Scoring System (CVSS) to come up with meaningful utility values in each state of the game. We then show that, for the threat model in which an adversary has knowledge of a defender's strategy, the use of a Stackelberg equilibrium can provide an optimal strategy for the placement of security resources. In cases where this assumption turns out to be too strong, we show that the Stackelberg equilibrium is also a Nash equilibrium of the general-sum Markov Game. We compare the gains obtained using our method(s) to other baseline techniques used in cloud network security. Finally, we highlight how the method was used in a real-world small-scale cloud system.

Journal ArticleDOI
TL;DR: A resource management system for network slicing and a dynamic resource adjustment algorithm based on a reinforcement learning approach from each tenant's point of view are introduced; they can significantly increase the profit of tenants compared to existing fixed resource allocation methods while satisfying the QoS requirements of end-users.
Abstract: Network slicing, which creates multiple virtual networks called network slices, is a promising technology for enabling networking resource sharing among multiple tenants in 5th generation (5G) networks. By offering a network slice to slice tenants, network slicing supports parallel services while meeting the service level agreement (SLA). In legacy networks, every tenant pays a fixed and roughly estimated monthly or annual fee for shared resources according to a contract signed with a provider. However, such a fixed resource allocation mechanism may result in low resource utilization or violation of user quality of service (QoS) due to fluctuations in network demand. To address this issue, we introduce a resource management system for network slicing and propose a dynamic resource adjustment algorithm based on a reinforcement learning approach from each tenant's point of view. First, resource management for network slicing is modeled as a Markov Decision Process (MDP) with a state space, an action space, and a reward function. Then, we propose a Q-learning-based dynamic resource adjustment algorithm that aims at maximizing the profit of tenants while ensuring the QoS requirements of end-users. The numerical simulation results demonstrate that the proposed algorithm can significantly increase the profit of tenants compared to existing fixed resource allocation methods while satisfying the QoS requirements of end-users.
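The MDP components (state, action, reward) are not fully specified in the abstract, so the sketch below is a generic tabular Q-learning loop for per-slice resource adjustment with placeholder states, actions and a toy profit-style reward; none of these choices are claimed to match the paper's formulation.

```python
import random
from collections import defaultdict

ACTIONS = (-1, 0, +1)                     # release, keep, or acquire one resource unit
q_table = defaultdict(float)              # Q[(state, action)] -> estimated return
alpha, gamma, epsilon = 0.1, 0.9, 0.1     # learning rate, discount, exploration rate

def reward(demand, allocated):
    """Toy tenant-profit proxy: penalize SLA risk (demand > allocated) and idle cost."""
    return -5.0 * max(0, demand - allocated) - 1.0 * max(0, allocated - demand)

def choose_action(state):
    if random.random() < epsilon:
        return random.choice(ACTIONS)                       # explore
    return max(ACTIONS, key=lambda a: q_table[(state, a)])  # exploit

allocated = 5
for step in range(10_000):
    demand = random.choice((3, 4, 5, 6, 7))                 # fluctuating slice demand
    state = (demand, allocated)
    action = choose_action(state)
    allocated = min(10, max(1, allocated + action))
    r = reward(demand, allocated)
    next_state = (demand, allocated)
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    # Standard Q-learning update toward the bootstrapped target.
    q_table[(state, action)] += alpha * (r + gamma * best_next - q_table[(state, action)])
```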

Journal ArticleDOI
TL;DR: Two consolidation-based energy-efficient techniques that reduce energy consumption along with the resultant SLA violations are presented, and the existing Enhanced-Conscious Task Consolidation (ECTC) and Maximum Utilization (MaxUtil) techniques that attempt to reduce energy consumption and SLA violations are enhanced.
Abstract: Cloud computing emerged as one of the leading computational paradigms due to elastic resource provisioning and the pay-as-you-go model. Large data centers are used by service providers to host the various services. These data centers consume enormous energy, which leads to increased operating costs and carbon footprints. Therefore, green cloud computing is a necessity, which not only reduces energy consumption but also affects the environment positively. In order to reduce energy consumption, a workload consolidation approach is used that consolidates the tasks onto the minimum possible number of servers. However, workload consolidation may lead to service level agreement (SLA) violations due to the non-availability of resources on the server. Therefore, workload consolidation techniques should consider the aforementioned problem. In this paper, we present two consolidation-based energy-efficient techniques that reduce energy consumption along with the resultant SLA violations. In addition, we also enhance the existing Enhanced-Conscious Task Consolidation (ECTC) and Maximum Utilization (MaxUtil) techniques that attempt to reduce energy consumption and SLA violations. Experimental results show that the proposed techniques perform better than the selected heuristic-based techniques in terms of energy, SLA, and migrations.

Journal ArticleDOI
TL;DR: A novel DevOps framework is presented that supports Cloud consumers in designing, deploying and operating (multi)Cloud systems that include the necessary privacy and security controls for ensuring transparency to end-users, third parties in service provision (if any) and law enforcement authorities.
Abstract: Compliance with the new European General Data Protection Regulation (Regulation (EU) 2016/679, GDPR) and security assurance are currently two major challenges of Cloud-based systems. GDPR compliance implies the definition, enforcement and control of both privacy and security mechanisms, including evidence collection. This study presents a novel DevOps framework aimed at supporting Cloud consumers in designing, deploying and operating (multi)Cloud systems that include the necessary privacy and security controls for ensuring transparency to end-users, third parties in service provision (if any) and law enforcement authorities. The framework relies on the risk-driven specification, at design time, of privacy and security level objectives in the system service level agreement, and on their continuous monitoring and enforcement at runtime.

Journal ArticleDOI
TL;DR: This article regards SLA management as a distrusted process that should not be handled by a single authority and proposes a conceptual blockchain-based framework to cope with some limitations associated with traditionalSLA management approaches.
Abstract: In pursuit of effective service level agreement (SLA) monitoring and enforcement in the context of Internet of Things (IoT) applications, this article regards SLA management as a distrusted process that should not be handled by a single authority. Here, we aim to justify our view on the matter and propose a conceptual blockchain-based framework to cope with some limitations associated with traditional SLA management approaches.

Proceedings ArticleDOI
01 Feb 2019
TL;DR: The preliminary results compare ARIMA, MLP, and GRU under different cloud configurations to help administrators choose the most appropriate and efficient predictive model for their specific problem.
Abstract: Cloud computing has transformed the means of computing in recent years with several benefits over traditional systems, such as scalability and high availability. However, there are still open opportunities, especially in the area of resource provisioning and scaling [13]. Since the workload may fluctuate considerably in certain environments, over-provisioning is a common practice to avoid abrupt Quality of Service (QoS) drops that may result in Service Level Agreement (SLA) violations, but at the price of an increase in provisioning costs and energy consumption. Workload prediction is one of the strategies by which the efficiency and operational cost of a cloud can be improved [13]. Knowing demand in advance allows the prior allocation of sufficient resources to maintain QoS and avoid SLA violations [1]. This paper presents the advantages and disadvantages of three workload prediction techniques when applied in the context of cloud computing. Our preliminary results compare ARIMA, MLP, and GRU under different cloud configurations to help administrators choose the most appropriate and efficient predictive model for their specific problem.

Journal ArticleDOI
TL;DR: A novel virtual machine consolidation technique based on energy and temperature, using heuristic and meta-heuristic algorithms, is presented in order to improve QoS (Quality of Service).
Abstract: Cloud computing provides access to shared resources through the Internet. It provides facilities such as broad access, scalability and cost savings for users. However, cloud data centers consume a significant amount of energy because of inefficient resource allocation. In this paper, a novel virtual machine consolidation technique based on energy and temperature is presented in order to improve QoS (Quality of Service). Two algorithms, one heuristic and one meta-heuristic, are provided, called HET-VC (Heuristic Energy and Temperature aware based VM consolidation) and FET-VC (FireFly Energy and Temperature aware based VM Consolidation). Six parameters are investigated for the proposed algorithms: energy efficiency, number of migrations, SLA (Service Level Agreement) violation, ESV, and time and space complexities. Using the CloudSim simulator, it is found that energy consumption is reduced by 42% and 54% under HET-VC and FET-VC, respectively. The number of VM migrations is reduced by 44% and 52% under HET-VC and FET-VC, respectively. HET-VC and FET-VC improve SLA violation by 62% and 64%, respectively. The Energy and SLA Violations (ESV) metric is improved by 61% under HET-VC and by 76% under FET-VC.

Journal ArticleDOI
TL;DR: This paper proposes an enhanced decentralized sharing economy service using the service level agreement (SLA), which documents the services the provider will furnish and defines the service standards the provider is obligated to meet; the SLA is specified as a smart contract, which facilitates multi-user collaboration and automates the process with no third-party involvement.
Abstract: Recently, technology startups have leveraged the potential of blockchain-based technologies to govern institutions or interpersonal trust by enforcing signed treaties among different individuals in a decentralized environment. However, it is hard to argue that blockchain technology could completely replace trust among trading partners in the sharing economy, as sharing services always operate in a highly dynamic environment. With the rapid expansion of the rental market, the sharing economy faces more and more severe challenges in the form of regulatory uncertainty and concerns about abuses. This paper proposes an enhanced decentralized sharing economy service using the service level agreement (SLA), which documents the services the provider will furnish and defines the service standards the provider is obligated to meet. The SLA specifications are defined as a smart contract, which facilitates multi-user collaboration and automates the process with no involvement of a third party. To demonstrate the usability of the proposed solution in the sharing economy, a notebook-sharing case study is implemented using Hyperledger Fabric. The functionalities of the smart contract are tested using Hyperledger Composer. Moreover, the efficiency of the designed approach is demonstrated through a series of experimental tests using different performance metrics.
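To make the idea of encoding SLA terms in a contract concrete, here is a toy, plain-Python model of a deposit-and-penalty clause with automatic settlement; it is only an illustration of the concept, not the paper's Hyperledger Fabric chaincode, and all field names and numbers are invented.

```python
from dataclasses import dataclass

@dataclass
class SharingSLA:
    """Toy model of SLA terms of the kind the paper encodes as a smart contract.
    Plain Python for illustration; the paper's implementation is Fabric chaincode."""
    grace_period_h: float      # delay tolerated before penalties apply
    deposit: float             # amount escrowed by the consumer
    penalty_per_hour: float    # deducted from the deposit per hour beyond the grace period

    def settle(self, actual_delay_h: float) -> dict:
        """Automatic settlement with no third party: split the escrowed deposit."""
        billable = max(0.0, actual_delay_h - self.grace_period_h)
        penalty = min(self.deposit, billable * self.penalty_per_hour)
        return {"to_provider": penalty, "refund_to_consumer": self.deposit - penalty}

sla = SharingSLA(grace_period_h=2.0, deposit=50.0, penalty_per_hour=5.0)
print(sla.settle(actual_delay_h=4.0))   # {'to_provider': 10.0, 'refund_to_consumer': 40.0}
```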

Journal ArticleDOI
TL;DR: The presented method can offer the service operator a recommended resource allocation for the targeted service, as a function of the targeted performance and maximum workload specified in the SLA, thereby avoiding unnecessary scaling steps.
Abstract: The virtualization of compute and network resources enables an unprecedented flexibility in deploying network services. A wide spectrum of emerging technologies allows an ever-growing range of orchestration possibilities in cloud-based environments. But in this context it remains challenging to reconcile dynamic cloud configurations with deterministic performance. The service operator must somehow map the performance specification in the Service Level Agreement (SLA) to an adequate resource allocation in the virtualized infrastructure. We propose the use of a VNF profile to ease this process. This is illustrated by profiling the performance of four example network functions (a virtual router, switch, firewall and cache server) under varying workloads and resource configurations. We then compare several methods to derive a model from the profiled datasets. We select the most accurate method to train a model which predicts the services' performance as a function of incoming workload and allocated resources. The presented method can offer the service operator a recommended resource allocation for the targeted service, as a function of the targeted performance and maximum workload specified in the SLA. This helps to deploy the softwarized service with an optimal amount of resources to meet the SLA requirements, thereby avoiding unnecessary scaling steps.
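The paper compares several modelling methods without naming one in the abstract, so the sketch below uses an ordinary least-squares fit on a fabricated VNF profile and then searches for the smallest vCPU allocation whose predicted latency meets an SLA target at peak load. The profile numbers, model form and function names are assumptions.

```python
import numpy as np

# Fabricated VNF profile. Columns: offered load (kpps), allocated vCPUs, latency (ms).
profile = np.array([
    [10, 1, 4.0], [20, 1, 9.0], [30, 1, 19.0],
    [10, 2, 2.5], [20, 2, 4.5], [30, 2, 8.0],
    [10, 4, 1.8], [20, 4, 2.6], [30, 4, 4.1],
])

# Fit a simple linear model: latency ~ b0 + b1*load + b2*(load / vcpus).
X = np.column_stack([np.ones(len(profile)), profile[:, 0], profile[:, 0] / profile[:, 1]])
coef, *_ = np.linalg.lstsq(X, profile[:, 2], rcond=None)

def predicted_latency(load_kpps, vcpus):
    return coef[0] + coef[1] * load_kpps + coef[2] * load_kpps / vcpus

def recommend_vcpus(max_load_kpps, sla_latency_ms, options=(1, 2, 4, 8)):
    """Smallest allocation whose predicted latency at peak load meets the SLA."""
    for v in options:
        if predicted_latency(max_load_kpps, v) <= sla_latency_ms:
            return v
    return None

# Smallest option predicted to stay under 5 ms at a peak load of 30 kpps.
print(recommend_vcpus(max_load_kpps=30, sla_latency_ms=5.0))
```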

Journal ArticleDOI
TL;DR: An integrated framework and approach based on the Semantic Web and QoS characteristics (availability, response time, throughput and cost), as part of the service level agreement (SLA), will help the end user to match, select and filter cloud services and integrate cloud-service providers into a multi-cloud environment.
Abstract: Cloud computing provides a dynamic, heterogeneous and elastic environment by offering accessible 'cloud services' to end-users. The tasks involved in making cloud services available, such as matchmaking, selection and composition, are essential and closely related to each other. Integration of these tasks is critical for optimal composition and performance of the cloud service platform. More efficient solutions could be developed by considering cloud service tasks collectively, but the research and academic community have so far only considered these tasks individually. The purpose of this paper is to propose an integrated QoS-based approach for cloud service matchmaking, selection and composition using the Semantic Web. The authors propose a new approach using the Semantic Web and a quality of service (QoS) model to perform cloud service matchmaking, selection and composition, fulfilling the requirements of an end user. In the Semantic Web, the authors develop cloud ontologies to provide semantic descriptions to the service provider and requester, so as to automate the cloud service tasks. The paper considers QoS parameters, such as availability, throughput, response time and cost, for quality assurance and enhanced user satisfaction. The paper focuses on the development of an integrated framework and approach for cloud service life cycle phases, such as discovery, selection and composition, using QoS to enhance user satisfaction and the Semantic Web to achieve automation. To evaluate performance and usefulness, the paper uses a scenario based on a Healthcare Decision-Making System (HDMS). Results derived through the experiment show that the proposed prototype performs well for the defined set of cloud-service tasks. As a novel concept, the proposed integrated framework and approach for cloud service matchmaking, selection and composition, based on the Semantic Web and QoS characteristics (availability, response time, throughput and cost) as part of the service level agreement (SLA), will help the end user to match, select and filter cloud services and integrate cloud-service providers into a multi-cloud environment.

Journal ArticleDOI
TL;DR: This paper provides Multi-Dimensional Regression Host Utilization (MDRHU) algorithms that combine CPU, memory and network BW utilization via Euclidean Distance and absolute summation, respectively, and which provide improved results in terms of energy consumption and service level agreement violations.
Abstract: The use of cloud computing data centers is growing rapidly to meet the tremendous increase in demand for high-performance computing (HPC), storage and networking resources for business and scientific applications. Virtual machine (VM) consolidation involves the live migration of VMs to run on fewer physical servers, and thus allowing more servers to be switched off or run on low-power mode, as to improve the energy consumption efficiency, operating cost and CO2 emission. A crucial step in VM consolidation is host overload detection, which attempts to predict whether or not a physical server will be oversubscribed with VMs. In contrast to the majority of previous work which use CPU utilization as the sole indicator for host overload, a recent study has proposed a multiple regression host overload detection algorithm, which takes multiple factors into consideration: CPU, memory and network BW utilization. This paper provides further improvement along two directions. First, we provide Multi-Dimensional Regression Host Utilization (MDRHU) algorithms that combine CPU, memory and network BW utilization via Euclidean Distance (MDRHU-ED) and absolute summation (MDRHU-AS), respectively. This leads to improved results in terms of energy consumption and service level agreement violation. Second, the study explicitly takes real-world HPC workloads into consideration. Our extensive simulation study further illustrates the superiority of our proposed algorithms over existing methods. In particular, as compared to the most recently proposed multiple regression algorithm that is based on Geometric Relation (GR), our proposed algorithms provide an improvement of at least 12% in energy consumption, and an improvement of at least 80% in a metric that combines energy consumption, service-level-violation, and number of VM migrations.
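As a rough sketch of the MDRHU-ED idea of combining CPU, memory and network-bandwidth utilization through a Euclidean distance, the example below normalizes the combined value and checks it against an overload threshold; the normalization, threshold value and the omission of the regression/extrapolation step are simplifications, not the paper's algorithm.

```python
import math

def combined_utilization(cpu, mem, bw):
    """Combine CPU, memory and network-BW utilization (each 0..1) into one scalar
    via Euclidean distance, scaled so that (1, 1, 1) maps to 1.0."""
    return math.sqrt(cpu ** 2 + mem ** 2 + bw ** 2) / math.sqrt(3)

def overloaded(history, threshold=0.8):
    """Flag a host whose latest combined utilization exceeds a threshold.
    A real MDRHU detector would extrapolate the trend with multiple regression
    instead of checking only the most recent sample."""
    return combined_utilization(*history[-1]) > threshold

samples = [(0.70, 0.60, 0.40), (0.85, 0.75, 0.55), (0.95, 0.90, 0.70)]
print(overloaded(samples))  # True: the last sample combines to about 0.86
```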

Journal ArticleDOI
TL;DR: A reactive fault tolerance approach in the context of checkpointing is proposed and evaluated; the results indicate superior performance of the approach in terms of power consumption, response time, monetary cost and cloud capacity.
Abstract: The likelihood of failures rises in cloud computing systems as a result of their unstable nature. Additionally, the size of a cloud computing system varies with time, and thus failures become a common incident. Failures have a high impact on cloud performance and on the expected benefits for both customers and providers. Fault tolerance is an essential challenge facing cloud providers, who must mitigate the effects of failures and keep the Service Level Agreement (SLA) satisfied. Checkpointing is one of the best-known reactive fault tolerance techniques used in distributed computing. However, it can incur considerable overheads that depend on the checkpoint interval applied, and these overheads degrade the performance of the cloud. In this paper, a reactive fault tolerance approach in the context of checkpointing is proposed and evaluated with the aim of achieving better performance. The approach depends on applying a flexible checkpoint interval to reduce overheads. Simulation experiments indicate superior performance of the approach in terms of power consumption, response time, monetary cost and cloud capacity.
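The paper's flexible-interval scheme is not detailed in the abstract; as a point of reference for how a checkpoint interval trades overhead against re-computation, the sketch below uses Young's classic approximation, which is a well-known baseline rather than the authors' method.

```python
from math import sqrt

def young_checkpoint_interval(checkpoint_cost_s, mtbf_s):
    """Young's approximation for a near-optimal checkpoint interval:
    sqrt(2 * C * MTBF). It balances the cost of taking checkpoints against
    the expected re-computation after a failure."""
    return sqrt(2.0 * checkpoint_cost_s * mtbf_s)

# A 30 s checkpoint cost and a mean time between failures of 4 hours:
print(round(young_checkpoint_interval(30, 4 * 3600)))   # ~930 s between checkpoints
```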

Book ChapterDOI
14 Mar 2019
TL;DR: This paper presents a novel distributed resource allocation algorithm with the purpose of enabling seamless integration and deployment of different applications in an IoT infrastructure and analyzes and discusses the approach and the potential to minimize the latency of different IoT applications.
Abstract: With the increased success of the Internet of Things (IoT), conventional centralized cloud computing is encountering severe challenges (e.g., high latency, non-adaptive machine-type communication) and has proved insufficient to meet the stringent requirements of IoT applications. Besides requiring fast response times and increased security and privacy, these applications lack computational resources at the edge of the network. Motivated by these challenges, new technologies are driving a trend that distributes the computational resources and shifts the function of centralized cloud computing to the edge. Several edge computing technologies, the edge and fog paradigms, originating from different backgrounds, have been emerging to overcome these challenges. However, to fully utilize these limited devices, we need advanced resource management techniques. In this paper, we present a novel distributed resource allocation algorithm with the purpose of enabling seamless integration and deployment of different applications in an IoT infrastructure. The algorithm decides (i) the mapping of an IoT application at the edge of the network, and (ii) dynamic migration of parts of the application, such that the Service Level Agreement (SLA) is satisfied. Furthermore, we analyze and discuss our approach and its potential to minimize the latency of different IoT applications.

Journal ArticleDOI
TL;DR: The proposed prediction model is based on past usage patterns and aims to provide optimal resource management without violations of the agreed service-level conditions in cloud data centers.
Abstract: Cloud computing is an innovative computing paradigm designed to provide a flexible and low‐cost way to deliver information technology services on demand over the Internet. Proper scheduling and load balancing of the resources are required for the efficient operations in the distributed cloud environment. Since cloud computing is growing rapidly and customers are demanding better performance and more services, scheduling and load balancing of the cloud resources have become very interesting and important area of research. As more and more consumers assign their tasks to cloud, service‐level agreements (SLAs) between consumers and providers are emerging as an important aspect. The proposed prediction model is based on the past usage pattern and aims to provide optimal resource management without the violations of the agreed service‐level conditions in cloud data centers. It considers SLA in both the initial scheduling stage and in the load balancing stage, and it looks into different objectives to achieve the minimum makespan, the minimum degree of imbalance, and the minimum number of SLA violations. The experimental results show the effectiveness of the proposed system compared with other state‐of‐the‐art algorithms.

Journal ArticleDOI
TL;DR: The comparative analysis of results demonstrates that the proposed resource scheduling algorithm performs better than existing algorithms and can be used to improve the efficacy of cloud resources.
Abstract: Cloud computing provides resources to customers based on application demand under service level agreement (SLA) rules. Service providers concentrate on providing requirement-based resources to fulfill quality of service (QoS) requirements. However, it has become a challenge to manage service-oriented resources due to uncertainty and the dynamic demand for cloud services. Task scheduling is one way of distributing resources by estimating the unpredictable workload. Therefore, an efficient resource scheduling technique is needed to distribute appropriate virtual machines (VMs). Swarm intelligence, involving a metaheuristic approach, is suitable for handling such uncertainty problems meticulously. In this research paper, we present an efficient resource scheduling technique using the ant colony optimization (ACO) algorithm, with the objective of minimizing execution cost and time. The comparative analysis of results demonstrates that the proposed scheduling algorithm performed better as co...
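The abstract does not spell out the ACO formulation, so the sketch below shows only the generic building blocks of ACO-based task scheduling: a probabilistic VM choice weighted by pheromone and a heuristic desirability, followed by evaporation and deposit. The parameter values, heuristic and quality measure are assumptions for illustration.

```python
import random

def pick_vm(pheromone, heuristic, alpha=1.0, beta=2.0):
    """ACO construction step: choose a VM for the next task with probability
    proportional to pheromone**alpha * heuristic**beta, where the heuristic can
    encode, e.g., the inverse of expected execution time or cost."""
    weights = [(p ** alpha) * (h ** beta) for p, h in zip(pheromone, heuristic)]
    r, acc = random.uniform(0, sum(weights)), 0.0
    for vm, w in enumerate(weights):
        acc += w
        if r <= acc:
            return vm
    return len(weights) - 1

def evaporate_and_deposit(pheromone, chosen_vm, quality, rho=0.1):
    """Evaporate all trails, then reinforce the chosen VM's trail by the
    solution quality (e.g., inverse of makespan or execution cost)."""
    return [(1 - rho) * p + (quality if vm == chosen_vm else 0.0)
            for vm, p in enumerate(pheromone)]

pheromone = [1.0, 1.0, 1.0]
heuristic = [1 / 8.0, 1 / 5.0, 1 / 12.0]      # inverse expected task time on each VM
vm = pick_vm(pheromone, heuristic)
pheromone = evaporate_and_deposit(pheromone, vm, quality=0.2)
print(vm, [round(p, 3) for p in pheromone])
```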