
Showing papers by "Ivona Brandic" published in 2014


Journal ArticleDOI
TL;DR: This article defines a systematic approach for analyzing the energy efficiency of the most important data center domains, including server and network equipment, as well as cloud management systems and appliances consisting of software utilized by end users.
Abstract: Cloud computing is today’s most emphasized Information and Communications Technology (ICT) paradigm that is directly or indirectly used by almost every online user. However, such great significance comes with the support of a great infrastructure that includes large data centers comprising thousands of server units and other supporting equipment. Their share of power consumption amounts to between 1.1% and 1.5% of the total electricity use worldwide and is projected to rise even more. Such alarming numbers demand rethinking the energy efficiency of such infrastructures. However, before making any changes to infrastructure, an analysis of the current status is required. In this article, we perform a comprehensive analysis of an infrastructure supporting the cloud computing paradigm with regard to energy efficiency. First, we define a systematic approach for analyzing the energy efficiency of the most important data center domains, including server and network equipment, as well as cloud management systems and appliances consisting of software utilized by end users. Second, we utilize this approach to analyze the available scientific and industrial literature on state-of-the-art practices in data centers and their equipment. Finally, we extract existing challenges and highlight future research directions.

258 citations


Journal ArticleDOI
TL;DR: It is demonstrated in this paper that the combination of negotiation, brokering and deployment using SLA-aware extensions and autonomic computing principles is required for achieving reliable and efficient service operation in distributed environments.

74 citations


Proceedings Article
01 Jan 2014
TL;DR: In this article, the authors propose an approach for automatic service selection that considers the SLA claims of SaaS providers, based on prospect theory for service ranking, which is a natural choice for scoring comparable services according to user preferences.
Abstract: Cloud computing popularity is growing rapidly, and consequently the number of companies offering their services in the form of Software-as-a-Service (SaaS) or Infrastructure-as-a-Service (IaaS) is increasing. The diversity and usage benefits of IaaS offers are encouraging SaaS providers to lease resources from the Cloud instead of operating their own data centers. However, the question remains how to, on the one hand, exploit Cloud benefits to reduce maintenance overhead and, on the other hand, maximize the satisfaction of customers with a wide range of requirements. The complexity of addressing these issues prevents many SaaS providers from benefiting from Cloud infrastructures. In this paper, we propose the HS4MC approach for automatic service selection that considers the SLA claims of SaaS providers. The novelty of our approach lies in the use of prospect theory for service ranking, which is a natural choice for scoring comparable services according to user preferences. The HS4MC approach first constructs a set of SLAs based on the accumulated SaaS provider requirements. Then, it selects the set of services that best fulfills the SLAs. We evaluate our approach in a simulated environment by comparing it with a state-of-the-art utility-based algorithm. The evaluation results show that our approach selects services that more effectively satisfy the SLAs.

15 citations
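As a rough illustration of the kind of prospect-theory scoring the abstract refers to, the following Python sketch ranks hypothetical service offers against SLA reference points. The value-function parameters, attribute names, and weights are illustrative assumptions, not taken from the HS4MC paper.

```python
# Illustrative prospect-theory value function (Kahneman & Tversky style):
# gains and losses relative to a reference point are valued asymmetrically,
# with losses weighted more heavily (loss aversion).
def prospect_value(outcome, reference, alpha=0.88, beta=0.88, lam=2.25):
    delta = outcome - reference
    if delta >= 0:
        return delta ** alpha                 # diminishing sensitivity to gains
    return -lam * ((-delta) ** beta)          # losses loom larger than gains

def score_service(offer, sla_targets, weights):
    """Score a service offer against SLA reference points.

    offer / sla_targets: dicts mapping an SLA attribute (e.g. availability)
    to its offered / required value; weights: relative attribute importance.
    """
    return sum(
        weights[attr] * prospect_value(offer[attr], sla_targets[attr])
        for attr in sla_targets
    )

# Hypothetical example: rank two IaaS offers against one SLA.
sla = {"availability": 0.99, "throughput": 100.0}
weights = {"availability": 0.7, "throughput": 0.3}
offers = {
    "provider_A": {"availability": 0.995, "throughput": 90.0},
    "provider_B": {"availability": 0.985, "throughput": 120.0},
}
ranked = sorted(offers, key=lambda p: score_service(offers[p], sla, weights), reverse=True)
print(ranked)
```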


Proceedings ArticleDOI
21 Jul 2014
TL;DR: This paper presents a Model-driven development (MDD) approach for building and managing arbitrary cloud services, together with an architecture of a Cloud Management System (CMS) that manages such services by automatically transforming service models from the abstract representation to the actual deployment.
Abstract: The popularity of Cloud Computing has given rise to the Everything-as-a-Service (XaaS) concept, where each service can comprise a large variety of software and hardware elements. Although they share the same concept, each of these services represents a complex system that has to be deployed and managed by a provider using individual tools for almost every element. This usually leads to a combination of different deployment tools that are unable to interact with each other to provide a unified and automatic service deployment procedure. Therefore, the tools are usually used manually or integrated specifically for a single cloud service, which in turn requires changing the entire deployment procedure whenever the service is modified. In this paper we apply a Model-driven development (MDD) approach to building and managing arbitrary cloud services. We define a metamodel of a cloud service called CoPS, which describes a cloud service as a composition of software and hardware elements using three sequential models, namely Component, Product and Service. We also present an architecture of a Cloud Management System (CMS) that is able to manage such services by automatically transforming the service models from the abstract representation to the actual deployment. Finally, we validate our approach by realizing three real-world use cases using a prototype implementation of the proposed CMS architecture.

12 citations
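The Component/Product/Service decomposition described above can be pictured roughly as follows. This Python sketch uses made-up field names and is not the actual CoPS metamodel; it only illustrates a three-level service composition flattened into a deployment plan.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative three-level composition loosely following the Component ->
# Product -> Service structure described in the abstract; field names are
# assumptions, not the actual CoPS metamodel.
@dataclass
class Component:
    name: str                     # e.g. "mysql", "apache", "vm-small"
    kind: str                     # "software" or "hardware"
    config: dict = field(default_factory=dict)

@dataclass
class Product:
    name: str                     # a deployable unit composed of components
    components: List[Component] = field(default_factory=list)

@dataclass
class Service:
    name: str                     # the cloud service offered to users
    products: List[Product] = field(default_factory=list)

    def deployment_plan(self) -> List[str]:
        """Flatten the model into an ordered list of deployment steps."""
        return [
            f"deploy {c.kind} component '{c.name}' as part of product '{p.name}'"
            for p in self.products
            for c in p.components
        ]

web_shop = Service("web-shop", [
    Product("database-tier", [Component("mysql", "software"), Component("vm-small", "hardware")]),
    Product("web-tier", [Component("apache", "software"), Component("vm-medium", "hardware")]),
])
print("\n".join(web_shop.deployment_plan()))
```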


Book ChapterDOI
16 Sep 2014
TL;DR: This paper proposes a novel service level agreement (SLA) specification approach for offering VMs with different availability and price values guaranteed over multiple SLAs to enable flexible energy-aware cloud management, and determines the optimal number of such SLAs as well as their guaranteed availability and price values.
Abstract: Novel energy-aware cloud management methods dynamically reallocate computation across geographically distributed data centers to leverage regional electricity price and temperature differences. As a result, a managed virtual machine (VM) may suffer occasional downtimes. Current cloud providers only offer high availability VMs, without enough flexibility to apply such energy-aware management. In this paper we show how to analyse past traces of dynamic cloud management actions based on electricity prices and temperatures to estimate VM availability and price values. We propose a novel service level agreement (SLA) specification approach for offering VMs with different availability and price values guaranteed over multiple SLAs to enable flexible energy-aware cloud management. We determine the optimal number of such SLAs as well as their availability and price guaranteed values. We evaluate our approach in a user SLA selection simulation using Wikipedia and Grid’5000 workloads. The results show higher customer conversion and 39% average energy savings per VM.

8 citations
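To make the trace-based estimation concrete, here is a minimal Python sketch that derives an availability value from a list of past downtime intervals. The trace format and the example numbers are assumptions for illustration; the paper derives its values from electricity-price- and temperature-driven reallocation traces.

```python
# Minimal sketch: availability over an observation window is the fraction of
# time the VM was not down, given a list of (start, end) downtime intervals.
def estimate_availability(downtimes, window_hours):
    downtime_hours = sum(end - start for start, end in downtimes)
    return 1.0 - downtime_hours / window_hours

# Hypothetical traces for two SLA classes with different migration aggressiveness.
aggressive = [(10.0, 12.5), (40.0, 41.0), (300.0, 304.0)]   # hours offline
conservative = [(100.0, 100.5)]
window = 30 * 24.0                                           # one month

for name, trace in [("energy-optimized SLA", aggressive), ("high-availability SLA", conservative)]:
    print(f"{name}: availability = {estimate_availability(trace, window):.4f}")
```

Such per-class availability estimates, together with the corresponding operating costs, would then back the guaranteed values offered in each SLA.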


Proceedings ArticleDOI
03 Apr 2014
TL;DR: A multi-dimensional resource allocation scheme to automate the deployment of data-intensive large-scale applications in Multi-Cloud environments is presented, and the results demonstrate the effectiveness of the implemented matching and scheduling policies in improving workflow execution performance and reducing the amount and costs of inter-Cloud data transfers.
Abstract: Large-scale applications have emerged as an important class of applications in distributed computing. Today, the economic and technical benefits offered by Cloud computing technology encourage many users to migrate their applications to the Cloud. On the other hand, the variety of existing Clouds requires them to decide which providers to choose in order to achieve the expected performance and service quality while keeping costs low. In this paper, we present a multi-dimensional resource allocation scheme to automate the deployment of data-intensive large-scale applications in Multi-Cloud environments. The scheme applies a two-level approach in which the target Clouds are first matched with respect to the Service Level Agreement (SLA) requirements and user payment, and then the application workloads are distributed to the selected Clouds using a data-locality-driven scheduling policy. Using an implemented Multi-Cloud simulation environment, we evaluated our approach with a real data-intensive workflow application in different scenarios. The experimental results demonstrate the effectiveness of the implemented matching and scheduling policies in improving the workflow execution performance and reducing the amount and costs of inter-Cloud data transfers.

8 citations
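A rough sketch of the two-level idea described above, assuming a simple data model for Clouds and tasks: candidate Clouds are first filtered by SLA and budget, then each task is placed on the Cloud holding most of its input data. The field names and the locality heuristic are illustrative, not the paper's actual matching and scheduling policies.

```python
# Level 1: filter candidate Clouds by SLA requirements and budget.
def match_clouds(clouds, sla, budget):
    return [
        c for c in clouds
        if c["availability"] >= sla["availability"] and c["price_per_hour"] <= budget
    ]

# Level 2: place each task on the matched Cloud that already stores the
# largest share of its input data, to reduce inter-Cloud transfers.
def schedule(tasks, matched_clouds):
    plan = {}
    for task in tasks:
        best = max(
            matched_clouds,
            key=lambda c: (c["datasets"].get(task["input"], 0), -c["price_per_hour"]),
        )
        plan[task["name"]] = best["name"]
    return plan

clouds = [
    {"name": "cloud-A", "availability": 0.999, "price_per_hour": 0.12, "datasets": {"genome.fa": 80}},
    {"name": "cloud-B", "availability": 0.995, "price_per_hour": 0.08, "datasets": {"genome.fa": 20, "reads.fq": 100}},
]
tasks = [{"name": "align", "input": "reads.fq"}, {"name": "annotate", "input": "genome.fa"}]
print(schedule(tasks, match_clouds(clouds, {"availability": 0.99}, budget=0.15)))
```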


BookDOI
01 Jan 2014
TL;DR: This paper proposes a formal approach to extract BPAs from process model collections, connecting the process layer and the BPA layer to assure consistency between them.
Abstract: Business process management has become a standard commodity for managing and improving business operations in organisations. Large process model collections have emerged, and managing and maintaining them has become a major area of research. Business process architectures (BPAs) have been introduced to support this task, focusing on interdependencies between processes. Both the process and BPA layers are often modeled independently, creating inconsistencies between them. However, a consistent overview of process interdependencies at the BPA level is of high importance, especially for assessing the impact of change when optimising business process collaborations. In this paper, we propose a formal approach to extract BPAs from process model collections, connecting the process layer and the BPA layer to assure consistency between them. Interdependencies between process models are reflected in trigger and message flows at the BPA level, giving a high-level overview of process collaboration as well as allowing its formal verification with existing approaches. We show the extraction of BPAs from process model collections using a running example modeled in BPMN.

7 citations
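Roughly, the extraction sketched above could look like the following, assuming each process exposes the messages it sends and receives and the processes it triggers. This is only an illustrative approximation in Python, not the paper's formal BPMN-based definition.

```python
# Hypothetical process model collection: each process lists sent/received
# messages and the processes its completion triggers.
processes = {
    "Order":    {"sends": {"invoice_request"}, "receives": set(),               "triggers": {"Shipping"}},
    "Billing":  {"sends": set(),               "receives": {"invoice_request"}, "triggers": set()},
    "Shipping": {"sends": set(),               "receives": set(),               "triggers": set()},
}

def extract_bpa(procs):
    """Derive BPA-level message flows and trigger flows from the collection."""
    message_flows = [
        (src, dst, msg)
        for src, s in procs.items()
        for msg in s["sends"]
        for dst, d in procs.items()
        if src != dst and msg in d["receives"]
    ]
    trigger_flows = [(src, dst) for src, s in procs.items() for dst in s["triggers"]]
    return message_flows, trigger_flows

msgs, triggers = extract_bpa(processes)
print("message flows:", msgs)      # [('Order', 'Billing', 'invoice_request')]
print("trigger flows:", triggers)  # [('Order', 'Shipping')]
```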


Proceedings ArticleDOI
15 Dec 2014
TL;DR: This paper introduces the CPU Performance Coefficient (CPU-PC), a novel performance metric for measuring the real-time quality of CPU provisioning in virtualized environments, and provides a measurement of the proposed metric for the customer as well, enabling the latter to monitor the quality of rented resources.
Abstract: The Cloud represents an emerging paradigm that provides on-demand computing resources, such as CPU. The resources are customized in quantity through various virtual machine (VM) flavours, which are deployed on top of time-shared infrastructure, where a single server can host several VMs. However, their Quality of Service (QoS) is limited and boils down to VM availability, which does not provide any performance guarantees for the shared underlying resources. Consequently, providers usually over-provision their resources trying to increase utilization, while customers can suffer from poor performance due to increased concurrency. In this paper, we introduce the CPU Performance Coefficient (CPU-PC), a novel performance metric for measuring the real-time quality of CPU provisioning in virtualized environments. The metric isolates the impact of the provisioned CPU on the performance of the customer's application, allowing the provider to measure the quality of provisioned resources and manage them accordingly. Additionally, we provide a measurement of the proposed metric for the customer as well, thus enabling the latter to monitor the quality of rented resources. For the evaluation, we use three real-world applications used in existing Cloud services and correlate the CPU-PC metric with the response time of the applications. An R-squared correlation of over 0.9557 indicates the applicability of our approach in the real world.

6 citations
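The evaluation step, correlating the metric with application response time, can be reproduced in outline as follows. The CPU-PC samples and response times below are made-up numbers, and the CPU-PC formula itself is not reproduced here; the sketch only shows the linear-fit R-squared computation.

```python
import numpy as np

# Correlate a CPU-provisioning-quality metric with application response time
# via a linear fit and report R-squared (hypothetical sample values).
cpu_pc      = np.array([1.0, 1.2, 1.5, 1.9, 2.4, 3.1, 3.8])   # metric samples
response_ms = np.array([110, 128, 150, 189, 240, 305, 378])   # measured response times

slope, intercept = np.polyfit(cpu_pc, response_ms, deg=1)
predicted = slope * cpu_pc + intercept

ss_res = np.sum((response_ms - predicted) ** 2)
ss_tot = np.sum((response_ms - response_ms.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"R^2 = {r_squared:.4f}")   # values close to 1 indicate a strong linear relation
```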


01 Jan 2014
TL;DR: Presents a meta-modelling system that automates the labor-intensive, and therefore time-consuming and expensive, process of manually cataloging and verifying transactions.
Abstract: In: The 8th International Conference for Internet Technology and Secured Transactions (ICITST-2013). IEEE; 2013 (published in 2014).

2 citations


Proceedings Article
01 Aug 2014
TL;DR: In this paper, an approach to manage Cloud infrastructures by means of Autonomic Computing is proposed, where the basic structure of the autonomic systems is represented by a control loop that monitors (M) Cloud parameters, analyses (A) them, plans (P) actions and executes (E) them; the full cycle is known as MAPE.
Abstract: To guarantee the vision of Quality of Service (QoS), different goals in terms of SLAs have to be dynamically met between the Cloud provider and the customer (Breskovic et al., 2013). This SLA enactment should involve little human-based interaction in order to guarantee the scalability and efficient resource utilization of the system. To achieve this, we start from Autonomic Computing, examine the autonomic control loop and adapt it to govern Cloud Computing infrastructures. We propose an approach to manage Cloud infrastructures by means of Autonomic Computing. The basic structure of autonomic systems is represented by a control loop that monitors (M) Cloud parameters, analyses (A) them, plans (P) actions and executes (E) them; the full cycle is known as MAPE. The MAPE-K loop stores the knowledge (K) required for decision-making in a knowledge base (KB) that is accessed by the individual phases. This talk addresses the research question of finding a suitable knowledge management (KM) system (i.e., a technique for how stored information should be used) and determining how it interacts with the other phases for dynamically and efficiently allocating resources.
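As a minimal illustration of the MAPE-K structure described above, the following Python sketch wires the four phases around a shared knowledge base. The thresholds, actions, and sensor values are hypothetical placeholders, not the knowledge-management technique the talk investigates.

```python
# Minimal sketch of a MAPE-K control loop for Cloud resource allocation;
# the knowledge base here is a plain dict standing in for a real KM system.
class MapeKLoop:
    def __init__(self, knowledge_base):
        self.kb = knowledge_base                        # K: shared knowledge

    def monitor(self):
        # In a real system this would query hypervisor / application sensors.
        return {"cpu_utilisation": 0.92, "sla_response_ms": 450}

    def analyse(self, metrics):
        threshold = self.kb.get("sla_response_threshold_ms", 400)
        return metrics["sla_response_ms"] > threshold   # SLA violation imminent?

    def plan(self, violation):
        return [{"action": "scale_out", "vms": 1}] if violation else []

    def execute(self, actions):
        for a in actions:
            print(f"executing {a}")                     # would call the Cloud API
        self.kb["last_actions"] = actions               # feed experience back into K

    def run_once(self):
        self.execute(self.plan(self.analyse(self.monitor())))

MapeKLoop({"sla_response_threshold_ms": 400}).run_once()
```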

Proceedings Article
19 Jun 2014
TL;DR: This paper argues that market platforms for the Cloud paradigm cannot (yet) be rigidly defined and must be able to progress and evolve with the paradigm, and presents an alternative approach: autonomic markets.
Abstract: One of the major challenges facing the Cloud paradigm is the emergence of suitable economic platforms for the trading of Cloud services. Today, many researchers investigate how specific Cloud market platforms can be conceived and in some cases implemented. However, such endeavours consider only specific types of actors, business models, or Cloud abstractions. We argue that market platforms for the Cloud paradigm cannot (yet) be rigidly defined, and require the ability to progress and evolve with the paradigm. In this paper, we discuss an alternative approach: autonomic markets. Autonomic markets automatically adapt to changed environmental conditions based upon a given concept of “performance”. We describe the autonomic MAPE loop in the context of electronic markets and consider the types of knowledge produced and required for decision making. Finally, we present a conceptual framework for a market simulator, a critical tool for autonomic markets, based upon experiences using the GridSim simulation tool.