
Showing papers on "Service-level agreement" published in 2009


Journal ArticleDOI
TL;DR: This paper defines Cloud computing and provides the architecture for creating Clouds with market-oriented resource allocation by leveraging technologies such as Virtual Machines (VMs), and offers insights on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain Service Level Agreement (SLA)-oriented resource allocation.

5,850 citations


01 Jan 2009
TL;DR: This paper proposes a mechanism for managing SLAs in a cloud computing environment using the Web Service Level Agreement framework, developed for SLA monitoring and SLA enforcement in a Service Oriented Architecture (SOA).
Abstract: Cloud computing that provides cheap and pay-as-you-go computing resources is rapidly gaining momentum as an alternative to traditional IT infrastructure. As more and more consumers delegate their tasks to cloud providers, Service Level Agreements (SLAs) between consumers and providers emerge as a key aspect. Due to the dynamic nature of the cloud, continuous monitoring of Quality of Service (QoS) attributes is necessary to enforce SLAs. Numerous other factors, such as trust (in the cloud provider), also come into consideration, particularly for enterprise customers that may outsource their critical data. This complex nature of the cloud landscape warrants a sophisticated means of managing SLAs. This paper proposes a mechanism for managing SLAs in a cloud computing environment using the Web Service Level Agreement (WSLA) framework, developed for SLA monitoring and SLA enforcement in a Service Oriented Architecture (SOA). We use the third-party support feature of WSLA to delegate monitoring and enforcement tasks to other entities in order to solve the trust issues. We also present a real-world use case to validate our proposal.
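
A minimal sketch of the kind of metric-versus-objective check such an SLA monitor performs. The names and structures below are illustrative, not the actual WSLA API; in the paper's setup the evaluation would be delegated to a trusted third party:

```python
import operator
from dataclasses import dataclass
from typing import Callable

@dataclass
class SLObjective:
    metric: str                               # e.g. "availability"
    comparator: Callable[[float, float], bool]
    threshold: float

def check_sla(objectives, measurements):
    """Return the objectives violated in one monitoring cycle.
    In a third-party setup, this runs at an entity both sides trust."""
    violations = []
    for slo in objectives:
        value = measurements.get(slo.metric)
        if value is not None and not slo.comparator(value, slo.threshold):
            violations.append((slo.metric, value, slo.threshold))
    return violations

slos = [SLObjective("availability", operator.ge, 0.999),
        SLObjective("response_time_ms", operator.le, 200.0)]
print(check_sla(slos, {"availability": 0.995, "response_time_ms": 180.0}))
# -> [('availability', 0.995, 0.999)]
```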

411 citations


Proceedings ArticleDOI
14 Dec 2009
TL;DR: Aneka, an enterprise cloud computing solution, harnesses the power of compute resources by relying on private and public clouds and delivers to users the desired Quality of Service (QoS) as mentioned in this paper.
Abstract: Scientific computing often requires the availability of a massive number of computers for performing large-scale experiments. Traditionally, these needs have been addressed by using high-performance computing solutions and installed facilities such as clusters and supercomputers, which are difficult to set up, maintain, and operate. Cloud computing provides scientists with a completely new model of utilizing the computing infrastructure. Compute resources, storage resources, and applications can be dynamically provisioned (and integrated within the existing infrastructure) on a pay-per-use basis, and released when they are no longer needed. Such services are often offered within the context of a Service Level Agreement (SLA), which ensures the desired Quality of Service (QoS). Aneka, an enterprise Cloud computing solution, harnesses the power of compute resources by relying on private and public Clouds and delivers to users the desired QoS. Its flexible and service-based infrastructure supports multiple programming paradigms that let Aneka address a variety of different scenarios: from finance applications to computational science. As examples of scientific computing in the Cloud, we present a preliminary case study on using Aneka for the classification of gene expression data and the execution of an fMRI brain imaging workflow.

335 citations


Proceedings ArticleDOI
15 Feb 2009
TL;DR: This paper proposes a pub-sub based model which simplifies the integration of sensor networks with cloud-based community-centric applications, and discusses issues and proposes reasonable solutions to enable this framework.
Abstract: In the past few years, wireless sensor networks (WSNs) have been gaining increasing attention because of their potential to enable novel and attractive solutions in areas such as industrial automation, environmental monitoring, transportation business, health care, etc. If we add this collection of sensor-derived data to various Web-based social networks, virtual communities, blogs, etc., we can have a remarkable transformation in our ability to "see" ourselves and our planet. Our primary goal is to facilitate connecting sensors, people, and software objects to build community-centric sensing applications. However, the computational tools needed to launch this exploration may be more appropriately built from the data center "Cloud" computing model than the traditional HPC approaches. In this paper, we propose a framework to enable this exploration by integrating sensor networks with the emerging data center "cloud" model of computing. There are many challenges to enabling this framework; we propose a pub-sub based model which simplifies the integration of sensor networks with cloud-based community-centric applications. There is also a need for internetworking cloud providers in case of violation of the service level agreement with users. We discuss these issues and propose reasonable solutions.
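
A toy sketch of the pub-sub idea, assuming nothing beyond a topic-based broker that decouples sensor publishers from cloud-side subscribers (all names are hypothetical):

```python
from collections import defaultdict

class PubSubBroker:
    """Topic-based broker: sensors publish events, cloud applications
    subscribe, and neither side needs to know about the other."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, event):
        for callback in self.subscribers[topic]:
            callback(event)

broker = PubSubBroker()
broker.subscribe("temperature", lambda e: print("cloud app received:", e))
broker.publish("temperature", {"sensor": "s1", "value": 22.5})
```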

181 citations


Journal ArticleDOI
TL;DR: This paper introduces MetaCDN, a system that exploits 'Storage Cloud' resources, creating an integrated overlay network that provides a low cost, high performance CDN for content creators and consumers.

157 citations


Book ChapterDOI
23 Nov 2009
TL;DR: This work proposes an approach for predicting SLA violations at runtime, which uses measured and estimated facts (instance data of the composition or QoS of used services) as input for a prediction model, based on machine learning regression techniques.
Abstract: SLAs are contractually binding agreements between service providers and consumers, mandating concrete numerical target values which the service needs to achieve. For service providers, it is essential to prevent SLA violations as much as possible to enhance customer satisfaction and avoid penalty payments. Therefore, it is desirable for providers to predict possible violations before they happen, while it is still possible to take counteractive measures. We propose an approach for predicting SLA violations at runtime, which uses measured and estimated facts (instance data of the composition or QoS of used services) as input for a prediction model. The prediction model is based on machine learning regression techniques and trained using historical process instances. We present the basics of our approach and briefly validate our ideas based on an illustrative example.
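
As a rough illustration of the prediction idea, not the authors' implementation: a regression model trained on historical process instances (feature vectors and durations below are made up) that flags a likely SLO violation for a running instance. Assumes scikit-learn and NumPy are available:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Historical process instances: measured/estimated facts per instance
# (here: payload size, number of partner services; both invented)
# and the observed end-to-end duration in milliseconds.
X_hist = np.array([[120, 3], [200, 5], [90, 2], [240, 6], [150, 4]])
y_hist = np.array([410.0, 690.0, 300.0, 820.0, 510.0])

model = LinearRegression().fit(X_hist, y_hist)

def predict_violation(facts, slo_ms):
    """Estimate the SLO metric for a running instance; flag a likely
    violation while there is still time to take counteractive measures."""
    predicted = float(model.predict(np.array([facts]))[0])
    return predicted, predicted > slo_ms

print(predict_violation([210, 5], slo_ms=700.0))
```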

117 citations


Proceedings ArticleDOI
06 Jul 2009
TL;DR: An architecture for monitoring SLAs is proposed, which satisfies the two main requirements introduced by SLA establishment: the availability of historical data for evaluating SLA offers and the assessment of the capability to monitor the terms in an SLA offer.
Abstract: In modern service economies, service provisioning needs to be regulated by complex SLA hierarchies among providers of heterogeneous services, defined at the business, software, and infrastructure layers. Starting from the SLA Management framework defined in the SLA@SOI EU FP7 Integrated Project, we focus on the relationship between establishment and monitoring of such SLAs, showing how the two processes become tightly interleaved in order to provide meaningful mechanisms for SLA management. We first describe the process for SLA establishment adopted within the framework; then, we propose an architecture for monitoring SLAs, which satisfies the two main requirements introduced by SLA establishment: the availability of historical data for evaluating SLA offers and the assessment of the capability to monitor the terms in an SLA offer.
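
The second requirement, assessing whether the terms of an SLA offer can actually be monitored, reduces at its simplest to checking offer terms against the metrics the monitoring infrastructure can observe. A hedged sketch with invented term names, not the SLA@SOI architecture:

```python
def assess_monitorability(offer_terms, observable_metrics):
    """Split SLA offer terms into monitorable and unmonitorable sets."""
    monitorable = {t for t in offer_terms if t in observable_metrics}
    return monitorable, set(offer_terms) - monitorable

observable = {"availability", "throughput", "response_time"}
ok, missing = assess_monitorability(
    {"availability", "mean_time_to_repair"}, observable)
print(ok, missing)
# An offer containing unmonitorable terms should be renegotiated or rejected.
```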

114 citations


Proceedings ArticleDOI
27 Aug 2009
TL;DR: This paper outlines usage scenarios and a set of requirements for emerging Cloud computing infrastructures, and proposes an accounting and billing architecture to be used within RESERVOIR, which is the primary focus for this architecture.
Abstract: Emerging Cloud computing infrastructures provide computing resources on demand based on postpaid principles. For example, the RESERVOIR project develops an infrastructure capable of delivering elastic capacity that can automatically be increased or decreased in order to cost-efficiently fulfill established Service Level Agreements. This infrastructure also makes it possible for a data center to extend its total capacity by subcontracting additional resources from collaborating data centers, making the infrastructure a federation of Clouds. For accounting and billing, such infrastructures call for novel approaches to account for capacity that varies over time and for services (or, more precisely, virtual machines) that migrate between physical machines or even between data centers. For billing, new approaches are needed to simultaneously manage postpaid and prepaid payment schemes for capacity that varies over time in response to user needs. In this paper, we outline usage scenarios and a set of requirements for such infrastructures, and propose an accounting and billing architecture to be used within RESERVOIR. Even though the primary focus of this architecture is accounting and billing between resource consumers and infrastructure providers, future support for inter-site billing is also taken into account.
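
A minimal sketch of what "accounting for capacity that varies over time" means: integrate (capacity x time) between usage samples and price the result postpaid. This illustrates the problem the paper addresses, not the RESERVOIR architecture; all figures are invented:

```python
def accounting_charge(samples, unit_price_per_vm_hour):
    """Postpaid charge for elastic capacity, from periodic usage samples
    given as (timestamp_seconds, active_vms). Each interval is billed at
    the capacity held at its start."""
    charge = 0.0
    for (t0, vms), (t1, _) in zip(samples, samples[1:]):
        charge += vms * (t1 - t0) / 3600.0 * unit_price_per_vm_hour
    return charge

# Capacity changed twice during a two-hour window.
samples = [(0, 2), (1800, 5), (5400, 3), (7200, 3)]
print(accounting_charge(samples, unit_price_per_vm_hour=0.10))  # -> 0.75
```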

111 citations


Proceedings ArticleDOI
20 Jul 2009
TL;DR: Based on the life cycle of a self-manageable Cloud service, a resource submission taxonomy is derived, and the application of autonomic computing to Cloud services is discussed based on a service mediation and negotiation bootstrapping case study.
Abstract: Cloud computing represents a promising computing paradigm, where computational power is provided as a utility. An important characteristic of Cloud computing, in contrast to similar paradigms like Grid or HPC computing, is the provision of non-functional guarantees to users. Thereby, applications can be executed considering predefined execution time, price, security, or privacy standards, which are guaranteed in real time in the form of Service Level Agreements (SLAs). However, due to changing components, workload, external conditions, and hardware and software failures, established SLAs may be violated. Thus, frequent user interactions with the system, which are usually necessary in case of failures, might turn out to be an obstacle for the success of Cloud computing. In this paper we discuss self-manageable Cloud services: in case of failures, environmental changes, and the like, services manage themselves automatically following the principles of autonomic computing. Based on the life cycle of a self-manageable Cloud service we derive a resource submission taxonomy. Furthermore, we present an architecture for the implementation of self-manageable Cloud services. Finally, we discuss the application of autonomic computing to Cloud services based on a service mediation and negotiation bootstrapping case study.
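
The autonomic-computing principles the paper invokes are commonly realized as a monitor-analyze-plan-execute (MAPE) loop; a toy sketch under that assumption, with invented thresholds and actions:

```python
import random
import time

def monitor():
    """Gather SLA-relevant runtime facts (simulated here)."""
    return {"response_time_ms": random.uniform(100, 400)}

def analyze(facts, slo_ms=250.0):
    """Detect an (impending) SLA violation."""
    return facts["response_time_ms"] > slo_ms

def plan(violated):
    """Choose a counteraction without involving the user."""
    return "scale_out" if violated else "steady"

def execute(action):
    if action == "scale_out":
        print("provisioning an extra service instance")

for _ in range(3):                      # one iteration per control period
    execute(plan(analyze(monitor())))
    time.sleep(0.1)
```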

84 citations


Journal ArticleDOI
TL;DR: The paper discusses the spreading "X as a service" phenomenon in the IT arena, cloud computing, and SLAs, noting that "X as a Service" has entered the vocabulary in many forms, mostly in computing contexts.
Abstract: The paper talks about the spreading "X as a service" phenomenon in the IT arena, cloud computing, and SLAs. In the article, the author notes that "X as a Service" has entered the vocabulary in many forms, mostly in computing contexts. Cloud computing can lead to significant savings as well as significant service improvements, but cloud computing vendors would like to reap most of those benefits. An IT organization can change this by bringing past experience and good judgment into play and adapting them to the new situation; that is the mark of being a "professional" in the IT field. Experience, understanding, and thoughtfulness can help the contract people write an effective service level agreement (SLA) that puts the organization's needs in good legal form. Ideally, the SLA will include all needed services and none that aren't needed.

81 citations


Proceedings ArticleDOI
15 Jun 2009
TL;DR: This document presents the design and evaluation of a system that enables live migration of VMs running large enterprise applications without severely disrupting their live services, even across the Internet.
Abstract: Recent developments in virtualisation technology have resulted in its widespread use across datacentres. Ultimately, the goal of this technology is to utilise server resources more efficiently to reduce Total Cost of Ownership (TCO) by abstracting hardware and consolidating servers. This results in lower equipment costs and less electrical consumption for server power and cooling. However, the TCO benefits of holistic virtualisation extend beyond server assets. One of these aspects relates to the ability to migrate Virtual Machines (VMs) across distinct physical hosts over a network. However, limitations of the current migration technology start to appear when it is applied to larger application systems such as SAP ERP or SAP ByDesign. Such systems consume a large amount of memory and cannot be transferred as seamlessly as smaller ones, creating service interruption. Limiting the impact and optimising migration becomes even more important with the generalisation of Service Level Agreements (SLAs). In this document we present our design and evaluation of a system that enables live migration of VMs running large enterprise applications without severely disrupting their live services, even across the Internet. By combining well-known techniques and innovative ones we can reduce system down-time and resource impact for migrating live, large Virtual Execution Environments.
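
Among the "well-known techniques" the paper builds on, the standard baseline for live migration is iterative pre-copy. A sketch of that baseline only, with invented page counts and dirtying rates, not the paper's system:

```python
def precopy_migrate(memory_pages, dirtied_in_round, max_rounds=5, stop_set=64):
    """Iterative pre-copy: ship all pages while the VM keeps running, then
    repeatedly re-ship the pages dirtied during the previous round, until
    the remaining dirty set is small enough for a brief stop-and-copy."""
    to_send = set(memory_pages)
    for rnd in range(max_rounds):
        print(f"round {rnd}: sending {len(to_send)} pages, VM still running")
        to_send = set(dirtied_in_round(rnd))     # pages dirtied meanwhile
        if len(to_send) <= stop_set:
            break
    print(f"stop-and-copy: pausing VM to transfer last {len(to_send)} pages")

# Invented workload: the dirty set halves each round.
precopy_migrate(range(10000), lambda r: range(1000 >> r))
```

Large-memory enterprise workloads are exactly the case where the dirty set fails to shrink, which motivates the additional techniques the paper proposes.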

Book ChapterDOI
22 Nov 2009
TL;DR: This paper presents the design and implementation of a working prototype built on a EUCALYPTUS-based heterogeneous compute cloud that actively monitors the response time of each virtual machine assigned to the farm and adaptively scales up the application to satisfy a SLA promising a specific average response time.
Abstract: Current service-level agreements (SLAs) offered by cloud providers make guarantees about quality attributes such as availability. However, although one of the most important quality attributes from the perspective of the users of a cloud-based Web application is its response time, current SLAs do not guarantee response time. Satisfying a maximum average response time guarantee for Web applications is difficult due to unpredictable traffic patterns, but in this paper we show how it can be accomplished through dynamic resource allocation in a virtual Web farm. We present the design and implementation of a working prototype built on a EUCALYPTUS-based heterogeneous compute cloud that actively monitors the response time of each virtual machine assigned to the farm and adaptively scales up the application to satisfy an SLA promising a specific average response time. We demonstrate the feasibility of the approach in an experimental evaluation with a testbed cloud and a synthetic workload. Adaptive resource management has the potential to increase the usability of Web applications while maximizing resource utilization.
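
A toy version of the control loop such a prototype might use: add a VM when measured average response time exceeds the SLA target, release one when it is comfortably below. Thresholds are invented; this is not the paper's algorithm:

```python
def autoscale(avg_response_ms, target_ms, n_vms, headroom=0.8):
    """Scale decision for a virtual Web farm under a response-time SLA."""
    if avg_response_ms > target_ms:
        return n_vms + 1                       # violating: scale up
    if avg_response_ms < headroom * target_ms and n_vms > 1:
        return n_vms - 1                       # comfortably below: scale down
    return n_vms                               # within band: hold steady

print(autoscale(480.0, target_ms=400.0, n_vms=3))  # -> 4
print(autoscale(250.0, target_ms=400.0, n_vms=4))  # -> 3
```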

Journal ArticleDOI
TL;DR: A joint specification of QoS definitions with a sophisticated service resilience characterization is proposed, and a concept called quality of resilience is defined, which can be used as a tool for characterization of network reliability, as well as comparison and selection of recovery methods.
Abstract: With the increased role of resilience in modern networks, the existing quality of service is required to be expanded with service availability and maintainability. Recently, studies have shown the strong limitation of the common availability metrics for measuring the user's quality of experience. In this article a joint specification of QoS definitions with a sophisticated service resilience characterization is proposed, and a concept called quality of resilience is defined. In this unified performance metric, the frequency and length of service interruption are evaluated. It can be used as a tool for characterization of network reliability, as well as comparison and selection of recovery methods. Additionally, by including it in service level agreements, new and more complex requirements of commercial applications can be guaranteed.
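
A sketch of why interruption frequency and length matter beyond plain availability: the two interruption profiles below yield identical availability but different resilience characteristics. Metric names are invented, not the article's formal definition:

```python
def resilience_metrics(interruptions_s, period_s):
    """Report availability together with the frequency and length of
    service interruptions over an observation period."""
    downtime = sum(interruptions_s)
    n = len(interruptions_s)
    return {
        "availability": 1.0 - downtime / period_s,
        "interruption_count": n,
        "max_interruption_s": max(interruptions_s, default=0),
        "mean_interruption_s": downtime / n if n else 0.0,
    }

# Same availability, very different experience for interactive services:
print(resilience_metrics([30, 30], period_s=86400))
print(resilience_metrics([60], period_s=86400))
```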

Proceedings ArticleDOI
01 Dec 2009
TL;DR: This work proposes an agent-based testbed for bolstering the discovery of Cloud resources and SLA negotiation, and shows that broker agents are successful in matching requests to resources, and consumer and provider agents aresuccessful in negotiating for mutually acceptable time slots.
Abstract: In a business model for Cloud computing, users pay providers for consumption of their computing capabilities. This work proposes an agent-based testbed for bolstering the discovery of Cloud resources and SLA negotiation. In the testbed, provider and consumer agents act as intermediaries between providers and consumers. Through a 4-stage resource discovery process (selection, evaluation, filtering, and recommendation), a set of broker agents match consumers' requests to advertisements from providers. Following the matching of requests to resources, consumer and provider agents negotiate for mutually acceptable resource time slots. Empirical results show that broker agents are successful in matching requests to resources, and consumer and provider agents are successful in negotiating for mutually acceptable time slots.
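
A deliberately simplified stand-in for the final negotiation stage: the consumer proposes time slots in preference order and the provider accepts the first one it has free. The testbed's actual agent protocol is richer; this only illustrates converging on a mutually acceptable slot:

```python
def negotiate_slot(consumer_preferences, provider_free_slots, max_rounds=10):
    """Return (agreed_slot, rounds_used), or (None, max_rounds) on failure."""
    for rnd, slot in enumerate(consumer_preferences[:max_rounds]):
        if slot in provider_free_slots:
            return slot, rnd + 1
    return None, max_rounds

slot, rounds = negotiate_slot(consumer_preferences=[9, 10, 14],
                              provider_free_slots={10, 14, 15})
print(slot, rounds)  # -> 10 2
```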

Book ChapterDOI
28 Mar 2009
TL;DR: This paper describes the approach to context-aware adaptive services within the IST PLASTIC project and makes use of Chameleon, a formal framework for adaptive Java applications.
Abstract: The near future envisions a pervasive heterogeneous computing infrastructure that makes it possible for mobile users to run software services on a variety of devices, from networks of devices to stand-alone wireless resource-constrained ones. To ensure that users meet their non-functional requirements by experiencing the best Quality of Service according to their needs and specific contexts of use, services need to be context-aware and adaptable. The development and execution of such services is a big challenge, and it is far from being solved. In this paper we present our experience in this direction by describing our approach to context-aware adaptive services within the IST PLASTIC project. The approach makes use of Chameleon, a formal framework for adaptive Java applications.

Proceedings ArticleDOI
15 Jun 2009
TL;DR: The first results in establishing adaptable, versatile, and dynamic services considering negotiation bootstrapping and service mediation achieved in context of the Foundations of Self-Governing ICT Infrastructures (FoSII) project are presented.
Abstract: Nowadays, novel computing paradigms such as Grid or Cloud computing are gaining more and more importance. In the case of Cloud computing, users pay for the usage of the computing power provided as a service. Beforehand, they can negotiate specific functional and non-functional requirements relevant for the application execution. However, providing computing power as a service poses various research challenges. On the one hand, dynamic, versatile, and adaptable services are required, which can cope with system failures and environmental changes. On the other hand, human interaction with the system should be minimized. In this paper we present the first results in establishing adaptable, versatile, and dynamic services considering negotiation bootstrapping and service mediation, achieved in the context of the Foundations of Self-Governing ICT Infrastructures (FoSII) project. We discuss novel meta-negotiation and SLA mapping solutions for Grid/Cloud services, bridging the gap between current QoS models and Grid/Cloud middleware and representing important prerequisites for the establishment of autonomic Grid/Cloud services. We present document models for the specification of meta-negotiations and SLA mappings. Thereafter, we discuss a sample architecture for the management of meta-negotiations and SLA mappings.

Journal ArticleDOI
01 Jul 2009
TL;DR: A novel agent-based framework is presented which utilises the agents' ability of negotiation, interaction, and cooperation to facilitate autonomous SLA management in the context of service composition provision and results from simulations show that by integrating agents and web services the framework can address issues ofSLA management drawn from sophisticated service composition scenarios.
Abstract: In the web services environment, a service level agreement (SLA) refers to mutually agreed understandings and expectations between service consumers and providers regarding service provision. Although management of SLAs is critical to wide adoption of web services technologies in the real world, support for it is very limited nowadays, especially in web service composition scenarios: adequate frameworks and technologies supporting various SLA operations such as SLA formation, enforcement, and recovery are lacking. This paper presents a novel agent-based framework which utilises the agents' abilities of negotiation, interaction, and cooperation to facilitate autonomous SLA management in the context of service composition provision. Based on this framework, mechanisms for autonomous SLA operations are proposed and discussed. Results from simulations show that by integrating agents and web services the framework can address issues of SLA management drawn from sophisticated service composition scenarios.

Book ChapterDOI
25 Nov 2009
TL;DR: An approach to NFP-based ranking of WSs providing support for semantic mediation, consideration of expressive NFP descriptions both on provider and client side, and novel matching functions for handling either quantitative or qualitative NFPs is introduced.
Abstract: Service discovery is a key activity to actually identify the Web services (WSs) to be invoked and composed. Since it is likely that more than one service fulfill a set of user requirements, some ranking mechanisms based on non-functional properties (NFPs) are needed to support automatic or semi-automatic selection. This paper introduces an approach to NFP-based ranking of WSs providing support for semantic mediation, consideration of expressive NFP descriptions both on provider and client side, and novel matching functions for handling either quantitative or qualitative NFPs. The approach has been implemented in a ranker that integrates reasoning techniques with algorithmic ones in order to overcome current and intrinsic limitations of semantic Web technologies and to provide algorithmic techniques with more flexibility. Moreover, to the best of our knowledge, this paper presents the first experimental results related to NFP-based ranking of WSs considering a significant number of expressive NFP descriptions, showing the effectiveness of the approach.
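
A hedged sketch of NFP-based matching and ranking, with one quantitative and one qualitative matching function. The paper's matching functions and semantic mediation are considerably more sophisticated; names, weights, and property values here are invented:

```python
def match_quantitative(offered, required, higher_is_better=True):
    """Normalized degree of match for a numeric NFP (e.g. throughput)."""
    if higher_is_better:
        return min(offered / required, 1.0) if required else 1.0
    return min(required / offered, 1.0) if offered else 1.0

def match_qualitative(offered, required, ordering):
    """Ordinal match for a qualitative NFP (e.g. support level)."""
    return 1.0 if ordering.index(offered) >= ordering.index(required) else 0.0

def rank(services, request):
    scores = {}
    for name, nfps in services.items():
        score = match_quantitative(nfps["throughput"], request["throughput"])
        score += match_qualitative(nfps["support"], request["support"],
                                   ordering=["bronze", "silver", "gold"])
        scores[name] = score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

services = {"A": {"throughput": 120, "support": "gold"},
            "B": {"throughput": 80, "support": "silver"}}
print(rank(services, {"throughput": 100, "support": "silver"}))
# -> [('A', 2.0), ('B', 1.8)]
```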

Journal ArticleDOI
TL;DR: The designed mechanism provides differentiation between distinct categories of service consumers as well as protection against server overloads and does not require any modification to the system software of the host server, or to its application logic.
Abstract: Nowadays, enterprises providing services through the Internet often require online services supplied by other enterprises. This entails the cooperation of enterprise servers using Web services technology. The service exchange between enterprises must be carried out with a determined level of quality, which is usually established in a service level agreement (SLA). However, the fulfilment of SLAs is not an easy task and requires equipping the servers with special control mechanisms which control the quality of the services supplied. The first contribution of this research work is the analysis and definition of the main requirements that these control mechanisms must fulfil. The second contribution is the design of a control mechanism which fulfils these requirements and overcomes numerous deficiencies posed by previous mechanisms. The designed mechanism provides differentiation between distinct categories of service consumers as well as protection against server overloads. Furthermore, it scales in a cluster and does not require any modification to the system software of the host server, or to its application logic.
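
A minimal illustration of category differentiation combined with overload protection: lower-priority consumer categories are refused admission at lower utilization thresholds, reserving headroom for higher-priority ones. Thresholds and category names are invented, not the paper's mechanism:

```python
def admit(category, server_load, capacity):
    """Admission decision: refuse low-priority requests first as the
    server approaches overload."""
    thresholds = {"gold": 0.95, "silver": 0.85, "bronze": 0.70}
    return server_load / capacity < thresholds[category]

print(admit("bronze", server_load=75, capacity=100))  # False: shed under load
print(admit("gold", server_load=75, capacity=100))    # True: still admitted
```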

Journal ArticleDOI
TL;DR: It is shown that a VMI agreement should be arranged into parts dealing with the generic and legal sides of the agreement, whereas the technical aspects and the relation‐specific topics should be addressed in the annexes.
Abstract: Purpose – The purpose of this paper is to define the standard structure of a vendor managed inventory (VMI) agreement, which can be used as a guideline for the early definition of the agreement. Design/methodology/approach – Starting from an industrial application of relevance, the information flow and the technical details, which are to be defined before the operation startup, are identified and discussed. These data are used as the key points for the definition of the basic frame of the agreement. A particular emphasis is given to the "Technical Specification" and the "Service Level Agreement" sections. Findings – It is shown that a VMI agreement should be arranged into parts dealing with the generic and legal sides of the agreement, whereas the technical aspects and the relation‐specific topics should be addressed in the annexes. This increases the flexibility of the agreement in that, as the VMI relationship evolves over time, changes will affect only the annexes, leaving the main body of the agreement unchanged.

Book ChapterDOI
22 Jan 2009
TL;DR: This chapter overviews the PLASTIC validation framework in which different techniques can be combined for the verification of both functional and extra-functional properties, spanning over both off-line and on-line testing stages.
Abstract: The emergence of the Service Oriented Architecture (SOA) is changing the way in which software applications are developed. A service-oriented application consists of the dynamic composition of autonomous services independently developed by different organizations and deployed on heterogeneous networks. Therefore, validation of SOA poses several new challenges, without offering any discount on the more traditional testing problems. In this chapter we overview the PLASTIC validation framework, in which different techniques can be combined for the verification of both functional and extra-functional properties, spanning both off-line and on-line testing stages. The former stage concerns development-time testing, in which services are exercised in a simulated environment. The latter foresees the monitoring of a service's live usage, to dynamically reveal possible deviations from the expected behaviour. Some techniques and tools which fit within the outlined framework are presented.

Book ChapterDOI
19 Jun 2009
TL;DR: This paper proposes a pro-active energy efficient technique for change management in cloud computing environments that takes prior SLA (Service Level Agreement) requests into account while determining time slots in which changes should take place.
Abstract: The continuously increasing cost of managing IT systems has led many companies to outsource their commercial services to external hosting centers. Cloud computing has emerged as one of the enabling technologies that allow such external hosting to be done efficiently. Like any IT environment, a Cloud computing environment requires a high level of maintenance to be able to provide services to its customers. Replacing defective items (hardware/software), applying security patches, or upgrading firmware are just a few examples of the typical maintenance procedures needed in such environments. While taking resources down for maintenance, applying efficient change management techniques is a key factor in the success of the cloud. As energy has become a precious resource, research has been conducted towards devising protocols that minimize energy consumption in IT systems. In this paper, we propose a pro-active, energy-efficient technique for change management in cloud computing environments. We formulate the management problem as an optimization problem that aims at minimizing the total energy consumption of the cloud. Our proposed approach is pro-active in the sense that it takes prior SLA (Service Level Agreement) requests into account while determining time slots in which changes should take place.
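
A toy formulation of the scheduling decision: pick the maintenance time slot minimizing a weighted sum of energy cost and conflicts with previously accepted SLA requests. Weights and figures are invented; the paper formulates a fuller optimization problem:

```python
def best_change_slot(slots, sla_requests_per_slot, energy_cost, w_sla=10.0):
    """Pro-active change scheduling: prefer slots that are both cheap in
    energy and free of conflicts with prior SLA requests."""
    def cost(t):
        return energy_cost[t] + w_sla * sla_requests_per_slot[t]
    return min(slots, key=cost)

slots = [0, 1, 2, 3]
sla_requests_per_slot = {0: 5, 1: 1, 2: 0, 3: 4}
energy_cost = {0: 3.0, 1: 2.5, 2: 4.0, 3: 2.0}
print(best_change_slot(slots, sla_requests_per_slot, energy_cost))  # -> 2
```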

Patent
Mark Cameron Little
27 Feb 2009
TL;DR: In this article, components of a distributed computing system, including hardware components and the software components that operate on them, are monitored, and the performance characteristics of the components are determined based on the monitoring.
Abstract: Components of a distributed computing system are monitored, the components including hardware components and software components that operate on the hardware components. At least one of the software components is a service that includes a service level agreement. Performance characteristics of the components are determined based on the monitoring. The performance characteristics of the service are compared to the service level agreement to determine whether the service level agreement has been violated. At least one of the service or an additional service collocated with the service is migrated based on the performance characteristics of the components if the service level agreement has been violated.
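
A minimal sketch of the loop the claim describes, monitor, compare to the SLA, migrate on violation, with invented names and a least-loaded host placement standing in for the patent's migration logic:

```python
def enforce_sla(services, hosts):
    """Compare each service's measured characteristics to its SLA and
    migrate it to the least-loaded host if the SLA is violated."""
    for svc in services:
        if svc["latency_ms"] > svc["sla_latency_ms"]:
            target = min(hosts, key=lambda h: h["load"])
            print(f"SLA violated: migrating {svc['name']} to {target['name']}")

enforce_sla(
    services=[{"name": "orders", "latency_ms": 320, "sla_latency_ms": 250}],
    hosts=[{"name": "h1", "load": 0.9}, {"name": "h2", "load": 0.4}],
)
```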

Book ChapterDOI
22 Jan 2009
TL;DR: In this paper, the authors discuss some of these challenges and possible solutions making reference to the approach undertaken in the IST PLASTIC project for a specific instance of Softure focused on software for Beyond 3G (B3G) networks.
Abstract: Software in the near-ubiquitous future (Softure) will need to cope with variability, as software systems get deployed on an increasingly large diversity of computing platforms and operate in different execution environments. Heterogeneity of the underlying communication and computing infrastructure, mobility inducing changes to the execution environments, and therefore changes to the availability of resources and continuously evolving requirements require software systems to be adaptable according to context changes. Softure should also be reliable and meet the users' performance requirements and needs. Moreover, due to its pervasiveness, and in order to make adaptation effective and successful, adaptation must be considered in conjunction with dependability, i.e., no matter what adaptation is performed, the system must continue to guarantee a certain degree of Quality of Service (QoS). Hence, Softure must also be dependable, which is made more complex given the highly dynamic nature of service provision. Supporting the development and execution of Softure systems raises numerous challenges that involve languages, methods, and tools for the systems' thorough design and validation in order to ensure dependability of the self-adaptive systems that are targeted. However, these challenges, taken in isolation, are not new in the software domain. In this paper we discuss some of these challenges and possible solutions, making reference to the approach undertaken in the IST PLASTIC project for a specific instance of Softure focused on software for Beyond 3G (B3G) networks.

Book ChapterDOI
14 Jan 2009
TL;DR: This work uses the TCP and UCP network formalisms to allow for a simple yet very flexible specification of hard constraints, preferences, and tradeoffs over NFPs as well as service level objectives (SLO).
Abstract: When implementing a business or software activity in SOA, a match is sought between the required functionality and that provided by a web service. In selecting services to perform a certain business functionality, often only hard constraints are considered. However, client requirements over QoS or other NFP types are often soft and allow tradeoffs. We use a graphical language for specifying hard constraints, preferences and tradeoffs over NFPs as well as service level objectives (SLO). In particular, we use the TCP and UCP network formalisms to allow for a simple yet very flexible specification of hard constraints, preferences, and tradeoffs over these properties. Algorithms for selecting web services according to the hard constraints, as well as for optimizing the selected web service configuration, according to the specification, were developed.
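
A stand-in sketch for the selection step: filter candidates by hard constraints, then optimize soft preferences. A flat additive utility replaces the TCP/UCP-net conditional preference machinery here, so this is only a simplification of the paper's approach:

```python
def select_service(candidates, hard_constraints, utility):
    """Keep candidates satisfying all hard constraints; among the feasible
    ones, return the candidate maximizing the soft-preference utility."""
    feasible = [c for c in candidates if all(p(c) for p in hard_constraints)]
    return max(feasible, key=utility, default=None)

candidates = [{"price": 5, "latency": 120}, {"price": 9, "latency": 40}]
hard = [lambda c: c["latency"] <= 150]            # hard SLO: latency cap
best = select_service(candidates, hard,
                      utility=lambda c: -0.5 * c["price"] - 0.05 * c["latency"])
print(best)  # -> {'price': 9, 'latency': 40}
```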

Journal ArticleDOI
TL;DR: This framework is presented as a possible solution to the management of resources that need a given Quality of Service (QoS); the QoS requirements of all the parties should converge on a formal agreement, defined as the Service Level Agreement.

Patent
12 Mar 2009
TL;DR: In this article, a system and method for measuring compliance with a service level agreement for communications is presented, where thresholds are set for a core information rate and a user network interface core information rate, operable to avoid contention.
Abstract: A system and method for measuring compliance with a service level agreement for communications. A threshold is set for a core information rate and a user network interface core information rate, operable to avoid contention. Frame loss is measured on a core network and on the legs of the network. A determination is made that the service level agreement is noncompliant in response to determining that there is frame loss and the user network interface core information rate has not been exceeded or the core committed information rate has not been exceeded.
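
One plausible reading of the claim's decision logic (the and/or precedence in the claim text is ambiguous, and all parameter names are invented):

```python
def sla_noncompliant(frame_loss, uni_rate, uni_threshold,
                     core_cir, core_cir_threshold):
    """Frame loss while neither configured rate was exceeded suggests the
    loss is attributable to the provider, hence SLA noncompliance."""
    rates_within_limits = (uni_rate <= uni_threshold
                           or core_cir <= core_cir_threshold)
    return frame_loss > 0 and rates_within_limits

print(sla_noncompliant(frame_loss=12, uni_rate=80, uni_threshold=100,
                       core_cir=450, core_cir_threshold=500))  # -> True
```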

Journal ArticleDOI
TL;DR: A new concept, the Spanning Tree Elevation Protocol (STEP), is proposed that increases MEN performance while supporting QoS, including traffic policing and service differentiation, and greatly enhances network throughput.

Patent
08 Oct 2009
TL;DR: In this article, a system and method of determining performance metrics for inclusion in a Service Level Agreement (SLA) between a customer and a host computing service provider is presented, which includes: receiving a provisioning request from a customer including receiving computing performance requirement parameters and environmental parameters for including inclusion in the SLA from the customer; deploying discovery tools to identify relevant infrastructure components based on performance metrics; and data is obtained from the probes while changing infrastructure components for simulating and assessing impact of one or more different customer scenarios for different performance policies.
Abstract: A system and method of determining performance metrics for inclusion in a Service Level Agreement (SLA) between a customer and a host computing service provider. The method comprises: receiving a provisioning request from a customer, including receiving computing performance requirement parameters and environmental parameters for inclusion in the SLA from the customer; and deploying discovery tools to identify relevant infrastructure components based on performance metrics. Based on identification of the customer's relevant infrastructure components, probes are deployed and installed. Then, data is obtained from the probes while changing infrastructure components for simulating and assessing the impact of one or more different customer scenarios for different performance policies. In one aspect, the obtained data is used to identify and implement an a priori risk-sharing agreement between the customer and the service provider. In a further aspect, the data obtained for simulating and assessing the impact of one or more different customer policies includes data for simulating and assessing different environmental policies.