
Showing papers on "Service level published in 2007"


Journal ArticleDOI
TL;DR: In this article, the authors consider a supply chain design problem in which the decision maker must decide the number and locations of distribution centers (DCs), where customers face random demand and each DC maintains safety stock in order to achieve a target service level for the customers it serves.
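The safety-stock/service-level relationship the TL;DR alludes to is commonly modeled with a normal approximation of demand over the lead time. A minimal sketch under that assumption (the paper's joint location-inventory model is far richer; the numbers are illustrative):

```python
from math import sqrt
from statistics import NormalDist

def safety_stock(service_level: float, sigma_daily: float,
                 lead_time_days: float) -> float:
    """Safety stock for a target cycle service level under the
    standard normal approximation of lead-time demand."""
    z = NormalDist().inv_cdf(service_level)    # safety factor z
    return z * sigma_daily * sqrt(lead_time_days)

# 95% service level, daily demand std dev of 20 units, 9-day lead time
ss = safety_stock(0.95, 20, 9)   # ~ 1.645 * 20 * 3, about 98.7 units
```

Because z grows nonlinearly as the target service level approaches 1, higher service levels are disproportionately expensive in stock, which is the trade-off such location models balance against DC fixed costs.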

255 citations


Proceedings ArticleDOI
20 May 2007
TL;DR: How the different quality attributes of a system can be positively or negatively affected by the use of SOA technology is discussed, as well as possible tradeoffs and existing efforts to achieve that quality.
Abstract: The SOA approach is a very popular choice today for the implementation of distributed systems. The use of SOA or more specifically the Web services technology is an important architecture decision. An architect should understand how different quality attributes for a system are impacted by that decision. While there are significant benefits with respect to interoperability and modifiability, other qualities such as performance, security and testability are concerns. This paper discusses how the different quality attributes of a system can be positively or negatively affected by the use of such technology. It describes the factors related to each attribute, as well as possible tradeoffs and existing efforts to achieve that quality. The paper also discusses open issues in service level agreements that are used to contract the level of service quality between service providers and users.

209 citations


Journal ArticleDOI
TL;DR: The model can be used by both the National Blood Service and by hospital managers as a decision support tool to investigate different procedures and policies and present results for a representative medium-sized hospital.
Abstract: This case study is concerned with analysing policies for managing the blood inventory system in a typical UK hospital supplied by a regional blood centre. The objective of the project is to improve procedures and outcomes by modelling the entire supply chain for that hospital, from donor to recipient. The supply chain of blood products is broken down into material flows and information flows. Discrete-event simulation is used to determine ordering policies leading to reductions in shortages and wastage, increased service levels, improved safety procedures and reduced costs, by employing better system coordination. In this paper we describe the model and present results for a representative medium-sized hospital. The model can be used by both the National Blood Service and by hospital managers as a decision support tool to investigate different procedures and policies.

200 citations


Journal ArticleDOI
TL;DR: A multi-objective maximal covering-based emergency vehicle location model is proposed that addresses the issue of determining the best base locations for a limited number of vehicles so that the service level objectives are optimized.

186 citations


Journal ArticleDOI
TL;DR: In this paper, a general market for an industry of competing service facilities is analyzed, where firms differentiate themselves by their price levels and the waiting time their customers experience, as well as different attributes not determined directly through competition.
Abstract: We analyze a general market for an industry of competing service facilities. Firms differentiate themselves by their price levels and the waiting time their customers experience, as well as different attributes not determined directly through competition. Our model therefore assumes that the expected demand experienced by a given firm may depend on all of the industry's price levels as well as a (steady-state) waiting-time standard, which each of the firms announces and commits itself to by proper adjustment of its capacity level. We focus primarily on a separable specification, which in addition is linear in the prices. (Alternative nonseparable or nonlinear specifications are discussed in the concluding section.) We define a firm's service level as the difference between an upper-bound benchmark for the waiting-time standard (wI ) and the firm's actual waiting-time standard. Different types of competition and the resulting equilibrium behavior may arise, depending on the industry dynamics through which the firms select their strategic choices. In one case, firms may initially select their waiting-time standards, followed by a selection of their prices in a second stage (service-level first). Alternatively, the sequence of strategic choices may be reversed (price first) or, as a third alternative, the firms may make their choices simultaneously (simultaneous competition). We model each of the service facilities as a single-server M/M/1 queueing facility, which receives a given firm-specific price for each customer served. Each firm incurs a given cost per customer served as well as cost per unit of time proportional to its adopted capacity level.
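The capacity adjustment described above can be made concrete for the M/M/1 case: the expected sojourn time is W = 1/(mu - lambda), so a firm committing to a waiting-time standard w must provision a service rate of at least lambda + 1/w. A minimal sketch (the rates and the standard are illustrative; the paper's pricing and cost structure are omitted):

```python
def mm1_expected_wait(arrival_rate: float, service_rate: float) -> float:
    """Expected sojourn time W = 1/(mu - lambda) in an M/M/1 queue."""
    assert service_rate > arrival_rate, "queue must be stable"
    return 1.0 / (service_rate - arrival_rate)

def capacity_for_standard(arrival_rate: float, wait_standard: float) -> float:
    """Minimum service rate so the expected sojourn time meets the
    committed waiting-time standard (inverting W = 1/(mu - lambda))."""
    return arrival_rate + 1.0 / wait_standard

# 8 customers/hour and a committed standard of 0.5 hours
mu = capacity_for_standard(8.0, 0.5)   # -> 10.0 customers/hour
```

Announcing a tighter standard (smaller w) therefore requires strictly more capacity, which is what links the firms' service-level choices to their capacity costs in the competition model.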

184 citations


Journal ArticleDOI
TL;DR: In this research, the service level agreements for a service composition are established through autonomous agent negotiation and an innovative framework is proposed in which the service consumer is represented by a set of agents who negotiate quality of service constraints with the service providers for various services in the composition.

180 citations


Journal ArticleDOI
TL;DR: The thesis addresses dependability differentiation in connection-oriented backbone communication networks, proposes the use of adaptive management to increase compliance with interval availability guarantees, and introduces a classification scheme for approaches to differentiation.
Abstract: Unintentional failures affect links and nodes in communication networks. Recovery mechanisms are the key tool for achieving the dependability required by the services using the network. However, high dependability in communication networks comes at a high cost in terms of the capacity needed by these mechanisms. The traffic from all services and users is carried by the same backbone network. Since the users and services have different requirements, and users have different willingness to pay for a high quality of service, it is desirable to have methods that enable provision of different levels of dependability in the same network, i.e. dependability differentiation. The thesis addresses dependability differentiation in connection-oriented backbone communication networks. Two methods to provide connections meeting differentiated guarantees on the asymptotic availability are proposed. The first of these uses a novel flexible arrangement for dedicated protection denoted a protection pattern. The protection pattern is used in a proposed distributed connection management system. The system is compared with alternative proposals based on centralized management and shows good performance. The second proposal uses shared protection, which may potentially use fewer resources in terms of bandwidth, but has higher complexity than dedicated protection. The proposed system is based on rules to control the sharing to enable provision of guarantees. Simulation results show that the proposed method performs significantly better than an alternative strategy based on dedicated protection. A different approach to availability-guaranteed services is to offer guarantees on the interval availability, a measure commonly used in Service Level Agreements (SLAs). The thesis contains a proposal of using adaptive management to increase compliance with interval availability guarantees.
Different adaptive management policies are proposed and compared to alternative static provisioning policies in a case study. The thesis also addresses the problem of measuring dependability by simulation. To reduce the simulation effort needed to obtain precise estimates of dependability attributes, a rare-event simulation technique has been applied to the well-known Network Simulator 2 (NS2). The results show that the technique is applicable to this type of simulation scenario, but the gain is modest. The thesis also contains a broad literature survey of dependability differentiation research. This is the first survey of the topic and hence in itself a significant contribution. A classification scheme for how to approach differentiation is proposed and a critical evaluation of the state of the art is given. This thesis contributes to filling in some of the "gaps" identified, but there are still significant challenges ahead before differentiation may be deployed in operational networks.

156 citations


Journal ArticleDOI
TL;DR: It is shown what types of coordination mechanisms allow the decentralized supply chain to generate aggregate expected profits equal to the optimal profits in a centralized system, and how the parameters of these (perfect) coordination schemes can be determined.
Abstract: In a decentralized supply chain, with long-term competition between independent retailers facing random demands and buying from a common supplier, how should wholesale and retail prices be specified in an attempt to maximize supply-chain-wide profits? We show what types of coordination mechanisms allow the decentralized supply chain to generate aggregate expected profits equal to the optimal profits in a centralized system, and how the parameters of these (perfect) coordination schemes can be determined. We assume that the retailers face stochastic demand functions that may depend on all of the firms' prices as well as a measure of their service levels, e.g., the steady-state availability of the products. We systematically compare the coordination mechanisms when retailers compete only in terms of their prices, and when they engage in simultaneous price and service competition.

150 citations


Patent
29 Mar 2007
TL;DR: In this article, a method is described for provisioning one or more resources in a distributed computing network to ensure compliance with a service-level agreement associated with a computer application; techniques are disclosed for network distribution and provisioning of applications, such as transactional applications and parallel applications, across multiple administrative domains.
Abstract: Techniques are disclosed for network distribution and provisioning of applications, such as transactional applications and parallel applications, across multiple administrative domains that ensure compliance with service level agreements. For example, a method of provisioning one or more resources in a distributed computing network to ensure compliance with a service level agreement associated with a computer application includes the following steps. Network performance is monitored between a local domain and one or more cooperating domains connected to the local domain by network paths. A present or predicted violation of the service level agreement is identified based on at least a portion of results of the monitoring step. One or more cooperating domains are selected that can effect compliance with the service level agreement by instantiating one or more network resources within at least one of the selected cooperating domains in response to a request from the local domain. Reconfiguration of the local domain is effectuated to allow the computer application to make use of the one or more newly instantiated network resources within the selected cooperating domain.

147 citations


Journal ArticleDOI
TL;DR: In this paper, the authors report on an in-depth exploration of service quality in an Information Technology service department in a Higher Education Institute (HEI) and evaluate the instrument used.
Abstract: Purpose – The purpose of the study is to report on an in‐depth exploration of service quality in an Information Technology service department in a Higher Education Institute (HEI) and to evaluate the instrument used.Design/methodology/approach – The study surveys customers using the SERVQUAL instrument, which is one of the most widely used and applied scales for the measurement of perceived service quality.Findings – A focused and rigorous examination of customers' views of the importance of the service elements is provided. The study confirmed previous research that the application of SERVQUAL in the public sector can produce different service quality dimensions from those found in private sector services. It was also found that the service quality gaps, and the relative importance of the five dimensions of service quality, were the same for students and staff, albeit with some specific differences. Reliability was the most important dimension for all customers and the greatest improvement in service qua...

140 citations


Journal ArticleDOI
TL;DR: The analysis reveals that a buyer could indeed orchestrate a competition among potential suppliers to promote service quality, and under identical allocation functions, the existence of a demand-independent service cost gives a distinct advantage to SS-type competitions.
Abstract: We consider a single buyer who wishes to outsource a fixed demand for a manufactured good or service at a fixed price to a set of potential suppliers. We examine the value of competition as a mechanism for the buyer to elicit service quality from the suppliers. We compare two approaches the buyer could use to orchestrate this competition: (1) a supplier-allocation (SA) approach, which allocates a proportion of demand to each supplier with the proportion allocated to a supplier increasing in the quality of service the supplier promises to offer, and (2) a supplier-selection (SS) approach, which allocates all demand to one supplier with the probability that a particular supplier is selected increasing in the quality of service to which the supplier commits. In both cases, suppliers incur a cost whenever they receive a positive portion of demand, with this cost increasing in the quality of service they offer and the demand they receive. The analysis reveals that (a) a buyer could indeed orchestrate a competition among potential suppliers to promote service quality, (b) under identical allocation functions, the existence of a demand-independent service cost gives a distinct advantage to SS-type competitions, in terms of higher service quality for the buyer and higher expected profit for the supplier, (c) the relative advantage of SS versus SA depends on the magnitude of demand-independent versus demand-dependent service costs, (d) in the presence of a demand-independent service cost, a buyer should limit the number of competing suppliers under SA competition but impose no such limits under SS competition, and (e) a buyer can induce suppliers to provide higher service levels by selecting an appropriate allocation function. We illustrate the impact of these results through three example applications.
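One simple instance of an SA-type allocation function is to give each supplier a demand share proportional to its promised service quality. This toy function is an assumption for illustration, not the paper's specific functional form:

```python
def sa_allocation(qualities):
    """Supplier-allocation (SA) sketch: each supplier receives a
    demand share proportional to its promised service quality."""
    total = sum(qualities)
    return [q / total for q in qualities]

# three suppliers promising different quality levels
shares = sa_allocation([0.9, 0.6, 0.5])   # roughly [0.45, 0.30, 0.25]
```

Under such a function, promising higher quality wins a larger share of demand but also raises the supplier's quality-dependent cost, which is the tension the SA-versus-SS comparison analyzes.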

Proceedings ArticleDOI
Yuan Chen1, Subu Iyer1, Xue Liu1, Dejan Milojicic1, Akhil Sahai1 
11 Jun 2007
TL;DR: This paper presents an approach that combines performance modeling with performance profiling to create models that translate SLOs to lower-level resource requirements for each system involved in providing the service, eliminating the involvement of domain experts.
Abstract: In today's complex and highly dynamic computing environments, systems/services have to be constantly adjusted to meet service level agreements (SLAs) and to improve resource utilization, thus reducing operating cost. Traditional design of such systems usually involves domain experts who implicitly translate service level objectives (SLOs) specified in SLAs to system-level thresholds in an ad-hoc manner. In this paper, we present an approach that combines performance modeling with performance profiling to create models that translate SLOs to lower-level resource requirements for each system involved in providing the service. Using these models, the process of creating an efficient design of a system/service can be automated, eliminating the involvement of domain experts. We demonstrate that our approach is practical and that it can be applied to different applications and software architectures. Our experiments show that for a typical 3-tier e-commerce application in a virtualized environment the SLAs can be met while improving CPU utilization up to 3 times.

Journal ArticleDOI
TL;DR: In this paper, the authors discuss the opportunities and challenges for improving the performance of supply chain processes by coordinated application of inventory management and capacity management and illustrate their approach by a supplier company in the telecommunication and automotive industry (tier 2).

Journal ArticleDOI
TL;DR: A Lagrangian heuristic is developed to obtain near-optimal solutions with reasonable computational requirements for large problem instances of a two-stage supply chain that replenishes a single product at retailers.
Abstract: Most existing network design and facility location models have focused on the trade-off between the fixed costs of locating facilities and variable transportation costs between facilities and customers. However, operational performance measures such as service levels and lead times are what motivates customers to bring business to a company and should be considered in the design of a distribution network. While some previous work has considered lead times and safety stocks separately, they are closely related in practice, since safety stocks are often set relative to the distribution of demand over the lead time. In this paper we consider a two-stage supply chain with a production facility that replenishes a single product at retailers. The objective is to locate Distribution Centers (DCs) in the network such that the sum of the location and inventory (pipeline and safety stock) costs is minimized. The replenishment lead time at the DCs depends on the volume of flow through the DC. We require the DCs to c...

Journal ArticleDOI
TL;DR: The proposed approach allows decision makers to perform trade-off analysis among customer service levels, product cost, and inventory investment depending on their risk attitude and provides an alternative tool to evaluate and improve SC configuration decisions in an uncertain SC environment.

Journal ArticleDOI
TL;DR: In this paper, the authors explored the reasons for, and nature of, warehouse automation implementations in order to further this understanding, and found that the main reason for automation is to accommodate growth, with cost reduction and service improvement also being important.
Abstract: Purpose – Automated warehouse equipment is often regarded as being inflexible, and yet its use continues to rise even though markets are becoming increasingly volatile. The purpose of this paper is to explore the reasons for, and nature of, warehouse automation implementations in order to further this understanding.Design/methodology/approach – The research is based on semi‐structured interviews with some of the key stakeholders in automation projects. This is followed by a survey questionnaire to widen the findings.Findings – The research indicates that the main reason for automation is to accommodate growth, with cost reduction and service improvement also being important. The implementation process tends to be complex and lengthy, although most projects are controlled within the planned budget and timescale. There is, however, a real risk of disruption and service level failings during the operational start‐up of these projects, as well as some concerns about ongoing flexibility.Research limitations/im...

Journal ArticleDOI
TL;DR: In this article, a new lateral transshipment policy, called service level adjustment (SLA), is proposed to respond efficiently to customer demands; it considers the service level when deciding the lateral transshipment quantity.

Journal ArticleDOI
TL;DR: A model of consumer learning and choice behavior in response to uncertain service in the marketplace shows that asymmetry in consumer learning has a significant impact on the optimal service levels, market shares, and profits of the retailers.
Abstract: We develop a model of consumer learning and choice behavior in response to uncertain service in the marketplace. Learning could be asymmetric, that is, consumers may associate different weights with positive and negative experiences. Under this consumer model, we characterize the steady-state distribution of demand for retailers given that each retailer holds a constant in-stock service level. We then consider a noncooperative game in steady state between two retailers competing on the basis of their service levels. The demand distributions of retailers in this game are modeled using a multiplicative aggregate market-share model in which the mean demands are obtained from the steady-state results for individual purchases, but the model is simplified in other respects for tractability. Our model yields a unique pure strategy Nash equilibrium. We show that asymmetry in consumer learning has a significant impact on the optimal service levels, market shares, and profits of the retailers. When retailers have different costs, it also determines the extent of competitive advantage enjoyed by the lower-cost retailer.
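The asymmetric learning described above can be sketched as an exponential-smoothing update in which negative experiences carry a larger weight than positive ones. The weights and the update rule below are illustrative, not the paper's exact model:

```python
def update_perception(perceived: float, experience_good: bool,
                      w_pos: float = 0.1, w_neg: float = 0.3) -> float:
    """Asymmetric exponential learning: a negative experience
    (e.g. a stockout) is weighted more heavily than a positive one.
    The weights 0.1 / 0.3 are illustrative only."""
    outcome = 1.0 if experience_good else 0.0
    weight = w_pos if experience_good else w_neg
    return perceived + weight * (outcome - perceived)

p = update_perception(0.9, False)   # one stockout: 0.9 drops to ~0.63
```

With w_neg > w_pos, a single stockout erodes perceived service faster than repeated successes rebuild it, which is the mechanism behind the paper's finding that asymmetry shifts equilibrium service levels.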

Journal ArticleDOI
TL;DR: In this paper, the authors present a model for values-based sustainable service business grounded in the concept of values-based service quality, and present a methodology for supporting values-based sustainable service businesses.
Abstract: Purpose – The purpose of this research is to present a model for values-based sustainable service business grounded in the concept of values-based service quality. Design/methodology/approach – Base ...

Journal ArticleDOI
TL;DR: A modular, integrated, and computationally tractable method is proposed for the solution of the associated stochastic mixed-integer optimization problems containing joint probabilistic constraints with dependent right-hand side variables and turns out to be very efficient.
Abstract: We consider a supply chain operating in an uncertain environment: The customers' demand is characterized by a discrete probability distribution. A probabilistic programming approach is adopted for constructing an inventory-production-distribution plan over a multiperiod planning horizon. The plan does not allow the backlogging of the unsatisfied demand, and minimizes the costs of the supply chain while enabling it to reach a prescribed nonstockout service level. It is a strategic plan that hedges against undesirable outcomes, and that can be adjusted to account for possible favorable realizations of uncertain quantities. A modular, integrated, and computationally tractable method is proposed for the solution of the associated stochastic mixed-integer optimization problems containing joint probabilistic constraints with dependent right-hand side variables. The concept of p-efficiency is used to construct a finite number of demand trajectories, which in turn are employed to solve problems with joint probabilistic constraints. We complement this idea by designing a preordered set-based preprocessing algorithm that selects a subset of promising p-efficient demand trajectories. Finally, to solve the resulting disjunctive mixed-integer programming problem, we implement a special column-generation algorithm that limits the risk of congestion in the resources of the supply chain. The methodology is validated on an industrial problem faced by a large chemical supply chain and turns out to be very efficient: it finds a solution with a minimal integrality gap and provides substantial cost savings.

Journal ArticleDOI
TL;DR: In this article, the authors compared three inventory management models that rely on RFID data for tracking and dispatching of time-sensitive materials on a shop floor and showed that the desired level of system performance can be achieved by adjusting the values of the smoothing parameters.
Abstract: In this paper, inventory management of time-sensitive materials with RFID data is studied via simulation. Based on production data obtained from a manufacturing company, three inventory management models that rely on RFID data for tracking and dispatching of time-sensitive materials on a shop floor are presented. The complexity of the models ranges from statically set, fixed-baseline inventory models to dynamic, forecast-integrated inventory control schemes. This study compares the inventory models on the basis of service level, cost, inventory and waste reduction, and decision-making complexity. A comparative analysis of the models is presented in a simulation environment, which also demonstrates the overall benefits and effectiveness of RFID technologies in providing low-cost manufacturing solutions, reduced inventory levels, and lower overall waste. The forecast-integrated inventory model is developed based on a trend-adjusted exponential smoothing algorithm, with two smoothing parameters, α and β, used as coefficients for the average production demand and its trend, respectively. The study shows that the desired level of system performance can be achieved by adjusting the values of the smoothing parameters.
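The trend-adjusted exponential smoothing the authors build on is commonly known as Holt's method, with α smoothing the demand level and β its trend. A minimal sketch (the parameter values and demand series are illustrative, not the paper's data):

```python
def holt_forecast(series, alpha=0.4, beta=0.2):
    """Trend-adjusted (Holt) exponential smoothing: alpha smooths the
    level, beta the trend; returns the one-step-ahead forecast."""
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + trend

demand = [100, 104, 109, 113, 118, 124]   # hypothetical daily demand
forecast = holt_forecast(demand)          # about 126.9 with these parameters
```

Larger α and β make the forecast react faster to demand shifts at the cost of more noise, which is the performance knob the study tunes to hit a desired service level.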

Proceedings ArticleDOI
22 Oct 2007
TL;DR: The paper presents a novel methodology for designing autonomic QoS-aware resource managers that have the capability to predict the performance of the Grid components they manage and allocate resources in such a way that service level agreements are honored.
Abstract: As Grid Computing increasingly enters the commercial domain, performance and Quality of Service (QoS) issues are becoming a major concern. The inherent complexity, heterogeneity and dynamics of Grid computing environments pose some challenges in managing their capacity to ensure that QoS requirements are continuously met. In this paper, an approach to autonomic QoS-aware resource management in Grid computing based on online performance models is proposed. The paper presents a novel methodology for designing autonomic QoS-aware resource managers that have the capability to predict the performance of the Grid components they manage and allocate resources in such a way that service level agreements are honored. The goal is to make the Grid middleware self-configurable and adaptable to changes in the system environment and workload. The approach is subjected to an extensive experimental evaluation in the context of a real-world Grid environment and its effectiveness, practicality and performance are demonstrated.

Proceedings ArticleDOI
25 Jun 2007
TL;DR: Pathmap and the E2EProf toolkit successfully detect causal request paths and associated performance bottlenecks in the RUBiS eBay-like multi-tier Web application and in one of the datacenters of the industry partner, Delta Air Lines.
Abstract: Distributed systems are becoming increasingly complex, driven by the prevalent use of Web services, multi-tier architectures, and grid computing, where dynamic sets of components interact with each other across distributed and heterogeneous computing infrastructures. For these applications to be able to predictably and efficiently deliver services to end users, it is therefore critical to understand and control their runtime behavior. In a datacenter environment, for instance, understanding the end-to-end dynamic behavior of certain IT subsystems, from the time requests are made to when responses are generated and finally received, is a key prerequisite for improving application response, to provide required levels of performance, or to meet service level agreements (SLAs). The E2EProf toolkit enables the efficient and nonintrusive capture and analysis of end-to-end program behavior for complex enterprise applications. E2EProf permits an enterprise to recognize and analyze performance problems when they occur - online, to take corrective actions as soon as possible and wherever necessary along the paths currently taken by user requests - end-to-end, and to do so without the need to instrument applications - nonintrusively. Online analysis exploits a novel signal analysis algorithm, termed pathmap, which dynamically detects the causal paths taken by client requests through application and backend servers and annotates these paths with end-to-end latencies and with the contributions to these latencies from different path components. Thus, with pathmap, it is possible to dynamically identify the bottlenecks present in selected servers or services and to detect the abnormal or unusual performance behaviors indicative of potential problems or overloads.
Pathmap and the E2EProf toolkit successfully detect causal request paths and associated performance bottlenecks in the RUBiS eBay-like multi-tier Web application and in one of the datacenters of our industry partner, Delta Air Lines.

Book ChapterDOI
17 Sep 2007
TL;DR: This paper presents a QoS scheduler that uses SLAs to efficiently schedule advance reservations for computation services based on their flexibility, and introduces the scheduling algorithms, and shows experimentally that it is possible to use flexible advance reservations to meet specified QoS while improving resource utilisation.
Abstract: Utility computing enables the use of computational resources and services by consumers, with service obligations and expectations defined in Service Level Agreements (SLAs). Parallel applications and workflows can be executed across multiple sites to benefit from access to a wide range of resources and to respond to dynamic runtime requirements. A utility computing provider has the difficult role of ensuring that all current SLAs are provisioned, while concurrently forming new SLAs and providing multiple services to numerous consumers. Scheduling to satisfy SLAs can result in a low return from a provider's resources due to trading off Quality of Service (QoS) guarantees against utilisation. One technique is to employ advance reservations so that an SLA-aware scheduler can properly manage and schedule its resources. To improve system utilisation we exploit the principle that some consumers will be more flexible than others in relation to the starting or completion time, and that we can juggle the execution schedule right up until each execution starts. In this paper we present a QoS scheduler that uses SLAs to efficiently schedule advance reservations for computation services based on their flexibility. In our SLA model users can reduce or increase the flexibility of their QoS requirements over time according to their needs and resource provider policies. We introduce our scheduling algorithms, and show experimentally that it is possible to use flexible advance reservations to meet specified QoS while improving resource utilisation.

Proceedings ArticleDOI
07 Jul 2007
TL;DR: This paper proposes the use of Genetic Algorithms to generate inputs and configurations for service-oriented systems that cause SLA violations and has been implemented in a tool and applied to an audio processing workflow and to a service for chart generation.
Abstract: The diffusion of service oriented architectures introduces the need for novel testing approaches. On the one side, testing must be able to identify failures in the functionality provided by service. On the other side, it needs to identify cases in which the Service Level Agreement (SLA) negotiated between the service provider and the service consumer is not met. This would allow the developer to improve service performances, where needed, and the provider to avoid promising Quality of Service (QoS) levels that cannot be guaranteed. This paper proposes the use of Genetic Algorithms to generate inputs and configurations for service-oriented systems that cause SLA violations. The approach has been implemented in a tool and applied to an audio processing workflow and to a service for chart generation. In both cases, the approach was able to produce test data able to violate some QoS constraints.
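The idea of searching for SLA-violating inputs can be sketched with a toy genetic algorithm that evolves request parameters to maximize observed latency. Everything here (the cost model, the parameter bounds, the GA settings, and the SLA threshold) is hypothetical, standing in for the paper's tool and real service invocations:

```python
import random

SLA_MS = 200.0   # hypothetical response-time bound from the SLA

def response_time(payload_kb: int, concurrency: int) -> float:
    """Stand-in for invoking the real service under test; the cost
    model is purely illustrative."""
    return 5.0 + 0.8 * payload_kb + 1.5 * concurrency ** 1.3

def evolve(pop_size=20, generations=30, seed=42):
    """Evolve (payload, concurrency) pairs toward maximum latency."""
    rng = random.Random(seed)
    pop = [(rng.randint(1, 100), rng.randint(1, 50)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: response_time(*ind), reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            child = (a[0], b[1])                # one-point crossover
            if rng.random() < 0.3:              # small random mutation
                child = (max(1, min(100, child[0] + rng.randint(-5, 5))),
                         max(1, min(50, child[1] + rng.randint(-5, 5))))
            children.append(child)
        pop = parents + children
    best = max(pop, key=lambda ind: response_time(*ind))
    return best, response_time(*best)

best, latency = evolve()
violation_found = latency > SLA_MS
```

Because selection keeps the fittest half each generation, the best latency found never decreases; the real tool replaces the toy cost model with actual service invocations and measured QoS values.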

Journal ArticleDOI
TL;DR: An optimal operating policy is derived and it is proved that the expected overall costs of such a production system with backlogging permitted is less than or equal to that of the same model without backlogging.

Journal ArticleDOI
TL;DR: An optimization model is introduced that explicitly captures the interdependency between network design and inventory stocking decisions and shows that the integrated approach can provide significant cost savings over the decoupled approach, shifting the whole efficient frontier curve between cost and service level to superior regions.
Abstract: We study the integrated logistics network design and inventory stocking problem as characterized by the interdependency of the design and stocking decisions in service parts logistics. These two sets of decisions are usually considered sequentially in practice, and the associated problems are tackled separately in the research literature. The overall problem is typically further complicated due to time-based service constraints that provide lower limits on the percentage of demand satisfied within specified time windows. We introduce an optimization model that explicitly captures the interdependency between network design (location of facilities, and allocation of demands to facilities) and inventory stocking decisions (stock levels and their corresponding stochastic fill rates), and present computational results from our extensive experiments that investigate the effects of several factors including demand levels, time-based service levels and costs. We show that the integrated approach can provide significant cost savings over the decoupled approach, shifting the whole efficient frontier curve between cost and service level to superior regions.
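Why the decoupled approach can lose is easy to see on a toy instance (all numbers hypothetical, not from the paper): if safety stock grows with the square root of demand served at a facility (risk pooling), a design chosen on fixed-plus-transport cost alone may open too many facilities, while the integrated objective consolidates them.

```python
from itertools import combinations
from math import sqrt

# Toy instance: 3 candidate facilities, 4 demand points of 25 units each.
FIXED = {"A": 10, "B": 10, "C": 10}
SHIP = {"A": [1, 20, 20, 20], "B": [20, 1, 20, 20], "C": [20, 20, 1, 1]}
DEMAND = [25, 25, 25, 25]

def design_cost(open_set):
    """Fixed facility cost plus cheapest-facility transportation."""
    return (sum(FIXED[f] for f in open_set)
            + sum(min(SHIP[f][d] for f in open_set) for d in range(4)))

def inventory_cost(open_set):
    """Safety stock cost, concave in served demand (risk pooling)."""
    served = {f: 0 for f in open_set}
    for d in range(4):
        served[min(open_set, key=lambda f: SHIP[f][d])] += DEMAND[d]
    return sum(5 * sqrt(q) for q in served.values())

subsets = [s for r in (1, 2, 3) for s in combinations("ABC", r)]
# Decoupled: pick the network first, then pay whatever stock it implies.
decoupled = min(subsets, key=design_cost)
decoupled_total = design_cost(decoupled) + inventory_cost(decoupled)
# Integrated: optimise both cost components jointly.
integrated = min(subsets, key=lambda s: design_cost(s) + inventory_cost(s))
integrated_total = design_cost(integrated) + inventory_cost(integrated)
print(decoupled, round(decoupled_total, 1), integrated, round(integrated_total, 1))
```

Here the decoupled design opens all three facilities, while the integrated objective prefers a single pooled facility at a lower total cost, the same qualitative effect the paper quantifies at scale.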

Journal ArticleDOI
TL;DR: This paper presents a model formulation that minimizes the setup and holding costs with respect to a constraint on the probability that the inventory at the end of any period does not become negative and, alternatively, to a fill rate constraint.
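The two service measures named here, a constraint on the probability that end-of-period inventory stays non-negative versus a fill rate constraint, lead to different stock levels even for the same 95% target. A rough sketch under a normal demand assumption (numbers hypothetical, not from the paper):

```python
from math import erf, exp, pi, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def loss(z):
    """Standard normal loss function E[(Z - z)^+]."""
    return exp(-z * z / 2) / sqrt(2 * pi) - z * (1 - phi(z))

def bisect(f, target, lo=-5.0, hi=10.0, increasing=True):
    """Smallest z putting f(z) on the required side of target."""
    for _ in range(60):
        mid = (lo + hi) / 2
        ok = f(mid) >= target if increasing else f(mid) <= target
        lo, hi = (lo, mid) if ok else (mid, hi)
    return hi

mu, sigma = 100, 20  # per-period demand: mean and standard deviation
# Non-stockout probability constraint: P(demand <= S) >= 0.95
S_alpha = mu + bisect(phi, 0.95) * sigma
# Fill-rate constraint: expected shortage sigma*L(z) <= (1 - 0.95)*mu
S_beta = mu + bisect(loss, 0.05 * mu / sigma, increasing=False) * sigma
print(round(S_alpha, 1), round(S_beta, 1))
```

For these numbers the non-stockout formulation requires roughly 133 units while the 95% fill rate needs only about 107, which is why the paper treats the two constraints as distinct model variants.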

Book ChapterDOI
01 Jan 2007
TL;DR: This chapter presents a framework that is developed to support the monitoring of service level agreements (SLAs), and an extension of WS-Agreement that uses an event calculus–based language, called EC-Assertion, for the specification of the service guarantee terms in a service level agreement that need to be monitored at runtime.
Abstract: In this chapter, we present a framework that we have developed to support the monitoring of service level agreements (SLAs). The agreements that can be monitored by this framework are expressed in an extension of WS-Agreement that we propose. The main characteristic of the proposed extension is that it uses an event calculus–based language, called EC-Assertion, for the specification of the service guarantee terms in a service level agreement that need to be monitored at runtime. The use of EC-Assertion for specifying service guarantee terms provides a well-defined semantics to the specification of such terms and a formal reasoning framework for assessing their satisfiability. The chapter describes also an implementation of the framework and the results of a set of experiments that we have conducted to evaluate it.
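EC-Assertion itself is an event calculus language; as a loose illustration of runtime monitoring (not EC-Assertion syntax), a guarantee term such as "every request is answered within 3 seconds" can be checked by replaying the service's event log. Event names and timings below are invented.

```python
events = [
    ("request",  "r1", 0.0),
    ("response", "r1", 1.2),
    ("request",  "r2", 2.0),
    ("response", "r2", 6.5),  # exceeds the 3 s guarantee
]

def monitor(events, max_latency=3.0):
    """Replay an event log and report guarantee-term violations."""
    pending, violations = {}, []
    for kind, rid, t in events:
        if kind == "request":
            pending[rid] = t                       # event initiates a fluent
        elif kind == "response":
            if t - pending.pop(rid) > max_latency:  # fluent terminated late
                violations.append(rid)
    return violations

print(monitor(events))  # ['r2']
```

The framework's contribution is doing this with formally defined event calculus semantics, so that satisfiability of guarantee terms can be reasoned about rather than hard-coded as above.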

Proceedings ArticleDOI
07 Feb 2007
TL;DR: This work proposes a highly available dynamic deployment infrastructure, HAND, based on the Java Web services core of Globus toolkit 4.0, which provides capability, availability, and extensibility for dynamic deployment of Java Web Services in dynamic grid environments.
Abstract: Grid computing is becoming more and more attractive for coordinating large-scale heterogeneous resource sharing and problem solving. Of particular interest for effective grid computing is a software provisioning mechanism. We propose a highly available dynamic deployment infrastructure, HAND, based on the Java Web services core of Globus toolkit 4. HAND provides capability, availability, and extensibility for dynamic deployment of Java Web services in dynamic grid environments. We identify the factors that impact dynamic deployment in static and dynamic environments. We also present the design, analysis, implementation, and evaluation of two different approaches to dynamic deployment (service level and container level), and examine the performance of alternative data transfer protocols for service implementations. Our results demonstrate that HAND can deliver significantly improved availability and performance relative to other approaches.
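Globus specifics aside, the availability trade-off between the two deployment approaches can be caricatured in a few lines (a toy model, not the GT4 API): service-level deployment takes only the target service offline, while container-level deployment restarts every hosted service.

```python
class Container:
    """Toy hosting container tracking per-service downtime episodes."""

    def __init__(self):
        self.services, self.downtime = {}, {}

    def deploy_service(self, name, version):
        """Service-level deployment: only this service goes offline."""
        self.downtime[name] = self.downtime.get(name, 0) + 1
        self.services[name] = version

    def deploy_container(self, name, version):
        """Container-level deployment: the restart hits every service."""
        self.services[name] = version
        for s in self.services:
            self.downtime[s] = self.downtime.get(s, 0) + 1

c = Container()
c.deploy_service("index", "v1")
c.deploy_service("search", "v1")
c.deploy_container("search", "v2")  # 'index' also incurs downtime
print(c.downtime)  # {'index': 2, 'search': 2}
```

This is the intuition behind HAND's availability results: the more deployments can be confined to the service level, the less unrelated services suffer.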