
Showing papers on "Service level published in 2008"


Journal ArticleDOI
TL;DR: A manufacturer’s problem of managing his direct online sales channel together with an independently owned bricks-and-mortar retail channel is studied, when the channels compete in service.
Abstract: We study a manufacturer’s problem of managing his direct online sales channel together with an independently owned bricks-and-mortar retail channel, when the channels compete in service. We incorporate a detailed consumer channel choice model in which the demand faced in each channel depends on the service levels of both channels as well as the consumers’ valuation of the product and shopping experience. The direct channel’s service is measured by the delivery lead time for the product; the retail channel’s service is measured by product availability. We identify optimal dual channel strategies that depend on the channel environment described by factors such as the cost of managing a direct channel, retailer inconvenience, and some product characteristics. We also determine when the manufacturer should establish a direct channel or a retail channel if he is already selling through one of these channels. Finally, we conduct a sequence of controlled experiments with human subjects to investigate whether our model makes reasonable predictions of human behavior. We determine that the model accurately predicts the direction of changes in the subjects’ decisions, as well as their channel strategies in response to the changes in the channel environment. These observations suggest that the model can be used in designing channel strategies for an actual dual channel environment.

402 citations


Journal ArticleDOI
TL;DR: In this article, the authors developed a price-service competition model of two supply chains to investigate the optimal decisions of players under demand uncertainty, and analyzed the effects of the retailers' risk sensitivity on the players' optimal strategies.

322 citations


Journal ArticleDOI
TL;DR: An ambulance location optimization model that minimizes the number of ambulances needed to provide a specified service level and considers response time to be composed of a random delay (prior to travel to the scene) plus a random travel time is described.
Abstract: We describe an ambulance location optimization model that minimizes the number of ambulances needed to provide a specified service level. The model measures service level as the fraction of calls reached within a given time standard and considers response time to be composed of a random delay (prior to travel to the scene) plus a random travel time. In addition to modeling the uncertainty in the delay and in the travel time, we incorporate uncertainty in the ambulance availability in determining the response time. Models that do not account for the uncertainty in all three of these components may overestimate the possible service level for a given number of ambulances and underestimate the number of ambulances needed to provide a specified service level. By explicitly modeling the randomness in the ambulance availability and in the delays and the travel times, we arrive at a more realistic ambulance location model. Our model is tractable enough to be solved with general-purpose optimization solvers for cities with populations around one million. We illustrate the use of the model using actual data from Edmonton.
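The paper's service-level definition (fraction of calls reached within a time standard, with response time decomposed into a random pre-travel delay plus a random travel time, and uncertain ambulance availability) can be illustrated with a Monte Carlo estimate. This is an illustrative sketch with assumed distributions and parameters, not the paper's optimization model:

```python
import random

def estimate_service_level(n_calls, time_standard, p_available,
                           delay_dist, travel_dist, seed=0):
    """Monte Carlo estimate of the fraction of calls reached within
    the time standard. Response time = random pre-travel delay plus
    random travel time; with probability 1 - p_available no ambulance
    is free and the call misses the standard."""
    rng = random.Random(seed)
    reached = 0
    for _ in range(n_calls):
        if rng.random() > p_available:   # no ambulance available
            continue
        response = delay_dist(rng) + travel_dist(rng)
        if response <= time_standard:
            reached += 1
    return reached / n_calls

# Illustrative (assumed) inputs: 9-minute standard, 90% availability,
# uniform dispatch delay, exponential travel time with mean 5 minutes.
level = estimate_service_level(
    n_calls=100_000, time_standard=9.0, p_available=0.9,
    delay_dist=lambda r: r.uniform(1.0, 3.0),
    travel_dist=lambda r: r.expovariate(1 / 5.0),
)
```

As the abstract notes, dropping any one of the three random components (setting `p_available=1`, or a zero delay) inflates the estimated service level for the same fleet size.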

245 citations


Journal ArticleDOI
TL;DR: In this paper, a supply chain distribution network design model is developed to select the optimum numbers, locations and capacity levels of plants and warehouses to deliver products to retailers at the least cost while satisfying the desired service level to retailers.
Abstract: A supply chain (SC) distribution network design model is developed in this paper. The goal of the model is to select the optimum numbers, locations, and capacity levels of plants and warehouses to deliver products to retailers at the least cost while satisfying the desired service level to retailers. A maximal covering approach is used in the statement of the service level. The model distinguishes itself from other models in this field by the modeling approach used. Because of the somewhat imprecise nature of retailers’ demands and decision makers’ (DM) aspiration levels for the goals, a fuzzy modeling approach is used. Additionally, a novel and generic interactive fuzzy goal programming (IFGP)-based solution approach is proposed to determine the preferred compromise solution. To explore the viability of the proposed model and the solution approach, computational experiments are performed on realistic-scale case problems.

225 citations


Journal ArticleDOI
TL;DR: In this article, the authors present new guidelines for the design and control of intermittent water distribution systems in developing countries, which are driven by a modified set of design objectives to be met at least cost.
Abstract: Urban areas of developing countries are facing increasing water scarcity, and this problem may be further aggravated by rapid changes in the hydro-environment at different scales, such as those of climate and land cover. Due to water scarcity and limitations to the development of new water resources, it is prudent to shift from the traditional ‘supply based management’ to a ‘demand management’ paradigm. Demand management focuses on measures that make better and more efficient use of limited supplies, often at a level significantly below standard service levels. This paper particularly focuses on intermittent water supplies in the cities of developing countries. Intermittent water supplies are often adopted due to water scarcity and, if not planned properly, result in inequities in water delivery to consumers and poor levels of service. It is therefore important to recognise these realities when designing and operating such networks. The standard tools available for the design of water supply systems often assume a continuous, unlimited supply in which the supplied water amount is limited only by the demand, making them unsuitable for designing intermittent supplies that are governed by severely limited water availability. This paper presents details of new guidelines developed for the design and control of intermittent water distribution systems in developing countries. These include a modified network analysis simulation coupled with an optimal design tool. The guidelines are driven by a modified set of design objectives to be met at least cost. These objectives are equity in supply and people driven levels of service (PDLS), expressed in terms of four design parameters: duration of the supply; timings of the supply; pressure at the outlet (or flow-rate at the outlet); and others such as the type of connection required and the locations of connections (in particular for standpipes).
All four parameters are calculated using methods and techniques that recognise the relationship between outflow at a water connection and the pressure experienced at that connection. The paper presents a case study demonstrating that the new guidelines can provide an equitable and acceptable level of service throughout the design horizon of the project.

221 citations


Journal ArticleDOI
TL;DR: A bi-objective model is set up for the distribution network of a three-echelon supply chain, with two objective functions: minimizing costs, and minimizing the sum of backorders and surpluses of products in all periods.

213 citations


Journal ArticleDOI
TL;DR: The aim of this work is to analyze the quality, satisfaction, and loyalty sequence in the logistic service delivery context, with the purpose of considering the role of information and communication technologies (ICT) in this chain of effects.
Abstract: Purpose – Nowadays, logistics research focuses on the ability of logistics to deliver a quality service and generate greater satisfaction with the delivered service. Therefore, the aim of this work is to analyze the quality, satisfaction, and loyalty sequence in the logistic service delivery context, with the purpose of considering the role of information and communication technologies (ICT) in this chain of effects. Design/methodology/approach – After reviewing the different approaches given by the literature, SEM analysis is used to contrast the hypotheses for the analyzed constructs in the presence of high/low ICT level. A questionnaire-based personal survey was conducted among manufacturers. The study collected data from 194 companies. Structural equation modeling was applied to these data to test relationships among the variables in the study. Findings – The reliability and validity tests show satisfactory results. The conclusions confirm this chain of consequences, and emphasize the incidence of ...

194 citations


Book
25 Sep 2008
TL;DR: In this article, the authors introduce freight distribution logistics, discuss some statistics about future trends in this area, and suggest that even though there has been a slight increase in the use of rail and water transportation modes, there is room for more efficient use of the road mode, mainly to avoid increasing air pollution (fossil-fuel combustion represents about 80% of the factors that jeopardize air quality).
Abstract: In this chapter, we introduce freight distribution logistics, discussing some statistics about future trends in this area. Basically, it appears that, even though there has been a slight increase in the use of rail and water transportation modes, there is room for more efficient use of the road mode, mainly to avoid increasing air pollution (fossil-fuel combustion represents about 80% of the factors that jeopardize air quality). In order to reach an equilibrium among different transportation modes, the entire supply chain has to be studied to install the appropriate service capacity and to define effective operational procedures that optimize system performance. 1.1 Freight Distribution Logistic The way governments and the economic world now look at transportation problems in general, and at distribution logistics in particular, has changed in recent years. Europe, in particular, adopted the so-called European Sustainable Development Strategy (SDS) at the European Council held in Gothenburg in June 2001. That was the opportunity to set out a coherent approach to sustainable development, renewed in June 2006 to reaffirm the aim of continuous improvement of quality of life and economic growth through the efficient use of resources, and to promote the ecological and social innovation potentials of the economy. Recently, in December 2007, the European Council insisted on the need to give priority to implementation measures. Paragraph 56 of the Commission Progress Report of 22 October 2007 reads “. . . The EU must continue to work to move towards more sustainable trans-

186 citations


Journal ArticleDOI
TL;DR: An augmented SERVQUAL instrument that was used to measure private patients' service expectations and perceptions emerged from the study, and the "reliability and fair and equitable treatment" factor was found to be the most important healthcare service quality dimension.
Abstract: Purpose – The paper aims to focus on an augmented SERVQUAL instrument that was used to measure private patients' service expectations and perceptions. Design/methodology/approach – A questionnaire was administered to 750 private patients, and 34 per cent responded. Findings – A new service quality instrument called PRIVHEALTHQUAL emerged from the study, based on factor and reliability analysis. The “reliability and fair and equitable treatment” factor was found to be the most important healthcare service quality dimension. Originality/value – Adds to the existing body of research on service quality and demonstrates that SERVQUAL is not a generic service quality measure for all industries.

177 citations


Patent
06 Mar 2008
TL;DR: In this article, the future demand for service in the computer server system is forecast based on historical data, and the mapping of virtual machines to physical machines is updated based on the forecast of future demand.
Abstract: Historical data is measured for a computer server system. Future demand for service in the computer server system is forecast based on the historical data, and the mapping of virtual machines to physical machines is updated based on the forecast of the future demand. Measurement, forecasting, and placement modules can be employed.
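The measure–forecast–place pipeline described in this patent abstract can be sketched, assuming a trailing moving-average forecast and first-fit-decreasing placement (both are illustrative choices; the patent does not specify them, and all names and numbers below are hypothetical):

```python
def forecast_demand(history, window=3):
    """Per-VM demand forecast: trailing moving average (assumed model)."""
    return {vm: sum(obs[-window:]) / len(obs[-window:])
            for vm, obs in history.items()}

def place_vms(forecast, machine_capacity):
    """First-fit decreasing: assign VMs to machines, opening a new
    machine only when no existing one has room for the forecast demand."""
    free = []        # remaining capacity per physical machine
    mapping = {}     # vm -> machine index
    for vm, demand in sorted(forecast.items(), key=lambda kv: -kv[1]):
        for i, cap in enumerate(free):
            if demand <= cap:
                free[i] -= demand
                mapping[vm] = i
                break
        else:
            free.append(machine_capacity - demand)
            mapping[vm] = len(free) - 1
    return mapping, len(free)

# Hypothetical measurements (CPU shares per period):
history = {"vm1": [30, 40, 50], "vm2": [60, 60, 60], "vm3": [10, 20, 30]}
mapping, n_machines = place_vms(forecast_demand(history), machine_capacity=100)
```

Here vm1 forecasts to 40, vm2 to 60, vm3 to 20, so vm2 and vm1 share one machine and vm3 opens a second, illustrating how re-forecasting each period drives remapping.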

173 citations


Journal ArticleDOI
TL;DR: In this article, the authors provide a synthesis of the evidence on the patronage growth performance of bus improvement measures in urban settings. They summarize experience in Europe, North America, and Australasia, focusing on service improvement measures such as network structure and service levels, bus priority measures, vehicles and stop infrastructure, fares and ticketing systems, passenger information and marketing, personal safety and security, and the synergy effects of combined measures.

Proceedings ArticleDOI
02 Jun 2008
TL;DR: An automated capacity and workload management system that integrates multiple resource controllers at three different scopes and time scales is described and results confirm that such an integrated solution ensures efficient and effective use of data center resources while reducing service level violations for high priority applications.
Abstract: Recent advances in hardware and software virtualization offer unprecedented management capabilities for the mapping of virtual resources to physical resources. It is highly desirable to further create a "service hosting abstraction" that allows application owners to focus on service level objectives (SLOs) for their applications. This calls for a resource management solution that achieves the SLOs for many applications in response to changing data center conditions and hides the complexity from both application owners and data center operators. In this paper, we describe an automated capacity and workload management system that integrates multiple resource controllers at three different scopes and time scales. Simulation and experimental results confirm that such an integrated solution ensures efficient and effective use of data center resources while reducing service level violations for high priority applications.

Patent
James Michael Ferris1
26 Nov 2008
TL;DR: In this paper, the authors propose a system and methods for service level backup using a re-cloud network, where one or more users can accept service based on a service level agreement (SLA).
Abstract: Embodiments relate to systems and methods for service level backup using a re-cloud network. A set of operating clouds can support one or more users. In embodiments, the one or more users can accept service based on a service level agreement (SLA), according to which the user is assured a certain level of service or support from the cloud, such as a minimum amount of uptime, a minimum amount of processor cycles or network bandwidth, or other guaranteed parameters of the usage of their virtual machine. In embodiments, the set of operating clouds in which the user's service is supported can be configured to communicate a service level augmentation request to a backup cloud to request additional resources to maintain the delivery of SLA-specified support to one or more users. In embodiments, the backup cloud network can in turn be nested with other backup clouds or resources.

Journal ArticleDOI
TL;DR: This paper proposes a statistical technique for run-time monitoring of soft contracts, which consist of a probability distribution for the considered QoS parameter and shows how to compose such contracts, to yield a global probabilistic contract for the orchestration.
Abstract: Service level agreements (SLAs), or contracts, have an important role in Web services. They define the obligations and rights between the provider of a Web service and its client, about the function and the quality of the service (QoS). For composite services like orchestrations, contracts are deduced by a process called QoS contract composition, based on contracts established between the orchestration and the called Web services. Contracts are typically stated as hard guarantees (e.g., response time always less than 5 msec). Using hard bounds is not realistic, however, and more statistical approaches are needed. In this paper we propose using soft probabilistic contracts instead, which consist of a probability distribution for the considered QoS parameter; in this paper, we focus on timing. We show how to compose such contracts, to yield a global probabilistic contract for the orchestration. Our approach is implemented by the TOrQuE tool. Experiments on TOrQuE show that overly pessimistic contracts can be avoided and significant room for safe overbooking exists. An essential component of SLA management is then the continuous monitoring of the performance of called Web services, to check for violations of the SLA. We propose a statistical technique for run-time monitoring of soft contracts.
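The idea of composing soft probabilistic contracts can be made concrete with a Monte Carlo sketch: sequential calls add response times, parallel calls take the maximum, and the orchestration's promised bound is a quantile of the composed distribution. The latency models and numbers below are assumed for illustration; this is not the TOrQuE implementation:

```python
import random

def compose_sequential(samplers):
    """Response-time sampler for services invoked one after another."""
    return lambda rng: sum(s(rng) for s in samplers)

def compose_parallel(samplers):
    """Response-time sampler for services invoked concurrently."""
    return lambda rng: max(s(rng) for s in samplers)

def soft_contract_bound(sampler, prob, n=200_000, seed=1):
    """Empirical quantile of the composed distribution: the latency the
    orchestration can promise with probability `prob`."""
    rng = random.Random(seed)
    xs = sorted(sampler(rng) for _ in range(n))
    return xs[int(prob * n)]

# Assumed per-service latency models (exponential, means in ms):
svc_a = lambda r: r.expovariate(1 / 10.0)
svc_b = lambda r: r.expovariate(1 / 20.0)
orchestration = compose_sequential([svc_a, svc_b])

bound_95 = soft_contract_bound(orchestration, 0.95)
```

A 95th-percentile bound like this is far tighter than summing the per-service hard bounds, which is the "overly pessimistic contracts can be avoided" point in the abstract.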

Journal ArticleDOI
01 Dec 2008
TL;DR: The article describes selected formalisms of the ContractLog KR and their adequacy for automated SLA management and presents results of experiments and examples from common industry use cases to demonstrate the expressiveness of the language and the scalability of the approach.
Abstract: Outsourcing of complex IT infrastructure to IT service providers has increased substantially during the past years. IT service providers must be able to fulfil their service-quality commitments based upon predefined Service Level Agreements (SLAs) with the service customer. They need to manage, execute and maintain thousands of SLAs for different customers and different types of services, which requires new levels of flexibility and automation not available with the current technology. The complexity of contractual logic in SLAs requires new forms of knowledge representation to automatically draw inferences and execute contractual agreements. A logic-based approach provides several advantages, including automated rule chaining allowing for compact knowledge representation, as well as flexibility to adapt to rapidly changing business requirements. We suggest logical formalisms for the representation and enforcement of SLA rules and describe a proof-of-concept implementation. The article describes selected formalisms of the ContractLog KR and their adequacy for automated SLA management, and presents results of experiments and examples from common industry use cases to demonstrate the expressiveness of the language and the scalability of the approach.

Journal ArticleDOI
TL;DR: The proposed single-class staffing (SCS) rule and ITP control are approximately optimal under various problem formulations and model assumptions, and it is numerically demonstrated that they perform well even for relatively small systems.
Abstract: We study large-scale service systems with multiple customer classes and many statistically identical servers. The following question is addressed: How many servers are required (staffing) and how does one match them with customers (control) to minimize staffing cost, subject to class-level quality-of-service constraints? We tackle this question by characterizing scheduling and staffing schemes that are asymptotically optimal in the limit, as system load grows to infinity. The asymptotic regimes considered are consistent with the efficiency-driven (ED), quality-driven (QD), and quality-and-efficiency-driven (QED) regimes, first introduced in the context of a single-class service system. Our main findings are as follows: (a) Decoupling of staffing and control, namely, (i) staffing disregards the multiclass nature of the system and is analogous to the staffing of a single-class system with the same aggregate demand and a single global quality-of-service constraint, and (ii) class-level service differentiation is obtained by using a simple idle-server-based threshold-priority (ITP) control (with state-independent thresholds); and (b) robustness of the staffing and control rules: our proposed single-class staffing (SCS) rule and ITP control are approximately optimal under various problem formulations and model assumptions. Particularly, although our solution is shown to be asymptotically optimal for large systems, we numerically demonstrate that it performs well also for relatively small systems.
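The QED regime referenced in this abstract is classically associated with square-root staffing, where the server count is the offered load plus a safety margin proportional to its square root. A minimal sketch of that heuristic (not the paper's SCS rule itself; the numbers are illustrative):

```python
import math

def square_root_staffing(arrival_rate, mean_service_time, beta):
    """Square-root staffing: offered load R plus a safety margin
    beta * sqrt(R); a larger beta buys better quality of service
    at higher staffing cost."""
    offered_load = arrival_rate * mean_service_time  # R, in Erlangs
    return math.ceil(offered_load + beta * math.sqrt(offered_load))

# Assumed numbers: 100 customers/hour, 1-hour mean service, beta = 0.5.
servers = square_root_staffing(100.0, 1.0, 0.5)  # R = 100, margin = 5
```

The square-root margin is what makes the regime "quality-and-efficiency-driven": utilization approaches 1 as load grows, yet waiting probabilities stay bounded away from both 0 and 1.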

Proceedings ArticleDOI
02 Jun 2008
TL;DR: This paper presents a bilateral protocol for SLA negotiation using the alternate offers mechanism wherein a party is able to respond to an offer by modifying some of its terms to generate a counter offer.
Abstract: Service level agreements (SLAs) between grid users and providers have been proposed as mechanisms for ensuring that the users' quality of service (QoS) requirements are met, and that the provider is able to realise utility from its infrastructure. This paper presents a bilateral protocol for SLA negotiation using the alternate offers mechanism wherein a party is able to respond to an offer by modifying some of its terms to generate a counter offer. We apply this protocol to the negotiation between a resource broker and a provider for advance reservation of compute nodes, and implement and evaluate it on a real grid system.
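The alternate-offers mechanism can be sketched for a single negotiated term such as price, where each party either accepts the standing offer or concedes a fixed step to form a counter-offer. The concession strategy and numbers are assumed for illustration, not taken from the paper:

```python
def alternate_offers(broker_bid, provider_ask, broker_step, provider_step,
                     max_rounds=20):
    """Bilateral alternate-offers negotiation over one term (price).
    Each round, the receiving party either accepts the standing offer
    or counters by conceding a fixed step toward the other side.
    Returns the agreed price, or None if no agreement is reached."""
    for _ in range(max_rounds):
        if provider_ask <= broker_bid:     # provider accepts broker's bid
            return broker_bid
        provider_ask -= provider_step      # provider's counter-offer
        if provider_ask <= broker_bid:     # broker accepts the counter
            return provider_ask
        broker_bid += broker_step          # broker's counter-offer
    return None

price = alternate_offers(broker_bid=50.0, provider_ask=100.0,
                         broker_step=10.0, provider_step=5.0)
```

With these assumed steps the parties meet at 80 after four rounds; in the paper's setting the offers would carry full SLA terms (nodes, start time, duration), not just a price.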

Book ChapterDOI
14 Oct 2008
TL;DR: This work proposes a model-driven approach, which automatically transforms a design model of service composition into an analysis model, which then feeds a probabilistic model checker for quality prediction, and developed a prototype tool called ATOP.
Abstract: The problem of composing services to deliver integrated business solutions has been widely studied in recent years. Besides addressing functional requirements, service compositions should also provide agreed service levels. Our goal is to support model-based analysis of service compositions, with a focus on the assessment of non-functional quality attributes, namely performance and reliability. We propose a model-driven approach, which automatically transforms a design model of a service composition into an analysis model, which then feeds a probabilistic model checker for quality prediction. To bring this approach to fruition, we developed a prototype tool called ATOP, and we demonstrate its use on a simple case study.

01 Jan 2008
TL;DR: In this article, the authors focus on new types of 3D compact storage systems, where each unit load rests on a shuttle which can move in horizontal and depth directions, in cooperation with shuttles of other unit loads.
Abstract: Warehouses are important nodes in supply chains. They decouple supply from demand in time, assortment, quantity, and space. By doing so, economies of scale can be achieved in transport, as warehouses allow transport flows to be regrouped, leading to lower cost. They also allow postponement of value addition, increasing service levels at lower inventory levels. Warehouses are particularly needed in densely populated areas, close to where demand is generated and where labor is available. Warehouses require much space to realize economies of scale. Warehouse buildings are often quite large, more than 10,000 m² of built space is common, and much infrastructure outside the building is needed. Unfortunately, in many urbanized areas, space for such large facilities has become scarce. In order to address this issue, enterprises are moving toward next-generation storage systems, namely “three-dimensional (3D) compact storage systems”. These systems are designed to save floor space and labor costs, and to increase the reliability of the order picking process. Several types of 3D compact storage systems exist, with different handling systems (like S/R machines, conveyors, shuttles, or elevators) taking care of the horizontal, vertical and depth movements. This dissertation focuses on new types of 3D compact storage systems, in particular live-cube storage systems, where each unit load rests on a shuttle which can move in horizontal and depth directions, in cooperation with the shuttles of other unit loads. In such a system, at least one empty storage slot per level is required. One of the key performance measures for compact storage systems, which is also the focus of this dissertation, is response time. We analyze these systems at two decision-making levels. At the tactical (design) decision level, we focus on the rack-dimensioning problem aiming at short response times.
At the operational decision level, we investigate storage assignment and retrieval sequencing problems to shorten the response time.

Journal ArticleDOI
TL;DR: In this article, the authors examine the nature of optimal inventory policies in a system where a retailer manages substitutable products and show that the optimal inventory levels of the two items can be computed easily and follow what they refer to as partially decoupled policies, i.e., base stock policies that are not state dependent.
Abstract: In this paper, we examine the nature of optimal inventory policies in a system where a retailer manages substitutable products. We first consider a system with two products 1 and 2 whose total demand is D and individual demands are negatively correlated. A fixed proportion of the unsatisfied customers for an item will purchase the other item if it is available in inventory. For the single-period case, we show that the optimal inventory levels of the two items can be computed easily and follow what we refer to as “partially decoupled” policies, i.e., base stock policies that are not state dependent, in certain critical regions of interest, both when D is known and when it is random. Furthermore, we show that such a partially decoupled base-stock policy is optimal even in a multiperiod version of the problem for known D for a wide range of parameter values, and in an N-product single-period model under some restrictive conditions. Using a numerical study, we show that heuristics based on the decoupled inventory policy perform well in conditions more general than the ones assumed to obtain the analytical results. The analytical and numerical results suggest that the approach presented here is most valuable in retail settings for product categories where the level of substitution between items in a category is not high, demand variation at the aggregate level is not high, and service levels or newsvendor ratios are high.
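The newsvendor ratios mentioned at the end of this abstract can be made concrete: in single-period models, the base-stock level sits at the critical-ratio quantile of demand. A minimal single-item sketch, ignoring substitution and assuming normal demand with illustrative costs:

```python
from statistics import NormalDist

def newsvendor_base_stock(mean_demand, std_demand, underage_cost, overage_cost):
    """Single-period base-stock level: the critical-ratio quantile of
    (assumed normal) demand. Substitution effects are ignored."""
    critical_ratio = underage_cost / (underage_cost + overage_cost)
    return NormalDist(mean_demand, std_demand).inv_cdf(critical_ratio)

# Assumed numbers: mean 100, std 20, shortage cost 9, holding cost 1.
S = newsvendor_base_stock(100.0, 20.0, underage_cost=9.0, overage_cost=1.0)
# critical ratio 0.9, so S is roughly mean + 1.28 * std
```

A high critical ratio (here 0.9) corresponds to the "newsvendor ratios are high" condition under which the paper finds its partially decoupled policies most valuable.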

Journal ArticleDOI
TL;DR: It is shown that the supplier faces more inventory risk when the retailer has private service-level information, and the optimality of a cutoff level policy is established, which allows the supplier to assume all or none of the inventory risk.
Abstract: We study the important problem of how a supplier should optimally share the consequences of demand uncertainty (i.e., the cost of inventory excesses and shortages) with a retailer in a two-level supply chain facing a finite planning horizon. In particular, we characterize a multiperiod contract form, the promised lead-time contract, that reduces the supplier's risk from demand uncertainty and the retailer's risk from uncertain inventory availability. Under the contract terms, the supplier guarantees on-time delivery of complete orders of any size after the promised lead time. We characterize the optimal promised lead time and the corresponding payments that the supplier should offer to minimize her expected inventory cost, while ensuring the retailer's participation. In such a supply chain, the retailer often holds private information about his shortage cost (or his service level to end customers). Hence, to understand the impact of the promised lead-time contract on the supplier's and the retailer's performance, we study the system under local control with full information and local control with asymmetric information. By comparing the results under these information scenarios to those under a centrally controlled system, we provide insights into stock positioning and inventory risk sharing. We quantify, for example, how much and when the supplier and the retailer overinvest in inventory as compared to the centrally controlled supply chain. We show that the supplier faces more inventory risk when the retailer has private service-level information. We also show that a supplier located closer to the retailer is affected less by information asymmetry. Next, we characterize when the supplier should optimally choose not to sign a promised lead-time contract and consider doing business under other settings. In particular, we establish the optimality of a cutoff level policy. 
Finally, under both full and asymmetric service-level information, we characterize conditions under which optimal promised lead times take extreme values of the feasible set, leading the supplier to assume all or none of the inventory risk (hence the name all-or-nothing solution). We conclude with numerical examples demonstrating our results.

Book ChapterDOI
Thomas Kwok1, Ajay Mohindra1
01 Dec 2008
TL;DR: A first-of-a-kind multi-tenant placement tool for application deployment in a distributed computing environment is described, which addresses and provides novel solutions to technical challenges of capacity planning and resource allocation for tenant-aware systems.
Abstract: The cost of customization, deployment, and operation of a software application supporting multiple tenants can be lowered through multi-tenancy in a new application business model called Software as a Service (SaaS). However, there are a number of technical challenges that need to be tackled before these benefits can be realized. These challenges include calculating resource requirements for multiple tenants with applied constraints in a shared application instance, and the optimal placement of tenants and instances with maximum cost savings but without violating any service level agreement requirements for any tenant in a set of servers. Moreover, previously reported capacity planning and resource allocation methods and tools are not tenant-aware. This paper addresses and provides novel solutions to these challenges. We also describe a first-of-a-kind multi-tenant placement tool for application deployment in a distributed computing environment.

Journal ArticleDOI
TL;DR: An optimization-based model is presented to gain insights into the integrated inventory and transportation problem for a single-echelon, multi-facility service parts logistics system with time-based service level constraints.
Abstract: We present an optimization-based model to gain insights into the integrated inventory and transportation problem for a single-echelon, multi-facility service parts logistics system with time-based service level constraints. As the optimization goal, we minimize the relevant inventory and transportation costs while ensuring that service constraints are met. The model builds on a stochastic base-stock inventory model and integrates it with transportation options and the service responsiveness that can be achieved using alternate modes (namely slow, medium, and fast). The results obtained across different networks show that significant benefits can be gained from integrating transportation mode and inventory decisions.
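The stochastic base-stock building block mentioned here can be illustrated by choosing the smallest base-stock level whose no-stockout probability meets a service target, assuming Poisson lead-time demand (an assumption made for this sketch; the paper's time-based constraints and mode choice are richer):

```python
import math

def poisson_cdf_base_stock(mean_lead_time_demand, target_service):
    """Smallest base-stock level S such that P(lead-time demand <= S)
    meets the target service level, assuming Poisson demand."""
    lam = mean_lead_time_demand
    s, cdf = 0, math.exp(-lam)
    while cdf < target_service:
        s += 1
        cdf += math.exp(-lam) * lam ** s / math.factorial(s)
    return s

# Assumed numbers: mean lead-time demand of 5 units, 95% target.
S = poisson_cdf_base_stock(5.0, 0.95)
```

A faster transportation mode shortens the lead time, lowering `mean_lead_time_demand` and hence the required base stock, which is the inventory-transportation trade-off the model exploits.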

Journal ArticleDOI
TL;DR: In this paper, the authors conducted an empirical analysis of the Delta/Continental/Northwest code-share alliance and found that the evidence did not suggest that the alliance facilitated collusion on the partners' overlapping routes.
Abstract: The U.S. Department of Transportation (DOT) expressed serious reservations before ultimately approving the Delta/Continental/Northwest code‐share alliance. The DOT’s main fear was that the alliance could facilitate collusion (explicit or tacit) on prices and/or service levels in the partners’ overlapping markets. However, since implementation of the alliance, there has not been a formal empirical analysis of its effects on price and traffic (number of passengers) levels. The main objective of this paper is to conduct such an analysis with a particular focus on testing whether the data are consistent with collusive behavior by the three airlines. The evidence does not suggest that the alliance facilitated collusion on the partners’ overlapping routes.

Journal ArticleDOI
TL;DR: This paper analyzes the optimal production and transshipment policy for a two-location make-to-stock queueing system with exponential production and interarrival times and develops three easy- to-implement heuristics that work very well for a large range of cost parameters.
Abstract: Inventory sharing through transshipment has attracted a great deal of attention from researchers and practitioners due to its potential for increasing service levels while simultaneously decreasing stock levels. In this paper, we analyze the optimal production and transshipment policy for a two-location make-to-stock queueing system with exponential production and interarrival times. A key feature of our model is that we allow transshipments to be triggered by both demand arrivals and production completions. Thus, transshipment is used to achieve production flexibility through inventory reallocation, as well as to fill emergency demands. We also consider capacity issues in transshipment by modeling each location as a single-server, make-to-stock queueing system. In this setting, we prove that the optimal production policy for each location belongs to the “hedging point” family of policies, while the optimal demand filling policy belongs to the “state-dependent rationing” family of policies. We analyze the structural properties of the optimal policy and provide conditions under which the optimal policy can be simplified. Given the complex nature of the optimal policy, we develop three easy-to-implement heuristics that work very well for a large range of cost parameters.
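The two policy families the paper identifies can be sketched as simple decision rules: a hedging-point rule for production and a state-dependent rationing rule for filling transshipment demand. This is a schematic illustration of the policy structure only, with hypothetical thresholds; the paper's actual optimal thresholds depend on the full queueing model.

```python
def produce(inventory, hedging_point):
    # Hedging-point policy: keep producing while inventory is
    # below the target level, idle once it is reached
    return inventory < hedging_point

def fill_demand(inventory, rationing_level, is_local_demand):
    # State-dependent rationing: local demand is filled from any
    # positive stock, but transshipment (remote) demand is filled
    # only when inventory exceeds the rationing level
    if is_local_demand:
        return inventory > 0
    return inventory > rationing_level
```

Under such a rule, a location with low stock reserves its remaining units for its own customers rather than shipping them to the other location.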

Journal ArticleDOI
TL;DR: A new constraint-based model for determining optimal stock levels for all products at a storage location, with restrictions on space, delivery and criticality of items taken into account is presented.
Abstract: The materials management group at any hospital is responsible for ensuring that their inventory policies provide a good service in delivering products. The group also needs to be aware of their own costs for distribution, in terms of the frequency of delivery. With future changes to the hospital’s infrastructure, function and size, the group requires that these policies be reviewed and prepared for the changes taking place. For this, inventory policy models need to be built and used to anticipate the effects of these changes. Due to the importance of many products, high service levels are essential, yet there are often space and delivery constraints, limiting the amount of stock which can be held and delivered at each location. We present in this paper a new constraint-based model for determining optimal stock levels for all products at a storage location, with restrictions on space, delivery and criticality of items taken into account. We validate this model on sterile and bulk items in a real-life setting of an intensive care unit within Cork University Hospital, Ireland.
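The flavor of space-constrained, criticality-aware stocking can be conveyed with a much cruder heuristic than the paper's constraint-based model: satisfy each item's minimum stock in criticality order, then spend any leftover space on extra stock the same way. The item fields and values below are hypothetical, and this greedy pass is a sketch, not the paper's method.

```python
def allocate_stock(items, space):
    """Greedy sketch: meet minimum stock levels in decreasing criticality
    order, then use remaining shelf space for extra stock the same way.
    items: dicts with name, criticality, min_stock, max_stock, unit_space."""
    by_criticality = sorted(items, key=lambda it: -it["criticality"])
    plan, remaining = {}, space
    for it in by_criticality:            # pass 1: minimum stock first
        qty = min(it["min_stock"], remaining // it["unit_space"])
        plan[it["name"]] = qty
        remaining -= qty * it["unit_space"]
    for it in by_criticality:            # pass 2: spend leftover space
        extra = min(it["max_stock"] - plan[it["name"]],
                    remaining // it["unit_space"])
        plan[it["name"]] += extra
        remaining -= extra * it["unit_space"]
    return plan

items = [
    {"name": "gloves",   "criticality": 3, "min_stock": 10, "max_stock": 20, "unit_space": 1},
    {"name": "syringes", "criticality": 2, "min_stock": 5,  "max_stock": 15, "unit_space": 2},
]
plan = allocate_stock(items, space=30)
```

A real model, as in the paper, would instead solve for all items jointly under the space, delivery and criticality constraints rather than committing greedily.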

Proceedings ArticleDOI
07 Jul 2008
TL;DR: This paper demonstrates how to analyze SLAs during development phase and how to monitor these dependencies using event logs during runtime, and calls it MoDe4SLA (monitoring dependencies for SLAs).
Abstract: In service oriented computing different techniques for monitoring service level agreements (SLAs) are available. Many of these monitoring approaches focus on bilateral agreements between partners. However, when monitoring composite services it is not only important to figure out whether SLAs are violated, but we also need to analyze why these violations have occurred. When offering a composite service a company depends on its content providers to meet the service level they agreed upon. Due to these dependencies a company should not only monitor the SLA of the composite service, but also the SLAs of the services it depends on. By analyzing and monitoring the composite service in this way, causes for SLA violations can be easier found. In this paper we demonstrate how to analyze SLAs during development phase and how to monitor these dependencies using event logs during runtime. We call our approach MoDe4SLA (monitoring dependencies for SLAs).
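The idea of tracing a composite SLA violation back to the services it depends on can be sketched from an event log: compute each component's own violation rate and rank the dependencies as candidate causes. This is a simplified illustration of the dependency-monitoring idea, not the MoDe4SLA approach itself; the service names and SLA bounds are invented.

```python
from collections import defaultdict

def sla_violation_rates(events, slas):
    """events: (service, response_ms) pairs; slas: service -> max response_ms.
    Returns, per service, the fraction of calls exceeding its SLA bound."""
    total, slow = defaultdict(int), defaultdict(int)
    for service, response_ms in events:
        total[service] += 1
        if response_ms > slas[service]:
            slow[service] += 1
    return {s: slow[s] / total[s] for s in total}

def rank_suspect_dependencies(events, slas, composite="composite"):
    """Rank the component services the composite depends on by their
    own violation rates, as candidate causes of a composite violation."""
    rates = sla_violation_rates(events, slas)
    deps = [(s, r) for s, r in rates.items() if s != composite]
    return sorted(deps, key=lambda sr: -sr[1])

log = [("db", 120), ("db", 80), ("cache", 5), ("composite", 900)]
bounds = {"db": 100, "cache": 10, "composite": 500}
suspects = rank_suspect_dependencies(log, bounds)
```

Monitoring only the composite SLA would report the violation; monitoring the dependencies as well points at which content provider likely caused it.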

Journal ArticleDOI
TL;DR: The analysis of the problem of optimal location of a set of facilities in the presence of stochastic demand and congestion yields several insights, including the importance of equitable facility configurations, the behavior of optimal and near-optimal capacities, and a robust class of solutions that can be constructed for this problem.
Abstract: We analyze the problem of optimal location of a set of facilities in the presence of stochastic demand and congestion. Customers travel to the closest facility to obtain service; the problem is to determine the number, locations, and capacity of the facilities. Under rather general assumptions (spatially distributed continuous demand, general arrival and service processes, and nonlinear location and capacity costs) we show that the problem can be decomposed, and construct an efficient optimization algorithm. The analysis yields several insights, including the importance of equitable facility configurations (EFCs), the behavior of optimal and near-optimal capacities, and a robust class of solutions that can be constructed for this problem.

Patent
Head Bubba
06 Nov 2008
TL;DR: In this paper, the authors propose an approach to change the allocation of computing resources to an application program based on an established service level requirement for utilization of the resource by the application program.
Abstract: Allocating computing resources comprises allocating an amount of a resource to an application program based on an established service level requirement for utilization of the resource by the application program, determining whether the application program's utilization of the resource exceeds a utilization threshold, and changing the allocated amount of the resource in response to a determination that the application program's utilization of the resource exceeds the utilization threshold. The utilization threshold is based on the established service level requirement and is different than the established service level requirement. Changing the allocation of the resource based on the utilization threshold allows allocating sufficient resources to the application program prior to a breach of a service level agreement for the application program.
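The patent's central mechanism, acting on a utilization threshold set below the SLA requirement so capacity is added before the SLA is breached, can be sketched as a small scaling rule. The headroom and scaling factor below are illustrative assumptions, not values from the patent.

```python
def next_allocation(allocated, used, sla_utilization, headroom=0.1, scale=1.25):
    """Scale up an application's resource allocation when utilization
    crosses a threshold below the SLA level, i.e. act early rather than
    waiting for the SLA requirement itself to be breached."""
    threshold = sla_utilization - headroom   # e.g. act at 80% if SLA is 90%
    utilization = used / allocated
    if utilization > threshold:
        return allocated * scale             # grow before the SLA is hit
    return allocated                         # utilization is safe; no change
```

For example, with a 90% SLA utilization requirement and 10% headroom, an application using 85 of 100 allocated units triggers a scale-up even though the SLA itself is not yet violated.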

12 Nov 2008
TL;DR: This is the author's final draft of the paper published as CEUR Proceedings, 2008, 411, paper 2, and presented at the 2nd Workshop on Non Functional Properties and Service Level Agreements in Service Oriented Computing at ECOWS 2008, Dublin, Ireland, November 12, 2008.
Abstract: This is the author's final draft of the paper published as CEUR Proceedings, 2008, 411, paper 2, and presented at the 2nd Workshop on Non Functional Properties and Service Level Agreements in Service Oriented Computing at ECOWS 2008, Dublin, Ireland, November 12, 2008. This paper is available from http://ftp.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-411/.