scispace - formally typeset

Showing papers on "Service level published in 2019"


Book ChapterDOI
TL;DR: In this paper, the authors study capacity management when workers self-schedule, where the agents have the flexibility to choose when they will or will not work and they optimize their schedules based on the compensation offered and their individual availability.
Abstract: Motivated by recent innovations in service delivery such as ride-sharing services and work-from-home call centers, we study capacity management when workers self-schedule. Our service provider chooses capacity to maximize its profit (revenue from served customers minus capacity costs) over a horizon. Because demand varies over the horizon, the provider benefits from flexibility to adjust its capacity from period to period. However, the firm controls its capacity only indirectly through compensation. The agents have the flexibility to choose when they will or will not work, and they optimize their schedules based on the compensation offered and their individual availability. To guarantee adequate capacity, the firm must offer sufficiently high compensation. An augmented newsvendor formula captures the tradeoffs for the firm and the agents. If the firm could keep the flexibility but summon as many agents as it wants (i.e., have direct control) for the same wages, it would not only generate higher profit, as is expected, but would also provide better service levels to its customers. If the agents require a “minimum wage” to remain in the agent pool, they will have to relinquish some of their flexibility. To pay a minimum wage, the firm must restrict the number of agents that can work in some time intervals. The costs to the firm are countered by the self-scheduling firm’s flexibility to match supply to varying demand. If the pool of agents is sufficiently large relative to peak demand, the firm earns more than it would if it had control of agents’ schedules but had to maintain a fixed staffing level over the horizon.
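The critical-fractile tradeoff behind the newsvendor formula mentioned in this abstract can be sketched numerically. This is the plain newsvendor rule with illustrative prices and a normal demand assumption, not the paper's augmented formula with self-scheduling agents:

```python
from statistics import NormalDist

def newsvendor_capacity(price, wage, demand_mean, demand_sd):
    # Underage cost: margin lost on an unserved customer; overage cost:
    # wage paid to an idle agent. Capacity is set at the critical fractile
    # of (assumed normal) demand.
    underage = price - wage
    overage = wage
    fractile = underage / (underage + overage)
    return NormalDist(demand_mean, demand_sd).inv_cdf(fractile)

# Illustrative numbers: $25 revenue per served customer, $15 wage per period.
cap = newsvendor_capacity(price=25.0, wage=15.0, demand_mean=100.0, demand_sd=20.0)
```

With these toy numbers the fractile is 0.4, so optimal capacity sits slightly below mean demand; the paper's augmentation changes the fractile through the compensation needed to summon agents.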

172 citations


Journal ArticleDOI
TL;DR: This paper proposes a chatbot-based healthcare service with a knowledge base for cloud computing; the proposed framework enables a smooth human–robot interaction that supports the efficient implementation of the chatbot healthcare service.
Abstract: With the recent increase in the interest of individuals in health, lifecare, and disease, hospital medical services have been shifting from a treatment focus to prevention and health management. The medical industry is creating additional services for health- and life-promotion programs. This change represents a medical-service paradigm shift due to the prolonged life expectancy, aging, lifestyle changes, and income increases, and consequently, the concept of the smart health service has emerged as a major issue. Due to smart health, the existing health-promotion medical services that typically have been operated by large hospitals have been developing into remote medical-treatment services where personal health records are used in small hospitals; moreover, a further expansion has been occurring in the direction of u-Healthcare in which health conditions are continuously monitored in the everyday lives of the users. However, as the amount of data is increasing and the medical-data complexity is intensifying, the limitations of the previous approaches are increasingly problematic; furthermore, since even the same disease can show different symptoms depending on the personal health conditions, lifestyle, and genome information, universal healthcare is not effective for some patients, and it can even generate severe side effects. Thus, research on the AI-based healthcare that is in the form of mining-based smart health, which is a convergence technology of the 4IR, is actively being carried out. Particularly, the introduction of various smart medical equipment for which healthcare big data and machine learning have been combined and the expansion of the distribution of smartphone wearable devices have led to innovations such as personalized diagnostic and treatment services and chronic-disease management and prevention services.
In addition, various already launched applications allow users to check their own health conditions and receive the corresponding feedback in real time. Based on these innovations, the preparation of a way to determine a user’s current health conditions, and to respond properly through contextual feedback in the case of unsound health conditions, is underway. However, since the previously made healthcare-related applications need to be linked to a wearable device, and they provide medical feedback to users based solely on specific biometric data, inaccurate information can be provided. In addition, the user interfaces of some healthcare applications are very complicated, causing user inconvenience regarding the attainment of desired information. Therefore, we propose a chatbot-based healthcare service with a knowledge base for cloud computing. The proposed method is a mobile health service in the form of a chatbot for the provision of fast treatment in response to accidents that may occur in everyday life, and also in response to changes of the conditions of patients with chronic diseases. A chatbot is an intelligent conversation platform that interacts with users via a chatting interface, and since its use can be facilitated by linkages with the major social network service messengers, general users can easily access and receive various health services. The proposed framework enables a smooth human–robot interaction that supports the efficient implementation of the chatbot healthcare service. The design of the framework comprises the following four levels: data level, information level, knowledge level, and service level.
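The four-level design (data, information, knowledge, service) can be pictured as a pipeline of transformations. Everything below, the function names, the toy symptom list, and the rule base, is an illustrative assumption, not the paper's implementation:

```python
def data_level(raw_message: str) -> dict:
    # Data level: normalize the raw chat input into tokens.
    return {"tokens": raw_message.lower().split()}

def information_level(data: dict) -> dict:
    # Information level: extract coarse symptom mentions from the tokens.
    symptoms = {"fever", "cough", "burn"}
    return {"symptoms": [t for t in data["tokens"] if t in symptoms]}

def knowledge_level(info: dict) -> dict:
    # Knowledge level: match symptoms against a toy knowledge base.
    kb = {"burn": "cool the area under running water", "fever": "rest and hydrate"}
    return {"advice": [kb[s] for s in info["symptoms"] if s in kb]}

def service_level(knowledge: dict) -> str:
    # Service level: format the chatbot reply.
    if not knowledge["advice"]:
        return "Please describe your symptoms."
    return "; ".join(knowledge["advice"])

reply = service_level(knowledge_level(information_level(data_level("I have a burn on my hand"))))
```

A real implementation would replace the keyword matching with NLP and the dictionary with a cloud-hosted knowledge base, but the layered flow is the same.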

97 citations


Journal ArticleDOI
TL;DR: This tutorial provides a gentle introduction to some ML techniques that have been recently applied in the field of optical-network failure management, introduces a taxonomy to classify failure-management tasks, and discusses possible applications of ML for these tasks.
Abstract: Failure management plays a role of capital importance in optical networks to avoid service disruptions and to satisfy customers’ service level agreements. Machine learning (ML) promises to revolutionize the (mostly manual and human-driven) approaches with which failure management in optical networks has traditionally been handled, by introducing automated methods for failure prediction, detection, localization, and identification. This tutorial provides a gentle introduction to some ML techniques that have been recently applied in the field of optical-network failure management. It then introduces a taxonomy to classify failure-management tasks and discusses possible applications of ML for these tasks. Finally, for a reader interested in more implementation details, we provide a step-by-step description of how to solve a representative example of a practical failure-management task.
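As a flavor of the automated failure-detection task the tutorial introduces, a minimal statistical baseline flags monitored readings that stray far from a healthy window; the surveyed ML techniques are far richer, and the signal values here are illustrative:

```python
from statistics import mean, stdev

def detect_failures(readings, healthy_window):
    # Flag any reading more than 3 sigma away from the healthy baseline.
    mu, sigma = mean(healthy_window), stdev(healthy_window)
    return [i for i, r in enumerate(readings) if abs(r - mu) > 3 * sigma]

baseline = [-21.0, -20.8, -21.2, -20.9, -21.1]  # healthy received power (dBm), illustrative
stream = [-21.0, -20.9, -27.5, -21.1]           # index 2: a sudden power drop
alarms = detect_failures(stream, baseline)
```

Localization and identification, the harder tasks in the taxonomy, require learning across many such monitored parameters rather than a single threshold.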

88 citations


Journal ArticleDOI
TL;DR: This paper addresses the reliability of a path after transmitting a given amount of data under service-level-agreement cooperation in a computer communication network, using a bi-objective formulation of the quickest path problem that minimizes the transmission time and the hybrid logarithmic reliability measure.
Abstract: This article addresses the problem related to the reliability of path after transmitting the given amount of data with the service-level agreement cooperation in the computer communication network....

68 citations


Journal ArticleDOI
TL;DR: This work examines store-picking operations for same-day BOPS (buy-online, pick-up-in-store) services, derives Best Performance Frontiers (BPFs) for single-wave and multi-wave in-store order picking, and examines demand-surge scenarios with different order arrival rates in an ordering cycle.

58 citations


Journal ArticleDOI
TL;DR: An instance size reduction algorithm and a mathematical programming based decomposition approach are developed to tackle a Consistent Vehicle Routing Problem that considers customers with multiple daily deliveries and different service level agreements, such as time windows and release dates.

52 citations


Journal ArticleDOI
TL;DR: An original decision-support tool (DST) is illustrated that helps 3PL managers decide on the proper warehouse management system (WMS) customization; it exploits optimization and simulation techniques to quantify the impact of information availability on warehousing operations performance.
Abstract: The purpose of this paper is to illustrate an original decision-support tool (DST) that helps 3PL managers decide on the proper warehouse management system (WMS) customization. The aim of this tool is to address the three main issues affecting such a decision: the cost of information sharing, the scarce visibility of the client’s data and the uncertainty of quantifying the return from investing in a WMS feature. The tool behaves as a digital twin of a WMS. In addition, it incorporates a set of WMS features based both on heuristics and optimization techniques and uses simulation to perform what-if multi-scenario analyses of alternative management scenarios. To validate the effectiveness of the tool, its application to a real-world 3PL warehouse operating in the sector of biomedical products is illustrated. The results of a simulation campaign along an observation horizon of ten months demonstrate how the tool supports the comparison of alternative scenarios with the as-is, thereby suggesting the most suitable WMS customization to adopt. The tool supports 3PL managers in enhancing the efficiency of operations and the fulfillment of the required service level, which is increasingly challenging given the large inventory mix and the variable client portfolio that 3PLs have to manage. In particular, the choice of the WMS customization that performs best for each business can be problematic, given the provider’s scarce information visibility into the client’s processes. To the author’s knowledge, this paper is among the first to address a still-uncovered gap in the warehousing literature by illustrating a DST that exploits optimization and simulation techniques to quantify the impact of information availability on warehousing operations performance. As a second novel contribution, this tool makes it possible to create a digital twin of a WMS and foresee the evolution of the warehouse’s performance over time.

51 citations


Journal ArticleDOI
TL;DR: In this article, the authors developed a two-period duopoly model to show how consumers' variety-seeking behavior affects the pricing and service level decisions of a traditional product and a sharing product.
Abstract: We developed a two-period duopoly model to show how consumers’ variety-seeking behavior affects the pricing and service level decisions of a traditional product and a sharing product. Our analysis revealed that, without considering the consumers’ variety-seeking behavior, the traditional product attracted consumers with a high level of service and high price, while the sharing product attracted consumers with a low level of service and low price. When we only considered variety-seeking behavior and did not adjust the service level, the product with the low level of service benefited from the consumers’ variety-seeking behavior, while the product with the high level of service lost profits. When we considered the variety-seeking behavior and adjusted the service level as well as the price, the sharing product was attractive to variety-seeking consumers and it gained a greater competitive advantage over the traditional product. For two periods, the number of variety-seeking consumers who switched from buying traditional products to buying sharing products was greater than those who switched from buying sharing products to buying traditional products. Furthermore, we found that when the consumers’ variety-seeking behavior was not obvious, the number of consumers shifting from the traditional product increased monotonically. In contrast, when the variety-seeking behavior was obvious, the number of consumers shifting from the traditional product decreased monotonically.

50 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a comprehensive list of factors that drive distribution structure decision making, including demand level, service level, product characteristics, logistics costs, accessibility, and contextual factors, and conclude with a framework that shows the relative position of these factors.

48 citations


Journal ArticleDOI
TL;DR: A collaborative SLA- and Reputation-based Trust Management (RTM) solution for federated cloud environments that objectively computes cloud providers’ reputation values under various scenarios, leveraging several technical and user-experience metrics to do so.
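The reputation computation can be sketched as a weighted aggregation of normalized metrics; the metric names and weights below are illustrative assumptions rather than the paper's RTM formulas:

```python
def reputation(metrics, weights):
    # Weighted average of normalized (0..1) trust metrics.
    total = sum(weights.values())
    return sum(weights[k] * metrics[k] for k in weights) / total

# Hypothetical provider scores: SLA compliance and availability are the
# "technical" metrics, user_rating the "user experience" one.
provider = {"sla_compliance": 0.98, "availability": 0.995, "user_rating": 0.80}
score = reputation(provider, {"sla_compliance": 0.5, "availability": 0.3, "user_rating": 0.2})
```

In a federated setting, each member cloud would report such metrics and the federation would aggregate them, which is where collaboration and objectivity become the design problems the paper tackles.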

46 citations


Journal ArticleDOI
TL;DR: An integrated research framework, applicable in the present context, is presented as a direct consequence of the literature review, together with future research directions.

Journal ArticleDOI
TL;DR: This paper exploits task replication not only to satisfy reliability requirements but also to improve the QoS of low-criticality tasks in overrun situations, and proposes the LETR-MC scheme, which determines the number of replicas and reduces the execution-time overlap between primary tasks and replicas.
Abstract: Advancement of Cyber-Physical Systems has attracted attention to Mixed-Criticality Systems (MCSs), both in research and in industrial designs. As multicore platforms are becoming the dominant trend in MCSs, joint energy and reliability management is a crucial issue. In addition, providing a guaranteed service level for low-criticality tasks in critical mode is of great importance. To address these problems, we propose the “LETR-MC” scheme, which simultaneously supports certification, energy management, fault-tolerance, and guaranteed service levels in mixed-criticality multicore systems. In this paper, we exploit task replication not only to satisfy reliability requirements, but also to improve the QoS of low-criticality tasks in overrun situations. Our proposed LETR-MC scheme determines the number of replicas and reduces the execution time overlap between the primary tasks and replicas. Moreover, instead of ignoring low-criticality tasks or selectively executing them without any guaranteed service level in overrun mode, it mathematically explores the minimum achievable service guarantee for each low-criticality task in different execution modes, i.e., normal, fault-occurrence, overrun and critical operation modes. We develop novel unified demand bound functions (DBF), along with a DVFS method based on the proposed DBF analysis. Our experimental results show that LETR-MC provides up to 59 percent (24 percent on average) energy savings, and significantly improves the service levels of low-criticality tasks compared to the state-of-the-art schemes.
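The demand bound function at the heart of such schedulability analyses can be illustrated in its textbook single-mode form; the paper derives unified mixed-criticality variants, which are not reproduced here:

```python
from math import floor

def dbf(tasks, t):
    # Textbook demand bound function for sporadic tasks (C = WCET,
    # D = relative deadline, T = period): the worst-case execution demand
    # that must complete inside any interval of length t.
    demand = 0.0
    for C, D, T in tasks:
        if t >= D:
            demand += (floor((t - D) / T) + 1) * C
    return demand

tasks = [(1.0, 4.0, 5.0), (2.0, 8.0, 10.0)]  # (C, D, T), illustrative
```

A task set is schedulable on one processor (EDF) when dbf(tasks, t) <= t for all t; the mixed-criticality extension adds per-mode demand terms and replica overlap.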

Journal ArticleDOI
TL;DR: A mathematical model of a production inventory system for green products in a green manufacturing industry is proposed, formulating a profit function under service-level- and random-variable-dependent demand, with green technology implemented in the manufacturing industry to reduce greenhouse-gas emissions.
Abstract: The article deals with a mathematical model of the production inventory system of green products in a green manufacturing industry. The main objective of this proposed model is to formulate a profit function for service-level- and random-variable-dependent demand, implementing green technology in the manufacturing industry to reduce greenhouse-gas emissions. The production lot size is considered here as an increasing function of the green technology and the capital invested to set up the manufacturing system, which meets the market demand. As a result, green technology, capital invested for setup and service level are decision variables that are optimized to achieve maximum profit. Finally, numerical examples for the normal-distribution and distribution-free cases are illustrated to justify the proposed model.
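How the three decision variables enter such a profit function can be sketched with a toy model; every functional form and constant below is an assumption for illustration, not the paper's formulation (and under these toy numbers the search happens to favor raising the service level over further capacity investment):

```python
def profit(g, k, s):
    # g: green-technology level, k: setup capital, s: service level in [0, 1].
    # All functional forms and constants are illustrative assumptions.
    lot = 100.0 * (1 + 0.1 * g + 0.05 * k)   # lot size grows with g and k
    demand = 100.0 * (0.5 + 0.5 * s)         # demand grows with service level
    sold = min(lot, demand)
    return 12.0 * sold - 5.0 * lot - 8.0 * g - 6.0 * k - 40.0 * s ** 2

# Grid search over the three decision variables.
best = max(
    ((g / 10, k / 10, s / 10) for g in range(11) for k in range(11) for s in range(11)),
    key=lambda v: profit(*v),
)
```

The paper instead optimizes analytically, and handles the random demand component via normal-distribution and distribution-free cases.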

Journal ArticleDOI
TL;DR: This method has the potential to improve the inventory-service trade-off, i.e., it can achieve a certain cycle service level with lower inventory levels compared to the traditional forecasting techniques for intermittent spare part demand.
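The inventory-service trade-off referenced here follows from the standard cycle-service-level base-stock calculation: a sharper forecast of intermittent demand shrinks the standard-deviation term and hence the stock needed for the same service level. A sketch under a normal lead-time-demand assumption, with illustrative numbers:

```python
from statistics import NormalDist

def base_stock(csl, lt_demand_mean, lt_demand_sd):
    # Order-up-to level hitting a target cycle service level (CSL)
    # under normally distributed lead-time demand.
    z = NormalDist().inv_cdf(csl)
    return lt_demand_mean + z * lt_demand_sd

s_sharp = base_stock(0.95, 20.0, 4.0)  # sharper forecast: sd = 4
s_crude = base_stock(0.95, 20.0, 9.0)  # cruder forecast: sd = 9
```

The normality assumption is questionable for intermittent spare-part demand, which is precisely why specialized forecasting methods matter in this setting.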

Journal ArticleDOI
TL;DR: An adaptive and fuzzy resource management framework (AFRM) is proposed in which the latest resource values of each virtual machine are gathered through environment sensors and sent to a fuzzy controller; AFRM analyzes the received information to decide how to reallocate resources in each iteration of a self-adaptive control cycle.
Abstract: Resource management plays a key role in the cloud-computing environment, in which applications face dynamically changing workloads. However, such dynamic and unpredictable workloads can lead to performance degradation of applications, especially when demands for resources increase. To meet Quality of Service (QoS) requirements based on Service Level Agreements (SLA), resource management strategies must be taken into account. The question addressed in this research is how to reduce the number of SLA violations based on the optimization of resources allocated to users, applying an autonomous control cycle and a fuzzy knowledge management system. In this paper, an adaptive and fuzzy resource management framework (AFRM) is proposed in which the latest resource values of each virtual machine are gathered through the environment sensors and are sent to a fuzzy controller. Then, AFRM analyzes the received information to make decisions on how to reallocate the resources in each iteration of a self-adaptive control cycle. All the membership functions and rules are dynamically updated based on workload changes to satisfy QoS requirements. Two sets of experiments were conducted on the storage resource to examine AFRM in comparison to rule-based and static-fuzzy approaches in terms of RAE, utility, number of SLA violations, and cost, applying HIGH, MEDIUM, MEDIUM-HIGH, and LOW workloads. The results reveal that AFRM outperforms the rule-based and static-fuzzy approaches in several respects.
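The fuzzify-infer-defuzzify cycle of such a controller can be sketched minimally; the membership shapes, rules, and scaling factors below are illustrative assumptions, not AFRM's dynamically updated rule base:

```python
def reallocate(utilization, current_gb):
    # Fuzzify: two overlapping ramp memberships over utilization in [0, 1].
    low = max(0.0, 1.0 - utilization / 0.6)     # fully "low" at 0, zero from 0.6 up
    high = max(0.0, (utilization - 0.4) / 0.6)  # zero up to 0.4, fully "high" at 1.0
    # Rules: low -> shrink the allocation (x0.8); high -> grow it (x1.3).
    # Defuzzify with a weighted average of the fired rule outputs.
    den = low + high
    factor = (low * 0.8 + high * 1.3) / den if den else 1.0
    return current_gb * factor
```

AFRM's distinguishing step is that the membership functions and rules themselves are rewritten as the workload changes, whereas this sketch keeps them static.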

Journal ArticleDOI
TL;DR: Analysis indicates that the SERQPP path is a reliable and quickest option for data transmission in remote healthcare applications, and a quantitative and qualitative comparative study shows that the proposed algorithm outperforms existing approaches in computing the SERQPP without increasing the time complexity.
Abstract: Designing a mission critical system, such as a remote surgery, e-healthcare, e-banking, or e-shopping system, is a challenging task. The continuity and criticality of operation in mission critical systems depend on their delay, capacity, reliability, and energy. In this study, the energy available at each node and the service-level agreements (SLAs) are influenced by the continuity and criticality of data transmission. SLAs are specified as the requested service time and the service mean time to failure. For the failure-free operation of mission critical systems, the SLA energy cooperative reliable and quickest path problem (SERQPP) algorithm is defined between a specified source and destination. Analysis indicates that the SERQPP path is a reliable and quickest option for data transmission in remote healthcare applications. The performance of the proposed algorithm is analyzed using the mean number of qualifying service set (QSS) paths, the average hop count, and the average energy efficiency. Simulations are used to determine the variation trends for the SLAs, energy, numbers of nodes, distinct capacities, and data required for the computation of the SERQPP. The results show that the number of QSS paths and the average energy efficiency increase with the SLA and energy. In addition, a quantitative and qualitative comparative study shows that the proposed algorithm outperforms existing approaches in computing the SERQPP without increasing the time complexity. Finally, the major features of the SERQPP algorithm are discussed and highlighted.
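The quickest-path idea underlying SERQPP, trading a path's end-to-end delay against its bottleneck capacity, can be sketched with the classic enumeration over distinct capacities; the SLA, reliability, and energy constraints of the actual algorithm are omitted here:

```python
import heapq

def quickest_time(edges, src, dst, data_size):
    # For each distinct link capacity c, keep only links with capacity >= c,
    # find the minimum-delay path by Dijkstra, and evaluate
    # delay + data_size / c; the best combination wins.
    best = float("inf")
    for c in {cap for _, _, _, cap in edges}:
        adj = {}
        for u, v, delay, cap in edges:
            if cap >= c:
                adj.setdefault(u, []).append((v, delay))
                adj.setdefault(v, []).append((u, delay))
        dist, heap = {src: 0.0}, [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue
            for v, w in adj.get(u, []):
                if d + w < dist.get(v, float("inf")):
                    dist[v] = d + w
                    heapq.heappush(heap, (d + w, v))
        if dst in dist:
            best = min(best, dist[dst] + data_size / c)
    return best

# (u, v, delay, capacity) -- illustrative network
net = [("s", "a", 1.0, 10.0), ("a", "t", 1.0, 10.0), ("s", "t", 5.0, 100.0)]
t = quickest_time(net, "s", "t", 200.0)
```

Note how the answer depends on the data volume: for a large transfer the high-capacity direct link wins despite its higher delay, while for a small transfer the low-delay two-hop path wins.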

Journal ArticleDOI
TL;DR: This paper proposes an innovative reservation enforcement technique that allows substantially longer reservation times while keeping the system profitable and achieving high service quality, and proposes a variable quality of service (QoS) model in which the guaranteed radius around the user within which the reserved vehicle will be placed depends on the zone of trip departure.
Abstract: Reservations in daily services can improve user satisfaction, and give additional information about the demand patterns to the operators. However, providing reservations to carsharing clients is difficult. While carsharing is especially convenient if it allows one-way trips and vehicle drop-off anywhere in the service area (called free-floating), this flexibility increases management complexity because of vehicle stock imbalance. Most of the commercial providers of free-floating carsharing offer reservations under highly restrictive terms, for example only up to 30 min in advance. In this paper, we propose an innovative reservation enforcement technique that allows substantially longer reservation times while keeping the system profitable and achieving high service quality. A simple way to enforce reservations is to lock vehicles until the departure time of a client. However, this comes at the cost of idling vehicles that could be used by other users, decreasing the revenue. Our approach, called the relocations-based reservation enforcement method (R-BR), combines vehicle locking and relocation movements. It locks vehicles only a short time before the trip departure if a suitable vehicle is close enough due to the natural trip patterns. If no such vehicle is available, a car is relocated from another place. Further, we propose a variable quality of service (QoS) model in which the guaranteed radius around the user within which the reserved vehicle will be placed, and the maximum allowed reservation time before the departure, depend on the zone of trip departure. A simulation-based optimization is used whereby the carsharing operation is simulated and optimized using an iterated local search (ILS) metaheuristic for adjustment of service level parameters. The proposed technique is tested on a set of artificial problem examples and a case study of a simulated working day in the Lisbon Municipality, Portugal.
Results show that the proposed R-BR method is substantially better than the simple vehicle locking when the constant QoS approach is used and that the devised ILS metaheuristic can further increase the system performance, especially with high trip volumes.
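The ILS loop used to tune the service-level parameters follows the standard pattern: descend to a local optimum, perturb, descend again, keep the best. A generic skeleton with a stand-in objective (the real one would wrap the carsharing simulation; step sizes and the perturbation are illustrative):

```python
import random

def iterated_local_search(evaluate, x0, steps=60, seed=7):
    rng = random.Random(seed)

    def local_search(x):
        # Coordinate-wise hill climbing with unit steps.
        improved = True
        while improved:
            improved = False
            for i in range(len(x)):
                for delta in (-1, 1):
                    y = list(x)
                    y[i] += delta
                    if evaluate(y) > evaluate(x):
                        x, improved = y, True
        return x

    best = local_search(list(x0))
    for _ in range(steps):
        kick = [v + rng.choice((-2, -1, 1, 2)) for v in best]  # perturbation
        candidate = local_search(kick)
        if evaluate(candidate) > evaluate(best):
            best = candidate
    return best

def obj(v):
    # Stand-in objective with a unique maximum at (3, -2).
    return -((v[0] - 3) ** 2 + (v[1] + 2) ** 2)

sol = iterated_local_search(obj, [0, 0])
```

In the paper's setting each evaluation is a full day's simulated carsharing operation, so the perturbation strength and iteration budget become the important tuning knobs.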

Journal ArticleDOI
TL;DR: A multidimensional evaluation framework for evaluating the Kutsuplus, Helsinki Metropolitan Region (HMR) on-demand micro-transit pilot provides a range of implications for user-centric service design, underpinned with an understanding of interdependencies between operating scheme, service pricing, and service level provided by other transport modes.
Abstract: As technical limitations are no longer the main obstacle for successful urban micro-transit operation, further development has to focus on evaluating a range of potential challenges, providing lessons for policy and service development, including the organisation of piloting activities. By contrast, few studies have employed detailed empirical data with trip and user properties to evaluate flexible micro-transit services in urban environments. This research focuses on evaluating Kutsuplus, the Helsinki Metropolitan Region (HMR) on-demand micro-transit pilot. Previous research on Kutsuplus has focused on evaluating financing and pricing policy, and users' and non-users' perceptions of the implemented service. This research develops a multidimensional evaluation framework, focused on the analysis of completed user journeys, accounting for the Kutsuplus operating area, timing, and pricing scheme. Thus, this framework uses 82,290 completed Kutsuplus journeys, combined with routing, HMR travel demand data and pricing modelling. Results indicate that demand for Kutsuplus increased over time, with low average vehicle occupancy and low wait time after journey-offer acceptance. The hourly demand pattern for Kutsuplus had a similar shape to the demand pattern for fixed public transport, with small differences in peak time start and duration. Spatial demand had a more orbital than radial direction, more versatile directional demand, a focus on the western side of the service area, and business-related locations in general. Most of the users were aged 30 to 65, with younger or older users also having distinct trip characteristics. Kutsuplus was on par with the private car for shorter journeys, but could also lead to undesired replacement of walking and cycling trips. Kutsuplus pricing was between public transport and UberPOP.
With these and other results, the multidimensional evaluation framework provides a range of implications for user-centric service design, underpinned with an understanding of interdependencies between operating scheme, service pricing, and service level provided by other transport modes. Finally, we provide recommendations for further analysis of micro-transit journey data, raising implications for data collection practices in the future micro-transit pilots, and for further directions in developing our understanding of emerging mobility-on-demand services.

Journal ArticleDOI
TL;DR: This work focuses on estimating the relative costs and benefits of serving different customer types by the different levels of service in a complex healthcare setting where the major differentiation among server types is the intensity of service provided.
Abstract: Many service systems have servers with different capabilities and customers with varying needs. One common way this occurs is when servers are hierarchical in their skills or in the level of service they can provide. Much of the literature studying such systems relies on an understanding of the relative costs and benefits associated with serving different customer types by the different levels of service. In this work, we focus on estimating these costs and benefits in a complex healthcare setting where the major differentiation among server types is the intensity of service provided. Step-down units (SDUs) were initially introduced in hospitals to provide an intermediate level of care for semicritically ill patients who are not sick enough to require intensive care but not stable enough to be treated in the general medical/surgical ward. One complicating factor is that the needs of customers are sometimes uncertain—specifically, it is difficult to know a priori which level of care a particular patient nee...

Journal ArticleDOI
TL;DR: A new formulation is proposed for the tsptw, where the service starting time at each customer’s location is expressed as a convex piecewise affine function of the travel times, which would enable the tractable formulation of the corresponding distributionally robust problem.
Abstract: We study an a priori Traveling Salesman Problem with Time Windows (tsptw) in which the travel times along the arcs are uncertain and the goal is to determine, within a budget constraint, a route for the service vehicle in order to arrive at the customers’ locations within their stipulated time windows as well as possible. In particular, service at a customer’s location cannot commence before the beginning of the time window, and any arrival after the end of the time window is considered late and constitutes poor customer service. In articulating the service level of the tsptw under uncertainty, we propose a new decision criterion, called the essential riskiness index, which has the computationally attractive feature of convexity that enables us to formulate and solve the problem more effectively. As a decision criterion for articulating service levels, it takes into account both the probability of lateness and its magnitude, and can be applied in contexts where the distributional information of the uncertain travel times is either fully or partially known. We propose a new formulation for the tsptw, where we explicitly express the service starting time at each customer’s location as a convex piecewise affine function of the travel times, which enables us to obtain a tractable formulation of the corresponding distributionally robust problem. We also show how to optimize the essential riskiness index via Benders decomposition and present cases where we can obtain closed-form solutions to the subproblems. We also illustrate in our numerical studies that this approach scales well with the number of samples used for the sample average approximation. The approach can be extended to more general settings, including the Vehicle Routing Problem with Time Windows with uncertain travel times and customers’ demands.
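As we read the criterion, the essential riskiness index of a lateness variable z (positive = late) takes the form rho(z) = min{alpha >= 0 : E[max(z, -alpha)] <= 0}, which indeed penalizes both the probability and the magnitude of lateness. On a sample it can be evaluated by simple bisection, although the paper optimizes it exactly via Benders decomposition; the lateness samples below are illustrative:

```python
def essential_riskiness(samples, tol=1e-9):
    # rho = min{ alpha >= 0 : mean(max(z, -alpha)) <= 0 } over the sample.
    def f(alpha):
        return sum(max(z, -alpha) for z in samples) / len(samples)
    if f(0.0) <= 0.0:
        return 0.0                       # never late: zero riskiness
    if sum(samples) / len(samples) >= 0.0:
        return float("inf")              # late on average: no finite alpha works
    lo, hi = 0.0, 1.0
    while f(hi) > 0.0:                   # bracket the root (f is nonincreasing)
        hi *= 2.0
    while hi - lo > tol:                 # bisect
        mid = (lo + hi) / 2.0
        if f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return hi

# 20% chance of being 10 min late, otherwise 5 min early (illustrative)
late = [10.0] * 2 + [-5.0] * 8
rho = essential_riskiness(late)
```

For this sample the index is 2.5: the earliness buffer must be weighted up until expected "capped" lateness hits zero, so rarer or smaller lateness yields a smaller index.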

Journal ArticleDOI
TL;DR: Two novel models can be used to realize global and partial coordination of the OSSSC, respectively, since both models utilize game theory to achieve the unification of individual and collective rationality and a fair allocation of the system's surplus profits based on individual decisions.
Abstract: Service quality coordination is a critical issue in online shopping service supply chain (OSSSC) management. However, such decision-making is problematic due to the behavior of channel members, characterized by individual rationality and concerns about fairness. These two characteristics matter greatly to the performance and service quality of the supply chain. Therefore, a service quality coordination contract for the OSSSC should be designed based on these two behavioral elements. This paper proposes two novel models that take into account the two important elements mentioned above and designs coordination contracts for the OSSSC with two competing third-party logistics (TPL) companies and a single online store. Equilibrium decisions in two benchmark scenarios (decentralized and centralized control) were analyzed, revealing that a traditional price-only contract cannot appropriately coordinate the supply chain. The two proposed contracts can be used to realize global and partial coordination of the OSSSC, respectively. This is because both models utilize game theory to achieve the unification of individual and collective rationality and a fair allocation of the system surplus profits based on individual decisions. Numerical experiments were conducted to examine the impact of the intensity of competition between the two TPL companies on their decisions, and the impact of the online store's attention to the ratio of service level to service price on the coordination effect.

Journal ArticleDOI
TL;DR: This paper presents a comprehensive survey of existing virtual machine migration and selection processes, examining the specific application-oriented capabilities of these strategies along with their advantages and bottlenecks, and furnishes strategies for further improvement.
Abstract: The recent growth in the demand for scalable applications from the consumers of services has motivated the application development community to build and deploy applications on the cloud in the form of services. The deployed applications depend significantly on the infrastructure available to the application providers. Bounded by the limitations of available on-premises resource pools, many application development companies have migrated their applications to third-party cloud environments called data centers. The data center owners, or cloud service providers, are responsible for ensuring high performance and high availability of the applications and, at the same time, the desired scalability. The cloud service providers are also challenged to reduce cost and energy consumption for better manageability of the data center without degrading the performance of the deployed applications. It is to be noted that the performance of an application depends not only on its responsiveness but must also be measured in terms of service level agreements. Violation of the service level agreements, or SLAs, can easily defeat the purpose of application deployment on cloud-based data centers. Thus, data center owners apply multiple load balancing strategies to maintain the outcomes desired by the application owners at a minimized cost of data center maintainability. Hence, there is a need to thoroughly study parallel research outcomes and identify the scope for improvements.
As applications range from small data-centric applications, which demand frequent updates and higher computational capability, to big data analytics applications, which require efficient algorithms for data and computation load management, data center owners are forced to seek efficient load management algorithms. The algorithms presented in various research attempts have focused on application-specific demands for load balancing via virtual machine migration, so the proposed algorithms have become specific to particular application problems. A further demand of the research is therefore a guideline for selecting the appropriate load balancing algorithm, via virtual machine migration, for applications with specific characteristics. Hence, this paper presents a comprehensive survey of existing virtual machine migration and selection processes to understand the application-oriented capabilities of these strategies, together with their advantages and bottlenecks. Beyond surveying existing load balancing measures, it is also important to outline further improvement strategies, which requires a detailed understanding of the parallel research outcomes; this paper therefore also provides guidelines for improvement and for further study. The study would not be complete without mathematical analysis for better understanding and experimental analysis on different standard datasets for more conclusive decisions. Hence, this paper also discusses mathematical models and experimental results to reach conclusive decisions on the improvement factors and on the usability of the migration methods for various purposes.
Finally, this paper is a comprehensive survey of the background of the research and of recent research outcomes, using mathematical modeling and experimental studies on various available datasets, and it identifies the scope for improvement along several dimensions: execution time, mean time before a VM migration, mean time before a host shutdown, number of node shutdowns, SLA performance degradation, number of VM migrations, and energy consumption.
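One widely used family of migration-selection policies evaluated in this line of work picks, from an overloaded host, the VMs that are cheapest to move. The sketch below is a generic minimum-migration-time-style heuristic under our own assumptions (VM names, thresholds, and the rule that memory footprint dominates live-migration time); it is not the algorithm of any particular surveyed paper.

```python
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    cpu: float   # CPU demand as a fraction of host capacity
    ram_mb: int  # memory footprint, which dominates live-migration time

def select_migrations(vms, host_util, threshold):
    """Repeatedly evict the VM with the smallest memory footprint
    (i.e., the fastest to live-migrate) until the host's CPU
    utilization drops to or below the overload threshold."""
    to_migrate = []
    util = host_util
    for vm in sorted(vms, key=lambda v: v.ram_mb):
        if util <= threshold:
            break
        to_migrate.append(vm)
        util -= vm.cpu
    return to_migrate, util

# Hypothetical overloaded host at 90% CPU with a 60% target threshold.
vms = [VM("a", 0.30, 4096), VM("b", 0.25, 1024), VM("c", 0.35, 2048)]
selected, util = select_migrations(vms, host_util=0.90, threshold=0.60)
# Evicts "b" (1 GB) then "c" (2 GB), leaving the host at ~30% utilization.
```

Surveys in this area typically compare such selection rules against alternatives (e.g., maximum-correlation or random selection) on the SLA-violation and energy metrics listed above.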

Journal ArticleDOI
TL;DR: In this article, the authors summarized and systematically reviewed applications of integrated LCA and PSS in the circular economy from a micro perspective, identified the challenges facing current research, and proposed future research directions to promote the development of LCA from the perspective of enterprises.

Journal ArticleDOI
TL;DR: A holistic placement framework for deploying user-requested virtual machines is proposed, pursuing different performance goals: counteracting hardware outages or reboots due to software aging, enforcing proper security policies, maintaining a suitable service level as perceived by users, and reducing power requirements.
Abstract: Modern datacenters rely on virtualization to deliver complex and scalable cloud services. To avoid inflating costs or reducing the perceived service level, suitable resource optimization techniques are needed. Placement can be used to prevent inefficient mappings between virtual and physical machines. In this perspective, we propose a holistic placement framework considering conflicting performance metrics, such as the service level delivered by the cloud, the energy footprint, hardware and software outages, and security policies. Unfortunately, computing the best placement strategies is nontrivial, as it requires trading off among several goals, possibly in real time. Therefore, we approach the problem via model predictive control to devise optimal mappings between virtual and physical machines. Results show the effectiveness of our technique in comparison with classical heuristics. Note to Practitioners — This paper is motivated by the success of cloud-based services. Specifically, we consider the case where the resources of datacenters are accessed through the infrastructure-as-a-service paradigm. To efficiently handle the load of requests and to tame operational costs, proper optimization is needed. To this end, we focus on selecting the best mappings between virtual and physical machines. We propose a holistic placement framework for deploying user-requested virtual machines, pursuing different performance goals: counteracting hardware outages or reboots due to software aging, enforcing proper security policies, maintaining a suitable service level as perceived by users, and reducing power requirements. The optimal strategies are computed with model predictive control, which makes it possible to handle complex constraints and take advantage of future information. The proposed framework is tested in different scenarios characterized by a variety of workloads and traces gathered in a real cloud datacenter.
Results indicate that our approach outperforms bin-packing techniques.
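The bin-packing techniques used as the baseline can be illustrated with the classic first-fit-decreasing heuristic. The sketch below is a generic single-resource version under our own assumptions (VM names and capacity units are illustrative); the paper's MPC-based placement is considerably richer, as it trades off several metrics over time.

```python
def first_fit_decreasing(vm_demands, host_capacity):
    """Classic first-fit-decreasing bin packing: place each VM, in order
    of decreasing demand, on the first host with enough residual
    capacity, opening a new host when none fits."""
    free = []        # residual capacity of each open host
    placement = {}   # VM name -> host index
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for i, cap in enumerate(free):
            if demand <= cap:
                free[i] -= demand
                placement[vm] = i
                break
        else:
            free.append(host_capacity - demand)
            placement[vm] = len(free) - 1
    return placement, len(free)

# Hypothetical CPU demands in arbitrary units, hosts of capacity 100.
placement, n_hosts = first_fit_decreasing(
    {"v1": 50, "v2": 40, "v3": 30, "v4": 60}, host_capacity=100)
# Packs the four VMs onto two hosts: {v4, v2} and {v1, v3}.
```

Such heuristics minimize the number of active hosts but are myopic: they ignore future demand, security constraints, and aging-related reboots, which is precisely where a predictive controller can do better.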

Journal ArticleDOI
TL;DR: A unified mixed integer linear programming (MILP) model is proposed, in which several sub-problems such as the number, location, and capacity of bicycle stations; total fleet size design; depot location design; and rebalancing and maintenance plans are combined and can be solved together.

Journal ArticleDOI
27 Aug 2019-Energies
TL;DR: In this article, the authors exhaustively reviewed the mechanisms in data centers that provide flexibility, together with flexible contracts such as green service level and supply-demand agreements, both of which have a positive impact on the total cost of ownership for data centers.
Abstract: The power demand (kW) and energy consumption (kWh) of data centers have increased drastically due to the growing communication and computation needs of IT services. Managing demand and energy within data centers is therefore a necessity. Thanks to automated ICT infrastructure empowered by IoT technology, such management is becoming more feasible than ever. In this paper, we look at management from two perspectives: (1) minimizing overall energy consumption and (2) reducing peak power demand during demand-response periods. Both perspectives have a positive impact on the total cost of ownership of data centers. We exhaustively review the mechanisms in data centers that provide flexibility, together with flexible contracts such as green service level and supply-demand agreements. We extend the state of the art by introducing the methodological building blocks and foundations of management systems for these two perspectives. We validate our results with experiments on a lab-scale cloud computing data center at the HPE premises in Milano. The obtained results support the theoretical model, highlighting the excellent potential of flexible service level agreements in Green IT: 33% overall energy savings and a 50% reduction in power demand during demand-response periods in the case of data center federation.
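The cost-of-ownership impact of the two reported reductions can be sketched with a back-of-envelope calculation. All prices and sizes below are illustrative assumptions of ours, not figures from the paper; only the 33% energy and 50% peak-reduction fractions come from the abstract.

```python
def annual_savings(base_energy_kwh, energy_saving_frac,
                   peak_kw, peak_reduction_frac,
                   price_per_kwh, demand_charge_per_kw_month):
    """Back-of-envelope annual savings from flexible SLAs: savings on
    the energy bill plus savings on monthly peak-demand charges."""
    energy_saved = base_energy_kwh * energy_saving_frac * price_per_kwh
    demand_saved = (peak_kw * peak_reduction_frac
                    * demand_charge_per_kw_month * 12)
    return energy_saved + demand_saved

# Illustrative figures (not the paper's): a 1 GWh/yr data center with a
# 500 kW peak, the reported 33% energy and 50% peak reductions, an
# assumed 0.10 EUR/kWh tariff and 15 EUR/kW-month demand charge.
saving = annual_savings(1_000_000, 0.33, 500, 0.50, 0.10, 15.0)
# 33,000 EUR of energy savings + 45,000 EUR of demand-charge savings.
```

Under these assumed tariffs, the demand-charge component is comparable to the energy component, which is why reducing peak demand during demand-response periods matters for total cost of ownership even when it saves little energy.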

Journal ArticleDOI
TL;DR: Computational results show that the proposed MI-SOCP-based hierarchical approach obtains higher solution quality with less computational effort than the existing model-based approach.
Abstract: Stochastic scheduling optimisation is a hot and challenging research topic with wide applications. Most existing works on stochastic parallel machine scheduling address uncertain processing time, a...

Journal ArticleDOI
TL;DR: An innovative design of an autonomous system supporting automatic decision-making for maintenance scheduling is proposed; it illustrates a high-level fusion of technical and business drivers and optimizes maintenance strategies and plans with regard to cost, service level, and reliability.

Journal ArticleDOI
TL;DR: A multi-objective simulation-based optimization framework is proposed to effectively manage train conflicts after the occurrence of a disturbance caused by a temporary line blockage.
Abstract: Railway disturbance management is inherently a multi-objective optimization problem that concerns both the operators’ cost and the passengers’ service level. This study proposes a multi-objective simul...

Journal ArticleDOI
TL;DR: A new frequency-based service network design model with transshipments, capacity constraints, and heterogeneous users is presented, and it is demonstrated that explicitly including heterogeneity in network design pays off in terms of improved user performance of the network.
Abstract: Value of time and value of reliability are two important user attributes that reflect shippers’ behavioral preferences and, as such, influence the design of transport service networks. Because shippers’ preferences vary widely, it is important to consider these variations between users when designing service networks. Until now, network design research has ignored the combined use of time and reliability valuations for heterogeneous user populations. The objective of this paper is to address these attributes in a model for freight service network design aimed at improving service performance. We present a new frequency-based service network design model with transshipments, capacity constraints, and heterogeneous users. We apply the model to demonstrate that explicitly including heterogeneity in network design pays off in terms of improved user performance of the network. A case study is conducted for a railway network in China. Values of time and reliability are estimated from a recent stated-preference survey and used to define distinct user classes. The proposed optimization problem is solved using an improved simulated annealing heuristic, for the case of an aggregate user group and the case of two distinct classes. Results show that by accounting for variation in shippers’ VOT and VOR, users’ total generalized cost is reduced while service levels improve. We conclude that incorporating heterogeneous VOT and VOR into the service network design problem is of interest for decisions on network investments.
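The simulated annealing heuristic used to solve the design problem follows the standard accept-worse-moves-with-decaying-probability template. The sketch below is a generic SA loop with a deliberately trivial toy objective standing in for the network's generalized-cost evaluation; the cooling schedule, parameters, and objective are our assumptions, not the paper's improved variant.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.95,
                        iters=2000, seed=0):
    """Generic simulated annealing: always accept improving neighbors,
    accept worsening ones with probability exp(-delta/T), and cool the
    temperature geometrically."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        y = neighbor(x, rng)
        fy = cost(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# Toy stand-in for a single service-frequency decision: minimize
# (f - 7)^2 over integer frequencies 1..20. A real run would instead
# evaluate total generalized cost over the VOT/VOR user classes.
best, fbest = simulated_annealing(
    cost=lambda f: (f - 7) ** 2,
    neighbor=lambda f, rng: max(1, min(20, f + rng.choice([-1, 1]))),
    x0=1,
)
```

In the actual design problem the neighborhood move would perturb service frequencies or transshipment choices, and the cost function would be the class-weighted sum of time and reliability valuations, which is far more expensive to evaluate than this toy objective.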