
Showing papers in "OR Spectrum in 2019"


Journal ArticleDOI
TL;DR: This work considers a flexible job shop scheduling problem with sequence-dependent setup times that incorporates heterogeneous machine operator qualifications by taking account of machine- and operator-dependent processing times, and presents exact and heuristic decomposition-based solution approaches.
Abstract: We consider a flexible job shop scheduling problem with sequence-dependent setup times that incorporates heterogeneous machine operator qualifications by taking account of machine- and operator-dependent processing times. We analyze two objective functions, minimizing the makespan and minimizing the total tardiness, and present exact and heuristic decomposition-based solution approaches. These approaches divide the scheduling problem into a vehicle routing problem with precedence constraints and an operator assignment problem, and connect these problems via logic inequalities. We assess the quality of our solution methods in an extensive computational study that is based on randomly generated as well as real-world problem instances.

32 citations


Journal ArticleDOI
TL;DR: The concept of recoverable robustness is used to obtain a recoverable robust solution to the stand allocation problem, i.e., a solution that can be recovered by limited means for the included scenarios; the solutions obtained outperform those of a traditional robust approach.
Abstract: This paper presents an innovative approach to the tactical planning of aircraft remote and contact-stand allocation at airports. We use the concept of recoverable robustness to obtain a recoverable robust solution to the stand allocation problem, a solution that can be recovered by limited means for the included scenarios. Four objective functions are discussed and tested to assess the efficiency of a stand allocation plan, namely the minimization of passengers’ walking distance, the minimization of tows, the maximization of the number of passengers allocated to contact-stands, and the maximization of the potential airport commercial revenue. The inclusion of an airport commercial revenue metric in the stand allocation optimization model, and the comparison of its performance to the other objectives mentioned above, is another novelty of this work. The research was developed in collaboration with the Guarulhos International Airport of Sao Paulo, for which the recoverable robust approach was tested on 6 days of operations. We demonstrate that the solutions obtained with the proposed approach outperform the solutions of a traditional robust approach. In addition, a discussion of the trade-off between the different objectives is provided.

27 citations


Journal ArticleDOI
TL;DR: This work presents alternative approaches to integrate various modelling features and to optimize various performance indicators; the approaches are based on the resolution of mixed-integer linear programs via dedicated solvers, and the results show different trade-off solutions when prioritizing different indicators.
Abstract: This paper deals with the problem of efficiently scheduling take-off and landing operations at a busy terminal manoeuvring area (TMA). This problem is particularly challenging, since the TMAs are becoming saturated due to the continuous growth of traffic demand and the limited available infrastructure capacity. The mathematical formulation of the problem requires taking into account several features simultaneously: the trajectory of each aircraft should be accurately predicted in each TMA resource, the safety rules between consecutive aircraft need to be modelled with high precision, the aircraft timing and ordering decisions have to be taken in a short time by optimizing performance indicators of practical interest, including the minimization of aircraft delays, travel times and fuel consumption. This work presents alternative approaches to integrate various modelling features and to optimize various performance indicators. The approaches are based on the resolution of mixed-integer linear programs via dedicated solvers. Computational experiments are performed on real-world data from Milano Malpensa in case of multiple delayed aircraft. The results obtained for the proposed approaches show different trade-off solutions when prioritizing different indicators.

26 citations


Journal ArticleDOI
TL;DR: The proposed classification yields for the first time a unified view of scarce production factors and helps to develop a new model formulation generalizing and extending the currently used approaches that are specific for some settings.
Abstract: Typical simultaneous lotsizing and scheduling models consider the limited capacity of the production system by respecting a maximum time the respective machines or production lines can be available. Further limitations of the production quantities can arise by the scarce availability of, e.g., setup tools, setup operators or raw materials which thus cannot be neglected in optimization models. In the literature on simultaneous lotsizing and scheduling, these production factors are called “secondary resources”. This paper provides a structured overview of the literature on simultaneous lotsizing and scheduling involving secondary resources. The proposed classification yields for the first time a unified view of scarce production factors. The insights about different types of secondary resources help to develop a new model formulation generalizing and extending the currently used approaches that are specific for some settings. Some illustrative examples demonstrate the functional principle and flexibility of this new formulation which can thus be used for a wide range of applications.

24 citations


Journal ArticleDOI
TL;DR: Computational results indicate that the proactive project schedules with composite robustness not only can effectively protect the payment plan from disruptions through allocating appropriate time buffers, but also can achieve a remarkable performance with respect to the project NPV.
Abstract: This study investigates the robust resource-constrained max-NPV project problem with stochastic activity duration. First, the project net present value (NPV) and the expected penalty cost are proposed to measure quality robustness and solution robustness from the perspective of discounted cash flows, respectively. Then, a composite robust scheduling model is proposed in the presence of activity duration variability and a two-stage algorithm that integrates simulated annealing and tabu search is developed to deal with the problem. Finally, an extensive computational experiment demonstrates the superiority of the combination between quality robustness and solution robustness as well as the effectiveness of the proposed two-stage algorithm for generating project schedules compared with three other algorithms, namely simulated annealing, tabu search, and multi-start iterative improvement method. Computational results indicate that the proactive project schedules with composite robustness not only can effectively protect the payment plan from disruptions through allocating appropriate time buffers, but also can achieve a remarkable performance with respect to the project NPV.
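The quality-robustness measure above is built on discounted cash flows. As a minimal illustration of the underlying computation only (not the paper's model; the cash-flow events and discount rate below are hypothetical), the NPV of a schedule's payment events can be sketched as:

```python
import math

def project_npv(cash_flows, discount_rate):
    """Discounted value of (time, amount) cash-flow events.

    Continuous discounting: a payment c at time t contributes c * exp(-r * t),
    so delaying a positive payment reduces the project NPV.
    """
    return sum(c * math.exp(-discount_rate * t) for t, c in cash_flows)

# Hypothetical example: two progress payments and a final payment,
# times in periods, rate per period.
flows = [(2, 100.0), (5, 100.0), (8, 300.0)]
print(round(project_npv(flows, 0.01), 2))
```

This also shows why time buffers trade off against NPV: pushing activities (and their payments) later protects the plan from disruptions but discounts the cash flows more heavily.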

18 citations


Journal ArticleDOI
TL;DR: An optimisation model using integer linear programming is developed to determine the optimal schedule considering several constraints such as the availability of vessels and planning delays; a matheuristic hybridising a heuristic approach and an exact method is also proposed to find near-optimal solutions for a test set of problems.
Abstract: An optimisation model is proposed for scheduling the decommissioning of an offshore wind farm in order to minimise the total cost which is comprised of jack-up vessel, barge (transfer) vessel, inventory, processing and on-land transportation costs. This paper also presents a comprehensive review of the strategic issues relating to the decommissioning process and of scheduling models that have been applied to offshore wind farms. A mathematical model using integer linear programming is developed to determine the optimal schedule considering several constraints such as the availability of vessels and planning delays. As the decommissioning problem is challenging to solve, a matheuristic approach based on the hybridisation of a heuristic approach and an exact method is also proposed to find near optimal solutions for a test set of problems. A set of computational experiments has been carried out to assess the proposed approach.

15 citations


Journal ArticleDOI
TL;DR: This paper develops a lightly robust interactive multiobjective optimization method, LiRoMo, to support a decision maker to find a most preferred lightly robust efficient solution with a good balance between robustness and the objective function values in the most typical scenario.
Abstract: As an emerging research field, multiobjective robust optimization employs minmax robustness as the most commonly used concept. Light robustness is a concept in which a parameter, tolerable degradations, can be used to control the loss in the objective function values in the most typical scenario for gaining in robustness. In this paper, we develop a lightly robust interactive multiobjective optimization method, LiRoMo, to support a decision maker to find a most preferred lightly robust efficient solution with a good balance between robustness and the objective function values in the most typical scenario. In LiRoMo, we formulate a lightly robust subproblem utilizing an achievement scalarizing function which involves a reference point specified by the decision maker. With this subproblem, we compute lightly robust efficient solutions with respect to the decision maker’s preferences. With LiRoMo, we support the decision maker in understanding the lightly robust efficient solutions with an augmented value path visualization. We use two measures ‘price to be paid for robustness’ and ‘gain in robustness’ to support the decision maker in considering the trade-offs between robustness and quality. As an example to illustrate the advantages of the method, we formulate and solve a simple investment portfolio optimization problem.

15 citations


Journal ArticleDOI
TL;DR: This paper formalizes the flexible layout design problem (FLDP) as a mixed-integer linear program, develops a decomposition-based solution approach that can optimally solve small- to mid-sized instances, and transforms this approach into a matheuristic that generates high-quality solutions in acceptable time for large-sized instances.
Abstract: The increasing vehicle heterogeneity is pushing the widespread mixed-model assembly line to its limit. The paced, serial design is incapable of coping with the diversity in workloads and task requirements. As an alternative, the automotive industry has started to introduce flexible layouts for segments of the assembly. In flexible layouts, the stations are no longer arranged serially and no longer linked by a paced transportation system but by automated guided vehicles. This paper investigates the initial configuration of such systems. The flexible layout design problem (FLDP) is the problem of designing a flexible layout for a segment of the assembly of heterogeneous vehicles. It comprises an integrated station formation and station location problem. Moreover, the FLDP anticipates the operational flow allocation of the automated guided vehicles. We formalize the FLDP in a mixed-integer linear program and develop a decomposition-based solution approach that can optimally solve small- to mid-sized instances. In addition, we transform this solution approach to a matheuristic that generates high-quality solutions in acceptable time for large-sized instances. We compare the efficiency of flexible layouts to mixed-model assembly lines and quantify the benefits of flexible layouts which increase with vehicle heterogeneity.

15 citations


Journal ArticleDOI
TL;DR: Turkey’s organ transplantation network can be improved by re-clustering; a simulation model is developed to mimic the uncertain environment realistically while being able to model the components of hierarchical systems, and results are evaluated using a variety of performance measures.
Abstract: Logistics is one of the key elements of organ transplantation operations. In this study, maximizing the potential compatible donor–recipient matches within the cold ischemia time bounds (duration that an organ can survive without blood supply) is the main problem that is addressed. While addressing this problem, the effects of clustering structures on the potential organ matches are investigated. We analyze Turkey’s organ transplantation logistics structure based on its dynamics and provide new mathematical models for maximizing potential-weighted intra-regional organ transplantation flow via evaluating different types of transportation modes while meeting the specified time bounds. This approach considers only maximizing the potential flow within a single time bound, so that it may not perform effectively for every organ type. To remedy this situation, another mathematical model that maximizes the potential flow of multiple organ types has also been developed. Additionally, in order to evaluate the performance of our results, using the outputs of the deterministic mathematical models, we developed a simulation model to mimic the uncertain environment realistically while being able to model the components of hierarchical systems. Extensive computational analysis using a variety of performance measures has revealed that Turkey’s organ transplantation network can be improved by re-clustering.

13 citations


Journal ArticleDOI
TL;DR: A generic model is presented that outlines what is captured by the test data itself and what is left to be estimated by the user, together with how data are generated to capture the considerable variety of container characteristics found in major terminals.
Abstract: We present a test data generator that can be used for simulating processes of cranes handling containers. The concepts originate from container storage areas at seaports, but the generator can also be used for other applications, particularly for train terminals. A key aspect is that one or multiple cranes handle containers, that is, they store containers, receiving the containers in a designated handover area; retrieve containers, handing the containers over in the handover area; or reshuffle containers. We present a generic model and outline what is captured by the test data itself and what is left to be estimated by the user. Furthermore, we detail how data are generated to capture the considerable variety of container characteristics, which can be found in major terminals. Finally, we present examples to illustrate the variety of research projects supported by our test data generator.

12 citations


Journal ArticleDOI
TL;DR: It is proved that pHCVRP is NP-hard and therefore only very small instances can be solved to optimality using a MIP solver and two heuristics based on ant colony system (ACS) and discrete particle swarm optimization (DPSO) are developed to obtain solutions for realistic instance sizes.
Abstract: Given a network with n nodes, the p-hub center problem locates p hubs and allocates the remaining non-hub nodes to the hubs in such a way that the maximum distance (or time) between all pairs of nodes is minimized. Commonly, it is assumed that a vehicle is available to operate between each demand center and hub. Thus traditional p-hub center models assume that vehicles do not visit more than one non-hub node. However, in many-to-many distribution systems, there are some cases where nodes do not have enough demand to justify direct connections between the non-hub nodes and the hubs. This results in unnecessarily increasing the total number of vehicles on the network. Therefore, the optimal hub network design ought to include location-allocation and routing decisions simultaneously to form the routes among the nodes allocated to the same hubs. In this paper, through the observations from real-life hub networks, we introduce the p-hub center and routing network design problem (pHCVRP) and propose a mixed integer programming (MIP) formulation to model this problem formally. The aim is to locate p hubs, allocate demand centers to the hubs and determine the routes of vehicles for each hub such that the maximum travel time between all origin-destination pairs is minimized. We prove that pHCVRP is NP-hard and therefore only very small instances can be solved to optimality using a MIP solver. Hence, we develop two heuristics based on ant colony system (ACS) and discrete particle swarm optimization (DPSO) to obtain solutions for realistic instance sizes. Our design of the DPSO is quite different to the standard DPSO methods. In our DPSO, we combine concepts from simulated annealing (SA) and ACS to update the particles. We also use iterated local search (ILS) as a baseline algorithm to observe the improvements from a pure local search through more complex algorithms. 
We test the performance of the heuristics that we develop on the Turkish network and Australia Post data set and compare the performance of these methods.
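The min-max objective described above can be illustrated for the pure location-allocation part of the problem (without the routing extension the paper adds). This is a sketch of the classic single-allocation p-hub center cost, with a hypothetical distance matrix and the usual inter-hub discount factor:

```python
from itertools import combinations

def p_hub_center_cost(dist, alloc, alpha=1.0):
    """Worst origin-destination travel time in a single-allocation hub network.

    dist[i][j]: symmetric travel times; alloc[i]: hub assigned to node i
    (hubs are allocated to themselves); alpha: inter-hub discount factor.
    Each O-D pair travels i -> hub(i) -> hub(j) -> j, and the objective
    is the maximum of these path times over all pairs.
    """
    n = len(dist)
    return max(
        dist[i][alloc[i]] + alpha * dist[alloc[i]][alloc[j]] + dist[alloc[j]][j]
        for i, j in combinations(range(n), 2)
    )

# Hypothetical 4-node instance: nodes 0 and 2 are hubs,
# node 1 is allocated to hub 0 and node 3 to hub 2.
dist = [
    [0, 3, 8, 9],
    [3, 0, 7, 6],
    [8, 7, 0, 2],
    [9, 6, 2, 0],
]
print(p_hub_center_cost(dist, [0, 0, 2, 2], alpha=0.75))
```

The paper's pHCVRP additionally replaces the direct spoke-hub legs with vehicle routes among the nodes allocated to the same hub, which this sketch does not model.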

Journal ArticleDOI
TL;DR: The results of a comparison of the DT and RF approach with two priority dispatching rules, the original CP solutions and tight lower bounds retrieved from a strengthened mixed-integer programming (MIP) formulation show that the proposed machine learning approach performs well in most instance sets for the makespan objective and in all sets for the total tardiness objective.
Abstract: In proposing a machine learning approach for a flow shop scheduling problem with alternative resources, sequence-dependent setup times, and blocking, this paper seeks to generate a tree-based priority rule in terms of a well-performing decision tree (DT) for dispatching jobs. Furthermore, generating a generic DT and random forest (RF) that yield competitive results for instance scenarios that structurally differ from the training instances was another goal of our research. The proposed DT relies on high-quality solutions obtained using a constraint programming (CP) formulation. Novel aspects include a unified representation of job sequencing and machine assignment decisions, as well as the generation of RFs to counteract overfitting behaviour. To show the performance of the proposed approaches, different instance scenarios for two objectives (makespan and total tardiness minimisation) were implemented, based on randomised problem data. The background of this approach is a real-world physical system of an industrial partner that represents a typical shop floor for many production processes, such as furniture and window construction. The results of a comparison of the DT and RF approaches with two priority dispatching rules, the original CP solutions, and tight lower bounds retrieved from a strengthened mixed-integer programming (MIP) formulation show that the proposed machine learning approach performs well in most instance sets for the makespan objective and in all sets for the total tardiness objective.
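The abstract does not name the two benchmark priority dispatching rules; shortest processing time (SPT) and earliest due date (EDD) are classic examples of the kind of rule a learned decision tree competes with, shown here purely as an illustrative assumption:

```python
def dispatch_order(jobs, rule):
    """Sequence jobs by a classic priority dispatching rule.

    jobs: list of (name, processing_time, due_date) tuples.
    rule: 'SPT' (shortest processing time first, a common makespan/flow-time
    heuristic) or 'EDD' (earliest due date first, a common tardiness heuristic).
    These rules are illustrative, not the ones benchmarked in the paper.
    """
    key = {"SPT": lambda j: j[1], "EDD": lambda j: j[2]}[rule]
    return [j[0] for j in sorted(jobs, key=key)]

jobs = [("A", 5, 10), ("B", 2, 8), ("C", 4, 6)]
print(dispatch_order(jobs, "SPT"))
print(dispatch_order(jobs, "EDD"))
```

A tree-based priority rule generalizes this idea: instead of a single fixed attribute, the learned DT maps several job and shop attributes to a dispatching decision.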

Journal ArticleDOI
TL;DR: This paper designs and analyzes a framework for the efficient storage and retrieval of a large number of storage items based on a multi-agent routing algorithm in a simulation-based case study at a major German airport.
Abstract: Grid-based storage systems consist of many adjacent square cells arranged in a rectangular grid. Cells are either empty or occupied by a storage item. Items are stored on conveyors and are movable simultaneously and independently into the four cardinal directions. This technology allows for very dense storage. Previous research on grid-based storages has mainly focused on retrieval performance analysis of a single storage item. In this paper, we contribute a framework for the efficient storage and retrieval of a large number of storage items based on a multi-agent routing algorithm. We evaluate the framework using different storage and retrieval strategies in a simulation-based case study, in which we design and analyze a grid-based early baggage storage system at a major German airport.

Journal ArticleDOI
TL;DR: It is proved that the problem of finding a path which satisfies two bounds, one for each criterion, is NP-complete, even in the acyclic case.
Abstract: We study a bi-criteria path problem on a directed multigraph with cycles, where each arc is associated with two parameters. The first is the survival probability of moving along the arc, and the second is the length of the arc. We evaluate the quality of a path by two independent criteria. The first is to maximize the survival probability along the entire path, which is the product of the arc probabilities, and the second is to minimize the total path length, which is the sum of the arc lengths. We prove that the problem of finding a path which satisfies two bounds, one for each criterion, is NP-complete, even in the acyclic case. We further develop approximation algorithms for the optimization versions of the studied problem. One algorithm is based on approximate computing of logarithms of arc probabilities, and the other two are fully polynomial time approximation schemes (FPTASes). One FPTAS is based on scaling and rounding of the input, while the other FPTAS is derived via the method of K-approximation sets and functions, introduced by Halman et al. (Math Oper Res 34:674–685, 2009).
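The first algorithm mentioned rests on a standard transformation: maximizing the product of arc survival probabilities is equivalent to minimizing the sum of their negative logarithms, a nonnegative shortest-path problem. A minimal single-criterion sketch (ignoring the length criterion and the approximation of the logarithms that the paper analyzes):

```python
import heapq
import math

def most_reliable_path(n, arcs, s, t):
    """Maximum survival probability from s to t via Dijkstra on -log(p) weights.

    arcs: list of (u, v, p) with survival probability 0 < p <= 1.
    Since -log(p) >= 0, Dijkstra applies; exp(-shortest_distance) recovers
    the product of probabilities along the best path.
    """
    adj = [[] for _ in range(n)]
    for u, v, p in arcs:
        adj[u].append((v, -math.log(p)))
    dist = [math.inf] * n
    dist[s] = 0.0
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return math.exp(-dist[t]) if dist[t] < math.inf else 0.0
```

In the bicriteria problem this transformation alone is not enough, since the length bound must be respected simultaneously; that is what makes the bounded problem NP-complete and motivates the FPTASes.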

Journal ArticleDOI
TL;DR: A model in which the prepositioning strategy developed by a major aid agency or a local government considers sharing resources with other aid agencies is developed, and a heuristic approach for solving the uncapacitated deterministic version of the proposed model is provided.
Abstract: Disaster responses are usually joint efforts between agencies of different sizes and specialties. Improving disaster response can be achieved by prepositioning relief items in the appropriate amount and at the appropriate locations. In this paper, we develop a multi-agency prepositioning model under uncertainty. In particular, we develop a model in which the prepositioning strategy developed by a major aid agency or a local government considers sharing resources with other aid agencies. The proposed model considers multiple relief item types, storage capacity, budgetary and equity constraints while integrating supplier selection, inventory and facility location decisions. Uncertainty is modeled using robust optimization. We provide a deterministic model as well as its robust counterpart where demand and link disruptions are considered uncertain. In addition, a heuristic approach for solving the uncapacitated deterministic version of the proposed model is provided. In order to evaluate the proposed model and heuristic, two computational experiments are presented. In the first experiment, we assess the quality of the robust solutions by simulating a number of realizations. In the second experiment, we test the performance of the heuristic compared to the optimal policy.

Journal ArticleDOI
TL;DR: This paper presents a simple myopic budget allocation algorithm for multi-objective problems, which allocates only one simulation sample to one alternative in each iteration, and proposes several variants for different settings.
Abstract: Simulation optimisation offers great opportunities in the design and optimisation of complex systems. In the presence of multiple objectives, there is usually no single solution that performs best on all objectives. Instead, there are several Pareto-optimal (efficient) solutions with different trade-offs which cannot be improved in any objective without sacrificing performance in another objective. For the case where alternatives are evaluated on multiple stochastic criteria, and the performance of an alternative can only be estimated via simulation, we consider the problem of efficiently identifying the Pareto-optimal designs out of a (small) given set of alternatives. We present a simple myopic budget allocation algorithm for multi-objective problems and propose several variants for different settings. In particular, this myopic method only allocates one simulation sample to one alternative in each iteration. This paper shows how the algorithm works in bi-objective problems under different settings. Empirical tests show that our algorithm can significantly reduce the necessary simulation budget.

Journal ArticleDOI
TL;DR: This paper investigates the positive effect of duplicating SKUs and storing them at multiple positions along the picking path, formulates the interdependent storage assignment and order sequencing problems, and introduces a decomposition heuristic.
Abstract: Trolley line picking is a special warehousing system particularly suited to fulfill high-volume demands for heavy stock keeping units (SKUs). In such a system, unit loads of SKUs are positioned along a given path passed by automated trolleys, i.e., carriers hanging from a monorail or automated guided vehicles. Once a trolley reaches a requested SKU, it automatically stops and announces the requested items on a display. This is the signal for an accompanying human picker to put the demanded items onto the trolley. In this way, picking continues until, at the end of the path, the current picking order is complete and the trolley moves onward to the shipping area. Meanwhile, the picker rushes back to meet the subsequent trolley associated with the next picking order. The picking performance of the trolley line system is mainly influenced by the picker’s unproductive walking from SKU to SKU during order processing and back to the next trolley when switching to the next order. In this paper, we investigate how the storage assignment of SKUs along the path and the order sequence influence picking performance. Specifically, we explore the positive effect of duplicating SKUs and storing them at multiple positions along the path. We formulate the interdependent storage assignment and order sequencing problems and introduce a decomposition heuristic. Our computational study investigates the solution performance of this procedure and shows that SKU duplication can considerably improve picking performance.

Journal ArticleDOI
TL;DR: This paper develops two alternative (decentral) allocation approaches and derives conditions under which they lead to optimal allocations, and shows that these alternative approaches outperform the conventional allocation rules, independent of the conditions under which they are used.
Abstract: Matching supply with demand remains a challenging task for many companies, especially when purchasing and production must be planned with sufficient lead time, demand is uncertain, overall supply may not suffice to fulfill all of the projected demands, and customers differ in their level of importance. The particular structure of sales organizations often adds another layer of complexity: These organizations often have multi-level hierarchical structures that include multiple geographic sales regions, distribution channels, customer groups, and individual customers (e.g., key accounts). In this paper, we address the problem of “allocation planning” in such sales hierarchies when customer demand is stochastic, supply is scarce, and the company’s objective is to meet individual customer groups’ service-level targets. Our first objective is to determine when conventional allocation rules lead to optimal (or at least acceptable) results and to characterize their optimality gap relative to the theoretical optimum. We find that these popular rules lead to optimal results only under very restrictive conditions and that the loss in optimality is often substantial. This result leads us to pursue our second objective: to find alternative (decentral) allocation approaches that generate acceptable performance under conditions in which the conventional allocation rules lead to poor results. We develop two alternative (decentral) allocation approaches and derive conditions under which they lead to optimal allocations. Based on numerical analyses, we show that these alternative approaches outperform the conventional allocation rules, independent of the conditions under which they are used. Our results suggest that they lead to near-optimal solutions under most conditions.
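One conventional rule examined in this literature is proportional allocation, which splits scarce supply across customer groups in proportion to their demand forecasts. A minimal single-level sketch (illustrative only; the paper's rules operate on multi-level sales hierarchies with service-level targets, which this does not capture):

```python
def proportional_allocation(supply, forecasts):
    """Allocate scarce supply to groups in proportion to their forecasts.

    If total forecast demand fits within supply, every group simply
    receives its forecast; otherwise each group gets the same fill ratio.
    """
    total = sum(forecasts)
    if total <= supply:
        return list(forecasts)  # no scarcity: everyone gets their forecast
    return [supply * f / total for f in forecasts]

print(proportional_allocation(60, [20, 40, 60]))
```

A rule like this treats all groups identically, which is exactly why it can miss differentiated service-level targets: a key account and a low-priority group receive the same fill ratio regardless of their targets.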

Journal ArticleDOI
Barış Tan1
TL;DR: It is shown that controlling the production rate optimally allows producers to respond to the fluctuations in price, cost, and demand in an effective way and maximize their profits.
Abstract: An optimal production flow control problem of a make-to-stock manufacturing firm with price, cost, and demand uncertainty is studied. The objective of the flow rate control problem is maximizing the average profit, that is, the difference between the expected revenue and the expected production, inventory holding, and backlog costs. The uncertainties in the system are captured jointly in discrete environment states. In each environment state, the price, cost, and demand take different levels. The transitions between different environment states evolve according to a time-homogeneous Markov chain. By using a continuous flow model, the optimal production policy is stated as a state-dependent hedging policy. The performance of the system where the production cost alternates between a high and a low cost level and the demand is either constant or also alternates between a high and a low level is analyzed under the double-hedging policy. According to this policy, the producer produces only when the cost is low and the surplus is between the two hedging levels. However, when the backlog is below the lower hedging level, the producer produces with the maximum capacity regardless of the cost. The effects of production cost, production capacity, demand variability, and the dependence of the demand and the cost on the performance of the system are analyzed analytically and numerically. It is shown that controlling the production rate optimally allows producers to respond to the fluctuations in price, cost, and demand in an effective way and maximize their profits.
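The double-hedging policy described above can be sketched as a simple state-dependent rate decision. This is an illustrative reading with hypothetical symbol names (z_low < z_high for the two hedging levels); the paper derives the levels themselves from the model:

```python
def double_hedging_rate(surplus, cost_is_low, z_low, z_high, max_rate, demand_rate):
    """Production rate under a double-hedging policy (illustrative sketch).

    - surplus < z_low (deep backlog): produce at full capacity
      regardless of the cost state.
    - z_low <= surplus < z_high: produce at full capacity only when
      the cost is low.
    - surplus == z_high with low cost: track demand to hold the
      hedging level.
    - Otherwise (high cost, or surplus at/above z_high): do not produce.
    """
    if surplus < z_low:
        return max_rate
    if cost_is_low and surplus < z_high:
        return max_rate
    if cost_is_low and surplus == z_high:
        return demand_rate
    return 0.0
```

The intuition matches the abstract: inventory is built up opportunistically in low-cost states, while capacity is used unconditionally only to escape a deep backlog.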

Journal ArticleDOI
TL;DR: A peer-evaluation mode to evaluate the performance of the DMUs is proposed and a cross-bargaining game approach is developed to improve the cross-directional efficiency approach even further.
Abstract: As one of the most useful performance and productivity evaluation tools, the directional distance function (DDF) has received substantial attention and research. One of the key concerns to address in DDF measurement is selecting the direction along which to measure the distance from an inefficient decision making unit (DMU) to the production frontier. The least distance approach helps the inefficient DMUs find their own most preferred directions that maximize their own efficiency scores with least effort, but some DMUs may not accept the results because of the inconsistent evaluation basis. To overcome this limitation, we propose a peer-evaluation mode to evaluate the performance of the DMUs. We give a cross-directional evaluation approach and further provide a cross-bargaining game approach. In the cross-directional evaluation approach, each inefficient DMU is evaluated using both its own preferred projection direction and the other DMUs’ most preferred projection directions. However, the resulting average cross-directional efficiencies are not Pareto-optimal, so we develop a cross-bargaining game approach to improve the cross-directional efficiency approach even further. In the cross-bargaining game, each pair of inefficient DMUs is treated as two players who will obtain a common projection direction by bargaining with each other. The use of cross-bargaining negotiated projection directions and the Pareto-optimality of the DMUs’ final average cross-bargaining-directional efficiencies make the evaluation results more acceptable to all inefficient DMUs. Finally, an empirical example of 28 international airlines is applied to illustrate the practicality and superiority of our cross-bargaining game approach.

Journal ArticleDOI
TL;DR: Deterministic model formulations that address both the periodic and cyclic α service levels, as well as the β service-level constraints, are presented, and lower bounds generated by the deterministic model are compared with the results of a rolling schedule strategy addressing a stochastic lot-sizing problem with given service-level constraints.
Abstract: We present deterministic model formulations for the capacitated lot-sizing problem, including service-level constraints that address both the periodic and cyclic α service levels, as well as the β, γ, and η service levels. We assume that service levels for individual products are given and controlled over a given reporting period (i.e., one year). These deterministic model formulations may be used when all data are deterministic and decision makers intend to exploit given service levels to minimize setup and holding costs. A further application is to provide lower bounds for capacitated lot-sizing problems with stochastic demand and given service-level constraints. In contrast to well-known stochastic or robust optimization approaches, there are new proposals in the literature that do not require scenario modeling and thus have much greater potential for use in solving (large-scale) real-world production planning problems. However, an evaluation of the quality of solutions resulting from these new approaches is difficult. For this purpose, lower bounds showing the best possible (ideal) solution should be of great help. In a computational study, we provide insight into the computational efforts associated with deterministic model formulations with service-level constraints. Finally, lower bounds generated by the deterministic model with β service-level constraints are compared with the results of a rolling schedule strategy addressing a stochastic lot-sizing problem with given β service-level constraints. The resultant difference in the objective function values (costs) defines the uncertainty gap. We demonstrate its increase with forecast inaccuracy as well as machine utilization.
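As a reference point for the service-level notions used in this abstract, the sketch below computes empirical α (share of periods ending without a stockout) and β (immediate fill rate) values from a demand trace and a production plan. Service-level definitions vary in the literature; this is one simple textbook variant, not the paper's formulation, and backorders cleared in later periods are not credited toward the fill rate of earlier ones.

```python
def service_levels(demands, productions, init_inventory=0.0):
    """Empirical alpha (fraction of periods with non-negative end-of-period
    inventory) and beta (fraction of demand served immediately from stock)
    for a given production plan; unmet demand is backlogged."""
    inv = init_inventory
    ok_periods, served = 0, 0.0
    for d, p in zip(demands, productions):
        available = max(inv + p, 0.0)  # backlog is cleared before new demand
        served += min(d, available)    # demand met immediately in this period
        inv += p - d                   # net inventory (negative = backlog)
        ok_periods += inv >= 0
    return ok_periods / len(demands), served / sum(demands)
```

For example, with demands [10, 10, 10, 10] and productions [10, 5, 15, 10], one period ends in backlog, giving α = 0.75 and β = 0.875.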

Journal ArticleDOI
TL;DR: Recursive methods combining numerical integration and Monte Carlo simulation are developed to evaluate the expected cost rate and its standard deviation, and recursive methods are given to calculate the reliability, the availability, and the interval reliability of the system.
Abstract: This paper deals with the assessment of the maintenance cost and the performance of a system under a finite planning horizon. The system is subject to two dependent causes of failure: internal degradation and sudden shocks. We assume that internal degradation follows a gamma process. When the deterioration level of the degradation process exceeds a threshold, a degradation failure occurs. Sudden shocks arrive at the system following a doubly stochastic Poisson process (DSPP). A sudden shock provokes the system failure. A condition-based maintenance (CBM) with periodic inspection times is implemented. Recursive methods combining numerical integration and Monte Carlo simulation are developed to evaluate the expected cost rate and its standard deviation. Also, recursive methods to calculate the reliability, the availability and the interval reliability of the system are given. Numerical examples are provided to illustrate the analytical results.
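The competing-risks failure mechanism described in this abstract lends itself to a simple Monte Carlo sketch. Note the simplifications relative to the paper: the doubly stochastic Poisson shock process is replaced by a homogeneous exponential clock, the gamma process is discretized into independent gamma increments, and all parameter values are illustrative assumptions.

```python
import random

def simulate_failure_time(shape_rate, scale, threshold, shock_rate,
                          dt=0.1, horizon=100.0, rng=random):
    """One sample path: degradation accumulates independent gamma
    increments (a discretized gamma process) and causes failure when it
    exceeds the threshold; an exponential clock models a fatal shock.
    Returns (failure time, cause), capped at the horizon if censored."""
    level, t = 0.0, 0.0
    shock_time = rng.expovariate(shock_rate)  # homogeneous Poisson stand-in
    while t < horizon:
        t += dt
        level += rng.gammavariate(shape_rate * dt, scale)
        if shock_time <= t:
            return shock_time, "shock"
        if level >= threshold:
            return t, "degradation"
    return horizon, "censored"

def reliability(t, n=1000, seed=1, **kw):
    """Monte Carlo estimate of the survival probability at time t."""
    rng = random.Random(seed)
    alive = sum(simulate_failure_time(rng=rng, **kw)[0] > t for _ in range(n))
    return alive / n
```

Averaging many such paths gives crude estimates of the reliability curve; the paper's recursive methods additionally deliver cost rates, availability, and interval reliability.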

Journal ArticleDOI
TL;DR: This paper is dedicated to the cafeteria problem: given a single waiter operating multiple counters for different dishes arranged along a line, find a sequence of customers, which may not overtake each other, and a service schedule for the waiter, such that the makespan is minimized.
Abstract: This paper is dedicated to the cafeteria problem: given a single waiter operating multiple counters for different dishes arranged along a line and a set of customers with given subsets of dishes they desire, find a sequence of customers, which may not overtake each other, and a service schedule for the waiter, such that the makespan is minimized. This generic problem is shown to have different real-world applications in order picking with blocking restrictions. We present different heuristic and exact solution procedures for both problem parts, i.e., customer sequencing and waiter scheduling, and systematically compare these approaches. Our computational results reveal that the largest performance gains are enabled by not strictly processing order after order. Instead, the waiter should be allowed to flexibly swap between customers waiting along the line. Such a flexible service policy considerably reduces the makespan and the total walking distance of the waiter.

Journal ArticleDOI
TL;DR: This numerical study shows that the effects of the reduction in mean or variance of the regular transportation lead time depend on whether the chance credit constraint is loose or tight, and shows that substantially extending the deterministic credit limit is less effective than having a slight increase in the probability parameter of the chancecredit constraint.
Abstract: We study a dual-mode inventory management problem of a high-value component where the customer demand and the regular transportation lead time are stochastic, and the review periods of the two modes are different. The manufacturer is subject to a chance credit constraint that bounds the working capital. To solve the resulting chance-constrained stochastic optimization problem, we develop a hybrid simulation optimization algorithm that combines the modified nested partitions method as the global search framework, a feasibility detection procedure for chance constraint verification, and a KN++ procedure as the final “cleanup” procedure to ensure solution quality. We are then able to analyze the impact of the chance credit constraint on the inventory policies and operational cost. Our numerical study shows that the effects of the reduction in mean or variance of the regular transportation lead time depend on whether the chance credit constraint is loose or tight. We show in this way that this tightness may lead to different mechanisms dominating the observed behavior. Further, we show that substantially extending the deterministic credit limit is less effective than having a slight increase in the probability parameter of the chance credit constraint.
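The feasibility-detection step for a chance constraint can be illustrated with a plain Monte Carlo check. This is a generic sketch, not the paper's procedure: the sampler interface and the normal distribution in the test below are our own illustrative assumptions.

```python
import random

def chance_constraint_satisfied(sample_working_capital, credit_limit,
                                epsilon, n=5000, seed=42):
    """Monte Carlo feasibility check for a chance credit constraint of the
    form P(working capital <= credit_limit) >= 1 - epsilon, given a sampler
    that draws one realization of the working capital of a candidate
    policy from a random-number generator."""
    rng = random.Random(seed)
    hits = sum(sample_working_capital(rng) <= credit_limit for _ in range(n))
    return hits / n >= 1.0 - epsilon
```

In practice such a check is noisy near the feasibility boundary, which is why the paper pairs it with a ranking-and-selection (KN++) cleanup step.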

Journal ArticleDOI
TL;DR: This paper proposes a production scheme for a two-step packaging system as part of a make-and-pack production process including parallel production units in all stages, and develops an approach to specifying the parameters of the scheme.
Abstract: In this paper, we propose a production scheme for a two-step packaging system as part of a make-and-pack production process including parallel production units in all stages. In the first stage of the packaging system, flavored liquids are filled into cans of different sizes which are immediately palletized in the second stage, i.e., work-in-progress inventories do not exist. However, each filling unit can feed more than one palletizer at a time. Final products can be stored in a warehouse with limited capacity. Among other elements, the proposed scheme consists of a periodic production sequence, also referred to as a cycle, for each production unit and a control strategy that keeps cycle lengths close to a target length. In addition, an approach to specifying the parameters of the scheme is developed. This approach accounts for sequence-dependent setup times, downtimes of production units, capacitated storage and uncertain demand for final products that is satisfied from stock or backlogged. We evaluate our approach by conducting computational experiments that are based on real-world and random data.

Journal ArticleDOI
TL;DR: This work defines a new Luenberger-type indicator for dynamic profit performance evaluation when a single decision-making unit is of interest, and provides a coherent and systematic way to compare the profit performance changes between the periods of time and the time intervals.
Abstract: We propose a simple and intuitive nonparametric technique to assess the profit performances of a single decision-making unit over time. The particularity of our approach lies in recognizing that technological change may be present in the profit evaluation exercise. We partition the periods of time into several time intervals, in such a way that the technology is fixed within intervals but may differ between intervals. Attractively, our approach defines a new Luenberger-type indicator for dynamic profit performance evaluation when a single decision-making unit is of interest, and provides a coherent and systematic way to compare the profit performance changes between the periods of time and the time intervals. To define the interval-level concepts, we rely on a flexible weighting linear aggregation scheme. We also show how the new indicator can be decomposed into several dimensions. We illustrate the usefulness of our methodology with the case of the Chinese low-end hotel industry in 2005–2015. Our results highlight a performance regression, which is mainly due to the technical components of the indicator decomposition.

Journal ArticleDOI
TL;DR: This work proposes a mixed integer linear programming model and three heuristic algorithms, namely iterated local search (ILS), tabu search (TS) and variable neighborhood search (VNS), to solve the time-constrained maximal covering routing problem.
Abstract: We introduce the time-constrained maximal covering routing problem (TCMCRP), as a generalization of the covering salesman problem. In this problem, we are given a central depot, a set of facilities and several customers which are located within a pre-determined coverage distance of available facilities. Each facility can supply the demand of some customers which are within its coverage radius. Starting from the depot, the goal is to maximize the total number of covered customers, by constructing a set of p length-constrained Hamiltonian cycles. We have proposed a mixed integer linear programming model and three heuristic algorithms, namely iterated local search (ILS), tabu search (TS) and variable neighborhood search (VNS), to solve the problem. Extensive computational tests on this problem and some of its variants clearly indicate the effectiveness of the developed solution methods.
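To make the coverage part of the objective concrete, here is a small illustrative sketch: a helper that determines which customers are covered by visited facilities, plus a greedy baseline. The baseline deliberately ignores the routing and tour-length constraints of the TCMCRP and is not one of the ILS/TS/VNS heuristics from the paper; coordinates and the Euclidean metric are our assumptions.

```python
from math import dist  # Euclidean distance, Python 3.8+

def covered_customers(visited, customers, radius):
    """Indices of customers within the coverage radius of some visited
    facility; facilities and customers are 2-D coordinate tuples."""
    return {i for i, c in enumerate(customers)
            if any(dist(c, f) <= radius for f in visited)}

def greedy_cover(facilities, customers, radius, max_visits):
    """Greedy baseline for the coverage objective: repeatedly visit the
    facility that covers the most not-yet-covered customers, up to a
    budget of max_visits facilities (routing is ignored in this sketch)."""
    visited, covered = [], set()
    for _ in range(max_visits):
        best = max(facilities,
                   key=lambda f: len(covered_customers([f], customers, radius)
                                     - covered))
        gain = covered_customers([best], customers, radius) - covered
        if not gain:           # no facility adds coverage: stop early
            break
        visited.append(best)
        covered |= gain
    return visited, covered
```

Embedding such coverage evaluations inside length-constrained routes is what makes the TCMCRP hard and motivates the metaheuristics studied in the paper.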

Journal ArticleDOI
TL;DR: This work generalizes to the SPDP the concept of forward time slack, which has proven a versatile tool for feasibility testing of customer or request insertions into a given (feasible) route for many VRP variants.
Abstract: The Synchronized Pickup and Delivery Problem (SPDP) consists of finding a set of minimum-cost routes servicing user-specified transportation requests from pickup to delivery locations subject to pairing and precedence, capacity, time-window, and minimum and maximum time-lag constraints. The temporal constraints of the SPDP impose a complex scheduling problem for the service times at the customer locations which makes the efficient feasibility checking of routes intricate. We present different route feasibility tests for the SPDP and compare their practical runtime on a huge number of randomly generated routes. Furthermore, we generalize to the SPDP the concept of forward time slack, which has proven a versatile tool for feasibility testing of customer or request insertions into a given (feasible) route for many VRP variants.
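The forward time slack mentioned above can be illustrated on a plain time-window route. This sketch corresponds to the classic VRPTW setting (Savelsbergh-style slack computation); the SPDP's pairing, capacity, and minimum/maximum time-lag constraints are deliberately left out, and the O(n²) loop is kept for clarity rather than efficiency.

```python
def schedule_and_slack(earliest, latest, travel):
    """Earliest-start schedule and forward time slack for one route with
    hard time windows [earliest[i], latest[i]]; travel[i] is the travel
    time from stop i to stop i+1. Returns (start_times, slacks), or None
    if the route is time-window infeasible. slack[i] is the largest delay
    of the service start at stop i that keeps the whole route feasible."""
    n = len(earliest)
    start, wait = [0.0] * n, [0.0] * n
    start[0] = earliest[0]
    for i in range(1, n):
        arrival = start[i - 1] + travel[i - 1]
        start[i] = max(arrival, earliest[i])   # wait if we arrive early
        wait[i] = start[i] - arrival
        if start[i] > latest[i]:
            return None                        # window missed: infeasible
    slack = [0.0] * n
    for i in range(n):
        # F_i = min over j >= i of (latest[j] - start[j] + waiting time
        # accumulated strictly after stop i up to stop j): waiting absorbs
        # part of a delay before it propagates downstream.
        cum_wait, best = 0.0, latest[i] - start[i]
        for j in range(i + 1, n):
            cum_wait += wait[j]
            best = min(best, cum_wait + latest[j] - start[j])
        slack[i] = best
    return start, slack
```

A constant-time insertion test then only needs the precomputed slack at the insertion point, which is what makes the concept so useful for local-search feasibility checking.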

Journal ArticleDOI
TL;DR: The great versatility of the model is shown within the context of reservoir, hybrid energy and natural gas storage systems.
Abstract: We consider a general storage system with applications to energy systems. Energy inflow, energy demand and costs interact with an environmental process. In each period, a limited amount of energy can be purchased from a supplementary source of energy at a cost per unit lower than a penalty cost for unmet demand. Sufficient conditions are found for the optimality of a zone-based release rule with critical numbers depending on the current state of the environment. One buffer is needed to build up a storage level in the usual sense, and a second one protects against possible losses due to the variability in the randomly varying costs for unmet demand. The great versatility of our model is shown within the context of reservoir, hybrid energy and natural gas storage systems.
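A zone-based rule of the kind described above can be illustrated with two critical numbers for the current environment state. The function below is a hypothetical single-period sketch: the parameter names and the buy-then-serve-then-release order of operations are our own assumptions, not the paper's optimal policy.

```python
def zone_release(level, inflow, demand, buy_limit, targets):
    """One period of a two-critical-number zone rule: buy supplementary
    energy (up to buy_limit) to lift the post-inflow level to s_buy, serve
    demand from storage, then release anything above s_release. Returns
    (new level, amount bought, unmet demand, amount released)."""
    s_buy, s_release = targets           # critical numbers for this state
    level += inflow
    bought = min(buy_limit, max(s_buy - level, 0.0))
    level += bought
    served = min(demand, level)          # unmet demand incurs the penalty cost
    unmet = demand - served
    level -= served
    release = max(level - s_release, 0.0)  # spill above the upper critical number
    level -= release
    return level, bought, unmet, release
```

In a state-dependent policy, `targets` would be chosen per environment state, with the gap between the two critical numbers hedging against the random penalty costs.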

Journal ArticleDOI
TL;DR: A mathematical model is suggested that incorporates three important problem features: a multi-period planning horizon; a hierarchical structure of the neonatal care service system (multi-flow, nested, and non-coherent); and a congestion effect in providing services at each NICU.
Abstract: Strengthening the infrastructure of the neonatal care service system is an important field of work in Korea. Motivated by the efforts of the Korean government, we address the problem of allocating care capacities to neonatal intensive care units (NICUs) to maximize the demand covered over the planning horizon. To address this problem, we suggest a mathematical model that incorporates three important problem features: (1) a multi-period planning horizon; (2) a hierarchical structure of the neonatal care service system (multi-flow, nested, and non-coherent); and (3) a congestion effect in providing services at each NICU. The model is formulated as a mixed-integer linear programming problem and is applied to the design of the neonatal care service system in Korea. By evaluating several scenarios that vary the total available budget, we provide insightful information for policy makers to establish their rollout plan. In particular, we evaluate three alternatives to improve the quality of neonatal care services. The first two alternatives are related to increasing care capacities, and the third option is related to limiting inappropriate inflow to NICUs. The experimental results confirm that increasing care capacities by upgrading lower levels of NICUs to higher levels is the most effective policy among the three options. We believe that our model and findings are valuable for the development of better neonatal care service systems.