
Showing papers in "Journal of Heuristics in 2021"


Journal ArticleDOI
TL;DR: This paper focuses on the feasibility tests for insertions and the impact of a limited cross-dock capacity on the routing cost, and adapts a recently proposed matheuristic based on large neighborhood search for this problem.
Abstract: In this paper, we propose an extension of the vehicle routing problem with cross-docking that takes into account resource constraints at the cross-dock. These constraints limit the number of docks that can be used simultaneously. To solve this new problem, we adapt a recently proposed matheuristic based on large neighborhood search. In particular, we focus on the feasibility tests for insertions and compare heuristics and constraint programming strategies. Finally, computational experiments on instances adapted from the vehicle routing problem with cross-docking are reported. They give insights on the impact of a limited cross-dock capacity on the routing cost.

21 citations


Journal ArticleDOI
TL;DR: This paper presents an iterated local search (ILS) algorithm for the single machine total weighted tardiness batch scheduling problem, one of the first attempts to apply ILS to a batch scheduling problem, and provides an exact pseudo-polynomial time dynamic programming algorithm for solving this problem.
Abstract: This paper presents an iterated local search (ILS) algorithm for the single machine total weighted tardiness batch scheduling problem. To our knowledge, this is one of the first attempts to apply ILS to solve a batch scheduling problem. The proposed algorithm contains a local search procedure that explores five neighborhood structures, and we show how to implement them efficiently. Moreover, we compare the performance of our algorithm with dynamic programming-based implementations for the problem, including one from the literature and two others inspired by biased random-key genetic algorithms and ILS. We also demonstrate that finding the optimal batching for the problem given a fixed sequence of jobs is NP-hard, and provide an exact pseudo-polynomial time dynamic programming algorithm for solving such a problem. Extensive computational experiments were conducted on newly proposed benchmark instances, and the results indicate that our algorithm yields highly competitive results when compared to other strategies. Finally, it was also observed that the methods that rely on dynamic programming tend to be time-consuming, even for small instances.
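As a concrete illustration of the objective above, here is a minimal sketch of total weighted tardiness under serial batching, where jobs in a batch are processed consecutively and all complete when the batch does. The function name, data layout, and optional per-batch setup time are illustrative assumptions, not the paper's code:

```python
def total_weighted_tardiness(batches, setup=0.0):
    """Total weighted tardiness on a single serial-batching machine.
    batches: list of batches; each job is a (p, w, d) tuple of
    processing time, weight, and due date. Every job in a batch
    completes at the batch's completion time."""
    t, total = 0.0, 0.0
    for batch in batches:
        t += setup + sum(p for p, _, _ in batch)  # batch completion time
        for _, w, d in batch:
            total += w * max(0.0, t - d)          # tardiness penalty
    return total
```

For example, two batches [(2, 1, 5)] and [(3, 2, 4)] complete at times 2 and 5, so only the second job is tardy, by 1 unit at weight 2.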

18 citations


Journal ArticleDOI
TL;DR: The R-NSGA-II method is modified using the recently proposed Karush–Kuhn–Tucker proximity measure (KKTPM) and achievement scalarization function (ASF) metrics, instead of the Euclidean distance metric, and a new technique for calculating the KKTPM measure of a solution in the presence of an aspiration point is developed.
Abstract: In a preference-based multi-objective optimization task, the goal is to find a subset of the Pareto-optimal set close to a supplied set of aspiration points. The reference point based non-dominated sorting genetic algorithm (R-NSGA-II) was proposed for such problem-solving tasks. R-NSGA-II aims at finding Pareto-optimal points that are close, in the sense of Euclidean distance in the objective space, to the supplied aspiration points, instead of finding the entire Pareto-optimal set. In this paper, the R-NSGA-II method is modified using the recently proposed Karush–Kuhn–Tucker proximity measure (KKTPM) and achievement scalarization function (ASF) metrics, instead of the Euclidean distance metric. While a plain distance measure may not produce the desired solutions, the KKTPM-based distance measure yields theoretically convergent local or global Pareto solutions satisfying the KKT optimality conditions, and the ASF measure allows Pareto-compliant solutions to be found. A new technique for calculating the KKTPM measure of a solution in the presence of an aspiration point is developed in this paper. The proposed modified R-NSGA-II methods are able to solve problems with as many as 10 objectives as effectively as, or better than, the existing R-NSGA-II algorithm.
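For reference, the ASF metric mentioned above scalarizes an objective vector relative to an aspiration point. A minimal sketch of the standard augmented ASF (the interface and the augmentation constant rho are illustrative assumptions, not taken from the paper):

```python
def asf(f, z, w, rho=1e-6):
    """Augmented achievement scalarization function: distance-like
    measure of objective vector f from aspiration point z with
    weights w. The small rho term steers away from weakly
    Pareto-optimal solutions."""
    terms = [(fi - zi) / wi for fi, zi, wi in zip(f, z, w)]
    return max(terms) + rho * sum(terms)
```

Minimizing asf over a population pulls solutions toward the aspiration point z while remaining Pareto-compliant.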

14 citations


Journal ArticleDOI
TL;DR: A matheuristic approach is proposed that uses an ILP formulation based on positional completion time variables and exploits the structural properties of the problem, showing very competitive performance on instances with up to 500 jobs.
Abstract: We consider the two-machine total completion time flow shop problem with additional requirements. These requirements are the so-called no-idle constraint, where the machines must operate with no inserted idle time, and the so-called no-wait constraint, where jobs cannot wait between the end of an operation and the start of the following one. We propose a matheuristic approach that uses an ILP formulation based on positional completion time variables and exploits the structural properties of the problem. The proposed approach shows very competitive performance on instances with up to 500 jobs.

13 citations


Journal ArticleDOI
TL;DR: The matheuristic is a contribution to the literature on heuristic approaches to solving facility location under uncertainties, can be used to further study the particular variant of the facility location problem, and can also support humanitarian logisticians in their planning of pre-positioning strategies.
Abstract: In this paper, we describe a matheuristic to solve the stochastic facility location problem which determines the location and size of storage facilities, the quantities of various types of supplies stored in each facility, and the assignment of demand locations to the open facilities, minimizing unmet demand and response time in lexicographic order. We assume uncertainties about demands, inventory spoilage, and transportation network availability. A good example where such a formulation makes sense is the problem of pre-positioning emergency supplies, which aims to increase disaster preparedness by making the relief items readily available to people in need. The matheuristic employs iterated local search techniques to look for good location and inventory configurations, and uses CPLEX to optimize the assignments. Numerical experiments on a number of case studies and random instances for the pre-positioning problem demonstrate the effectiveness and efficiency of the matheuristic, which is shown to be particularly useful for tackling larger instances that are intractable for exact solvers. The matheuristic is therefore a contribution to the literature on heuristic approaches to solving facility location under uncertainty, can be used to further study this particular variant of the facility location problem, and can also support humanitarian logisticians in their planning of pre-positioning strategies.

12 citations


Journal ArticleDOI
TL;DR: This paper addresses the BI-TTP, a bi-objective version of the TTP, where the goal is to minimize the overall traveling time and to maximize the profit of the collected items and provides a comprehensive study showing the influence of each parameter on the performance.
Abstract: In this paper, we propose a method to solve a bi-objective variant of the well-studied traveling thief problem (TTP). The TTP is a multi-component problem that combines two classic combinatorial problems: traveling salesman problem and knapsack problem. We address the BI-TTP, a bi-objective version of the TTP, where the goal is to minimize the overall traveling time and to maximize the profit of the collected items. Our proposed method is based on a biased-random key genetic algorithm with customizations addressing problem-specific characteristics. We incorporate domain knowledge through a combination of near-optimal solutions of each subproblem in the initial population and use a custom repair operator to avoid the evaluation of infeasible solutions. The bi-objective aspect of the problem is addressed through an elite population extracted based on the non-dominated rank and crowding distance. Furthermore, we provide a comprehensive study showing the influence of each parameter on the performance. Finally, we discuss the results of the BI-TTP competitions at EMO-2019 and GECCO-2019 conferences where our method has won first and second places, respectively, thus proving its ability to find high-quality solutions consistently.
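The elite-population step above relies on Pareto dominance. A minimal illustration of extracting the non-dominated set for a bi-objective minimization (e.g. traveling time and negated profit); this is a sketch of the concept, not the authors' implementation:

```python
def non_dominated(points):
    """Return the points not dominated by any other point, assuming
    minimization in every objective. A point q dominates p if q is no
    worse in all objectives and strictly better in at least one."""
    def dom(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in points if not any(dom(q, p) for q in points)]
```

Here (2, 2) would be filtered out because (1, 2) dominates it, while (1, 2) and (2, 1) are mutually non-dominated.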

12 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigated Mixed Integer Linear Programming (MILP) matheuristics for nuclear power plant maintenance planning, to tackle large size instances used in operations with a time scope of 5 years, and few restrictions with time window constraints for the latest maintenance operations.
Abstract: Planning the maintenance of nuclear power plants is a complex optimization problem, involving a joint optimization of maintenance dates, fuel constraints and power production decisions. This paper investigates Mixed Integer Linear Programming (MILP) matheuristics for this problem, to tackle large instances used in operations with a time scope of 5 years and few restrictions from time window constraints on the latest maintenance operations. Several constructive matheuristics and a Variable Neighborhood Descent local search are designed. The matheuristics are shown to be highly effective for medium and large instances. The matheuristics also give insights into the design of MILP formulations and neighborhoods for the problem. Contributions for the operational applications are also discussed. It is shown that the restriction of time windows, which was used to ease computations, induces large over-costs, and that this restriction is no longer required given the capability of matheuristics or local searches to solve instances of such size. Our matheuristics can be extended to a bi-objective optimization extension with stability costs, for the monthly re-optimization of the maintenance planning in the real-life application.

11 citations


Journal ArticleDOI
TL;DR: A Quadratic Unconstrained Binary Optimization (QUBO) modeling paradigm that fits naturally with the parameters and constraints required for RNA folding prediction is presented.
Abstract: Ribonucleic acid (RNA) molecules play informational, structural, and metabolic roles in all living cells. RNAs are chains of nucleotides containing bases {A, C, G, U} that interact via base pairings to determine higher order structure and functionality. The RNA folding problem is to predict one or more secondary RNA structures from a given primary sequence of bases. From a mathematical modeling perspective, solutions to the RNA folding problem come from minimizing the thermodynamic free energy of a structure by selecting which bases will be paired, subject to a set of constraints. Here we report on a Quadratic Unconstrained Binary Optimization (QUBO) modeling paradigm that fits naturally with the parameters and constraints required for RNA folding prediction. Three QUBO models are presented along with a hybrid metaheuristic algorithm. Extensive testing results show a strong positive correlation with benchmark results.
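A QUBO instance is fully specified by a matrix Q, with objective x^T Q x over binary vectors x. A tiny self-contained sketch to make the model concrete (brute-force enumeration is shown only for illustration; realistic folding instances require heuristics such as the hybrid metaheuristic above):

```python
from itertools import product

def qubo_energy(Q, x):
    """Objective value x^T Q x for a binary vector x."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def brute_force_min(Q):
    """Exhaustive minimizer over all 2^n binary vectors; only
    feasible for tiny n, shown to make the model tangible."""
    n = len(Q)
    return min((qubo_energy(Q, x), x) for x in product((0, 1), repeat=n))
```

In a folding model, diagonal entries would encode the free-energy reward of selecting a base pair and off-diagonal entries the penalties enforcing constraints between pairs.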

9 citations


Journal ArticleDOI
TL;DR: This work investigates different Bayesian network structure learning techniques by thoroughly studying several variants of the Hybrid Multi-objective Bayesian Estimation of Distribution Algorithm (HMOBEDA), applied to the MNK Landscape combinatorial problem, showing that score-based structure learning algorithms appear to be the best choice.
Abstract: This work investigates different Bayesian network structure learning techniques by thoroughly studying several variants of the Hybrid Multi-objective Bayesian Estimation of Distribution Algorithm (HMOBEDA), applied to the MNK Landscape combinatorial problem. In the experiments, we evaluate the performance considering three different aspects: optimization abilities, robustness and learning efficiency. Results for instances of multi- and many-objective MNK-landscape show that score-based structure learning algorithms appear to be the best choice. In particular, HMOBEDA-k2 was capable of producing results comparable with the other variants in terms of convergence runtime and coverage of the final Pareto front, with the additional advantage of providing solutions that are less sensitive to noise while the variability of the corresponding Bayesian network models is reduced.

9 citations


Journal ArticleDOI
TL;DR: This work proposes a new approach based on a hybrid metaheuristic algorithm called Construct, Merge, Solve & Adapt, which is compared with four algorithms: a Hybrid algorithm based on Integer Linear Programming, a Hybrid algorithms based onInteger Nonlinear Programming, the Parallel Prioritized Genetic Solver, and a greedy algorithm called prioritized-ICPL.
Abstract: In Software Product Lines, it may be difficult or even impossible to test all the products of the family because of the large number of valid feature combinations that may exist (Ferrer et al. in: Squillero, Sim (eds) EvoApps 2017, LNCS 10200, Springer, The Netherlands, pp 3–19, 2017). Thus, we want to find a minimal subset of the product family that allows us to test all these possible combinations (pairwise). Furthermore, when testing a single product requires great effort, it is desirable to first test products composed of a set of priority features. This problem is called the Prioritized Pairwise Test Data Generation Problem. State-of-the-art algorithms based on Integer Linear Programming for this problem are fast enough for small and medium instances. However, some real instances are too large to be computed with these algorithms in a reasonable time because of the exponential growth of the number of candidate solutions. Moreover, these heuristics do not always lead to the best solutions. In this work we propose a new approach based on a hybrid metaheuristic algorithm called Construct, Merge, Solve & Adapt (CMSA). We compare this matheuristic with four algorithms: a hybrid algorithm based on Integer Linear Programming, a hybrid algorithm based on Integer Nonlinear Programming, the Parallel Prioritized Genetic Solver, and a greedy algorithm called prioritized-ICPL. The analysis reveals that CMSA is statistically significantly better in terms of quality of solutions in most of the instances and for most levels of weighted coverage, although it requires more execution time.

8 citations


Journal ArticleDOI
TL;DR: The concept of weighted Hamming distance is introduced, allowing the design of a new method called weighted proximity search, where low weights are associated with the variables whose value in the current solution is promising to change in order to find an improved solution, while high weights are assigned to variables that are expected to remain unchanged.
Abstract: Proximity search is an iterative method to solve complex mathematical programming problems. At each iteration, the objective function of the problem at hand is replaced by the Hamming distance function to a given solution, and a cutoff constraint is added to impose that any new obtained solution improves the objective function value. A mixed integer programming solver is used to find a feasible solution to this modified problem, yielding an improved solution to the original problem. This paper introduces the concept of weighted Hamming distance, which allows the design of a new method called weighted proximity search. In this new distance function, low weights are associated with the variables whose value in the current solution is promising to change in order to find an improved solution, while high weights are assigned to variables that are expected to remain unchanged. The weights help to distinguish between alternative solutions in the neighborhood of the current solution, and provide guidance to the solver when trying to locate an improved solution. Several strategies to determine weights are presented, including both static and dynamic strategies. The proposed weighted proximity search is compared with the classic proximity search on instances from three optimization problems: the p-median problem, the set covering problem, and the stochastic lot-sizing problem. The obtained results show that a suitable choice of weights allows the weighted proximity search to obtain better solutions than classic proximity search in 75% of the cases, and better solutions than those obtained by running a commercial solver with a time limit in 96% of the cases.
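The core change from classic proximity search is the distance function used as the surrogate objective. A minimal sketch of the weighted Hamming distance (the interface is an illustrative assumption; in the actual method this expression becomes the MIP objective, not a Python function):

```python
def weighted_hamming(x, x_ref, w):
    """Weighted Hamming distance of binary vector x from the incumbent
    x_ref: variables with low weight are cheap to flip (promising to
    change), high-weight variables are discouraged from changing."""
    return sum(wi for xi, ri, wi in zip(x, x_ref, w) if xi != ri)
```

With uniform weights this reduces to the classic Hamming distance of proximity search; non-uniform weights bias the solver toward flipping the promising variables first.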

Journal ArticleDOI
TL;DR: The numerical experiments show that the improved anisotropic Q-learning method can provide stable and dynamic solutions for AGV routing, and achieve 9.5% improvement in optimization efficiency compared to the Jeon Learning Method.
Abstract: Finding short and convenient routes for vehicles is an important issue in the efficient operation of Automated Guided Vehicle (AGV) systems at container terminals. This paper proposes an anisotropic Q-learning method for AGVs to find the shortest-time routes in a guide-path network of cross-lane type according to real-time vehicle states, which include current and destination positions, heading direction and the number of vehicles at the anisotropic four-direction neighboring locations. The vehicle waiting time of AGV systems is discussed and its estimation is suggested to improve the policy for selecting actions in the Q-learning method. An improved anisotropic Q-learning routing algorithm is developed with the vehicle-waiting-time-estimation based action-selection policy. The parameter settings and performance of the proposed methods are analyzed based on simulations. The numerical experiments show that the improved anisotropic Q-learning method can provide stable and dynamic solutions for AGV routing, and achieves a 9.5% improvement in optimization efficiency compared to the Jeon Learning Method (Jeon et al. in Logist Res 3(1):19–27, 2011).
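The update at the heart of any such method is the tabular Q-learning rule. A generic sketch, where in the paper's setting the state would encode vehicle position and heading and the actions the four anisotropic neighbor moves; the data layout and parameter values are illustrative assumptions, not the paper's code:

```python
def q_update(Q, s, a, reward, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning update on Q, a dict mapping each state
    to a dict of action values: Q[s][a] moves toward the reward plus
    the discounted best value of the successor state."""
    best_next = max(Q[s_next].values()) if Q.get(s_next) else 0.0
    Q[s][a] += alpha * (reward + gamma * best_next - Q[s][a])
```

The paper's contribution layers a waiting-time estimate into the action-selection policy on top of this kind of update.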

Journal ArticleDOI
TL;DR: Conflict-History Search is proposed, a dynamic and adaptive variable ordering heuristic for CSP solving that considers the temporality of these failures throughout the solving steps and empirically shows that the solving of the constraint optimization problem (COP) can also take advantage of this heuristic.
Abstract: The variable ordering heuristic is an important module in algorithms dedicated to solving Constraint Satisfaction Problems (CSP), since it impacts both the efficiency of exploring the search space and the size of the search tree. It also exploits, often implicitly, the structure of the instances. In this paper, we propose Conflict-History Search (CHS), a dynamic and adaptive variable ordering heuristic for CSP solving. It is based on search failures and considers the temporality of these failures throughout the solving steps. The exponential recency weighted average is used to estimate the evolution of the hardness of constraints throughout the search. The experimental evaluation on XCSP3 instances shows that integrating CHS into solvers based on MAC (Maintaining Arc Consistency) and BTD (Backtracking with Tree Decomposition) achieves competitive results and improvements compared to the state-of-the-art heuristics. Beyond the decision problem, we show empirically that the solving of the constraint optimization problem (COP) can also take advantage of this heuristic.
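The exponential recency weighted average mentioned above updates a constraint's hardness score so that recent conflicts weigh more than old ones. A minimal sketch (the decaying step-size schedule and its parameter values are illustrative assumptions, not the paper's exact settings):

```python
def erwa_update(score, reward, step, alpha0=0.4, decay=1e-6, alpha_min=0.06):
    """Exponential recency weighted average: blend the old score with
    the new reward using a step size alpha that slowly decays over
    the search, so early estimates are revised quickly and later
    ones become more stable."""
    alpha = max(alpha_min, alpha0 - step * decay)
    return (1 - alpha) * score + alpha * reward
```

Variables would then be ordered by the aggregated scores of the constraints they appear in.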

Journal ArticleDOI
TL;DR: The problem of maximizing the lifetime of a wireless sensor network which uses video cameras to monitor targets is considered and a column generation algorithm based on these properties is proposed for solving three lifetime maximization problems.
Abstract: The problem of maximizing the lifetime of a wireless sensor network which uses video cameras to monitor targets is considered. These video cameras can rotate and have a fixed monitoring angle. For a target to be covered by a video camera mounted on a sensor node, three conditions must be satisfied. First, the distance between the sensor and the target should be less than the sensing range. Second, the direction of the camera sensor should face the target, and third, the focus of the video camera should be such that the picture of the target is sharp. Basic elements of optics are recalled, then some properties are shown to efficiently address the problem of setting the direction and focal distance of a video camera for target coverage. Then, a column generation algorithm based on these properties is proposed for solving three lifetime maximization problems. Targets are considered as points in the first problem, they are considered as discs in the second problem (which allows for considering occlusion), and in the last problem the focal distance is also dealt with, to take image sharpness into account. All of these problems are compared on a testbed of 180 instances and numerical results show the effectiveness of the proposed approach.
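The geometric part of the coverage test (the first two conditions: range and monitoring angle) can be sketched directly in 2-D. This illustration ignores the third, focus-related condition and is not the paper's code:

```python
import math

def covers(sensor, target, direction, half_angle, sensing_range):
    """True if a camera at `sensor`, aimed at angle `direction`
    (radians) with monitoring half-angle `half_angle`, covers a
    point `target` within `sensing_range`."""
    dx, dy = target[0] - sensor[0], target[1] - sensor[1]
    if math.hypot(dx, dy) > sensing_range:
        return False                      # condition 1: range
    bearing = math.atan2(dy, dx)
    diff = abs((bearing - direction + math.pi) % (2 * math.pi) - math.pi)
    return diff <= half_angle             # condition 2: facing angle
```

The angle difference is wrapped into [-pi, pi] so a camera aimed near the +/- pi boundary is handled correctly.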

Journal ArticleDOI
TL;DR: A general enhancement of the Benders’ decomposition algorithm can be achieved through the improved use of large neighbourhood search heuristics within mixed-integer programming solvers.
Abstract: A general enhancement of the Benders’ decomposition (BD) algorithm can be achieved through the improved use of large neighbourhood search heuristics within mixed-integer programming solvers. While mixed-integer programming solvers are endowed with an array of large neighbourhood search heuristics, few, if any, have been designed for BD. Further, typically the use of large neighbourhood search heuristics is limited to finding solutions to the BD master problem. Given the lack of general frameworks for BD, only ad hoc approaches have been developed to enhance the ability of BD to find high quality primal feasible solutions through the use of large neighbourhood search heuristics. The general BD framework of SCIP has been extended with a trust region based heuristic and a general enhancement for large neighbourhood search heuristics. The general enhancement employs BD to solve the auxiliary problems of all large neighbourhood search heuristics to improve the quality of the identified solutions. The computational results demonstrate that the trust region heuristic and a general large neighbourhood search enhancement technique accelerate the improvement in the primal bound when applying BD.

Journal ArticleDOI
TL;DR: A time-based CMH has been developed which solves all the difficult instances introduced by Smet et al. (Omega 46:64–73, 2014) to optimality and an automated CMH algorithm that utilizes instance-specific problem features has also been developed that produces high quality solutions over all current benchmark instances.
Abstract: The shift minimization personnel task scheduling problem is an NP-complete optimization problem that concerns the assignment of tasks to multi-skilled employees with a view to minimizing the total number of assigned employees. Recent literature indicates that hybrid methods which combine exact and heuristic techniques, such as matheuristics, are efficient with regard to generating high quality solutions. The present work employs a constructive matheuristic (CMH): a decomposition-based method where sub-problems are solved to optimality using exact techniques. The optimal solutions of the sub-problems are subsequently utilized to construct a feasible solution for the entire problem. Based on the study, a time-based CMH has been developed which, for the first time, solves all the difficult instances introduced by Smet et al. (Omega 46:64–73, 2014) to optimality. In addition, an automated CMH algorithm that utilizes instance-specific problem features has also been developed, which produces high quality solutions over all current benchmark instances.

Journal ArticleDOI
TL;DR: In this paper, a heuristic based on the linear relaxation of the problem and randomized rounding is presented and compared with state-of-the-art resolution methods either by its scaling performance or by the quality of its solutions.
Abstract: Unsplittable flow problems cover a wide range of telecommunication and transportation problems and their efficient resolution is key to a number of applications. In this work, we study algorithms that can scale up to large graphs and important numbers of commodities. We present and analyze in detail a heuristic based on the linear relaxation of the problem and randomized rounding. We provide empirical evidence that this approach is competitive with state-of-the-art resolution methods either by its scaling performance or by the quality of its solutions. We provide a variation of the heuristic which has the same approximation factor as the state-of-the-art approximation algorithm. We also derive a tighter analysis for the approximation factor of both the variation and the state-of-the-art algorithm. We introduce a new objective function for the unsplittable flow problem and discuss its differences with the classical congestion objective function. Finally, we discuss the gap in practical performance and theoretical guarantees between all the aforementioned algorithms.
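The rounding step of such a heuristic can be sketched compactly: after solving the linear relaxation, each commodity picks a single path with probability proportional to the LP flow it carries. This is a sketch of the core rounding idea only (LP solving omitted); the data layout is an illustrative assumption:

```python
import random

def randomized_rounding(paths_flow, rng=random.Random(0)):
    """Pick one path per commodity with probability proportional to
    its LP flow value. paths_flow maps each commodity to a dict
    {path: fractional flow}."""
    choice = {}
    for commodity, flows in paths_flow.items():
        total = sum(flows.values())
        r = rng.random() * total
        acc = 0.0
        for path, f in flows.items():
            acc += f
            if r <= acc:
                choice[commodity] = path
                break
    return choice
```

With all flow on one path the choice is deterministic; otherwise the expected congestion of the rounded solution matches the LP's.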

Journal ArticleDOI
TL;DR: This paper presents a matheuristic algorithm for the JIT–JSS problem, which operates by decomposing the problem into smaller sub-problems, optimizing the sub-problems and delivering the optimal schedule for the problem.
Abstract: In the just-in-time job-shop scheduling (JIT–JSS) problem every operation has a distinct due-date, and earliness and tardiness penalties. Any deviation from the due-date incurs penalties. The objective of JIT–JSS is to obtain a schedule, i.e., the completion time for performing the operations, with the smallest total (weighted) earliness and tardiness penalties. This paper presents a matheuristic algorithm for the JIT–JSS problem, which operates by decomposing the problem into smaller sub-problems, optimizing the sub-problems and delivering the optimal schedule for the problem. By solving a set of 72 benchmark instances ranging from 10 to 20 jobs and 20 to 200 operations we show that the proposed algorithm outperforms the state-of-the-art methods and the solver CPLEX, and obtains new best solutions for nearly 56% of the instances, including for 79% of the large instances with 20 jobs.
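The JIT objective above can be stated in a few lines: each operation pays a weighted penalty for finishing before or after its due date. A minimal sketch (function name and data layout are illustrative assumptions):

```python
def jit_penalty(schedule):
    """Total weighted earliness-tardiness. schedule is a list of
    (completion, due, w_early, w_tardy) tuples, one per operation;
    only deviation from the due date is penalized."""
    return sum(we * max(0, d - c) + wt * max(0, c - d)
               for c, d, we, wt in schedule)
```

An operation that completes exactly at its due date contributes zero, which is what makes inserted idle time worthwhile in JIT schedules.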

Journal ArticleDOI
TL;DR: The convex hull heuristic is a heuristic for mixed-integer programming problems with a nonlinear objective function and linear constraints; its purpose is to quickly produce feasible and often near-optimal or optimal solutions for convex and nonconvex problems.
Abstract: The convex hull heuristic is a heuristic for mixed-integer programming problems with a nonlinear objective function and linear constraints. It is a matheuristic in two ways: it is based on the mathematical programming algorithm called simplicial decomposition, or SD (von Hohenbalken in Math Program 13:49–68, 1977), and at each iteration, one solves a mixed-integer programming problem with a linear objective function and the original constraints, and a continuous problem with a nonlinear objective function and a single linear constraint. Its purpose is to quickly produce feasible and often near-optimal or optimal solutions for convex and nonconvex problems. It is usually multi-start. We have tested it on a number of hard quadratic 0–1 optimization problems and present numerical results for generalized quadratic assignment problems, cross-dock door assignment problems, quadratic assignment problems and quadratic knapsack problems. We compare solution quality and solution times with results from the literature, when possible.

Journal ArticleDOI
TL;DR: In this paper, a heuristic search algorithm based on maximum conflicts was proposed to find a weakly stable matching of maximum size for the stable marriage problem with ties and incomplete lists.
Abstract: In this paper, we propose a heuristic search algorithm based on maximum conflicts to find a weakly stable matching of maximum size for the stable marriage problem with ties and incomplete lists. The key idea of our approach is to define a heuristic function based on the information extracted from undominated blocking pairs from the men’s point of view. By choosing a man corresponding to the maximum value of the heuristic function, we aim to not only remove all the blocking pairs formed by the man but also reject as many blocking pairs as possible for an unstable matching from the women’s point of view to obtain a solution of the problem as quickly as possible. Experiments show that our algorithm is efficient in terms of both execution time and solution quality for solving the problem.
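The notion driving the heuristic above is the blocking pair. A minimal sketch of the test for one, using strict preference as required for weak stability with ties and incomplete lists (the data layout is an illustrative assumption, not the paper's):

```python
def is_blocking(m, w, match, rank_m, rank_w):
    """(m, w) is a blocking pair of `match` if each strictly prefers
    the other to their current partner; being unmatched counts as
    worst. rank_m[m] maps acceptable women to ranks (lower =
    preferred), allowing ties and incomplete lists; rank_w likewise."""
    def prefers(rank, other, current):
        if other not in rank:        # `other` is unacceptable
            return False
        if current is None:          # unmatched: any acceptable partner is better
            return True
        return rank[other] < rank.get(current, float('inf'))
    return (prefers(rank_m[m], w, match.get(m))
            and prefers(rank_w[w], m, match.get(w)))
```

A matching is weakly stable exactly when no pair passes this test; the heuristic's scoring is built on counting undominated blocking pairs.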

Journal ArticleDOI
Wayne Pullan
TL;DR: In this study an effective k-plex local search (KLS) is presented for solving this problem on a wide range of graph types and uses data structures suitable for the graph being analysed and has mechanisms for preventing search cycling and promoting search diversity.
Abstract: The maximum k-plex problem is an important, computationally complex graph-based problem. In this study an effective k-plex local search (KLS) is presented for solving this problem on a wide range of graph types. KLS uses data structures suitable for the graph being analysed and has mechanisms for preventing search cycling and promoting search diversity. State-of-the-art results were obtained on 121 dense graphs and 61 large real-life (sparse) graphs. Comparisons with three recent algorithms on the more difficult graphs show that KLS performed as well as or better than them in 93% of the 332 significant k-plex problem instances investigated, achieving either larger average k-plex sizes (including some new results) or, when these were equivalent, lower CPU requirements.
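For reference, the k-plex definition underlying the problem is easy to state and check. A minimal sketch (the adjacency representation is an illustrative assumption):

```python
def is_k_plex(adj, S, k):
    """S is a k-plex of the graph if every vertex of S is adjacent to
    at least |S| - k other vertices of S (k = 1 gives a clique).
    adj: dict mapping each vertex to its set of neighbors."""
    return all(len(adj[v] & S) >= len(S) - k for v in S)
```

Relaxing k trades cohesion for size, which is why local search must balance growing S against repairing violated degree conditions.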

Journal ArticleDOI
TL;DR: A simple multi-start hyper-heuristic approach for the many-to-many hub location-routing problem, which can be regarded as a general problem which encompasses a diverse set of problems originating from different combinations of values of its constituent parameters.
Abstract: This paper addresses a variant of the many-to-many hub location-routing problem. Given an undirected edge-weighted complete graph G = (V, E), this problem consists in finding a subset of V designated as hub nodes, partitioning all the nodes of V into cycles such that each cycle has exactly one hub node, and determining a Hamiltonian cycle on the subgraph induced by the hub nodes. The objective is to minimize the total cost resulting from all these cycles. This problem is referred to as the Many-to-Many p-Location-Hamiltonian Cycle Problem (MMpLHP) in this paper. To solve this problem, one has to deal with aspects of subset selection, grouping, and permutation. The characteristics of MMpLHP change according to the values of its constituent parameters. Hence, this problem can be regarded as a general problem which encompasses a diverse set of problems originating from different combinations of values of its constituent parameters. Such a general problem can be tackled effectively by suitably selecting and combining several different heuristics, each of which caters to a different characteristic of the problem. Keeping this in mind, we have developed a simple multi-start hyper-heuristic approach for MMpLHP. Further, we have investigated two different selection mechanisms within the proposed approach. Experimental results and their analysis clearly demonstrate the superiority of our approach over the best approaches known so far for this problem.

Journal ArticleDOI
Abstract: In this paper we present a novel approach to the dynamic pricing problem for hotel businesses. It includes disaggregation of the demand into several categories, forecasting, elastic demand simulation, and a mathematical programming model with concave quadratic objective function and linear constraints for dynamic price optimization. The approach is computationally efficient and easy to implement. In computer experiments with a hotel data set, the hotel revenue is increased by about 6% on average in comparison with the actual revenue gained in a past period, where the fixed price policy was employed, subject to an assumption that the demand can deviate from the suggested elastic model. The approach and the developed software can be a useful tool for small hotels recovering from the economic consequences of the COVID-19 pandemic.

Journal ArticleDOI
TL;DR: A matheuristic approach is proposed that iteratively defines and explores restricted regions of the global solution space with a high potential of containing good solutions, reducing the complexity of solving the stochastic model without sacrificing the quality of the solution obtained.
Abstract: We propose a solution approach for stochastic network design problems with uncertain demands. We investigate how to efficiently use reduced cost information as a means of guiding variable fixing to define a restriction that reduces the complexity of solving the stochastic model without sacrificing the quality of the solution obtained. We then propose a matheuristic approach that iteratively defines and explores restricted regions of the global solution space that have a high potential of containing good solutions. Extensive computational experiments show the effectiveness of the proposed approach in obtaining high-quality solutions, while reducing the computational effort to obtain them.
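The reduced-cost variable fixing the abstract describes can be illustrated with a standard LP-based rule. This is a generic sketch of the principle, not the paper's exact procedure: for a minimization problem, a nonbasic variable at its lower bound whose reduced cost exceeds the current optimality gap cannot take a positive value in any solution better than the incumbent, so it can be fixed to define a restricted problem.

```python
def fix_by_reduced_cost(reduced_costs, gap):
    """Return indices of variables that may be fixed at their lower bound.

    reduced_costs: reduced costs of nonbasic variables from an LP relaxation
    (minimization). gap: upper bound minus LP lower bound. Raising variable j
    from its bound increases the LP objective by at least reduced_costs[j],
    so any j with reduced_costs[j] > gap cannot improve on the incumbent.
    """
    return [j for j, rc in enumerate(reduced_costs) if rc > gap]
```

In a matheuristic loop, the surviving (unfixed) variables define the restricted region handed to the solver; shrinking or enlarging `gap` is one simple lever for controlling how aggressive the restriction is.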

Journal ArticleDOI
TL;DR: In this article, an efficient matheuristic of type Balance First, Sequence Last (BFSL) is proposed for the Reconfigurable Transfer Line Balancing Problem (RTLB) with precedence constraints, inclusion, exclusion and accessibility constraints between operations.
Abstract: The Reconfigurable Transfer Line Balancing Problem (RTLB) is considered in this paper. This problem is quite recent and motivated by the growing need for reconfigurability in the new Industry 4.0 context. The problem consists in allocating a set of operations necessary to machine a single part to different workstations placed into a serial line. Each workstation can contain multiple machines operating in parallel, and the tasks allocated to a workstation must be sequenced, since sequence-dependent setup times between operations are needed to perform tool changes. Besides, precedence constraints, inclusion, exclusion, and accessibility constraints between operations are considered. In this article we propose an efficient matheuristic of type Balance First, Sequence Last (BFSL). This method is a two-step heuristic with a constructive phase and an improvement phase. It contains several components from exact methods (linear programming, constraint generation, and dynamic programming) and metaheuristics (simulated annealing). In addition, we show that the constructive algorithm approximates the optimal solution when the setup times are bounded by the processing times, and give an approximation ratio. The obtained results show the effectiveness of the proposed approach. The matheuristic clearly outperforms a genetic algorithm from the literature on quite large benchmark instances.

Journal ArticleDOI
TL;DR: In this article, a constraint-guided evolutionary algorithm (CGEA) was proposed to solve the Winner Determination Problem (WDP) in combinatorial auctions, which is an NP-hard problem.
Abstract: Combinatorial Auctions (CAs) allow the participants to bid on a bundle of items and can result in more cost-effective deals than traditional auctions if the goods are complementary. However, solving the Winner Determination Problem (WDP) in CAs is an NP-hard problem. Since Evolutionary Algorithms (EAs) can find good solutions in polynomial time within a huge search space, the use of EAs has become quite suitable for solving this type of problem. In this paper, we introduce a new Constraint-Guided Evolutionary Algorithm (CGEA) for the WDP. It employs a penalty component to represent each constraint in the fitness function and introduces new variation operators that consider each package value and each type of violated constraint to induce the generation of feasible solutions. CGEA also presents a survivor selection operator that maintains the exploration versus exploitation balance in the evolutionary process. The performance of CGEA is compared with that of three other evolutionary algorithms to solve a WDP in a Combinatorial Reverse Auction (CRA) of electricity generation and transmission line assets. Each of the algorithms compared employs different methods to deal with constraints. They are tested and compared on several problem instances. The results show that CGEA is competitive and results in better performance in most cases.
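The penalty component in CGEA's fitness function can be sketched for the WDP as follows. This is a simplified illustration under assumed data structures, not the paper's implementation: bids are selected by a 0/1 vector, the only constraints shown are item-sharing conflicts between bid pairs, and each violated constraint subtracts a fixed penalty from the total bid value.

```python
def penalized_fitness(bid_values, selection, conflicts, penalty):
    """Fitness of a candidate WDP solution with constraint penalties.

    bid_values: value of each bid. selection: 0/1 list over bids.
    conflicts: pairs (i, j) of bids that share an item and cannot both win.
    Each violated conflict subtracts `penalty` from the selected value,
    steering the search toward feasible (conflict-free) selections.
    """
    value = sum(v for v, s in zip(bid_values, selection) if s)
    violations = sum(1 for i, j in conflicts if selection[i] and selection[j])
    return value - penalty * violations
```

With a penalty large enough to outweigh any attainable bid value, every feasible selection scores above every infeasible one, while the graded penalty still lets the EA rank infeasible solutions by how close to feasibility they are.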

Journal ArticleDOI
TL;DR: Approximation algorithms are proposed which guarantee that rewards are not less than those of the optimal solution, with a bound on exceeded knapsack capacities, together with a binary-search heuristic that combines with these algorithms to obtain capacity-feasible solutions.
Abstract: The multiple knapsack problem with grouped items aims to maximize rewards by assigning groups of items among multiple knapsacks, without exceeding knapsack capacities. Either all items in a group are assigned or none at all. We study the bi-criteria variation of the problem, where capacities can be exceeded and the second objective is to minimize the maximum exceeded knapsack capacity. We propose approximation algorithms that run in pseudo-polynomial time and guarantee that rewards are not less than those of the optimal solution of the capacity-feasible problem, with a bound on exceeded knapsack capacities. The algorithms have different approximation factors, under which no knapsack capacity is exceeded by more than 2, 1, or $$1/2$$ times the maximum knapsack capacity. The approximation guarantee can be improved to $$1/3$$ when all knapsack capacities are equal. We also prove that for certain cases, solutions obtained by the approximation algorithms are always optimal: they never exceed knapsack capacities. To obtain capacity-feasible solutions, we propose a binary-search heuristic combined with the approximation algorithms. We test the performance of the algorithms and heuristics in an extensive set of experiments on randomly generated instances and show they are efficient and effective, i.e., they run reasonably fast and generate good-quality solutions.
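The binary-search heuristic mentioned at the end of the abstract can be sketched generically. This is an assumed interface, not the paper's code: `solve(cap)` stands in for one of the bi-criteria approximation algorithms run with a surrogate capacity `cap`, returning a reward and the maximum load it produced (which may exceed `cap` by a bounded factor). The search shrinks the surrogate capacity until the returned solution fits the true capacity, assuming loads grow monotonically with the surrogate.

```python
def binary_search_feasible(solve, lo, hi, true_cap, iters=30):
    """Find a capacity-feasible solution via binary search on surrogate capacity.

    solve(cap) -> (reward, max_load): a bi-criteria solver whose solution may
    overshoot cap by a bounded amount. We look for the largest surrogate
    capacity in [lo, hi] whose solution still satisfies max_load <= true_cap,
    keeping the best feasible (reward, max_load) pair seen.
    """
    best = None
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        reward, max_load = solve(mid)
        if max_load <= true_cap:   # fits the real capacity: try a larger surrogate
            best = (reward, max_load)
            lo = mid
        else:                      # overshoots: tighten the surrogate capacity
            hi = mid
    return best
```

Under the monotonicity assumption, the largest feasible surrogate capacity yields the largest reward among the feasible solutions the solver can produce, which is exactly the trade-off the heuristic exploits.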

Journal ArticleDOI
TL;DR: In this article, the authors consider a coal mine that extracts raw coal by a set of coal mining equipment (CME), separates out multiple products by coal washing equipment, and delivers the products through a fleet of trains over a multi-period horizon.
Abstract: We consider a coal mine that extracts raw coal by a set of coal mining equipment (CME), separates out multiple products by a set of coal washing equipment, and delivers the products through a fleet of trains over a multi-period horizon. The equipment requires a daily preventive maintenance (PM) and each CME is subject to random failures and repairs. We study a joint PM, production, and delivery problem that determines when to perform the PM and how to manage coal production and delivery in each period, to minimize the expected total cost. We formulate a multi-period stochastic optimization model that delicately integrates the static PM decisions with the adaptive production-delivery decisions, which is extremely difficult to solve due to CME’s decision-dependent operating status. We propose a novel two-phase solution approach to overcome this difficulty. Phase 1 firstly determines the PM decisions using a scenario-based variable neighborhood search algorithm. Using the PM solution and the resultant set of scenarios as input parameters, Phase 2 adaptively determines the production-delivery decisions using a forward-looking algorithm in a rolling horizon manner. We show numerically that our approach consistently produces good-quality and robust solutions while preserving tractability for varying problem instances.

Journal ArticleDOI
TL;DR: In this paper, the problem of platelet allocation among three priority-differentiated demand streams was investigated at a regional hospital in India and an allocation heuristic based on revenue management (RM) principles was proposed.
Abstract: Platelets are valuable, but highly perishable, blood components used in the treatment of, among others, viral dengue fever, blood-related illness, and post-chemotherapy recovery following cancer. Given the short shelf-life of 3–5 days and highly volatile supply and demand patterns, platelet inventory allocation is a challenging task. This is especially prevalent in emerging economies, where demand variability is more pronounced due to neglected tropical diseases and supply is perpetually short, the consequences of which have given rise to an illegal 'red market'. Motivated by experience at a regional hospital in India, we investigate the problem of platelet allocation among three priority-differentiated demand streams. Specifically, we consider a central hospital which, in addition to internal emergency and non-emergency requests, faces external demand from local clinics. We analyze the platelet allocation decision from a social planner's perspective and propose an allocation heuristic based on revenue management (RM) principles. The objective is to maximize total social benefit in a highly supply-constrained environment. Using data from the aforementioned Indian hospital as a case study, we conduct a numerical simulation and sensitivity analysis to evaluate the allocation heuristic. The performance of the RM-based policy is evaluated against the current sequential first-come, first-served policy and two fixed-proportion rationing policies. It is shown that the RM-based policy dominates overall, serves patients with the highest medical urgency better, and can curtail patients' need to procure platelets from commercial sources.
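A simple way to see how an RM-style policy differs from first-come, first-served is through protection levels, a standard revenue-management device: some stock is reserved for higher-priority classes before lower-priority requests are accepted. The sketch below is illustrative only and is not the paper's heuristic; the class encoding and the protection vector are assumptions.

```python
def serve_request(stock, cls, protect):
    """Decide whether to serve one unit to a request of a given priority class.

    stock: current platelet units on hand.
    cls: 0 = internal emergency, 1 = internal non-emergency, 2 = external clinic
         (highest priority first; hypothetical encoding).
    protect[c]: units that must remain in stock after serving class c, i.e.
    stock reserved for classes of strictly higher priority.
    Returns (served, new_stock).
    """
    if stock - 1 >= protect[cls]:
        return True, stock - 1
    return False, stock
```

With `protect = [0, 3, 8]`, emergencies are served whenever any stock remains, internal non-emergency requests only while more than 3 units remain, and external clinics only while more than 8 remain, so scarce units are rationed toward the highest medical urgency rather than the earliest arrival.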

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a new strategy for local search that attempts to avoid low-quality local optima by selecting in each iteration the improving neighbor that has the fewest possible attributes in common with known local optima.
Abstract: Local search is a fundamental tool in the development of heuristic algorithms. A neighborhood operator takes a current solution and returns a set of similar solutions, denoted as neighbors. In best improvement local search, the best of the neighboring solutions replaces the current solution in each iteration. On the other hand, in first improvement local search, the neighborhood is only explored until any improving solution is found, which then replaces the current solution. In this work we propose a new strategy for local search that attempts to avoid low-quality local optima by selecting in each iteration the improving neighbor that has the fewest possible attributes in common with local optima. To this end, it uses inequalities previously used as optimality cuts in the context of integer linear programming. The novel method, referred to as delayed improvement local search, is implemented and evaluated using the travelling salesman problem with the 2-opt neighborhood and the max-cut problem with the 1-flip neighborhood as test cases. Computational results show that the new strategy, while slower, obtains better local optima compared to the traditional local search strategies. The comparison is favourable to the new strategy in experiments with fixed computation time or with a fixed target.
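The selection rule at the heart of delayed improvement local search can be sketched as follows. This is a schematic rendering of the idea described in the abstract, not the paper's cut-based implementation: each neighbor is represented by its cost and a set of solution attributes (e.g. tour edges for 2-opt), and among the improving neighbors we pick the one sharing the fewest attributes with previously recorded local optima, breaking ties by cost.

```python
def select_improving_move(current_cost, moves, optima_attrs):
    """Pick the improving neighbor least similar to known local optima.

    current_cost: cost of the current solution.
    moves: list of (cost, attrs) candidate neighbors, where attrs is a set
    of solution attributes (tour edges for 2-opt, cut-side flags for 1-flip).
    optima_attrs: list of attribute sets of local optima found so far.
    Returns the chosen (cost, attrs) pair, or None if no neighbor improves.
    """
    improving = [(c, a) for c, a in moves if c < current_cost]
    if not improving:
        return None

    def shared(move):
        # total attribute overlap with all recorded local optima
        return sum(len(move[1] & opt) for opt in optima_attrs)

    return min(improving, key=lambda m: (shared(m), m[0]))
```

Note the contrast with the strategies named in the abstract: best improvement would take the cheapest improving neighbor and first improvement the first one found, whereas this rule may deliberately accept a smaller improvement if it steers the trajectory away from attributes that characterize known local optima.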