
Showing papers on "Heuristic" published in 2008


Journal ArticleDOI
TL;DR: Variable neighbourhood search is a metaheuristic, or a framework for building heuristics, based upon systematic changes of neighbourhoods both in descent phase, to find a local minimum, and in perturbation phase to emerge from the corresponding valley.
Abstract: Variable neighbourhood search (VNS) is a metaheuristic, or a framework for building heuristics, based upon systematic changes of neighbourhoods both in descent phase, to find a local minimum, and in perturbation phase to emerge from the corresponding valley. It was first proposed in 1997 and has since then rapidly developed both in its methods and its applications. In the present paper, these two aspects are thoroughly reviewed and an extensive bibliography is provided. Moreover, one section is devoted to newcomers. It consists of steps for developing a heuristic for any particular problem. Those steps are common to the implementation of other metaheuristics.

480 citations
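The basic VNS scheme summarized above is easy to sketch. The following is a minimal, hypothetical Python skeleton, assuming the caller supplies the neighbourhood (shaking) structures, a local-search routine, and the objective `f`; it illustrates the framework, it is not code from the paper.

```python
def vns(x0, neighbourhoods, local_search, f, max_iters=1000):
    """Minimal Variable Neighbourhood Search skeleton.

    neighbourhoods: list of functions; neighbourhoods[k](x) returns a random
        point in the k-th (increasingly large) neighbourhood of x (shaking).
    local_search: descent procedure returning a local minimum near its input.
    f: objective to minimise.
    """
    x = x0
    for _ in range(max_iters):
        k = 0
        while k < len(neighbourhoods):
            x_shaken = neighbourhoods[k](x)       # perturbation phase
            x_local = local_search(x_shaken, f)   # descent phase
            if f(x_local) < f(x):
                x, k = x_local, 0                 # improvement: recentre, restart at k = 0
            else:
                k += 1                            # no improvement: try a larger neighbourhood
    return x
```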


Journal ArticleDOI
01 Sep 2008
TL;DR: In this article, the authors presented the application and performance comparison of particle swarm optimization (PSO) and genetic algorithms (GA) for flexible ac transmission system (FACTS)-based controller design.
Abstract: Recently, genetic algorithms (GA) and particle swarm optimization (PSO) technique have attracted considerable attention among various modern heuristic optimization techniques. The GA has been popular in academia and the industry mainly because of its intuitiveness, ease of implementation, and the ability to effectively solve highly non-linear, mixed integer optimization problems that are typical of complex engineering systems. PSO technique is a relatively recent heuristic search method whose mechanics are inspired by the swarming or collaborative behavior of biological populations. Since the two approaches are supposed to find a solution to a given objective function but employ different strategies and computational effort, it is appropriate to compare their performance. This paper presents the application and performance comparison of PSO and GA optimization techniques, for flexible ac transmission system (FACTS)-based controller design. The design objective is to enhance the power system stability. The design problem of the FACTS-based controller is formulated as an optimization problem and both PSO and GA optimization techniques are employed to search for optimal controller parameters. The performance of both optimization techniques in terms of computational effort, computational time and convergence rate is compared. Further, the optimized controllers are tested on a weakly connected power system subjected to different disturbances over a wide range of loading conditions and parameter variations and their performance is compared with the conventional power system stabilizer (CPSS). The eigenvalue analysis and non-linear simulation results are presented and compared to show the effectiveness of both the techniques in designing a FACTS-based controller, to enhance power system stability.

376 citations
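For reference, the PSO mechanics compared above reduce to a short loop. The sketch below is a generic global-best PSO for minimisation with illustrative coefficient values; it is not the controller-design formulation or tuning used in the paper.

```python
import numpy as np

def pso(objective, dim, bounds, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimisation (minimisation)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # positions
    v = np.zeros_like(x)                               # velocities
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Example: minimise a simple quadratic
best_x, best_val = pso(lambda z: float(np.sum(z ** 2)), dim=4, bounds=(-5.0, 5.0))
```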



Journal ArticleDOI
TL;DR: It is shown that, asymptotically, as demand and capacity are scaled up, only these efficient sets are used in an optimal policy in the single-leg, choice-based RM problem.
Abstract: Gallego et al. [Gallego, G., G. Iyengar, R. Phillips, A. Dubey. 2004. Managing flexible products on a network. CORC Technical Report TR-2004-01, Department of Industrial Engineering and Operations Research, Columbia University, New York.] recently proposed a choice-based deterministic linear programming model (CDLP) for network revenue management (RM) that parallels the widely used deterministic linear programming (DLP) model. While they focused on analyzing “flexible products”---a situation in which the provider has the flexibility of using a collection of products (e.g., different flight times and/or itineraries) to serve the same market demand (e.g., an origin-destination connection)---their approach has broader implications for understanding choice-based RM on a network. In this paper, we explore the implications in detail. Specifically, we characterize optimal offer sets (sets of available network products) by extending to the network case a notion of “efficiency” developed by Talluri and van Ryzin [Talluri, K. T., G. J. van Ryzin. 2004. Revenue management under a general discrete choice model of consumer behavior. Management Sci.50 15--33.] for the single-leg, choice-based RM problem. We show that, asymptotically, as demand and capacity are scaled up, only these efficient sets are used in an optimal policy. This analysis suggests that efficiency is a potentially useful approach for identifying “good” offer sets on networks, as it is in the case of single-leg problems. Second, we propose a practical decomposition heuristic for converting the static CDLP solution into a dynamic control policy. The heuristic is quite similar to the familiar displacement-adjusted virtual nesting (DAVN) approximation used in traditional network RM, and it significantly improves on the performance of the static LP solution. We illustrate the heuristic on several numerical examples.

368 citations


Journal ArticleDOI
TL;DR: This paper presents a genetic algorithm for the resource constrained multi-project scheduling problem based on random keys that builds parameterized active schedules based on priorities, delay times, and release dates defined by the genetic algorithm.

341 citations


Journal ArticleDOI
TL;DR: Two hybrid genetic algorithms (HGAs) are developed and it is proved that the performance of HGA2 is superior to that of HGA1 in terms of the total delivery time.

313 citations


Journal ArticleDOI
TL;DR: In this article, a mathematical programming model for the combined vehicle routing and scheduling problem with time windows and additional temporal constraints is presented, which allows for imposing pairwise synchronization and pairwise temporal precedence between customer visits, independently of the vehicles.

302 citations


Journal ArticleDOI
TL;DR: Preliminary numerical results indicate that the proposed new procedure has the potential to enhance computational efficiency for simulation optimization and the resulting allocations are superior to other methods in the literature that are tested, and the relative efficiency increases for larger problems.
Abstract: We consider a class of the subset selection problem in ranking and selection. The objective is to identify the top m out of k designs based on simulated output. Traditional procedures are conservative and inefficient. Using the optimal computing budget allocation framework, we formulate the problem as that of maximizing the probability of correctly selecting all of the top-m designs subject to a constraint on the total number of samples available. For an approximation of this correct selection probability, we derive an asymptotically optimal allocation and propose an easy-to-implement heuristic sequential allocation procedure. Numerical experiments indicate that the resulting allocations are superior to other methods in the literature that we tested, and the relative efficiency increases for larger problems. In addition, preliminary numerical results indicate that the proposed new procedure has the potential to enhance computational efficiency for simulation optimization.

236 citations
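The sequential-allocation idea can be illustrated roughly: after an initial round of sampling, extra replications go to the designs whose sample means are hardest to separate from the top-m boundary relative to their noise. The sketch below captures only that principle; it is not the asymptotically optimal allocation derived in the paper, and `simulate` and all parameter values are hypothetical.

```python
import numpy as np

def select_top_m(simulate, k, m, total_budget, n0=10, batch=10, seed=0):
    """Simplified sequential allocation for selecting the top m of k designs.

    simulate(i, rng) returns one noisy sample of design i's performance
    (larger is better). This is a sketch of "spend samples near the top-m
    boundary", not the paper's allocation rule.
    """
    rng = np.random.default_rng(seed)
    samples = [[simulate(i, rng) for _ in range(n0)] for i in range(k)]
    spent = k * n0
    while spent < total_budget:
        means = np.array([np.mean(s) for s in samples])
        stderr = np.array([np.std(s, ddof=1) / np.sqrt(len(s)) for s in samples])
        order = np.argsort(means)[::-1]
        boundary = 0.5 * (means[order[m - 1]] + means[order[m]])
        # designs whose means are hardest to separate from the boundary get more samples
        score = np.abs(means - boundary) / np.maximum(stderr, 1e-12)
        for i in np.argsort(score)[:batch]:
            samples[i].append(simulate(i, rng))
            spent += 1
    means = np.array([np.mean(s) for s in samples])
    return set(np.argsort(means)[::-1][:m])
```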


Proceedings Article
13 Jul 2008
TL;DR: In this article, a novel approach for using landmarks in planning was proposed, deriving a pseudo-heuristic and combining it with other heuristics in a search framework; previously published approaches had not exploited landmarks during search in this way.
Abstract: Landmarks for propositional planning tasks are variable assignments that must occur at some point in every solution plan. We propose a novel approach for using landmarks in planning by deriving a pseudo-heuristic and combining it with other heuristics in a search framework. The incorporation of landmark information is shown to improve success rates and solution qualities of a heuristic planner. We furthermore show how additional landmarks and orderings can be found using the information present in multi-valued state variable representations of planning tasks. Compared to previously published approaches, our landmark extraction algorithm provides stronger guarantees of correctness for the generated landmark orderings, and our novel use of landmarks during search solves more planning tasks and delivers considerably better solutions.

218 citations
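A crude intuition for a landmark-based pseudo-heuristic is to count the discovered landmarks not yet achieved on the path to a node. The snippet below is a simplified illustration of that idea only; the paper's heuristic, ordering extraction, and search integration are more involved.

```python
def landmark_count_heuristic(accepted, landmarks):
    """Simplified landmark-count estimate: number of discovered landmarks
    that have not yet been achieved on the path to the current node."""
    return sum(1 for lm in landmarks if lm not in accepted)

# Tiny illustrative call with propositions named as strings
landmarks = {"have-key", "door-open", "at-goal"}
h = landmark_count_heuristic(accepted={"have-key"}, landmarks=landmarks)  # -> 2
```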


Journal ArticleDOI
TL;DR: This paper introduces multiple algorithms to include time buffers in a given schedule while a predefined project due date remains respected and multiple efficient heuristic and meta-heuristic procedures are proposed to allocate buffers throughout the schedule.

209 citations


Journal ArticleDOI
TL;DR: The objective of this paper is to develop a mathematical model encompassing all three essential characteristics of the international intermodal routing problem, and to propose an algorithm that can effectively and efficiently solve the MMMFP with time windows and concave costs.

Journal ArticleDOI
TL;DR: The authors conclude that the fluency heuristic may be one tool in the mind's repertoire of strategies that artfully probes memory for encapsulated frequency information that can veridically reflect statistical regularities in the world.
Abstract: Boundedly rational heuristics for inference can be surprisingly accurate and frugal for several reasons. They can exploit environmental structures, co-opt complex capacities, and elude effortful search by exploiting information that automatically arrives on the mental stage. The fluency heuristic is a prime example of a heuristic that makes the most of an automatic by-product of retrieval from memory, namely, retrieval fluency. In 4 experiments, the authors show that retrieval fluency can be a proxy for real-world quantities, that people can discriminate between two objects' retrieval fluencies, and that people's inferences are in line with the fluency heuristic (in particular fast inferences) and with experimentally manipulated fluency. The authors conclude that the fluency heuristic may be one tool in the mind's repertoire of strategies that artfully probes memory for encapsulated frequency information that can veridically reflect statistical regularities in the world.
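The decision rule itself can be stated compactly: of two recognized objects, infer that the one retrieved more fluently (faster) has the larger criterion value, and guess if the fluency difference is too small to discriminate. The sketch below is an illustrative formalization with a hypothetical discriminability threshold, not the authors' experimental procedure.

```python
def fluency_heuristic(retrieval_time_a, retrieval_time_b, threshold_ms=100.0):
    """Pick the object retrieved more fluently (faster), if the difference
    in retrieval time is discriminable; otherwise guess.

    Returns "a", "b", or "guess". The threshold is an illustrative parameter.
    """
    diff = retrieval_time_b - retrieval_time_a
    if abs(diff) < threshold_ms:
        return "guess"
    return "a" if diff > 0 else "b"

# Example: object A retrieved in 350 ms, object B in 620 ms -> infer A is larger
choice = fluency_heuristic(350.0, 620.0)
```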

Journal ArticleDOI
TL;DR: The biobjective-bilevel model is a rich decision-support tool that allows for the generation of many good solutions to the design problem and is extended to account for the cost/risk trade-off by including cost in the first-level objective.

Journal ArticleDOI
TL;DR: The extreme point concept is introduced and a new extreme point-based rule for packing items inside a three-dimensional container is presented, independent from the particular packing problem addressed and can handle additional constraints, such as fixing the position of the items.
Abstract: One of the main issues in addressing three-dimensional packing problems is finding an efficient and accurate definition of the points at which to place the items inside the bins, because the performance of exact and heuristic solution methods is actually strongly influenced by the choice of a placement rule. We introduce the extreme point concept and present a new extreme point-based rule for packing items inside a three-dimensional container. The extreme point rule is independent from the particular packing problem addressed and can handle additional constraints, such as fixing the position of the items. The new extreme point rule is also used to derive new constructive heuristics for the three-dimensional bin-packing problem. Extensive computational results show the effectiveness of the new heuristics compared to state-of-the-art results. Moreover, the same heuristics, when applied to the two-dimensional bin-packing problem, outperform those specifically designed for the problem.
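A stripped-down view of the extreme-point bookkeeping: placing an item at a candidate point consumes that point and generates new candidates at the item's far corners. The sketch below keeps only that step and omits the projection of new points onto previously placed items or the container walls, so it is a simplification of the rule presented in the paper.

```python
def update_extreme_points(points, placement, container):
    """Simplified extreme-point update for 3D packing.

    points: current list of candidate (x, y, z) placement points.
    placement: ((x, y, z), (w, d, h)) of the item just placed.
    container: (W, D, H) dimensions of the bin.
    Real extreme-point rules also project the new points onto the nearest
    placed item or wall; that projection is omitted here.
    """
    (x, y, z), (w, d, h) = placement
    W, D, H = container
    candidates = [(x + w, y, z), (x, y + d, z), (x, y, z + h)]
    new_points = [p for p in points if p != (x, y, z)]  # consume the used point
    for p in candidates:
        if p[0] < W and p[1] < D and p[2] < H and p not in new_points:
            new_points.append(p)
    return new_points

# Place a 2x2x1 item at the origin of a 10x10x10 bin
pts = update_extreme_points([(0, 0, 0)], ((0, 0, 0), (2, 2, 1)), (10, 10, 10))
```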

Book ChapterDOI
15 Sep 2008
TL;DR: In this paper, the authors extend this approach to include explicit coordination between neighboring traffic lights; coordination is achieved using the max-plus algorithm, which estimates the optimal joint action by sending locally optimized messages among connected agents.
Abstract: Since traffic jams are ubiquitous in the modern world, optimizing the behavior of traffic lights for efficient traffic flow is a critically important goal. Though most current traffic lights use simple heuristic protocols, more efficient controllers can be discovered automatically via multiagent reinforcement learning, where each agent controls a single traffic light. However, in previous work on this approach, agents select only locally optimal actions without coordinating their behavior. This paper extends this approach to include explicit coordination between neighboring traffic lights. Coordination is achieved using the max-plus algorithm, which estimates the optimal joint action by sending locally optimized messages among connected agents. This paper presents the first application of max-plus to a large-scale problem and thus verifies its efficacy in realistic settings. It also provides empirical evidence that max-plus performs well on cyclic graphs, though it has been proven to converge only for tree-structured graphs. Furthermore, it provides a new understanding of the properties a traffic network must have for such coordination to be beneficial and shows that max-plus outperforms previous methods on networks that possess those properties.
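For intuition, max-plus on a pairwise coordination graph repeatedly passes messages of the form m_ij(a_j) = max over a_i of [f_i(a_i) + f_ij(a_i, a_j) + incoming messages to i except from j], and each agent then picks the action maximising its local payoff plus incoming messages. The sketch below implements that generic loop on a toy graph; the payoff tables and settings are illustrative assumptions, not the traffic-control system from the paper.

```python
import numpy as np

def max_plus(local, pairwise, edges, n_actions, iters=20):
    """Generic max-plus message passing on a pairwise coordination graph.

    local[i]: length-n_actions payoff array of agent i.
    pairwise[(i, j)]: n_actions x n_actions payoff matrix for edge (i, j),
        indexed as [action_of_i, action_of_j].
    edges: list of undirected edges (i, j).
    Returns one joint action approximately maximising the summed payoff.
    """
    n = len(local)
    neighbours = {i: [] for i in range(n)}
    for a, b in edges:
        neighbours[a].append(b)
        neighbours[b].append(a)
    msgs = {(i, j): np.zeros(n_actions)
            for a, b in edges for (i, j) in ((a, b), (b, a))}
    for _ in range(iters):
        new = {}
        for (i, j) in msgs:
            incoming = sum((msgs[(k, i)] for k in neighbours[i] if k != j),
                           np.zeros(n_actions))
            pay = pairwise[(i, j)] if (i, j) in pairwise else pairwise[(j, i)].T
            # value of each action of j, maximised over i's action
            m = np.max(local[i][:, None] + incoming[:, None] + pay, axis=0)
            new[(i, j)] = m - m.mean()   # normalise to keep messages bounded
        msgs = new
    joint = []
    for i in range(n):
        score = local[i] + sum((msgs[(k, i)] for k in neighbours[i]),
                               np.zeros(n_actions))
        joint.append(int(np.argmax(score)))
    return joint

# Two agents, each choosing action 0 or 1, rewarded for matching each other
local = [np.array([0.0, 0.1]), np.array([0.1, 0.0])]
pairwise = {(0, 1): np.array([[1.0, 0.0], [0.0, 1.0]])}
print(max_plus(local, pairwise, edges=[(0, 1)], n_actions=2))
```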

Journal ArticleDOI
TL;DR: In this article, a new multiobjective formulation is proposed for the optimal design and rehabilitation of a water distribution network, with minimization of life cycle cost and maximization of performance as objectives.
Abstract: A new multiobjective formulation is proposed for the optimal design and rehabilitation of a water distribution network, with minimization of life cycle cost and maximization of performance as objectives. The life cycle cost is considered to comprise the initial cost of pipes, the cost of replacing old pipes with new ones, the cost of cleaning and lining existing pipes, the expected repair cost for pipe breaks, and the salvage value of the pipes that are replaced. The performance measure proposed in this study is a modification to the resilience index to suit application to water distribution networks with multiple sources. A new heuristic method is proposed to obtain the solution for the design and rehabilitation problem. This heuristic method involves selection of various design and rehabilitation alternatives in an iterative manner on the basis of the improvement in the network performance as compared to the change in the life cycle cost on implementation of the alternatives. The solutions obtained from the heuristic method are used as part of the initial population set of the multiobjective, nondominated sorting genetic algorithm (NSGA-II) in order to improve the search process. Using a sample water distribution network, the modified resilience index proposed is shown to be a good indicator of the uncertainty handling ability of the network.

Journal ArticleDOI
TL;DR: In this paper, an intelligent decision support methodology for nurse rostering problems in large modern hospital environments is presented; the amount of computational time allowed plays a significant role, which the authors analyse and discuss.

Journal ArticleDOI
TL;DR: A solution approach is presented that integrates heuristic search with optimization by using an integer program to explore promising parts of the search space identified by a tabu search heuristic.
Abstract: The split delivery vehicle routing problem is concerned with serving the demand of a set of customers with a fleet of capacitated vehicles at minimum cost. Contrary to what is assumed in the classical vehicle routing problem, a customer can be served by more than one vehicle, if convenient. We present a solution approach that integrates heuristic search with optimization by using an integer program to explore promising parts of the search space identified by a tabu search heuristic. Computational results show that the method improves the solution of the tabu search in all but one instance of a large test set.
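The hybrid works by letting a tabu search explore the solution space and then handing promising material to an integer program for recombination. The snippet below is only a generic tabu-search skeleton under that reading; the SDVRP-specific moves and the integer program are not shown, and the names are placeholders.

```python
def tabu_search(initial, neighbours, cost, tenure=10, max_iters=500):
    """Generic tabu search skeleton (not the paper's SDVRP-specific moves).

    neighbours(sol): yields (move_id, candidate_solution) pairs.
    A move stays tabu for `tenure` iterations unless the candidate improves
    on the best-known solution (aspiration criterion).
    """
    best = current = initial
    tabu = {}  # move_id -> last iteration at which the move is forbidden
    for it in range(max_iters):
        best_move, best_cand = None, None
        for move, cand in neighbours(current):
            if tabu.get(move, -1) >= it and cost(cand) >= cost(best):
                continue  # tabu and does not beat the incumbent
            if best_cand is None or cost(cand) < cost(best_cand):
                best_move, best_cand = move, cand
        if best_cand is None:
            break
        current = best_cand
        tabu[best_move] = it + tenure
        if cost(current) < cost(best):
            best = current
    return best

# Promising routes collected during the search could then be handed to an
# integer program that selects and recombines them (the hybrid step in the paper).
```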

Journal ArticleDOI
TL;DR: The problem presented in this paper adds sequence-dependent setup time considerations to the classical SALBP in the following way: whenever a task is assigned next to another at the same workstation, a setup time must be added to compute the global workstation time.

Journal ArticleDOI
TL;DR: This study considers a variation of the PTSP that involves a short-shelf-life product, so there is no in-process inventory of the product; lower bounds on the optimal solution are developed and a two-phase heuristic is proposed based on the analysis.
Abstract: The integrated production and transportation scheduling problem (PTSP) with capacity constraints is common in many industries. An optimal solution to PTSP requires one to simultaneously solve the production scheduling and the transportation routing problems, which requires excessive computational time, even for relatively small problems. In this study, we consider a variation of PTSP that involves a short shelf life product; hence, there is no inventory of the product in process. Once a lot of the product is produced, it must be transported with nonnegligible transportation time directly to various customer sites within its limited lifespan. The objective is to determine the minimum time required to complete producing and delivering the product to meet the demand of a given set of customers over a wide geographic region. This problem is NP-hard in the strong sense. We analyze the properties of this problem, develop lower bounds on the optimal solution, and propose a two-phase heuristic based on the analysis. The first phase uses either a genetic or a memetic algorithm to select a locally optimal permutation of the given set of customers; the second phase partitions the customer sequence and then uses the Gilmore-Gomory algorithm to order the subsequences of customers to form the integrated schedule. Empirical observations on the performance of this heuristic are reported.

Journal ArticleDOI
TL;DR: This work introduces two new methods of deriving the classical PCA in the framework of minimizing the mean square error upon performing a lower-dimensional approximation of the data and derives the optimal basis and the minimum error of approximation in this framework.
Abstract: We introduce two new methods of deriving the classical PCA in the framework of minimizing the mean square error upon performing a lower-dimensional approximation of the data. These methods are based on two forms of the mean square error function. One of the novelties of the presented methods is that the commonly employed process of subtraction of the mean of the data becomes part of the solution of the optimization problem and not a pre-analysis heuristic. We also derive the optimal basis and the minimum error of approximation in this framework and demonstrate the elegance of our solution in comparison with a recent solution in the framework.
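In the mean-square-error framing, the optimal rank-q approximation uses the data mean as the offset and the top-q eigenvectors of the covariance matrix as the basis, with the residual error equal to the sum of the discarded eigenvalues. A short numerical check of that standard result (not the paper's derivation) follows.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10)) @ rng.normal(size=(10, 10))  # correlated data
q = 3

mu = X.mean(axis=0)                       # the offset falls out of the optimisation
cov = np.cov(X - mu, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
W = eigvecs[:, -q:]                       # top-q principal directions

X_hat = mu + (X - mu) @ W @ W.T           # rank-q reconstruction
mse = np.mean(np.sum((X - X_hat) ** 2, axis=1))
# MSE equals the sum of the discarded eigenvalues
# (up to the 1/n vs 1/(n-1) normalisation of the sample covariance)
print(mse, eigvals[:-q].sum())
```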

Proceedings ArticleDOI
01 Oct 2008
TL;DR: Experimental results show that, compared to other existing mapping approaches based on communication energy minimization, the contention-aware mapping technique achieves a significant decrease in packet latency (and implicitly, a throughput increase) with a negligible communication energy overhead.
Abstract: In this paper, we analyze the impact of network contention on the application mapping for tile-based network-on-chip (NoC) architectures. Our main theoretical contribution consists of an integer linear programming (ILP) formulation of the contention-aware application mapping problem which aims at minimizing the inter-tile network contention. To solve the scalability problem caused by the ILP formulation, we propose a linear programming (LP) approach followed by a mapping heuristic. Taken together, they provide near-optimal solutions while reducing the runtime significantly. Experimental results show that, compared to other existing mapping approaches based on communication energy minimization, our contention-aware mapping technique achieves a significant decrease in packet latency (and implicitly, a throughput increase) with a negligible communication energy overhead.

Journal ArticleDOI
TL;DR: This paper shows one of the possible real applications where Operations Research can help not only to achieve economic and productive benefits but also to attain certain social aims, and proposes the use of a Branch and Bound-based heuristic for large problems.

Proceedings Article
13 Jul 2008
TL;DR: Using both heuristic UCT and UCT-RAVE, which forms a rapid online generalisation based on the value of moves, MoGo became the first program to achieve human master level in competitive play.
Abstract: The UCT algorithm uses Monte-Carlo simulation to estimate the value of states in a search tree from the current state. However, the first time a state is encountered, UCT has no knowledge, and is unable to generalise from previous experience. We describe two extensions that address these weaknesses. Our first algorithm, heuristic UCT, incorporates prior knowledge in the form of a value function. The value function can be learned offline, using a linear combination of a million binary features, with weights trained by temporal-difference learning. Our second algorithm, UCT-RAVE, forms a rapid online generalisation based on the value of moves. We applied our algorithms to the domain of 9 × 9 Computer Go, using the program MoGo. Using both heuristic UCT and RAVE, MoGo became the first program to achieve human master level in competitive play.
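The heart of UCT-RAVE is blending a move's Monte-Carlo estimate with its all-moves-as-first (RAVE) estimate, plus an exploration bonus; heuristic UCT additionally seeds these statistics from a prior value function. The snippet shows one commonly used blending schedule as a hedged sketch; the schedule and constants are illustrative and not necessarily MoGo's exact settings.

```python
import math

def uct_rave_value(q_mc, n_mc, q_rave, n_rave, n_parent,
                   k_rave=1000.0, c_explore=1.0):
    """Blended move value used for selection in a UCT-RAVE style search.

    q_mc, n_mc: Monte-Carlo mean value and visit count of the move.
    q_rave, n_rave: RAVE (all-moves-as-first) mean value and count.
    n_parent: visit count of the parent node.
    beta decays towards 0 as the move accumulates real simulations, so the
    estimate shifts from the fast RAVE generalisation to the UCT estimate.
    """
    beta = math.sqrt(k_rave / (3.0 * n_parent + k_rave))
    blended = beta * q_rave + (1.0 - beta) * q_mc
    exploration = c_explore * math.sqrt(math.log(max(n_parent, 1)) / max(n_mc, 1))
    return blended + exploration

# Heuristic UCT: initialise q_mc / n_mc from a prior value function, e.g.
# q_mc, n_mc = prior_value(state, move), n_equivalent   (hypothetical names)
```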

Journal ArticleDOI
TL;DR: A two-stage algorithm for robust resource-constrained project scheduling is presented that first solves the RCPSP for minimizing the makespan only, using a priority-rule-based heuristic, namely an enhanced multi-pass random-biased serial schedule generation scheme.

Journal ArticleDOI
TL;DR: Simulation results for different scenarios with different percentages of dynamic requests reveal that this scheduling scheme can generate high-quality schedules and is capable of coping with various stochastic events.

Proceedings Article
13 Jul 2008
TL;DR: An implementation of hindsight optimization for probabilistic planning based on deterministic forward heuristic search is described and its performance on planning-competition benchmarks and other probabilistically interesting problems is evaluated.
Abstract: This paper investigates hindsight optimization as an approach for leveraging the significant advances in deterministic planning for action selection in probabilistic domains. Hindsight optimization is an online technique that evaluates the one-step-reachable states by sampling future outcomes to generate multiple non-stationary deterministic planning problems which can then be solved using search. Hindsight optimization has been successfully used in a number of online scheduling applications; however, it has not yet been considered in the substantially different context of goal-based probabilistic planning. We describe an implementation of hindsight optimization for probabilistic planning based on deterministic forward heuristic search and evaluate its performance on planning-competition benchmarks and other probabilistically interesting problems. The planner is able to outperform a number of probabilistic planners including FF-Replan on many problems. Finally, we investigate conditions under which hindsight optimization is guaranteed to be effective with respect to goal achievement, and also illustrate examples where the approach can go wrong.
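Hindsight optimization scores each applicable action by sampling determinized futures, solving each resulting deterministic problem with a classical planner, and averaging the costs. A skeleton of that action-selection loop follows, with the determinization sampler, the deterministic solver, and the `det.apply` interface left as hypothetical callables.

```python
def hindsight_action(state, actions, sample_determinization, solve_deterministic,
                     n_samples=30):
    """One-step hindsight-optimization action selection (sketch).

    sample_determinization(state): draws one deterministic version of the
        probabilistic problem (fixing all future outcomes).
    solve_deterministic(det_problem, state): returns the plan cost (or a
        large penalty when no plan is found) from `state` in that future.
    The action with the best average hindsight cost is returned.
    """
    best_action, best_cost = None, float("inf")
    for a in actions:
        total = 0.0
        for _ in range(n_samples):
            det = sample_determinization(state)
            next_state = det.apply(a, state)          # hypothetical interface
            total += solve_deterministic(det, next_state)
        avg = total / n_samples
        if avg < best_cost:
            best_action, best_cost = a, avg
    return best_action
```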

Journal Article
TL;DR: This paper presents the first application ofmax-plus to a large-scale problem and verifies its efficacy in realistic settings and provides empirical evidence that max-plus performs well on cyclic graphs, though it has been proven to converge only for tree-structured graphs.
Abstract: Since traffic jams are ubiquitous in the modern world, optimizing the behavior of traffic lights for efficient traffic flow is a critically important goal. Though most current traffic lights use simple heuristic protocols, more efficient controllers can be discovered automatically via multiagent reinforcement learning, where each agent controls a single traffic light. However, in previous work on this approach, agents select only locally optimal actions without coordinating their behavior. This paper extends this approach to include explicit coordination between neighboring traffic lights. Coordination is achieved using the max-plus algorithm, which estimates the optimal joint action by sending locally optimized messages among connected agents. This paper presents the first application of max-plus to a large-scale problem and thus verifies its efficacy in realistic settings. It also provides empirical evidence that max-plus performs well on cyclic graphs, though it has been proven to converge only for tree-structured graphs. Furthermore, it provides a new understanding of the properties a traffic network must have for such coordination to be beneficial and shows that max-plus outperforms previous methods on networks that possess those properties.

Journal ArticleDOI
TL;DR: The scheduling problem of parallel identical batch processing machines, in which each machine can process a group of jobs simultaneously as a batch, is investigated, and a hybrid genetic heuristic (HGH) is proposed to minimize the makespan objective.

Proceedings ArticleDOI
05 Jul 2008
TL;DR: This work relates compressed sensing with Bayesian experimental design and provides a novel efficient approximate method for the latter, based on expectation propagation, which is the first successful attempt at "learning compressed sensing" for images of realistic size.
Abstract: We relate compressed sensing (CS) with Bayesian experimental design and provide a novel efficient approximate method for the latter, based on expectation propagation. In a large comparative study about linearly measuring natural images, we show that the simple standard heuristic of measuring wavelet coefficients top-down systematically outperforms CS methods using random measurements; the sequential projection optimisation approach of (Ji & Carin, 2007) performs even worse. We also show that our own approximate Bayesian method is able to learn measurement filters on full images efficiently which outperform the wavelet heuristic. To our knowledge, ours is the first successful attempt at "learning compressed sensing" for images of realistic size. In contrast to common CS methods, our framework is not restricted to sparse signals, but can readily be applied to other notions of signal complexity or noise models. We give concrete ideas how our method can be scaled up to large signal representations.