
Showing papers on "Heuristic" published in 2009


Journal ArticleDOI
TL;DR: A new optimization algorithm based on the law of gravity and mass interactions is introduced, and the obtained results confirm the high performance of the proposed method in solving various nonlinear functions.

5,501 citations


Proceedings ArticleDOI
28 Jun 2009
TL;DR: Based on the results, it is believed that fine-tuned heuristics may provide truly scalable solutions to the influence maximization problem with satisfactory influence spread and blazingly fast running time.
Abstract: Influence maximization is the problem of finding a small subset of nodes (seed nodes) in a social network that could maximize the spread of influence. In this paper, we study efficient influence maximization from two complementary directions. One is to improve the original greedy algorithm of [5] and its improvement [7] to further reduce its running time, and the second is to propose new degree discount heuristics that improve influence spread. We evaluate our algorithms by experiments on two large academic collaboration graphs obtained from the online archival database arXiv.org. Our experimental results show that (a) our improved greedy algorithm achieves better running time compared with the improvement of [7] with matching influence spread, (b) our degree discount heuristics achieve much better influence spread than classic degree and centrality-based heuristics, and when tuned for a specific influence cascade model, they achieve almost matching influence spread with the greedy algorithm, and more importantly (c) the degree discount heuristics run in only milliseconds while even the improved greedy algorithms run in hours on our experiment graphs with a few tens of thousands of nodes. Based on our results, we believe that fine-tuned heuristics may provide truly scalable solutions to the influence maximization problem with satisfactory influence spread and blazingly fast running time. Therefore, contrary to what is implied by the conclusion of [5] that traditional heuristics are outperformed by the greedy approximation algorithm, our results shed new light on the research of heuristic algorithms.
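
As an illustration of the degree discount idea, here is a minimal sketch in the style of the paper's DegreeDiscountIC heuristic for the independent cascade model with uniform propagation probability p; the adjacency-set graph representation and function names are ours, not the authors':

```python
def degree_discount(adj, k, p=0.01):
    """Pick k seeds by repeatedly taking the node with the largest
    discounted degree. adj: dict mapping each node to a set of
    neighbours (undirected graph); p: uniform propagation probability."""
    deg = {v: len(adj[v]) for v in adj}   # original degrees
    dd = dict(deg)                        # discounted degrees
    t = {v: 0 for v in adj}               # number of already-selected neighbours
    seeds = set()
    for _ in range(k):
        u = max((v for v in adj if v not in seeds), key=dd.__getitem__)
        seeds.add(u)
        for v in adj[u]:
            if v not in seeds:
                t[v] += 1
                # degree discount formula from the paper
                dd[v] = deg[v] - 2 * t[v] - (deg[v] - t[v]) * t[v] * p
    return seeds
```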

2,073 citations


Journal ArticleDOI
TL;DR: This paper describes a heuristic, based on convex optimization, that gives a subset selection as well as a bound on the best performance that can be achieved by any selection of k sensor measurements.
Abstract: We consider the problem of choosing a set of k sensor measurements, from a set of m possible or potential sensor measurements, that minimizes the error in estimating some parameters. Solving this problem by evaluating the performance for each of the (m choose k) possible choices of sensor measurements is not practical unless m and k are small. In this paper, we describe a heuristic, based on convex optimization, for approximately solving this problem. Our heuristic gives a subset selection as well as a bound on the best performance that can be achieved by any selection of k sensor measurements. There is no guarantee that the gap between the performance of the chosen subset and the performance bound is always small; but numerical experiments suggest that the gap is small in many cases. Our heuristic method requires on the order of m^3 operations; for m = 1000 possible sensors, we can carry out sensor selection in a few seconds on a 2-GHz personal computer.
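
The heuristic's core idea (relax the Boolean selection variables to the interval [0, 1], solve the resulting convex problem, then round) can be sketched as follows; this is an illustrative implementation using the cvxpy modeling library, not the authors' code:

```python
import cvxpy as cp
import numpy as np

def select_sensors(A, k):
    """Convex-relaxation heuristic for sensor selection.
    A: m x n matrix whose i-th row is the measurement vector a_i.
    Returns k chosen sensor indices and the relaxed optimal value."""
    m, _ = A.shape
    z = cp.Variable(m)                     # relaxed 0/1 selection weights
    info = A.T @ cp.diag(z) @ A            # sum_i z_i a_i a_i^T (affine in z)
    prob = cp.Problem(cp.Maximize(cp.log_det(info)),
                      [z >= 0, z <= 1, cp.sum(z) == k])
    prob.solve()
    chosen = np.sort(np.argsort(z.value)[-k:])  # round: keep k largest weights
    return chosen, prob.value
```

The optimal value of the relaxation is the performance bound mentioned in the abstract: no choice of k sensors can achieve a larger log-determinant of the information matrix.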

1,251 citations


Proceedings ArticleDOI
01 Sep 2009
TL;DR: This paper discusses how the bounding box can be further used to impose a powerful topological prior, which prevents the solution from excessive shrinking and ensures that the user-provided box bounds the segmentation in a sufficiently tight way.
Abstract: A user-provided object bounding box is a simple and popular interaction paradigm considered by many existing interactive image segmentation frameworks. However, these frameworks tend to exploit the provided bounding box merely to exclude its exterior from consideration and sometimes to initialize the energy minimization. In this paper, we discuss how the bounding box can be further used to impose a powerful topological prior, which prevents the solution from excessive shrinking and ensures that the user-provided box bounds the segmentation in a sufficiently tight way. The prior is expressed using hard constraints incorporated into the global energy minimization framework, leading to an NP-hard integer program. We then investigate possible optimization strategies, including linear relaxation as well as a new graph cut algorithm called pinpointing. The latter can be used either as a rounding method for the fractional LP solution, which is provably better than thresholding-based rounding, or as a fast standalone heuristic. We evaluate the proposed algorithms on a publicly available dataset and demonstrate the practical benefits of the new prior both qualitatively and quantitatively.

430 citations


Proceedings Article
19 Sep 2009
TL;DR: A new admissible heuristic called the landmark cut heuristic is introduced, which compares favourably with the state of the art in terms of heuristic accuracy and overall performance.
Abstract: Current heuristic estimators for classical domain-independent planning are usually based on one of four ideas: delete relaxations, critical paths, abstractions, and, most recently, landmarks. Previously, these different ideas for deriving heuristic functions were largely unconnected. We prove that admissible heuristics based on these ideas are in fact very closely related. Exploiting this relationship, we introduce a new admissible heuristic called the landmark cut heuristic, which compares favourably with the state of the art in terms of heuristic accuracy and overall performance.

410 citations


Journal ArticleDOI
01 Jun 2009
TL;DR: A novel proposal to solve the problem of path planning for mobile robots based on Simple Ant Colony Optimization Meta-Heuristic (SACO-MH), named SACOdm, where d stands for distance and m for memory.
Abstract: In the Motion Planning research field, heuristic methods have been shown to outperform classical approaches, gaining popularity over the last 35 years. Several ideas have been proposed to overcome the complex nature of this NP-Complete problem. Ant Colony Optimization algorithms are heuristic methods that have been successfully used to deal with this kind of problem. This paper presents a novel proposal to solve the problem of path planning for mobile robots based on the Simple Ant Colony Optimization Meta-Heuristic (SACO-MH). The new method is named SACOdm, where d stands for distance and m for memory. In SACOdm, the decision-making process is influenced by the distance between the source and target nodes; moreover, the ants can remember the visited nodes. The newly added features give a speed-up of around 10 in many cases. The selection of the optimal path relies on the criterion of a Fuzzy Inference System, which is adjusted using a Simple Tuning Algorithm. The path planner application has two operating modes: one is for virtual environments, and the second works with a real mobile robot using wireless communication. Both operating modes are global planners for plain terrain and support static and dynamic obstacle avoidance.
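
The abstract gives no formulas; in SACO-style algorithms the state transition rule typically has the following form, into which the distance bias and tabu memory described above can be folded (generic notation, not necessarily the authors' exact rule):

$$
p_{ij}^{k} = \frac{[\tau_{ij}]^{\alpha}\,[\eta_{ij}]^{\beta}}{\sum_{l \in \mathcal{N}_i^{k}} [\tau_{il}]^{\alpha}\,[\eta_{il}]^{\beta}},
\qquad
\eta_{ij} = \frac{1}{d(j, \text{target})},
$$

where $p_{ij}^{k}$ is the probability that ant $k$ moves from node $i$ to node $j$, $\tau_{ij}$ is the pheromone level, $\alpha$ and $\beta$ weight pheromone against the visibility term $\eta$, and $\mathcal{N}_i^{k}$ contains only the neighbors of $i$ that ant $k$ has not yet visited (the "memory" m). The visibility term encodes the bias toward nodes closer to the target (the "distance" d).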

366 citations


Proceedings ArticleDOI
02 Nov 2009
TL;DR: This paper proves that selecting the optimal set of landmarks is an NP-hard problem, and thus heuristic solutions need to be employed, and explores theoretical insights to devise a variety of simple methods that scale well in very large networks.
Abstract: In this paper we study approximate landmark-based methods for point-to-point distance estimation in very large networks. These methods involve selecting a subset of nodes as landmarks and computing offline the distances from each node in the graph to those landmarks. At runtime, when the distance between a pair of nodes is needed, it can be estimated quickly by combining the precomputed distances. We prove that selecting the optimal set of landmarks is an NP-hard problem, and thus heuristic solutions need to be employed. We therefore explore theoretical insights to devise a variety of simple methods that scale well in very large networks. The efficiency of the suggested techniques is tested experimentally using five real-world graphs having millions of edges. While theoretical bounds support the claim that random landmarks work well in practice, our extensive experimentation shows that smart landmark selection can yield dramatically more accurate results: for a given target accuracy, our methods require as much as 250 times less space than selecting landmarks at random. In addition, we demonstrate that at a very small accuracy loss our techniques are several orders of magnitude faster than the state-of-the-art exact methods. Finally, we study an application of our methods to the task of social search in large graphs.
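
A minimal sketch of the landmark scheme for unweighted graphs: precompute a BFS from each landmark offline, then answer queries with a triangle-inequality bound. The adjacency-dict representation and names are illustrative, not the authors' code:

```python
from collections import deque

def bfs_dists(adj, src):
    """Hop distances from src to every reachable node."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def build_index(adj, landmarks):
    """Offline step: one BFS per landmark."""
    return {l: bfs_dists(adj, l) for l in landmarks}

def estimate(index, u, v):
    """Runtime step: upper bound d(u,v) <= d(u,l) + d(l,v),
    minimized over all landmarks l."""
    return min((d[u] + d[v] for d in index.values() if u in d and v in d),
               default=float("inf"))
```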

313 citations


Journal ArticleDOI
TL;DR: To solve the CDLP for real-size networks, the associated column generation subproblem is proved to be NP-hard, and a simple greedy heuristic is proposed to overcome the complexity of an exact algorithm.
Abstract: During the past few years, there has been a trend to enrich traditional revenue management models built upon the independent demand paradigm by accounting for customer choice behavior. This extension involves both modeling and computational challenges. One way to describe choice behavior is to assume that each customer belongs to a segment, which is characterized by a consideration set, i.e., a subset of the products provided by the firm that a customer views as options. Customers choose a particular product according to a multinomial-logit criterion, a model widely used in the marketing literature. In this paper, we consider the choice-based, deterministic, linear programming model (CDLP) of Gallego et al. (2004) [Gallego, G., G. Iyengar, R. Phillips, A. Dubey. 2004. Managing flexible products on a network. Technical Report CORC TR-2004-01, Department of Industrial Engineering and Operations Research, Columbia University, New York], and the follow-up dynamic programming decomposition heuristic of van Ryzin and Liu (2008) [van Ryzin, G. J., Q. Liu. 2008. On the choice-based linear programming model for network revenue management. Manufacturing Service Oper. Management 10(2) 288--310]. We focus on the more general version of these models, where customers belong to overlapping segments. To solve the CDLP for real-size networks, we need to develop a column generation algorithm. We prove that the associated column generation subproblem is indeed NP-hard and propose a simple, greedy heuristic to overcome the complexity of an exact algorithm. Our computational results show that the heuristic is quite effective and that the overall approach leads to high-quality, practical solutions.

303 citations


Journal ArticleDOI
TL;DR: A simple, fast, and effective iterated local search meta-heuristic is proposed to solve the TOPTW; combining an insert step with a shake step to escape from local optima produces a heuristic that performs very well on a large and diverse set of instances.
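
The TL;DR names the two ingredients; a generic iterated local search skeleton of that shape looks like the following (the interfaces are hypothetical, not the authors' code):

```python
def iterated_local_search(initial, insert_step, shake, score, iterations):
    """Generic ILS skeleton: improve with a local insert step, and when
    the search stalls, shake the incumbent to escape local optima."""
    best = current = insert_step(initial)
    for _ in range(iterations):
        current = insert_step(shake(current))  # perturb, then re-improve
        if score(current) > score(best):
            best = current
        else:
            current = best  # restart the next shake from the incumbent
    return best
```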

303 citations


Proceedings ArticleDOI
20 Apr 2009
TL;DR: A statistical extraction framework called Statistical Snowball (StatSnowball), which is a bootstrapping system that can perform both traditional relation extraction and Open IE, is proposed, and a working entity relation search engine called Renlifang is developed based on it.
Abstract: Traditional relation extraction methods require pre-specified relations and relation-specific human-tagged examples. Bootstrapping systems significantly reduce the number of training examples, but they usually apply heuristic-based methods to combine a set of strict hard rules, which limits the ability to generalize and thus yields low recall. Furthermore, existing bootstrapping methods do not perform open information extraction (Open IE), which can identify various types of relations without requiring pre-specifications. In this paper, we propose a statistical extraction framework called Statistical Snowball (StatSnowball), which is a bootstrapping system that can perform both traditional relation extraction and Open IE. StatSnowball uses discriminative Markov logic networks (MLNs) and softens hard rules by learning their weights in a maximum likelihood estimate sense. MLN is a general model and can be configured to perform different levels of relation extraction. In StatSnowball, pattern selection is performed by solving an l1-norm penalized maximum likelihood estimation, which enjoys well-founded theory and efficient solvers. We extensively evaluate the performance of StatSnowball in different configurations on both a small but fully labeled data set and large-scale Web data. Empirical results show that StatSnowball can achieve significantly higher recall without sacrificing high precision during iterations with a small number of seeds, and that the joint inference of MLN can improve the performance. Finally, StatSnowball is efficient, and we have developed a working entity relation search engine called Renlifang based on it.

281 citations


Book ChapterDOI
01 Jan 2009
TL;DR: This chapter discusses this class of hyper-heuristics, in which Genetic Programming is the most widely used methodology, and conveys the exciting potential of this innovative approach for automating the heuristic design process.
Abstract: Hyper-heuristics represent a novel search methodology that is motivated by the goal of automating the process of selecting or combining simpler heuristics in order to solve hard computational search problems. An extension of the original hyper-heuristic idea is to generate new heuristics which are not currently known. These approaches operate on a search space of heuristics rather than directly on a search space of solutions to the underlying problem, which is the case with most meta-heuristic implementations. In the majority of hyper-heuristic studies so far, a framework is provided with a set of human-designed heuristics, taken from the literature, with good measures of performance in practice. A less well-studied approach aims to generate new heuristics from a set of potential heuristic components. The purpose of this chapter is to discuss this class of hyper-heuristics, in which Genetic Programming is the most widely used methodology. A detailed discussion is presented, including the steps needed to apply this technique, some representative case studies, a literature review of related work, and a discussion of relevant issues. Our aim is to convey the exciting potential of this innovative approach for automating the heuristic design process.

Journal ArticleDOI
TL;DR: A novel Lagrangian relaxation approach is presented that, in combination with a branch-and-bound method, computes provably optimal network alignments and is reasonably fast and has advantages over pure heuristics.
Abstract: In addition to component-based comparative approaches, network alignments provide the means to study conserved network topology such as common pathways and more complex network motifs. Yet, unlike in classical sequence alignment, the comparison of networks becomes computationally more challenging, as most meaningful assumptions instantly lead to NP-hard problems. Most previous algorithmic work on network alignments is heuristic in nature. We introduce the graph-based maximum structural matching formulation for pairwise global network alignment. We relate the formulation to previous work and prove NP-hardness of the problem. Based on the new formulation we build upon recent results in computational structural biology and present a novel Lagrangian relaxation approach that, in combination with a branch-and-bound method, computes provably optimal network alignments. The Lagrangian algorithm alone is a powerful heuristic method, which produces solutions that are often near-optimal and – unlike those computed by pure heuristics – come with a quality guarantee. Computational experiments on the alignment of protein-protein interaction networks and on the classification of metabolic subnetworks demonstrate that the new method is reasonably fast and has advantages over pure heuristics. Our software tool is freely available as part of the LISA library.
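
The abstract omits algorithmic detail; in Lagrangian approaches of this kind, the dual bound $L(\lambda)$ is typically tightened by subgradient optimization, with a generic update of the form (standard textbook scheme, not necessarily the authors' exact rule):

$$
\lambda^{(t+1)} = \lambda^{(t)} - \theta_t\, g^{(t)},
\qquad
\theta_t = \frac{\gamma_t\left(L(\lambda^{(t)}) - \mathrm{LB}\right)}{\lVert g^{(t)} \rVert^2},
$$

where $g^{(t)}$ is a subgradient of $L$ at $\lambda^{(t)}$, LB is the value of the best feasible alignment found so far, and $\gamma_t \in (0, 2]$ is a decreasing scaling factor. The gap between $L(\lambda)$ and LB is exactly the quality guarantee the abstract mentions.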

Proceedings ArticleDOI
29 Sep 2009
TL;DR: This work proposes a novel approach called Fitnex, a search strategy that uses state-dependent fitness values (computed through a fitness function) to guide path exploration, and shows that this approach consistently achieves high code coverage faster than existing search strategies.
Abstract: Dynamic symbolic execution is a structural testing technique that systematically explores feasible paths of the program under test by running the program with different test inputs to improve code coverage. To address the space-explosion issue in path exploration, we propose a novel approach called Fitnex, a search strategy that uses state-dependent fitness values (computed through a fitness function) to guide path exploration. The fitness function measures how close an already discovered feasible path is to a particular test target (e.g., covering a not-yet-covered branch). Our new fitness-guided search strategy is integrated with other strategies that are effective for exploration problems where the fitness heuristic fails. We implemented the new approach in Pex, an automated structural testing tool developed at Microsoft Research. We evaluated our new approach by comparing it with existing search strategies. The empirical results show that our approach is effective since it consistently achieves high code coverage faster than existing search strategies.
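
The paper's exact fitness function is not given in the abstract; a common way search-based testing tools compute such state-dependent fitness values is through branch distances, sketched below (illustrative only, not the Fitnex implementation):

```python
def branch_distance(op, lhs, rhs):
    """Classic search-based-testing distance for a relational predicate:
    0 when the desired branch outcome is achieved, growing as the
    operand values move further from satisfying it."""
    if op == "==":
        return abs(lhs - rhs)
    if op == "!=":
        return 0 if lhs != rhs else 1
    if op == "<":
        return max(lhs - rhs + 1, 0)
    if op == "<=":
        return max(lhs - rhs, 0)
    if op == ">":
        return max(rhs - lhs + 1, 0)
    if op == ">=":
        return max(rhs - lhs, 0)
    raise ValueError(f"unknown operator: {op}")

# A path's fitness for a target branch would then be the smallest
# distance observed at the branching point along that path; smaller
# fitness means the path is closer to flipping the branch.
```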

Journal ArticleDOI
TL;DR: This paper presents a genetic algorithm for the Resource Constrained Project Scheduling Problem (RCPSP) using a heuristic priority rule in which the priorities of the activities are defined by the genetic algorithm.

Journal ArticleDOI
TL;DR: This work develops a column generation algorithm to solve the problem for a multinomial logit choice model with disjoint consideration sets (MNLD), and derives a bound as a by-product of a decomposition heuristic.
Abstract: We consider a network revenue management problem where customers choose among open fare products according to some prespecified choice model. Starting with a Markov decision process (MDP) formulation, we approximate the value function with an affine function of the state vector. We show that the resulting problem provides a tighter bound for the MDP value than the choice-based linear program. We develop a column generation algorithm to solve the problem for a multinomial logit choice model with disjoint consideration sets (MNLD). We also derive a bound as a by-product of a decomposition heuristic. Our numerical study shows the policies from our solution approach can significantly outperform heuristics from the choice-based linear program.

Journal ArticleDOI
TL;DR: The approach provides an example where an ACO algorithm successfully combines two completely different heuristic measures (with respect to loading and routing) within one pheromone matrix; the resulting algorithm clearly outperforms previous heuristics from the literature.

Journal ArticleDOI
TL;DR: An existing taxonomy for hybrid methods involving heuristic approaches is extended to consider cooperative schemes between exact methods and metaheuristics.

Journal ArticleDOI
TL;DR: This paper studies an NP-hard multi-period production-distribution problem that minimizes the sum of three costs (production setups, inventories, and distribution) and confirms the interest both of integrating production and distribution decisions and of using the MA|PM template.

Journal ArticleDOI
TL;DR: An effective variable neighbourhood search (VNS) heuristic for the open vehicle routing problem is proposed, based on reversing segments of routes (sub-routes) and exchanging segments between routes.

Journal ArticleDOI
TL;DR: In this paper, a portfolio selection model based on Markowitz's portfolio selection problem, extended with three of the most important practical limitations, is considered; the results can make Markowitz's model more practical.
Abstract: Heuristic algorithms enable researchers to solve more complex and combinatorial problems in reasonable time. Markowitz's mean-variance portfolio selection model is one such problem. In fact, Markowitz's model is a nonlinear (quadratic) programming problem that has been solved by a variety of heuristic and non-heuristic techniques. In this paper, a portfolio selection model based on Markowitz's portfolio selection problem, extended with three of the most important practical limitations, is considered. The results can make Markowitz's model more practical. Minimum transaction lots, cardinality constraints (both of which have been presented before in other studies), and market (sector) capitalization (which is proposed in this research for the first time as a constraint on the Markowitz model) are considered in the extended model. No study has previously proposed and solved this expanded model. To solve this NP-hard mixed-integer nonlinear program, a corresponding genetic algorithm (GA) is utilized. The computational study is performed in two main parts: first, verifying and validating the proposed GA, and second, studying the applicability of the presented model using large-scale problems.
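
For reference, the cardinality and minimum-lot constraints the abstract mentions are usually written as follows (generic textbook notation, not the paper's; the market-capitalization constraint is paper-specific and omitted here):

$$
\begin{aligned}
\min_{w,\,y} \quad & w^{\top} \Sigma\, w \\
\text{s.t.} \quad & \mu^{\top} w \ge R, \qquad \sum_{i=1}^{n} w_i = 1, \\
& \varepsilon_i\, y_i \le w_i \le \delta_i\, y_i, \quad i = 1, \dots, n, \\
& \sum_{i=1}^{n} y_i \le K, \qquad y_i \in \{0, 1\},
\end{aligned}
$$

where $\Sigma$ is the covariance matrix, $\mu$ the expected returns, $R$ the target return, $K$ the cardinality limit, and $[\varepsilon_i, \delta_i]$ the lot bounds enforced when asset $i$ is selected ($y_i = 1$). The binary variables make the problem mixed-integer and NP-hard, which is what motivates the GA.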

Proceedings Article
11 Jul 2009
TL;DR: Nested Monte-Carlo Search addresses the problem of guiding the search toward better states when no heuristic is available, using nested levels of random games to direct the search.
Abstract: Many problems have a huge state space and no good heuristic to order moves so as to guide the search toward the best positions. Random games can be used to score positions and evaluate their interest. Random games can also be improved using random games to choose a move to try at each step of a game. Nested Monte-Carlo Search addresses the problem of guiding the search toward better states when there is no available heuristic. It uses nested levels of random games in order to guide the search. The algorithm is studied theoretically on simple abstract problems and applied successfully to three different games: Morpion Solitaire, SameGame and 16×16 Sudoku.
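
The basic recursion is short enough to sketch; the following is a minimal illustrative version, assuming a hypothetical deterministic single-player game object with is_terminal(), legal_moves(), play(move), and score() methods (none of these names come from the paper):

```python
import random

def playout(state):
    """Level 0: finish the game with uniformly random moves."""
    seq = []
    while not state.is_terminal():
        move = random.choice(state.legal_moves())
        state = state.play(move)
        seq.append(move)
    return state.score(), seq

def nested(state, level):
    """Nested Monte-Carlo Search: at each step, score every legal move
    with a (level-1) search, memorize the best sequence found so far,
    and follow that sequence's next move."""
    if level == 0:
        return playout(state)
    best_score, best_seq = float("-inf"), None
    played = []
    while not state.is_terminal():
        for move in state.legal_moves():
            score, suffix = nested(state.play(move), level - 1)
            if score > best_score:
                best_score, best_seq = score, played + [move] + suffix
        move = best_seq[len(played)]   # follow the best known sequence
        state = state.play(move)
        played.append(move)
    return state.score(), played
```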

Journal ArticleDOI
TL;DR: The optimization problem is NP-hard, and several mixed integer linear programming solution approaches, either exact or heuristic in nature, are developed to facilitate the decision process of the operating room scheduler.

Journal ArticleDOI
TL;DR: Guided local search (GLS) is used to improve two heuristics proposed in the literature, and an extra heuristic is added to regularly diversify the search in order to explore more areas of the solution space.

Journal ArticleDOI
TL;DR: A mathematical model and a genetic algorithm (GA) for two-sided assembly line balancing (two-ALB) are presented, and the experimental results show that the proposed GA outperforms both the heuristic and the GA it is compared against.

Journal ArticleDOI
TL;DR: Comparisons with an existing heuristic from the literature and a lower bound computed with complete knowledge of customer demands show that the best partial reoptimization heuristics outperform this heuristic and are on average no more than 10%--13% away from this lower bound, depending on the type of instances.
Abstract: We consider the vehicle-routing problem with stochastic demands (VRPSD) under reoptimization. We develop and analyze a finite-horizon Markov decision process (MDP) formulation for the single-vehicle case and establish a partial characterization of the optimal policy. We also propose a heuristic solution methodology for our MDP, named partial reoptimization, based on the idea of restricting attention to a subset of all the possible states and computing an optimal policy on this restricted set of states. We discuss two families of computationally efficient partial reoptimization heuristics and illustrate their performance on a set of instances with up to and including 100 customers. Comparisons with an existing heuristic from the literature and a lower bound computed with complete knowledge of customer demands show that our best partial reoptimization heuristics outperform this heuristic and are on average no more than 10%--13% away from this lower bound, depending on the type of instances.

01 Jan 2009
TL;DR: A heuristic that combines a tabu search scheme with ad hoc designed mixed-integer programming models is presented, and its effectiveness is demonstrated on a set of benchmark instances for which the optimal solution is known.

Journal ArticleDOI
TL;DR: This work seeks an energy-optimal topology that maximizes network lifetime while ensuring simultaneously full area coverage and sensor connectivity to cluster heads, which are constrained to form a spanning tree used as a routing topology.
Abstract: Minimizing energy dissipation and maximizing network lifetime are important issues in the design of applications and protocols for sensor networks. Energy-efficient sensor state planning consists of finding an optimal assignment of states to sensors in order to maximize network lifetime. For example, in area surveillance applications, only an optimal subset of sensors that fully covers the monitored area can be switched on while the other sensors are turned off. In this paper, we address the optimal planning of sensors' states in cluster-based sensor networks. Typically, any sensor can be turned on, turned off, or promoted to cluster head, and a different power consumption level is associated with each of these states. We seek an energy-optimal topology that maximizes network lifetime while simultaneously ensuring full area coverage and sensor connectivity to cluster heads, which are constrained to form a spanning tree used as a routing topology. First, we formulate this problem as an Integer Linear Programming model, which we prove to be NP-complete. Then, we implement a Tabu search heuristic to tackle the exponentially increasing computation time of the exact resolution. Experimental results show that the proposed heuristic provides near-optimal network lifetime values within low computation times, which is, in practice, suitable for large-sized sensor networks.

Book ChapterDOI
19 Sep 2009
TL;DR: Temporal Fast Downward (TFD) is presented: a planning system for temporal problems that finds low-makespan plans by performing a heuristic search in a temporal search space and outperforms all state-of-the-art temporal planning systems.
Abstract: Planning systems for real-world applications need the ability to handle concurrency and numeric fluents. Nevertheless, the predominant approach to cope with concurrency followed by the most successful participants in the latest International Planning Competitions (IPC) is still to find a sequential plan that is rescheduled in a post-processing step. We present Temporal Fast Downward (TFD), a planning system for temporal problems that is capable of finding low-makespan plans by performing a heuristic search in a temporal search space. We show how the context-enhanced additive heuristic can be successfully used for temporal planning and how it can be extended to numeric fluents. TFD often produces plans of high quality and, evaluated according to the rating scheme of the last IPC, outperforms all state-of-the-art temporal planning systems.

Proceedings Article
11 Jul 2009
TL;DR: This work proposes a methodology for deriving admissible heuristic estimates for cost-optimal planning from a set of planning landmarks, and presents a simple best-first search procedure exploiting such heuristics.
Abstract: Planning landmarks are facts that must be true at some point in every solution plan. Previous work has very successfully exploited planning landmarks in satisficing (non-optimal) planning. We propose a methodology for deriving admissible heuristic estimates for cost-optimal planning from a set of planning landmarks. The resulting heuristics fall into a novel class of multi-path dependent heuristics, and we present a simple best-first search procedure exploiting such heuristics. Our empirical evaluation shows that this framework favorably competes with the state-of-the-art of cost-optimal heuristic search.

Journal ArticleDOI
TL;DR: Several heuristic and meta-heuristic methods are proposed and compared for elective surgery planning when operating room capacity is shared between elective and emergency surgery.