
Showing papers on "Heuristic published in 2005"


Journal ArticleDOI
TL;DR: Bounded sparse dynamic programming (BSDP) is introduced to allow rapid implementation of heuristics approximating to many complex alignment models, and has been incorporated into the freely available sequence alignment program, exonerate.
Abstract: Exhaustive methods of sequence alignment are accurate but slow, whereas heuristic approaches run quickly, but their complexity makes them more difficult to implement. We introduce bounded sparse dynamic programming (BSDP) to allow rapid approximation to exhaustive alignment. This is used within a framework whereby the alignment algorithms are described in terms of their underlying model, to allow automated development of efficient heuristic implementations which may be applied to a general set of sequence comparison problems. The speed and accuracy of this approach compares favourably with existing methods. Examples of its use in the context of genome annotation are given. This system allows rapid implementation of heuristics approximating to many complex alignment models, and has been incorporated into the freely available sequence alignment program, exonerate.
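
The paper's BSDP algorithm is considerably more elaborate, but the core idea of speeding up alignment by restricting the dynamic programming matrix to a bounded region can be illustrated with a simple banded global alignment sketch (the scoring scheme, band width, and function name below are illustrative choices, not exonerate's):

```python
def banded_alignment_score(a, b, band=5, match=1, mismatch=-1, gap=-2):
    """Global alignment score computed only inside a band of width
    `band` around the main diagonal; cells outside the band are never
    evaluated, which is what makes bounded DP fast."""
    n, m = len(a), len(b)
    NEG = float("-inf")
    # score[i][j] holds the best score aligning a[:i] with b[:j]
    score = [[NEG] * (m + 1) for _ in range(n + 1)]
    score[0][0] = 0
    for j in range(1, min(m, band) + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        if i <= band:
            score[i][0] = i * gap
        for j in range(max(1, i - band), min(m, i + band) + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[n][m]

print(banded_alignment_score("GATTACA", "GCATGCA"))
```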

2,292 citations


Journal ArticleDOI
TL;DR: This paper presents a hybrid genetic algorithm for the job shop scheduling problem based on random keys; the algorithm is tested on a set of standard instances taken from the literature and compared with other approaches.

577 citations


Journal ArticleDOI
TL;DR: Two heuristic methods for rule weight specification in fuzzy rule-based classification systems are proposed and compared with existing ones through computer simulations on artificial numerical examples and real-world pattern classification problems.
Abstract: This paper shows how the rule weight of each fuzzy rule can be specified in fuzzy rule-based classification systems. First, we propose two heuristic methods for rule weight specification. Next, the proposed methods are compared with existing ones through computer simulations on artificial numerical examples and real-world pattern classification problems. Simulation results show that the proposed methods outperform the existing ones in the case of multiclass pattern classification problems with many classes.
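
As a rough illustration of confidence-based rule weighting (a sketch of the general idea only, not necessarily either of the paper's two proposed definitions), one common scheme sets a rule's weight to the confidence of its consequent class minus the average confidence of the competing classes:

```python
def rule_weight(confidences, consequent):
    """Weight of one fuzzy rule: confidence of its consequent class
    minus the mean confidence of all other classes. Rules whose
    antecedent is compatible with many classes thus get small weights."""
    others = [c for cls, c in confidences.items() if cls != consequent]
    return confidences[consequent] - sum(others) / len(others)

# hypothetical per-class confidences c(A => class) for a single rule
print(rule_weight({"c1": 0.6, "c2": 0.3, "c3": 0.1}, "c1"))
```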

454 citations


Journal ArticleDOI
TL;DR: This method first finds a solution to the corresponding vehicle routing problem (VRP) and then modifies it to make it feasible for the VRP with pickups and deliveries (VRPPD); it is also capable of solving multi-depot problems, which had not been done before.

448 citations


Proceedings Article
01 Jun 2005
TL;DR: The results show that the new algorithms, especially WHCA*, are robust and efficient solutions to the Cooperative Pathfinding problem, finding more successful routes and following better paths than Local Repair A*.
Abstract: Cooperative Pathfinding is a multi-agent path planning problem where agents must find non-colliding routes to separate destinations, given full information about the routes of other agents. This paper presents three new algorithms for efficiently solving this problem, suitable for use in Real-Time Strategy games and other real-time environments. The algorithms are decoupled approaches that break down the problem into a series of single-agent searches. Cooperative A* (CA*) searches space-time for a non-colliding route. Hierarchical Cooperative A* (HCA*) uses an abstract heuristic to boost performance. Finally, Windowed Hierarchical Cooperative A* (WHCA*) limits the space-time search depth to a dynamic window, spreading computation over the duration of the route. The algorithms are applied to a series of challenging, maze-like environments, and compared to A* with Local Repair (the current video-games industry standard). The results show that the new algorithms, especially WHCA*, are robust and efficient solutions to the Cooperative Pathfinding problem, finding more successful routes and following better paths than Local Repair A*.
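
The essence of CA* is a space-time reservation table shared between agents: each agent plans with A* over (cell, time) pairs, treating reserved pairs as blocked, and then reserves its own route. A minimal sketch under simplifying assumptions (4-connected grid, Manhattan-distance heuristic, no edge-swap conflict checks, agents vanish on arrival; all names are illustrative):

```python
import heapq

def cooperative_astar(grid, start, goal, reservations, max_t=100):
    """Single-agent A* in space-time; `reservations` holds (cell, time)
    pairs already claimed by other agents -- the key idea of CA*."""
    def h(c):  # admissible Manhattan-distance heuristic
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    open_heap = [(h(start), 0, start, [start])]
    seen = set()
    while open_heap:
        f, t, cell, path = heapq.heappop(open_heap)
        if cell == goal:
            return path
        if (cell, t) in seen or t >= max_t:
            continue
        seen.add((cell, t))
        x, y = cell
        # wait in place or move to a 4-connected neighbour
        for nxt in [(x, y), (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if nxt in grid and (nxt, t + 1) not in reservations:
                heapq.heappush(open_heap, (t + 1 + h(nxt), t + 1, nxt, path + [nxt]))
    return None  # no route within the time horizon

# each agent plans in turn, then reserves its route in space-time
grid = {(x, y) for x in range(8) for y in range(8)}
reserved = set()
for start, goal in [((0, 0), (7, 7)), ((7, 0), (0, 7))]:
    route = cooperative_astar(grid, start, goal, reserved)
    reserved.update((cell, t) for t, cell in enumerate(route))
```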

355 citations


Journal ArticleDOI
TL;DR: This paper explains the hypothesis that much human decision-making can be described by simple algorithmic process models (heuristics), and relates it to research in biology on rules of thumb, which it reviews.

334 citations


Journal ArticleDOI
TL;DR: Simulations of the recognition heuristic demonstrate that forgetting can boost accuracy by increasing the chances that only 1 object is recognized, and that loss of information aids inference heuristics that exploit mnemonic information.
Abstract: Some theorists, ranging from W. James (1890) to contemporary psychologists, have argued that forgetting is the key to proper functioning of memory. The authors elaborate on the notion of beneficial forgetting by proposing that loss of information aids inference heuristics that exploit mnemonic information. To this end, the authors bring together 2 research programs that take an ecological approach to studying cognition. Specifically, they implement fast and frugal heuristics within the ACT-R cognitive architecture. Simulations of the recognition heuristic, which relies on systematic failures of recognition to infer which of 2 objects scores higher on a criterion value, demonstrate that forgetting can boost accuracy by increasing the chances that only 1 object is recognized. Simulations of the fluency heuristic, which arrives at the same inference on the basis of the speed with which objects are recognized, indicate that forgetting aids the discrimination between the objects' recognition speeds.
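
The recognition heuristic itself is strikingly simple; a minimal sketch (the example objects are illustrative):

```python
import random

def recognition_heuristic(a, b, recognized):
    """Infer which of two objects scores higher on a criterion:
    if exactly one object is recognized, pick it; otherwise guess.
    Forgetting helps precisely because it keeps recognition partial,
    making the 'exactly one recognized' case more frequent."""
    if (a in recognized) != (b in recognized):
        return a if a in recognized else b
    return random.choice([a, b])  # both or neither recognized: guess

# illustrative use: recognized cities tend to be the larger ones
recognized = {"Berlin", "Hamburg", "Munich"}
print(recognition_heuristic("Berlin", "Heilbronn", recognized))
```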

326 citations


Journal ArticleDOI
TL;DR: A model-free, heuristic reinforcement learning algorithm is presented that aims at finding good deterministic policies by weighting the original value function against the risk; it was successfully applied to the control of a feed tank with stochastic inflows that lies upstream of a distillation column.
Abstract: In this paper, we consider Markov Decision Processes (MDPs) with error states. Error states are those states entering which is undesirable or dangerous. We define the risk with respect to a policy as the probability of entering such a state when the policy is pursued. We consider the problem of finding good policies whose risk is smaller than some user-specified threshold, and formalize it as a constrained MDP with two criteria. The first criterion corresponds to the value function originally given. We will show that the risk can be formulated as a second criterion function based on a cumulative return, whose definition is independent of the original value function. We present a model free, heuristic reinforcement learning algorithm that aims at finding good deterministic policies. It is based on weighting the original value function and the risk. The weight parameter is adapted in order to find a feasible solution for the constrained problem that has a good performance with respect to the value function. The algorithm was successfully applied to the control of a feed tank with stochastic inflows that lies upstream of a distillation column. This control task was originally formulated as an optimal control problem with chance constraints, and it was solved under certain assumptions on the model to obtain an optimal solution. The power of our learning algorithm is that it can be used even when some of these restrictive assumptions are relaxed.
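
A minimal sketch of the weighted two-criteria idea in tabular form (the paper additionally adapts the weight online to satisfy the user-specified risk threshold; here it is fixed, and all names and constants are illustrative):

```python
import random
from collections import defaultdict

# Learn a value estimate Q and a risk estimate R (probability of
# eventually entering an error state) per state-action pair, and act
# greedily on the weighted combination Q - xi * R.
Q = defaultdict(float)
R = defaultdict(float)
alpha, gamma, xi, eps = 0.1, 0.95, 2.0, 0.1

def choose_action(state, actions):
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)] - xi * R[(state, a)])

def update(state, action, reward, next_state, next_actions, is_error):
    best = max(next_actions, key=lambda a: Q[(next_state, a)] - xi * R[(next_state, a)])
    # standard TD update for the value criterion
    Q[(state, action)] += alpha * (reward + gamma * Q[(next_state, best)] - Q[(state, action)])
    # the risk criterion propagates the probability of reaching an error state
    risk_target = 1.0 if is_error else R[(next_state, best)]
    R[(state, action)] += alpha * (risk_target - R[(state, action)])
```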

283 citations


Proceedings ArticleDOI
12 Dec 2005
TL;DR: Sequential parameter optimization as discussed by the authors is a heuristic that combines classical and modern statistical techniques to improve the performance of search algorithms; it can be performed algorithmically and basically requires only the specification of the relevant algorithm's parameters.
Abstract: Sequential parameter optimization is a heuristic that combines classical and modern statistical techniques to improve the performance of search algorithms. To demonstrate its flexibility, three scenarios are discussed: (1) no experience in choosing the parameter setting of an algorithm is available, (2) a comparison with other algorithms is needed, and (3) an optimization algorithm has to be applied effectively and efficiently to a complex real-world optimization problem. Although sequential parameter optimization relies on enhanced statistical techniques such as design and analysis of computer experiments, it can be performed algorithmically and basically requires only the specification of the relevant algorithm's parameters.
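
A toy version of the sequential loop, shrunk to one numeric parameter and a quadratic surrogate (the paper uses much richer stochastic process models and experimental designs; the objective function below is hypothetical):

```python
import numpy as np

def spo_1d(run_algorithm, lo, hi, n_init=5, n_iter=10):
    """Toy sequential parameter optimization: evaluate an initial
    design, fit a quadratic surrogate to the observed (parameter,
    performance) pairs, then repeatedly evaluate the surrogate's most
    promising candidate and refit."""
    xs = list(np.linspace(lo, hi, n_init))
    ys = [run_algorithm(x) for x in xs]
    for _ in range(n_iter):
        coeffs = np.polyfit(xs, ys, deg=2)          # surrogate model
        grid = np.linspace(lo, hi, 200)
        candidate = grid[np.argmin(np.polyval(coeffs, grid))]
        xs.append(float(candidate))
        ys.append(run_algorithm(candidate))          # real (noisy) evaluation
    return xs[int(np.argmin(ys))]

# hypothetical target: noisy performance curve with optimum near 2.0
best = spo_1d(lambda x: (x - 2.0) ** 2 + np.random.normal(0, 0.1), 0.0, 5.0)
print(best)
```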

283 citations


Book ChapterDOI
01 Jan 2005
TL;DR: This paper discusses some of the most critical aspects of the dynamic simulation of road networks, namely the heuristic dynamic assignment, the implied route choice models, and the validation methodology, a key issue in determining the degree of validity and significance of the simulation results.
Abstract: The deployment of ITS must be assisted by suitable tools to conduct the feasibility studies required for testing the designs and evaluating the expected impacts. Microscopic traffic simulation has proven to be the suitable methodological approach to achieve these goals. This paper discusses some of the most critical aspects of the dynamic simulation of road networks, namely the heuristic dynamic assignment, the implied route choice models, and the validation methodology, a key issue in determining the degree of validity and significance of the simulation results. The paper is structured in two parts: the first provides an overview of how the main features of microscopic simulation have been implemented in AIMSUN, and the second is devoted to discussing the heuristic dynamic assignment in detail.

276 citations


Journal ArticleDOI
TL;DR: The objective of this paper is to show that justification is a simple technique that can be easily incorporated in diverse algorithms for the resource-constrained project scheduling problem, improving the quality of the schedules generated without generally requiring more computing time.

Journal ArticleDOI
W. C. Ng
TL;DR: A dynamic programming-based heuristic is developed to solve the scheduling problem, together with an algorithm that finds lower bounds for benchmarking the schedules produced by the heuristic.

Journal ArticleDOI
01 Apr 2005
TL;DR: A hybrid algorithm of two fuzzy genetics-based machine learning approaches (i.e., Michigan and Pittsburgh) for designing fuzzy rule-based classification systems is proposed, and experimental results show that the hybrid algorithm has higher search ability.
Abstract: We propose a hybrid algorithm of two fuzzy genetics-based machine learning approaches (i.e., Michigan and Pittsburgh) for designing fuzzy rule-based classification systems. First, we examine the search ability of each approach to efficiently find fuzzy rule-based systems with high classification accuracy. It is clearly demonstrated that each approach has its own advantages and disadvantages. Next, we combine these two approaches into a single hybrid algorithm. Our hybrid algorithm is based on the Pittsburgh approach where a set of fuzzy rules is handled as an individual. Genetic operations for generating new fuzzy rules in the Michigan approach are utilized as a kind of heuristic mutation for partially modifying each rule set. Then, we compare our hybrid algorithm with the Michigan and Pittsburgh approaches. Experimental results show that our hybrid algorithm has higher search ability. The necessity of a heuristic specification method of antecedent fuzzy sets is also demonstrated by computational experiments on high-dimensional problems. Finally, we examine the generalization ability of fuzzy rule-based classification systems designed by our hybrid algorithm.
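
Structurally, the hybrid can be pictured as a Pittsburgh-style GA whose mutation step borrows from the Michigan approach; a skeletal sketch with the operators left abstract (all parameter values and names are illustrative, not the paper's):

```python
import random

def hybrid_gbml(init_rule_sets, fitness, crossover, michigan_step,
                generations=100, p_michigan=0.5):
    """Pittsburgh-style GA: each individual is a whole rule set.
    Michigan-style single-rule generation is applied to offspring as a
    heuristic mutation that partially modifies each rule set."""
    population = list(init_rule_sets)
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: len(population) // 2]
        children = []
        while len(children) < len(population) - len(parents):
            a, b = random.sample(parents, 2)
            child = crossover(a, b)           # Pittsburgh-level recombination
            if random.random() < p_michigan:
                child = michigan_step(child)  # replace a few rules inside the set
            children.append(child)
        population = parents + children
    return max(population, key=fitness)
```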

Book ChapterDOI
06 Jul 2005
TL;DR: A conservative method is presented that automatically fixes faults in a finite state program by considering the repair problem as a game; the problem of finding a memoryless strategy is shown to be NP-complete, and a heuristic is presented.
Abstract: We present a conservative method to automatically fix faults in a finite state program by considering the repair problem as a game. The game consists of the product of a modified version of the program and an automaton representing the LTL specification. Every winning finite state strategy for the game corresponds to a repair. The opposite does not hold, but we show conditions under which the existence of a winning strategy is guaranteed. A finite state strategy corresponds to a repair that adds variables to the program, which we argue is undesirable. To avoid extra state, we need a memoryless strategy. We show that the problem of finding a memoryless strategy is NP-complete and present a heuristic. We have implemented the approach symbolically and present initial evidence of its usefulness.

Journal ArticleDOI
TL;DR: This analysis is the first large-scale demonstration that LP-based approaches are highly effective in finding optimal (and successive near-optimal) solutions for the side-chain positioning problem.
Abstract: Motivation: Side-chain positioning is a central component of homology modeling and protein design. In a common formulation of the problem, the backbone is fixed, side-chain conformations come from a rotamer library, and a pairwise energy function is optimized. It is NP-complete to find even a reasonable approximate solution to this problem. We seek to put this hardness result into practical context. Results: We present an integer linear programming (ILP) formulation of side-chain positioning that allows us to tackle large problem sizes. We relax the integrality constraint to give a polynomial-time linear programming (LP) heuristic. We apply LP to position side chains on native and homologous backbones and to choose side chains for protein design. Surprisingly, when positioning side chains on native and homologous backbones, optimal solutions using a simple, biologically relevant energy function can usually be found using LP. On the other hand, the design problem often cannot be solved using LP directly; however, optimal solutions for large instances can still be found using the computationally more expensive ILP procedure. While different energy functions also affect the difficulty of the problem, the LP/ILP approach is able to find optimal solutions. Our analysis is the first large-scale demonstration that LP-based approaches are highly effective in finding optimal (and successive near-optimal) solutions for the side-chain positioning problem. Availability: The source code for generating the ILP given a file of pairwise energies between rotamers is available online at http://compbio.cs.princeton.edu/scplp Contact: msingh@cs.princeton.edu
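
A sketch of the standard ILP formulation of side-chain positioning, written here with the PuLP modelling library (an illustrative formulation with hypothetical parameter names; consult the paper for the exact model — relaxing the Binary categories to Continuous yields the LP heuristic):

```python
import itertools
import pulp

def side_chain_ilp(E_single, E_pair, residues, rotamers):
    """Binary x[i,r] picks rotamer r at residue i; y[i,r,j,s] picks the
    pair; minimize the total single plus pairwise energy."""
    prob = pulp.LpProblem("scp", pulp.LpMinimize)
    x = {(i, r): pulp.LpVariable(f"x_{i}_{r}", cat="Binary")
         for i in residues for r in rotamers[i]}
    y = {(i, r, j, s): pulp.LpVariable(f"y_{i}_{r}_{j}_{s}", cat="Binary")
         for i, j in itertools.combinations(residues, 2)
         for r in rotamers[i] for s in rotamers[j]}
    prob += (pulp.lpSum(E_single[i, r] * x[i, r] for (i, r) in x)
             + pulp.lpSum(E_pair[k] * y[k] for k in y))
    for i in residues:                        # exactly one rotamer per residue
        prob += pulp.lpSum(x[i, r] for r in rotamers[i]) == 1
    for i, j in itertools.combinations(residues, 2):
        for r in rotamers[i]:                 # pair variables consistent with x
            prob += pulp.lpSum(y[i, r, j, s] for s in rotamers[j]) == x[i, r]
        for s in rotamers[j]:
            prob += pulp.lpSum(y[i, r, j, s] for r in rotamers[i]) == x[j, s]
    prob.solve()
    return {i: r for (i, r), var in x.items() if var.value() == 1}
```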

Book ChapterDOI
08 Sep 2005
TL;DR: This paper is concerned with ambiguous label classification (ALC), an extension of standard supervised classification in which several candidate labels may be assigned to a single example, and shows that appropriately designed learning algorithms can successfully exploit the information contained in ambiguously labeled examples.
Abstract: Inducing a classification function from a set of examples in the form of labeled instances is a standard problem in supervised machine learning. In this paper, we are concerned with ambiguous label classification (ALC), an extension of this setting in which several candidate labels may be assigned to a single example. By extending three concrete classification methods to the ALC setting (nearest neighbor classification, decision tree learning, and rule induction) and evaluating their performance on benchmark data sets, we show that appropriately designed learning algorithms can successfully exploit the information contained in ambiguously labeled examples. Our results indicate that the fundamental idea of the extended methods, namely to disambiguate the label information by means of the inductive bias underlying (heuristic) machine learning methods, works well in practice.
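
As a concrete instance of exploiting ambiguous labels, here is a minimal nearest-neighbour sketch in which each training example carries a set of candidate labels (one plausible voting rule among several; the paper evaluates specifically designed extensions of three methods):

```python
from collections import Counter

def ambiguous_knn(train, query, k, dist):
    """Each training instance carries a *set* of candidate labels; the
    query receives the label most frequent among the candidate sets of
    its k nearest neighbours."""
    neighbours = sorted(train, key=lambda ex: dist(ex[0], query))[:k]
    votes = Counter(label for _, labels in neighbours for label in labels)
    return votes.most_common(1)[0][0]

# toy 1-D example: instances with ambiguous candidate label sets
train = [(0.1, {"a"}), (0.2, {"a", "b"}), (0.9, {"b"}), (1.0, {"b", "c"})]
print(ambiguous_knn(train, 0.15, k=2, dist=lambda x, y: abs(x - y)))
```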

Journal ArticleDOI
TL;DR: This problem is formulated as a non-linear 0-1 programming model in which the distance between the machines is sequence dependent, and a technique is proposed to implement the algorithm efficiently.

Journal ArticleDOI
TL;DR: A heuristic is developed specifying that temperatures in replica-exchange simulations should be spaced such that about 20% of the phase-swap attempts are accepted, finding the result to be independent of the heat capacity.
Abstract: A heuristic is developed specifying that temperatures in replica-exchange simulations should be spaced such that about 20% of the phase-swap attempts are accepted. The result is found to be independent of the heat capacity, suggesting that it may be applied generally despite being based on an assumption of (piecewise-) constant heat capacity.
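
Under the (piecewise-)constant heat capacity assumption, a uniform swap-acceptance rate across replica pairs corresponds to geometrically spaced temperatures, so applying the ~20% rule reduces to choosing the number of replicas on a geometric ladder (a sketch of the consequence, not the paper's derivation; the example values are illustrative):

```python
def geometric_temperature_ladder(t_min, t_max, n):
    """Temperatures spaced geometrically between t_min and t_max.
    With (piecewise-)constant heat capacity, a geometric ladder gives a
    roughly uniform swap-acceptance rate across all adjacent replica
    pairs; n is then tuned so that rate lands near the ~20% target."""
    ratio = (t_max / t_min) ** (1.0 / (n - 1))
    return [t_min * ratio**i for i in range(n)]

print(geometric_temperature_ladder(300.0, 600.0, 8))
```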

Proceedings Article
05 Jun 2005
TL;DR: This paper proposes the targeted use of an additional relaxation, mapping the relaxed contingent problem into a relaxed conformant problem, and shows that the resulting planning system, Contingent-FF, is highly competitive with the state-of-the-art contingent planners POND and MBP.
Abstract: Contingent planning is the task of generating a conditional plan given uncertainty about the initial state and action effects, but with the ability to observe some aspects of the current world state. Contingent planning can be transformed into an And-Or search problem in belief space, the space whose elements are sets of possible worlds. In (Brafman & Hoffmann 2004), we introduced a method for implicitly representing a belief state using a propositional formula that describes the sequence of actions leading to that state. This representation trades off space for time and was shown to be quite effective for conformant planning within a heuristic forward-search planner based on the FF system. In this paper we apply the same architecture to contingent planning. The changes required to adapt the search space representation are small. More effort is required to adapt the relaxed planning problems whose solution informs the forward search algorithm. We propose the targeted use of an additional relaxation, mapping the relaxed contingent problem into a relaxed conformant problem. Experimental results show that the resulting planning system, Contingent-FF, is highly competitive with the state-of-the-art contingent planners POND and MBP.

01 Jan 2005
TL;DR: In this paper, the authors compare and contrast the properties of DTA modelled with point queues versus those with physical queues, and discuss their implications on the accuracy and fidelity of the model results.
Abstract: Dynamic Traffic Assignment (DTA) is long recognized as a key component for network planning and transport policy evaluations as well as for real-time traffic operation and management. How traffic is encapsulated in a DTA model has important implications on the accuracy and fidelity of the model results. This study compares and contrasts the properties of DTA modelled with point queues versus those with physical queues, and discusses their implications. One important finding is that with the more accurate physical queue paradigm, under certain congested conditions, solutions for the commonly adopted dynamic user optimal (DUO) route choice principle just do not exist. To provide some initial thinking to accommodate this finding, this study introduces the tolerance-based DUO principle. This paper also discusses its solution existence and uniqueness, develops a solution heuristic, and demonstrates its properties through numerical examples. Finally, we conclude by presenting some prospective future research directions.

Journal ArticleDOI
TL;DR: This work extends a cost-based location-inventory model to include a customer service element and proposes a heuristic solution approach based on genetic algorithms that can generate optimal or close-to-optimal solutions in a much shorter time compared to the weighting method.
Abstract: When designing supply chains, firms are often faced with the competing demands of improved customer service and reduced cost. We extend a cost-based location-inventory model (Shen et al. 2003) to include a customer service element and develop practical methods for quick and meaningful evaluation of cost/service trade-offs. Service is measured by the fraction of all demands that are located within an exogenously specified distance of the assigned distribution center. The nonlinear model simultaneously determines distribution center locations and the assignment of demand nodes to distribution centers to optimize the cost and service objectives. We use a weighting method to find all supported points on the trade-off curve. We also propose a heuristic solution approach based on genetic algorithms that can generate optimal or close-to-optimal solutions in a much shorter time compared to the weighting method. Our results suggest that significant service improvements can be achieved relative to the minimum cost solution at a relatively small incremental cost.
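
The weighting method can be pictured on a finite candidate set: each weight w collapses the two objectives into one, and sweeping w recovers exactly the supported points of the trade-off curve (toy data below; the actual model locates distribution centers and assigns demand nodes simultaneously):

```python
def supported_points(candidates, weights):
    """Weighting method on a finite set of (cost, service) designs: for
    each weight w, pick the design minimizing w*cost - (1-w)*service.
    Only supported Pareto points can ever be returned this way."""
    frontier = set()
    for w in weights:
        best = min(candidates, key=lambda d: w * d[0] - (1 - w) * d[1])
        frontier.add(best)
    return sorted(frontier)

# hypothetical (cost, service) pairs; (140, 0.80) is dominated and
# is therefore never selected for any weight
designs = [(100, 0.70), (120, 0.85), (150, 0.95), (140, 0.80)]
print(supported_points(designs, [i / 10 for i in range(11)]))
```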

Journal ArticleDOI
TL;DR: The molecular interaction map (MIM) notation is described formally and its merits relative to alternative proposals are discussed and it is shown by simple examples how to denote all of the molecular interactions commonly found in bioregulatory networks.
Abstract: A standard for bioregulatory network diagrams is urgently needed in the same way that circuit diagrams are needed in electronics. Several graphical notations have been proposed, but none has become standard. We have prepared many detailed bioregulatory network diagrams using the molecular interaction map (MIM) notation, and we now feel confident that it is suitable as a standard. Here, we describe the MIM notation formally and discuss its merits relative to alternative proposals. We show by simple examples how to denote all of the molecular interactions commonly found in bioregulatory networks. There are two forms of MIM diagrams. "Heuristic" MIMs present the repertoire of interactions possible for molecules that are colocalized in time and place. "Explicit" MIMs define particular models (derived from heuristic MIMs) for computer simulation. We show also how pathways or processes can be highlighted on a canonical heuristic MIM. Drawing a MIM diagram, adhering to the rules of notation, imposes a logical discipline that sharpens one's understanding of the structure and function of a network.

Proceedings Article
09 Jul 2005
TL;DR: Methods for automatically deriving additive h^m and PDB heuristics from STRIPS encodings are advanced, and improvement over existing heuristics in several domains is shown, although, not surprisingly, no heuristic dominates all the others over all domains.
Abstract: Admissible heuristics are critical for effective domain-independent planning when optimal solutions must be guaranteed. Two useful heuristics are the h^m heuristics, which generalize the reachability heuristic underlying the planning graph, and pattern database (PDB) heuristics. These heuristics, however, have serious limitations: reachability heuristics capture only the cost of critical paths in a relaxed problem, ignoring the cost of other relevant paths, while PDB heuristics, additive or not, cannot accommodate too many variables in patterns, and methods for automatically selecting patterns that produce good estimates are not known. We introduce two refinements of these heuristics: First, the additive h^m heuristic, which yields an admissible sum of h^m heuristics using a partitioning of the set of actions. Second, the constrained PDB heuristic, which uses constraints from the original problem to strengthen the lower bounds obtained from abstractions. The new heuristics depend on the way the actions or problem variables are partitioned. We advance methods for automatically deriving additive h^m and PDB heuristics from STRIPS encodings. Evaluation shows improvement over existing heuristics in several domains, although, not surprisingly, no heuristic dominates all the others over all domains.

Journal ArticleDOI
TL;DR: The overall investigation gives a rare example of a successful analysis of the connections between typical-case problem structure and search performance, and gives hints on how the topological phenomena might be automatically recognizable by domain analysis techniques.
Abstract: Between 1998 and 2004, the planning community has seen vast progress in terms of the sizes of benchmark examples that domain-independent planners can tackle successfully. The key technique behind this progress is the use of heuristic functions based on relaxing the planning task at hand, where the relaxation is to assume that all delete lists are empty. The unprecedented success of such methods, in many commonly used benchmark examples, calls for an understanding of what classes of domains these methods are well suited for. In the investigation at hand, we derive a formal background to such an understanding. We perform a case study covering a range of 30 commonly used STRIPS and ADL benchmark domains, including all examples used in the first four international planning competitions. We prove connections between domain structure and local search topology – heuristic cost surface properties – under an idealized version of the heuristic functions used in modern planners. The idealized heuristic function is called h+, and differs from the practically used functions in that it returns the length of an optimal relaxed plan, which is NP-hard to compute. We identify several key characteristics of the topology under h+, concerning the existence/non-existence of unrecognized dead ends, as well as the existence/non-existence of constant upper bounds on the difficulty of escaping local minima and benches. These distinctions divide the (set of all) planning domains into a taxonomy of classes of varying h+ topology. As it turns out, many of the 30 investigated domains lie in classes with a relatively easy topology. Most particularly, 12 of the domains lie in classes where FF's search algorithm, provided with h+, is a polynomial solving mechanism. We also present results relating h+ to its approximation as implemented in FF. The behavior regarding dead ends is provably the same. We summarize the results of an empirical investigation showing that, in many domains, the topological qualities of h+ are largely inherited by the approximation. The overall investigation gives a rare example of a successful analysis of the connections between typical-case problem structure and search performance. The theoretical investigation also gives hints on how the topological phenomena might be automatically recognizable by domain analysis techniques. We outline some preliminary steps we made in that direction.

Journal ArticleDOI
TL;DR: An adaptive memory programming method for solving the capacitated vehicle routing problem called Solutions' Elite PArts Search (SEPAS), which generates initial solutions via a systematic diversification technique and stores their routes in an adaptive memory.

Journal ArticleDOI
TL;DR: A semidefinite programming (SDP) relaxation is constructed that provides a lower bound on the optimal value of the one-dimensional space-allocation problem (ODSAP), also known as the single-row facility layout problem, which consists in finding an optimal linear placement of facilities with varying dimensions on a straight line.

Journal ArticleDOI
TL;DR: A heuristic algorithm is developed to balance the workload among all pickers so that the utilization of the order picking system is improved and to reduce the time needed for fulfilling each requested order.

Proceedings ArticleDOI
23 Jan 2005
TL;DR: This is the first construction showing that the k-means heuristic requires more than a polylogarithmic number of iterations, and the spread of the point set in this construction is polynomial.
Abstract: We present polynomial upper and lower bounds on the number of iterations performed by the k-means method (a.k.a. Lloyd's method) for k-means clustering. Our upper bounds are polynomial in the number of points, number of clusters, and the spread of the point set. We also present a lower bound, showing that in the worst case the k-means heuristic needs to perform Ω(n) iterations, for n points on the real line and two centers. Surprisingly, the spread of the point set in this construction is polynomial. This is the first construction showing that the k-means heuristic requires more than a polylogarithmic number of iterations. Furthermore, we present two alternative algorithms, with guaranteed performance, which are simple variants of the k-means method.
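
For reference, the k-means heuristic whose iteration count the paper bounds is just Lloyd's alternation, shown here for points on the real line with two centres to match the lower-bound setting (a minimal sketch):

```python
import random

def kmeans(points, k, iters=100):
    """Lloyd's method: alternate assigning each point to its nearest
    centre and moving each centre to the mean of its assigned points.
    Each iteration can only decrease the clustering cost; the number of
    iterations until convergence is what the paper bounds."""
    centres = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: (p - centres[c]) ** 2)
            clusters[i].append(p)
        new = [sum(c) / len(c) if c else centres[i] for i, c in enumerate(clusters)]
        if new == centres:
            break  # converged: assignments can no longer change
        centres = new
    return centres

print(kmeans([1.0, 1.1, 0.9, 5.0, 5.2, 4.8], 2))
```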

Journal ArticleDOI
Chinyao Low
TL;DR: This article addresses a multi-stage flow shop scheduling problem with unrelated parallel machines and proposes a simulated annealing (SA)-based heuristic that solves the problem in a reasonable running time.

Journal ArticleDOI
TL;DR: The proposed solution method is a memetic algorithm based on a sophisticated crossover operator, able to simultaneously change tactical (planning) decisions and operational decisions, such as the trips performed each day.