
Showing papers on "Heuristic (computer science) published in 2004"


Book ChapterDOI
31 Aug 2004
TL;DR: This paper enriches interactive sensor querying with statistical modeling techniques, and demonstrates that such models can help provide answers that are both more meaningful, and, by introducing approximations with probabilistic confidences, significantly more efficient to compute in both time and energy.
Abstract: Declarative queries are proving to be an attractive paradigm for interacting with networks of wireless sensors. The metaphor that "the sensornet is a database" is problematic, however, because sensors do not exhaustively represent the data in the real world. In order to map the raw sensor readings onto physical reality, a model of that reality is required to complement the readings. In this paper, we enrich interactive sensor querying with statistical modeling techniques. We demonstrate that such models can help provide answers that are both more meaningful, and, by introducing approximations with probabilistic confidences, significantly more efficient to compute in both time and energy. Utilizing the combination of a model and live data acquisition raises the challenging optimization problem of selecting the best sensor readings to acquire, balancing the increase in the confidence of our answer against the communication and data acquisition costs in the network. We describe an exponential time algorithm for finding the optimal solution to this optimization problem, and a polynomial-time heuristic for identifying solutions that perform well in practice. We evaluate our approach on several real-world sensor-network data sets, taking into account the real measured data and communication quality, demonstrating that our model-based approach provides a high-fidelity representation of the real phenomena and leads to significant performance gains versus traditional data acquisition techniques.

1,218 citations


Journal ArticleDOI
TL;DR: This paper presents an ant colony optimization methodology for optimally clustering N objects into K clusters; it employs distributed agents that mimic the way real ants find the shortest path from their nest to a food source and back.

496 citations


Book ChapterDOI
01 Jan 2004
TL;DR: Results show Discrete PSO is certainly not as powerful as some specific algorithms, but, on the other hand, it can easily be modified for any discrete/combinatorial problem for which the authors have no good specialized algorithm.
Abstract: The classical Particle Swarm Optimization is a powerful method to find the minimum of a numerical function, on a continuous definition domain. As some binary versions have already successfully been used, it seems quite natural to try to define a framework for a discrete PSO. In order to better understand both the power and the limits of this approach, we examine in detail how it can be used to solve the well known Traveling Salesman Problem, which is in principle very “bad” for this kind of optimization heuristic. Results show Discrete PSO is certainly not as powerful as some specific algorithms, but, on the other hand, it can easily be modified for any discrete/combinatorial problem for which we have no good specialized algorithm.

429 citations


Journal ArticleDOI
TL;DR: In this article, the authors introduce Pareto Ant Colony Optimization as an especially effective meta-heuristic for solving the portfolio selection problem and compare its performance to other heuristic approaches by means of computational experiments with random instances.
Abstract: Selecting the “best” project portfolio out of a given set of investment proposals is a common and often critical management issue. Decision-makers must regularly consider multiple objectives and often have little a priori preference information available to them. Given these constraints, they can improve their chances of achieving success by following a two-phase procedure that first determines the solution space of all efficient (i.e., Pareto-optimal) portfolios and then allows them to interactively explore that space. However, the task of determining the solution space is not trivial: brute-force complete enumeration only works for small instances and the underlying NP-hard problem becomes increasingly demanding as the number of projects grows. Meta-heuristics provide a useful compromise between the amount of computation time necessary and the quality of the approximated solution space. This paper introduces Pareto Ant Colony Optimization as an especially effective meta-heuristic for solving the portfolio selection problem and compares its performance to other heuristic approaches (i.e., Pareto Simulated Annealing and the Non-Dominated Sorting Genetic Algorithm) by means of computational experiments with random instances. Furthermore, we provide a numerical example based on real world data.
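The first phase described above amounts to filtering out dominated portfolios. As a minimal illustration of Pareto-optimality (a brute-force filter on toy data, not the paper's Pareto Ant Colony Optimization; the objective values are invented):

```python
def dominates(a, b):
    """True if objective vector a dominates b (maximization in every objective)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(portfolios):
    """Return the portfolios whose objective vectors are non-dominated."""
    return [p for p in portfolios
            if not any(dominates(q, p) for q in portfolios if q is not p)]

# Toy instance: (expected return, strategic fit) for four candidate portfolios.
candidates = [(10, 3), (8, 5), (9, 2), (7, 4)]
print(pareto_front(candidates))  # [(10, 3), (8, 5)]
```

Brute-force enumeration like this is exactly what stops scaling as the number of projects grows, which is the gap the meta-heuristics fill.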

419 citations


Book ChapterDOI
01 Jan 2004
TL;DR: In the previous three chapters, various classic problem-solving methods, including dynamic programming, branch and bound, and local search algorithms, as well as some modern heuristic methods like simulated annealing and tabu search, were discussed; some of these techniques are deterministic, while others are probabilistic.
Abstract: In the previous three chapters we discussed various classic problem-solving methods, including dynamic programming, branch and bound, and local search algorithms, as well as some modern heuristic methods like simulated annealing and tabu search. Some of these techniques were seen to be deterministic. Essentially you “turn the crank” and out pops the answer. For these methods, given a search space and an evaluation function, some would always return the same solution (e.g., dynamic programming), while others could generate different solutions based on the initial configuration or starting point (e.g., a greedy algorithm or the hill-climbing technique). Still other methods were probabilistic, incorporating random variation into the search for optimal solutions. These methods (e.g., simulated annealing) could return different final solutions even when given the same initial configuration. No two trials with these algorithms could be expected to take exactly the same course. Each trial is much like a person’s fingerprint: although there are broad similarities across fingerprints, no two are exactly alike.
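The deterministic behavior described above can be made concrete with a steepest-ascent hill climber: given the same evaluation function and starting point, it always returns the same local optimum (a generic sketch, not code from the chapter):

```python
def hill_climb(f, x0, neighbors, max_iters=1000):
    """Deterministic steepest-ascent hill climbing: from x0, repeatedly move to
    the best neighbor; stop at a local optimum. Same start -> same answer."""
    x = x0
    for _ in range(max_iters):
        best = max(neighbors(x), key=f, default=x)
        if f(best) <= f(x):
            return x  # no neighbor improves: local optimum reached
        x = best
    return x

# Maximize f(x) = -(x - 7)**2 over the integers, stepping +/-1.
f = lambda x: -(x - 7) ** 2
step = lambda x: [x - 1, x + 1]
print(hill_climb(f, 0, step))  # 7
```

A probabilistic method like simulated annealing would instead accept some worsening moves at random, so two runs from the same start can follow different paths.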

416 citations


Journal ArticleDOI
TL;DR: In this paper, the problem of choosing sites through time to include in a network of biological reserves for species conservation is formulated as a stochastic dynamic integer programming problem, and the authors find that the timing of selections is critical; conservation budgets available up front yield significantly greater biodiversity protection.

335 citations


Journal ArticleDOI
TL;DR: This paper introduces a generic real-time multivehicle truckload pickup and delivery problem that, despite its simplicity, captures most features of the operational problem of a real-world trucking fleet dynamically moving truckloads between different sites in response to continuously arriving customer requests.
Abstract: In this paper we formally introduce a generic real-time multivehicle truckload pickup and delivery problem. The problem includes the consideration of various costs associated with trucks' empty travel distances, jobs' delayed completion times, and job rejections. Although very simple, the problem captures most features of the operational problem of a real-world trucking fleet that dynamically moves truckloads between different sites according to customer requests that arrive continuously. We propose a mixed-integer programming formulation for the offline version of the problem. We then consider and compare five rolling horizon strategies for the real-time version. Two of the policies are based on a repeated reoptimization of various instances of the offline problem, while the others use simpler local (heuristic) rules. One of the reoptimization strategies is new, while the other strategies have recently been tested for similar real-time fleet management problems. The comparison of the policies is done under a general simulation framework. The analysis is systematic and considers varying traffic intensities, varying degrees of advance information, and varying degrees of flexibility for job-rejection decisions. The new reoptimization policy is shown to systematically outperform the others under all these conditions.

306 citations


Journal ArticleDOI
TL;DR: This article proposes a few simple algorithms for achieving the baseline graph-theoretic metric of tolerance to node failures, namely, biconnectivity, and formulates an optimization problem for the creation of a movement plan that minimizes the total distance moved by the robots.
Abstract: Autonomous and semi-autonomous mobile multirobot systems require a wireless communication network in order to communicate with each other and collaboratively accomplish a given task. A multihop communications network that is self-forming, self-healing, and self-organizing is ideally suited for such mobile robot systems that exist in unpredictable and constantly changing environments. However, since every node in a multihop (or ad hoc) network is responsible for forwarding packets to other nodes, the failure of a critical node can result in a network partition. Hence, it is ideal to have an ad hoc network configuration that can tolerate temporary failures while allowing recovery. Since movement of the robot nodes is controllable, it is possible to achieve such fault-tolerant configurations by moving a subset of robots to new locations. In this article we propose a few simple algorithms for achieving the baseline graph theoretic metric of tolerance to node failures, namely, biconnectivity. We formulate an optimization problem for the creation of a movement plan while minimizing the total distance moved by the robots. For one-dimensional networks, we show that the problem of achieving a biconnected network topology can be formulated as a linear program; the latter lends itself to an optimal polynomial time solution. For two-dimensional networks the problem is much harder, and we propose efficient heuristic approaches for achieving biconnectivity. We compare the performance of the proposed algorithms with each other with respect to the total distance moved metric using simulations.
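Biconnectivity, the fault-tolerance criterion targeted above, holds when the graph is connected and has no articulation point (no single node whose failure disconnects it). A standard DFS low-link check can test this (a generic Tarjan-style sketch; the graph encoding and example topologies are illustrative, not the authors' algorithms):

```python
def is_biconnected(adj):
    """Check biconnectivity of an undirected graph given as {node: set of neighbors}:
    connected and free of articulation points."""
    n = len(adj)
    if n < 3:
        return n == 2 and all(adj.values())
    disc, low = {}, {}
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children, articulation = 0, False
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:                      # back edge: update low-link
                low[u] = min(low[u], disc[v])
            else:
                children += 1
                articulation |= dfs(v, u)
                low[u] = min(low[u], low[v])
                if parent is not None and low[v] >= disc[u]:
                    articulation = True        # u separates v's subtree
        if parent is None and children > 1:
            articulation = True                # root with multiple DFS subtrees
        return articulation

    root = next(iter(adj))
    has_cut = dfs(root, None)
    return len(disc) == n and not has_cut

ring = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}  # 4-cycle: biconnected
chain = {0: {1}, 1: {0, 2}, 2: {1}}                  # path: node 1 is a cut vertex
print(is_biconnected(ring), is_biconnected(chain))   # True False
```

In the article's setting this predicate is the target condition; the movement plan decides which robots to relocate so the communication graph satisfies it.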

273 citations


Journal ArticleDOI
TL;DR: A new approach for obtaining machine cells and product families is presented that combines a local search heuristic with a genetic algorithm; it produced solutions with a grouping efficacy at least as good as any results previously reported in the literature.

247 citations


Journal ArticleDOI
TL;DR: This paper proposes both an exact and a heuristic method for the car pooling problem, based on two integer programming formulations of the problem; the heuristic transforms the solution of a Lagrangean lower bound into a feasible solution.
Abstract: Car pooling is a transportation service organized by a large company which encourages its employees to pick up colleagues while driving to/from work to minimize the number of private cars travelling to/from the company site. The car pooling problem consists of defining the subsets of employees that will share each car and the paths the drivers should follow, so that sharing is maximized and the sum of the path costs is minimized. The special case of the car pooling problem where all cars are identical can be modeled as a Dial-a-Ride Problem. In this paper, we propose both an exact and a heuristic method for the car pooling problem, based on two integer programming formulations of the problem. The exact method is based on a bounding procedure that combines three lower bounds derived from different relaxations of the problem. A valid upper bound is obtained by the heuristic method, which transforms the solution of a Lagrangean lower bound into a feasible solution. The computational results show the effectiveness of the proposed methods.

246 citations


Journal ArticleDOI
TL;DR: A new tabu search algorithm is presented that exploits the structure of this type of problem; its performance is compared with that of another recently published heuristic designed for the same purpose.

Journal ArticleDOI
TL;DR: An evolutionary search algorithm for finding benchmark partitions using a multilevel heuristic algorithm to provide an effective crossover and it is demonstrated that this method can achieve extremely high quality partitions significantly better than those found by the state-of-the-art graph-partitioning packages.
Abstract: The graph-partitioning problem is to divide a graph into several pieces so that the number of vertices in each piece is the same within some defined tolerance and the number of cut edges is minimised. Important applications of the problem arise, for example, in parallel processing where data sets need to be distributed across the memory of a parallel machine. Very effective heuristic algorithms have been developed for this problem which run in real-time, but it is not known how good the partitions are since the problem is, in general, NP-complete. This paper reports an evolutionary search algorithm for finding benchmark partitions. A distinctive feature is the use of a multilevel heuristic algorithm to provide an effective crossover. The technique is tested on several example graphs and it is demonstrated that our method can achieve extremely high quality partitions significantly better than those found by the state-of-the-art graph-partitioning packages.

Book ChapterDOI
01 Jan 2004
TL;DR: This chapter deals with a new approach that utilizes a log-dynamic penalty function method in the NES algorithm proposed and tested in the previous chapter.
Abstract: Although evolutionary algorithms have proved useful in general function optimization, they appear particularly apt for addressing nonlinearly constrained optimization problems. Constrained optimization problems present difficulties such as potentially nonconvex or even disjoint feasible regions. Classic linear programming and nonlinear programming methods are often either unsuitable or impractical when applied to these constrained problems [76]. Unfortunately, real-world problems often pose such difficulties. Evolutionary algorithms are global methods that aim at complex objective functions (e.g., non-differentiable or discontinuous), and they can be constructed to cope effectively with these difficulties. There are, however, no well-established guidelines on how to deal with infeasible solutions. Contemporary evolution strategies usually use a “death penalty” heuristic for infeasible solutions. This death penalty offers a few simplifications of the algorithm: for example, there is no need to evaluate infeasible solutions or to compare them with feasible ones. This method may work reasonably well when the feasible search space is convex and constitutes a reasonable part of the whole search space. Otherwise, such an approach has serious limitations. For example, for many search problems where the initial population consists only of infeasible individuals, it might be essential to improve them [101]. Moreover, quite often the system can reach the optimum solution more easily if it is possible to “cross” an infeasible region, especially in non-convex feasible search spaces. This chapter deals with a new approach that utilizes a log-dynamic penalty function method in the NES algorithm [61, 62] proposed and tested in the previous chapter.

Journal ArticleDOI
TL;DR: A new heuristic algorithm for solving the bi-objective vehicle routing and scheduling problem with time windows is presented and has been applied to several benchmark problems.

Book ChapterDOI
TL;DR: In this paper, the problem of allocating space at berth for vessels with the objective of minimizing total weighted flow time is considered; two mathematical formulations are considered, where one is used to develop a tree search procedure while the other is used to develop a lower bound that can speed up the tree search.
Abstract: In this paper, we consider the problem of allocating space at berth for vessels with the objective of minimizing total weighted flow time. Two mathematical formulations are considered where one is used to develop a tree search procedure while the other is used to develop a lower bound that can speed up the tree search procedure. Furthermore, a composite heuristic combining the tree search procedure and pair-wise exchange heuristic is proposed for large size problems. Finally, computational experiments are reported to evaluate the efficiency of the methods.

Journal ArticleDOI
TL;DR: In this article, a new heuristic approach for minimizing the operating path of automated or computer numerically controlled drilling operations is described; the operating path is first defined as a travelling salesman problem.
Abstract: A new heuristic approach for minimizing the operating path of automated or computer numerically controlled drilling operations is described. The operating path is first defined as a travelling salesman problem. The new heuristic, particle swarm optimization, is then applied to the travelling salesman problem. A model for the approximate prediction of drilling time based on the heuristic solution is presented. The new method requires few control variables: it is versatile, robust and easy to use. In a batch production of a large number of items to be drilled such as in printed circuit boards, the travel time of the drilling device is a significant portion of the overall manufacturing process, hence the new particle swarm optimization–travelling salesman problem heuristic can play a role in reducing production costs.
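Casting the drill path as a travelling salesman problem can be illustrated with a simple greedy construction (a nearest-neighbour baseline for comparison only, not the paper's particle swarm optimization heuristic; the hole coordinates are invented):

```python
import math

def tour_length(points, tour):
    """Total cycle length of visiting the points in the given order."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbour_tour(points, start=0):
    """Greedy TSP construction for a drill path: always move the drilling head
    to the closest unvisited hole."""
    unvisited = [j for j in range(len(points)) if j != start]
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(points[tour[-1]], points[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

holes = [(0, 0), (0, 1), (1, 1), (1, 0)]  # unit square of drill holes
t = nearest_neighbour_tour(holes)
print(t, round(tour_length(holes, t), 2))  # [0, 1, 2, 3] 4.0
```

In batch production, shaving even a few percent off the tour length compounds over thousands of boards, which is why a stronger heuristic such as PSO is worth its extra cost.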

Journal ArticleDOI
TL;DR: It is shown that a heuristic reduction of the search space can help the algorithm to find better solutions in a shorter computation time.

Book ChapterDOI
26 Jun 2004
TL;DR: An extension of the heuristic called “particle swarm optimization” (PSO) that is able to deal with multiobjective optimization problems; it uses the concept of Pareto dominance to determine the flight direction of a particle.
Abstract: In this paper, we present an extension of the heuristic called “particle swarm optimization” (PSO) that is able to deal with multiobjective optimization problems. Our approach uses the concept of Pareto dominance to determine the flight direction of a particle and is based on the idea of having a set of sub-swarms instead of single particles. In each sub-swarm, a PSO algorithm is executed and, at some point, the different sub-swarms exchange information. Our proposed approach is validated using several test functions taken from the evolutionary multiobjective optimization literature. Our results indicate that the approach is highly competitive with respect to algorithms representative of the state-of-the-art in evolutionary multiobjective optimization.

01 Jan 2004
TL;DR: DESSCOM as mentioned in this paper is a decision support system for supply chains through object modeling, which enables strategic, tactical, and operational decision making in supply chains, and has two major components: (1) DESSCOM-MODEL, a modeling infrastructure comprising a library of carefully designed generic objects for modeling supply chain elements and dynamic interactions among these elements, and (2) DESSCOM-WORKBENCH, a decision workbench that can potentially include powerful algorithmic and simulation-based solution methods for supply chain decision-making.
Abstract: Numerous algorithms and tools have been deployed in supply chain modeling and problem solving. These are based on stochastic models, mathematical programming models, heuristic techniques, and simulation. Since different decision problems in supply chains entail different approaches to be used for modeling and problem solving, there is a need for a unified approach to modeling supply chains so that any required representation can be created in a rapid and flexible way. In this paper, we develop a decision support system DESSCOM (decision support for supply chains through object modeling) which enables strategic, tactical, and operational decision making in supply chains. DESSCOM has two major components: (1) DESSCOM-MODEL, a modeling infrastructure comprising a library of carefully designed generic objects for modeling supply chain elements and dynamic interactions among these elements, and (2) DESSCOM-WORKBENCH, a decision workbench that can potentially include powerful algorithmic and simulation-based solution methods for supply chain decision-making. Through DESSCOM-MODEL, faithful models of any given supply chain can be created rapidly at any desired level of abstraction. Given a supply chain decision problem to be solved, the object oriented models created at the right level of detail can be transformed into problem formulations that can then be solved using an appropriate strategy from DESSCOM-WORKBENCH. We have designed and implemented a prototype of DESSCOM. We provide a real-world case study of a liquid petroleum gas supply chain to demonstrate the use of DESSCOM to model supply chains and enable decision-making at various levels.

Proceedings ArticleDOI
16 Sep 2004
TL;DR: This paper presents a new classifier, HARMONY, which directly mines the final set of classification rules and uses an instance-centric rule-generation approach; it outperforms many well-known classifiers in terms of both accuracy and computational efficiency, and scales well w.r.t. the database size.
Abstract: Many studies have shown that rule-based classifiers perform well in classifying categorical and sparse high-dimensional databases. However, a fundamental limitation with many rule-based classifiers is that they find the rules by employing various heuristic methods to prune the search space, and select the rules based on the sequential database covering paradigm. As a result, the final set of rules that they use may not be the globally best rules for some instances in the training database. To make matters worse, these algorithms fail to fully exploit some more effective search space pruning methods in order to scale to large databases. In this paper we present a new classifier, HARMONY, which directly mines the final set of classification rules. HARMONY uses an instance-centric rule-generation approach and can assure that, for each training instance, one of the highest-confidence rules covering this instance is included in the final rule set, which helps in improving the overall accuracy of the classifier. By introducing several novel search strategies and pruning methods into the rule discovery process, HARMONY also has high efficiency and good scalability. Our thorough performance study with some large text and categorical databases has shown that HARMONY outperforms many well-known classifiers in terms of both accuracy and computational efficiency, and scales well w.r.t. the database size.

Journal ArticleDOI
TL;DR: An enhanced 0-1 mixed-integer linear programming formulation based on the cell-transmission model is proposed for the traffic signal optimization problem, which has several features that are currently unavailable in other existing models developed with a similar approach.
Abstract: An enhanced 0-1 mixed-integer linear programming formulation based on the cell-transmission model is proposed for the traffic signal optimization problem. This formulation has several features that are currently unavailable in other existing models developed with a similar approach, including the components for handling the number of stops, fixed or dynamic cycle length and splits, and lost time. The problem of unintended vehicle holding, which is common in analytical models, is explicitly treated. The formulation can be utilized in developing strategies for adaptive traffic-control systems. It can also be used as a benchmark for examining the convergence behavior of heuristic algorithms based on the genetic algorithm, fuzzy logic, neural networks, or other approaches that are commonly used in this field. The discussion of extending the proposed model to capture traffic signal preemption in the presence of emergency vehicles is given. In terms of computational efficiency, the proposed formulation has the least number of binary integers as compared with other existing formulations that were developed with the same approach.

Proceedings ArticleDOI
Bowei Xi, Zhen Liu, Mukund Raghavachari, Cathy H. Xia, Li Zhang
17 May 2004
TL;DR: This work proposes a smart hill-climbing algorithm using ideas of importance sampling and Latin Hypercube Sampling and demonstrates that the algorithm is more efficient than and superior to traditional heuristic methods.
Abstract: The overwhelming success of the Web as a mechanism for facilitating information retrieval and for conducting business transactions has led to an increase in the deployment of complex enterprise applications. These applications typically run on Web Application Servers, which assume the burden of managing many tasks, such as concurrency, memory management, database access, etc., required by these applications. The performance of an Application Server depends heavily on appropriate configuration. Configuration is a difficult and error-prone task due to the large number of configuration parameters and complex interactions between them. We formulate the problem of finding an optimal configuration for a given application as a black-box optimization problem. We propose a smart hill-climbing algorithm using ideas of importance sampling and Latin Hypercube Sampling (LHS). The algorithm is efficient in both searching and random sampling. It consists of estimating a local function, and then, hill-climbing in the steepest descent direction. The algorithm also learns from past searches and restarts in a smart and selective fashion using the idea of importance sampling. We have carried out extensive experiments with an on-line brokerage application running in a WebSphere environment. Empirical results demonstrate that our algorithm is more efficient than and superior to traditional heuristic methods.
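Latin Hypercube Sampling, one of the two sampling ideas mentioned above, stratifies each parameter axis so that every stratum is sampled exactly once. A minimal sketch (illustrative only, not the authors' implementation):

```python
import random

def latin_hypercube(n_samples, n_dims, rng=random.Random(0)):
    """Latin Hypercube Sampling over [0, 1)^n_dims: each axis is split into
    n_samples equal strata, and every stratum is hit exactly once per axis."""
    points = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)                              # random pairing across axes
        for i, s in enumerate(strata):
            points[i][d] = (s + rng.random()) / n_samples  # jitter within stratum
    return points

pts = latin_hypercube(5, 2)
# Projection onto each axis covers all five strata [0, .2), [.2, .4), ...
print(sorted(int(p[0] * 5) for p in pts))  # [0, 1, 2, 3, 4]
```

Compared with plain random sampling of a configuration space, this guarantees coverage along every individual parameter even with few samples, which is what makes it attractive for expensive black-box evaluations.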

Journal ArticleDOI
TL;DR: In this article, a bilevel programming model for transit network design problem is presented, in which the upper model is a normal transit networks design model, and the lower model are a transit equilibrium assignment model.
Abstract: In this paper, a bilevel programming model for transit network design problem is presented, in which the upper model is a normal transit network design model, and the lower model is a transit equilibrium assignment model. A heuristic solution algorithm based on sensitivity analysis is designed for the model proposed. Finally, a simple numerical example is given to illustrate the application of the model and algorithm and some conclusions are drawn.

Journal ArticleDOI
TL;DR: In this paper, a simulated annealing algorithm is used to minimize both the investment cost for feeder and substations, and the power loss cost, and a set of numerical results are provided.
Abstract: The planning of electrical power distribution systems strongly influences the supply of electrical power to consumers. The problem is to minimize both the investment cost for feeders and substations, and the power-loss cost. When the substations can already provide enough power flow, the problem reduces to minimizing the total cost related to the feeders and their power loss. The difficulty of dealing with this problem increases rapidly with its size (i.e., the number of customers). It seems appropriate to use heuristic methods to obtain suboptimal solutions, since exact methods are too time-consuming. In this paper, a simulated annealing algorithm is used. A set of numerical results is provided.
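The simulated annealing loop used for such planning problems follows a standard pattern: accept any improving move, accept worsening moves with a temperature-dependent probability, and cool down. A generic sketch on a toy one-dimensional objective (the distribution-network cost function is stood in for by an arbitrary multimodal function; all parameters are illustrative):

```python
import math
import random

def simulated_annealing(cost, x0, neighbor, t0=10.0, cooling=0.99, iters=5000,
                        rng=random.Random(1)):
    """Generic simulated annealing: always accept improvements, accept uphill
    moves with probability exp(-delta / T), and cool T geometrically."""
    x, best = x0, x0
    t = t0
    for _ in range(iters):
        y = neighbor(x, rng)
        delta = cost(y) - cost(x)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = y                       # move accepted (downhill, or lucky uphill)
            if cost(x) < cost(best):
                best = x                # track the best solution seen so far
        t *= cooling
    return best

# Toy stand-in for a planning cost with many local minima.
cost = lambda x: (x - 3) ** 2 + 4 * math.sin(5 * x) + 4
move = lambda x, rng: x + rng.uniform(-0.5, 0.5)
x = simulated_annealing(cost, 0.0, move)
print(round(x, 2), round(cost(x), 2))
```

The uphill acceptances early on (when T is large) are what let the search escape the local minima that trap a pure descent method.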

Journal ArticleDOI
TL;DR: A decision support system, DESSCOM (decision support for supply chains through object modeling), is developed which enables strategic, tactical, and operational decision making in supply chains at various levels.

Journal ArticleDOI
TL;DR: A modified genetic algorithm using quasi-random sequences in the initial population is tested by solving a large number of continuous benchmark problems from the literature and the numerical results are compared to those of a traditional implementation using pseudorandom numbers.
Abstract: The selection of the initial population in a population-based heuristic optimization method is important, since it affects the search for several iterations and often has an influence on the final solution. If no a priori information about the optima is available, the initial population is often selected randomly using pseudorandom numbers. Usually, however, it is more important that the points are as evenly distributed as possible than that they imitate random points. In this paper, we study the use of quasi-random sequences in the initial population of a genetic algorithm. Sample points in a quasi-random sequence are designed to have good distribution properties. Here a modified genetic algorithm using quasi-random sequences in the initial population is tested by solving a large number of continuous benchmark problems from the literature. The numerical results of two implementations of genetic algorithms using different quasi-random sequences are compared to those of a traditional implementation using pseudorandom numbers. The results obtained are promising.
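A Halton sequence is one common quasi-random sequence with the even-distribution property described above (the abstract does not name which sequences were compared, so this choice is illustrative). A minimal sketch of seeding an initial population from it:

```python
def halton(index, base):
    """The index-th element of the one-dimensional Halton sequence in the given base."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)  # reversed base-`base` digits after the radix point
        index //= base
    return result

def quasi_random_population(size, dims, bases=(2, 3, 5, 7, 11, 13)):
    """Initial GA population from a Halton sequence: points in [0, 1)^dims that
    are far more evenly spread than pseudorandom draws."""
    return [[halton(i + 1, bases[d]) for d in range(dims)] for i in range(size)]

pop = quasi_random_population(4, 2)
print(pop[0])  # [0.5, 0.3333333333333333]
```

Each coordinate uses a distinct prime base so that the strata visited along different axes are uncorrelated; pseudorandom initialization, by contrast, can leave large regions of the search space empty for small populations.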

Journal ArticleDOI
TL;DR: This paper splits the set of available locations of the ship into different subsets and forces the stowage of containers within them depending on their features and handling operations, and gives a basic 0-1 Linear Programming model for MBPP.
Abstract: In this paper we address the so-called master bay plan problem (MBPP), that is, the problem of finding optimal plans for stowing containers into a containership with respect to a set of structural and operational restrictions. We describe such constraints in detail and give a basic 0–1 Linear Programming model for MBPP. Subsequently, we present a heuristic approach that enables us to relax some relations of the model and give some prestowage rules for solving this combinatorial optimization problem. In particular, we split the set of available locations of the ship into different subsets and force the stowage of containers within them depending on their features and handling operations. A validation of the proposed approach is given, together with an analysis of real instances of the problem coming from a maritime terminal located in the city of Genoa.

Journal ArticleDOI
TL;DR: The practical message of this paper is that the greedy algorithm should be used with great care, since for many optimization problems its usage seems impractical even for generating a starting solution (that will be improved by a local search or another heuristic).

Journal ArticleDOI
TL;DR: It is proved that the problem of finding a broadcast tree whose energy cost is minimized is NP-hard, and three heuristic algorithms are proposed, namely, a shortest path tree heuristic, a greedy heuristic, and a node-weighted Steiner tree-based heuristic, all of which are centralized algorithms.
Abstract: In this paper, we discuss energy efficient broadcast in ad hoc wireless networks. The problem of our concern is: given an ad hoc wireless network, find a broadcast tree such that the energy cost of the broadcast tree is minimized. Each node in the network is assumed to have a fixed level of transmission power. We first prove that the problem is NP-hard and propose three heuristic algorithms, namely, shortest path tree heuristic, greedy heuristic, and node weighted Steiner tree-based heuristic, which are centralized algorithms. The approximation ratio of the node weighted Steiner tree-based heuristic is proven to be (1 + 2 ln(n - 1)). Extensive simulations have been conducted and the results have demonstrated the efficiency of the proposed algorithms.

Journal ArticleDOI
TL;DR: Five rather different multistart tabu search strategies for the unconstrained binary quadratic optimization problem are described and experimentally compared: a random restart procedure, an application of a deterministic heuristic to specially constructed subproblems, an application of a randomized procedure to the full problem, a constructive procedure using tabu search adaptive memory, and an approach based on solving perturbed problems.
Abstract: This paper describes and experimentally compares five rather different multistart tabu search strategies for the unconstrained binary quadratic optimization problem: a random restart procedure, an application of a deterministic heuristic to specially constructed subproblems, an application of a randomized procedure to the full problem, a constructive procedure using tabu search adaptive memory, and an approach based on solving perturbed problems. In the solution improvement phase, a modification of a standard tabu search implementation is used. A computational trick applied to this modification – mapping the current solution to the zero vector – allowed us to significantly reduce the time complexity of the search. Computational results are provided for the 25 largest problem instances from the OR-Library and, in addition, for 18 randomly generated larger and denser problems. For 9 instances from the OR-Library, new best solutions were found.
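The improvement phase of such methods is typically a one-flip tabu search over binary vectors. A minimal sketch (maximizing x^T Q x on a toy instance; the tenure, aspiration rule, and iteration budget are illustrative, and neither the five multistart strategies nor the zero-vector mapping trick is included):

```python
import random

def tabu_search_ubqo(Q, iters=200, tenure=3, rng=random.Random(0)):
    """One-flip tabu search maximizing x^T Q x over binary vectors x:
    a minimal sketch of the improvement phase only."""
    n = len(Q)
    value = lambda x: sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
    x = [rng.randint(0, 1) for _ in range(n)]
    best, best_val = x[:], value(x)
    tabu = [0] * n                    # iteration until which flipping bit i is forbidden
    for it in range(iters):
        flips = [(value(x[:i] + [1 - x[i]] + x[i + 1:]), i) for i in range(n)]
        # Admissible: non-tabu, or tabu but beating the incumbent (aspiration).
        admissible = [(v, i) for v, i in flips if tabu[i] <= it or v > best_val]
        v, i = max(admissible) if admissible else max(flips)
        x[i] = 1 - x[i]               # take the best admissible flip, even if worsening
        tabu[i] = it + tenure
        if v > best_val:
            best, best_val = x[:], v
    return best, best_val

print(tabu_search_ubqo([[1, -2], [-2, 3]]))  # ([0, 1], 3)
```

Recomputing the objective from scratch for every candidate flip, as done here for clarity, is exactly the O(n^2)-per-move cost that incremental tricks like the paper's zero-vector mapping are designed to avoid.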