
Showing papers on "Simulated annealing published in 2004"


Journal ArticleDOI
TL;DR: Some of the work undertaken in the use of metaheuristic search techniques for the automatic generation of test data is surveyed, discussing possible new future directions of research for each of its different individual areas.
Abstract: The use of metaheuristic search techniques for the automatic generation of test data has been a burgeoning interest for many researchers in recent years. Previous attempts to automate the test generation process have been limited, having been constrained by the size and complexity of software, and the basic fact that in general, test data generation is an undecidable problem. Metaheuristic search techniques offer much promise in regard to these problems. Metaheuristic search techniques are high-level frameworks, which utilise heuristics to seek solutions for combinatorial problems at a reasonable computational cost. To date, metaheuristic search techniques have been applied to automate test data generation for structural and functional testing; the testing of grey-box properties, for example safety constraints; and also non-functional properties, such as worst-case execution time. This paper surveys some of the work undertaken in this field, discussing possible new future directions of research for each of its different individual areas.

1,351 citations


Journal ArticleDOI
TL;DR: Modifications are made to the ACO algorithm used to solve the traditional traveling salesman problem in order to allow the search of the multiple routes of the VRP and the use of multiple ant colonies is found to provide a comparatively competitive solution technique especially for larger problems.

678 citations


Journal ArticleDOI
TL;DR: This work presents a comparison of 25 methods, ranging from the classical Johnson's algorithm or dispatching rules to the most recent metaheuristics, including tabu search, simulated annealing, genetic algorithms, iterated local search and hybrid techniques, for the well-known permutation flowshop problem with the makespan criterion.

544 citations


Journal ArticleDOI
TL;DR: This paper presents an ant colony optimization methodology for optimally clustering N objects into K clusters which employs distributed agents which mimic the way real ants find a shortest path from their nest to food source and back.

496 citations


Journal ArticleDOI
TL;DR: In this article, the authors introduce Pareto Ant Colony Optimization as an especially effective meta-heuristic for solving the portfolio selection problem and compare its performance to other heuristic approaches by means of computational experiments with random instances.
Abstract: Selecting the “best” project portfolio out of a given set of investment proposals is a common and often critical management issue. Decision-makers must regularly consider multiple objectives and often have little a priori preference information available to them. Given these constraints, they can improve their chances of achieving success by following a two-phase procedure that first determines the solution space of all efficient (i.e., Pareto-optimal) portfolios and then allows them to interactively explore that space. However, the task of determining the solution space is not trivial: brute-force complete enumeration only works for small instances and the underlying NP-hard problem becomes increasingly demanding as the number of projects grows. Meta-heuristics provide a useful compromise between the amount of computation time necessary and the quality of the approximated solution space. This paper introduces Pareto Ant Colony Optimization as an especially effective meta-heuristic for solving the portfolio selection problem and compares its performance to other heuristic approaches (i.e., Pareto Simulated Annealing and the Non-Dominated Sorting Genetic Algorithm) by means of computational experiments with random instances. Furthermore, we provide a numerical example based on real world data.

419 citations


Book ChapterDOI
01 Jan 2004
TL;DR: In the previous three chapters, various classic problem-solving methods, including dynamic programming, branch and bound, and local search algorithms, as well as some modern heuristic methods like simulated annealing and tabu search, were seen to be deterministic.
Abstract: In the previous three chapters we discussed various classic problem-solving methods, including dynamic programming, branch and bound, and local search algorithms, as well as some modern heuristic methods like simulated annealing and tabu search. Some of these techniques were seen to be deterministic. Essentially you “turn the crank” and out pops the answer. For these methods, given a search space and an evaluation function, some would always return the same solution (e.g., dynamic programming), while others could generate different solutions based on the initial configuration or starting point (e.g., a greedy algorithm or the hill-climbing technique). Still other methods were probabilistic, incorporating random variation into the search for optimal solutions. These methods (e.g., simulated annealing) could return different final solutions even when given the same initial configuration. No two trials with these algorithms could be expected to take exactly the same course. Each trial is much like a person’s fingerprint: although there are broad similarities across fingerprints, no two are exactly alike.
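The contrast drawn above between deterministic and probabilistic methods can be made concrete with a minimal simulated annealing sketch; the toy objective, parameter names, and geometric cooling schedule below are our own illustrative choices, not taken from the chapter:

```python
import math
import random

def simulated_annealing(f, x0, t0=1.0, cooling=0.95, steps=500,
                        step_size=0.5, seed=None):
    """Minimize a 1-D function f by simulated annealing (toy sketch)."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        cand = x + rng.uniform(-step_size, step_size)
        fc = f(cand)
        # Metropolis rule: improvements are always accepted; a worse move
        # is accepted with probability exp(-(fc - fx) / t).
        if fc <= fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # lower the temperature geometrically
    return best, fbest

# Two runs with different seeds take different courses, yet both settle
# near the minimizer of (x - 1)^2 -- the "fingerprint" behaviour above.
run_a = simulated_annealing(lambda x: (x - 1.0) ** 2, x0=5.0, seed=0)
run_b = simulated_annealing(lambda x: (x - 1.0) ** 2, x0=5.0, seed=1)
```

No two trials follow the same trajectory, but with a slow enough cooling schedule both reliably end near the optimum.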

416 citations


Journal ArticleDOI
TL;DR: A two-stage hybrid algorithm that minimizes the number of vehicles, using simulated annealing, and minimizes travel cost by using a large neighborhood search that may relocate a large number of customers is proposed.
Abstract: The vehicle routing problem with time windows is a hard combinatorial optimization problem that has received considerable attention in the last decades. This paper proposes a two-stage hybrid algorithm for this transportation problem. The algorithm first minimizes the number of vehicles, using simulated annealing. It then minimizes travel cost by using a large neighborhood search that may relocate a large number of customers. Experimental results demonstrate the effectiveness of the algorithm, which has improved 10 (17%) of the 56 best published solutions to the Solomon benchmarks, while matching or improving the best solutions in 46 problems (82%). More important perhaps, the algorithm is shown to be very robust. With a fixed configuration of its parameters, it returns either the best published solutions (or improvements thereof) or solutions very close in quality on all Solomon benchmarks. Very preliminary results on the extended Solomon benchmarks are also given.

369 citations


Book ChapterDOI
TL;DR: It is shown that real networks are clustered in a well-defined domain of the entropy-noise space and that optimally heterogeneous nets actually cluster around the same narrow domain, suggesting that strong constraints actually operate on the possible universe of complex networks.
Abstract: Complex networks are characterized by highly heterogeneous distributions of links, often pervading the presence of key properties such as robustness under node removal. Several correlation measures have been defined in order to characterize the structure of these nets. Here we show that mutual information, noise and joint entropies can be properly defined on a static graph. These measures are computed for a number of real networks and analytically estimated for some simple standard models. It is shown that real networks are clustered in a well-defined domain of the entropy-noise space. By using simulated annealing optimization, it is shown that optimally heterogeneous nets actually cluster around the same narrow domain, suggesting that strong constraints actually operate on the possible universe of complex networks. The evolutionary implications are discussed.

311 citations


Journal ArticleDOI
TL;DR: Various novel heuristic stochastic search techniques have been proposed for optimization of proportional–integral–derivative gains used in Sugeno fuzzy logic based automatic generation control of multi-area thermal generating plants.

293 citations


Journal ArticleDOI
TL;DR: Numerical results using customized local search, simulated annealing, tabu search and genetic algorithm heuristics show that problems of practically relevant size can be solved quickly.

293 citations


Journal ArticleDOI
TL;DR: A new graph-based strategy for the detection of spatial clusters of arbitrary geometric form in a map of geo-referenced populations and cases, based on the likelihood ratio test previously formulated by Kulldorff and Nagarwalla for circular clusters is proposed.

Journal ArticleDOI
TL;DR: The ACS methodology is coupled with a conventional distribution system load-flow algorithm and adapted to solve the primary distribution system planning problem, obtaining improved results with significant reductions in the solution time.
Abstract: The planning problem of electrical power distribution networks, stated as a mixed nonlinear integer optimization problem, is solved using the ant colony system algorithm (ACS). The behavior of real ants has inspired the development of the ACS algorithm, an improved version of the ant system (AS) algorithm, which reproduces the technique used by ants to construct their food recollection routes from their nest, and where a set of artificial ants cooperate to find the best solution through the interchange of the information contained in the pheromone deposits of the different trajectories. This metaheuristic approach has proven to be very robust when applied to global optimization problems of a combinatorial nature, such as the traveling salesman and the quadratic assignment problem, and is favorably compared to other solution approaches such as genetic algorithms (GAs) and simulated annealing techniques. In this work, the ACS methodology is coupled with a conventional distribution system load-flow algorithm and adapted to solve the primary distribution system planning problem. The application of the proposed methodology to two real cases is presented: a 34.5-kV system with 23 nodes from the oil industry and a more complex 10-kV electrical distribution system with 201 nodes that feeds an urban area. The proposed approach stands out positively when compared to GAs, obtaining improved results with significant reductions in the solution time. The technique is shown to be a flexible and powerful tool for distribution system planning engineers.

Journal ArticleDOI
TL;DR: This paper presents two variants of local search where the search time can be set as an input parameter: a time-predefined variant of simulated annealing and an adaptation of the “great deluge” method.
Abstract: In recent years the processing speed of computers has increased dramatically. This in turn has allowed search algorithms to execute more iterations in a given amount of real-time. Does this necessarily always lead to an improvement in the quality of final solutions? This paper is devoted to the investigation of that question. We present two variants of local search where the search time can be set as an input parameter. These two approaches are: a time-predefined variant of simulated annealing and an adaptation of the “great deluge” method. We present a comprehensive series of experiments which show that these approaches significantly outperform the previous best results (in terms of solution quality) on a range of benchmark exam timetabling problems. Of course, there is a price to pay for such better results: increased execution time. We discuss the impact of this trade-off between quality and execution time. In particular we discuss issues involving the proper estimation of the algorithm's execution tim...
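The "great deluge" idea mentioned above is easy to sketch for minimization: a move is accepted whenever its cost lies below a "water level" that is lowered at a fixed rate, so the total search time is set in advance. The function names, the linear lowering rule, and the toy objective are our assumptions, not the paper's exact formulation:

```python
import random

def great_deluge(f, x0, neighbor, steps=1000, seed=None):
    """Time-predefined great-deluge search for minimization (sketch;
    assumes the initial cost f(x0) is positive)."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    level = fx               # initial water level
    decay = level / steps    # linear rate: level reaches 0 at the last step
    for _ in range(steps):
        cand = neighbor(x, rng)
        fc = f(cand)
        if fc <= level:      # accept any move that stays below the water level
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        level -= decay
    return best, fbest

result = great_deluge(lambda x: (x - 2.0) ** 2, x0=10.0,
                      neighbor=lambda x, rng: x + rng.uniform(-1.0, 1.0),
                      steps=1000, seed=0)
```

Because the deadline is encoded in the decay rate, granting more steps directly trades execution time for solution quality, which is exactly the trade-off the paper investigates.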

Journal ArticleDOI
TL;DR: A new method for the extraction of roads from remotely sensed images is proposed: under the assumption that roads form a thin network in the image, the network is approximated by connected line segments obtained by minimizing an energy function.
Abstract: In this paper we propose a new method for the extraction of roads from remotely sensed images. Under the assumption that roads form a thin network in the image, we approximate such a network by connected line segments. To perform this task, we construct a point process able to simulate and detect thin networks. The segments have to be connected, in order to form a line-network. Aligned segments are favored whereas superposition is penalized. These constraints are enforced by the interaction model (called the Candy model). The specific properties of the road network in the image are described by the data term. This term is based on statistical hypothesis tests. The proposed probabilistic model can be written within a Gibbs point process framework. The estimate for the network is found by minimizing an energy function. In order to avoid local minima, we use a simulated annealing algorithm, based on a Monte Carlo dynamics (RJMCMC) for finite point processes. Results are shown on SPOT, ERS and aerial images.

Journal ArticleDOI
TL;DR: A new global optimization method for black-box functions is proposed, based on a novel mode-pursuing sampling method that systematically generates more sample points in the neighborhood of the function mode while statistically covering the entire search space.
Abstract: The presence of black-box functions in engineering design, which are usually computation-intensive, demands efficient global optimization methods. This article proposes a new global optimization method for black-box functions. The global optimization method is based on a novel mode-pursuing sampling method that systematically generates more sample points in the neighborhood of the function mode while statistically covering the entire search space. Quadratic regression is performed to detect the region containing the global optimum. The sampling and detection process iterates until the global optimum is obtained. Through intensive testing, this method is found to be effective, efficient, robust, and applicable to both continuous and discontinuous functions. It supports simultaneous computation and applies to both unconstrained and constrained optimization problems. Because it does not call any existing global optimization tool, it can be used as a standalone global optimization method for inexpensive probl...

Journal ArticleDOI
TL;DR: Simulation results of typical complex function optimization show that CSA improves the convergence and is efficient, applicable and easy to implement.
Abstract: Simulated annealing (SA) has been applied with success to many numerical and combinatorial optimization problems in recent years. SA has a rather slow convergence rate, however, on some function optimization problems. In this paper, by introducing chaotic systems to simulated annealing, we propose an optimization algorithm named chaos simulated annealing (CSA). The distinctions between CSA and SA are chaotic initialization and chaotic sequences replacing the Gaussian distribution. Simulation results of typical complex function optimization show that CSA improves the convergence and is efficient, applicable and easy to implement. In addition, we discuss the advantages of CSA, and show the reasons why CSA performs better than SA.
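A minimal sketch of the idea, assuming a logistic map as the chaotic system and our own parameter names (the paper's exact update rules may differ): chaotic sequences replace the random draws for both the proposal and the acceptance test, so the whole search is deterministic given the initial chaotic state.

```python
import math

def logistic_map(z):
    """Chaotic logistic map on (0, 1), used here instead of a random generator."""
    return 4.0 * z * (1.0 - z)

def chaos_simulated_annealing(f, x0, z0=0.3, t0=1.0, cooling=0.95,
                              steps=400, step_size=1.0):
    """CSA sketch: chaotic sequences drive both the candidate move and
    the acceptance draw; fully deterministic given z0."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t, z = t0, z0
    for _ in range(steps):
        z = logistic_map(z)                     # chaotic proposal driver
        cand = x + step_size * (2.0 * z - 1.0)  # map (0, 1) to (-1, 1)
        fc = f(cand)
        z = logistic_map(z)                     # chaotic acceptance draw
        if fc <= fx or z < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

best, fbest = chaos_simulated_annealing(lambda x: (x + 3.0) ** 2, x0=0.0)
```

The initial state z0 should avoid the map's fixed points (0 and 0.75), otherwise the "chaotic" sequence degenerates to a constant.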

Journal ArticleDOI
TL;DR: This research proposes a simulated annealing approach to minimize makespan for a single batch-processing machine and outperforms CPLEX on all the instances.

Journal ArticleDOI
TL;DR: This paper presents an evolutionary programming-based tabu search method for the short-term unit commitment problem, finding the generation schedule that minimizes total operating cost for the next H hours subject to a variety of constraints.
Abstract: This paper presents a new approach to solving the short-term unit commitment problem using an evolutionary programming-based tabu search (TS) method. The objective of this paper is to find the generation scheduling such that the total operating cost can be minimized, when subjected to a variety of constraints. This also means that it is desirable to find the optimal generating unit commitment in the power system for the next H hours. Evolutionary programming, a global optimization technique for solving the unit commitment problem, operates on a system designed to encode each unit's operating schedule with regard to its minimum up/down time. The unit commitment schedule is coded as a string of symbols, and an initial population of parent solutions is generated at random. Each schedule is formed by committing all of the units according to their initial status ("flat start"), and the parents are obtained from a predefined set of solutions (i.e., each and every solution is adjusted to meet the requirements). Then, a random decommitment is carried out with respect to the units' minimum downtimes, and TS improves the status by avoiding entrapment in local minima. The best population is selected by an evolutionary strategy. A case study of the Neyveli Thermal Power Station (NTPS) Unit-II in India demonstrates the effectiveness of the proposed approach; extensive studies have also been performed for different power systems consisting of 10, 26, and 34 generating units. Numerical results compare the cost solutions and computation time obtained by the evolutionary programming method with those of conventional methods such as dynamic programming, Lagrangian relaxation, simulated annealing, and tabu search in reaching proper unit commitment.

Proceedings ArticleDOI
Bowei Xi, Zhen Liu, Mukund Raghavachari, Cathy H. Xia, Li Zhang
17 May 2004
TL;DR: This work proposes a smart hill-climbing algorithm using ideas of importance sampling and Latin Hypercube Sampling and demonstrates that the algorithm is more efficient than and superior to traditional heuristic methods.
Abstract: The overwhelming success of the Web as a mechanism for facilitating information retrieval and for conducting business transactions has led to an increase in the deployment of complex enterprise applications. These applications typically run on Web Application Servers, which assume the burden of managing many tasks, such as concurrency, memory management, database access, etc., required by these applications. The performance of an Application Server depends heavily on appropriate configuration. Configuration is a difficult and error-prone task due to the large number of configuration parameters and complex interactions between them. We formulate the problem of finding an optimal configuration for a given application as a black-box optimization problem. We propose a smart hill-climbing algorithm using ideas of importance sampling and Latin Hypercube Sampling (LHS). The algorithm is efficient in both searching and random sampling. It consists of estimating a local function, and then, hill-climbing in the steepest descent direction. The algorithm also learns from past searches and restarts in a smart and selective fashion using the idea of importance sampling. We have carried out extensive experiments with an on-line brokerage application running in a WebSphere environment. Empirical results demonstrate that our algorithm is more efficient than and superior to traditional heuristic methods.
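The sampling ideas above can be sketched as follows. This is our own simplified reading, not the authors' exact algorithm: Latin Hypercube Sampling picks starting points that cover every stratum of each parameter range, and a plain hill climber with a shrinking step then refines the best start. All names and parameters are illustrative.

```python
import random

def latin_hypercube(n, bounds, rng):
    """n samples over the box 'bounds' ([(lo, hi), ...]): each dimension is
    cut into n strata and each stratum is used exactly once per dimension."""
    dims = []
    for lo, hi in bounds:
        strata = list(range(n))
        rng.shuffle(strata)
        w = (hi - lo) / n
        dims.append([lo + (s + rng.random()) * w for s in strata])
    return [list(p) for p in zip(*dims)]

def smart_hill_climb(f, bounds, n_start=10, iters=60, seed=0):
    """LHS start points + coordinate hill climbing with a shrinking step."""
    rng = random.Random(seed)
    starts = latin_hypercube(n_start, bounds, rng)
    x = min(starts, key=f)          # best LHS sample as the starting point
    fx = f(x)
    step = [(hi - lo) / 4.0 for lo, hi in bounds]
    for _ in range(iters):
        improved = False
        for d in range(len(bounds)):
            for delta in (step[d], -step[d]):
                cand = list(x)
                cand[d] = min(max(cand[d] + delta, bounds[d][0]), bounds[d][1])
                fc = f(cand)
                if fc < fx:
                    x, fx = cand, fc
                    improved = True
        if not improved:
            step = [s / 2.0 for s in step]   # refine around the current point
    return x, fx

x_best, f_best = smart_hill_climb(
    lambda p: (p[0] - 0.5) ** 2 + (p[1] - 0.5) ** 2,
    bounds=[(0.0, 1.0), (0.0, 1.0)])
```

The stratification is what distinguishes LHS from plain uniform sampling: no region of any single parameter's range is left unexplored by the initial design.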

Journal ArticleDOI
01 Oct 2004
TL;DR: This work combines the best features of both SSA and CSA, thereby proposing a new approach for solving optimization problems, i.e., stochastic chaotic simulated annealing, by using a noisy chaotic neural network and shows the effectiveness with two difficult combinatorial optimization problems.
Abstract: Recently Chen and Aihara have demonstrated both experimentally and mathematically that their chaotic simulated annealing (CSA) has better search ability for solving combinatorial optimization problems compared to both the Hopfield-Tank approach and stochastic simulated annealing (SSA). However, CSA may not find a globally optimal solution no matter how slowly annealing is carried out, because the chaotic dynamics are completely deterministic. In contrast, SSA tends to settle down to a global optimum if the temperature is reduced sufficiently slowly. Here we combine the best features of both SSA and CSA, thereby proposing a new approach for solving optimization problems, i.e., stochastic chaotic simulated annealing, by using a noisy chaotic neural network. We show the effectiveness of this new approach with two difficult combinatorial optimization problems, i.e., a traveling salesman problem and a channel assignment problem for cellular mobile communications.

Journal ArticleDOI
TL;DR: The proposed ADHDE method utilizes the concept of ant colony search to search the proper mutation operator to accelerate searching out the global solution and is superior to some other methods in terms of solution power loss and costs.
Abstract: This paper presents an ant direction hybrid differential evolution (ADHDE) with integer programming which is effective and efficient for solving large capacitor placement problems in distribution systems. The use of a proper mutation operator in hybrid differential evolution (HDE) can accelerate the search for a global solution; however, the choice of mutation operator depends on the problem. In this study, the ADHDE method utilizes the concept of ant colony search to select the proper mutation operator and thereby accelerate the search for the global solution. Various-scale application systems are used to compare the performance of the proposed method with HDE, simulated annealing, and the ant system. Numerical results show that the performance of the proposed ADHDE method is better than the other methods. Also, the ADHDE method is superior to some other methods in terms of solution power loss and costs.

Journal ArticleDOI
TL;DR: In this paper, a simulated annealing algorithm is used to minimize both the investment cost for feeders and substations, and the power-loss cost, and a set of numerical results is provided.
Abstract: The planning of electrical power distribution systems strongly influences the supply of electrical power to consumers. The problem is to minimize both the investment cost for feeders and substations, and the power-loss cost. When the substations can already provide enough power flow, the problem reduces to minimizing the total cost related to the feeders and their power loss. The difficulty of dealing with this problem increases rapidly with its size (i.e., the number of customers). It seems appropriate to use heuristic methods to obtain suboptimal solutions, since exact methods are too time-consuming. In this paper, a simulated annealing algorithm is used. A set of numerical results is provided.

01 Jan 2004
TL;DR: The findings of this study show that SCE is computationally much faster than other widely used algorithms such as GAs, Simulated Annealing, GLOBE and the Shuffled Frog Leaping Algorithm, and that SCE is a potential alternative optimization algorithm for solving water distribution network problems.
Abstract: EPANET, a widely used water distribution network simulation model, is used in this study to deal with both steady-state and extended-period simulation and is linked with a powerful optimization algorithm, Shuffled Complex Evolution (SCE). SCE deals with a population of points and searches in all directions within the feasible space based on the objective function. In the present study, SCE is applied to the design of a cost-effective water distribution network. The findings of this study show that SCE is computationally much faster than other widely used algorithms such as GAs, Simulated Annealing, GLOBE and the Shuffled Frog Leaping Algorithm. Hence, SCE is a potential alternative optimization algorithm to solve water distribution network problems.

Journal ArticleDOI
TL;DR: The quantum annealing scheme, even with a drastically simple form of kinetic energy, appears definitely superior to the classical one, when tested on a 1002-city instance of the standard TSPLIB.
Abstract: We propose a path-integral Monte Carlo quantum annealing scheme for the symmetric traveling-salesman problem, based on a highly constrained Ising-like representation, and we compare its performance against standard thermal simulated annealing. The Monte Carlo moves implemented are standard, and consist in restructuring a tour by exchanging two links (two-opt moves). The quantum annealing scheme, even with a drastically simple form of kinetic energy, appears definitely superior to the classical one, when tested on a 1002-city instance of the standard TSPLIB.

Book ChapterDOI
01 Jan 2004
TL;DR: Two Go programs are described, Olga and Oleg, developed by a Monte-Carlo approach that is simpler than Bruegmann’s (1993) approach, and the ever-increasing power of computers leads us to think that Monte-Carlo approaches are worth considering for computer Go in the future.
Abstract: We describe two Go programs, Olga and Oleg, developed by a Monte-Carlo approach that is simpler than Bruegmann’s (1993) approach. Our method is based on Abramson (1990). We performed experiments, to assess ideas on (1) progressive pruning, (2) all moves as first heuristic, (3) temperature, (4) simulated annealing, and (5) depth-two tree search within the Monte-Carlo framework. Progressive pruning and the all moves as first heuristic are good speed-up enhancements that do not deteriorate the level of the program too much. Then, using a constant temperature is an adequate and simple heuristic that is about as good as simulated annealing. The depth-two heuristic gives deceptive results at the moment. The results of our Monte-Carlo programs against knowledge-based programs on 9x9 boards are promising. Finally, the ever-increasing power of computers leads us to think that Monte-Carlo approaches are worth considering for computer Go in the future.

Journal ArticleDOI
TL;DR: A simple algorithm to compute a temperature which is compatible with a given acceptance ratio is proposed and the properties of the acceptance probability are studied, showing that this function is convex for low temperatures and concave for high temperatures.
Abstract: The classical version of simulated annealing is based on a cooling schedule. Generally, the initial temperature is set such that the acceptance ratio of bad moves is equal to a certain value χ0. In this paper, we first propose a simple algorithm to compute a temperature which is compatible with a given acceptance ratio. Then, we study the properties of the acceptance probability. It is shown that this function is convex for low temperatures and concave for high temperatures. We also provide a lower bound for the number of temperature plateaux of a simulated annealing algorithm based on a geometric cooling schedule. Finally, many numerical experiments are reported.
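The first step can be sketched as follows: given sampled cost increases of "bad" moves, find the temperature whose average Metropolis acceptance probability matches the target ratio χ0. The paper derives a dedicated iterative scheme; the plain bisection below is our own simplification, and the average acceptance probability is monotonically increasing in T, which is what makes bisection valid.

```python
import math

def temperature_for_ratio(deltas, chi0, tol=1e-6):
    """Find T such that mean(exp(-delta / T)) over the sampled uphill
    cost increases 'deltas' equals the target acceptance ratio chi0
    (0 < chi0 < 1). Bisection sketch, not the paper's exact iteration."""
    def ratio(t):
        return sum(math.exp(-d / t) for d in deltas) / len(deltas)
    lo, hi = 1e-9, 1.0
    while ratio(hi) < chi0:        # grow the bracket until it contains chi0
        hi *= 2.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if ratio(mid) < chi0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

In practice the deltas would be collected by a short random walk over the problem's neighborhood structure before the annealing run starts.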

BookDOI
01 Jan 2004
TL;DR: This volume collects methodological chapters on evolutionary multiobjective optimization and metaheuristics, together with problem-oriented contributions applying simulated annealing, local search and genetic algorithms to multiobjective vehicle routing, traveling salesman and job-shop scheduling problems.
Abstract: I. Methodology: 1. A Tutorial on Evolutionary Multiobjective Optimization. 2. Bounded Pareto Archiving: Theory and Practice. 3. Evaluation of Multiple Objective Metaheuristics. 4. An Introduction to Multiobjective Metaheuristics for Scheduling and Timetabling. II. Problem-oriented Contributions: 5. A Particular Multiobjective Vehicle Routing Problem Solved by Simulated Annealing. 6. A Dynasearch Neighborhood for the Bicriteria Traveling Salesman Problem. 7. Pareto Local Optimum Sets in the Biobjective Traveling Salesman Problem: An Experimental Study. 8. A Genetic Algorithm for Tackling Multiobjective Job-shop Scheduling Problems. 9. RPSGAe - Reduced Pareto Set Genetic Algorithm: Application to Polymer Extrusion.

Journal ArticleDOI
01 Oct 2004
TL;DR: The Metropolis criterion of simulated annealing algorithm is introduced in order to balance exploration and exploitation of Q- learning, and the modified Q-learning algorithm based on this criterion, SA-Q-learning, is presented.
Abstract: The balance between exploration and exploitation is one of the key problems of action selection in Q-learning. Pure exploitation causes the agent to reach the locally optimal policies quickly, whereas excessive exploration degrades the performance of the Q-learning algorithm even if it may accelerate the learning process and allow avoiding the locally optimal policies. In this paper, finding the optimum policy in Q-learning is described as search for the optimum solution in combinatorial optimization. The Metropolis criterion of simulated annealing algorithm is introduced in order to balance exploration and exploitation of Q-learning, and the modified Q-learning algorithm based on this criterion, SA-Q-learning, is presented. Experiments show that SA-Q-learning converges more quickly than Q-learning or Boltzmann exploration, and that the search does not suffer of performance degradation due to excessive exploration.
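The Metropolis-based action selection described above can be sketched as follows; the flat Q-row layout and all names are our illustrative assumptions, not the paper's notation:

```python
import math
import random

def metropolis_action(q_row, temperature, rng):
    """SA-Q-learning style action selection (sketch): propose a random
    action; if it looks worse than the greedy one, accept it only with
    probability exp((Q[proposal] - Q[greedy]) / T), per the Metropolis
    criterion. High T explores; as T is annealed toward 0, the choice
    becomes greedy (pure exploitation)."""
    greedy = max(range(len(q_row)), key=lambda a: q_row[a])
    proposal = rng.randrange(len(q_row))
    if q_row[proposal] >= q_row[greedy]:
        return proposal
    accept = math.exp((q_row[proposal] - q_row[greedy]) / temperature)
    return proposal if rng.random() < accept else greedy

rng = random.Random(0)
low_t_pick = metropolis_action([0.0, 1.0], 1e-6, rng)   # near-zero T: greedy
high_t_picks = [metropolis_action([0.0, 1.0], 100.0, rng) for _ in range(200)]
```

Annealing the temperature over training episodes then moves the agent smoothly from Boltzmann-like exploration toward pure exploitation, which is the balance the paper targets.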