
Showing papers on "Simulated annealing published in 1998"


Journal ArticleDOI
TL;DR: In this article, the authors introduce quantum fluctuations into the simulated annealing process of optimization problems, aiming at faster convergence to the optimal state. Quantum fluctuations cause transitions between states and thus play the same role as thermal fluctuations in the conventional approach.
Abstract: We introduce quantum fluctuations into the simulated annealing process of optimization problems, aiming at faster convergence to the optimal state. Quantum fluctuations cause transitions between states and thus play the same role as thermal fluctuations in the conventional approach. The idea is tested by the transverse Ising model, in which the transverse field is a function of time similar to the temperature in the conventional method. The goal is to find the ground state of the diagonal part of the Hamiltonian with high accuracy as quickly as possible. We have solved the time-dependent Schr\"odinger equation numerically for small size systems with various exchange interactions. Comparison with the results of the corresponding classical (thermal) method reveals that the quantum annealing leads to the ground state with much larger probability in almost all cases if we use the same annealing schedule.
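The transverse-field idea above can be illustrated on the smallest possible case. The sketch below is not the paper's multi-spin code; it anneals a single spin with H(t) = -h·σz - Γ(t)·σx, integrates the time-dependent Schrödinger equation with RK4, and reports the probability of ending in the ground state of the diagonal part. All parameters are illustrative.

```python
import math

def quantum_anneal(h=1.0, gamma0=5.0, T=50.0, dt=0.01):
    """Anneal H(t) = -h*sz - Gamma(t)*sx on one spin, with Gamma: gamma0 -> 0.

    State psi = (a_up, a_down); integrate i dpsi/dt = H psi with RK4.
    """
    a = complex(1 / math.sqrt(2))      # start in the ground state of -gamma0*sx
    b = complex(1 / math.sqrt(2))

    def deriv(a, b, g):
        # H = [[-h, -g], [-g, +h]] in the {up, down} basis
        return (-1j * (-h * a - g * b),
                -1j * (-g * a + h * b))

    steps = int(T / dt)
    for k in range(steps):
        g = gamma0 * (1 - k * dt / T)  # linear annealing schedule
        k1a, k1b = deriv(a, b, g)
        k2a, k2b = deriv(a + 0.5 * dt * k1a, b + 0.5 * dt * k1b, g)
        k3a, k3b = deriv(a + 0.5 * dt * k2a, b + 0.5 * dt * k2b, g)
        k4a, k4b = deriv(a + dt * k3a, b + dt * k3b, g)
        a += dt * (k1a + 2 * k2a + 2 * k3a + k4a) / 6
        b += dt * (k1b + 2 * k2b + 2 * k3b + k4b) / 6
    return abs(a) ** 2                 # probability of the classical ground state

p = quantum_anneal()
# for a slow (adiabatic) schedule p should approach 1
```

Since the gap 2·sqrt(h² + Γ²) never closes here, a slow schedule keeps the system in its instantaneous ground state, which is the mechanism the paper exploits.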

1,710 citations


Journal ArticleDOI
01 Nov 1998
TL;DR: The deterministic annealing approach to clustering and its extensions have demonstrated substantial performance improvement over standard supervised and unsupervised learning methods in a variety of important applications, including compression, estimation, pattern recognition and classification, and statistical regression.
Abstract: The deterministic annealing approach to clustering and its extensions have demonstrated substantial performance improvement over standard supervised and unsupervised learning methods in a variety of important applications, including compression, estimation, pattern recognition and classification, and statistical regression. The application-specific cost is minimized subject to a constraint on the randomness of the solution, which is gradually lowered. We emphasize the intuition gained from the analogy to statistical physics. Alternatively, the method is derived within rate-distortion theory, where the annealing process is equivalent to computation of Shannon's rate-distortion function, and the annealing temperature is inversely proportional to the slope of the curve. The basic algorithm is extended by incorporating structural constraints to allow optimization of numerous popular structures including vector quantizers, decision trees, multilayer perceptrons, radial basis functions, and mixtures of experts.

964 citations


Journal ArticleDOI
TL;DR: Pareto simulated annealing, as discussed by the authors, uses a sample of so-called generating solutions, each of which explores its neighbourhood in a way similar to that of classical simulated annealing. The goal is to find, in a relatively short time, a good approximation of the set of efficient solutions of a multiple-objective combinatorial optimization problem.
Abstract: This paper presents a multiple-objective metaheuristic procedure: Pareto simulated annealing. The goal of the procedure is to find in a relatively short time a good approximation of the set of efficient solutions of a multiple-objective combinatorial optimization problem. The procedure uses a sample of so-called generating solutions. Each solution explores its neighbourhood in a way similar to that of classical simulated annealing. Weights of the objectives, used for their local aggregation, are tuned in each iteration in order to assure a tendency for approaching the efficient solutions set while maintaining a uniform distribution of the generating solutions over this set. A computational experiment shows that the method is a better tool for approximating the efficient set than some previous proposals. © 1998 John Wiley & Sons, Ltd.

749 citations


Journal ArticleDOI
TL;DR: Line search methods, the restriction of vectors of variables to discrete grids, the use of geometric simplices, conjugate direction procedures, trust region algorithms that form linear or quadratic approximations to the objective function, and simulated annealing are addressed.
Abstract: Many different procedures have been proposed for optimization calculations when first derivatives are not available. Further, several researchers have contributed to the subject, including some who wish to prove convergence theorems, and some who wish to make any reduction in the least calculated value of the objective function. There is not even a key idea that can be used as a foundation of a review, except for the problem itself, which is the adjustment of variables so that a function becomes least, where each value of the function is returned by a subroutine for each trial vector of variables. Therefore the paper is a collection of essays on particular strategies and algorithms, in order to consider the advantages, limitations and theory of several techniques. The subjects addressed are line search methods, the restriction of vectors of variables to discrete grids, the use of geometric simplices, conjugate direction procedures, trust region algorithms that form linear or quadratic approximations to the objective function, and simulated annealing. We study the main features of the methods themselves, instead of providing a catalogue of references to published work, because an understanding of these features may be very helpful to future research.

556 citations


Journal ArticleDOI
TL;DR: A fundamental open problem in computer vision—determining pose and correspondence between two sets of points in space—is solved with a novel, fast, robust and easily implementable algorithm using a combination of optimization techniques.

532 citations


Journal ArticleDOI
TL;DR: A deterministic annealing EM (DAEM) algorithm is presented for maximum likelihood estimation problems, to overcome the local-maxima problem associated with the conventional EM algorithm; it can obtain better estimates that are largely independent of the initial parameter values.

503 citations


Journal ArticleDOI
TL;DR: A hybrid procedure that embeds GLS (Guided Local Search) into a Shifting Bottleneck framework and takes advantage of the differences between the two neighborhood structures proves to be particularly efficient.
Abstract: Many recently developed local search procedures for job shop scheduling use interchange of operations, embedded in a simulated annealing or tabu search framework. We develop a new variable depth search procedure, GLS (Guided Local Search), based on an interchange scheme and using the new concept of neighborhood trees. Structural properties of the neighborhood are used to guide the search in promising directions. While this procedure competes successfully with others even as a stand-alone, a hybrid procedure that embeds GLS into a Shifting Bottleneck framework and takes advantage of the differences between the two neighborhood structures proves to be particularly efficient. We report extensive computational testing on all the problems available from the literature.

355 citations


Journal ArticleDOI
TL;DR: In this article, a simulated annealing algorithm (SAA) was used to solve the unit commitment problem (UCP) and new rules for randomly generating feasible solutions were introduced.
Abstract: This paper presents a simulated annealing algorithm (SAA) to solve the unit commitment problem (UCP). New rules for randomly generating feasible solutions are introduced. The problem has two subproblems: a combinatorial optimization problem and a nonlinear programming problem. The former is solved using the SAA, while the latter is solved via a quadratic programming routine. Numerical results showed an improvement in the solution costs compared to previously obtained results.

299 citations


Journal ArticleDOI
TL;DR: In this paper, the authors compared different simulated annealing schedules in order to find the cooling strategy which has the least total entropy production during the annealing process, for given initial and final states and a fixed number of iterations.
Abstract: Using computer experiments on a simple three-state system and an NP-complete system of permanents, we compare different proposed simulated annealing schedules in order to find the cooling strategy which has the least total entropy production during the annealing process for given initial and final states and a fixed number of iterations. The schedules considered are constant thermodynamic speed, exponential, logarithmic, and linear cooling schedules. The constant thermodynamic speed schedule is shown to be the best. We are actually considering two different schedules with constant thermodynamic speed: the original one, valid for near-equilibrium processes, and a version based on the natural timescale, valid also at higher speeds. The latter delivers better results, especially in the case of fast cooling or when the system is far from equilibrium. Also with the lowest energy encountered during the entire optimization (the best-so-far energy) as the indicator of merit, constant thermodynamic speed is superior. Finally, we have compared two different estimators of the relaxation time. One estimator uses the second largest eigenvalue of the thermalized form of the transition probability matrix, and the other uses a simpler approximation for small deviations from equilibrium. These two expressions only agree at high temperatures.
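The three simple schedules compared above are easy to state; the sketch below generates them with illustrative parameters. Constant thermodynamic speed is only noted in a comment, since it requires system-specific relaxation-time and heat-capacity estimates.

```python
import math

def exponential(t0, k, alpha=0.95):
    # T_k = T_0 * alpha^k  (geometric cooling)
    return t0 * alpha ** k

def logarithmic(c, k):
    # T_k = c / ln(k + 2): the classical slow schedule with convergence guarantees
    return c / math.log(k + 2)

def linear(t0, k, n, t_end=0.0):
    # straight-line interpolation from T_0 down to t_end over n steps
    return t0 + (t_end - t0) * k / n

n = 100
exp_s = [exponential(10.0, k) for k in range(n)]
log_s = [logarithmic(10.0, k) for k in range(n)]
lin_s = [linear(10.0, k, n - 1) for k in range(n)]
# constant thermodynamic speed would instead set the cooling rate dT/dt from
# estimates of the system's relaxation time and heat capacity at each T
```

The contrast is the point: the logarithmic schedule stays hot for a very long time, while exponential and linear cooling reach low temperatures within the iteration budget.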

263 citations


Journal ArticleDOI
TL;DR: By taking into account the features of the landscape generated by the operators used, a simple genetic algorithm for finding the minimum makespan of the n-job, m-machine permutation flowshop sequencing problem is improved.
Abstract: In a previous paper, a simple genetic algorithm (GA) was developed for finding (approximately) the minimum makespan of the n-job, m-machine permutation flowshop sequencing problem (PFSP). The performance of the algorithm was comparable to that of a naive neighborhood search technique and a proven simulated annealing algorithm. However, recent results have demonstrated the superiority of a tabu search method in solving the PFSP. In this paper, we reconsider the implementation of a GA for this problem and show that by taking into account the features of the landscape generated by the operators used, we are able to improve its performance significantly.

251 citations


Posted Content
TL;DR: It is shown how one can use the Markov chain transitions for such an annealing sequence to define an importance sampler, which can be seen as a generalization of a recently-proposed variant of sequential importance sampling.
Abstract: Simulated annealing - moving from a tractable distribution to a distribution of interest via a sequence of intermediate distributions - has traditionally been used as an inexact method of handling isolated modes in Markov chain samplers. Here, it is shown how one can use the Markov chain transitions for such an annealing sequence to define an importance sampler. The Markov chain aspect allows this method to perform acceptably even for high-dimensional problems, where finding good importance sampling distributions would otherwise be very difficult, while the use of importance weights ensures that the estimates found converge to the correct values as the number of annealing runs increases. This annealed importance sampling procedure resembles the second half of the previously-studied tempered transitions, and can be seen as a generalization of a recently-proposed variant of sequential importance sampling. It is also related to thermodynamic integration methods for estimating ratios of normalizing constants. Annealed importance sampling is most attractive when isolated modes are present, or when estimates of normalizing constants are required, but it may also be more generally useful, since its independent sampling allows one to bypass some of the problems of assessing convergence and autocorrelation in Markov chain samplers.
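The construction can be sketched in a few lines for a 1-D example: anneal from a standard normal to an unnormalized Gaussian target, accumulate the log importance weight along the sequence of intermediate distributions, and average the weights to estimate the target's normalizing constant (0.5·sqrt(2π) ≈ 1.2533 here). The settings below are illustrative, not from the paper.

```python
import math, random

random.seed(1)

def log_p0(x):
    # tractable starting distribution: standard normal (normalized)
    return -0.5 * x * x - 0.5 * math.log(2 * math.pi)

def log_f(x):
    # unnormalized target: Gaussian, mean 3, sd 0.5; true Z = 0.5*sqrt(2*pi)
    return -0.5 * ((x - 3.0) / 0.5) ** 2

def ais_run(n_temps=120, mh_scale=0.5):
    betas = [j / (n_temps - 1) for j in range(n_temps)]
    x = random.gauss(0.0, 1.0)          # exact draw from p0
    logw = 0.0
    for j in range(1, n_temps):
        # weight update for moving from f_{beta[j-1]} to f_{beta[j]}
        logw += (betas[j] - betas[j - 1]) * (log_f(x) - log_p0(x))
        # one Metropolis move leaving f_{beta[j]} invariant
        y = x + random.gauss(0.0, mh_scale)
        cur = betas[j] * log_f(x) + (1 - betas[j]) * log_p0(x)
        prop = betas[j] * log_f(y) + (1 - betas[j]) * log_p0(y)
        if prop >= cur or random.random() < math.exp(prop - cur):
            x = y
    return logw

runs = 4000
z_hat = sum(math.exp(ais_run()) for _ in range(runs)) / runs
```

Because each run is independent, the weighted average converges to the true normalizing constant as the number of annealing runs grows, which is the property the abstract emphasizes.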

Journal ArticleDOI
TL;DR: The canonical way to derive clustering algorithms within this framework as well as an efficient implementation of mean-field annealing and the closely related Gibbs sampler are presented.
Abstract: We present a novel optimization framework for unsupervised texture segmentation that relies on statistical tests as a measure of homogeneity. Texture segmentation is formulated as a data clustering problem based on sparse proximity data. Dissimilarities of pairs of textured regions are computed from a multiscale Gabor filter image representation. We discuss and compare a class of clustering objective functions which is systematically derived from invariance principles. As a general optimization framework, we propose deterministic annealing based on a mean-field approximation. The canonical way to derive clustering algorithms within this framework as well as an efficient implementation of mean-field annealing and the closely related Gibbs sampler are presented. We apply both annealing variants to Brodatz-like microtexture mixtures and real-world images.

Journal ArticleDOI
TL;DR: In this paper, spatial simulated annealing is presented as a method to optimize spatial environmental sampling schemes, and it is shown that SSA is superior to conventional methods of designing sampling schemes.
Abstract: Spatial sampling is an important issue in environmental studies because the sample configuration influences both costs and effectiveness of a survey. Practical sampling constraints and available pre-information can help to optimize the sampling scheme. In this paper, spatial simulated annealing (SSA) is presented as a method to optimize spatial environmental sampling schemes. Sampling schemes are optimized at the point-level, taking into account sampling constraints and preliminary observations. Two optimization criteria have been used. The first optimizes even spreading of the points over a region, whereas the second optimizes variogram estimation using a proposed criterion from the literature. For several examples it is shown that SSA is superior to conventional methods of designing sampling schemes. Improvements up to 30% occur for the first criterion, and an almost complete solution is found for the second criterion. Spatial simulated annealing is especially useful in studies with many sampling constraints. It is flexible in implementing additional, quantitative criteria.
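A toy version of the idea — perturb one sample point at a time, score the design by the mean distance from a fine grid to the nearest sample (a stand-in for the even-spreading criterion), and accept moves with the Metropolis rule — might look like this. All parameters are illustrative.

```python
import math, random

random.seed(7)

def coverage_cost(pts, grid=12):
    # mean distance from a fine evaluation grid to the nearest sample point
    # (a stand-in for the even-spreading criterion; smaller is better)
    total = 0.0
    for i in range(grid):
        for j in range(grid):
            gx, gy = (i + 0.5) / grid, (j + 0.5) / grid
            total += min(math.hypot(gx - x, gy - y) for x, y in pts)
    return total / (grid * grid)

def spatial_sa(n_pts=12, iters=2000, t0=0.005, alpha=0.999, step=0.1):
    pts = [(random.random(), random.random()) for _ in range(n_pts)]
    init_cost = coverage_cost(pts)
    cost, best = init_cost, init_cost
    T = t0
    for _ in range(iters):
        i = random.randrange(n_pts)
        old = pts[i]
        # jitter one sample point, clipped to the unit square
        pts[i] = (min(1.0, max(0.0, old[0] + random.gauss(0, step))),
                  min(1.0, max(0.0, old[1] + random.gauss(0, step))))
        new_cost = coverage_cost(pts)
        if new_cost < cost or random.random() < math.exp(-(new_cost - cost) / T):
            cost = new_cost
            best = min(best, cost)
        else:
            pts[i] = old
        T *= alpha
    return init_cost, best

init_cost, best_cost = spatial_sa()
```

Sampling constraints (forbidden areas, fixed preliminary observations) would enter this sketch simply as rejections of candidate moves, which is what makes the point-level SA formulation flexible.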

Journal ArticleDOI
TL;DR: In this paper, the authors used simulated annealing to fit four commonly used models to predict the dates of flowering of temperate-zone trees: the spring warming, sequential, parallel and alternating models.
Abstract: The aim of the present study was to test the four commonly used models to predict the dates of flowering of temperate-zone trees: the spring warming, sequential, parallel and alternating models. Previous studies concerning the performance of these models have shown that they were unable to make accurate predictions based on external data. One of the reasons for such inaccuracy may be wrong estimations of the parameters of each model due to the non-convergence of the optimization algorithm towards their maximum likelihood. We proposed to fit these four models using a simulated annealing method which is known to avoid local extrema of any kind of function, and thus is particularly well adapted to fit budburst models, as their likelihood function presents many local maxima. We tested this method using a phenological dataset deduced from aeropalynological data. Annual pollen spectra were used to estimate the dates of flowering of the populations around the sampling station. The results show that simulated annealing provides a better fit than traditional methods. Despite this improvement, classical models still failed to predict external data. We expect the simulated annealing method to allow reliable comparisons among models, leading to a selection of biologically relevant ones.

Journal ArticleDOI
TL;DR: A new variation of the SA algorithm is suggested and is found to be the most effective of all the optimisation algorithms considered, but the appropriate choice of updating parameters is of paramount importance.

Journal ArticleDOI
TL;DR: The test results illustrate that the efficiency of COA is much higher than that of some stochastic algorithms such as the simulated annealing algorithm SAA and chemotaxis algorithm CA, which are often used to optimize complex problems.
Abstract: During past decades, the role of optimization has steadily increased in many fields. It is a hot problem in research on control theory. In practice, optimization problems become more and more complex. Traditional algorithms cannot solve them satisfactorily. Either they are trapped to local minima or they need much more search time. Chaos often exists in nonlinear systems. It has many good properties such as ergodicity, stochastic properties, and "regularity." A chaotic motion can go nonrepeatedly through every state in a certain domain. By use of these properties of chaos, an effective optimization method is proposed: the chaos optimization algorithm (COA). With chaos search, some complex optimization problems are solved very well. The test results illustrate that the efficiency of COA is much higher than that of some stochastic algorithms such as the simulated annealing algorithm (SAA) and the chemotaxis algorithm (CA), which are often used to optimize complex problems. The chaos optimization method provides a...
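The coarse phase of such a chaos search can be sketched with a logistic-map carrier. The function, bounds and iteration count below are illustrative, and the full COA adds a second, contracted search phase that is only noted in a comment.

```python
def coa_minimize(f, lo, hi, n_iter=20000, z0=0.33):
    """First (coarse) phase of a chaos optimization sketch: ride the logistic
    map z -> 4z(1-z), whose orbit visits (0,1) ergodically, map each carrier
    value onto [lo, hi], and keep the best point seen.  (The full COA adds a
    second, contracted search phase around the incumbent.)"""
    z = z0                      # avoid the map's fixed points (0 and 0.75)
    best_x, best_f = None, float("inf")
    for _ in range(n_iter):
        z = 4.0 * z * (1.0 - z)
        x = lo + (hi - lo) * z
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

best_x, best_f = coa_minimize(lambda x: (x - 1.0) ** 2, -2.0, 2.0)
```

The ergodicity property in the abstract is what makes this work: the chaotic orbit eventually passes arbitrarily close to every point of the search interval without repeating itself.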

Journal ArticleDOI
01 May 1998
TL;DR: In this paper, an extended genetic algorithm for solving the optimal transmission network expansion planning problem is presented, where two main improvements have been introduced in the genetic algorithm: (a) an initial population obtained by conventional optimisation-based methods; (b) a mutation approach inspired by the simulated annealing technique.
Abstract: The paper presents an extended genetic algorithm for solving the optimal transmission network expansion planning problem. Two main improvements have been introduced in the genetic algorithm: (a) initial population obtained by conventional optimisation-based methods; (b) mutation approach inspired by the simulated annealing technique. The proposed method is general in the sense that it does not assume any particular property of the problem being solved, such as linearity or convexity. Excellent performance is reported in the test results section of the paper for a difficult large-scale real-life problem: a substantial reduction in investment costs has been obtained with regard to previous solutions obtained via conventional optimisation methods and simulated annealing algorithms; statistical comparison procedures have been employed in benchmarking different versions of the genetic algorithm and simulated annealing methods.

Journal ArticleDOI
TL;DR: The results indicate that the choice of neighbourhood is the most important decision and that neighbourhoods based on the graph-theoretic concept of Kempe chains are the most effective regardless of the objectives or size of the problem.

Journal ArticleDOI
TL;DR: In this paper, a Simulated Annealing-based technique was used to address the assembly line balancing problem for multiple objective problems when paralleling of workstations is permitted, and the resulting performance of each solution was studied through a simulation experiment.
Abstract: This research presents a Simulated Annealing based technique to address the assembly line balancing problem for multiple objective problems when paralleling of workstations is permitted. The Simulated Annealing methodology is used for 23 line balancing strategies across seven problems. The resulting performance of each solution was studied through a simulation experiment. Many of the problems consisted of multiple products, which were sequenced in a mixed model fashion, task times were assumed to be stochastic, and parallel workstations were permitted. Two primary performance objectives were of most interest: total cost (labour and equipment) per part, and the degree to which the desired cycle time was achieved. Other traditional line balancing and production performance measures were also collected. This paper demonstrates how Simulated Annealing can be used to obtain line balancing solutions when one or more objectives are important. The experimental results showed that Simulated Annealing approaches yi...

Journal ArticleDOI
TL;DR: A new approach to chaotic simulated annealing with guaranteed convergence and minimization of the energy function is suggested by gradually reducing the time step in the Euler approximation of the differential equations that describe the continuous Hopfield neural network.
Abstract: Chen and Aihara (1995) proposed a chaotic simulated annealing approach to solving optimization problems. By adding a negative self coupling to a network model proposed earlier by Aihara et al. and gradually removing this negative self-coupling, they used the transient chaos for searching and self-organizing, thereby achieving great improvement over other neural-network approaches to optimization problems with or without simulated annealing. In this paper we suggest a new approach to chaotic simulated annealing with guaranteed convergence and minimization of the energy function by gradually reducing the time step in the Euler approximation of the differential equations that describe the continuous Hopfield neural network. This approach eliminates the need to carefully select other system parameters. We also generalize the convergence theorems of Chen and Aihara to arbitrarily increasing neuronal input-output functions and to less restrictive and yet more compact forms.

Journal ArticleDOI
TL;DR: This work studies global optimization methods that find the minimum of the least-squares error function of the current dipole estimation problem: clustering method, simulated annealing, and genetic algorithms.
Abstract: The locations of active brain areas can be estimated from the magnetic field produced by the neural current sources. In many cases, the actual current distribution can be modeled with a set of stationary current dipoles with time-varying amplitudes. This work studies global optimization methods that find the minimum of the least-squares error function of the current dipole estimation problem. Three different global optimization methods were investigated: clustering method, simulated annealing, and genetic algorithms. In simulation studies, the genetic algorithm was the most effective method. The methods were also applied to analysis of actual measurement data.

Journal ArticleDOI
TL;DR: This paper introduces a new binary encoding scheme to represent solutions, together with a heuristic to decode the binary representations into actual sequences, and compares it to the usual "natural" permutation representation for descent, simulated annealing, threshold accepting, tabu search and genetic algorithms on a large set of test problems.
Abstract: This paper presents several local search heuristics for the problem of scheduling a single machine to minimize total weighted tardiness. We introduce a new binary encoding scheme to represent solutions, together with a heuristic to decode the binary representations into actual sequences. This binary encoding scheme is compared to the usual "natural" permutation representation for descent, simulated annealing, threshold accepting, tabu search and genetic algorithms on a large set of test problems. Computational results indicate that all of the heuristics which employ our binary encoding are very robust in that they consistently produce good quality solutions, especially when multistart implementations are used instead of a single long run. The binary encoding is also used in a new genetic algorithm which performs very well and requires comparatively little computation time. A comparison of neighborhood search methods which use the permutation and binary representations shows that the permutation-based methods have a higher likelihood of generating an optimal solution, but are less robust in that some poor solutions are obtained. Of the neighborhood search methods, tabu search clearly dominates the others. Multistart descent performs remarkably well relative to simulated annealing and threshold accepting.
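As a concrete toy instance of the neighbourhood-search side of this study, the sketch below runs multistart simulated annealing with a swap neighbourhood on the permutation representation of a small single-machine total-weighted-tardiness problem, and brute-forces the optimum for comparison. The instance and parameters are invented for illustration.

```python
import itertools, math, random

random.seed(3)

def twt(seq, p, d, w):
    # total weighted tardiness of a job sequence on a single machine
    t = cost = 0
    for j in seq:
        t += p[j]
        cost += w[j] * max(0, t - d[j])
    return cost

def sa_twt(p, d, w, iters=4000, t0=20.0, alpha=0.999, restarts=5):
    n = len(p)
    best = None
    for _ in range(restarts):            # multistart, which the study found robust
        seq = list(range(n))
        random.shuffle(seq)
        cur = twt(seq, p, d, w)
        run_best = cur
        T = t0
        for _ in range(iters):
            i, j = random.sample(range(n), 2)
            seq[i], seq[j] = seq[j], seq[i]      # swap neighbourhood
            cand = twt(seq, p, d, w)
            if cand <= cur or random.random() < math.exp(-(cand - cur) / T):
                cur = cand
                run_best = min(run_best, cur)
            else:
                seq[i], seq[j] = seq[j], seq[i]  # undo
            T *= alpha
        best = run_best if best is None else min(best, run_best)
    return best

p = [4, 2, 7, 3, 6, 5, 8, 2]        # processing times (invented instance)
d = [5, 9, 12, 8, 20, 15, 10, 30]   # due dates
w = [3, 1, 5, 2, 4, 2, 6, 1]        # tardiness weights
sa_cost = sa_twt(p, d, w)
opt = min(twt(s, p, d, w) for s in itertools.permutations(range(len(p))))
```

With 8 jobs the full permutation space (40,320 sequences) is enumerable, so the gap between the annealed solution and the true optimum can be checked directly.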

Journal ArticleDOI
01 Nov 1998
TL;DR: Perturbation analysis shows that the solutions obtained by MOSST are truly Pareto-optimal, i.e. no objective can be further improved without degrading the others, all in a single run.
Abstract: A new multi-objective stochastic search technique (MOSST) for the multi-objective economic dispatch problem in power systems is presented. It is a highly constrained problem with both equality and inequality constraints. The MOSST heuristic has been designed as a combination of real coded genetic algorithms (GA) and simulated annealing (SA). It incorporates a genetic crossover operator BLX-α and a problem specific mutation operator with a local search heuristic to provide a better search capability. Extensive simulations are carried out on standard test systems, considering various aspects, and the results are compared with other methods. These results indicate that the new MOSST heuristic converges rapidly to improved solutions. MOSST is a truly multi-objective technique, as it provides the values of various parameters for optimising different objectives, as well as the best compromise between them, all in a single run. Perturbation analysis shows that the solutions obtained by MOSST are truly Pareto-optimal, i.e. no objective can be further improved without degrading the others.

Journal ArticleDOI
TL;DR: A new method is introduced to create artificial time sequences that fulfil given constraints but are random otherwise, to avoid certain artifacts generated by Fourier-based randomization schemes.
Abstract: A new method is introduced to create artificial time sequences that fulfil given constraints but are random otherwise. Constraints are usually derived from a measured signal for which surrogate data are to be generated. They are fulfilled by minimizing a suitable cost function using simulated annealing. A wide variety of structures can be imposed on the surrogate series, including multivariate, nonlinear, and nonstationary properties. When the linear correlation structure is to be preserved, the new approach avoids certain artifacts generated by Fourier-based randomization schemes.
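A stripped-down version of the scheme: take lag-1 autocorrelation as the only constraint, use pair swaps as moves (which preserve the amplitude distribution exactly), and anneal the squared mismatch. The series, schedule and thresholds are all illustrative.

```python
import math, random

random.seed(11)

def acf1(x):
    # lag-1 autocorrelation
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x)
    return sum((x[i] - m) * (x[i + 1] - m) for i in range(n - 1)) / var

# stand-in for a measured signal: a noisy sine
orig = [math.sin(0.3 * i) + 0.3 * random.gauss(0, 1) for i in range(120)]
target = acf1(orig)

surr = orig[:]          # a shuffle preserves the amplitude distribution exactly
random.shuffle(surr)
cost = (acf1(surr) - target) ** 2
T = 0.01
for _ in range(12000):
    i, j = random.sample(range(len(surr)), 2)
    surr[i], surr[j] = surr[j], surr[i]
    new_cost = (acf1(surr) - target) ** 2
    if new_cost < cost or random.random() < math.exp(-(new_cost - cost) / T):
        cost = new_cost
    else:
        surr[i], surr[j] = surr[j], surr[i]      # reject: undo the swap
    T *= 0.9995
```

Because the moves are permutations of the original values, the surrogate matches the data's distribution by construction; only the imposed correlation structure is fitted by annealing, which is what sidesteps the artifacts of Fourier-based randomization.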

Journal ArticleDOI
TL;DR: This work proposes an approach based on a modified genetic algorithm that generates and manipulates individuals with fixed size and a minimum-separation encoding scheme that eliminates redundant zeros in the solution representation, which indicates that this approach is indeed a good method for solving the channel-assignment problem.
Abstract: With the limited frequency spectrum and an increasing demand for cellular communication services, the problem of channel assignment becomes increasingly important. However, finding a conflict-free channel assignment with the minimum channel span is NP hard. Therefore, we formulate the problem by assuming a given channel span. Our objective is to obtain a conflict-free channel assignment among the cells, which satisfies both the electromagnetic compatibility (EMC) constraints and traffic demand requirements. We propose an approach based on a modified genetic algorithm (GA). The approach consists of a genetic-fix algorithm that generates and manipulates individuals with fixed size (i.e., in binary representation, the number of ones is fixed) and a minimum-separation encoding scheme that eliminates redundant zeros in the solution representation. Using these two strategies, the search space can be reduced substantially. Simulations on the first four benchmark problems showed that this algorithm could achieve at least 80%, if not 100%, convergence to solutions within reasonable time. In the fifth benchmark problem, our algorithm found better solutions with shorter channel span than any existing algorithms. Such significant results indicate that our approach is indeed a good method for solving the channel-assignment problem.

Journal ArticleDOI
TL;DR: This paper presents a method for establishing an optimal network design for the estimation of areal averages of rainfall events by using the well known geostatistical variance-reduction method in combination with simulated annealing as an algorithm of minimisation.

Journal ArticleDOI
TL;DR: The proposed implementation of the tabu search approach suggests simple techniques for generating neighborhoods of a given sequence and a combined scheme for intensification and diversification that has not been considered before; the result is an implementation that improves upon previous tabu search implementations that use mechanisms of comparable simplicity.

01 Jan 1998
TL;DR: Analysis of the program search space in terms of fixed length schema suggests it is highly deceptive and that for the simplest solutions large building blocks must be assembled before they have above average fitness.
Abstract: The problem of programming an artificial ant to follow the Santa Fe trail is used as an example program search space. Previously reported genetic programming, simulated annealing and hill climbing performance is shown not to be much better than random search on the Ant problem. Enumeration of a small fraction of the total search space and random sampling characterise it as rugged with multiple plateaus split by deep valleys and many local and global optima. This suggests it is difficult for hill climbing algorithms. Analysis of the program search space in terms of fixed length schema suggests it is highly deceptive and that for the simplest solutions large building blocks must be assembled before they have above average fitness. In some cases we show solutions cannot be assembled using a fixed representation from small building blocks of above average fitness. This suggests the Ant problem is difficult for

Journal ArticleDOI
Moon-Won Park1, Yeong-Dae Kim1
TL;DR: A systematic procedure is suggested to find appropriate parameter values quickly, without much human intervention, by using a nonlinear optimization method: the simplex method for nonlinear programming.

Journal Article
TL;DR: In this article, the authors tackled the problem of academic class scheduling at the university level using simulated annealing with adaptive cooling and reheating as a function of cost, and a rule-based preprocessor.
Abstract: In this study we have tackled the NP-hard problem of academic class scheduling (or timetabling) at the university level. We have investigated a variety of approaches based on simulated annealing, including mean-field annealing, simulated annealing with three different cooling schedules, and the use of a rule-based preprocessor to provide a good initial solution for annealing. The best results were obtained using simulated annealing with adaptive cooling and reheating as a function of cost, and a rule-based preprocessor. This approach enabled us to obtain valid schedules for the timetabling problem for a large university, using a complex cost function that includes student preferences. None of the other methods were able to provide a complete valid schedule.