Showing papers on "Simulated annealing published in 1992"


Journal ArticleDOI
15 Jul 1992-EPL
TL;DR: In this article, the authors proposed a new global optimization method (Simulated Tempering) for simulating effectively a system with a rough free-energy landscape (i.e., many coexisting states) at finite nonzero temperature.
Abstract: We propose a new global optimization method (Simulated Tempering) for simulating effectively a system with a rough free-energy landscape (i.e., many coexisting states) at finite nonzero temperature. This method is related to simulated annealing, but here the temperature becomes a dynamic variable, and the system is always kept at equilibrium. We analyse the method on the Random Field Ising Model, and we find a dramatic improvement over conventional Metropolis and cluster methods. We analyse and discuss the conditions under which the method has optimal performances.
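A minimal sketch of the tempering scheme this abstract describes, assuming a user-supplied `energy` function, a proposal move `propose`, a fixed temperature ladder `temps`, and per-temperature weights `weights`; the weights, which keep the chain from sticking at one rung of the ladder, must be tuned or estimated, and their use here is an assumption rather than the paper's prescription:

```python
import math
import random

def simulated_tempering(x0, energy, propose, temps, weights, n_steps):
    """Sketch: the temperature index k is itself a dynamic variable,
    so the system stays in equilibrium at every temperature instead
    of being cooled out of it."""
    x, k = x0, 0
    for _ in range(n_steps):
        # Ordinary Metropolis move at the current temperature temps[k].
        y = propose(x)
        delta = energy(y) - energy(x)
        if delta <= 0 or random.random() < math.exp(-delta / temps[k]):
            x = y
        # Propose a move to a neighbouring rung of the temperature ladder.
        j = k + random.choice((-1, 1))
        if 0 <= j < len(temps):
            log_acc = (weights[j] - weights[k]) - energy(x) * (1 / temps[j] - 1 / temps[k])
            if log_acc >= 0 or random.random() < math.exp(log_acc):
                k = j
    return x
```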

1,723 citations


Journal ArticleDOI
TL;DR: In this article, an approximation algorithm for the problem of finding the minimum makespan in a job shop is presented, which is based on simulated annealing, a generalization of the well known iterative improvement approach to combinatorial optimization problems.
Abstract: We describe an approximation algorithm for the problem of finding the minimum makespan in a job shop. The algorithm is based on simulated annealing, a generalization of the well known iterative improvement approach to combinatorial optimization problems. The generalization involves the acceptance of cost-increasing transitions with a nonzero probability to avoid getting stuck in local minima. We prove that our algorithm asymptotically converges in probability to a globally minimal solution, despite the fact that the Markov chains generated by the algorithm are generally not irreducible. Computational experiments show that our algorithm can find shorter makespans than two recent approximation approaches that are more tailored to the job shop scheduling problem. This is, however, at the cost of large running times.
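A sketch of the acceptance mechanism the abstract names, with hypothetical `neighbor` and `makespan` callables standing in for the paper's job-shop-specific transition and cost functions; the geometric cooling schedule is an assumption for illustration:

```python
import math
import random

def anneal(s0, neighbor, makespan, t0=100.0, alpha=0.95, moves_per_t=200, t_min=0.1):
    """Generic SA skeleton with the key ingredient described above:
    cost-increasing transitions are accepted with nonzero probability
    exp(-delta/T), so the search can escape local minima."""
    s, t, best = s0, t0, s0
    while t > t_min:
        for _ in range(moves_per_t):
            s_new = neighbor(s)                 # e.g. swap two operations on a machine
            delta = makespan(s_new) - makespan(s)
            if delta <= 0 or random.random() < math.exp(-delta / t):
                s = s_new
                if makespan(s) < makespan(best):
                    best = s
        t *= alpha                              # geometric cooling (an assumed schedule)
    return best
```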

1,107 citations


Journal ArticleDOI
TL;DR: Two stochastic algorithms, incorporating random perturbations, are derived from a general Classification EM algorithm to reduce the initial-position dependence of classical optimization clustering algorithms.

810 citations


Journal ArticleDOI
TL;DR: This work compares Genetic Algorithms with a functional search method, Very Fast Simulated Reannealing (VFSR), that not only is efficient in its search strategy, but also is statistically guaranteed to find the function optima.

481 citations


Journal ArticleDOI
TL;DR: An approach to solving the task allocation problem using a technique known as simulated annealing is described and a distributed hard real-time architecture is defined and new analysis is presented which enables timing requirements to be guaranteed.
Abstract: A distributed hard real-time system can be composed from a number of communicating tasks. One of the difficulties with building such systems is the problem of where to place the tasks. In general there are P^T ways of allocating T tasks to P processors, and the problem of finding an optimal feasible allocation (where all tasks meet physical and timing constraints) is known to be NP-hard. This paper describes an approach to solving the task allocation problem using a technique known as simulated annealing. It also defines a distributed hard real-time architecture and presents new analysis which enables timing requirements to be guaranteed.
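To make the P^T search space concrete: one plausible SA neighbourhood move reassigns a single task, so every one of the P^T allocations is reachable. The move set below is an illustrative assumption; the paper's actual generation mechanism is not given in the abstract.

```python
import random

def neighbor(allocation, n_processors):
    """Move one randomly chosen task to a different processor.
    allocation[i] is the processor assigned to task i, so the full
    space has n_processors ** len(allocation) members."""
    a = list(allocation)
    i = random.randrange(len(a))
    a[i] = random.choice([p for p in range(n_processors) if p != a[i]])
    return a
```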

367 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider a class of simulated annealing algorithms for global minimization of a continuous function defined on a subset of ℝ^d and consider the case where the selection Markov kernel is absolutely continuous and has a density which is uniformly bounded away from 0.
Abstract: We study a class of simulated annealing algorithms for global minimization of a continuous function defined on a subset of ℝ^d. We consider the case where the selection Markov kernel is absolutely continuous and has a density which is uniformly bounded away from 0. This class includes certain simulated annealing algorithms recently introduced by various authors. We show that, under mild conditions, the sequence of states generated by these algorithms converges in probability to the global minimum of the function. Unlike most previous studies where the cooling schedule is deterministic, our cooling schedule is allowed to be adaptive. We also address the issue of almost sure convergence versus convergence in probability.
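A sketch of the class of algorithms analysed, assuming for illustration that the feasible set is a box in ℝ^d: a uniform selection kernel on the box has a density bounded away from 0, as the theorem requires, and the cooling schedule `t_of` may depend on the step index (or, adaptively, on the run's history).

```python
import math
import random

def continuous_sa(x0, f, lower, upper, t_of, n_steps):
    """SA for global minimization of f on a box in R^d.  The selection
    kernel here is uniform on the box (density bounded away from 0);
    t_of(k) supplies the (possibly adaptive) temperature at step k."""
    x, fx = x0, f(x0)
    for k in range(n_steps):
        y = [random.uniform(lo, hi) for lo, hi in zip(lower, upper)]
        fy = f(y)
        t = t_of(k)
        if fy <= fx or random.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
    return x, fx
```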

311 citations


Journal ArticleDOI
TL;DR: A simulated annealing algorithm is proposed for the equilibrium network design problem and the ability of this algorithm to determine a globally optimal solution for two different networks is demonstrated.
Abstract: The equilibrium network design problem can be formulated as a mathematical program with variational inequality constraints. We know this problem is nonconvex; hence, it is difficult to solve for a globally optimal solution. In this paper we propose a simulated annealing algorithm for the equilibrium network design problem. We demonstrate the ability of this algorithm to determine a globally optimal solution for two different networks. One of these describes an actual city in the midwestern United States.

294 citations


Journal ArticleDOI
TL;DR: A deterministic annealing approach is suggested to search for the optimal vector quantizer given a set of training data and the resulting codebook is independent of the codebook used to initialize the iterations.
Abstract: A deterministic annealing approach is suggested to search for the optimal vector quantizer given a set of training data. The problem is reformulated within a probabilistic framework. No prior knowledge is assumed on the source density, and the principle of maximum entropy is used to obtain the association probabilities at a given average distortion. The corresponding Lagrange multiplier is inversely related to the 'temperature' and is used to control the annealing process. In this process, as the temperature is lowered, the system undergoes a sequence of phase transitions when existing clusters split naturally, without use of heuristics. The resulting codebook is independent of the codebook used to initialize the iterations.
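A sketch of one deterministic-annealing iteration under assumed conditions (squared-error distortion; NumPy arrays `data` of shape (N, d) and `codebook` of shape (K, d)): the maximum-entropy association probabilities form a Gibbs distribution at inverse temperature `beta`, and the codevectors are re-estimated as probability-weighted centroids.

```python
import numpy as np

def da_step(data, codebook, beta):
    """One deterministic-annealing update: soft maximum-entropy
    associations at inverse temperature beta, then weighted centroids.
    Raising beta (lowering T) sharpens the associations and lets
    clusters split at phase transitions."""
    d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (N, K) distortions
    logits = -beta * d2
    logits -= logits.max(axis=1, keepdims=True)     # subtract max for numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)               # Gibbs association probabilities
    return (p[:, :, None] * data[:, None, :]).sum(0) / p.sum(0)[:, None]
```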

280 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider the non-linear inversion of marine seismic refraction waveforms and show that genetic algorithms are inherently superior to random search techniques and can also perform better than iterative matrix inversion which requires a good starting model.
Abstract: SUMMARY Recently a new class of methods, to solve non-linear optimization problems, has generated considerable interest in the field of Artificial Intelligence. These methods, known as genetic algorithms, are able to solve highly non-linear and non-local optimization problems and belong to the class of global optimization techniques, which includes Monte Carlo and Simulated Annealing methods. Unlike local techniques, such as damped least squares or conjugate gradients, genetic algorithms avoid all use of curvature information on the objective function. This means that they do not require any derivative information and therefore one can use any type of misfit function equally well. Most iterative methods work with a single model and find improvements by perturbing it in some fashion. Genetic algorithms, however, work with a group of models simultaneously and use stochastic processes to guide the search for an optimal solution. Both Simulated Annealing and genetic algorithms are modelled on natural optimization systems. Simulated Annealing uses an analogy with thermodynamics; genetic algorithms have an analogy with biological evolution. This evolution leads to an efficient exchange of information between all models encountered, and allows the algorithm to rapidly assimilate and exploit the information gained to find better data fitting models. To illustrate the power of genetic algorithms compared to Monte Carlo, we consider a simple multidimensional quadratic optimization problem and show that its relative efficiency increases dramatically as the number of unknowns is increased. As an example of their use in a geophysical problem with real data we consider the non-linear inversion of marine seismic refraction waveforms. The results show that genetic algorithms are inherently superior to random search techniques and can also perform better than iterative matrix inversion which requires a good starting model. This is primarily because genetic algorithms are able to combine both local and global search mechanisms into a single efficient method. Since many forward and inverse problems involve solving an optimization problem, we expect that the genetic approach will find applications in many other geophysical problems; these include seismic ray tracing, earthquake location, non-linear data fitting and, possibly seismic tomography.
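A minimal sketch of the kind of three-operator genetic algorithm described (selection, crossover and mutation acting on a population of binary-coded models, in contrast with single-model perturbation methods); all parameter values are illustrative assumptions, not the paper's settings, and fitness values are assumed positive for proportional selection:

```python
import random

def ga(fitness, n_bits, pop_size=50, p_cross=0.6, p_mut=0.01, n_gens=100):
    """Minimal GA working on a whole population of models at once.
    Assumes fitness(model) > 0 and n_bits >= 2."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(n_gens):
        scores = [fitness(ind) for ind in pop]
        new_pop = []
        while len(new_pop) < pop_size:
            a, b = random.choices(pop, weights=scores, k=2)   # fitness-proportional selection
            if random.random() < p_cross:                     # one-point crossover
                cut = random.randrange(1, n_bits)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            for child in (a, b):                              # bit-flip mutation
                new_pop.append([bit ^ (random.random() < p_mut) for bit in child])
        pop = new_pop[:pop_size]
    return max(pop, key=fitness)
```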

273 citations


Journal ArticleDOI
TL;DR: The hybrid simulated annealing (HSA) algorithm as discussed by the authors uses the modified penalty algorithm to generate an initial solution and then improves it using simulated annealing; it is tested on single-row layout problems with facilities of unequal area.

206 citations


Journal ArticleDOI
TL;DR: A new class of optimization heuristics that combine local searches with stochastic sampling methods, allowing one to iterate local optimization heuristics, is considered; it improves 3-opt by over 1.6% and Lin-Kernighan by 1.3%.

Book
01 Jan 1992
TL;DR: Sequential Simulated Annealing: Speed of Convergence and Acceleration Techniques (R. Azencott).
Abstract: Sequential Simulated Annealing: Speed of Convergence and Acceleration Techniques (R. Azencott). A Common Large Deviations Mathematical Framework for Sequential Annealing and Parallel Annealing (R. Azencott). Rates of Convergence for Sequential Annealing: A Large Deviations Approach (O. Catoni). Parallel Simulated Annealing: An Overview of Basic Techniques (R. Azencott). Parallel Annealing: Simultaneous Periodically Interacting Searches (C. Graffigne). Simultaneous Periodically Interacting Searches: Convergence Rates (R. Azencott & C. Graffigne). Parallel Annealing: Multiple Trials (P. Roussel & G. Dreyfus). Parallel Annealing: Multiple Trials (B. Virot). Parallel Annealing: Multiple Trials (O. Catoni & A. Trouvé). Massive Parallelization (I. Gaudron & A. Trouvé). Parallel Annealing: Partitioning of Configurations (C. Lacote, et al.). Parallel Annealing: Implementation on Hardware Architecture: A Qualitative Study (P. Garda).

Journal ArticleDOI
TL;DR: This paper investigates GA to rapidly sample the most significant portion or portions of the PPD, when very little prior information is available, and addresses the problem of ‘genetic drift’ which causes the finite GAs to converge to one peak or the other when the algorithm is applied to a highly multimodal fitness function with several peaks of nearly the same height.
Abstract: SUMMARY The seismic waveform inversion problem is usually cast into the framework of Bayesian statistics in which prior information on the model parameters is combined with the data and physics of the forward problem to estimate the a posteriori probability density (PPD) in model space. The PPD is a function of an objective or fitness function computed from the observed and synthetic data. In general, the PPD or the fitness function is multimodal and its shape is unknown. Global optimization methods such as simulated annealing (SA) and genetic algorithms (GA) do not require that the shape of the fitness function be known. In this paper, we investigate GA to rapidly sample the most significant portion or portions of the PPD, when very little prior information is available. First, we use a simple three operator (selection, crossover and mutation) GA acting on a randomly chosen finite population of haploid binary coded models. We use plane wave transformed synthetic seismic data and a normalized cross-correlation function [E(m)] in the frequency domain as a fitness function. A moderate value of crossover probability, a low value of mutation probability, a high value of update probability and a proper population size are required to reach very close to the global maximum of the fitness function. Next, with an attempt to accelerate convergence we show that the concepts from simulated annealing can be used in stretching of the fitness function, i.e., we use exp [E(m)/T] rather than E(m) as the fitness function, where T is a control parameter analogous to temperature in simulated annealing. By a schemata analysis, we show that at low temperatures, schemata with above average fitness values are reproduced in large numbers causing a much more rapid convergence of the algorithm. A high value of temperature T assigns nearly equal selection probability to most of the schemata and thus retains diversity among the members of the population. Thus a GA with a step function type cooling schedule (very high temperature in the beginning followed by rapid cooling to a very low temperature) improves the performance dramatically: high values of the fitness function are obtained rapidly using only half as many models as would be required by a conventional GA. Similar performance could also be achieved by first using a high mutation probability and then decreasing the mutation probability to a very low value, while retaining the same low temperature throughout. We also address the problem of ‘genetic drift’ which causes the finite GAs to converge to one peak or the other when the algorithm is applied to a highly multimodal fitness function with several peaks of nearly the same height. A parallel genetic algorithm based on the concept of ‘punctuated equilibria’ is implemented to circumvent the problem. We run several GAs each with a finite subpopulation in parallel and collect many good models from each one of these runs. These are then used to grasp the most significant portion(s) of the PPD in model space. We then compute the weighted mean model and use the derived good models to estimate uncertainty in the derived model parameters.
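The fitness-stretching idea can be shown in a few lines: selection weights use exp[E(m)/T] rather than E(m), so low T concentrates selection on above-average models (fast convergence) while high T makes the weights nearly equal (preserving diversity). The max-subtraction below is an assumed implementation detail for numerical stability.

```python
import math

def stretched_selection_weights(fitnesses, temperature):
    """Annealing-style stretching of the fitness function: weights
    proportional to exp(E(m)/T).  Subtracting the maximum leaves the
    relative weights unchanged but avoids overflow."""
    m = max(fitnesses)
    return [math.exp((e - m) / temperature) for e in fitnesses]
```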

Journal ArticleDOI
TL;DR: In this paper, the authors present a procedure that can be used by facility designers to allocate space to manufacturing cells, taking into account the area and shape requirements of individual cells as well as any occupied regions on a floor plan.
Abstract: This paper describes a procedure that can be used by facility designers to allocate space to manufacturing cells. The procedure takes into consideration the area and shape requirements of individual cells as well as any occupied regions on a floor plan. A layout is represented as a collection of rectangular partitions organized as a slicing tree. The solution method involves searching through the space of all slicing trees of a given structure. An effective simulated annealing algorithm capable of minimizing inter-cell traffic flow and enforcing geometric constraints is presented. The algorithm is compared with two local search methods with encouraging results.
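One concrete SA move over slicing trees might flip the orientation of a single cut, as sketched below; the tree encoding and the move set are assumptions for illustration, since the abstract does not specify the paper's neighbourhood.

```python
import random

# Assumed encoding: a slicing tree is either a cell name (a leaf) or a
# mutable list [orientation, left, right], with orientation "H" or "V".

def flip_random_cut(tree):
    """SA move: flip the orientation of one randomly chosen cut."""
    cuts, stack = [], [tree]
    while stack:                       # collect every internal node
        node = stack.pop()
        if isinstance(node, list):
            cuts.append(node)
            stack.extend(node[1:])
    node = random.choice(cuts)         # assumes the tree has at least one cut
    node[0] = "V" if node[0] == "H" else "H"
    return tree
```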

ReportDOI
01 Jan 1992
TL;DR: It is shown that, for some particular mixture situations, the SEM algorithm is almost always preferable to the EM and simulated annealing versions SAEM and MCEM, and the SEM stationary distribution provides a contrasted view of the loglikelihood by emphasizing sensible maxima.
Abstract: We compare three different stochastic versions of the EM algorithm: The SEM algorithm, the SAEM algorithm and the MCEM algorithm. We suggest that the most relevant contribution of the MCEM methodology is what we call the simulated annealing MCEM algorithm, which turns out to be very close to SAEM. We focus particularly on the mixture of distributions problem. In this context, we review the available theoretical results on the convergence of these algorithms and on the behavior of SEM as the sample size tends to infinity. The second part is devoted to intensive Monte Carlo numerical simulations and a real data study. We show that, for some particular mixture situations, the SEM algorithm is almost always preferable to the EM and simulated annealing versions SAEM and MCEM. For some very intricate mixtures, however, none of these algorithms can be confidently used. Then, SEM can be used as an efficient data exploratory tool for locating significant maxima of the likelihood function. In the real data case, we show that the SEM stationary distribution provides a contrasted view of the loglikelihood by emphasizing sensible maxima.
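For concreteness, one SEM iteration for a mixture problem looks as follows, with `posterior` and `mstep` as model-specific callables assumed here for illustration: the stochastic S-step draws a hard random component assignment from the E-step posteriors before the M-step re-estimates the parameters.

```python
import random

def sem_step(data, params, posterior, mstep):
    """One SEM iteration: E-step posteriors, stochastic S-step draw,
    M-step on the completed sample.  The random assignment is what
    lets SEM escape local maxima that trap plain EM."""
    probs = [posterior(x, params) for x in data]                       # E-step
    z = [random.choices(range(len(p)), weights=p)[0] for p in probs]   # S-step
    return mstep(data, z)                                              # M-step
```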

Journal ArticleDOI
TL;DR: In this article, the authors derived finite time estimates for simulated annealing and gave a sharp upper bound for the probability that the energy is close to its minimum value, which involves a new constant, the difficulty of the energy landscape.
Abstract: Simulated annealing algorithms are time inhomogeneous controlled Markov chains used to search for the minima of energy functions defined on finite state spaces. The control parameters, the so-called cooling schedule, control the probability that the energy should increase during one step of the algorithm. Most of the studies on simulated annealing have dealt with limit theorems, such as characterizing convergence conditions on the cooling schedule, or giving an equivalent of the law of the process for one fixed cooling schedule. In this paper we derive finite time estimates. These estimates are uniform in the cooling schedule and in the energy function. With new technical tools, we gain a new insight into the algorithm. We give a sharp upper bound for the probability that the energy is close to its minimum value. Hence we characterize the optimal convergence rate. This involves a new constant, the "difficulty" of the energy landscape. We calculate two cooling schedules for which our bound is almost reached. In one case it is reached up to a multiplicative constant for one energy function. In the other case it is reached in the sense of logarithmic equivalence uniformly in the energy function. These two schedules are both triangular: There is one different schedule for each finite simulation time. For each fixed finite time the second schedule has the currently used but previously mathematically unjustified exponential form. Finally, the title is "Rough large deviation estimates" because we have computed sharper ones (i.e., with sharp multiplicative constants) in two other papers.
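A triangular schedule in the paper's sense fixes one schedule per finite simulation length; for a given horizon the exponential form amounts to a geometric interpolation between a starting and a final temperature. The endpoint parameters below are assumptions for illustration.

```python
def exponential_schedule(t_start, t_final, horizon):
    """One member of a triangular family: a fresh geometric schedule
    from t_start down to t_final for each fixed simulation length."""
    if horizon == 1:
        return [t_start]
    ratio = (t_final / t_start) ** (1.0 / (horizon - 1))
    return [t_start * ratio ** k for k in range(horizon)]
```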

Journal ArticleDOI
TL;DR: It is shown that including spatial modulation leads to a wider separation between the dose-volume histograms of the target volume and organs at risk, and is quantified in terms of the tumour control probability at constant normal tissue complication probability.
Abstract: For pt.I see ibid., vol.36, p.1201-26 (1991). Interest is rapidly growing in using multiple X-radiation fields defined by a multileaf collimator to achieve conformal radiotherapy. Three-dimensional treatment planning in such situations is in its infancy and most 3D planning systems provide no tools for optimizing therapy. A previous paper addressed how to calculate optimum beamweights when both the target volume and all or some parts of organs at risk were in the fields-of-view. The present work extends this technique to allow each radiation port to be spatially modulated across the geometrically shaped field. An optimization method based on simulated annealing is presented. It is shown that including spatial modulation leads to a wider separation between the dose-volume histograms of the target volume and organs at risk. The improvement is quantified in terms of the tumour control probability at constant normal tissue complication probability. Possible limitations of the a posteriori applied biological model are discussed in detail.

Journal ArticleDOI
TL;DR: Conformational searches by molecular dynamics and by different types of Monte Carlo or build-up methods should be reformulated, and appropriate methods should be found to extract distinct local minima from the search trajectory and to allow visualization of the search space.

Journal ArticleDOI
TL;DR: A unified formulation and study of vector quantizer design methods that couple stochastic relaxation (SR) techniques with the generalized Lloyd algorithm is presented, showing that four existing techniques all fit into a general methodology for vector quantizers design aimed at finding a globally optimal solution.
Abstract: The authors present a unified formulation and study of vector quantizer design methods that couple stochastic relaxation (SR) techniques with the generalized Lloyd algorithm. Two new SR techniques are investigated and compared: simulated annealing (SA) and a reduced-complexity approach that modifies the traditional acceptance criterion for simulated annealing to an unconditional acceptance of perturbations. It is shown that four existing techniques all fit into a general methodology for vector quantizer design aimed at finding a globally optimal solution. Comparisons of the algorithms' performances when quantizing Gauss-Markov processes, speech, and image sources are given. The SA method is guaranteed to perform in a globally optimal manner, and the SR technique gives empirical results equivalent to those of SA. Both techniques result in significantly better performance than that obtained with the generalized Lloyd algorithm.
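A sketch of the reduced-complexity variant described above, assuming squared-error distortion and NumPy arrays: the codevector perturbation is accepted unconditionally (no Metropolis test), after which a generalized Lloyd update is applied; the noise scale `sigma` is meant to shrink as training proceeds.

```python
import numpy as np

def sr_lloyd_step(data, codebook, sigma, rng):
    """One stochastic-relaxation step: unconditionally accepted noise
    on the codevectors, then a nearest-neighbour partition and a
    centroid update (the generalized Lloyd step)."""
    noisy = codebook + rng.normal(0.0, sigma, codebook.shape)   # always accepted
    d2 = ((data[:, None, :] - noisy[None, :, :]) ** 2).sum(-1)
    labels = d2.argmin(axis=1)
    return np.array([data[labels == k].mean(axis=0) if (labels == k).any() else noisy[k]
                     for k in range(len(noisy))])
```

Usage would pass, e.g., `rng = np.random.default_rng()` and decrease `sigma` across calls.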

Journal ArticleDOI
TL;DR: It is demonstrated that multiple runs of the simulated annealing algorithm can result in an optimal or near-optimal solution to the problem.

Journal ArticleDOI
TL;DR: In this article, simulated annealing has been explored as an alternative to conventional powder diffraction or model-building methods for real-space solution of zeolite framework crystal structures, as well as its success in predicting the framework structures of known zeolites.
Abstract: Direct, real-space solution of zeolite framework crystal structures by simulated annealing has been explored as an alternative to conventional powder diffraction or model-building methods. The method, as well as its success in predicting the framework structures of known zeolites, is described in detail. Data taken as input to the method are unit cell dimensions, symmetry, and framework density.

Journal ArticleDOI
TL;DR: The authors cast edge detection as a problem in cost minimization by the formulation of a cost function that evaluates the quality of edge configurations and gives a mathematical description of edges and analyze the cost function in terms of the characteristics of the edges in minimum cost configurations.
Abstract: The authors cast edge detection as a problem in cost minimization. This is achieved by the formulation of a cost function that evaluates the quality of edge configurations. The function is a linear sum of weighted cost factors. The cost factors capture desirable characteristics of edges such as accuracy in localization, thinness, and continuity. Edges are detected by finding the edge configurations that minimize the cost function. The authors give a mathematical description of edges and analyze the cost function in terms of the characteristics of the edges in minimum cost configurations. Through the analysis, guidelines are provided on the choice of weights to achieve certain characteristics of the detected edges. The cost function is minimized by the simulated annealing method. A set of strategies is presented for generating candidate states and devising a suitable temperature schedule.
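The cost function's structure is simple enough to state directly: a linear sum of weighted factors, each scoring one desirable edge property. The factor names in the comment are placeholders for the paper's specific terms.

```python
def edge_cost(config, weighted_factors):
    """Linear weighted cost of an edge configuration: each factor is a
    callable scoring one property (localization, thinness, continuity,
    ...); simulated annealing then minimizes the weighted total."""
    return sum(w * factor(config) for w, factor in weighted_factors)
```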

Journal ArticleDOI
TL;DR: The mathematics of MFA are shown to provide a powerful and general tool for deriving optimization algorithms.
Abstract: Optimization problems are approached using mean field annealing (MFA), which is a deterministic approximation, using mean field theory and based on Peierls's inequality, to simulated annealing. The MFA mathematics are applied to three different objective function examples. In each case, MFA produces a minimization algorithm that is a type of graduated nonconvexity. When applied to the 'weak-membrane' objective, MFA results in an algorithm qualitatively identical to the published GNC algorithm. One of the examples, MFA applied to a piecewise-constant objective function, is then compared experimentally with the corresponding GNC weak-membrane algorithm. The mathematics of MFA are shown to provide a powerful and general tool for deriving optimization algorithms.
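A sketch of a mean-field-annealing update for a quadratic (Hopfield-style) objective, an assumed concrete setting rather than any of the paper's three examples: each binary variable is replaced by its mean value, updated through a sigmoid of its mean local field at the current temperature.

```python
import math

def mfa_sweep(v, coupling, bias, temperature):
    """One in-place mean-field sweep: v[i] becomes the expected value
    of unit i under the mean field of the other units.  Annealing
    lowers the temperature so the means harden toward binary values."""
    n = len(v)
    for i in range(n):
        field = sum(coupling[i][j] * v[j] for j in range(n) if j != i) + bias[i]
        v[i] = 1.0 / (1.0 + math.exp(-field / temperature))  # mean-field equation
    return v
```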

Journal ArticleDOI
TL;DR: In this article, a new method (Computerized LAyout Solutions using Simulated annealing) that considers the inter-cell and intra-cell layout problems in a cellular manufacturing environment is presented.
Abstract: A new method (Computerized LAyout Solutions using Simulated annealing — CLASS) that considers the inter-cell and intra-cell layout problems in a cellular manufacturing environment is presented. It a...

Journal ArticleDOI
TL;DR: These approaches suggest neural network methods as an alternative to classical optimization techniques and to other novel approaches such as simulated annealing for certain optimization tasks.

Journal ArticleDOI
TL;DR: This paper investigates the applicability of a Monte Carlo technique known as ‘simulated annealing’ to achieve optimum or sub-optimum decompositions of probabilistic networks under bounded resources and proves that cost-function changes can be computed locally.
Abstract: This paper investigates the applicability of a Monte Carlo technique known as ‘simulated annealing’ to achieve optimum or sub-optimum decompositions of probabilistic networks under bounded resources. High-quality decompositions are essential for performing efficient inference in probabilistic networks. Optimum decomposition of probabilistic networks is known to be NP-hard (Wen, 1990). The paper proves that cost-function changes can be computed locally, which is essential to the efficiency of the annealing algorithm. Pragmatic control schedules which reduce the running time of the annealing algorithm are presented and evaluated. Apart from the conventional temperature parameter, these schedules involve the radius of the search space as a new control parameter. The evaluation suggests that the inclusion of this new parameter is important for the success of the annealing algorithm for the present problem.

Journal ArticleDOI
TL;DR: A continuous ID3 algorithm is proposed that converts decision trees into hidden layers that allows self-generation of a feedforward neural network architecture and interpretation of the knowledge embedded in the generated connections and weights.
Abstract: The relation between the decision trees generated by a machine learning algorithm and the hidden layers of a neural network is described. A continuous ID3 algorithm is proposed that converts decision trees into hidden layers. The algorithm allows self-generation of a feedforward neural network architecture. In addition, it allows interpretation of the knowledge embedded in the generated connections and weights. A fast simulated annealing strategy, known as Cauchy training, is incorporated into the algorithm to escape from local minima. The performance of the algorithm is analyzed on spiral data.
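A sketch of the fast-annealing ingredient named above (Cauchy training), assuming the commonly used fast schedule T(k) = T0/(1+k): jump sizes are Cauchy-distributed, so most steps are small but the heavy tails permit occasional long escapes from local minima.

```python
import math
import random

def cauchy_perturb(weights, t0, step):
    """Perturb network weights with Cauchy-distributed jumps whose
    scale is the fast-annealing temperature T(k) = t0 / (1 + k).
    Inverse-CDF sampling: tan(pi*(u - 0.5)) is standard Cauchy."""
    t = t0 / (1.0 + step)
    return [w + t * math.tan(math.pi * (random.random() - 0.5)) for w in weights]
```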

Journal ArticleDOI
TL;DR: In this article, the authors used simulated annealing and genetic algorithms to find a one-dimensional earth structure which produces a seismogram that agrees with an observed seismogram in a geophysical inverse problem.

Journal ArticleDOI
TL;DR: An algorithm based on simulated annealing is presented to solve the machine-component grouping problem for the design of cells in a manufacturing system and it is found that it fares better for large problems.

Journal ArticleDOI
TL;DR: It turns out that several Tabu Search ideas can be subjected to mathematical analyses similar to those applied to Simulated Annealing, making it possible to establish corresponding convergence properties based on a broader foundation.
Abstract: During recent years, much work has gone into the exploration of general fundamental principles underlying local search strategies for combinatorial optimization. Many of these strategies can be subsumed under the general framework of Tabu Search, which introduces mechanisms of guidance and control based on flexible memory processes, broadening the range of strategic possibilities beyond those incorporated in memoryless search heuristics such as Simulated Annealing. We consider some examples of such memory based strategies for modifying both the generation and acceptance probabilities and investigate their impact on convergence results. It turns out that several Tabu Search ideas can be subjected to mathematical analyses similar to those applied to Simulated Annealing, making it possible to establish corresponding convergence properties based on a broader foundation.