
Showing papers on "Simulated annealing published in 1995"


Journal Article
TL;DR: A real-coded crossover operator is developed whose search power is similar to that of the single-point crossover used in binary-coded GAs, and SBX is found to be particularly useful in problems having multiple optimal solutions with a narrow global basin and in problems where the lower and upper bounds of the global optimum are not known a priori.
Abstract: The success of binary-coded genetic algorithms (GAs) in problems having discrete search space largely depends on the coding used to represent the problem variables and on the crossover operator that propagates building blocks from parent strings to children strings. In solving optimization problems having continuous search space, binary-coded GAs discretize the search space by using a coding of the problem variables in binary strings. However, the coding of real-valued variables in finite-length strings causes a number of difficulties: inability to achieve arbitrary precision in the obtained solution, fixed mapping of problem variables, the inherent Hamming cliff problem associated with binary coding, and processing of Holland's schemata in continuous search space. Although a number of real-coded GAs have been developed to solve optimization problems having a continuous search space, the search powers of these crossover operators are not adequate. In this paper, the search power of a crossover operator is defined in terms of the probability of creating an arbitrary child solution from a given pair of parent solutions. Motivated by the success of binary-coded GAs in discrete search space problems, we develop a real-coded crossover operator (which we call the simulated binary crossover, or SBX) whose search power is similar to that of the single-point crossover used in binary-coded GAs. Simulation results on a number of real-valued test problems of varying difficulty and dimensionality suggest that real-coded GAs with the SBX operator are able to perform as well as or better than binary-coded GAs with the single-point crossover. SBX is found to be particularly useful in problems having multiple optimal solutions with a narrow global basin and in problems where the lower and upper bounds of the global optimum are not known a priori.
Further, a simulation on a two-variable blocked function shows that the real-coded GA with SBX works as suggested by Goldberg.
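For readers new to the operator, the single-variable case of SBX can be sketched in a few lines (a generic illustration based on the standard polynomial spread-factor distribution; the function name and the distribution index eta are our notation, not the paper's):

```python
import random

def sbx_crossover(p1, p2, eta=2.0):
    """One-variable simulated binary crossover (SBX).

    A spread factor beta is drawn from a polynomial distribution so that
    children tend to lie near their parents; larger eta keeps them closer.
    The two children are symmetric about the parents' midpoint.
    """
    u = random.random()
    if u <= 0.5:
        beta = (2.0 * u) ** (1.0 / (eta + 1.0))
    else:
        beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
    c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    return c1, c2
```

By construction the children's mean equals the parents' mean, so the operator spreads search symmetrically about the parental midpoint.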

2,702 citations


Journal ArticleDOI
TL;DR: This work presents a method for reliably determining the lowest energy structure of an atomic cluster in an arbitrary model potential, based on a genetic algorithm that operates on a population of candidate structures to produce new candidates with lower energies.
Abstract: We present a method for reliably determining the lowest energy structure of an atomic cluster in an arbitrary model potential. The method is based on a genetic algorithm, which operates on a population of candidate structures to produce new candidates with lower energies. Our method dramatically outperforms simulated annealing, which we demonstrate by applying the genetic algorithm to a tight-binding model potential for carbon. With this potential, the algorithm efficiently finds fullerene cluster structures up to ${\mathrm{C}}_{60}$ starting from random atomic coordinates.

1,002 citations


Journal ArticleDOI
TL;DR: This approach is based on an auxiliary array and an extended objective function in which the original variables appear quadratically and the auxiliary variables are decoupled, and yields the original function so that the original image estimate can be obtained by joint minimization.
Abstract: One popular method for the recovery of an ideal intensity image from corrupted or indirect measurements is regularization: minimize an objective function that enforces a roughness penalty in addition to coherence with the data. Linear estimates are relatively easy to compute but generally introduce systematic errors; for example, they are incapable of recovering discontinuities and other important image attributes. In contrast, nonlinear estimates are more accurate but are often far less accessible. This is particularly true when the objective function is nonconvex, and the distribution of each data component depends on many image components through a linear operator with broad support. Our approach is based on an auxiliary array and an extended objective function in which the original variables appear quadratically and the auxiliary variables are decoupled. Minimizing over the auxiliary array alone yields the original function so that the original image estimate can be obtained by joint minimization. This can be done efficiently by Monte Carlo methods, for example by FFT-based annealing using a Markov chain that alternates between (global) transitions from one array to the other. Experiments are reported in optical astronomy, with space telescope data, and computed tomography.

964 citations


Journal ArticleDOI
TL;DR: This work proposes MCMC methods distantly related to simulated annealing, which simulate realizations from a sequence of distributions, allowing the distribution being simulated to vary randomly over time.
Abstract: Markov chain Monte Carlo (MCMC; the Metropolis-Hastings algorithm) has been used for many statistical problems, including Bayesian inference, likelihood inference, and tests of significance. Though the method generally works well, doubts about convergence often remain. Here we propose MCMC methods distantly related to simulated annealing. Our samplers mix rapidly enough to be usable for problems in which other methods would require eons of computing time. They simulate realizations from a sequence of distributions, allowing the distribution being simulated to vary randomly over time. If the sequence of distributions is well chosen, then the sampler will mix well and produce accurate answers for all the distributions. Even when there is only one distribution of interest, these annealing-like samplers may be the only known way to get a rapidly mixing sampler. These methods are essential for attacking very hard problems, which arise in areas such as statistical genetics. We illustrate the methods wi...
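The flavor of such annealing-like samplers can be conveyed with a minimal tempering-style sketch (a toy under our own assumptions, not the authors' samplers; in particular, it omits the pseudo-prior constants that a correct simulated-tempering implementation needs to weight the temperature rungs):

```python
import math
import random

def tempered_sampler(logp, temps, n_steps, x0=0.0, step=1.0):
    """Toy tempering-style MCMC: the chain carries a temperature index k,
    mixes with Metropolis moves at the current temperature, and randomly
    proposes hops to a neighboring rung of the temperature ladder.
    Only states visited at the coldest rung (the target) are kept."""
    x, k = x0, 0
    samples = []
    for _ in range(n_steps):
        # Metropolis move in x at temperature temps[k]
        y = x + random.uniform(-step, step)
        if math.log(random.random()) < (logp(y) - logp(x)) / temps[k]:
            x = y
        # propose moving to a neighboring temperature rung
        j = k + random.choice((-1, 1))
        if 0 <= j < len(temps):
            if math.log(random.random()) < logp(x) / temps[j] - logp(x) / temps[k]:
                k = j
        if k == 0:
            samples.append(x)
    return samples
```

Hot rungs flatten the distribution so the chain can cross barriers between modes; the cold rung supplies the usable draws from the target.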

874 citations


Journal ArticleDOI
TL;DR: A Genetic Algorithm is developed for finding (approximately) the minimum makespan of the n-job, m-machine permutation flowshop sequencing problem and the performance of the algorithm is compared with that of a naive Neighbourhood Search technique and with a proven Simulated Annealing algorithm.

849 citations


Book
07 Aug 1995
TL;DR: In this book, the authors present a mathematical model of a GA, multimodal fitness functions, genetic drift, GA with sharing, and repeat (parallel) GA, along with uncertainty estimates and evolutionary programming, a variant of GA.
Abstract: Part 1 Preliminary statistics: random variables; random numbers; probability; probability distribution, distribution function and density function; joint and marginal probability distributions; mathematical expectation, moments, variances and covariances; conditional probability; Monte Carlo integration; importance sampling; stochastic processes; Markov chains; homogeneous, inhomogeneous, irreducible and aperiodic Markov chains; the limiting probability. Part 2 Direct, linear and iterative-linear inverse methods: direct inversion methods; model based inversion methods; linear/linearized inverse methods; iterative linear methods for quasi-linear problems; Bayesian formulation; solution using probabilistic formulation. Part 3 Monte Carlo methods: enumerative or grid search techniques; Monte Carlo inversion; hybrid Monte Carlo-linear inversion; directed Monte Carlo methods. Part 4 Simulated annealing methods: Metropolis algorithm; heat bath algorithm; simulated annealing without rejected moves; fast simulated annealing; very fast simulated reannealing; mean field annealing; using SA in geophysical inversion. Part 5 Genetic algorithms: a classical GA; schemata and the fundamental theorem of genetic algorithms; problems combining elements of SA into a new GA; a mathematical model of a GA; multimodal fitness functions, genetic drift, GA with sharing, and repeat (parallel) GA; uncertainty estimates; evolutionary programming - a variant of GA. Part 6 Geophysical applications of SA and GA: 1-D seismic waveform inversion; pre-stack migration velocity estimation; inversion of resistivity sounding data for 1-D earth models; inversion of resistivity profiling data for 2-D earth models; inversion of magnetotelluric sounding data for 1-D earth models; stochastic reservoir modelling; seismic deconvolution by mean field annealing and Hopfield network.
Part 7 Uncertainty estimation: methods of numerical integration; simulated annealing - the Gibbs sampler; genetic algorithm - the parallel Gibbs sampler; numerical examples.

710 citations


Journal ArticleDOI
TL;DR: In this article, a transiently chaotic neural network (TCNN) model is proposed for combinatorial optimization problems, where the chaotic neurodynamics is temporarily generated for searching and self-organizing, and eventually vanishes with autonomous decrease of a bifurcation parameter corresponding to the temperature in the usual annealing process.

636 citations


Book
01 Feb 1995
TL;DR: In this paper, the mathematical foundations of Bayesian image analysis and its algorithms are discussed, and the necessary background from imaging is sketched and illustrated by a number of concrete applications like restoration, texture segmentation and motion analysis.
Abstract: The book is mainly concerned with the mathematical foundations of Bayesian image analysis and its algorithms. This amounts to the study of Markov random fields and dynamic Monte Carlo algorithms like sampling, simulated annealing and stochastic gradient algorithms. The approach is introductory and elementary: given basic concepts from linear algebra and real analysis it is self-contained. No previous knowledge from image analysis is required. Knowledge of elementary probability theory and statistics is certainly beneficial but not absolutely necessary. The necessary background from imaging is sketched and illustrated by a number of concrete applications like restoration, texture segmentation and motion analysis.

614 citations


Proceedings Article
01 Jan 1995
TL;DR: Examination of the specific cases in which ClustalW outperforms simulated annealing, and vice versa, provides insight into the strengths and weaknesses of current hidden Markov model approaches.
Abstract: A simulated annealing method is described for training hidden Markov models and producing multiple sequence alignments from initially unaligned protein or DNA sequences. Simulated annealing in turn uses a dynamic programming algorithm for correctly sampling suboptimal multiple alignments according to their probability and a Boltzmann temperature factor. The quality of simulated annealing alignments is evaluated on structural alignments of ten different protein families, and compared to the performance of other HMM training methods and the ClustalW program. Simulated annealing is better able to find near-global optima in the multiple alignment probability landscape than the other tested HMM training methods. Neither ClustalW nor simulated annealing produces consistently better alignments than the other. Examination of the specific cases in which ClustalW outperforms simulated annealing, and vice versa, provides insight into the strengths and weaknesses of current hidden Markov model approaches.

459 citations


Proceedings Article
20 Aug 1995
TL;DR: Reinforcement learning methods are applied to learn domain-specific heuristics for job shop scheduling to suggest that reinforcement learning can provide a new method for constructing high-performance scheduling systems.
Abstract: We apply reinforcement learning methods to learn domain-specific heuristics for job shop scheduling. A repair-based scheduler starts with a critical-path schedule and incrementally repairs constraint violations with the goal of finding a short conflict-free schedule. The temporal difference algorithm TD(λ) is applied to train a neural network to learn a heuristic evaluation function over states. This evaluation function is used by a one-step lookahead search procedure to find good solutions to new scheduling problems. We evaluate this approach on synthetic problems and on problems from a NASA space shuttle payload processing task. The evaluation function is trained on problems involving a small number of jobs and then tested on larger problems. The TD scheduler performs better than the best known existing algorithm for this task--Zweben's iterative repair method based on simulated annealing. The results suggest that reinforcement learning can provide a new method for constructing high-performance scheduling systems.

396 citations


Proceedings ArticleDOI
01 Dec 1995
TL;DR: A P-admissible solution space where each packing is represented by a pair of module name sequences is proposed, and hundreds of modules could be successfully packed as demonstrated.
Abstract: The first and most critical stage in VLSI layout design is placement, the background of which is the rectangle packing problem: given many rectangular modules of arbitrary size, place them without overlapping on a layer in the smallest bounding rectangle. Since the variety of packings is infinitely (two-dimensionally continuously) many, the key issue for successful optimization is the introduction of a P-admissible solution space, which is a finite set of solutions at least one of which is optimal. This paper proposes such a solution space where each packing is represented by a pair of module name sequences. By searching this space with simulated annealing, hundreds of modules can be successfully packed, as demonstrated. Combined with a conventional wiring method, the approach is applied to the biggest MCNC benchmark, ami49.

Journal ArticleDOI
TL;DR: In this article, the authors investigate a method of optimization using genetic algorithms (GAs) which allows them to consider the two objectives of Meyer et al. (1992), maximizing reliability and minimizing contaminated area at the time of first detection, separately yet simultaneously.
Abstract: This paper builds on the work of Meyer and Brill (1988) and subsequent work by Meyer et al. (1990, 1992) on the optimal location of a network of groundwater monitoring wells under conditions of uncertainty. We investigate a method of optimization using genetic algorithms (GAs) which allows us to consider the two objectives of Meyer et al. (1992), maximizing reliability and minimizing contaminated area at the time of first detection, separately yet simultaneously. The GA-based solution method has the advantage of being able to generate both convex and nonconvex points of the trade-off curve, accommodate nonlinearities in the two objective functions, and not be restricted by the peculiarities of a weighted objective function. Furthermore, GAs have the ability to generate large portions of the trade-off curve in a single iteration and may be more efficient than methods that generate only a single point at a time. Four different codings of genetic algorithms are investigated, and their performance in generating the multiobjective trade-off curve is evaluated for the groundwater monitoring problem using an example data set. The GA formulations are compared with each other and also with simulated annealing on both performance and computational intensity. Simulated annealing relies on a weighted objective function which can find only a single point along the trade-off curve for each iteration, while all of the multiple-objective GA formulations are able to find a larger number of convex and nonconvex points of the trade-off curve in a single iteration. Each iteration of simulated annealing is approximately five times faster than an iteration of the genetic algorithm, but several simulated annealing iterations are required to generate a trade-off curve. GAs are able to find a larger number of nondominated points on the trade-off curve, while simulated annealing finds fewer points but with a wider range of objective function values.
None of the GA formulations demonstrated the ability to generate the entire trade-off curve in a single iteration. Through manipulation of GA parameters certain sections of the trade-off curve can be targeted for better performance, but as performance improves at one section it suffers at another. Run times for all GA formulations were similar in magnitude.

Journal ArticleDOI
TL;DR: A class of approximation algorithms is described for solving the minimum makespan problem of job shop scheduling and can find shorter makespans than the shifting bottleneck heuristic or a simulated annealing approach with the same running time.


Journal ArticleDOI
TL;DR: In this article, a simulated annealing approach to the long-term transmission expansion planning problem is presented, which is a hard, large scale combinatorial problem and is compared with a more conventional optimization technique based on mathematical decomposition with a zero-one implicit enumeration procedure.
Abstract: This paper presents a simulated annealing approach to the long-term transmission expansion planning problem, which is a hard, large-scale combinatorial problem. The proposed approach has been compared with a more conventional optimization technique based on mathematical decomposition with a zero-one implicit enumeration procedure. Tests have been performed on three different systems. Two smaller systems for which optimal solutions are known have been used to tune the main parameters of the simulated annealing process. The simulated annealing method has then been applied to a larger example system for which no optimal solutions are known: as a result, an entire family of interesting solutions has been obtained with costs about 7% less than the best solutions known for that particular example system.

Journal ArticleDOI
11 Jan 1995
TL;DR: The algorithm is implemented on the CM-5 and is run repeatedly on two deceptive problems to demonstrate the added implicit parallelism and faster convergence which can result from larger population sizes.
Abstract: This paper introduces and analyzes a parallel method of simulated annealing. Borrowing from genetic algorithms, an effective combination of simulated annealing and genetic algorithms, called parallel recombinative simulated annealing, is developed. This new algorithm strives to retain the desirable asymptotic convergence properties of simulated annealing, while adding the population approach and recombinative power of genetic algorithms. The algorithm iterates a population of solutions rather than a single solution, employing a binary recombination operator as well as a unary neighborhood operator. Proofs of global convergence are given for two variations of the algorithm. Convergence behavior is examined, and empirical distributions are compared to Boltzmann distributions. Parallel recombinative simulated annealing is amenable to straightforward implementation on SIMD, MIMD, or shared-memory machines. The algorithm, implemented on the CM-5, is run repeatedly on two deceptive problems to demonstrate the added implicit parallelism and faster convergence which can result from larger population sizes.
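The generation step described above, recombination followed by a Boltzmann acceptance trial between each parent and its competing child, can be sketched serially (an illustrative toy, not the CM-5 implementation; crossover and mutate are user-supplied operators and an even population size is assumed):

```python
import math
import random

def prsa_step(pop, f, t, crossover, mutate):
    """One generation of parallel recombinative simulated annealing, sketched
    serially: recombine random pairs, mutate the children, then hold a
    Boltzmann trial between each parent and its competing child at
    temperature t. Lower f is better."""
    random.shuffle(pop)
    nxt = []
    for i in range(0, len(pop) - 1, 2):
        p1, p2 = pop[i], pop[i + 1]
        c1, c2 = crossover(p1, p2)
        c1, c2 = mutate(c1), mutate(c2)
        for parent, child in ((p1, c1), (p2, c2)):
            delta = f(child) - f(parent)
            if delta < 0 or random.random() < math.exp(-delta / t):
                nxt.append(child)   # child wins the Boltzmann trial
            else:
                nxt.append(parent)  # parent survives
    return nxt
```

Iterating this step while lowering t recovers ordinary simulated annealing behavior per individual, with recombination mixing information across the population.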

Journal ArticleDOI
TL;DR: This paper provides an introduction to the practical aspects of function optimization using this approach to simulated annealing, and uses two examples to illustrate the behaviour of the algorithm in low dimensions.
Abstract: Much work has been published on the theoretical aspects of simulated annealing. This paper provides a brief overview of this theory and provides an introduction to the practical aspects of function optimization using this approach. Different implementations of the general simulated annealing algorithm are discussed, and two examples are used to illustrate the behaviour of the algorithm in low dimensions. A third example illustrates a hybrid approach, combining simulated annealing with traditional techniques.
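As a companion to such an overview, the general algorithm can be stated in a few lines (a textbook sketch with a geometric cooling schedule; the function and parameter names are ours, not any particular implementation from the paper):

```python
import math
import random

def simulated_annealing(f, x0, t0=1.0, alpha=0.95, n_iter=5000, step=0.5):
    """Generic simulated annealing for a 1-D function f (lower is better):
    propose a random nearby point, always accept improvements, accept
    uphill moves with probability exp(-delta/T), and cool T geometrically."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(n_iter):
        y = x + random.uniform(-step, step)
        fy = f(y)
        if fy < fx or random.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha  # geometric cooling schedule
    return best, fbest
```

Different implementations in the literature vary mainly in the proposal distribution, the cooling schedule, and the stopping rule; the acceptance test above is the common core.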

Journal ArticleDOI
TL;DR: A new algorithm for solving the problem of clustering m objects into c clusters based on a tabu search technique is developed that compares favorably with both the k-means and the simulated annealing algorithms.

Journal ArticleDOI
TL;DR: In this article, a feed-forward neural network is used to associate the cutting parameters with the cutting performance and a simulated annealing (SA) algorithm is applied to the neural network for solving the optimal cutting parameters based on a performance index within the allowable working conditions.
Abstract: Owing to the complexity of wire electrical discharge machining (wire-EDM), it is very difficult to determine optimal cutting parameters for improving cutting performance. The paper utilizes a feedforward neural network to associate the cutting parameters with the cutting performance. A simulated annealing (SA) algorithm is then applied to the neural network for solving the optimal cutting parameters based on a performance index within the allowable working conditions. Experimental results have shown that the cutting performance of wire-EDM can be greatly enhanced using this new approach.

Journal ArticleDOI
TL;DR: In this paper, the genetic algorithm is examined as a method for solving optimization problems in econometric estimation and compared to Nelder-Mead simplex, simulated annealing, adaptive random search, and MSCORE.
Abstract: The genetic algorithm is examined as a method for solving optimization problems in econometric estimation. It does not restrict either the form or regularity of the objective function, allows a reasonably large parameter space, and does not rely on a point-to-point search. The performance is evaluated through two sets of experiments on standard test problems as well as econometric problems from the literature. First, alternative genetic algorithms that vary over mutation and crossover rates, population sizes, and other features are contrasted. Second, the genetic algorithm is compared to Nelder–Mead simplex, simulated annealing, adaptive random search, and MSCORE.

Journal ArticleDOI
TL;DR: An improved simple genetic algorithm developed for reactive power system planning and a new population selection and generation method which makes use of Benders' cut are presented.
Abstract: This paper presents an improved simple genetic algorithm developed for reactive power system planning. Successive linear programming is used to solve operational optimization sub-problems. A new population selection and generation method which makes use of Benders' cut is presented in this paper. It is desirable to find the optimal solution in a few iterations, especially in some test cases where the optimal results are expected to be obtained easily. However, the simple genetic algorithm failed to find the solution except through an extensive number of iterations. Different population generation and crossover methods are also tested and discussed. The method has been tested on 6-bus and 30-bus power systems to show its effectiveness. Further improvement of the method is also discussed.

Journal ArticleDOI
TL;DR: A simulated annealing optimization algorithm is formulated to optimize parameters of ecosystem models; error analysis at Station P indicates that, without additional ammonium and bacteria measurements, the use of a more complicated model cannot be justified.
Abstract: A simulated annealing optimization algorithm is formulated to optimize parameters of ecosystem models. The optimization is used to directly determine the model parameters required to reproduce the observed data. The optimization routine is formulated in a general manner and is easily modified to include additional information on both the desired model output and the model parameters. From the optimization routine, error analysis of the optimal parameters is provided by the error-covariance matrix, which gives both the sensitivity of the model to each model parameter and the correlation coefficients between all pairs of model parameters. In addition, the optimization analysis provides a means of assessing the necessary model complexity required to model the available data. To demonstrate the technique, optimal parameters of three different ecosystem model configurations are determined from nitrate, phytoplankton, mesozooplankton and net phytoplankton productivity measurements at Station P. At Station P, error analysis of the optimal parameters indicates that the data are able to resolve up to 10 independent model parameters. This is always less than the number of unknown model parameters, indicating that the optimal solutions are not unique. A simple nitrate-phytoplankton-zooplankton ecosystem model is successful at reproducing the observations. To justify the use of a more complicated model at Station P requires additional data to constrain the optimization routine. Although there is evidence supporting the importance of the microbial loop at Station P, without additional ammonium and bacteria measurements one cannot validate a more complicated model that includes these processes.

Journal ArticleDOI
TL;DR: In this article, generalized simulated annealing (GSA) is employed to select an optimal set of descriptors for a neural network, with a cost function based on the performance of the network used to evaluate the effectiveness of the descriptors.
Abstract: The central steps in developing QSARs are generation and selection of molecular structure descriptors and development of the model. Recently, computational neural networks have been employed as nonlinear models for QSARs. Neural networks can be trained efficiently with a quasi-Newton method, but the results are dependent on the descriptors used and the initial parameters of the network. Thus, two potential opportunities for optimization arise. The first optimization problem is the selection of the descriptors for use by the neural network. In this study, generalized simulated annealing (GSA) is employed to select an optimal set of descriptors. The cost function used to evaluate the effectiveness of the descriptors is based on the performance of the neural network. The second optimization problem is selecting the starting weights and biases for the network. GSA is also used for this optimization. The result is an automated descriptor selection algorithm that is an optimization inside of an optimization. Application of the method to a QSAR problem shows that effective descriptor subsets are found, and they support models that are as good as or better than those obtained using traditional linear regression methods.

Journal ArticleDOI
TL;DR: The proposed method constructs an optimal structure of the simplified fuzzy inference that minimizes model errors and the number of membership functions needed to grasp the nonlinear behavior of power system short-term loads.
Abstract: This paper proposes an optimal fuzzy inference method for short-term load forecasting. The proposed method constructs an optimal structure of the simplified fuzzy inference that minimizes model errors and the number of membership functions needed to grasp the nonlinear behavior of power system short-term loads. The model is identified by simulated annealing and the steepest descent method. The proposed method is demonstrated in examples.

Journal ArticleDOI
TL;DR: In this paper, the annealed neural network model, which merges many features of simulated annealing and the Hopfield neural network, is employed to solve the problem, and a program written in C called SitePlan is built on a personal computer to implement the algorithm.
Abstract: Construction-site layout is an important construction planning activity. The impact of good layout practices on money and time saving becomes more obvious on larger construction projects. In this study, we formulate the problem as a combinatorial optimization problem. Construction-site layout is delimited as the design problem of arranging a set of predetermined facilities on a set of predetermined sites, while satisfying a set of constraints and optimizing an objective. In this paper, the annealed neural network model, which merges many features of simulated annealing and the Hopfield neural network, is employed to solve the problem, and a program written in C, called SitePlan, is built on a personal computer to implement the algorithm. In addition, a strategy to set a reasonable initial temperature in the simulated annealing procedure is proposed, the effects of various parameters in the annealed neural network are examined, and two case studies are used to illustrate the practical applications and to demonstrate this model's efficiency in solving the construction-site layout problem.

Proceedings ArticleDOI
20 Mar 1995
TL;DR: This paper introduces a structure strength function as a clustering criterion, which is valid for any membership assignment, thereby being capable of determining the plausible number of clusters according to the authors' subjective requirements.
Abstract: In this paper, we propose a new approach to fuzzy clustering by means of a maximum-entropy inference (MEI) method. The resulting formulas have a better form and clearer physical meaning than those obtained by means of the fuzzy c-means (FCM) method. In order to solve the cluster validity problem, we introduce a structure strength function as a clustering criterion, which is valid for any membership assignment, thereby being capable of determining the plausible number of clusters according to our subjective requirements. With the proposed structure strength function, we also discuss the global minimum problem in terms of simulated annealing. Finally, we simulate a numerical example to demonstrate the approach discussed, and compare our results with those obtained by the traditional approaches.

Patent
04 Apr 1995
TL;DR: In this paper, a rule-based greedy algorithm is used to find a locally optimal allocation, and the locally optimal allocation can be improved upon further by a simulated annealing technique which is more likely to produce a globally optimal allocation.
Abstract: Trading in pooled securities (e.g., pooled mortgages) requires allocation of securities from pools to contracts subject to certain rules or constraints. To improve upon manual allocation procedures, computer techniques for fast and profitable allocation have been developed. Advantageously, a locally optimal allocation can be found by a rule-based greedy algorithm, and the locally optimal allocation can be improved upon further by a simulated annealing technique which is more likely to produce a globally optimal allocation.
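The two-stage idea, a greedy pass followed by annealing refinement, can be illustrated on a toy balancing problem (a hypothetical example of the general pattern only; the actual allocation rules and constraints in the patent are far more involved):

```python
import math
import random

def greedy_then_anneal(values, k, n_iter=2000, t0=1.0, alpha=0.995):
    """Partition `values` into k groups with balanced sums: a greedy pass
    finds a locally good allocation, then simulated annealing moves items
    between groups to escape the greedy local optimum. Returns the groups
    and the final imbalance (max group sum minus min group sum)."""
    # Stage 1: greedy -- place each value into the currently lightest group.
    groups = [[] for _ in range(k)]
    for v in sorted(values, reverse=True):
        min(groups, key=lambda g: sum(g)).append(v)

    def imbalance(gs):
        sums = [sum(g) for g in gs]
        return max(sums) - min(sums)

    # Stage 2: annealing -- move a random value between two groups,
    # accepting worse configurations with Boltzmann probability.
    t = t0
    cost = imbalance(groups)
    for _ in range(n_iter):
        i, j = random.sample(range(k), 2)
        if not groups[i]:
            continue
        v = random.choice(groups[i])
        groups[i].remove(v)
        groups[j].append(v)
        new_cost = imbalance(groups)
        if new_cost <= cost or random.random() < math.exp(-(new_cost - cost) / t):
            cost = new_cost
        else:                       # undo the rejected move
            groups[j].remove(v)
            groups[i].append(v)
        t *= alpha
    return groups, cost
```

The greedy stage is fast and deterministic; the annealing stage trades extra compute for a better chance at a near-globally-optimal allocation, mirroring the two-step strategy described in the abstract.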

Journal ArticleDOI
TL;DR: By computer simulations on randomly generated test problems, it is shown that the performance of the proposed algorithms is less sensitive to the choice of a cooling schedule than that of the standard simulated annealing algorithm.

Journal ArticleDOI
TL;DR: The simulated annealing-based improvement methods are compared against their descent alternatives as well as other metaheuristics implementations on a set of classical test problems.

Journal ArticleDOI
TL;DR: Three heuristic solution approaches to operational forest planning problems are presented based on Interchange, Simulated Annealing and Tabu search and indicate that these approaches provide near optimal solutions in relatively short amounts of computer time.
Abstract: Operational forest planning problems are typically very difficult problems to solve due to problem size and constraint structure. This paper presents three heuristic solution approaches to operational forest planning problems. We develop solution procedures based on Interchange, Simulated Annealing and Tabu search. These approaches represent new and unique solution strategies to this problem. Results are provided for applications to two actual forest planning problems and indicate that these approaches provide near optimal solutions in relatively short amounts of computer time.